
Volume 9, Issue 4, April – 2024 International Journal of Innovative Science and Research Technology

ISSN No:-2456-2165 https://doi.org/10.38124/ijisrt/IJISRT24APR744

The use of TensorFlow Action Recognition as the Main Component in Making a Sign Language Translator Speaker for Speech-Impaired People
Louis Zendrix C. Adornado1; Daniella Kite V. Latorre2; Aldus Irving B. Serrano3; Mohammad Elyjah K. Masukat4
Lawrence Kristopher A. Lontoc5; Julie Ann B. Real6
Philippine School Doha
Doha City, Qatar

Abstract:- Due to communication barriers, deaf and mute students are separated from their friends, families and communities, as their schools do not offer sign language instruction. Consequently, this cluster of people may feel excluded from their communities, depriving them of the chance to live a normal life that is free from discrimination. The objective of this quantitative experimental study is to use TensorFlow Action Recognition as the main component in making a Sign Language Translator Speaker for Speech-Impaired People. Based on the results, the device can successfully translate sign languages with an average of 5.91 seconds, and translate three signs per 30 seconds. Also, it was found that it can detect distances up to four meters. The study manifested that the device provides the service of breaking past the communication barriers to speech-impaired and hearing-impaired individuals, which advocates and facilitates effective communication while fostering inclusivity. These results affirmed that it is feasible to make a Sign Language Translator Speaker with the use of TensorFlow Action Recognition. Thus, this Sign Language Speaker device offers the best services for deaf and mute people in Qatar and all around the world, as the struggles of hearing and speech-impaired people can be alleviated.

Keywords:- Artificial Intelligence, Assistive Technology, Sign Language Translator, Speech-Impaired, TensorFlow, TensorFlow Action Recognition.

I. INTRODUCTION

Individuals utilize communication daily to exchange information and convey their emotions. It is essential when it comes to creating a connection with others; without a connection to others, people may struggle, experience depression, or feel a lack of belonging (Naar, 2021). Communication, however, requires a common language for both parties to understand each other—a flaw for those diagnosed as deaf or mute. More than 1.5 billion people, or almost 20% of the global population, live with hearing loss (World Health Organization, 2020). In the Philippines, nearly 1 in 6 people have serious health problems, of which 15% of the population have been diagnosed with moderate hearing loss (Newall et al., 2021). Meanwhile, in Qatar, the Planning and Statistics Authority (2022) found that in 2020, 3,369 people were diagnosed with communication disabilities, while 4,640 were found to have a hearing impairment. Hearing-impaired and speech-impaired individuals represent two distinct groups with unique needs and troubles.

Living with a hearing or speech impairment can be a desolating experience. It can lead to a lack of educational and job opportunities, social withdrawal, and emotional problems. The Human Rights Watch Council (2022) emphasized that people who are deaf or hard of hearing often become excluded from their communities due to communication barriers, as not many are proficient in sign language, and that deaf students are separated from their families and communities because their schools do not offer sign language instruction due to inadequate learning resources and limited awareness about the importance of sign language in society. Inadequate support has significantly affected the independence and quality of life of individuals with disabilities (Pearson et al., 2022). In a sample of 200 cases that documented disability discrimination lawsuits drawn from the Westlaw legal database, each of the cases was coded for gender, job, and disability type and analyzed using multinomial logistic models. The results showed that one's gender and job, as well as disability type, influenced the discrimination that one experienced, such as firing, accommodations, hiring, and harassment in the workplace (Mosher, 2015). It was also found that the employment services for persons with hearing impairment at a workplace were not adequate to the level of training that persons with hearing impairment had (Abbas et al., 2019). As the researchers concluded, employers did not provide equal participation to hearing-impaired employees when it came to organizational consultation mechanisms; nor were technical and personal support, disability management services, and ample financial support provided to persons with hearing impairment. The discrimination and stigma faced by those with hearing loss or speech impairments can make it even harder for these people to connect with others and live independently. Moreover, the health needs of individuals with hearing or speech impairments are unmet by the health industry, as evident in the communication barriers between healthcare professionals and patients (Kuenburg et al., 2016).


TensorFlow Action Recognition is utilized in the development of a Sign Language Translator Speaker. For instance, in the study by Hou et al. (2019), the authors addressed the challenge of the lack of a comprehensive sign language dataset by creating their own dataset. They collected data from gyroscope sensors to capture angular velocity and utilized accelerometer data, which combined linear acceleration and gravity information. To separate acceleration data from gravity, they employed the Android system's application programming interface (API) and applied a Kalman filter. Volunteers wore smartwatches equipped with sensors on their right wrists to collect data from gyroscopes, accelerometers, and linear accelerometers. The dataset specifically focused on fingerspelling, involving 26 alphabet signs performed by five volunteers, each sign repeated 30 times. The dataset division allocated 75% for training purposes, leaving the remaining portion for evaluation, providing valuable insights into the development of a Sign Language Translator Speaker using TensorFlow Action Recognition (Hou et al., 2019).

Regardless of differences, hearing and speech-impaired people share salient characteristics: reduced language acquisition ability and verbal communication skills, resulting in limitations in social communication (Aras et al., 2014; Real et al., 2021), making it harder for these people to resort to sign language, a gesture-centered language, to communicate. Unfortunately, only a few are fluent in this language, most of whom are deaf. The hearing people, with whom they frequently engage, are still not literate and almost inept in sign language. Fundamentally, it would be equivalent to conversing with someone with little to no understanding of another's language. Additionally, cognition, a person's conscious intellect, is significantly lower in individuals with untreated hearing loss. As Taljaard et al. (2016) found, the degree of cognitive deficiency is significantly associated with the degree of both untreated and treated hearing impairment. Furthermore, deaf American Sign Language users are isolated from mass media and healthcare messages and communication—which, when coupled with social marginalization, places these people at a high risk of inadequate health literacy (McKee et al., 2015), creating a communication barrier that hearing or speech-impaired people face daily.

With the stigma of needing to learn sign language for common inclusivity, technology as one knows it must continue to deliver solutions that do not require the worldwide effort of learning American Sign Language. While considerably more practical, creating such a gadget could provide a transitory answer. Although the world is already taking steps towards inclusion and accessibility, the majority of the fundamental services, industries, and, essentially, day-to-day life amenities are still immensely inaccessible to impaired people. Furthermore, it takes knowledge of varying disciplines, including computer vision, computer graphics, natural language processing, human-computer interaction, linguistics, and Deaf culture, to create effective sign language recognition, generation, and translation systems (Bragg et al., 2019). However, despite these challenges, technology has the power to make a difference in the lives of those with disabilities.

An approach has been designed for hand image recognition inherent in sign language translation systems. This study and model of Pandey and Jain (2015) emphasized the origin and comparison of features from 2D hand images stored in a database. Its practicality in real-world situations and ease of implementation, compared to more complex 3D models, benefit from techniques like skin color region detection and segmentation for hand identification—while being susceptible to lighting changes and background interference. Scientists continue to investigate this topic by implementing Adaboost and feature diversity to boost accuracy in hand-part segmentation and reduce occlusion limitations. A study of great significance sought the importance of efficient hand image recognition techniques and their relevance to the advancement of sign language translation systems.

From artificial throats to cochlear implants and bone conduction aids, varying tools are available to help those with hearing or speech impairments communicate more easily. Unfortunately, many of these technologies are prohibitively expensive, making them inaccessible to those who need them most. Hearing aids range from $1,000 to $5,000, while an FM system costs from $150 to several thousand dollars. Most of these are sold as a single unit, making them an impractical solution, and without access to sufficient or cheap assistive technology, a person would incur additional expenses and unfulfilled demands (Mitra et al., 2017). Consumers typically pay for aids and fittings out of pocket because Medicare and most insurance plans do not cover them. Age-related hearing loss (ARHL) affects both ears, and a pair of aids typically costs around $6,000, which exceeds many seniors' price ranges. Cost was a common reason given by participants in a recent population-based prospective study for not purchasing hearing aids (Blustein & Weinstein, 2016). Therefore, it becomes critical that our society continues to develop the most efficient means to innovate novel strategies with reasonable costs, delivering the same level of convenience and experience as one would to a non-disabled person.

This Sign Language Translator Speaker is a cost-effective solution that utilizes TensorFlow, which is an open-source platform that simplifies and speeds up machine learning tasks. Specifically, TensorFlow Action Recognition enhances the precision of tracking sign language movements by generating prior maps that can identify variations in the image sequence caused by illumination (Shakeri & Zhang, 2019). Furthermore, by applying Neural Machine Translation to estimate the likelihood of a succession of words, generally modeling complete sentences in a single integrated model, the Sign Language Translator Speaker can provide increased translation and linguistic accuracy (Kalchbrenner & Blunsom, 2013), while still being a highly accessible and cost-effective device that organizations such as schools, companies, and the like can produce with minimal effort. In addition, Abadi et al. (2016) stated that TensorFlow is a machine learning system that works in diverse environments
and at scale. Dataflow graphs are used by TensorFlow to represent computation, shared state, and the operations that modify it. TensorFlow supports a wide range of applications, with a focus on deep neural network training and inference. TensorFlow has been widely adopted for machine learning research and is used in several Google services.
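
To make the dataflow-graph description concrete, the following minimal sketch (ours, not code from Abadi et al., 2016) shows TensorFlow tracing a Python function into a reusable graph, with a tf.Variable acting as the shared state that graph operations modify:

import tensorflow as tf

# Shared state: a variable that the traced graph reads and updates.
counter = tf.Variable(0, dtype=tf.int32)

@tf.function  # traces the Python function into a reusable dataflow graph
def scaled_sum(x, scale):
    counter.assign_add(1)            # an operation that modifies shared state
    return tf.reduce_sum(x) * scale  # computation expressed as graph operations

x = tf.constant([1.0, 2.0, 3.0])
print(scaled_sum(x, tf.constant(2.0)).numpy())  # 12.0
print(counter.numpy())                          # 1; the graph updated the state
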
After recognizing TensorFlow as the study's Artificial Intelligence (AI) platform, the words and phrases utilized to test the effectiveness of the Sign Language Translator Speaker came from the First 100 Spoken Collocations, which, according to Clouston (2013), ranked the first 100 most frequently spoken collocations in 10 million spoken words in the British National Corpus (BNC). Some of these words include: (1) 'you know,' (2) 'I think (that),' (6) 'a lot of,' and (8) 'thank you'—which were chosen using six criteria, including frequency, word type, and so on. Clouston further emphasized that the survey article dwelled on the literature of word lists for vocabulary to teach English as a second or foreign language to students.

With the innovation of the industry, it can utilize a more practical and cost-effective method to become more accessible, continuing to offer the best services it could render possible. Hearing and speech-impaired people can have their struggles alleviated and receive the assistance that would improve their lives little by little without having to make costly sacrifices. Moreover, future researchers can utilize this study as it can provide references, findings, data, and materials for researchers conducting studies on sign language equipment. Also, this research can assess the validity of other relevant studies, reducing errors to produce higher-quality products. For the betterment of humankind, little by little, through communication. As the saying goes, "communication is key." As in our lives, it is the foundation upon which trust, respect, and understanding are built.

 Research Questions
The objective of this study is to make a Sign Language Translator Speaker with the use of TensorFlow Action Recognition. Specifically, it answers the following questions:

 What is the time interval between the gesture and the decoded translation on the Sign Language Translator Speaker in terms of seconds?
 How many signs can the Sign Language Translator Speaker translate in a 30-second full statement?
 How far can the Sign Language Translator Speaker's camera recognize gestures in terms of meters?

II. METHODOLOGY

This study utilized the experimental design of research: the process of carrying out research in an objective and controlled fashion so that precision is maximized and specific conclusions can be drawn regarding a hypothesis statement (Bell, 2010). This research design also explains and evaluates research, but does not generally apply to exploratory or descriptive studies. In this study, the TensorFlow Action Recognition Program and the Disused Speaker were the independent variables, and the Sign Language Translator Speaker was the dependent variable. Furthermore, this experiment quantitatively ensured the accuracy of data and the availability of the said data to answer the research questions efficiently. Moreover, this method was vital because it provided control over the variables that demonstrated an outcome and was advantageous in finding accurate results.

A. Research Locale
This research study was conducted in Philippine School Doha, State of Qatar, specifically in Bldg. 01, St. 1008, Zone 56, Mesaimeer Area.

B. Data Gathering Procedure
The procedure shows the step-by-step process of how to make the TensorFlow Action Recognition Sign Language Translation Speaker.

 Ensuring Protection and Maintaining Safety
Wear personal protective equipment such as safety goggles, safety gloves, safety shoes, and a laboratory coat while performing the procedure for the making of the Sign Language Translator Speaker to avoid hazardous conditions.

 Programming the Sign Language Detector

 Install the TensorFlow, TensorFlow-gpu, opencv-python, mediapipe, sklearn, and matplotlib libraries. Import the cv2, numpy, os, pyplot, time, and mediapipe dependencies.
 Access the mediapipe model to read the frames, make detections, and draw landmarks to render to the screen. Draw face, pose, and left- and right-hand detections.
 Get the x, y, z values of the landmarks and concatenate them in an array. Extract keypoints.
 Create paths for the exported data, then create variables for the actions to detect; lastly, create folders for each action, each containing folders for sequences.
 Copy the mediapipe loop from step two, then add code to loop through the actions, sequences, and video length; then apply the collection logic, export keypoints, and lastly collect frames for each action.
 Import the train_test_split and to_categorical dependencies, then create a label map.
 Build the neural network and compile the model; lastly, train it.
 Store the results inside a variable. Write code so that np.argmax passes through the first values from the results array, which are then passed into the actions array.
 Save the model weights.
 Import metrics from scikit-learn to evaluate the performance of the model, then make predictions. Run a multi-label confusion matrix; lastly, pass through the accuracy score method. (A condensed code sketch of this pipeline follows below.)
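
To illustrate the detector procedure above in one place, the following condensed sketch is written against the public mediapipe and TensorFlow Keras APIs. The action labels, the file names sequences.npy, labels.npy, and action.h5, the layer sizes, and the 30-frame window are illustrative assumptions rather than values reported in this study, and results stands for the object returned by a mediapipe Holistic model's process() call:

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import multilabel_confusion_matrix, accuracy_score
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense
from tensorflow.keras.utils import to_categorical

def extract_keypoints(results):
    # Flatten pose (33 x 4), face (468 x 3), and hand (21 x 3 each) landmarks
    # into one 1662-value frame vector; zero-fill any part that was not detected.
    pose = (np.array([[p.x, p.y, p.z, p.visibility] for p in results.pose_landmarks.landmark]).flatten()
            if results.pose_landmarks else np.zeros(33 * 4))
    face = (np.array([[p.x, p.y, p.z] for p in results.face_landmarks.landmark]).flatten()
            if results.face_landmarks else np.zeros(468 * 3))
    lh = (np.array([[p.x, p.y, p.z] for p in results.left_hand_landmarks.landmark]).flatten()
          if results.left_hand_landmarks else np.zeros(21 * 3))
    rh = (np.array([[p.x, p.y, p.z] for p in results.right_hand_landmarks.landmark]).flatten()
          if results.right_hand_landmarks else np.zeros(21 * 3))
    return np.concatenate([pose, face, lh, rh])

actions = np.array(['hello', 'thank you', 'how are you'])  # illustrative labels
label_map = {action: i for i, action in enumerate(actions)}

# X: (num_sequences, 30, 1662) keypoint sequences collected with the loop
# described above and exported as .npy files; y: one-hot labels from the map.
X = np.load('sequences.npy')
y = to_categorical([label_map[a] for a in np.load('labels.npy', allow_pickle=True)])
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.05)

# A small stacked-LSTM classifier over the 30-frame keypoint sequences.
model = Sequential([
    LSTM(64, return_sequences=True, activation='relu', input_shape=(30, 1662)),
    LSTM(128, return_sequences=True, activation='relu'),
    LSTM(64, return_sequences=False, activation='relu'),
    Dense(64, activation='relu'),
    Dense(32, activation='relu'),
    Dense(actions.shape[0], activation='softmax'),
])
model.compile(optimizer='Adam', loss='categorical_crossentropy', metrics=['categorical_accuracy'])
model.fit(X_train, y_train, epochs=200)
model.save('action.h5')  # save the trained model and its weights

# Decode a prediction (np.argmax into the actions array) and evaluate.
res = model.predict(X_test)
print(actions[np.argmax(res[0])])
y_true = np.argmax(y_test, axis=1)
y_pred = np.argmax(res, axis=1)
print(multilabel_confusion_matrix(y_true, y_pred))
print(accuracy_score(y_true, y_pred))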


 Programming the Grammar Translator (Neural Machine Translation)

 Install the TensorFlow, TensorFlow-gpu, sklearn, and matplotlib libraries.
 Import einops, numpy, the sign language dataset, and pyplot.
 Add a start and end token to each sentence.
 Clean the sentences by removing special characters.
 Create a word index and a reverse word index.
 Pad each sentence to a maximum length.
 Create a tf.data dataset.
 Begin text preprocessing with Unicode normalization to split accented characters and replace compatibility characters with their ASCII equivalents.
 Vectorize the text with the preprocessing model layers for standardization to handle vocabulary extraction into token sequences.
 Process the dataset.
 Encode the tokens.
 Decode the predictions.
 Combine the model components.
 Build the training model.
 Implement a masked loss and accuracy function (a preprocessing and masked-loss sketch follows this list).
 Configure the model for further training.
 Execute the text-to-text translations.
 Input the translation to the audio transmitter.
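
As a hedged illustration of the preprocessing and masked-loss steps, the sketch below follows the common TensorFlow text-translation pattern rather than this study's exact code; the vocabulary cap of 5,000 tokens, the 20-token sequence length, and the sample sentences are illustrative assumptions:

import tensorflow as tf
from tensorflow.keras.layers import TextVectorization

def standardize(text):
    # Lowercase, strip characters outside a simple keep-list, and wrap the
    # sentence in start/end tokens so the decoder knows where to begin and stop.
    text = tf.strings.lower(text)
    text = tf.strings.regex_replace(text, "[^ a-z.?!,]", "")
    return tf.strings.join(["[START]", text, "[END]"], separator=" ")

# The vectorizer builds the word index while padding every sentence to a fixed
# length; get_vocabulary() doubles as the reverse word index for decoding.
vectorizer = TextVectorization(
    max_tokens=5000, standardize=standardize, output_sequence_length=20)
sentences = tf.constant(["Thank you", "How are you"])  # stand-in dataset
vectorizer.adapt(sentences)
tokens = vectorizer(sentences)          # padded token ids (pad id is 0)
vocab = vectorizer.get_vocabulary()     # reverse word index

def masked_loss(y_true, y_pred):
    # Per-token cross-entropy with the padding positions zeroed out, so the
    # pad token does not dominate training.
    loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(
        from_logits=True, reduction='none')
    loss = loss_fn(y_true, y_pred)
    mask = tf.cast(y_true != 0, loss.dtype)
    return tf.reduce_sum(loss * mask) / tf.reduce_sum(mask)

def masked_accuracy(y_true, y_pred):
    pred_ids = tf.argmax(y_pred, axis=-1, output_type=tf.int64)
    match = tf.cast(pred_ids == tf.cast(y_true, tf.int64), tf.float32)
    mask = tf.cast(y_true != 0, tf.float32)
    return tf.reduce_sum(match * mask) / tf.reduce_sum(mask)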

 Programming the Audio Transmitter

 Toggle the auto-play setting for newly exported mp3 files.
 Install the IBM Watson dependencies through the pip install command.
 Set up a TTS (text-to-speech) service with AI machine learning.
 Get the service URL and API key and pass them through.
 Import the IAM authenticator to begin server authentication.
 Convert a string or body of text to output a text file.
 Write out the inputted text file.
 Synthesize the converted output speech.
 Pass the keyword parameters.
 Choose the language model and ensure the output is an mp3 file.
 Strip out blank spaces and ensure that they are concatenated together.
 Start auto-playing the output from the exported audio file. (A minimal sketch of these steps follows below.)
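
A minimal sketch of the audio-transmitter steps, using the publicly documented ibm-watson Python SDK, is given below; the API key, service URL, and voice name are placeholders, and playsound is an assumed stand-in for the auto-play step rather than a tool named in this procedure:

from ibm_watson import TextToSpeechV1
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator
from playsound import playsound  # assumed helper for the auto-play step

# IAM authentication with the service's API key and URL (placeholders).
authenticator = IAMAuthenticator('YOUR_API_KEY')
tts = TextToSpeechV1(authenticator=authenticator)
tts.set_service_url('YOUR_SERVICE_URL')

# Text produced by the grammar translator; strip extra blank spaces and
# concatenate the words back together.
text = ' '.join('thank you , how are you'.split())

# Synthesize to an mp3 file, choosing the language model through keyword
# parameters, then auto-play the exported audio file.
with open('speech.mp3', 'wb') as audio_file:
    result = tts.synthesize(
        text, voice='en-US_AllisonV3Voice', accept='audio/mp3').get_result()
    audio_file.write(result.content)
playsound('speech.mp3')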

 Connecting the Computer to the Sign Language Translator Speaker

 Connect the Sign Language Translator Speaker's audio jack plug to the computer's audio jack.
 Open Device Manager in the computer's settings.
 Ensure the device is listed on the speaker and audio drivers.
 Add it to the drivers list if the device is not listed.
 Activate the auto-play feature and include it in the translator program.

III. RESULTS

This study aimed to create a Sign Language Translator Speaker using TensorFlow Action Recognition. The following section focuses on the results and interpretation of the data collected during the testing phase of the product, where it fulfilled the three main research questions: the time interval between the gesture and the decoded translation, the number of signs the Sign Language Translator Speaker translated in a 30-second full statement, and the distance at which the Sign Language Translator Speaker's camera recognized gestures in terms of meters, thereby attaining the results that exhibited the efficiency of the Sign Language Translator Speaker.

A. The Time Interval between the Gesture and the Decoded Translation on the Sign Language Translator Speaker in Terms of Seconds

Table 1 The Time Interval between the Gesture and the Decoded Translation

Sign | Trial 1 (s) | Trial 2 (s) | Trial 3 (s)
"hello" | 14.30 | 18.67 | 18.40
"thank you" | 5.67 | 23.41 | 5.49
"how are you" | 6.00 | 11.16 | 5.53


Table 1 shows the time interval between the gesture and the decoded translation on the Sign Language Translator Speaker in terms of seconds. For reliable results, a total of three trials were conducted, and the average was taken by dividing the sum of the three trials' times by three. For the "hello" sign, the first trial had a time interval of 14.3 seconds, the shortest of the three trials; the second trial had a time interval of 18.67 seconds, the longest of the three trials; and the third trial had a time interval of 18.40 seconds. For the "thank you" sign, the first trial had a time interval of 5.67 seconds; the second trial had a time interval of 23.41 seconds, the longest of the three trials; and the third trial had a time interval of 5.49 seconds, the shortest of the three trials. For the "how are you" sign, the first trial had a time interval of 6 seconds; the second trial had a time interval of 11.16 seconds, the longest of the three trials; and the third trial had a time interval of 5.53 seconds, the shortest of the three trials.

After the assessment, the results denote that the device was relatively quick in translating a given gesture, ranging from roughly 6 to 18 seconds. This outcome implies that the Sign Language Translator Speaker can, in fact, be used in real-time scenarios, as it can promptly provide a translation based on gestures utilized in a conversation. This is further backed up by research in which a model called the real-time sign language translator was designed to focus on Indian Sign Language (ISL) (Sinha et al., 2022). This sign language consisted of alphabets from A to Z and digits from 1 to 9, which amounted to 35 signs in total. The research had an accuracy of 50-80%, but obtained a loss of 0.227 on the trained model.

Another is the study of Kau et al. (2015), which proposed a wireless hand gesture recognition glove for Taiwanese Sign Language. The device used flex and inertial sensors to discriminate different hand gestures. The finger flexion, the palm orientation, and the motion trajectory were the input signals for the system. This led to an accuracy rate of up to 94% on sensitivity for gesture recognition.

B. The Number of Signs the Sign Language Translator Speaker Translated in a 30-Second Full Statement

Table 2 The Number of Signs the Sign Language Translator Speaker Translated in a 30-Second Full Statement

Trial | Signs Translated
1 | 2
2 | 3
3 | 3
4 | 3
5 | 2
Average | 2.6

Table 2 shows the number of signs that the Sign Language Translator Speaker translated in a 30-second full statement. Five trials were carried out in total, and the average was calculated by dividing the sum of the five trials by five. This was done to ensure accurate and consistent results. In the first trial, there were two (2) signs it could translate. In the second, third, and fourth trials, the number of signs it could translate was three (3), the most signs it could translate in any trial. Lastly, in the fifth trial, there were two (2) signs it could translate.

Combining the results of the five trials and dividing by five, the total average reached is 2.6 signs per 30 seconds. Therefore, the number of signs that the Sign Language Translator Speaker can translate in a 30-second full statement is 2.6.

The results indicate that the TensorFlow Action Recognition program is able to catch on to and comprehend lengthy statements, thus translating accurately. This assures that the device is an effective tool for communication, as it can recognize the correct meaning of a given gesture. This was also evident in research based on a real-time smartwatch-based sign language translator (Hou et al., 2019), an energy-conserving device that provides real-time sign language translating services for the American Sign Language recognition (ASLR) system. It is a device that has an average translation time of approximately 1.1 seconds for a sentence with eleven words. Further research was conducted in which systems-based sensory gloves for sign language recognition also showed success in comprehending lengthy statements; forty sentences made up the dataset, which was recorded with two DG5-VHand gloves (Ahmed et al., 2018). The suggested solution's performance achieved 98.9% recognition accuracy.

C. The Distance the Sign Language Translator Speaker's Camera Recognized Gestures in Terms of Meters

Table 3 The Distance the Sign Language Translator Speaker's Camera Recognized Gestures

Distance | Translation Result
1 meter | Correct
2 meters | Correct
3 meters | Correct
4 meters | Correct
5 meters | Incorrect ("I'm fine" instead of "hello")


Table 3 illustrates the Sign Language Translator Speaker's ability to recognize a gesture from a distance. At one to four meters, the speaker was able to recognize the correct sign language and translate accordingly. However, at five meters, the device translated "I'm fine" instead of the gesture "hello," thus no longer being able to interpret sign language accurately.

This shows the Sign Language Translator Speaker's capability to be utilized in conversations up close or from a distance and further proves its effectiveness in various scenarios, such as seminars or online meetings, where a person might change their distance from the camera regularly.

Evaluating the results, the speaker works at distances of one to four meters. The sign language system is less effective for communication over distance (Stokoe, 2005). Additionally, it was found that deaf and hard-of-hearing participants better understood interpreters at a closer distance of 5 feet rather than 15 feet (Kushalnagar, 2015). These studies further back the Sign Language Translator Speaker's effectiveness according to distance.

IV. CONCLUSION

Speech impairment continues to be a timely issue. About 70 million people in the world are deaf-mute. On the other hand, 360 million people are deaf—out of these, 32 million are children. Because of this, the World Health Organization (2021) estimated that by 2050, 1 in 4 people will suffer from some degree of hearing loss. Due to this, the demand for a device that can cater to the necessities of the speech- and hearing-impaired community is imperative.

In recent years, there have been many advancements in the field of technology. Sign Language Translator Speakers made with the machine learning framework TensorFlow are one of those that serve as a new and innovative form of assistive technology in the modern world. They provide the service of breaking past the communication barriers to speech-impaired and hearing-impaired individuals, which advocates and facilitates effective communication while fostering inclusivity. Sign Language Translator Speakers allow direct communication between hearing and hearing-impaired people. This also allows the hearing-impaired society to not be dependent on human interpreters (Kahlon & Singh, 2021).

Based on the results, the time interval between the gesture and the decoded translation on the Sign Language Translator Speaker varied depending on the sign gesture being used. The gesture "hello" took the longest time to translate, with an average of 18.45 seconds, while the gesture "thank you" took the shortest time to translate, with an average of 5.67 seconds. The Sign Language Translator Speaker was able to translate full sentences from sign language gestures to spoken language at approximately 2.6 signs per 30 seconds, 2 signs being the least number of signs translated among the five trials. Moreover, the Sign Language Translator Speaker demonstrated its ability to successfully recognize and correctly translate sign gestures from a distance of 1 meter up to 4 meters. All these results affirm the hypothesis that it is feasible to make an effective Sign Language Translator Speaker with the use of TensorFlow Action Recognition.

The integration of TensorFlow into the Sign Language Translator Speaker has led to a significant improvement in terms of the accuracy and efficiency of sign language translations. After equipping the machine model and training it with a vast array of sign language motions, the model became capable of identifying and converting the sign language motions into spoken and written languages.

This research is capable of bridging the gaps in communication for deaf and mute people. The use of an innovative tool such as a Sign Language Translator Speaker can contribute to a more inclusive world where communication barriers are substantially reduced, overall promoting inclusivity and empowering sign language users to interact more effectively in multiple settings, such as in a classroom or workplace.

Moreover, future researchers are advised to utilize a laptop with a greater graphics card and a camera with a higher resolution to reduce the lag of the program and improve the time interval between the gesture and decoded translation, the number of signs translated in a 30-second full statement, and the distance at which the device can recognize gestures.

Furthermore, future researchers may also take advantage of this study and use it as a reference in creating a project that may have a similar output or use similar materials. Future researchers may incorporate more words and phrases to be translated into the program to maximize the capability of the device. Also, the current researchers urge future researchers to familiarize themselves with American Sign Language or consult an expert to have a better understanding of the technicalities and forms of the language when it is used in a conversation.

Additionally, Qatari and Filipino communities are encouraged to implement devices such as the Sign Language Translator Speaker in public places to urge acceptance, accessibility, and inclusivity amongst all people. The TensorFlow Action Recognition program is an effective variable in creating a translator speaker in the way that it is able to recognize gestures accurately and efficiently. Furthermore, this device is cost-efficient, as it was made from scrap materials, all the while contributing to the advancement and improvement of society when it comes to the timely problem of the language barrier.

ACKNOWLEDGMENT

The researchers wish to express their gratitude and appreciation to all those who supported and guided them throughout the study, especially to the following:


 Dr. Alexander S. Acosta, the principal of the Philippine School Doha, for letting the researchers experience crafting a research paper.
 Dr. Lorina S. Villanueva, the QAAD Vice Principal, and Dr. Noemi F. Formaran, the Senior High School Vice Principal, for allowing the researchers to conduct their study in the Grade 11 level.
 Dr. Julie Ann B. Real, the researchers' research adviser, for guiding, teaching, and advising the researchers throughout the study.
 Mr. Junrey R. Barde, Mr. Aries L. Paco, Mrs. Maricel T. Gubat, Dr. Bobby R. Henerale, and Mr. Lochlan John D. Villacorte, the panelist members, for enabling the researchers to advance their paper further through insightful critiques and feedback.
 Mr. and Mrs. Adornado; Mr. and Mrs. Masukat; Mr. and Mrs. Serrano; Mr. and Mrs. Latorre; Mr. and Mrs. Lontoc, for supporting and motivating the researchers to finish the paper.
 And mainly, our Almighty God, for giving the researchers the strength and motivation all throughout the study.

REFERENCES

[1]. Abadi, M., Barham, P., Chen, J., Chen, Z., Davis, A., Dean, J., Devin, M., Ghemawat, S., Irving, G., Isard, M., Kudlur, M., Levenberg, J., Monga, R., Moore, S., Murray, D. G., Steiner, B., Tucker, P., Vasudevan, V., Warden, P., . . . Google Brain (2016). TensorFlow: A System for Large-Scale Machine Learning. USENIX, The Advanced Computing Association. https://www.usenix.org/conference/osdi16/technical-sessions/presentation/abadi
[2]. Abbas, F., Anis, F., & Ayaz, M. (2019). Employment Barriers for Persons with Hearing Impairment in the Job Market: Employers' Perspective. Global Social Sciences Review, 4(3), 421-432. https://dx.doi.org/10.31703/gssr.2019(IV-III)
[3]. Ahmed, M. A., Zaidan, B. B., Zaidan, A. A., Salih, M. M., & Lakulu, M. M. b. (2018). A Review on Systems-Based Sensory Gloves for Sign Language Recognition: State of the Art between 2007 and 2017. Sensors, 18, 2208. https://doi.org/10.3390/s18072208
[4]. Aras, I., Stevanović, R., Vlahović, S., Stevanović, S., Kolarić, B., & Kondić, L. (2014). Health related quality of life in parents of children with speech and hearing impairment. International Journal of Pediatric Otorhinolaryngology, 78(2), 323–329. https://doi.org/10.1016/j.ijporl.2013.12.001
[5]. Bell, S. (2010). In R. Kitchin & N. Thrift (Eds.), International Encyclopedia of Human Geography (pp. 672-675). Elsevier Science. https://doi.org/10.1016/B978-008044910-4.00431-4
[6]. Bragg, D., Koller, O., Bellard, M., Larwan, B., Boudreault, P., Braffort, A., Caselli, N., Huenerfauth, M., Kacorri, H., Verhoef, T., Vogler, C., & Morris, M. (2019). Sign Language Recognition, Generation, and Translation: An Interdisciplinary Perspective. Association for Computing Machinery, 16–31. https://doi.org/10.1145/3308561.3353774
[7]. Blustein, J., & Weinstein, B. E. (2016). Opening the Market for Lower Cost Hearing Aids: Regulatory Change Can Improve the Health of Older Americans. American Journal of Public Health, 106(6), 1032-1035. https://doi.org/10.2105/AJPH.2016.303176
[8]. Clouston, M. (2013). Word List for Vocabulary Learning and Teaching. CATESOL Journal, 24(1), 287-304.
[9]. Hou, J., Li, X., Wang, P., Wang, Y., Qian, J., & Yang, P. (2019). SignSpeaker: A Real-time, High-Precision SmartWatch-based Sign Language Translator. https://dl.acm.org/doi/pdf/10.1145/3300061.3300117
[10]. Kahlon, N., & Singh, W. (2021). Machine translation from text to sign language: a systematic review. Universal Access in the Information Society, 22, 1-35. https://link.springer.com/article/10.1007/s10209-021-00823-1
[11]. Kalchbrenner, N., & Blunsom, P. (2013). Recurrent Continuous Translation Models. Association for Computational Linguistics, 1700–1709.
[12]. Kau, L., Su, W., Yu, P., & Wei, S. (2015). A real-time portable sign language translation system. International Midwest Symposium on Circuits and Systems (MWSCAS), Fort Collins, CO, USA, pp. 1-4. https://doi.org/10.1109/MWSCAS.2015.7282137
[13]. Kuenburg, A., Fellinger, P., & Fellinger, J. (2016). Health Care Access Among Deaf People. The Journal of Deaf Studies and Deaf Education, 21(1), 1–10. https://doi.org/10.1093/deafed/env042
[14]. Kushalnagar, R. S. (2015). Optimal viewing distance between deaf viewers and interpreters. https://scholarworks.csun.edu/bitstream/handle/10211.3/151199/JTPD-2015-p246.pdf
[15]. McKee, M., Paasche-Orlow, M., Winters, P., Fiscella, K., Zazove, P., Sen, A., & Pearson, T. (2015). Assessing Health Literacy in Deaf American Sign Language Users. Journal of Health Communication, 20(sup2), 92-100. https://doi.org/10.1080/10810730.2015.1066468
[16]. Mitra, S., Palmer, M., Kim, H., Mont, D., & Grace, N. (2017). Extra costs of living with a disability: A systematized review and agenda for research. Disability and Health Journal, 10(4), 475-484. https://doi.org/10.1016/j.dhjo.2017.04.007
[17]. Mosher, J. (2015). Bodies in Contempt: Gender, Class and Disability Intersections in Workplace Discrimination Claims. Disability Studies Quarterly, 35(3). https://doi.org/10.18061/dsq.v35i3.4928


[18]. Naar, D. (2021, July 14). The Impact & Importance of Communication in Society. https://www.reference.com/world-view/communication-affect-society-a8db95ef3db8af34
[19]. Newall, J., Martinez, N., Swanepoel, D., & McMahon, C. (2020). A National Survey of Hearing Loss in the Philippines. Asia Pacific Journal of Public Health, 32(5), 235-241. https://doi.org/10.1177/1010539520937086
[20]. Pandey, P., & Jain, V. (2015). Hand Gesture Recognition for Sign Language Recognition: A review. International Journal of Science, Engineering and Technology Research, 4(3), 466. https://citeseerx.ist.psu.edu/document?repid=rep1&type=pdf&doi=f68edad569a8fe5a842c8ba62c9a9689aa7041ac
[21]. Pearson, C., Watson, N., Brunner, R., Cullingworth, J., Hameed, S., Scherer, N., & Shakespeare, T. (2022). Covid-19 and the Crisis in Social Care: Exploring the Experiences of Disabled People in the Pandemic. Social Policy and Society, 1-16. https://doi.org/10.1017/S1474746422000112
[22]. Planning and Statistics Authority. (2022). Chapter IX: Disabilities. https://www.psa.gov.qa/en/statistics/Statistical%20Releases/Social/SpecialNeeds/2022/9_Disabilities_2022_AE.pdf
[23]. Real, J. A. B., Manaois, R. A. N., Bambalan, J., Awit, T., Cruz, B., Sagayadoro, A., & Venus, M. (2023). The Making of a Contactless Sanitation System out of Arduino Interface and Ion Generators. International Journal of Innovative Science and Research Technology, 8(2). https://doi.org/10.5281/zenodo.7655758
[24]. Real, J. A. B., Manaois, R. A. N., & Barbacena, S. L. B. (2022). The use of Arduino Interface and Lemon (Citrus Limon) Peels in Making an Improvised Air Ionizer-Purifier. International Journal of Innovative Science and Research Technology, 8(2). https://doi.org/10.5281/zenodo.7680092
[25]. Real, J. A., Carandang, M. A. D., Contreras, A. G. L., & Diokno, P. C. J. (2021). The Perceived Effects of Using Nonverbal Language to the Online Communication of the Junior High School Students. International Journal of Research Publications, 73(1), 12-12. https://doi.org/10.47119/ijrp100731320211823
[26]. Real, J. A. B., Cruz, M. R. D. D., & Fortes, M. J. E. The Creation of a Face Mask Detecting Alarm System with the Use of Raspberry Pi as a Component. International Journal of New Technology and Research (IJNTR), 9(3). https://doi.org/10.31871/IJNTR.9.3.4
[27]. Shakeri, M., & Zhang, H. (2019). Moving Object Detection Under Discontinuous Change in Illumination Using Tensor Low-Rank and Invariant Sparse Decomposition. Conference on Computer Vision and Pattern Recognition, 7221-7230.
[28]. Sinha, K., Miranda, A. O., & Mishra, S. (2022). Real-Time Sign Language Translator. In: Mallick, P. K., Bhoi, A. K., Barsocchi, P., & de Albuquerque, V. H. C. (Eds.), Cognitive Informatics and Soft Computing. Lecture Notes in Networks and Systems, vol 375. Springer, Singapore. https://doi.org/10.1007/978-981-16-8763-1_39
[29]. Stokoe, W. (2005). Sign Language Structure: An Outline of the Visual Communication Systems of the American Deaf. The Journal of Deaf Studies and Deaf Education, 10(1), 3–37. https://doi.org/10.1093/deafed/eni001
[30]. Taljaard, D. S., Olaithe, M., Brennan-Jones, C. G., & Eikelboom, R. H. (2016). The relationship between hearing impairment and cognitive function: a meta-analysis in adults. Clinical Otolaryngology, 41(6), 718-729. https://doi.org/10.1111/coa.12607
[31]. The Human Rights Watch Council. (2022). For the Deaf Community, Sign Language Equals Rights. https://www.hrw.org/news/2022/09/23/deaf-community-sign-language-equals-rights
[32]. World Health Organization. (2020). Deafness and hearing loss. https://www.who.int/health-topics/hearing-loss#tab=tab
