
Volume 9, Issue 4, April – 2024 International Journal of Innovative Science and Research Technology

ISSN No:-2456-2165 https://doi.org/10.38124/ijisrt/IJISRT24APR2026

All Country Sign Language Recognition System to Help Deaf and Mute People

Jami Sai Raju1; Nahak Kamal Kumar2; Uppala Hemanth Kumar3; Guntuboina Ravi Vijay Charith4; A. Subhalaxmi5
Students1,2,3,4; Professor5
Department of CSM, Raghu Institute of Technology, Dakamarri (V), Bheemunipatnam, Visakhapatnam Dist. Pin: 531162

Abstract:- With the use of contemporary technology, the "All Country Sign Language Recognition System to Help Deaf and Mute People" project seeks to provide a robust solution that closes communication barriers between the deaf and mute population and the general public. The project applies machine learning and computer vision techniques to recognize sign language motions and convert them into widely used languages, with a focus on Indian Sign Language. Deaf and mute people will benefit from increased accessibility and inclusion in many spheres of everyday life, including social interactions, work, and education, through the implementation of this system. The initiative helps to create a more inclusive society while addressing the urgent demand for efficient communication tools for the deaf and mute communities. Within the system, people can interact using different countries' sign languages, such as ISL, ASL, and BSL.

Keywords:- All Country Sign Language Recognition System, Deaf and Mute Communication, Computer Vision Techniques, Machine Learning Algorithms, Accessibility and Inclusivity, Communication Technology, Education and Employment, Social Interaction, Inclusive Society, Modern Communication Solutions.

I. INTRODUCTION

Communication is an essential aspect of human interaction, facilitating the exchange of ideas, emotions, and information. However, for individuals who are deaf and mute, conventional modes of communication such as spoken language are often inaccessible. Instead, they rely on sign languages, which communicate messages by combining hand gestures, facial expressions, and body language. In India, Indian Sign Language (ISL) is the major form of communication for deaf and mute people; in the same way, the primary sign language of every country differs.

Despite the importance of sign language in facilitating communication, there is a substantial communication gap between the general public and deaf and mute people, caused mostly by the general public's limited grasp of sign language. This gap can lead to social isolation, limited educational opportunities, and challenges in accessing essential services.

To address these challenges and make communication accessible for the deaf and mute community, the project sets out to create a "Sign Language Recognition System." This system takes advantage of developments in machine learning and computer vision methods to recognize and interpret ISL and ASL gestures accurately.

The project's main goal is to develop a reliable and user-friendly system that can accurately interpret ISL gestures in real time and translate them into spoken or written language. By doing so, the system will enable deaf and mute individuals to communicate more effectively with the hearing community, thereby promoting inclusivity and accessibility in various spheres of life.

The project will involve the design and implementation of algorithms capable of recognizing and interpreting the intricate hand movements, facial expressions, and body gestures that constitute ISL and ASL. These algorithms will be trained on a comprehensive dataset of ISL and ASL gestures, encompassing a wide range of vocabulary and expressions commonly used in everyday communication.

Furthermore, the project will explore the integration of natural language processing techniques to facilitate bidirectional communication, allowing the system not only to interpret gestures but also to generate appropriate responses in spoken or written language.

Overall, the Sign Language Recognition System project represents a significant step towards addressing the communication difficulties faced by the deaf and mute population of India. By harnessing the power of technology to facilitate communication, the project aims to empower individuals with hearing and speech impairments, promoting their inclusion and participation in society. The picture below shows the difference between ISL and ASL, although the exact signs are not used in development.

IJISRT24APR2026 www.ijisrt.com 1215



Fig 1: Each Sign Represents Each Word

• Difference between Existing and Proposed Systems:
The existing system uses the LSTM algorithm, and our proposed system uses the same algorithm, but we found some drawbacks in the existing system. In the existing system, each whole word is assigned its own symbol. The outcomes may then be wrong: taking the word "accident" as an example, different users represent "accident" with different symbols, so the system cannot recognize it correctly, and the user also does not know which symbol is assigned. We therefore propose a system where each alphabet letter is assigned a symbol; it is easier to memorize 26 symbols than a separate symbol for every word. The symbols are shown in the image below. Our proposed system also integrates features such as live video translation and automatic recognition of multiple countries' sign languages.

II. LITERATURE REVIEW

The creation of systems for recognizing sign language has attracted a lot of interest lately, driven by the need to address communication barriers faced by deaf and mute individuals worldwide. While several sign languages exist globally, the focus of this literature review is on combining the different sign languages into a single platform, to provide user-friendly services in this "All Country Sign Language Recognition System".

A. Gesture Recognition Techniques:
Research in gesture recognition techniques forms the foundation of sign language recognition systems. Various approaches, including sensor-based techniques and computer vision-based techniques, have been explored. Computer vision techniques, for example methods based on deep learning, have demonstrated promise in precisely recognizing and interpreting sign language gestures (Li et al., 2022).

B. Sign Language Corpus:
Building a comprehensive dataset of ISL and ASL gestures is essential for training and evaluating sign language recognition systems. Researchers have worked on creating annotated corpora of ISL gestures, which serve as valuable resources for developing and testing recognition algorithms (Pradhan et al., 2019).

• Deep Learning Techniques:
Deep learning approaches, in particular convolutional neural networks (CNNs) and recurrent neural networks, have proven extremely effective for sign language detection. These approaches have been applied to various sign languages, including ISL and ASL, with promising results (Panda et al., 2020).

C. Real-Time Recognition Systems:
The development of real-time sign language recognition systems is crucial for enabling seamless communication between deaf and mute individuals and the hearing community. Researchers have explored the implementation of real-time recognition systems using techniques such as feature extraction, classification, and gesture tracking (Kumar et al., 2020).

D. Challenges and Limitations:
Despite advancements in gesture recognition research, several challenges remain. These include variations in sign language gestures among users, occlusion due to hand movements, and the need for robustness in diverse environments. Addressing these challenges requires ongoing research efforts and the development of innovative solutions (Rahman et al., 2020).

• Applications and Impact:
Sign language recognition systems have the potential to impact various domains, including education, healthcare, and accessibility. These systems can facilitate communication between deaf and mute individuals and hearing individuals, thereby promoting inclusivity and improving quality of life (Banerjee et al., 2021).

In conclusion, the literature review highlights the development of research on sign language recognition, especially in relation to finger spelling. While significant advancements have been achieved, there are still opportunities for further research and development to overcome existing challenges and maximize the impact of sign language recognition systems in promoting communication accessibility for the deaf and mute community.
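The per-alphabet scheme described above can be illustrated with a short sketch: each video frame yields one predicted letter (with a confidence score), and a word is assembled by collapsing repeated detections. The confidence threshold and the "blank" (no-sign) marker below are illustrative assumptions, not values from the paper.

```python
# Sketch of per-alphabet word assembly: collapse runs of the same
# predicted letter and drop low-confidence frames. BLANK and MIN_CONF
# are hypothetical parameters chosen for illustration.

BLANK = "-"          # emitted when no hand sign is detected in a frame
MIN_CONF = 0.80      # hypothetical per-frame confidence cut-off

def assemble_word(frame_predictions):
    """frame_predictions: list of (letter, confidence) pairs, one per frame."""
    word = []
    prev = BLANK
    for letter, conf in frame_predictions:
        if conf < MIN_CONF:
            continue                      # ignore uncertain frames
        if letter != prev and letter != BLANK:
            word.append(letter)           # collapse runs of the same letter
        prev = letter
    return "".join(word)

frames = [("P", 0.95), ("P", 0.97), ("-", 0.99), ("E", 0.91),
          ("E", 0.40), ("T", 0.93), ("T", 0.96)]
print(assemble_word(frames))  # PET
```

Note that a doubled letter (e.g. "LL") would need an intervening blank frame under this scheme, which is one reason real systems also model sign transitions.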


Fig 2: General Process of Sign Language Recognition System

E. Module-Wise Functional Requirements

Here are the module-wise functional requirements for the Indian Sign Language (ISL) recognition system:

• Input Module:
 - The system shall capture live video streams from a webcam or camera.
 - The system shall support the selection of the video input source (webcam or camera).
 - The system shall allow users to start and stop video capture.

• Preprocessing Module:
 - The system shall convert captured video frames to grayscale.
 - The system shall apply noise reduction techniques to improve the quality of video frames.
 - The system shall perform background subtraction to isolate the signer's hand from the background.

• Feature Extraction Module:
 - The system shall extract hand shape features from pre-processed video frames.
 - The system shall analyse the movement trajectory of the signer's hand over time.
 - The system shall detect finger configurations, including open, closed, or specific finger positions.

• Gesture Recognition Module:
 - The system shall classify extracted features to recognize ISL gestures.
 - The system shall support a library of predefined ISL gestures for recognition.
 - The system shall provide real-time feedback on recognized gestures.

• Translation Module:
 - The system shall translate recognized ISL gestures into spoken language.
 - The system shall translate recognized ISL gestures into written language.
 - The system shall provide options for selecting target languages for translation.

• Output Module:
 - The system shall display the translated output in real time.
 - The system shall provide audio output for synthesized speech.
 - The system shall display visual feedback indicating the confidence level or accuracy of recognized gestures.
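The Preprocessing Module steps above (grayscale conversion, noise reduction, background subtraction) can be sketched in plain NumPy. A production system would typically use a library such as OpenCV; the 3x3 mean filter and the fixed difference threshold here are illustrative assumptions.

```python
# Minimal sketch of the preprocessing pipeline: grayscale -> denoise ->
# background subtraction. The filter size and threshold are assumptions.
import numpy as np

def to_grayscale(frame):
    """RGB uint8 frame (H, W, 3) -> float grayscale (H, W) via luminance weights."""
    return frame @ np.array([0.299, 0.587, 0.114])

def denoise(gray):
    """3x3 mean filter as a stand-in for real noise reduction."""
    padded = np.pad(gray, 1, mode="edge")
    out = np.zeros_like(gray, dtype=float)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out += padded[1 + dy : 1 + dy + gray.shape[0],
                          1 + dx : 1 + dx + gray.shape[1]]
    return out / 9.0

def subtract_background(gray, background, threshold=25.0):
    """Binary mask of pixels that differ from a static background model."""
    return (np.abs(gray - background) > threshold).astype(np.uint8)

# Tiny synthetic example: a bright "hand" patch on a dark background.
background = np.zeros((8, 8))
frame = np.zeros((8, 8, 3), dtype=np.uint8)
frame[2:5, 2:5] = 200                       # the signer's hand
mask = subtract_background(denoise(to_grayscale(frame)), background)
print(mask.sum())                           # non-zero: hand region detected
```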


• User Interface:
 - The system shall have a user-friendly graphical user interface (GUI).
 - The GUI shall display the live video stream with an overlay for recognized gestures.

F. Non-Functional Requirements
Non-functional requirements define the qualities or attributes of the system that are not directly related to its functionality but are crucial for ensuring its overall effectiveness, usability, performance, and reliability. Here are the non-functional requirements for the sign language recognition system:

• Performance:
 - The system shall have low latency, providing real-time recognition and translation of ISL gestures.
 - The system shall be capable of handling multiple simultaneous users without significant degradation in performance.

• Accuracy:
 - The system shall achieve a minimum accuracy rate of [X]% in recognizing ISL gestures.
 - The system shall minimize false positives and false negatives in gesture recognition to ensure reliable performance.

• Usability:
 - The user interface shall be intuitive and easy to use, requiring minimal training for users to operate the system.
 - The system shall provide clear and informative feedback to users, indicating the status of gesture recognition and translation processes.

• Reliability:
 - The system shall be robust and resilient to errors, recovering gracefully from unexpected failures or interruptions.
 - The system shall have a mean time between failures (MTBF) of at least [X] hours under normal operating conditions.

• Security:
 - The system shall protect user privacy and confidentiality by securely handling captured video data and translated output.
 - The system shall implement user authentication mechanisms to prevent unauthorized access to sensitive features or settings.

• Scalability:
 - The system architecture shall be scalable, allowing for easy expansion to accommodate increasing numbers of users or additional functionality.
 - The system shall support distributed deployment across multiple servers or nodes to distribute processing load and improve scalability.

• Compatibility:
 - The system shall be compatible with a wide range of web browsers and devices, including desktops, laptops, tablets, and mobile phones.
 - The system shall support integration with external systems or APIs for language translation and speech synthesis.

Fig 3: Work Flow of Sign Recognition with Live Video

IJISRT24APR2026 www.ijisrt.com 1218


Volume 9, Issue 4, April – 2024 International Journal of Innovative Science and Research Technology
ISSN No:-2456-2165 https://doi.org/10.38124/ijisrt/IJISRT24APR2026

• Maintainability:
 - The system shall be modular and well-structured, facilitating ease of maintenance, updates, and enhancements.
 - The system shall include comprehensive documentation, code comments, and version control to support ongoing maintenance and development efforts.

III. METHODOLOGY

This section discusses the dataset used and the methods adopted for sign language recognition.

• Data Collection and Preprocessing:
Gather a comprehensive dataset of gestures from different countries, encompassing a wide range of vocabulary and expressions commonly used in everyday communication.

Preprocess the data to standardize the format, remove noise, and ensure consistency in gesture annotations.

• Feature Extraction and Representation:
Extract relevant features from the pre-processed gesture data, including hand shape, movement trajectory, finger configurations, and facial expressions.

Explore techniques for representing gesture features in a compact and discriminative manner, suitable for input to machine learning algorithms.

• Gesture Recognition Algorithms:
Develop and implement advanced gesture recognition algorithms, leveraging computer vision techniques such as deep learning.

Apply machine learning models, such as recurrent neural networks (RNNs) and convolutional neural networks (CNNs), to the extracted features to recognize and classify gestures accurately. The Long Short-Term Memory (LSTM) algorithm is used to deploy this project, because an LSTM can remember recently observed outputs within its network and gives better results than the HMM algorithm.

• Real-Time Recognition System:
Design and implement a real-time sign recognition system capable of processing live video or image streams and interpreting gestures in real time.

Optimize the system architecture and algorithms for low latency and high throughput, ensuring smooth and responsive performance while the user interacts in front of the camera.
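The LSTM-based classification step can be reduced to a single forward pass in NumPy, showing how a sequence of per-frame feature vectors is folded into one gesture prediction. The dimensions and random weights below are purely illustrative; a real system would train these parameters (e.g. with Keras or PyTorch).

```python
# Illustrative LSTM forward pass over a gesture sequence. All sizes and
# weights are assumptions made for the sketch, not the paper's model.
import numpy as np

rng = np.random.default_rng(0)
FEATURES, HIDDEN, CLASSES = 10, 16, 26   # e.g. 26 alphabet gestures

# One weight matrix per LSTM gate: input (i), forget (f), cell (g), output (o).
W = {g: rng.normal(0, 0.1, (HIDDEN, FEATURES + HIDDEN)) for g in "ifgo"}
b = {g: np.zeros(HIDDEN) for g in "ifgo"}
W_out = rng.normal(0, 0.1, (CLASSES, HIDDEN))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_predict(sequence):
    """sequence: (T, FEATURES) per-frame features -> class probabilities."""
    h = np.zeros(HIDDEN)
    c = np.zeros(HIDDEN)
    for x in sequence:
        z = np.concatenate([x, h])
        i = sigmoid(W["i"] @ z + b["i"])     # input gate
        f = sigmoid(W["f"] @ z + b["f"])     # forget gate
        g = np.tanh(W["g"] @ z + b["g"])     # candidate cell state
        o = sigmoid(W["o"] @ z + b["o"])     # output gate
        c = f * c + i * g                    # cell state carries long-term memory
        h = o * np.tanh(c)
    logits = W_out @ h
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()                   # softmax over gesture classes

probs = lstm_predict(rng.normal(size=(30, FEATURES)))  # a 30-frame gesture
print(probs.shape)  # (26,)
```

The cell state `c` is what lets the LSTM retain information across frames, which is the property the methodology cites for preferring it over an HMM.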

Table 1: The Top 10 Test Cases in this Model

Test Case ID | Test Case Description | Expected Result | Pass/Fail
TC_001 | Capture live video stream from webcam or camera | Video stream is displayed in the application window | Pass
TC_002 | Convert captured video frames to grayscale | Video stream appears in grayscale | Pass
TC_003 | Apply noise reduction techniques to improve video quality | Reduction in visual noise and improvement in clarity | Pass
TC_004 | Perform background subtraction to isolate signer's hand | Signer's hand is separated from the background | Pass
TC_005 | Extract hand shape features from video frames | Features such as hand shape are accurately detected | Pass
TC_006 | Analyze movement trajectory of signer's hand | Movement trajectory is accurately tracked | Pass
TC_007 | Detect finger configurations, including open and closed | Finger configurations are correctly identified | Pass
TC_008 | Classify extracted features to recognize ISL gestures | ISL gestures are accurately recognized | Pass
TC_009 | Display translated output in real-time | Translated output is displayed on the user interface | Pass
TC_010 | Provide audio output for synthesized speech | Speech output is audible to the user | Pass
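The trajectory analysis exercised by TC_006 can be sketched as centroid tracking: given a binary hand mask per frame (as produced by background subtraction), track the hand's centre of mass over time. The displacement-based direction summary is an illustrative assumption, not the paper's algorithm.

```python
# Sketch of TC_006-style trajectory tracking over binary hand masks.
import numpy as np

def centroid(mask):
    """(row, col) centre of mass of a binary hand mask."""
    ys, xs = np.nonzero(mask)
    return float(ys.mean()), float(xs.mean())

def trajectory(masks):
    """Centroid per frame -> the hand's movement trajectory."""
    return [centroid(m) for m in masks]

def dominant_direction(traj):
    """Coarse movement label from net displacement (illustrative)."""
    dy = traj[-1][0] - traj[0][0]
    dx = traj[-1][1] - traj[0][1]
    if abs(dx) >= abs(dy):
        return "right" if dx > 0 else "left"
    return "down" if dy > 0 else "up"

# Synthetic clip: a 2x2 "hand" sliding left to right across 5 frames.
masks = []
for t in range(5):
    m = np.zeros((10, 10), dtype=np.uint8)
    m[4:6, t : t + 2] = 1
    masks.append(m)

print(dominant_direction(trajectory(masks)))  # right
```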

• Translation and Output Generation:
Utilize natural language processing (NLP) approaches to convert recognized sign language gestures into spoken or written words.

Generate appropriate output, such as synthesized speech or text, based on the recognized gestures, enabling bidirectional communication between deaf and mute individuals and the hearing community.

• User Interface Development:
Design a user-friendly interface that displays recognized gestures, translated output, and feedback mechanisms for user interaction.

Incorporate interactive elements and accessibility features to accommodate diverse user needs and preferences.

• Testing and Evaluation:
Conduct thorough testing and evaluation of the Sign Language Recognition System, including performance benchmarking, accuracy assessment, and usability

testing. Solicit feedback from deaf and mute individuals, caregivers, and domain experts to validate the effectiveness and usability of the system.

• Deployment and Integration:
Deploy the ISL recognition system in relevant settings, such as schools, community centres, or assistive technology platforms, to facilitate communication for deaf and mute individuals.

Integrate the system with existing communication tools and assistive technologies to enhance accessibility and inclusivity for the target user population.

By using this technique, the project hopes to create a reliable and efficient sign identification system that meets the communication needs of those who are deaf or mute and, in the process, promotes accessibility and inclusion in Indian society.

IV. RESULTS AND CONCLUSION

In conclusion, the All-Country Sign Language recognition project holds significant promise for improving communication accessibility and inclusivity for deaf and mute individuals. Through the development of innovative technology and algorithms, the project's objective is to close the communication gap between hearing and deaf people, allowing those with hearing loss to express themselves more fully and effectively and to engage more freely in public life.

The project has demonstrated the feasibility and effectiveness of using computer vision, machine learning, and natural language processing methods to recognize ISL and ASL gestures and translate them into spoken or written language. By leveraging advanced technologies such as deep learning algorithms, recurrent neural networks (RNNs), and convolutional neural networks (CNNs), the gesture recognition system achieves high levels of accuracy and efficiency in interpreting sign language gestures in real time.

Furthermore, the project has highlighted the importance of user-centred design and accessibility considerations in the development of communication technologies for individuals with disabilities. Usability testing, user feedback sessions, and collaboration with deaf and mute communities have informed the development and application of the ISL recognition system, making certain that it satisfies the various requirements and preferences of its users.

Looking ahead, the project presents numerous opportunities for future enhancements and expansions, including improving gesture recognition accuracy, expanding language support, developing mobile and wearable applications, and fostering community engagement and collaboration. By continuing to innovate and iterate upon the sign recognition system, researchers, developers, and stakeholders can further advance communication accessibility and empower deaf and mute individuals to communicate more effectively and inclusively in diverse settings.

In summary, the Indian Sign Language recognition project represents a significant step forward in leveraging technology to break down communication barriers and promote inclusivity, accessibility, and empowerment for deaf and mute individuals. Through ongoing research, development, and collaboration, the project holds the potential to make a lasting impact on the lives of people with disabilities and to help create a society that is more equal and inclusive.

(a) YES (b) NO


(c) Hai (d) Thankyou

(e) Pet (f) Water


Fig 4: Final Outputs Predicted by the Model

REFERENCES

[1]. D'Innocenzo, A., Pino, C., Russo, P., & Tarantino, P. (2019). Sign Language Recognition System for Deaf and Dumb People. In 2019 10th IEEE International Conference on Intelligent Data Acquisition and Advanced Computing Systems: Technology and Applications (IDAACS) (pp. 874-878). IEEE.
[2]. Mishra, P., & Namboodiri, V. (2020). Real-time Indian Sign Language Recognition using Deep Learning. arXiv preprint arXiv:2006.15743.
[3]. Sahu, P., & Tripathy, R. M. (2018). Deep Learning-Based Indian Sign Language (ISL) Recognition System. In 2018 5th International Conference on Industrial Engineering and Applications (ICIEA) (pp. 51-55). IEEE.
[4]. Baid, A., Goyal, R., & Agrawal, S. (2021). Indian Sign Language Recognition using Convolutional Neural Networks. In 2021 International Conference on Intelligent Sustainable Systems (ICISS) (pp. 543-548). IEEE.
[5]. Bhusari, S., Ghiya, R., & Vishwakarma, A. K. (2018). Indian Sign Language (ISL) Recognition Using Artificial Neural Network. In 2020 3rd International Conference for Convergence in Technology (I2CT) (pp. 1-5). IEEE.
[6]. Pujari, N. M., Ghodasara, Y. S., & Gujarathi, G. S. (2020). Real-time Indian Sign Language Recognition System. In 2020 IEEE International Students' Conference on Electrical, Electronics and Computer Science (SCEECS) (pp. 1-4). IEEE.
[7]. Patel, P. K., & Patel, V. P. (2019). Sign Language Recognition System for Indian Sign Language. In 2019 International Conference on Inventive Research in Computing Applications (ICIRCA) (pp. 1-4). IEEE.
[8]. Shabaz, M., Darshan, K. N., & Shabaz, M. (2018). Real-time Hand Gesture Recognition using Convolutional Neural Networks for Indian Sign Language. In 2018 International Conference on Innovations in Information, Embedded and Communication Systems (ICIIECS) (pp. 1-5). IEEE.


[9]. Raut, S. S., & Wagh, P. R. (2019). Indian Sign Language Recognition System using Convolutional Neural Networks. In 2019 IEEE Bombay Section Signature Conference (IBSSC) (pp. 1-6). IEEE.
[10]. Tripathy, R. M., & Sahu, P. (2017). A Review on
Hand Gesture Recognition System for Indian Sign
Language. International Journal of Information
Technology and Computer Science, 9(1), 56-62.
[11]. Jadhav, R. S., & Bhoyar, S. R. (2018). Real-Time
Indian Sign Language (ISL) Recognition System
Using Neural Network. In 2018 International
Conference on Inventive Research in Computing
Applications (ICIRCA) (pp. 1-4). IEEE.
[12]. Kumar, V., & Kumar, M. (2021). Sign Language
Recognition: A Review on Techniques and
Challenges. In 2021 International Conference on
Power, Energy, Control and Transmission Systems
(ICPECTS) (pp. 33-38). IEEE.
[13]. Mathur, R., & Sanjay, A. (2018). Survey on Indian
Sign Language Recognition Using Neural Network.
In 2018 3rd International Conference for
Convergence in Technology (I2CT) (pp. 1-5). IEEE.
[14]. Sangani, A. J., & Jivani, N. P. (2019). Indian Sign
Language Recognition Using Convolutional Neural
Network. In 2019 International Conference on
Innovative Research in Engineering and Technology
(ICIRET) (pp. 1-5). IEEE.
[15]. Thool, K., & Pawar, K. (2019). Indian Sign
Language Recognition using CNN. In 2019
International Conference on Communication and
Signal Processing (ICCSP) (pp. 0392-0396). IEEE.
[16]. Naik, M., & Singh, K. (2020). Indian Sign Language
Recognition using CNN and LSTM. In 2020 IEEE
International Students' Conference on Electrical,
Electronics and Computer Science (SCEECS) (pp. 1-
4). IEEE.
[17]. Rathod, P., & Kharde, S. (2019). Hand Gesture
Recognition System for Indian Sign Language. In
2019 3rd International conference on Electronics,
Communication and Aerospace Technology (ICECA)
(pp. 676-680). IEEE.
[18]. Agarwal, A., & Dubey, R. (2020). Indian Sign
Language Recognition System. In 2020 IEEE
International Conference for Innovation in
Technology (INOCON) (pp. 1-4). IEEE.
[19]. Yadav, R., & Soni, R. (2021). Sign Language
Recognition System: A Review. In 2021 International
Conference on Inventive Computation Technologies
(ICICT) (pp. 50-54). IEEE.
[20]. Singh, N., & Gopal, L. (2019). A Survey on Indian
Sign Language Recognition Techniques. In 2019
International Conference on Computer
Communication and Informatics (ICCCI) (pp. 1-5).
IEEE.

