Recognition of Indian Sign Language Alphabets for Hearing and Speech Impaired People using Deep Learning

Author(s):  
Kirtee Pardeshi ◽  
R. Sreemathy ◽  
Akshay Velapure


Author(s):  
Sukhendra Singh ◽  
G. N. Rathna ◽  
Vivek Singhal

Introduction: Sign language is often the only means of communication for speech-impaired people, but most hearing people do not know it, which creates a communication barrier. In this paper, we present a system that captures hand gestures with a Kinect camera and classifies each gesture into its corresponding symbol. Method: We used a Kinect camera rather than an ordinary web camera because a web camera cannot capture the 3D orientation or depth of the hand, whereas the Kinect captures a depth image, which makes classification more accurate. Result: The Kinect produces distinguishable images for the gestures ‘2’ and ‘V’, and similarly for ‘1’ and ‘I’, whereas a normal web camera cannot separate these pairs. We collected an Indian Sign Language dataset of 46,339 RGB images and 46,339 depth images; 80% of the images were used for training and the remaining 20% for testing. In total, 36 hand gestures were considered: 26 for the alphabets A-Z and 10 for the digits 0-9. Conclusion: Along with a real-time implementation, we compare the performance of several machine learning models and find that a CNN trained on depth images gives the most accurate performance. All results were obtained on a PYNQ-Z2 board.
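
To make the classification stage concrete, the following minimal sketch trains a small CNN on single-channel depth images with an 80/20 train/test split over the 36 gesture classes. The architecture, the 64x64 image resolution, and the depth_images/ directory layout are illustrative assumptions and are not taken from the paper.

```python
# Minimal sketch (assumed architecture): a small CNN over 64x64 single-channel
# depth images, 36 output classes (A-Z and 0-9), 80/20 train/test split.
import tensorflow as tf

NUM_CLASSES = 36
IMG_SIZE = (64, 64)

def build_depth_cnn():
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=(*IMG_SIZE, 1)),      # depth map as one channel
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(64, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
    ])

# "depth_images/" is a hypothetical directory with one sub-folder per gesture class.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "depth_images/", color_mode="grayscale", image_size=IMG_SIZE,
    validation_split=0.2, subset="training", seed=42)
test_ds = tf.keras.utils.image_dataset_from_directory(
    "depth_images/", color_mode="grayscale", image_size=IMG_SIZE,
    validation_split=0.2, subset="validation", seed=42)

model = build_depth_cnn()
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.fit(train_ds, validation_data=test_ds, epochs=10)
```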


2021 ◽  
Vol 11 (8) ◽  
pp. 3439
Author(s):  
Debashis Das Chakladar ◽  
Pradeep Kumar ◽  
Shubham Mandal ◽  
Partha Pratim Roy ◽  
Masakazu Iwamura ◽  
...  

Sign language is a visual language used by hearing-impaired people for communication with the help of hand and finger movements. Indian Sign Language (ISL) is a well-developed and standard way of communication for hearing-impaired people living in India. However, people who use spoken language often face difficulty while communicating with a hearing-impaired person due to a lack of sign language knowledge. In this study, we have developed a 3D avatar-based sign language learning system that converts input speech/text into the corresponding sign movements for ISL. The system consists of three modules. First, the input speech is converted into an English sentence. Then, that English sentence is converted into the corresponding ISL sentence using Natural Language Processing (NLP) techniques. Finally, the motion of the 3D avatar is defined based on the ISL sentence. The translation module achieves a 10.50 SER (Sign Error Rate) score.
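
As an illustration of the English-to-ISL step, the sketch below reorders an English sentence into an ISL-style gloss (subject-object-verb order, with articles and auxiliaries dropped) using spaCy's dependency parse. This is a generic approximation of such a module, not the authors' implementation, and the dropped part-of-speech set is an assumption.

```python
# Illustrative sketch of an English -> ISL-gloss reordering step (not the authors'
# exact module): ISL sentences are commonly glossed in subject-object-verb order
# with articles, auxiliaries, and prepositions dropped.
import spacy

nlp = spacy.load("en_core_web_sm")

DROP_POS = {"DET", "AUX", "PART", "ADP"}   # drop articles, "is/are", "to", prepositions

def english_to_isl_gloss(sentence: str) -> str:
    doc = nlp(sentence)
    subjects, objects, verbs, others = [], [], [], []
    for tok in doc:
        if tok.pos_ in DROP_POS or tok.is_punct:
            continue
        if "subj" in tok.dep_:
            subjects.append(tok.lemma_.upper())
        elif "obj" in tok.dep_:
            objects.append(tok.lemma_.upper())
        elif tok.pos_ == "VERB":
            verbs.append(tok.lemma_.upper())
        else:
            others.append(tok.lemma_.upper())
    # Simple SOV ordering: subject, remaining content words and objects, then verbs.
    return " ".join(subjects + others + objects + verbs)

# Expected to print something like: I MARKET GO
print(english_to_isl_gloss("I am going to the market"))
```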


Webology ◽  
2021 ◽  
Vol 18 (Special Issue 01) ◽  
pp. 196-210
Author(s):  
Dr.P. Golda Jeyasheeli ◽  
N. Indumathi

Nowadays, interaction between deaf and mute people and hearing people is difficult, because hearing people struggle to understand the meaning of the gestures, and deaf and mute people find it difficult to form grammatically correct sentences. To alleviate these issues, an automatic sign language sentence generation approach is proposed. In this project, Natural Language Processing (NLP) based methods are used. NLP is a powerful tool for translation between human languages and is responsible here for forming meaningful sentences from sign language symbols that can also be understood by a hearing person. In this system, both conventional NLP methods and deep learning NLP methods are used for sentence generation, and the efficiency of the two approaches is compared. The generated sentence is displayed as output in an Android application. This system aims to bridge the gap in interaction between deaf and mute people and the rest of society.
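
A minimal sketch of what the deep-learning path could look like is given below: an LSTM encoder-decoder that maps a sequence of sign glosses to a full English sentence. The vocabulary sizes, layer dimensions, and training data are placeholders, not values reported by the authors.

```python
# Minimal sketch (assumed design) of the deep-learning path: an LSTM
# encoder-decoder that maps a sequence of sign glosses (e.g. "I MARKET GO")
# to a full English sentence ("I am going to the market").
import tensorflow as tf
from tensorflow.keras import layers

GLOSS_VOCAB, WORD_VOCAB, EMB, HIDDEN = 500, 2000, 64, 128   # placeholder sizes

# Encoder reads the gloss sequence.
enc_in = layers.Input(shape=(None,), name="gloss_ids")
enc_emb = layers.Embedding(GLOSS_VOCAB, EMB)(enc_in)
_, state_h, state_c = layers.LSTM(HIDDEN, return_state=True)(enc_emb)

# Decoder generates the English sentence word by word (teacher forcing).
dec_in = layers.Input(shape=(None,), name="word_ids")
dec_emb = layers.Embedding(WORD_VOCAB, EMB)(dec_in)
dec_out, _, _ = layers.LSTM(HIDDEN, return_sequences=True,
                            return_state=True)(dec_emb,
                                               initial_state=[state_h, state_c])
probs = layers.Dense(WORD_VOCAB, activation="softmax")(dec_out)

model = tf.keras.Model([enc_in, dec_in], probs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.summary()
```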


2021 ◽  
Author(s):  
P. Golda Jeyasheeli ◽  
N. Indumathi

About 1 percent of the Indian population is deaf and mute. Deaf and mute people use gestures to interact with each other, but ordinary people fail to grasp the meaning of these gestures, which makes interaction between deaf and mute people and others hard. To help ordinary citizens understand the signs, an automated sign language identification system is proposed. A smart wearable hand device is designed by attaching different sensors to a glove to capture the gestures. Each gesture produces unique sensor values, and those values are collected as Excel data. The characteristics of the movements are extracted and categorized with the aid of a convolutional neural network (CNN), which then identifies the gestures in the test set according to this classification. The objective of this system is to bridge the interaction gap between people who are deaf or hard of hearing and the rest of society.
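
The sketch below illustrates one plausible way to load glove sensor readings from an Excel sheet and classify fixed-length windows of readings with a 1D CNN. The file name, column layout, sensor count, window length, and network shape are assumptions for illustration only.

```python
# Illustrative sketch (assumed data layout): glove sensor readings stored in an
# Excel sheet, one row per time step, one column per sensor, plus an integer
# "label" column; a 1D CNN classifies fixed-length windows of readings.
import pandas as pd
import tensorflow as tf

N_SENSORS, WINDOW, N_GESTURES = 5, 50, 26    # placeholders, not the paper's values

df = pd.read_excel("glove_readings.xlsx")     # hypothetical file name
X = df.drop(columns=["label"]).to_numpy(dtype="float32")
y = df["label"].to_numpy()

# Group rows into fixed-length windows, one window per gesture sample.
n_windows = len(X) // WINDOW
X = X[: n_windows * WINDOW].reshape(n_windows, WINDOW, N_SENSORS)
y = y[: n_windows * WINDOW : WINDOW]          # one label per window

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(WINDOW, N_SENSORS)),
    tf.keras.layers.Conv1D(32, 5, activation="relu"),
    tf.keras.layers.MaxPooling1D(),
    tf.keras.layers.Conv1D(64, 5, activation="relu"),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(N_GESTURES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(X, y, validation_split=0.2, epochs=20)
```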


Author(s):  
Prof. Namrata Ghuse

Sign language recognition through technology has been a neglected idea even though a large community could benefit from it. More than 3% of the world's population cannot speak or hear properly. Hand-gesture-based communication allows speech- and hearing-impaired people to communicate with each other and with the rest of the world, but most hearing people are not familiar with sign language, which creates a gap between impaired people and ordinary people. Previous systems for this problem involved image generation and emoji symbols, but those frameworks are neither affordable nor portable for the impaired person. The main goal of this project is to interpret Indian Sign Language and American Sign Language gestures, convert them into voice and text, and allow an impaired person to interact with another person from a remote location. The smart hand glove is built with a gyroscope, flex sensors, an ESP32 microcontroller/Micro:bit, an accelerometer, a 25-LED matrix for output, and a vibrator.
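
For illustration only, the following MicroPython sketch shows how such a glove's sensing loop might read a flex sensor through the ESP32 ADC and an accelerometer over I2C; the pin assignments, the MPU6050 accelerometer, the thresholds, and the gesture mapping are assumptions, not the project's actual firmware.

```python
# Illustrative MicroPython sketch (assumed wiring) of the glove's sensing loop on
# an ESP32: read one flex sensor through the ADC and an accelerometer over I2C,
# then map a simple threshold pattern to a gesture label.
from machine import ADC, Pin, I2C
import time

flex = ADC(Pin(34))              # flex sensor on GPIO34 (assumed pin)
flex.atten(ADC.ATTN_11DB)        # full 0-3.3 V input range

i2c = I2C(0, scl=Pin(22), sda=Pin(21))
MPU_ADDR = 0x68                  # typical MPU6050 address (assumed accelerometer)
i2c.writeto_mem(MPU_ADDR, 0x6B, b"\x00")   # wake the sensor

def read_accel_x():
    raw = i2c.readfrom_mem(MPU_ADDR, 0x3B, 2)   # ACCEL_XOUT_H/L registers
    value = int.from_bytes(raw, "big")
    return value - 65536 if value > 32767 else value

while True:
    bend = flex.read()           # 0-4095; higher means the finger is bent (assumed)
    ax = read_accel_x()
    if bend > 3000 and ax > 8000:
        print("gesture: HELLO")  # placeholder mapping; replace with a trained model
    time.sleep_ms(100)
```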


Sensors ◽  
2020 ◽  
Vol 20 (21) ◽  
pp. 6256
Author(s):  
Boon Giin Lee ◽  
Teak-Wei Chong ◽  
Wan-Young Chung

Sign language was designed to allow hearing-impaired people to interact with others. Nonetheless, knowledge of sign language is uncommon in society, which leads to a communication barrier with the hearing-impaired community. Many studies of sign language recognition utilizing computer vision (CV) have been conducted worldwide to reduce such barriers. However, this approach is restricted by the visual angle and highly affected by environmental factors. In addition, CV usually involves the use of machine learning, which requires collaboration of a team of experts and utilization of high-cost hardware utilities; this increases the application cost in real-world situations. Thus, this study aims to design and implement a smart wearable American Sign Language (ASL) interpretation system using deep learning, which applies sensor fusion that “fuses” six inertial measurement units (IMUs). The IMUs are attached to all fingertips and the back of the hand to recognize sign language gestures; thus, the proposed method is not restricted by the field of view. The study reveals that this model achieves an average recognition rate of 99.81% for dynamic ASL gestures. Moreover, the proposed ASL recognition system can be further integrated with ICT and IoT technology to provide a feasible solution to assist hearing-impaired people in communicating with others and improve their quality of life.
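
As a rough illustration of the sensor-fusion classifier, the sketch below stacks the six IMU streams (3-axis accelerometer plus 3-axis gyroscope each, i.e. 36 features per time step) and feeds fixed-length windows to a small recurrent network. The window length, class count, and layer sizes are placeholders rather than the paper's configuration.

```python
# Minimal sketch (assumed shapes) of a sensor-fusion classifier for dynamic
# gestures: six IMUs, each providing 3-axis accelerometer + 3-axis gyroscope
# readings, concatenated into 36 features per time step.
import tensorflow as tf

N_IMUS, CHANNELS_PER_IMU, WINDOW, N_CLASSES = 6, 6, 100, 27   # placeholders

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(WINDOW, N_IMUS * CHANNELS_PER_IMU)),
    tf.keras.layers.Masking(),                      # ignore zero-padded time steps
    tf.keras.layers.LSTM(64, return_sequences=True),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(N_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```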


2021 ◽  
Author(s):  
Priyank Mistry ◽  
Vedang Jotaniya ◽  
Parth Patel ◽  
Narendra Patel ◽  
Mosin Hasan
