Sign Language Recognition Model Combining Non-manual Markers and Handshapes

Author(s):  
Luis Quesada ◽  
Gabriela Marín ◽  
Luis A. Guerrero

2021 ◽  
Author(s):  
Isayas Feyera ◽  
Hussien Seid

Abstract Hearing-impaired people use Sign Language to communicate with each other as well as with other communities. Usually, they are unable to communicate with hearing people, since most people without a hearing disability do not understand Sign Language and thus cannot understand hearing-impaired people. Recognition of Sign Language into text is therefore needed. In this research, a model is optimized for the recognition of Amharic Sign Language into Amharic characters. A convolutional neural network model is trained on datasets gathered from a teacher of Amharic Sign Language. Frame extraction from Amharic Sign Language video, labeling and annotation, XML creation, TFRecord generation, and model training are the major steps followed in developing models that recognize Amharic Sign Language as characters. After training of the neural network is completed, the model is saved for recognition of Sign Language from a video stream or from individual video frames. The accuracy of the model is the summation of the confidences of the individual alphabets correctly recognized, divided by the number of alphabets presented for evaluation, computed for both Faster R-CNN and SSD. The mean average accuracy of the Faster R-CNN and the Single-Shot Detector is found to be 98.25% and 96%, respectively. The model is trained and evaluated on the characters of the Amharic language. The research will continue to include the words and sentences used in Amharic Sign Language, toward a full-fledged Sign Language recognition model within a complete system.
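The abstract defines its evaluation metric only in words; the short Python sketch below shows one way that computation could be written. The function name, tuple layout, and example labels and confidences are illustrative assumptions, not the authors' code.

```python
# Hypothetical sketch of the accuracy metric described in the abstract:
# the sum of detection confidences of correctly recognized alphabets,
# divided by the number of alphabets presented for evaluation.

def mean_average_accuracy(detections, num_evaluated):
    """detections: list of (predicted, actual, confidence) per alphabet."""
    correct_confidence = sum(
        conf for predicted, actual, conf in detections if predicted == actual
    )
    return correct_confidence / num_evaluated

# Example: 3 of 4 evaluated alphabets recognized correctly
# (transliterated labels here are placeholders, not the paper's data).
dets = [("ha", "ha", 0.99), ("le", "le", 0.97),
        ("me", "se", 0.40), ("se", "se", 0.95)]
print(mean_average_accuracy(dets, num_evaluated=4))  # -> 0.7275
```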


2019 ◽  
Vol 7 (2) ◽  
pp. 43
Author(s):  
MALHOTRA POOJA ◽  
K. MANIAR CHIRAG ◽  
V. SANKPAL NIKHIL ◽  
R. THAKKAR HARDIK ◽  
...  

2016 ◽  
Vol 3 (3) ◽  
pp. 13
Author(s):  
VERMA VERSHA ◽  
PATIL SANDEEP B.

2020 ◽  
Vol 14 ◽  
Author(s):  
Vasu Mehra ◽  
Dhiraj Pandey ◽  
Aayush Rastogi ◽  
Aditya Singh ◽  
Harsh Preet Singh

Background: People suffering from hearing and speaking disabilities have few ways of communicating with other people. One of these is to communicate through the use of sign language. Objective: Developing a system for sign language recognition is essential for deaf and mute people. The recognition system acts as a translator between a disabled and an able person, eliminating hindrances in the exchange of ideas. Most existing systems are poorly designed, with limited support for users' day-to-day needs. Methods: The proposed system, embedded with gesture recognition capability, extracts signs from a video sequence and displays them on screen. In addition, speech-to-text and text-to-speech components are introduced to further assist these users. To get the best out of the human-computer relationship, the proposed solution combines several technologies with machine-learning-based sign recognition models trained using the TensorFlow and Keras libraries. Result: The proposed architecture works better than several gesture recognition techniques, such as background elimination and conversion to HSV, because a sharply defined image is provided to the model for classification. Testing indicates a reliable recognition system with high accuracy that covers most of the features a deaf or mute person needs in day-to-day tasks. Conclusion: Current technological advances call for reliable solutions that can be deployed to help deaf and mute people adjust to everyday life. Instead of focusing on a standalone technology, several are combined in this work. The proposed sign recognition system is based on feature extraction and classification, and the trained model helps identify different gestures.
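As a rough illustration of the kind of TensorFlow/Keras sign-classification model the abstract describes, a minimal sketch follows. The layer sizes, input shape, and class count are assumptions; the abstract does not specify the architecture.

```python
# Minimal sketch of a Keras sign-classification model (assumed
# architecture: a small CNN over cropped sign images; the paper does
# not state its actual layers, input size, or number of classes).
import tensorflow as tf
from tensorflow.keras import layers, models

def build_sign_classifier(num_classes=26, input_shape=(64, 64, 3)):
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```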


Author(s):  
Sukhendra Singh ◽  
G. N. Rathna ◽  
Vivek Singhal

Introduction: Sign language is the only way for speech-impaired people to communicate, but it is not known to most hearing people, which creates a communication barrier. In this paper, we present a solution that captures hand gestures with a Kinect camera and classifies each gesture into its correct symbol. Method: We used a Kinect camera rather than an ordinary web camera because an ordinary camera does not capture the 3D orientation or depth of an image, whereas the Kinect captures 3D images, making classification more accurate. Result: The Kinect camera produces different images for the hand gestures for '2' and 'V', and similarly for '1' and 'I', whereas a normal web camera cannot distinguish between them. We used hand gestures from Indian Sign Language, and our dataset had 46,339 RGB images and 46,339 depth images. 80% of the images were used for training and the remaining 20% for testing. In total, 36 hand gestures were considered: 26 for the alphabets A-Z and 10 for the digits 0-9. Conclusion: Along with a real-time implementation, we also compare the performance of various machine learning models and find that a CNN on depth images gives the most accurate performance. All these results were obtained on a PYNQ-Z2 board.
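A minimal sketch of the setup described, assuming an unspecified input resolution: a CNN over single-channel depth frames with 36 output classes (26 letters plus 10 digits) and an 80/20 train/test split. The input size, layer sizes, and use of scikit-learn for splitting are assumptions, not the authors' code.

```python
# Hedged sketch: CNN classification of Kinect depth frames into 36
# gesture classes, with an 80/20 train/test split as stated in the paper.
import tensorflow as tf
from sklearn.model_selection import train_test_split

NUM_CLASSES = 36  # 26 letters A-Z + 10 digits 0-9

# X: array of depth frames shaped (N, 96, 96, 1); y: integer labels
# in [0, 35]. Loading is dataset-specific and omitted here.
# X_train, X_test, y_train, y_test = train_test_split(
#     X, y, test_size=0.2, stratify=y)  # 80% train / 20% test

inputs = tf.keras.Input(shape=(96, 96, 1))  # single-channel depth, not RGB
x = tf.keras.layers.Conv2D(32, 3, activation="relu")(inputs)
x = tf.keras.layers.MaxPooling2D()(x)
x = tf.keras.layers.Conv2D(64, 3, activation="relu")(x)
x = tf.keras.layers.MaxPooling2D()(x)
x = tf.keras.layers.Flatten()(x)
x = tf.keras.layers.Dense(128, activation="relu")(x)
outputs = tf.keras.layers.Dense(NUM_CLASSES, activation="softmax")(x)

model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```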


Author(s):  
Safayet Anowar Shurid ◽  
Khandaker Habibul Amin ◽  
Md. Shahnawaz Mirbahar ◽  
Dolan Karmaker ◽  
Mohammad Tanvir Mahtab ◽  
...  
