Real time Indian Sign Language Recognition System to aid deaf-dumb people

Author(s): P. Subha Rajam, G. Balakrishnan
2019, Vol 7 (2), pp. 43
Author(s): Malhotra Pooja, K. Maniar Chirag, V. Sankpal Nikhil, R. Thakkar Hardik, ...

Author(s): Zhibo Wang, Tengda Zhao, Jinxin Ma, Hongkai Chen, Kaixin Liu, ...
2021, Vol 9 (1), pp. 182-203
Author(s): Muthu Mariappan H, Dr. Gomathi V

Dynamic hand gesture recognition is a challenging task in Human-Computer Interaction (HCI) and Computer Vision. Potential application areas of gesture recognition include sign language translation, video gaming, video surveillance, robotics, and gesture-controlled home appliances. In the proposed research, gesture recognition is applied to recognize sign language words from real-time videos. Classifying actions from video sequences requires both spatial and temporal features. The proposed system handles the former with a Convolutional Neural Network (CNN), the core of many computer vision solutions, and the latter with a Recurrent Neural Network (RNN), which is well suited to modelling sequences of movements. Thus, a real-time Indian Sign Language (ISL) recognition system is developed using a hybrid CNN-RNN architecture. The system is trained on the proposed CasTalk-ISL dataset. The ultimate purpose of the presented research is to deploy a real-time sign language translator that removes the barriers in communication between hearing-impaired people and hearing people. The developed system achieves 95.99% top-1 accuracy and 99.46% top-3 accuracy on the test dataset, outperforming existing approaches that use various deep models on different datasets.
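The spatial-then-temporal split described in the abstract can be sketched in a few lines of NumPy. Everything here is an illustrative assumption, not the paper's architecture: the dimensions, the single linear-plus-ReLU stand-in for the per-frame CNN, and the vanilla RNN cell are placeholders showing only how per-frame spatial features feed a recurrent layer that produces class probabilities.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (assumptions; not taken from the paper).
n_frames, feat_dim, hidden_dim, n_classes = 16, 64, 32, 10

def cnn_features(frame_batch, w):
    """Stand-in for the per-frame CNN: one linear map + ReLU.
    A real system would run a deep CNN over each video frame."""
    return np.maximum(frame_batch @ w, 0.0)

def rnn_classify(features, w_xh, w_hh, w_hy):
    """Vanilla RNN over the frame-feature sequence; the final hidden
    state is projected to class scores and softmaxed."""
    h = np.zeros(hidden_dim)
    for x in features:                      # temporal modelling
        h = np.tanh(x @ w_xh + h @ w_hh)
    logits = h @ w_hy
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()                  # probabilities over sign classes

# Random stand-ins for trained weights and one video clip.
frames = rng.standard_normal((n_frames, 128))            # flattened frames
w_cnn  = rng.standard_normal((128, feat_dim)) * 0.1
w_xh   = rng.standard_normal((feat_dim, hidden_dim)) * 0.1
w_hh   = rng.standard_normal((hidden_dim, hidden_dim)) * 0.1
w_hy   = rng.standard_normal((hidden_dim, n_classes)) * 0.1

probs = rnn_classify(cnn_features(frames, w_cnn), w_xh, w_hh, w_hy)
print(probs.shape)
```

In a trained system the argmax of `probs` would be the predicted ISL word for the clip; here the weights are random, so only the shapes and data flow are meaningful.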


Communication is one of the basic requirements for living in the world. Deaf and dumb people communicate through sign language, but most hearing people have difficulty understanding it. A Sign Language Recognition (SLR) system provides a medium of communication between normal and differently abled people. American Sign Language (ASL) has attracted many researchers' attention, but Indian Sign Language (ISL) differs significantly from ASL in phonetics, grammar, and hand movements, which makes designing an Indian Sign Language Recognition (ISLR) system a difficult task. ISLR systems use an ISL dataset for recognition but suffer from problems of scaling, object orientation, and the lack of an optimal feature set. To address these issues, this paper uses the Scale-Invariant Feature Transform (SIFT) as a descriptor. The extracted features train a Feed-Forward Back-Propagation Neural Network (FFBPNN), which is optimized with the Artificial Bee Colony (ABC) algorithm according to a fitness function. The alphabet dataset has been collected by extracting frames from video, and the numbers dataset has been created manually with deaf and dumb students of the NGO "Sarthak". Simulation results show a significant improvement in accurately identifying alphabets and numbers, with an average accuracy of 99.43%.
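The ABC optimization step named in the abstract can be illustrated with a minimal pure-Python sketch. The fitness function, colony size, and iteration budget below are assumptions for demonstration (in the paper the fitness would score the FFBPNN on SIFT features of the ISL dataset; a simple sphere function stands in here), and the onlooker-bee phase is folded into the employed-bee loop for brevity.

```python
import random

random.seed(42)

# Stand-in fitness: minimize the sum of squares. In the paper's pipeline
# this role is played by the FFBPNN's error (assumption for illustration).
def fitness(x):
    return sum(v * v for v in x)

DIM, N_SOURCES, LIMIT, ITERS = 4, 10, 5, 200
LOW, HIGH = -5.0, 5.0

def random_source():
    return [random.uniform(LOW, HIGH) for _ in range(DIM)]

def neighbour(x, peers):
    """Standard ABC move: nudge one dimension relative to a random peer."""
    k = random.randrange(DIM)
    partner = random.choice(peers)
    y = list(x)
    y[k] += random.uniform(-1.0, 1.0) * (x[k] - partner[k])
    y[k] = min(max(y[k], LOW), HIGH)
    return y

sources = [random_source() for _ in range(N_SOURCES)]
trials = [0] * N_SOURCES
best = min(sources, key=fitness)
best_f = fitness(best)

for _ in range(ITERS):
    # Employed-bee phase: greedy local search around every food source.
    for i in range(N_SOURCES):
        cand = neighbour(sources[i], sources)
        if fitness(cand) < fitness(sources[i]):
            sources[i], trials[i] = cand, 0
        else:
            trials[i] += 1
        if fitness(sources[i]) < best_f:
            best, best_f = list(sources[i]), fitness(sources[i])
    # Scout-bee phase: abandon sources that stopped improving.
    for i in range(N_SOURCES):
        if trials[i] > LIMIT:
            sources[i], trials[i] = random_source(), 0

print(best_f)
```

In the paper's setting, each food source would encode a candidate set of FFBPNN weights rather than a point in a toy search space; the greedy replace-if-better rule and the trial-limit scout reset are the same.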

