Detection and Recognition of Hand Gestures for Indian Sign Language Recognition System

Author(s): Mitashi Bansal, Sumita Gupta
2019, Vol 7 (2), pp. 43

Author(s): Malhotra Pooja, K. Maniar Chirag, V. Sankpal Nikhil, R. Thakkar Hardik, ...

Author(s): Sukhendra Singh, G. N. Rathna, Vivek Singhal

Introduction: Sign language is the only way for speech-impaired people to communicate. Because most hearing people do not know sign language, a communication barrier arises; this is the problem speech-impaired people face. In this paper, we present a solution that captures hand gestures with a Kinect camera and classifies each hand gesture into its correct symbol. Method: We used a Kinect camera rather than an ordinary web camera because an ordinary camera does not capture the 3D orientation or depth of the scene, whereas the Kinect captures a 3D image, which makes classification more accurate. Result: The Kinect camera produces different images for the hand gestures for '2' and 'V', and similarly for '1' and 'I', which a normal web camera cannot distinguish. We used hand gestures from Indian sign language; our dataset contained 46339 RGB images and 46339 depth images. 80% of the images were used for training and the remaining 20% for testing. In total, 36 hand gestures were considered: 26 for the alphabets A-Z and 10 for the digits 0-9. Conclusion: Along with a real-time implementation, we compare the performance of various machine learning models and find that a CNN on depth images gives the most accurate performance. All results were obtained on a PYNQ Z2 board.
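
To make the pipeline concrete, the following is a minimal sketch of the depth-image CNN classifier the abstract describes. The network architecture, the 64x64 input resolution, and the training settings are assumptions for illustration; the abstract only fixes the 36-class label set and the 80/20 train/test split.

```python
# Sketch of a depth-image CNN for 36 gesture classes (26 letters + 10 digits).
# Architecture and input size are assumptions; only the class count and the
# 80/20 split come from the abstract.
import numpy as np
from sklearn.model_selection import train_test_split
from tensorflow.keras import layers, models

NUM_CLASSES = 36          # A-Z plus 0-9
IMG_SIZE = (64, 64)       # assumed input resolution

def build_model():
    return models.Sequential([
        layers.Input(shape=(*IMG_SIZE, 1)),     # single-channel depth map
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dense(NUM_CLASSES, activation="softmax"),
    ])

def train(depth_images, labels):
    # depth_images: (N, 64, 64, 1) float array; labels: (N,) int array.
    # 80/20 train/test split, as described in the abstract.
    x_tr, x_te, y_tr, y_te = train_test_split(
        depth_images, labels, test_size=0.2, random_state=0)
    model = build_model()
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.fit(x_tr, y_tr, epochs=10, validation_data=(x_te, y_te))
    return model
```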


Communication is one of the basic requirements for living in the world. Deaf and mute people communicate through sign language, but hearing people have difficulty understanding it. To provide a medium of communication between hearing and differently abled people, a Sign Language Recognition (SLR) system is a solution. American Sign Language (ASL) has attracted many researchers' attention, but Indian Sign Language (ISL) differs significantly from ASL in phonetics, grammar, and hand movement, so designing an Indian Sign Language Recognition (ISLR) system is a difficult task. An ISLR system uses an ISL dataset for recognition but suffers from problems of scaling, object orientation, and the lack of an optimal feature set. In this paper, to address these issues, the Scale-Invariant Feature Transform (SIFT) is used as a descriptor. It extracts the features that train a Feed-Forward Back-Propagation Neural Network (FFBPNN), which is then optimized using Artificial Bee Colony (ABC) search according to a fitness function. The alphabet dataset was collected by extracting frames from video, and the number dataset was created manually with deaf and mute students of the NGO "Sarthak". Simulation results show a significant improvement in accurately identifying alphabets and numbers, with an average accuracy of 99.43%.
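
The SIFT-to-network pipeline can be sketched as below. The fixed-length descriptor pooling and the hidden-layer size are assumptions, and a plain back-propagation MLP stands in for the ABC-optimized FFBPNN; the ABC weight search itself is only summarized in a comment.

```python
# Sketch of the SIFT -> feed-forward network pipeline described above.
# Keypoint count, pooling, and network size are illustrative assumptions.
import cv2
import numpy as np
from sklearn.neural_network import MLPClassifier

def sift_feature_vector(gray_img, n_keypoints=32):
    """Fixed-length feature: top-N SIFT descriptors, flattened and zero-padded."""
    sift = cv2.SIFT_create(nfeatures=n_keypoints)
    _, desc = sift.detectAndCompute(gray_img, None)
    if desc is None:
        desc = np.zeros((0, 128), np.float32)   # no keypoints found
    desc = desc[:n_keypoints]
    pad = np.zeros((n_keypoints - len(desc), 128), np.float32)
    return np.concatenate([desc, pad]).ravel()  # shape: (n_keypoints * 128,)

def train_classifier(gray_images, labels):
    # The paper trains an FFBPNN and then tunes its weights with Artificial
    # Bee Colony search, using classification error as the fitness function.
    # As a stand-in, an MLP trained with ordinary back-propagation:
    X = np.stack([sift_feature_vector(img) for img in gray_images])
    clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500)
    clf.fit(X, labels)
    return clf
```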


2020, Vol 5 (1)
Author(s): Kudirat O Jimoh, Anuoluwapo O Ajayi, Ibrahim K Ogundoyin

An Android-based sign language recognition system for selected English vocabularies was developed with the explicit objective of examining the specific characteristics responsible for gesture recognition. A recognition model was designed, implemented, and evaluated on 230 samples of hand gestures. The collected samples were pre-processed and rescaled from 3024 × 4032 pixels to 245 × 350 pixels. The samples were examined for the specific characteristics using Oriented FAST and Rotated BRIEF (ORB), with Principal Component Analysis (PCA) used for feature extraction. The model was implemented in Android Studio using a template-matching algorithm as its classifier. The performance of the system was evaluated using precision, recall, and accuracy as metrics. The system obtained an average classification rate of 87%, an average precision of 88%, and an average recall of 91% on the test data of hand gestures. The study therefore successfully classified hand gestures for selected English vocabularies. The developed system will enhance communication between hearing and hearing-impaired people and aid their teaching and learning processes. Future work includes exploring state-of-the-art machine learning techniques such as Generative Adversarial Networks (GANs) on larger datasets to improve accuracy.

Keywords: feature extraction; gesture recognition; sign language; vocabulary; Android device.
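
A rough sketch of the ORB + PCA + template-matching pipeline follows, in Python for consistency with the other sketches (the paper's own implementation runs in Android Studio). The descriptor pooling, the number of principal components, and the nearest-template distance are assumptions not stated in the abstract; only the 245 × 350 rescaling comes from it.

```python
# Sketch of the ORB + PCA + template-matching pipeline described above.
import cv2
import numpy as np
from sklearn.decomposition import PCA

def preprocess(img_bgr):
    # Rescale to 245 x 350 and convert to grayscale, as in the abstract
    # (which dimension is width is an assumption).
    img = cv2.resize(img_bgr, (245, 350))
    return cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

def orb_vector(gray_img, n_keypoints=32):
    """Fixed-length ORB feature: top-N descriptors, flattened and zero-padded."""
    orb = cv2.ORB_create(nfeatures=n_keypoints)
    _, desc = orb.detectAndCompute(gray_img, None)
    if desc is None:
        desc = np.zeros((0, 32), np.uint8)      # no keypoints found
    desc = desc[:n_keypoints].astype(np.float32)
    pad = np.zeros((n_keypoints - len(desc), 32), np.float32)
    return np.concatenate([desc, pad]).ravel()

class TemplateMatcher:
    """Classify by distance to the closest stored template in PCA space."""
    def fit(self, gray_images, labels, n_components=50):
        # n_components is an assumption; it must not exceed the sample count.
        X = np.stack([orb_vector(img) for img in gray_images])
        self.pca = PCA(n_components=n_components).fit(X)
        self.templates = self.pca.transform(X)
        self.labels = np.asarray(labels)
        return self

    def predict(self, gray_img):
        v = self.pca.transform(orb_vector(gray_img)[None])
        return self.labels[np.linalg.norm(self.templates - v, axis=1).argmin()]
```

A nearest-template search in the reduced PCA space keeps the on-device matching cheap, which is one plausible reason a template-matching classifier was chosen for a mobile deployment.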

