Deep Learning Methods for Indian Sign Language Recognition

Author(s): Pratik Likhar, Neel Kamal Bhagat, Rathna G N
2021
Author(s): Priyank Mistry, Vedang Jotaniya, Parth Patel, Narendra Patel, Mosin Hasan
2019, Vol 7 (2), pp. 43
Author(s): MALHOTRA POOJA, K. MANIAR CHIRAG, V. SANKPAL NIKHIL, R. THAKKAR HARDIK, ...

Author(s): Sukhendra Singh, G. N. Rathna, Vivek Singhal

Introduction: Sign language is the only means of communication for speech-impaired people, but most hearing people do not know it, which creates a communication barrier. In this paper, we present a solution that captures hand gestures with a Kinect camera and classifies each gesture into its correct symbol. Method: We used a Kinect camera rather than an ordinary web camera because an ordinary camera cannot capture the 3D orientation or depth of a scene, whereas the Kinect captures depth images, making classification more accurate. Result: The Kinect camera produces distinct depth images for the hand gestures '2' and 'V', and similarly for '1' and 'I', whereas a normal web camera cannot distinguish between these pairs. We used hand gestures from Indian Sign Language; our dataset contained 46,339 RGB images and 46,339 depth images. 80% of the images were used for training and the remaining 20% for testing. In total, 36 hand gestures were considered: 26 for the alphabets A-Z and 10 for the digits 0-9. Conclusion: Along with a real-time implementation, we compare the performance of various machine learning models and find that a CNN on depth images gives the most accurate performance. All results were obtained on a PYNQ Z2 board.
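A minimal sketch of what a depth-image CNN classifier for this 36-class task might look like, written in PyTorch. The architecture, 64x64 input resolution, and the `DepthGestureCNN` name are illustrative assumptions, not details taken from the paper.

```python
# Hypothetical sketch of a small CNN for 36-class depth-gesture
# classification (26 letters + 10 digits). Architecture and image
# size are assumptions, not the paper's published configuration.
import torch
import torch.nn as nn

class DepthGestureCNN(nn.Module):
    def __init__(self, num_classes: int = 36):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),  # depth maps are single-channel
            nn.ReLU(),
            nn.MaxPool2d(2),                             # 64x64 -> 32x32
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),                             # 32x32 -> 16x16
        )
        self.classifier = nn.Linear(32 * 16 * 16, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        return self.classifier(x.flatten(1))

# 80/20 train/test split as described in the abstract, e.g.:
# train_set, test_set = torch.utils.data.random_split(dataset, [0.8, 0.2])
model = DepthGestureCNN()
logits = model(torch.randn(4, 1, 64, 64))  # batch of 4 fake 64x64 depth maps
print(logits.shape)                        # torch.Size([4, 36])
```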


Author(s): Safayet Anowar Shurid, Khandaker Habibul Amin, Md. Shahnawaz Mirbahar, Dolan Karmaker, Mohammad Tanvir Mahtab, ...

Author(s): Ala Addin I. Sidig, Hamzah Luqman, Sabri Mahmoud, Mohamed Mohandes

Sign language is the major means of communication for the deaf community. It uses body language and gestures such as hand shapes, lip patterns, and facial expressions to convey a message. Sign language is geography-specific, as it differs from one country to another. Arabic Sign Language (ArSL) is used in all Arab countries. The lack of a comprehensive benchmarking database for ArSL is one of the challenges of the automatic recognition of Arabic Sign Language. This article introduces the KArSL database for ArSL, consisting of 502 signs that cover 11 chapters of the ArSL dictionary. Signs in the KArSL database are performed by three professional signers, and each sign is repeated 50 times by each signer. The database is recorded using the state-of-the-art multi-modal Microsoft Kinect V2. We also propose three approaches for sign language recognition using this database: Hidden Markov Models, a deep learning image-classification model applied to an image composed of shots of the sign video, and an attention-based deep learning captioning system. The recognition accuracies of these systems indicate their suitability for such a large number of Arabic signs. The techniques are also tested on a publicly available database. The KArSL database will be made freely available to interested researchers.
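The second proposed approach classifies an image composed of shots of the sign video. Below is a minimal sketch of that frame-tiling idea: sample a fixed number of frames from a sign video and tile them into one grid image that a standard image classifier can consume. The frame count, grid shape, resolution, and the `frames_to_grid` helper are illustrative assumptions, not KArSL's published configuration.

```python
# Hypothetical sketch: turn a sign video into a single grid image
# of evenly sampled frames, suitable for a standard CNN classifier.
import numpy as np

def frames_to_grid(frames: np.ndarray, rows: int = 2, cols: int = 4) -> np.ndarray:
    """Tile rows*cols evenly spaced frames from (T, H, W, C) into one image."""
    t = frames.shape[0]
    idx = np.linspace(0, t - 1, rows * cols).astype(int)  # even temporal sampling
    picked = frames[idx]                                   # (rows*cols, H, W, C)
    h, w, c = picked.shape[1:]
    grid = picked.reshape(rows, cols, h, w, c)
    # Rearrange to (rows*H, cols*W, C) so the frames sit side by side.
    return grid.transpose(0, 2, 1, 3, 4).reshape(rows * h, cols * w, c)

video = np.random.rand(50, 112, 112, 3)  # 50 fake frames of one sign
grid_image = frames_to_grid(video)
print(grid_image.shape)                  # (224, 448, 3), ready for a CNN
```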

