Signet: A Deep Learning based Indian Sign Language Recognition System

Author(s):  
Sruthi C J ◽  
Lijiya A
2019 ◽  
Vol 7 (2) ◽  
pp. 43
Author(s):  
MALHOTRA POOJA ◽  
K. MANIAR CHIRAG ◽  
V. SANKPAL NIKHIL ◽  
R. THAKKAR HARDIK ◽  
...  

Communication is one of the basic requirements for living in the world. Deaf and mute people communicate through sign language, but most hearing people have difficulty understanding it. A Sign Language Recognition (SLR) system can serve as a medium between differently abled people and the rest of society. American Sign Language (ASL) has attracted many researchers’ attention, but Indian Sign Language Recognition (ISLR) differs significantly from ASL in phonetics, grammar, and hand movements, which makes designing an ISLR system a difficult task. ISLR systems use an Indian Sign Language (ISL) dataset for recognition but suffer from problems of scaling, object orientation, and the lack of an optimal feature set. To address these issues, this paper uses the Scale-Invariant Feature Transform (SIFT) as a descriptor: the extracted features train a Feed-Forward Back-Propagation Neural Network (FFBPNN), which is then optimized with the Artificial Bee Colony (ABC) algorithm according to a fitness function. The alphabet dataset was collected by extracting frames from video, and the number dataset was created manually with deaf and mute students of the NGO “Sarthak”. Simulation results show a significant improvement in accurately identifying alphabets and numbers, with an average accuracy of 99.43%.
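As a rough illustration of the pipeline described in this abstract, the sketch below (Python, assuming opencv-python and scikit-learn) pools SIFT descriptors into a fixed-length vector and trains a feed-forward back-propagation network on them. The ABC-based weight optimization is not reproduced here; scikit-learn’s standard gradient-based optimizer stands in for it, and every function name and parameter is illustrative, not taken from the authors’ code.

```python
# Minimal sketch (not the authors' implementation): SIFT descriptors are
# mean-pooled into one 128-d feature vector per image, then a feed-forward
# network trained with back-propagation classifies the signs.
import cv2  # opencv-python
import numpy as np
from sklearn.neural_network import MLPClassifier

sift = cv2.SIFT_create()

def sift_feature_vector(image_path: str) -> np.ndarray:
    """Extract SIFT descriptors and mean-pool them into a 128-d vector."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    if gray is None:
        raise FileNotFoundError(image_path)
    _, descriptors = sift.detectAndCompute(gray, None)
    if descriptors is None:          # no keypoints detected in the image
        return np.zeros(128)
    return descriptors.mean(axis=0)  # scale/rotation-invariant summary

def train_sign_classifier(image_paths, labels):
    """Train a feed-forward back-propagation network on pooled SIFT features."""
    X = np.stack([sift_feature_vector(p) for p in image_paths])
    clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500)
    clf.fit(X, labels)
    return clf
```

A faithful reimplementation would likely replace the mean pooling with a bag-of-visual-words encoding and wrap the network’s weight selection in an ABC search driven by the fitness function the abstract mentions.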


Sensors ◽  
2020 ◽  
Vol 20 (21) ◽  
pp. 6256
Author(s):  
Boon Giin Lee ◽  
Teak-Wei Chong ◽  
Wan-Young Chung

Sign language was designed to allow hearing-impaired people to interact with others. Nonetheless, knowledge of sign language is uncommon in society, which creates a communication barrier with the hearing-impaired community. Many studies of sign language recognition using computer vision (CV) have been conducted worldwide to reduce this barrier. However, the CV approach is restricted by the camera’s viewing angle and is highly affected by environmental factors. In addition, CV usually involves machine learning, which requires the collaboration of a team of experts and high-cost hardware, increasing the application cost in real-world situations. This study therefore designs and implements a smart wearable American Sign Language (ASL) interpretation system based on deep learning, applying sensor fusion to “fuse” six inertial measurement units (IMUs). The IMUs are attached to the fingertips and the back of the hand to recognize sign language gestures, so the proposed method is not restricted by the field of view. The study shows that the model achieves an average recognition rate of 99.81% for dynamic ASL gestures. Moreover, the proposed ASL recognition system can be further integrated with ICT and IoT technology to provide a feasible solution for assisting hearing-impaired people in communicating with others and improving their quality of life.
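For intuition only, here is a minimal sketch of such a sensor-fusion classifier (assumptions, not the published model): six IMUs each contributing 3-axis accelerometer and 3-axis gyroscope readings, concatenated per time step into a 36-channel sequence and classified with an LSTM in PyTorch. The channel layout, sampling rate, window length, and gesture count are all assumed.

```python
# Minimal sketch (assumed architecture, not the paper's): fused IMU frames
# from six sensors are classified as dynamic gestures by a stacked LSTM.
import torch
import torch.nn as nn

NUM_IMUS = 6          # five fingertips + back of hand, per the abstract
CHANNELS_PER_IMU = 6  # assumed: 3-axis accelerometer + 3-axis gyroscope
NUM_GESTURES = 27     # hypothetical label count

class IMUGestureNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.lstm = nn.LSTM(input_size=NUM_IMUS * CHANNELS_PER_IMU,
                            hidden_size=128, num_layers=2, batch_first=True)
        self.head = nn.Linear(128, NUM_GESTURES)

    def forward(self, x):         # x: (batch, time, 36) fused IMU frames
        _, (h, _) = self.lstm(x)  # final hidden state of the top LSTM layer
        return self.head(h[-1])   # gesture logits

# Example: a batch of 2-second windows at an assumed 50 Hz sampling rate
window = torch.randn(8, 100, NUM_IMUS * CHANNELS_PER_IMU)
logits = IMUGestureNet()(window)
```

In practice the fused window would come from time-synchronizing the six IMU streams before concatenating them, which is the essence of the sensor-fusion step the abstract describes.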


2021 ◽  
Author(s):  
Priyank Mistry ◽  
Vedang Jotaniya ◽  
Parth Patel ◽  
Narendra Patel ◽  
Mosin Hasan
