Sensor Fusion of Motion-Based Sign Language Interpretation with Deep Learning

Sensors ◽  
2020 ◽  
Vol 20 (21) ◽  
pp. 6256
Author(s):  
Boon Giin Lee ◽  
Teak-Wei Chong ◽  
Wan-Young Chung

Sign language was designed to allow hearing-impaired people to interact with others. Nonetheless, knowledge of sign language is uncommon in society, which creates a communication barrier with the hearing-impaired community. Many studies of sign language recognition using computer vision (CV) have been conducted worldwide to reduce this barrier. However, the CV approach is restricted by the viewing angle and is highly affected by environmental factors. In addition, CV usually involves machine learning, which requires the collaboration of a team of experts and high-cost hardware; this increases the application cost in real-world situations. Thus, this study aims to design and implement a smart wearable American Sign Language (ASL) interpretation system using deep learning, which fuses the readings of six inertial measurement units (IMUs). The IMUs are attached to the fingertips and the back of the hand to recognize sign language gestures, so the proposed method is not restricted by the field of view. The study reveals that the model achieves an average recognition rate of 99.81% for dynamic ASL gestures. Moreover, the proposed ASL recognition system can be further integrated with ICT and IoT technology to provide a feasible solution that assists hearing-impaired people in communicating with others and improves their quality of life.
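
Below is a minimal sketch of how such a sensor-fusion classifier could be wired up; the abstract does not disclose the network architecture, window length, or per-IMU channel layout, so all of those numbers (and the use of an LSTM) are assumptions for illustration only.

```python
# Illustrative sketch only: every number below is an assumption, not the paper's design.
import torch
import torch.nn as nn

NUM_IMUS = 6          # five fingertips + back of the hand (from the abstract)
CHANNELS_PER_IMU = 6  # assumed: 3-axis accelerometer + 3-axis gyroscope
WINDOW = 100          # assumed number of time steps per gesture
NUM_GESTURES = 27     # placeholder class count

class IMUFusionNet(nn.Module):
    """Fuses the six IMU streams by concatenating their channels per time step,
    then models the gesture dynamics with a recurrent layer."""
    def __init__(self):
        super().__init__()
        self.rnn = nn.LSTM(NUM_IMUS * CHANNELS_PER_IMU, 64, batch_first=True)
        self.head = nn.Linear(64, NUM_GESTURES)

    def forward(self, x):                 # x: (batch, WINDOW, 36)
        _, (h, _) = self.rnn(x)           # h: (1, batch, 64)
        return self.head(h[-1])           # gesture logits

logits = IMUFusionNet()(torch.randn(8, WINDOW, NUM_IMUS * CHANNELS_PER_IMU))
print(logits.shape)                       # torch.Size([8, 27])
```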

Sign language is the only method of communication for hearing- and speech-impaired people around the world, and most of them understand only a single sign language. Thus, there is an increasing demand for sign language interpreters. Learning sign language is difficult for hearing people, and learning spoken language is impossible for speech- and hearing-impaired people. A great deal of research is being done in the domain of automatic sign language recognition. Different methods, such as computer vision, data gloves, and depth sensors, can be used to train a computer to interpret sign language. Interpretation is performed from sign to text, text to sign, speech to sign, and sign to speech. Different countries use different sign languages, so signers of different sign languages are unable to communicate with each other. Analyzing the characteristic features of gestures provides insight into a sign language, and identifying features common to sign language gestures helps in designing a sign language recognition system. Such a system will help reduce the communication gap between sign language users and spoken language users.


2021 ◽  
Author(s):  
Ishika Godage ◽  
Ruvan Weerasignhe ◽  
Damitha Sandaruwan

There is no doubt that communication plays a vital role in human life. There is, however, a significant population of hearing-impaired people who use non-verbal techniques for communication, which the majority of people cannot understand. The predominant of these techniques is sign language, the main communication protocol among hearing-impaired people. In this research, we propose a method to bridge the communication gap between hearing-impaired people and others by translating signed gestures into text. Most existing solutions, based on technologies such as Kinect, Leap Motion, computer vision, EMG, and IMU, try to recognize and translate individual signs of hearing-impaired people. The few approaches to sentence-level sign language recognition suffer from not being user-friendly or even practical owing to the devices they use. The proposed system is designed to give the user full freedom to sign an uninterrupted full sentence at a time. For this purpose, we employ two Myo armbands for gesture capturing. Using signal processing and supervised learning based on a vocabulary of 49 words and 346 sentences for training with a single signer, we were able to achieve 75-80% word-level accuracy and 45-50% sentence-level accuracy using gestural (EMG) and spatial (IMU) features for our signer-dependent experiment.
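
As a rough illustration of the pipeline described above, the snippet below extracts simple per-window gestural (EMG) and spatial (IMU) features and fits an off-the-shelf classifier; the feature set, channel layout, and choice of classifier are assumptions, not details taken from the paper.

```python
# Minimal sketch of per-window feature extraction for EMG + IMU streams.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def window_features(emg, imu):
    """emg: (samples, 16) two armbands x 8 EMG channels (assumed layout)
       imu: (samples, 20) two armbands x (quaternion + accel + gyro), assumed"""
    rms = np.sqrt(np.mean(emg ** 2, axis=0))   # gestural: muscle activation energy
    mav = np.mean(np.abs(emg), axis=0)         # gestural: mean absolute value
    imu_mean = imu.mean(axis=0)                # spatial: average orientation/motion
    imu_std = imu.std(axis=0)                  # spatial: movement variability
    return np.concatenate([rms, mav, imu_mean, imu_std])

# X: one feature vector per signed-word window, y: word labels (synthetic stand-in data)
X = np.stack([window_features(np.random.randn(200, 16), np.random.randn(50, 20))
              for _ in range(490)])
y = np.repeat(np.arange(49), 10)               # 49-word vocabulary, as in the abstract
clf = RandomForestClassifier(n_estimators=200).fit(X, y)
```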


Author(s):  
Prof. Namrata Ghuse

Sign language recognition through technology has been a neglected idea, even though an enormous community could benefit from it. More than 3% of the world's population cannot speak or hear properly. Hand gesture-based communication allows speech- and hearing-impaired people to communicate with each other and with the rest of the world. However, most hearing people never become familiar with sign-language-based communication, which creates a gap between the speech- and hearing-impaired community and others. Previous versions of this project relied on image generation and emoji symbols, but those frameworks were neither affordable nor portable for the impaired person. The main aim of this project has always been to interpret Indian Sign Language and American Sign Language standards, convert gestures into voice and text, and help the impaired person interact with another person from a remote location. The smart hand glove is built with a gyroscope, flex sensors, an ESP32 microcontroller/micro:bit, an accelerometer, a 25-LED matrix actuator for output, and a vibrator.
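
For illustration, a hypothetical MicroPython loop for the glove side might sample the flex sensors on the ESP32's ADC pins and stream the readings for gesture classification; the pin assignments, sample rate, and CSV output format below are assumptions rather than the project's actual firmware.

```python
# Hypothetical glove-side sketch: read one flex sensor per finger and stream the values.
from machine import ADC, Pin
import time

FLEX_PINS = (32, 33, 34, 35, 36)          # one assumed ADC pin per finger

flex = []
for p in FLEX_PINS:
    adc = ADC(Pin(p))
    adc.atten(ADC.ATTN_11DB)              # full 0-3.3 V input range on the ESP32
    flex.append(adc)

while True:
    # One CSV line per sample; a host or on-device classifier parses this stream.
    print(",".join(str(a.read()) for a in flex))
    time.sleep_ms(20)                     # ~50 Hz sampling (assumed)
```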


Sensors ◽  
2018 ◽  
Vol 18 (10) ◽  
pp. 3554 ◽  
Author(s):  
Teak-Wei Chong ◽  
Boon-Giin Lee

Sign language is intentionally designed to allow deaf and dumb communities to convey messages and to connect with society. Unfortunately, learning and practicing sign language is not common in society; hence, this study developed a sign language recognition prototype using the Leap Motion Controller (LMC). Many existing studies have proposed methods for incomplete sign language recognition, whereas this study aimed for full American Sign Language (ASL) recognition, which consists of 26 letters and 10 digits. Most of the ASL letters are static (no movement), but certain ASL letters are dynamic (they require certain movements). Thus, this study also aimed to extract features from finger and hand motions to differentiate between the static and dynamic gestures. The experimental results revealed that the sign language recognition rates for the 26 letters using a support vector machine (SVM) and a deep neural network (DNN) are 80.30% and 93.81%, respectively. Meanwhile, the recognition rates for the combination of 26 letters and 10 digits are slightly lower, approximately 72.79% for the SVM and 88.79% for the DNN. As a result, the sign language recognition system has great potential for reducing the gap between deaf and dumb communities and others. The proposed prototype could also serve as an interpreter for the deaf and dumb in everyday life in service sectors, such as banks or post offices.
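
A minimal sketch of the classification stage is shown below, assuming the Leap Motion features (fingertip positions, palm direction, and so on) have already been extracted into fixed-length vectors; the feature dimensionality, dataset, and model sizes are placeholders, not the study's actual configuration.

```python
# Rough sketch: train an SVM and a small neural network on pre-extracted LMC features.
import numpy as np
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

NUM_CLASSES = 36                   # 26 ASL letters + 10 digits, per the abstract
X = np.random.randn(3600, 60)      # hypothetical 60-dim feature vectors
y = np.repeat(np.arange(NUM_CLASSES), 100)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, stratify=y)

svm = SVC(kernel="rbf").fit(X_tr, y_tr)                  # SVM baseline
dnn = MLPClassifier(hidden_layer_sizes=(128, 64),        # small stand-in for the DNN
                    max_iter=300).fit(X_tr, y_tr)
print("SVM:", svm.score(X_te, y_te), "DNN:", dnn.score(X_te, y_te))
```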


2021 ◽  
Author(s):  
Isayas Feyera ◽  
Hussien Seid

Abstract Hearing-impaired people use sign language to communicate with each other as well as with other communities, but they are usually unable to communicate with hearing people, most of whom do not understand sign language. They therefore need sign-language-to-text recognition. In this research, a model is optimized for recognizing Amharic Sign Language as Amharic characters. A convolutional neural network model is trained on a dataset gathered from a teacher of Amharic Sign Language. The major steps followed in developing the models are frame extraction from Amharic Sign Language video, labeling and annotation, XML creation, TFRecord generation, and model training. After training of the neural network is completed, the model is saved for recognizing sign language from a video stream or from individual video frames. The accuracy of the model is computed as the sum of the confidences of the individual alphabets correctly recognized divided by the number of alphabets presented for evaluation, for both Faster R-CNN and SSD. The mean average accuracy of the Faster R-CNN and the Single-Shot Detector is found to be 98.25% and 96%, respectively. The model is trained and evaluated on the characters of the Amharic language. The research will continue to include the remaining words and sentences used in Amharic Sign Language in order to build a full-fledged sign language recognition model and a complete system.
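
As we read it, the reported accuracy sums the detector's confidence for each correctly recognized character and divides by the number of characters evaluated; a small sketch of that computation follows, with purely illustrative labels and confidence values.

```python
# Illustrative computation of the confidence-weighted accuracy described above.
def mean_confidence_accuracy(detections, ground_truth):
    """detections: list of (predicted_label, confidence) per evaluated character.
       ground_truth: list of true labels, in the same order."""
    total = 0.0
    for (pred, conf), true in zip(detections, ground_truth):
        if pred == true:              # only correctly recognized characters contribute
            total += conf
    return total / len(ground_truth)

# Example: 3 of 4 characters recognized correctly with high confidence.
dets = [("ha", 0.99), ("le", 0.97), ("me", 0.40), ("se", 0.98)]
truth = ["ha", "le", "ra", "se"]
print(mean_confidence_accuracy(dets, truth))   # 0.735
```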

