EM-Sign: A Non-Contact Recognition Method Based on 24 GHz Doppler Radar for Continuous Signs and Dialogues

Electronics ◽  
2020 ◽  
Vol 9 (10) ◽  
pp. 1577 ◽  
Author(s):  
Linting Ye ◽  
Shengchang Lan ◽  
Kang Zhang ◽  
Guiyuan Zhang

We studied continuous sign language recognition using Doppler radar sensors. Four signs in Chinese Sign Language and American Sign Language were captured and extracted by complex empirical mode decomposition (CEMD) to obtain spectrograms. Image sharpening was used to enhance the micro-Doppler signatures of the signs. To classify the different signs, we utilized an improved Yolov3-tiny network, replacing the framework with ResNet and fine-tuning the network in advance. This method removes epentheses from the training process. Experimental results revealed that the proposed method surpasses state-of-the-art sign language recognition methods in continuous sign recognition, with a precision of 0.924, a recall of 0.993, an F1-measure of 0.957 and a mean average precision (mAP) of 0.99. In addition, dialogue recognition in three daily conversation scenarios was performed and evaluated. The average word error rate (WER) was 0.235, 10% lower than that of other works. Our work provides an alternative form of sign language recognition and a new approach that simplifies the training process and achieves better continuous sign language recognition.
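The spectrogram extraction step can be illustrated with a plain short-time Fourier transform. Note that the paper uses CEMD plus image sharpening; the NumPy sketch below (function names and the simulated tone are invented for illustration) is only a stand-in for turning an I/Q radar return into a micro-Doppler spectrogram:

```python
import numpy as np

def doppler_spectrogram(iq, fs, win=128, hop=32):
    """Short-time FFT of a complex I/Q radar return.

    Stand-in for the paper's CEMD-based extraction: each Hann-windowed
    frame gets one FFT, giving Doppler frequency vs. time.
    """
    w = np.hanning(win)
    frames = [iq[i:i + win] * w for i in range(0, len(iq) - win + 1, hop)]
    # One FFT per frame; shift zero Doppler to the centre row, convert to dB.
    spec = np.fft.fftshift(np.fft.fft(np.array(frames), axis=1), axes=1).T
    freqs = np.fft.fftshift(np.fft.fftfreq(win, d=1 / fs))
    return freqs, 20 * np.log10(np.abs(spec) + 1e-12)

# Simulated return: a constant 300 Hz Doppler tone (e.g. a hand moving at
# constant radial speed), sampled at 2 kHz for one second.
fs = 2000
t = np.arange(0, 1, 1 / fs)
iq = np.exp(2j * np.pi * 300 * t)
freqs, spec = doppler_spectrogram(iq, fs)
peak = freqs[np.argmax(spec.mean(axis=1))]  # strongest Doppler bin
```

With a real 24 GHz sensor, `iq` would be the demodulated baseband signal, and a sign would appear as a time-varying micro-Doppler ribbon rather than a flat line.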

2020 ◽  
Vol 14 ◽  
Author(s):  
Vasu Mehra ◽  
Dhiraj Pandey ◽  
Aayush Rastogi ◽  
Aditya Singh ◽  
Harsh Preet Singh

Background: People suffering from hearing and speaking disabilities have few ways of communicating with other people. One of these is sign language. Objective: Developing a sign language recognition system is essential for deaf and mute people. The recognition system acts as a translator between a disabled and an able person, eliminating hindrances in the exchange of ideas. Most existing systems are poorly designed, with limited support for users' day-to-day needs. Methods: The proposed system, embedded with gesture recognition capability, extracts signs from a video sequence and displays them on screen. In addition, speech-to-text and text-to-speech systems are introduced to further assist affected people. To get the best out of the human-computer relationship, the proposed solution combines several cutting-edge technologies with machine-learning-based sign recognition models trained using the TensorFlow and Keras libraries. Results: The proposed architecture works better than gesture recognition techniques such as background elimination and conversion to HSV because a sharply defined image is provided to the model for classification. Testing indicates a reliable recognition system with high accuracy that covers most of the essential features a deaf or mute person needs in day-to-day tasks. Conclusion: Current technological advances call for reliable solutions that can be deployed to help deaf and mute people adjust to normal life. Instead of focusing on a standalone technology, this work brings several of them together. The proposed sign recognition system is based on feature extraction and classification; the trained model identifies the different gestures.
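As a rough illustration of the "feature extraction and classification" pipeline the conclusion describes, here is a minimal NumPy sketch. The toy features and the nearest-centroid classifier stand in for the TensorFlow/Keras models the paper actually trains; every name, shape and value below is invented:

```python
import numpy as np

def extract_features(frame):
    """Toy features: per-column and per-row intensity means of a frame.

    Stands in for learned CNN features; purely illustrative.
    """
    return np.concatenate([frame.mean(axis=0), frame.mean(axis=1)])

class NearestCentroid:
    """Minimal classifier: assign each sample to the closest class centroid."""
    def fit(self, X, y):
        self.labels_ = np.unique(y)
        self.centroids_ = np.array([X[y == c].mean(axis=0) for c in self.labels_])
        return self
    def predict(self, X):
        d = np.linalg.norm(X[:, None] - self.centroids_[None], axis=2)
        return self.labels_[d.argmin(axis=1)]

# Two synthetic 8x8 "gesture" classes: bright top half vs. bright bottom half.
rng = np.random.default_rng(0)
frames0 = rng.random((20, 8, 8)) * 0.2
frames0[:, :4, :] += 1.0
frames1 = rng.random((20, 8, 8)) * 0.2
frames1[:, 4:, :] += 1.0
X = np.array([extract_features(f) for f in np.concatenate([frames0, frames1])])
y = np.array([0] * 20 + [1] * 20)

clf = NearestCentroid().fit(X[::2], y[::2])       # train on even samples
acc = (clf.predict(X[1::2]) == y[1::2]).mean()    # evaluate on odd samples
```

A real system would replace both stages with a trained CNN, but the split into feature extraction followed by classification is the same.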


2020 ◽  
Vol 34 (10) ◽  
pp. 13781-13782 ◽ 
Author(s):  
Yuanqi Du ◽  
Nguyen Dang ◽  
Riley Wilkerson ◽  
Parth Pathak ◽  
Huzefa Rangwala ◽  
...  

In today's digital world, rapid technological advancements continue to lessen the burden of tasks for individuals. Among these tasks is communication across perceived language barriers, and increased attention has been drawn to American Sign Language (ASL) recognition in recent years. Camera-based and motion-detection-based methods have been researched extensively; however, a divide remains in communication between ASL users and non-users. This research team therefore proposes the use of a novel wireless sensor (frequency-modulated continuous-wave radar) to help bridge the gap. In short, the device sends out signals that detect the user's body positioning in space. These signals reflect off the body and back to the sensor, producing thousands of point-cloud points per second that indicate where the body is positioned. These points can then be examined for movement over multiple consecutive time frames using a cell division algorithm, ultimately showing how the body moves through space as it completes a single gesture or sentence. On 19 commonly used gestures, the project achieved 95% accuracy in one-object prediction and 80% accuracy in cross-object prediction with 30% of other objects' data introduced; there are 30 samples per gesture per person, from three persons.
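One plausible reading of the cell division step is: split the workspace into a coarse grid of cells, count the points falling in each cell per frame, and track occupancy changes across consecutive frames. Everything below (grid size, bounds, the simulated drifting blob) is an assumption made for illustration, not the paper's actual algorithm:

```python
import numpy as np

def occupancy_grid(points, bounds=(-1, 1), cells=4):
    """Assign 3-D cloud points to a coarse cells x cells x cells grid.

    Illustrative guess at the 'cell division' step: the workspace is split
    into cells and each frame is summarised by per-cell point counts.
    """
    edges = np.linspace(*bounds, cells + 1)
    idx = np.clip(np.searchsorted(edges, points, side="right") - 1, 0, cells - 1)
    grid = np.zeros((cells, cells, cells), dtype=int)
    for i, j, k in idx:
        grid[i, j, k] += 1
    return grid

def motion_signature(frames, **kw):
    """Total per-cell occupancy change between consecutive frames."""
    grids = np.array([occupancy_grid(f, **kw) for f in frames])
    return np.abs(np.diff(grids, axis=0)).sum(axis=(1, 2, 3))

# A hand-like blob of 200 points drifting along x over three frames.
rng = np.random.default_rng(1)
frames = [rng.normal(loc=(x, 0.0, 0.0), scale=0.05, size=(200, 3))
          for x in (-0.6, 0.0, 0.6)]
sig = motion_signature(frames)  # large values = lots of movement
```

A static gesture would yield a near-zero signature, while a moving hand lights up different cells frame to frame.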


Author(s):  
Prof. Namrata Ghuse

Gesture-based communication recognition through technology has been a neglected idea, even though an enormous community could profit from it. More than 3% of the world's population cannot speak or hear properly. Hand-gesture-based communication allows specially impaired people to communicate with each other and with the rest of the world's population. Ordinary people rarely become familiar with sign-language-based communication, which creates a gap between specially impaired people and ordinary people. Previous versions of the project involved image generation and emoji symbols, but those frameworks are neither affordable nor portable for the impaired person. The main purpose of the project has always been to interpret Indian Sign Language and American Sign Language standards, convert gestures into voice and text, and help the impaired person interact with other people from a remote location. The smart glove has been built with a gyroscope, flex sensors, an ESP32 microcontroller/micro:bit, an accelerometer, a 25-LED matrix actuator/output and a vibrator.
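As a loose sketch of how a glove's flex-sensor readings might be mapped to signs, the snippet below matches five ADC readings against stored per-sign templates. All labels, readings and thresholds are invented for illustration; a real glove would calibrate templates per user and also fuse the gyroscope and accelerometer data:

```python
def classify_sign(flex, templates, tol=150):
    """Match five flex-sensor ADC readings against stored sign templates.

    Returns the closest template's label, or None if no template is within
    an average per-finger error of `tol`. All values are hypothetical.
    """
    best, best_err = None, float("inf")
    for label, ref in templates.items():
        err = sum(abs(a - b) for a, b in zip(flex, ref))
        if err < best_err:
            best, best_err = label, err
    return best if best_err <= tol * len(flex) else None

# Invented per-sign finger-bend templates (one ADC value per finger).
TEMPLATES = {
    "HELLO":     (300, 310, 305, 300, 315),   # open hand
    "YES":       (800, 820, 810, 805, 790),   # closed fist
    "THANK YOU": (300, 800, 810, 805, 790),   # thumb extended
}

print(classify_sign((310, 805, 800, 812, 795), TEMPLATES))  # prints: THANK YOU
```

The matched label would then drive the text display and text-to-speech output, and the LED matrix or vibrator could confirm recognition to the wearer.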

