Technological Aids for Deaf and Mute in Modern World

2020 ◽  
Vol 14 ◽  
Author(s):  
Vasu Mehra ◽  
Dhiraj Pandey ◽  
Aayush Rastogi ◽  
Aditya Singh ◽  
Harsh Preet Singh

Background: People with hearing and speech disabilities have few ways of communicating with other people; one of these is sign language. Objective: Developing a system for sign language recognition is therefore essential for deaf and mute people. The recognition system acts as a translator between a disabled and an able person, removing hindrances in the exchange of ideas. Most existing systems are poorly designed, with limited support for users' day-to-day needs. Methods: The proposed system, embedded with gesture recognition capability, extracts signs from a video sequence and displays them on screen. In addition, speech-to-text and text-to-speech modules are included to further assist the affected users. To get the best out of the human-computer relationship, the proposed solution combines several cutting-edge technologies with machine-learning-based sign recognition models trained using the TensorFlow and Keras libraries. Result: The proposed architecture works better than several gesture recognition techniques, such as background elimination and conversion to HSV, because a sharply defined image is provided to the model for classification. Testing indicates a reliable recognition system with high accuracy that covers most of the essential features a deaf and mute person needs in day-to-day tasks. Conclusion: Current technological advances call for reliable solutions that can be deployed to help deaf and mute people adjust to normal life. Instead of focusing on a standalone technology, this work introduces several of them. The proposed sign recognition system is based on feature extraction and classification; the trained model helps identify different gestures.
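The two baseline techniques the Results section compares against, background elimination and conversion to HSV, can be sketched in a few lines. The following is a minimal, hypothetical illustration (not the authors' code) for a float RGB frame in [0, 1]:

```python
import numpy as np

def rgb_to_hsv(rgb):
    """Vectorised RGB -> HSV conversion for a float image in [0, 1]."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    maxc = np.max(rgb, axis=-1)
    minc = np.min(rgb, axis=-1)
    v = maxc
    delta = maxc - minc
    s = np.where(maxc > 0, delta / np.maximum(maxc, 1e-12), 0.0)
    # Hue is piecewise, depending on which channel is the maximum.
    safe = np.maximum(delta, 1e-12)
    h = np.zeros_like(maxc)
    h = np.where(maxc == r, ((g - b) / safe) % 6, h)
    h = np.where(maxc == g, (b - r) / safe + 2, h)
    h = np.where(maxc == b, (r - g) / safe + 4, h)
    return np.stack([h / 6.0, s, v], axis=-1)  # hue normalised to [0, 1)

def background_eliminate(frame, background, threshold=0.1):
    """Keep only pixels that differ from a static background model."""
    diff = np.abs(frame - background).mean(axis=-1)
    mask = diff > threshold
    return frame * mask[..., None]
```

A segmented frame produced this way (or, per the paper, a more sharply defined image) is what gets fed to the classifier.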

2022 ◽  
Vol 10 (1) ◽  
pp. 0-0

Developing a system for sign language recognition is essential for deaf and mute people. The recognition system acts as a translator between a disabled and an able person, removing hindrances in the exchange of ideas. Most existing systems are poorly designed, with limited support for users' day-to-day needs. The proposed system, embedded with gesture recognition capability, extracts signs from a video sequence and displays them on screen. In addition, speech-to-text and text-to-speech modules are included to further assist the affected users. To get the best out of the human-computer relationship, the proposed solution combines several cutting-edge technologies with machine-learning-based sign recognition models trained using the TensorFlow and Keras libraries. The proposed architecture works better than several gesture recognition techniques, such as background elimination and conversion to HSV, because a sharply defined image is provided to the model for classification.


2021 ◽  
Vol 47 (2) ◽  
pp. 769-778
Author(s):  
Isack Bulugu

This paper presents a sign language recognition system based on the color stream and skeleton points. Several approaches have been established to address sign language recognition problems, but most of them still have poor recognition accuracy. The proposed approach uses the Kinect sensor's color stream together with skeleton points from the depth stream to improve recognition accuracy. Techniques within this approach use hand trajectories and hand shapes to address sign recognition challenges: for a particular sign, a representative feature vector is extracted that consists of the hand trajectory and the hand shape. A sparse dictionary learning algorithm, Label Consistent K-SVD (LC-KSVD), is applied to obtain a discriminative dictionary, and on that basis the system was further developed into a new classification approach for better results. The proposed system was evaluated on 21 sign words, including one-handed and two-handed signs. It achieves a high recognition accuracy of 98.25%, and an average accuracy of 95.34% for signer-independent recognition. Keywords: Sign language, Color stream, Skeleton points, Kinect sensor, Discriminative dictionary.
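The feature construction and dictionary-based classification described above can be illustrated schematically. The sketch below is a heavily simplified stand-in for LC-KSVD: it concatenates trajectory and shape features into one vector, then classifies by the labeled dictionary atom whose 1-sparse reconstruction leaves the smallest residual. The feature inputs and labels are hypothetical placeholders, not the paper's actual descriptors:

```python
import numpy as np

def build_feature(trajectory, shape_descriptor):
    """Concatenate a (T, 2) hand-trajectory array and a 1-D hand-shape
    encoding into a single representative feature vector."""
    return np.concatenate([np.asarray(trajectory, dtype=float).ravel(),
                           np.asarray(shape_descriptor, dtype=float).ravel()])

def classify_sparse(x, dictionary, labels):
    """1-sparse coding: pick the dictionary atom whose span best
    reconstructs x, and return that atom's sign label (a much-simplified
    stand-in for the discriminative LC-KSVD dictionary in the paper)."""
    D = dictionary / np.linalg.norm(dictionary, axis=0, keepdims=True)
    coeffs = D.T @ x                                  # projection onto each unit atom
    residuals = np.linalg.norm(x[:, None] - D * coeffs, axis=0)
    return labels[int(np.argmin(residuals))]
```

In the real LC-KSVD formulation the dictionary and a label-consistency term are learned jointly; here the dictionary is simply given.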


Author(s):  
Mohit Panwar ◽  
Rohit Pandey ◽  
Rohan Singla ◽  
Kavita Saxena

Every day we meet many people who are deaf or mute, and few technologies help them interact with others, so they face difficulty communicating. Sign language is used by deaf and hard-of-hearing people to exchange information within their own community and with other people. Computer recognition of sign language spans sign gesture acquisition through text/speech generation. Sign gestures can be classified as static and dynamic; static gesture recognition is simpler than dynamic gesture recognition, but both recognition systems are important to the human community. This survey describes the recognition steps for ASL (American Sign Language). Image classification and machine learning can be used to help computers recognize sign language, which can then be interpreted by other people. Earlier glove-based methods required the person to wear a hardware glove while the hand movements were captured, which is uncomfortable for practical use; here a vision-based method is used instead. Convolutional neural networks and a MobileNet-SSD model have been employed in this paper to recognize sign language gestures. Preprocessing was performed on the images, which then served as the cleaned input, and TensorFlow is used for training. The resulting system serves as a tool for sign language detection. Keywords: ASL recognition system, convolutional neural networks (CNNs), classification, real time, TensorFlow
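The preprocessing step mentioned above, turning raw frames into cleaned classifier input, is not spelled out in the abstract, so the following is an assumed minimal version: grayscale conversion, block-mean downsampling to a fixed size, and normalization.

```python
import numpy as np

def preprocess(frame, out_size=64):
    """Clean a raw RGB frame into a fixed-size, zero-mean grayscale
    input. The exact steps in the paper are unspecified; this is one
    plausible sketch, not the authors' pipeline."""
    gray = frame.astype(np.float32).mean(axis=-1)   # drop colour
    h, w = gray.shape
    bh, bw = h // out_size, w // out_size           # downsampling block sizes
    gray = gray[:bh * out_size, :bw * out_size]     # crop to a multiple
    small = gray.reshape(out_size, bh, out_size, bw).mean(axis=(1, 3))
    small = small / 255.0                           # scale to [0, 1]
    return small - small.mean()                     # zero-mean input
```

The cleaned array would then be batched and fed to the CNN / SSD detector for training and inference.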


The aim is to present a real-time system for hand gesture recognition based on the detection of meaningful shape-based features such as orientation, center of mass, the status of fingers (raised or folded), and their respective locations in the image. A hand gesture recognition system has many real-time applications as a natural, innovative, user-friendly way to interact with the computer. Gesture recognition has a wide area of application, including human-machine interaction, sign language, game technology, and robotics. More specifically, a hand gesture is used as a signal or input to the computer, especially by disabled persons. Hand gesture recognition is an interesting part of human-computer interaction and is needed for real-life applications, but the complex structure of the human hand poses many challenges for tracking and feature extraction. Using computer vision algorithms and gesture recognition techniques makes it possible to develop low-cost interface devices that use hand gestures to interact with objects in a virtual environment. An SVM (support vector machine) and an efficient feature extraction technique are presented for hand gesture recognition; this method deals with the dynamic aspects of a hand gesture recognition system.
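Two of the shape-based features listed above, center of mass and orientation, can be computed directly from image moments of a binary hand mask. This is one standard way to realise those features, not necessarily the authors' exact method:

```python
import numpy as np

def hand_features(mask):
    """Centre of mass and principal-axis orientation of a binary hand
    mask, computed from second-order central image moments."""
    ys, xs = np.nonzero(mask)
    cx, cy = xs.mean(), ys.mean()                     # centre of mass
    mu20 = ((xs - cx) ** 2).mean()                    # spread along x
    mu02 = ((ys - cy) ** 2).mean()                    # spread along y
    mu11 = ((xs - cx) * (ys - cy)).mean()             # x-y covariance
    theta = 0.5 * np.arctan2(2 * mu11, mu20 - mu02)   # orientation angle
    return cx, cy, theta
```

Such features, together with finger status, would then form the input vector for the SVM classifier.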


2019 ◽  
Vol 7 (2) ◽  
pp. 43
Author(s):  
Pooja Malhotra ◽  
Chirag K. Maniar ◽  
Nikhil V. Sankpal ◽  
Hardik R. Thakkar ◽  
◽  
...  

IEEE Access ◽  
2021 ◽  
Vol 9 ◽  
pp. 59612-59627
Author(s):  
Mohamed A. Bencherif ◽  
Mohammed Algabri ◽  
Mohamed A. Mekhtiche ◽  
Mohammed Faisal ◽  
Mansour Alsulaiman ◽  
...  

Sensors ◽  
2020 ◽  
Vol 20 (14) ◽  
pp. 4025
Author(s):  
Zhanjun Hao ◽  
Yu Duan ◽  
Xiaochao Dang ◽  
Yang Liu ◽  
Daiyang Zhang

In recent years, with the development of wireless sensing technology and the widespread popularity of WiFi devices, human sensing based on WiFi has become possible, and gesture recognition has become an active topic in the field of human-computer interaction. Sign language, as a kind of gesture, is widely used in daily life. An effective sign language recognition system can help people with aphasia or hearing impairment interact better with computers and ease their daily lives. For this reason, this paper proposes a contactless, fine-grained gesture recognition method using Channel State Information (CSI), namely Wi-SL. The method uses a commercial WiFi device to map subcarrier-level amplitude and phase difference information in the wireless signal to sign language actions, without requiring the user to wear any device. An efficient denoising method filters environmental interference, and an effective selection of optimal subcarriers reduces the computational cost of the system. K-means combined with a Bagging algorithm is used to optimize the Support Vector Machine classification (KSB) model and enhance the classification of sign language action data. The algorithms were implemented and evaluated in three different scenarios. The experimental results show that the average accuracy of Wi-SL gesture recognition reaches 95.8%, which realizes device-free, non-invasive, high-precision sign language gesture recognition.
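The feature step described above, subcarrier-level amplitude plus inter-antenna phase difference with denoising, can be sketched as follows. The antenna pairing and the moving-average filter here are assumptions for illustration; the paper's actual denoising and subcarrier selection are more elaborate:

```python
import numpy as np

def csi_features(csi_a, csi_b, win=5):
    """Amplitude and inter-antenna phase difference from complex CSI
    samples of shape [time, subcarriers], with the amplitude smoothed
    by a moving average (a simplified sketch of the Wi-SL feature step)."""
    amp = np.abs(csi_a)
    # phase(a) - phase(b): phase differences are more stable than raw
    # phase, which carries hardware offsets.
    phase_diff = np.angle(csi_a * np.conj(csi_b))
    kernel = np.ones(win) / win
    smooth = np.apply_along_axis(
        lambda col: np.convolve(col, kernel, mode="same"), 0, amp)
    return smooth, phase_diff
```

Feature vectors built this way per sliding window would then go to the KSB (K-means + Bagging + SVM) classifier.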

