Gesture Recognition of Sign Language Alphabet Using a Magnetic Positioning System

2021 ◽  
Vol 11 (12) ◽  
pp. 5594
Author(s):  
Matteo Rinalduzzi ◽  
Alessio De Angelis ◽  
Francesco Santoni ◽  
Emanuele Buchicchio ◽  
Antonio Moschitta ◽  
...  

Hand gesture recognition is a crucial task for the automated translation of sign language, which enables communication for the deaf. This work proposes the usage of a magnetic positioning system for recognizing the static gestures associated with the sign language alphabet. In particular, a magnetic positioning system, which comprises several wearable transmitting nodes, measures the 3D position and orientation of the fingers within an operating volume of about 30 × 30 × 30 cm, where receiving nodes are placed at known positions. Measured position data are then processed by a machine learning classification algorithm. The proposed system and classification method are validated by experimental tests. Results show that the proposed approach has good generalization properties and provides a classification accuracy of approximately 97% on 24 alphabet letters. Thus, the feasibility of the proposed gesture recognition system for automated translation of the sign language fingerspelling alphabet is demonstrated.
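The abstract does not disclose the exact classifier or feature layout. As an illustration only, the following minimal sketch classifies per-finger 3D position/orientation features with a generic scikit-learn pipeline; the 30-feature layout, the SVM choice, and the synthetic placeholder data are assumptions, not the authors' method.

```python
# Illustrative sketch: classifying static fingerspelling gestures from
# per-finger 3D position/orientation features with a generic classifier.
# Feature layout and SVM choice are assumptions for illustration only.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Hypothetical dataset: one row per gesture sample,
# 5 fingers x (x, y, z, roll, pitch, yaw) = 30 features,
# labels are 24 static alphabet letters (dynamic signs excluded).
rng = np.random.default_rng(0)
X = rng.normal(size=(2400, 30))        # placeholder for measured poses
y = rng.integers(0, 24, size=2400)     # placeholder letter labels

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0, stratify=y)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0))
clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))
```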

ACTA IMEKO ◽  
2021 ◽  
Vol 10 (4) ◽  
pp. 97
Author(s):  
Emanuele Buchicchio ◽  
Francesco Santoni ◽  
Alessio De Angelis ◽  
Antonio Moschitta ◽  
Paolo Carbone

<p class="Abstract"><span lang="EN-US">Gesture recognition is a fundamental step to enable efficient communication for the deaf through the automated translation of sign language. This work proposes the usage of a high-precision magnetic positioning system for 3D positioning and orientation tracking of the fingers and hands palm. The gesture is reconstructed by the MagIK (magnetic and inverse kinematics) method and then processed by a deep learning gesture classification model trained to recognize the gestures associated with the sign language alphabet. Results confirm the limits of vision-based systems and show that the proposed method based on hand skeleton reconstruction has good generalization properties. The proposed system, which combines sensor-based gesture acquisition and deep learning techniques for gesture recognition, provides a 100% classification accuracy, signer independent, after a few hours of training using transfer learning technique on well-known ResNet CNN architecture. The proposed classification model training method can be applied to other sensor-based gesture tracking systems and other applications, regardless of the specific data acquisition technology.</span></p>


2020 ◽  
Vol 14 ◽  
Author(s):  
Vasu Mehra ◽  
Dhiraj Pandey ◽  
Aayush Rastogi ◽  
Aditya Singh ◽  
Harsh Preet Singh

Background: People suffering from hearing and speech disabilities have only a few ways of communicating with other people, one of which is sign language.

Objective: Developing a sign language recognition system is therefore essential for deaf and mute people. The recognition system acts as a translator between a disabled and an able person, removing hindrances to the exchange of ideas. Most existing systems are poorly designed and offer limited support for users' day-to-day needs.

Methods: The proposed system is embedded with gesture recognition capability: it extracts signs from a video sequence and displays them on screen. In addition, speech-to-text and text-to-speech modules are included to further assist the affected users. To get the best out of the human-computer relationship, the proposed solution combines several cutting-edge technologies with machine-learning-based sign recognition models trained using the TensorFlow and Keras libraries.

Results: The proposed architecture works better than several gesture recognition techniques, such as background elimination and conversion to HSV, because a sharply defined image is provided to the model for classification. Testing indicates a reliable recognition system with high accuracy that covers most of the essential features a deaf or mute person needs in day-to-day tasks.

Conclusion: Current technological advances call for reliable solutions that can be deployed to help deaf and mute people adjust to normal life. Instead of focusing on a standalone technology, this work brings several of them together. The proposed sign recognition system is based on feature extraction and classification, and the trained model helps identify different gestures.
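The abstract names TensorFlow and Keras but gives no architecture details. Purely as an illustration, the following minimal Keras sketch defines a small frame-level CNN classifier for sign images; the input size, layer sizes, and class count are placeholder assumptions rather than the authors' reported model.

```python
# Minimal Keras sketch of a frame-level sign classifier in the spirit of
# the abstract (TensorFlow/Keras training). Input size, layer sizes, and
# class count are placeholder assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models

num_classes = 26  # assumed: one class per alphabet letter

model = models.Sequential([
    layers.Input(shape=(64, 64, 3)),          # assumed frame size
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(num_classes, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_frames, train_labels, epochs=10, validation_split=0.1)
```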


Author(s):  
Gayathri R. ◽  
K. Sheela Sobana Rani ◽  
R. Lavanya

Silent speakers face many problems when it comes to communicating their thoughts and views. Furthermore, only a few people know their sign language. They tend to feel awkward taking part in activities with other individuals and require sign language interpreters for their interactions. To give them a better way to get their message across, a "Smart Finger Gesture Recognition System for Silent Speakers" is proposed. Instead of relying on sign language interpreters, gesture recognition is performed with the help of finger movements. The system consists of a data glove, flex sensors, and a Raspberry Pi. The flex sensors are fitted on the data glove and used to recognize the finger gestures. An ADC module then converts the analog sensor values into digital form. After signal conversion, the values are passed to a Raspberry Pi 3, which converts the signals into audio output as well as text using a software tool. The proposed framework reduces the communication barrier between mute and hearing individuals: the recognized finger gestures are converted into speech and text so that hearing people can easily communicate with mute people.
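The abstract names only a generic ADC module and an unspecified software tool. The sketch below illustrates the described pipeline on a Raspberry Pi under stated assumptions: an MCP3008 ADC read over SPI, a threshold-based bent/straight pattern per finger, a tiny hypothetical gesture table, and espeak for speech output; none of these specifics come from the paper.

```python
# Illustrative sketch of the glove pipeline: read flex-sensor channels
# through an ADC, match the bend pattern against a small gesture table,
# and emit text and speech. The MCP3008 ADC, thresholds, gesture table,
# and espeak TTS are assumptions; the abstract names only an ADC module
# and a software tool.
import spidev          # SPI access to the ADC on the Raspberry Pi
import subprocess
import time

spi = spidev.SpiDev()
spi.open(0, 0)                 # SPI bus 0, device 0
spi.max_speed_hz = 1_350_000

def read_adc(channel):
    """Read one MCP3008 channel (0-7), returning a 10-bit value."""
    reply = spi.xfer2([1, (8 + channel) << 4, 0])
    return ((reply[1] & 3) << 8) | reply[2]

# Hypothetical gesture table: bent(1)/straight(0) flag per finger.
GESTURES = {
    (1, 1, 1, 1, 1): "hello",
    (0, 0, 0, 0, 0): "thank you",
}

BEND_THRESHOLD = 600           # assumed ADC value separating bent/straight

def speak(text):
    print(text)                                    # text output
    subprocess.run(["espeak", text], check=False)  # audio output (assumed TTS tool)

while True:
    pattern = tuple(int(read_adc(ch) > BEND_THRESHOLD) for ch in range(5))
    word = GESTURES.get(pattern)
    if word:
        speak(word)
    time.sleep(0.5)
```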

