Gesture to Speech Conversion Using Flex Sensors, MPU6050 and Python

Communicating through hand gestures is one of the most common forms of non-verbal and visual communication adopted by the speech-impaired population all around the world. The problem at present is that most people cannot comprehend hand gestures or convert them to spoken language quickly enough for the listener to understand. A large fraction of India’s population is speech impaired, and communicating in sign language is not an easy task for those who have never learned it. This problem demands a better solution that can help the speech-impaired population converse without difficulty, thereby reducing the communication gap. This paper proposes an idea that will assist in removing, or at least reducing, this gap between speech-impaired people and the rest of the population. Research in this area mostly focuses on image-processing approaches; however, a cheaper and more user-friendly approach is used in this paper. The idea is to make a glove that can be worn by speech-impaired people and used to convert sign language into speech and text. Our prototype uses an Arduino Uno as the microcontroller, interfaced with flex sensors and an accelerometer-gyroscope sensor (MPU6050) for reading hand gestures. Furthermore, we have incorporated an algorithm for better interpretation of the sensor data, producing more accurate results. Thereafter, we use Python to interface the Arduino Uno with a microprocessor and finally convert the recognized gestures into speech. The prototype has been calibrated in accordance with American Sign Language (ASL).
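As a rough illustration of the host-side pipeline described above, the sketch below reads the comma-separated flex and MPU6050 values that an Arduino could stream over serial, matches them against calibrated gesture templates, and speaks the result. The port name, packet layout, and template values are assumptions for illustration, not details taken from the prototype; it relies on the pyserial and pyttsx3 packages.

```python
# Hedged host-side sketch: serial port, packet layout and template values are
# assumed placeholders, not the authors' calibration data.
import serial          # pyserial
import pyttsx3         # offline text-to-speech

PORT, BAUD = "/dev/ttyACM0", 9600
TEMPLATES = {                             # assumed calibrated readings per letter
    "A": [820, 790, 805, 810, 780, 0.1, -0.2, 9.6],
    "B": [300, 310, 295, 305, 320, 0.0,  0.1, 9.8],
}

def nearest_letter(sample):
    """Return the template letter closest to the live reading (squared distance)."""
    return min(TEMPLATES, key=lambda k: sum((a - b) ** 2
                                            for a, b in zip(TEMPLATES[k], sample)))

engine = pyttsx3.init()
with serial.Serial(PORT, BAUD, timeout=1) as ser:
    while True:
        packet = ser.readline().decode(errors="ignore").strip()
        if not packet:
            continue
        # assumed packet layout: flex1,...,flex5,ax,ay,az
        sample = [float(x) for x in packet.split(",")]
        letter = nearest_letter(sample)
        print(letter, end="", flush=True)
        engine.say(letter)          # speak the recognized letter
        engine.runAndWait()
```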



2018, Vol. 14 (1), pp. 75-82
Author(s):
Agung Budi Prasetijo, Muhamad Y. Dias, Dania Eridani

Deaf or hard-of-hearing people use American Sign Language (ASL) to communicate with others. Unfortunately, most people with normal hearing do not learn this sign language and therefore cannot understand persons with such a disability. However, the rapid development of science and technology can make it easier to translate formations of the body, or parts of the body, into language. This research was preceded by a literature study surveying the sensors that need to be embedded in a glove. It employs five flex sensors as well as an accelerometer and gyroscope to recognize ASL letters that have similar finger formations. An Arduino Mega 2560 board is employed as the central controller to read the flex sensors’ output and process the information. With a 1Sheeld module, the output of the interpreter is presented on a smartphone both as text and as voice. The result of this research is a flex-glove system capable of translating ASL hand formations into output that can be seen and heard. Limitations were found when translating the signs for the letters N and M, where the accuracy reached only 60%; the overall performance of the system in recognizing the letters A to Z is 96.9%.
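A small sketch, under invented sensor values, of how combining the five flex readings with hand orientation derived from the accelerometer can separate gestures whose flex readings alone are nearly identical; whether this particular cue resolves the M/N confusion reported above would depend on the actual calibration.

```python
# Illustrative only: the flex and accelerometer numbers below are invented to
# show the idea, not measurements from the glove in the paper.
import math

def pitch_roll(ax, ay, az):
    """Approximate hand orientation (degrees) from the accelerometer axes."""
    pitch = math.degrees(math.atan2(ax, math.sqrt(ay * ay + az * az)))
    roll = math.degrees(math.atan2(ay, az))
    return pitch, roll

def features(flex, accel):
    """Five flex readings plus pitch/roll form the feature vector."""
    return list(flex) + list(pitch_roll(*accel))

# Flex values alone are nearly identical for these two samples ...
sign_1 = features([610, 605, 598, 820, 815], (0.4, 0.1, 9.7))
sign_2 = features([612, 603, 600, 818, 816], (2.8, 0.2, 9.3))
print(sign_1)
print(sign_2)   # ... but the orientation columns now differ
```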


Author(s):  
Basil Jose

Abstract: With the advancement of technology, we can implement a variety of ideas to serve mankind in numerous ways. Inspired by this, we have developed a smart hand-glove system that can help people with hearing and speech disabilities. In the world of sound, for those without it, sign language is a powerful tool to make their voices heard. American Sign Language (ASL) is the most frequently used sign language in the world, with some differences depending on the country. In this project we created a wearable, wireless gesture-decoder module that can transform a basic set of ASL motions into alphabets and sentences. Our project uses a glove that houses a series of flex sensors on the metacarpal and interphalangeal joints of the fingers to detect the bending of the fingers through the piezoresistive effect (a change in electrical resistance when a semiconductor or metal is subjected to mechanical strain). The glove is also fitted with an accelerometer that helps detect hand movements. Simple classification algorithms from machine learning are then applied to translate the gestures into alphabets or words. Keywords: Arduino; MPU6050; Flex sensor; Machine learning; SVM classifier
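Since the keywords point to an SVM classifier, the following is a minimal scikit-learn sketch of that step; the eight-value feature layout (five flex readings plus three accelerometer axes) and the synthetic training samples are assumptions, not the authors' data.

```python
# Hedged SVM sketch on synthetic glove-like features (not real sensor data).
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def make_class(center, n=40):
    """Generate n noisy samples around an assumed gesture centroid."""
    return center + rng.normal(0, 15, size=(n, 8))

# assumed layout per sample: 5 flex readings + 3 accelerometer axes
X = np.vstack([make_class(np.array([800, 790, 805, 810, 780, 0, 0, 980])),
               make_class(np.array([300, 310, 295, 305, 320, 0, 100, 970]))])
y = np.array([0] * 40 + [1] * 40)          # 0 and 1 stand in for two gestures

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = SVC(kernel="rbf", gamma="scale").fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```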


2019, Vol. 10 (3), pp. 60-73
Author(s):
Ravinder Ahuja, Daksh Jain, Deepanshu Sachdeva, Archit Garg, Chirag Rajput

Communicating with each other through hand gestures is simply called the language of signs. It is an accepted language for communication among deaf and dumb people in this society. The deaf and dumb community faces many obstacles in day-to-day life when communicating with their acquaintances. The most recent study done by the World Health Organization reports that a very large section of the world's population (around 360 million people) has hearing loss, i.e. 5.3% of the earth's total population. This creates a need for an automated system which converts hand gestures into meaningful words and sentences. A Convolutional Neural Network (CNN) is used on 24 hand signals of American Sign Language in order to enhance the ease of communication. OpenCV was used for further processing steps such as image preprocessing. The results demonstrated that the CNN has an accuracy of 99.7% using a database found on kaggle.com.
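For reference, below is a compact Keras CNN of the kind commonly applied to the Kaggle "Sign Language MNIST" data (28x28 grayscale images, 24 static letters, since J and Z require motion); this is a plausible sketch under those assumptions, not the authors' exact architecture.

```python
# Hedged CNN sketch for 24 static ASL letters; dataset loading is left out.
import tensorflow as tf
from tensorflow.keras import layers, models

num_classes = 24          # J and Z are excluded because they involve motion

model = models.Sequential([
    layers.Input(shape=(28, 28, 1)),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dropout(0.3),
    layers.Dense(num_classes, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
# model.fit(x_train, y_train, validation_split=0.1, epochs=10)  # with the Kaggle data loaded
```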


Author(s):  
Rachaell Nihalaani

Abstract: Sign Language is invaluable to hearing- and speaking-impaired people and is their only way of communicating among themselves. However, its reach is limited, as most other people have no knowledge of sign language interpretation. Sign language is communicated via hand gestures and visual modes and is therefore used by hearing- and speaking-impaired people to intercommunicate. These languages have alphabets and grammar of their own, which cannot be understood by people who have no knowledge of the specific symbols and rules. Thus, it has become essential for everyone to be able to interpret, understand and communicate via sign language to overcome and alleviate the barriers of speech and communication. This can be tackled with the help of machine learning. This model is a Sign Language Interpreter that uses a dataset of images and interprets sign language alphabets and sentences with 90.9% accuracy. For this paper, we have used the ASL (American Sign Language) alphabet, and we have used a CNN algorithm. This paper ends with a summary of the model’s viability and its usefulness for the interpretation of Sign Language. Keywords: Sign Language, Machine Learning, Interpretation model, Convolutional Neural Networks, American Sign Language
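A hedged sketch of how such an interpreter might be queried at inference time: load a trained CNN, normalise one hand image, and map the highest-probability class back to a letter. The model file name, the 28x28 grayscale input size, and the 24-letter label set are assumptions, not details from the paper.

```python
# Hedged inference sketch; "asl_cnn.h5" and "sample_sign.png" are placeholders.
import string
import numpy as np
import cv2
import tensorflow as tf

LETTERS = [c for c in string.ascii_uppercase if c not in ("J", "Z")]  # 24 static letters

model = tf.keras.models.load_model("asl_cnn.h5")      # assumed trained model file

def predict_letter(image_path):
    """Preprocess one hand image and return the predicted letter."""
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    img = cv2.resize(img, (28, 28)).astype("float32") / 255.0
    probs = model.predict(img[np.newaxis, ..., np.newaxis], verbose=0)[0]
    return LETTERS[int(np.argmax(probs))]

print(predict_letter("sample_sign.png"))               # assumed test image
```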


Author(s):  
Sarthak Sharma

Abstract: Sign language is one of the oldest and most natural forms of language for communication. However, since most people do not know sign language and interpreters are very difficult to come by, we have developed a real-time method for fingerspelling-based American Sign Language using neural networks. In our method, the hand is first passed through a filter; after the filter is applied, the hand image is passed through a classifier which predicts the class of the hand gesture.
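The abstract does not specify the filter, so the sketch below shows one common choice for that stage: grayscale conversion, Gaussian blur, and an adaptive threshold producing a binary hand mask that a classifier could then consume. File names and kernel sizes are placeholders.

```python
# One plausible filtering stage before classification (assumed, not the paper's).
import cv2

def filter_hand(frame_bgr):
    """Return a binary mask of the hand region from a BGR frame."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), 2)
    # adaptive threshold keeps the hand silhouette under uneven lighting
    return cv2.adaptiveThreshold(blurred, 255,
                                 cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                 cv2.THRESH_BINARY_INV, 11, 2)

frame = cv2.imread("hand.jpg")              # assumed input image
mask = filter_hand(frame)
cv2.imwrite("hand_filtered.png", mask)      # this mask would feed the classifier
```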


2020, Vol. 8 (6), pp. 4191-4194

People with the inability to speak use sign language for communication. Ordinary people usually find it very difficult to communicate with mute people because they do not understand sign language. This paper aims to provide a solution for this problem through a device that uses an Arduino Uno board, flex sensors and an Android application to facilitate interaction between users. The flex sensors detect the movements and gestures of the wearer and, based on established conditions for the different values generated, corresponding messages are sent using a Global System for Mobile Communications (GSM) module to the user’s Android device, which translates the text message to speech. The GSM module also attempts to create parameters for gesture prediction by sending sensor inputs to a cloud-based server for future reference. The application is ever-learning and continues to evolve to be more reliable by examining user behaviour at all times. The use of this device allows mute people to convert sign language to speech, thereby making it significantly easier to talk to others, especially those who do not know sign language. This device empowers mute people and opens them up to previously unattainable opportunities.
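A minimal sketch of the condition-based mapping this kind of glove relies on: each message is tied to conditions on the flex readings, and the first matching rule selects the text that would then be sent over the GSM module. The thresholds and messages below are placeholders, not the device's actual configuration.

```python
# Hedged rule-table sketch: thresholds and messages are invented examples.
RULES = {
    "I need water":   lambda f: f[0] > 700 and all(v < 400 for v in f[1:]),
    "Please help me": lambda f: all(v > 700 for v in f),
}

def message_for(flex_readings):
    """Return the first message whose gesture condition matches the readings."""
    for text, condition in RULES.items():
        if condition(flex_readings):
            return text
    return None

print(message_for([820, 760, 780, 810, 790]))   # -> "Please help me"
print(message_for([750, 300, 320, 310, 280]))   # -> "I need water"
```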


Author(s):  
Santosh Kumar J, Vamsi, Vinod, Madhusudhan and Tejas

A hand gesture is a non-verbal means of communication involving the motion of the fingers to convey information. Hand gestures are used in sign language as a means of communication for deaf and mute people, and they are also used to control devices. The purpose of gesture recognition in devices has always been to bridge the gap between the physical world and the digital world. The way humans interact among themselves could be carried over to the digital world via gestures and algorithms. Gestures can be tracked using gyroscopes, accelerometers, and other sensors. In this project, we aim to provide a cost-effective electronic method for hand gesture recognition; the system makes use of flex sensors and an ESP32 board. A flex sensor works on the principle of a change in internal resistance, which is used to detect the angle made by the user’s finger at any given time. The flexes made by the hand in different combinations amount to a gesture, and this gesture can be converted into signals or displayed as text on a screen. A smart glove is designed which is equipped with custom-made flex sensors that detect the gestures, together with an ESP32 board that processes the readings and converts them to text. This helps machines identify human sign language, recognize a word from hand gestures, and respond accordingly.
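A short worked example of the flex-sensor principle described above, assuming the sensor sits on the high side of a 10 kΩ voltage divider read by the ESP32's 12-bit ADC; the flat and fully-bent resistances are illustrative values, not measurements of the custom-made sensors.

```python
# Hedged worked example: wiring, ADC range and resistances are assumptions.
VCC = 3.3            # ESP32 supply voltage
ADC_MAX = 4095       # 12-bit ADC
R_FIXED = 10_000.0   # ohms, fixed divider resistor
R_FLAT, R_BENT = 25_000.0, 100_000.0   # assumed flex resistance at 0 and 90 degrees

def adc_to_angle(adc_value):
    """Convert a raw ADC reading into an approximate finger bend angle."""
    v_out = VCC * adc_value / ADC_MAX
    # voltage divider: v_out = VCC * R_FIXED / (R_FIXED + R_flex)
    r_flex = R_FIXED * (VCC - v_out) / v_out
    # linear map from resistance to bend angle, clamped to 0-90 degrees
    angle = 90.0 * (r_flex - R_FLAT) / (R_BENT - R_FLAT)
    return max(0.0, min(90.0, angle))

print(adc_to_angle(1170))   # roughly straight finger (~0 degrees)
print(adc_to_angle(372))    # strongly bent finger (~90 degrees)
```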


2020, Vol. 4 (4), pp. 20-27
Author(s):
Md. Abdur Rahim, Jungpil Shin, Keun Soo Yun

Sign language (SL) recognition is intended to connect deaf people with the general population via a variety of perspectives, experiences, and skills that serve as a basis for the development of human-computer interaction. Hand gesture-based SL recognition encompasses a wide range of human capabilities and perspectives. The efficiency of hand gesture performance is still challenging due to the complexity of varying levels of illumination, diversity, multiple aspects, self-identifying parts, different shapes, sizes, and complex backgrounds. In this context, we present an American Sign Language alphabet recognition system that translates sign gestures into text and creates a meaningful sentence from continuously performed gestures. We propose a segmentation technique for hand gestures and present a convolutional neural network (CNN) based on the fusion of features. The input image is captured directly from video via a low-cost device such as a webcam and is pre-processed by a filtering and segmentation technique, for example, the Otsu method. Following this, a CNN is used to extract the features, which are then fused in a fully connected layer. To classify and recognize the sign gestures, a well-known classifier such as Softmax is used. A dataset is proposed for this work that contains only static images of hand gestures, which were collected in a laboratory environment. An analysis of the results shows that our proposed system achieves better recognition accuracy than other state-of-the-art systems.
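A brief sketch of the capture-and-segment stage the authors describe: grab a webcam frame, blur it, and binarise it with Otsu's method before it reaches the CNN. The camera index, blur kernel, and output file name are assumptions.

```python
# Hedged webcam capture and Otsu segmentation sketch (OpenCV).
import cv2

cap = cv2.VideoCapture(0)                 # assumed default webcam
ok, frame = cap.read()
cap.release()
if ok:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)
    # Otsu picks the threshold automatically from the image histogram
    _, mask = cv2.threshold(blurred, 0, 255,
                            cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    cv2.imwrite("hand_segmented.png", mask)   # input for the CNN stage
```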

