An intelligent wearable to aid speech impaired people by detection of specific hand gestures using flex sensors

2019 ◽  
Author(s):  
Vivek Kumar ◽  
Vineet Shekhar ◽  
Vikrant Verma


Author(s):
Sukhendra Singh ◽  
G. N. Rathna ◽  
Vivek Singhal

Introduction: Sign language is the only way for speech-impaired people to communicate, but it is not known to most hearing people, which creates a communication barrier. In this paper, we present a solution that captures hand gestures with a Kinect camera and classifies each gesture into its correct symbol. Method: We used a Kinect camera rather than an ordinary web camera because an ordinary camera does not capture the 3D orientation or depth of the scene, whereas the Kinect captures 3D images, making classification more accurate. Result: The Kinect produces distinct images for the hand gestures '2' and 'V', and similarly for '1' and 'I', which a normal web camera cannot distinguish. We used hand gestures from Indian sign language; our dataset comprised 46339 RGB images and 46339 depth images. 80% of the images were used for training and the remaining 20% for testing. In total, 36 hand gestures were considered: 26 for the alphabets A-Z and 10 for the digits 0-9. Conclusion: Along with a real-time implementation, we compare the performance of various machine learning models and find that a CNN on depth images gives the most accurate performance. All results were obtained on a PYNQ Z2 board.
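The abstract names a CNN over depth images but not its architecture, so the following is a minimal Keras sketch of such a classifier; the input resolution and layer sizes are assumptions, not details from the paper.

```python
# A minimal sketch of a depth-image gesture classifier for the 36 classes
# described above. Input resolution and layer widths are assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 36           # 26 alphabets (A-Z) + 10 digits (0-9)
IMG_SHAPE = (128, 128, 1)  # hypothetical size; depth maps are single-channel

model = models.Sequential([
    layers.Input(shape=IMG_SHAPE),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# An 80/20 train/test split, as in the abstract, would then be applied
# before calling model.fit(x_train, y_train, ...).
```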


Author(s):  
D. Ivanko ◽  
D. Ryumin ◽  
A. Karpov

Abstract. The inability to use speech interfaces greatly limits deaf and hearing-impaired people in human-machine interaction. To address this problem and to increase the accuracy and reliability of an automatic Russian sign language recognition system, we propose to use lip-reading in addition to hand-gesture recognition. Deaf and hearing-impaired people use sign language as their main means of everyday communication. Sign language is a structured communication system of hand gestures and lip movements involving visual motions and signs. Since sign language includes not only hand gestures but also lip movements that mimic vocalized pronunciation, it is of interest to investigate how accurately such visual speech can be recognized by a lip-reading system, especially given that the visual speech of hearing-impaired people is often characterized by hyper-articulation, which should potentially facilitate its recognition. For this purpose, the thesaurus of Russian sign language (TheRusLan), collected at SPIIRAS in 2018–19, was used. The database consists of color optical FullHD video recordings of 13 native Russian sign language signers (11 females and 2 males) from the "Pavlovsk boarding school for the hearing impaired". Each signer demonstrated 164 phrases, 5 times each. This work covers the initial stages of the research, including data collection, data labeling, region-of-interest detection and methods for informative feature extraction. The results of this study can later be used to create assistive technologies for deaf and hearing-impaired people.
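As a sketch of the region-of-interest stage mentioned above: the abstract does not specify the detector used, so this example stands in with OpenCV's stock Haar face cascade and a crude lower-third heuristic for the mouth region; both choices are assumptions, not the paper's method.

```python
# A minimal sketch of lip region-of-interest extraction from a video frame,
# assuming OpenCV's bundled Haar cascade (not the paper's actual detector).
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def lip_roi(frame):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1,
                                          minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = faces[0]
    # The mouth occupies roughly the lower third of the face box;
    # a crude heuristic standing in for a landmark-based detector.
    return frame[y + 2 * h // 3 : y + h, x : x + w]
```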


Individuals communicate with one another to convey their thoughts to the people around them. About 2.78% of India's population cannot speak. Sign language is the mode of communication for people who are deaf or deaf-mute, but most hearing people never learn it, which creates a communication gap between deaf-mute and hearing people. Previous systems for this task relied on image processing, but those frameworks were non-portable and excessively costly. The aim of this work is to build a framework for recognizing sign language that enables interaction between deaf-mute and hearing people, thereby narrowing the communication gap between them. Hearing-impaired people generally use a linguistic communication based on hand gestures with specific movements to express ideas to others. The proposed glove is an electronic device that translates standard American Sign Language into text or speech in order to remove the information-transmission gap between mute and hearing people. The glove has been implemented with flex sensors, an accelerometer, a microcontroller (Arduino Nano) and a Bluetooth chip.
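As a hedged sketch of how a host could consume the glove's Bluetooth stream: the port name, message format and gesture table below are hypothetical, since the abstract does not describe the glove's protocol.

```python
# A minimal sketch of the receiving side, assuming the glove streams
# comma-separated flex readings over a Bluetooth serial port.
import serial

PORT = "/dev/rfcomm0"   # hypothetical Bluetooth serial device
GESTURES = {            # hypothetical calibrated flex patterns -> letters
    (1, 1, 1, 1, 1): "A",
    (0, 1, 1, 1, 1): "B",
}

def finger_states(flex_values, threshold=512):
    # 1 = bent, 0 = straight, judged against a 10-bit ADC reading
    return tuple(int(v > threshold) for v in flex_values)

with serial.Serial(PORT, 9600, timeout=1) as link:
    while True:
        line = link.readline().decode(errors="ignore").strip()
        if not line:
            continue
        try:
            values = [int(v) for v in line.split(",")[:5]]
        except ValueError:
            continue  # skip malformed messages
        print(GESTURES.get(finger_states(values), "?"))
```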


Communicating through hand gestures is one of the most common forms of non-verbal and visual communication adopted by the speech-impaired population all around the world. The problem at present is that most people cannot comprehend hand gestures or convert them to spoken language quickly enough for the listener to understand. A large fraction of India's population is speech impaired, and communicating through sign language is not an easy task for the untrained. This problem demands a solution that can assist the speech-impaired population in conversing without difficulty, thereby reducing their communication gap. This paper proposes an idea that will assist in removing, or at least reducing, this gap between speech-impaired and other people. Research in this area mostly focuses on image processing approaches; this paper instead uses a cheaper and more user-friendly approach. The idea is a glove, worn by a speech-impaired person, that converts sign language into speech and text. Our prototype uses an Arduino Uno microcontroller interfaced with flex sensors, an accelerometer and a gyroscopic sensor to read hand gestures. Furthermore, we have incorporated an algorithm for better interpretation of the sensor data, producing more accurate results. Thereafter, we use Python to interface the Arduino Uno with a microprocessor and finally convert the output into speech. The prototype has been calibrated in accordance with ASL (American Sign Language).
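A minimal sketch of the Python stage described above: read a recognized gesture label from the Arduino Uno over serial and vocalize it. The port name and message protocol are assumptions, and pyttsx3 is one common offline text-to-speech choice, not necessarily the authors' library.

```python
# A minimal sketch, assuming the Arduino sends one recognized word per line
# over USB serial. Port name and protocol are hypothetical.
import serial
import pyttsx3

engine = pyttsx3.init()
with serial.Serial("/dev/ttyACM0", 9600, timeout=1) as arduino:
    while True:
        word = arduino.readline().decode(errors="ignore").strip()
        if word:
            print(word)          # text output
            engine.say(word)     # speech output
            engine.runAndWait()
```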


2021 ◽  
Author(s):  
Mahyudin Ritonga ◽  
Rasha M.Abd El-Aziz ◽  
Varsha Dr. ◽  
Maulik Bader Alazzam ◽  
Fawaz Alassery ◽  
...  

Abstract Exceptional research activity has been devoted to recognizing Arabic Sign Language gestures and hand signs using deep learning models. Sign languages are the gestures used by hearing-impaired people for communication; these gestures are difficult for hearing people to understand. Because Arabic Sign Language (ArSL) varies from one territory to another and between countries, ArSL recognition has become an arduous research problem. ArSL recognition has been studied and implemented using many traditional and intelligent approaches, but few attempts have been made to enhance the process with deep learning networks. The proposed system encapsulates a Convolutional Neural Network (CNN) based machine learning technique that uses wearable sensors for ArSL recognition. The model covers the local Arabic gestures used by the hearing-impaired people of the local Arabic community, with reasonable and moderate accuracy. First, a deep convolutional network is built for feature extraction from the data collected by the wearable sensors, which are used to accurately recognize the 30 hand-sign letters of the Arabic sign language. DG5-V hand gloves embedded with wearable sensors were used to capture the hand movements in the dataset, and the CNN approach is used for classification. Hand gestures of the Arabic sign language are the input of the proposed system, and vocalized speech is the output. The system achieved a recognition rate of 90% and was found highly efficient for translating hand gestures of the Arabic Sign Language into speech and writing.
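The abstract does not give the CNN's input format, so the following is a minimal sketch of a 1D convolutional classifier over glove-sensor sequences for the 30 hand-sign letters; the window length and channel count are assumptions, not the DG5-V glove's actual output format.

```python
# A minimal sketch of a CNN over glove-sensor time windows for the
# 30 ArSL letters. TIMESTEPS and CHANNELS are hypothetical.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_LETTERS = 30   # 30 hand-sign letters, as in the abstract
TIMESTEPS = 100    # hypothetical samples per gesture window
CHANNELS = 5       # hypothetical: one channel per flex sensor

model = models.Sequential([
    layers.Input(shape=(TIMESTEPS, CHANNELS)),
    layers.Conv1D(32, 5, activation="relu"),
    layers.MaxPooling1D(),
    layers.Conv1D(64, 5, activation="relu"),
    layers.GlobalAveragePooling1D(),
    layers.Dense(NUM_LETTERS, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```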


Author(s):  
Santosh Kumar J, Vamsi, Vinod, Madhusudhan and Tejas

A hand gesture is a non-verbal means of communication involving the motion of fingers to convey information. Hand gestures are used in sign language as a way of communication for deaf and mute people, and are also used to control devices. The purpose of gesture recognition in devices has always been to bridge the gap between the physical world and the digital world, letting humans interact with the digital world through gestures interpreted by algorithms. Gestures can be tracked using gyroscopes, accelerometers and other sensors. In this project we aim to provide a cost-effective electronic method for hand gesture recognition; the system makes use of flex sensors and an ESP32 board. A flex sensor works on the principle of a change in its internal resistance, which is used to detect the angle made by the user's finger at any given time. The flexes made by the hand in different combinations amount to a gesture, and this gesture can be converted into signals or displayed as text on a screen. A smart glove has been designed that is equipped with custom-made flex sensors to detect the gestures and an ESP32 board to process the readings from the flex sensors. This helps machines identify human sign language and either perform a task or identify a word from the hand gestures and respond accordingly.
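A small worked example of the flex-sensor principle described above: in a typical voltage-divider hookup, the ADC reading can be converted back to the sensor's resistance and then mapped to a bend angle. All constants below are hypothetical calibration values, not figures from the project.

```python
# A minimal sketch of ADC reading -> resistance -> bend angle, assuming a
# voltage divider with the flex sensor on the high side. Constants are
# hypothetical calibration values.
VCC = 3.3          # ESP32 supply voltage (volts)
ADC_MAX = 4095     # ESP32 ADC is 12-bit
R_FIXED = 47_000   # hypothetical fixed divider resistor (ohms)
R_FLAT = 25_000    # hypothetical sensor resistance when flat (ohms)
R_BENT = 100_000   # hypothetical sensor resistance at 90 degrees (ohms)

def bend_angle(adc_reading: int) -> float:
    v_out = max(adc_reading * VCC / ADC_MAX, 1e-6)  # guard div-by-zero
    # Divider: Vout = Vcc * R_fixed / (R_fixed + R_flex), solved for R_flex
    r_flex = R_FIXED * (VCC - v_out) / v_out
    # Linear map from the calibrated resistance range to 0-90 degrees
    return 90.0 * (r_flex - R_FLAT) / (R_BENT - R_FLAT)

print(round(bend_angle(2000), 1))  # angle for a mid-range reading
```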


2021 ◽  
Vol 1 (2) ◽  
pp. 88-101
Author(s):  
I Wayan Sukadana ◽  
I Nengah Agus Mulia Adnyana ◽  
Erwani Merry Sartika

This study aims to design and build a sign language interpreter device with voice output, in the form of an ATMega328 microcontroller-based voice speaker module. The design focuses on translating 16 predetermined words of Indonesian Sign Language, specifically as used in Denpasar City, using a flex sensor and a gyro sensor based on the ATMega328 microcontroller programmed with the Arduino IDE. The device is equipped with a 4 GB SD-card memory for storing voice recordings and uses an ATMega328 microcontroller, four analog flex sensors, a gyro sensor, a buzzer, an 8-ohm speaker and a 7.4-volt Li-Po battery. The device is intended for hearing-impaired adults who can read and understand sign language. The output uses an MP3 player module included in the interpreter device. The flex sensor readings range from 998-1005 ADC (analog-to-digital converter) counts when open and from 1006-1018 ADC counts when closed. The gyro pitch (Y axis) readings range from -10º to 76º and the gyro roll (X axis) readings range from -100º to 90º.

Keywords: ATMega328 microcontroller; Buzzer; Flex Sensor; Gyro Sensor
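A sketch of the decision logic such a device could use, built on the sensor ranges reported above. The word entries are hypothetical placeholders, since the abstract does not list the 16 Indonesian Sign Language words or their sensor patterns.

```python
# A minimal sketch of flex/gyro readings -> word lookup, using the ADC
# ranges from the abstract. The word table is hypothetical.
CLOSED_MIN = 1006   # flex ADC at/above this = finger closed (open: 998-1005)

def finger_pattern(flex_adc):
    # Four flex sensors; 1 = closed, 0 = open
    return tuple(int(v >= CLOSED_MIN) for v in flex_adc)

WORDS = {  # hypothetical (pattern, pitch-above-30-degrees) -> word entries
    ((1, 1, 1, 1), True): "halo",
    ((0, 0, 0, 0), False): "terima kasih",
}

def translate(flex_adc, pitch_deg):
    key = (finger_pattern(flex_adc), pitch_deg > 30)
    return WORDS.get(key)  # the device would then play the matching MP3 clip

print(translate([1010, 1012, 1008, 1015], 45.0))  # -> "halo"
```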


