Smart Hand Glove for Hearing and Speech Impaired

Author(s):  
Basil Jose

Abstract: With the advancement of technology, we can implement a variety of ideas to serve mankind in numerous ways. Inspired by this, we have developed a smart hand glove system that can help people with hearing and speech disabilities. For those who live without sound, sign language is a powerful tool for making their voices heard. American Sign Language (ASL) is the most frequently used sign language in the world, with some variations between nations. In this project we created a wearable wireless gesture decoder module that can transform a basic set of ASL motions into alphabets and sentences. Our project uses a glove that houses a series of flex sensors on the metacarpal and interphalangeal joints of the fingers to detect finger bending through the piezoresistive effect (the change in electrical resistance when a semiconductor or metal is subjected to mechanical strain). The glove is also fitted with an accelerometer that helps detect hand movements. Simple classification algorithms from machine learning are then applied to translate the gestures into alphabets or words.
Keywords: Arduino; MPU6050; Flex sensor; Machine learning; SVM classifier
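The classification step described above can be sketched as follows. The paper uses an SVM; to keep this illustration dependency-free, a nearest-centroid rule stands in for it, and every centroid value below is invented rather than taken from the paper.

```python
# Hypothetical sketch: mapping five flex-sensor readings to ASL letters.
# A nearest-centroid rule substitutes for the paper's SVM classifier;
# all centroid values are invented for illustration (10-bit ADC scale).

GESTURE_CENTROIDS = {
    # letter: (thumb, index, middle, ring, pinky) bend values
    "A": (820, 850, 860, 840, 830),   # fist: all fingers bent
    "B": (700, 120, 110, 130, 140),   # flat hand, thumb tucked
    "L": (150, 130, 860, 850, 840),   # thumb and index extended
}

def classify(reading):
    """Return the letter whose centroid is closest (squared Euclidean)."""
    def dist(centroid):
        return sum((r - c) ** 2 for r, c in zip(reading, centroid))
    return min(GESTURE_CENTROIDS, key=lambda k: dist(GESTURE_CENTROIDS[k]))
```

In the real system an SVM trained on labelled glove recordings would replace the hand-written centroids, but the input and output shapes are the same.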

Author(s):  
Rachaell Nihalaani

Abstract: Sign language is invaluable to hearing- and speech-impaired people and is their only way of communicating among themselves. However, its reach is limited because most other people have no knowledge of sign language interpretation. Sign language is communicated via hand gestures and visual modes, and is therefore used by hearing- and speech-impaired people to intercommunicate. These languages have alphabets and grammar of their own, which cannot be understood by people unfamiliar with the specific symbols and rules. It has therefore become essential to interpret, understand, and communicate via sign language to overcome and alleviate the barriers of speech and communication. This can be tackled with the help of machine learning. Our model is a sign language interpreter that uses a dataset of images and interprets the sign language alphabets and sentences with 90.9% accuracy. For this paper, we used the ASL (American Sign Language) alphabet and the CNN algorithm. The paper ends with a summary of the model's viability and its usefulness for interpretation of sign language.
Keywords: Sign Language, Machine Learning, Interpretation model, Convolutional Neural Networks, American Sign Language
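The abstract does not specify the network architecture, so the following sketch only illustrates the core CNN building block it relies on: a single valid-mode convolution followed by a ReLU activation, written in plain NumPy.

```python
import numpy as np

# Minimal sketch of the convolution + ReLU step at the heart of a CNN
# image classifier. The paper's actual architecture is not specified;
# this shows only one feature-map computation.

def conv2d(image, kernel):
    """Valid-mode 2-D cross-correlation of a grayscale image with a kernel."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    """Rectified linear unit: zero out negative activations."""
    return np.maximum(x, 0.0)

# A vertical-edge kernel, the kind of filter early CNN layers learn.
edge_kernel = np.array([[1.0, 0.0, -1.0]] * 3)
```

A full interpreter stacks many such layers (with learned kernels), followed by pooling and a dense softmax layer over the alphabet classes.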


2018, Vol 14 (1), pp. 75-82
Author(s):  
Agung Budi Prasetijo ◽  
Muhamad Y. Dias ◽  
Dania Eridani

Deaf and hard-of-hearing people use American Sign Language (ASL) to communicate with others. Unfortunately, most people with normal hearing do not learn such a sign language and therefore cannot understand persons with this disability. However, the rapid development of science and technology can make it easier to translate the formation of the body, or parts of the body, into language. This research begins with a literature study surveying the sensors needed in a glove. It employs five flex sensors as well as an accelerometer and gyroscope to recognize ASL signs that have similar finger formations. An Arduino Mega 2560 board serves as the central controller, reading the flex sensors' output and processing the information. With a 1Sheeld module, the interpreter's output is presented on a smartphone as both text and voice. The result of this research is a flex glove system capable of translating ASL hand formations into output that can be seen and heard. Limitations were found when translating the letters N and M, where accuracy reached only 60%; the overall performance of the system in recognizing letters A to Z is 96.9%.
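The M/N confusion the abstract reports arises because both signs bend all four fingers into a fist and differ mainly in thumb placement. A hypothetical disambiguation rule might look like the sketch below; every threshold value is invented for illustration and is not taken from the paper.

```python
# Hypothetical sketch of disambiguating the fist-like ASL letters M and N,
# which the paper found hardest (60% accuracy). The idea: once the four
# fingers all read "bent", fall back on the thumb flex value, which
# differs slightly between the two signs. All thresholds are invented.

def decode_letter(flex, thumb_threshold=600):
    """flex: (thumb, index, middle, ring, pinky) 10-bit ADC readings."""
    thumb, index, middle, ring, pinky = flex
    if all(v > 700 for v in (index, middle, ring, pinky)):
        # Fist-like shapes: M and N both bend the four fingers.
        if thumb > thumb_threshold:
            return "M"   # thumb tucked under three fingers: more bend
        return "N"       # thumb tucked under two fingers: less bend
    return "?"           # defer to the full gesture lookup table
```

In the actual system the accelerometer and gyroscope readings provide further features for separating such near-identical hand formations.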


Communicating through hand gestures is one of the most common forms of non-verbal and visual communication adopted by the speech-impaired population around the world. The problem at the moment is that most people cannot comprehend hand gestures or convert them to spoken language quickly enough for the listener to understand. A large fraction of India's population is speech impaired, and communicating through sign language is not an easy task. This problem demands a better solution that can help the speech-impaired population converse without difficulty, thereby reducing their communication gap. This paper proposes an idea that will assist in removing, or at least reducing, this gap between speech-impaired and hearing people. Research in this area mostly focuses on image-processing approaches; a cheaper and more user-friendly approach is used in this paper. The idea is a glove worn by speech-impaired people that converts sign language into speech and text. Our prototype uses an Arduino Uno microcontroller interfaced with flex sensors and an accelerometer-gyroscope sensor to read hand gestures. Furthermore, we have incorporated an algorithm for better interpretation of the data, producing more accurate results. Thereafter, we use Python to interface the Arduino Uno with a microprocessor and finally convert the output into speech. The prototype has been calibrated in accordance with ASL (American Sign Language).
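The Python side of such a prototype typically receives one text line per sample over the serial port and parses it before classification. The line format below (five flex values, then accelerometer and gyroscope axes) is an assumption for illustration, not the paper's specification.

```python
# Hypothetical sketch of the host side: the Arduino Uno streams one
# comma-separated line per sample, and Python parses it into the flex
# and inertial readings before gesture classification. The field order
# and count are assumptions, not the paper's actual wire format.

def parse_sample(line):
    """Parse 'f1,f2,f3,f4,f5,ax,ay,az,gx,gy,gz' into two tuples."""
    fields = [float(x) for x in line.strip().split(",")]
    if len(fields) != 11:
        raise ValueError(f"expected 11 fields, got {len(fields)}")
    return tuple(fields[:5]), tuple(fields[5:])
```

In the real prototype a serial library's `readline()` would feed this function, and a text-to-speech engine would voice the recognized word.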


2019, Vol 8 (3), pp. 2128-2137

There are nearly 15 million people around the world who have difficulty speaking or communicating, and their only way of communicating is through sign language. Hand gestures are one of the methods used in sign language for non-verbal communication. They are most commonly used by deaf and mute people with hearing or speech problems to communicate among themselves or with hearing people. There are many recognized sign language standards, such as ASL (American Sign Language) and IPSL (Indo-Pakistan Sign Language), which define the meaning of each sign. ASL is the sign language most widely used by the deaf community. Deaf people use the standard sign language to communicate among themselves, but they cannot communicate with the rest of the world because most people are unaware of the existence and usage of sign language. This method aims to remove the communication barrier between the disabled and the rest of the world by recognizing and translating hand gestures and converting them into speech.


Author(s):  
Ertie Abana ◽  
Kym Harris Bulauitan ◽  
Ravy Kim Vicente ◽  
Michelle Rafael ◽  
Jay Boy Flores

Learning how to speak in order to communicate with others is part of growing up. Like everyone else, deaf and mute people also need to learn how to connect to the world they live in. For this purpose, an Electronic Glove, or E-Glove, was developed as a teaching aid for the hearing impaired, particularly children. E-Glove uses American Sign Language (ASL) as the basis for recognizing hand gestures. It was designed with flex sensors and an accelerometer to detect the degree of bend of the fingers as well as the movement of the hand. E-Glove transmits the data received from the sensors wirelessly to a computer, which then displays the letter or basic word that corresponds to the gesture made by the wearer. E-Glove provides simple, accurate, reliable, cheap, and speedy gesture recognition, and a user-friendly teaching aid for instructors teaching sign language to the deaf and mute community.




Author(s):  
François Grosjean

The author discovered American Sign Language (ASL) and the world of the deaf whilst in the United States. He helped set up a research program in the psycholinguistics of ASL and describes a few studies he did. He also edited, with Harlan Lane, a special issue of Langages on sign language, for French colleagues. The author then worked on the bilingualism and biculturalism of the deaf, and authored a text on the right of the deaf child to become bilingual. It has been translated into 30 different languages and is known the world over.


Author(s):  
Franc Solina ◽  
Slavko Krapez ◽  
Ales Jaklic ◽  
Vito Komac

Deaf people, as a marginal community, may have severe problems in communicating with hearing people. Usually, they have many problems even with tasks that are simple for hearing people, such as understanding written language. However, deaf people are very skilled in using a sign language, which is their native language. A sign language is a set of signs or hand gestures. A gesture in a sign language equals a word in a written language, and a sentence in a written language equals a sequence of gestures in a sign language. In the distant past deaf people were discriminated against and believed to be incapable of learning and thinking independently. Only after the year 1500 were the first attempts made to educate deaf children. An important breakthrough was the realization that hearing is not a prerequisite for understanding ideas. One of the most important early educators of the deaf, and the first promoter of sign language, was Charles Michel De L'Epée (1712-1789) in France, who founded the first public school for deaf people. His teachings about sign language quickly spread all over the world. Like spoken languages, different sign languages and dialects evolved around the world. According to the National Association of the Deaf, American Sign Language (ASL) is the third most frequently used language in the United States, after English and Spanish. ASL has more than 4,400 distinct signs. The Slovenian sign language (SSL), which is used in Slovenia and also serves as the case study sign language in this chapter, contains approximately 4,000 different gestures for common words. Signs require one or both hands for signing. Facial expressions that accompany signing are also important, since they can modify the basic meaning of a hand gesture. To communicate proper nouns and obscure words, sign languages employ finger spelling. Since the majority of signing uses full words, signed conversation can proceed at the same pace as spoken conversation.


2019, Vol 10 (3), pp. 60-73
Author(s):  
Ravinder Ahuja ◽  
Daksh Jain ◽  
Deepanshu Sachdeva ◽  
Archit Garg ◽  
Chirag Rajput

Communicating with each other through hand gestures is simply called the language of signs. It is an accepted language for communication among deaf and mute people in society. The deaf and mute community faces many obstacles in day-to-day communication with their acquaintances. The most recent study by the World Health Organization reports that a very large segment of the world's population (around 360 million people) has hearing loss, i.e. 5.3% of the earth's total population. This creates a need for an automated system that converts hand gestures into meaningful words and sentences. A Convolutional Neural Network (CNN) is used on 24 hand signals of American Sign Language in order to enhance the ease of communication. OpenCV was used for further processing steps such as image preprocessing. The results demonstrated that the CNN has an accuracy of 99.7% on the dataset found on kaggle.com.
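The preprocessing stage mentioned above can be sketched without OpenCV, in plain NumPy: grayscale conversion, nearest-neighbour downsizing, and scaling to [0, 1]. The 28x28 target size matches the common Kaggle sign-language MNIST layout, which is an assumption here, not a detail the abstract states.

```python
import numpy as np

# Sketch of CNN input preprocessing, redone in NumPy so it is
# self-contained (the paper used OpenCV). Steps: naive grayscale,
# nearest-neighbour resize to 28x28, then scale pixel values to [0, 1].
# The 28x28 target is an assumption based on common ASL datasets.

def preprocess(rgb, size=28):
    gray = rgb.mean(axis=2)                 # naive grayscale: channel mean
    h, w = gray.shape
    rows = np.arange(size) * h // size      # nearest-neighbour row indices
    cols = np.arange(size) * w // size      # nearest-neighbour col indices
    small = gray[rows][:, cols]             # resize by index selection
    return small / 255.0                    # normalise to [0, 1]
```

The normalised array is then fed to the CNN; a production pipeline would use a proper interpolating resize, but the data flow is the same.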


Author(s):  
David Quinto-Pozos ◽  
Robert Adam

Language contact of various kinds is the norm in Deaf communities throughout the world, and this allows for exploration of the role of different kinds of modality (spoken, signed, written, or a combination of these) and the channel of communication in language contact. Drawing its evidence largely from instances of American Sign Language (ASL), this chapter addresses and illustrates several of these themes: sign-speech contact, sign-writing contact, and sign-sign contact. It examines instances of borrowing and bilingualism between some of these modalities, and compares them to contact between hearing users of spoken languages, specifically American English.

