Electronic Glove: A Teaching Aid for the Hearing Impaired

Author(s):  
Ertie Abana ◽  
Kym Harris Bulauitan ◽  
Ravy Kim Vicente ◽  
Michelle Rafael ◽  
Jay Boy Flores

Learning how to speak in order to communicate with others is part of growing up. Like hearing people, deaf and mute people also need to learn how to connect with the world they live in. For this purpose, an Electronic Glove, or E-Glove, was developed as a teaching aid for the hearing impaired, particularly children. E-Glove uses American Sign Language (ASL) as the basis for recognizing hand gestures. It was designed with flex sensors and an accelerometer to detect the degree of bend of the fingers as well as the movement of the hand. E-Glove transmits the sensor data wirelessly to a computer, which then displays the letter or basic word that corresponds to the gesture made by the wearer. E-Glove provides a simple, accurate, reliable, cheap, and speedy gesture-recognition teaching aid that is user-friendly for instructors teaching sign language to the deaf and mute community.
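A minimal sketch of the kind of gesture lookup such a glove might perform once normalized flex-sensor readings reach the computer is shown below; the letter templates, tolerance, and sensor values are hypothetical placeholders, not the authors' actual design.

```python
# Hypothetical sketch: map five flex-sensor bend readings to an ASL letter
# by comparing against stored templates. Values are placeholders.

FLEX_TEMPLATES = {
    # letter: expected bend per finger (0 = straight, 1 = fully bent)
    "A": (0.1, 0.9, 0.9, 0.9, 0.9),   # thumb out, fingers closed
    "B": (0.8, 0.1, 0.1, 0.1, 0.1),   # thumb tucked, fingers straight
    "L": (0.1, 0.1, 0.9, 0.9, 0.9),   # thumb and index extended
}

def classify(bends, tolerance=0.25):
    """Return the template letter closest to the reading, or None if
    every template differs too much on some finger."""
    best_letter, best_error = None, float("inf")
    for letter, template in FLEX_TEMPLATES.items():
        errors = [abs(b - t) for b, t in zip(bends, template)]
        if max(errors) <= tolerance and sum(errors) < best_error:
            best_letter, best_error = letter, sum(errors)
    return best_letter

if __name__ == "__main__":
    reading = (0.12, 0.88, 0.93, 0.85, 0.90)  # simulated normalized ADC values
    print(classify(reading))  # -> "A"
```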

Author(s):  
Basil Jose

Abstract: With the advancement of technology, we can implement a variety of ideas to serve mankind in numerous ways. Inspired by this, we have developed a smart hand-glove system to help people with hearing and speech disabilities. In the world of sound, for those without it, sign language is a powerful tool to make their voices heard. American Sign Language (ASL) is the most frequently used sign language in the world, with some differences depending on the country. In this project we created a wearable wireless gesture-decoder module that can transform a basic set of ASL gestures into alphabets and sentences. The glove houses a series of flex sensors on the metacarpal and interphalangeal joints of the fingers to detect finger bending through the piezoresistive effect (a change in electrical resistance when a semiconductor or metal is subjected to mechanical strain). The glove is also fitted with an accelerometer that helps detect hand movements. Simple classification algorithms from machine learning are then applied to translate the gestures into alphabets or words.

Keywords: Arduino; MPU6050; Flex sensor; Machine learning; SVM classifier
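Since the abstract names an SVM classifier over flex-sensor and accelerometer readings, here is a hedged scikit-learn sketch of that classification stage; the 8-dimensional feature layout and the synthetic training data are assumptions for illustration, not the authors' setup.

```python
# Sketch of the classification stage: an SVM trained on 8-dimensional
# feature vectors (five flex-sensor bends + three accelerometer axes).
# Training data is synthetic; a real glove would stream vectors over serial.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def fake_samples(center, n=50, noise=0.05):
    """Generate noisy copies of one gesture's ideal feature vector."""
    return center + rng.normal(0.0, noise, size=(n, len(center)))

gestures = {
    "A": np.array([0.1, 0.9, 0.9, 0.9, 0.9, 0.0, 0.0, 1.0]),
    "B": np.array([0.8, 0.1, 0.1, 0.1, 0.1, 0.0, 0.0, 1.0]),
}
X = np.vstack([fake_samples(v) for v in gestures.values()])
y = np.repeat(list(gestures.keys()), 50)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = SVC(kernel="rbf", C=10.0).fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```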


Author(s):  
François Grosjean

The author discovered American Sign Language (ASL) and the world of the deaf whilst in the United States. He helped set up a research program in the psycholinguistics of ASL and describes a few studies he did. He also edited, with Harlan Lane, a special issue of Langages on sign language, for French colleagues. The author then worked on the bilingualism and biculturalism of the deaf, and authored a text on the right of the deaf child to become bilingual. It has been translated into 30 different languages and is known the world over.


Author(s):  
Franc Solina ◽  
Slavko Krapez ◽  
Ales Jaklic ◽  
Vito Komac

Deaf people, as a marginal community, may have severe problems in communicating with hearing people. Usually, they have problems even with tasks that are simple for hearing people, such as understanding the written language. However, deaf people are very skilled in using a sign language, which is their native language. A sign language is a set of signs or hand gestures. A gesture in a sign language equals a word in a written language; similarly, a sequence of gestures in a sign language equals a sentence in a written language. In the distant past deaf people were discriminated against and believed to be incapable of learning and thinking independently. Only after the year 1500 were the first attempts made to educate deaf children. An important breakthrough was the realization that hearing is not a prerequisite for understanding ideas. One of the most important early educators of the deaf and the first promoter of sign language was Charles Michel De L'Epée (1712-1789) in France, who founded the first public school for deaf people. His teachings about sign language quickly spread all over the world. Like spoken languages, different sign languages and dialects evolved around the world. According to the National Association of the Deaf, American Sign Language (ASL) is the third most frequently used language in the United States, after English and Spanish. ASL has more than 4,400 distinct signs. The Slovenian sign language (SSL), which is used in Slovenia and also serves as the case-study sign language in this chapter, contains approximately 4,000 different gestures for common words. Signs require one or both hands for signing. Facial expressions that accompany signing are also important, since they can modify the basic meaning of a hand gesture. To communicate proper nouns and obscure words, sign languages employ finger spelling. Since the majority of signing is with full words, signed conversation can proceed at the same pace as spoken conversation.


2019 ◽  
Vol 10 (3) ◽  
pp. 60-73 ◽  
Author(s):  
Ravinder Ahuja ◽  
Daksh Jain ◽  
Deepanshu Sachdeva ◽  
Archit Garg ◽  
Chirag Rajput

Communicating with each other through hand gestures is simply called the language of signs. It is an accepted language for communication among deaf and mute people, who face many obstacles in day-to-day life when communicating with their acquaintances. The most recent study by the World Health Organization reports that a very large section of the world's population (around 360 million people, i.e. 5.3% of the earth's total population) has hearing loss. This creates a need for an automated system that converts hand gestures into meaningful words and sentences. A Convolutional Neural Network (CNN) is used on 24 hand signals of American Sign Language in order to enhance the ease of communication. OpenCV was used for further processing steps such as image preprocessing. The results demonstrated that the CNN achieved an accuracy of 99.7% on the dataset found on kaggle.com.
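For illustration, a small Keras CNN along the lines the abstract describes might look like the sketch below. The layer sizes and hyperparameters are assumptions, not the authors' reported network; the 28x28 grayscale input matches the Kaggle "Sign Language MNIST" dataset commonly used for the 24 static ASL letters (J and Z are excluded because they involve motion).

```python
# Illustrative 24-class CNN; architecture is an assumption, not the paper's.
import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(28, 28, 1)),          # 28x28 grayscale hand image
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dropout(0.3),
    layers.Dense(24, activation="softmax"),   # one unit per static letter
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
# model.fit(train_images, train_labels, epochs=10, validation_split=0.1)
```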


Author(s):  
David Quinto-Pozos ◽  
Robert Adam

Language contact of various kinds is the norm in Deaf communities throughout the world, and this allows for exploration of the role of the different kinds of modality (be it spoken, signed or written, or a combination of these) and the channel of communication in language contact. Drawing its evidence largely from instances of American Sign Language (ASL), this chapter addresses and illustrates several of these themes: sign-speech contact, sign-writing contact, and sign-sign contact, examining instances of borrowing and bilingualism between some of these modalities and comparing these to contact between hearing users of spoken languages, specifically, in this case, American English.


Author(s):  
Akib Khan ◽  
Rohan Bagde ◽  
Arish Sheikh ◽  
Asrar Ahmed ◽  
Rahib Khan ◽  
...  

India has the highest population of hearing-impaired people in the world, numbering 18 million. Only 0.25 per cent of them presently have access to bilingual education, in which sign language is the primary language of instruction and a local language the secondary one. Providing a platform that helps the entire hearing-impaired community, their interpreters, families, and educators understand sign language will go a long way toward facilitating easier conversations and exchange of ideas, thereby enabling a more inclusive society.


The growth of technology has influenced development in various fields and has helped people achieve their dreams over the past years. One such application is aiding people with hearing and speech impairments. The barrier between hearing individuals and individuals with hearing and speech disabilities can be reduced by using current technology to build an environment in which the two groups can communicate easily with one another. The ASL Interpreter aims to facilitate communication between hearing and speech-impaired individuals and others. This project focuses on the development of software that can convert American Sign Language to communicative English and vice versa. This is accomplished via image processing, a set of operations performed on an image to obtain an enhanced image or to extract useful information from it. Image processing in this project is done using MATLAB, software by MathWorks, programmed so that it captures a live image of the hand gesture. The captured gestures are highlighted by being distinctively colored against a black background. The contrasted hand gesture is stored in the database as a binary equivalent of the location of each pixel, and the interpreter then links the binary value to its equivalent translation stored in the database. This database is integrated into the main image-processing interface. The Image Processing Toolbox, an inbuilt toolkit provided by MATLAB, is used in the development of the software: histogram equivalents of the stored images are placed in the database, and the captured image is converted to a histogram using the 'imhist()' function and compared against them. The concluding phase of the project, translation of speech to sign language, is designed by matching the letter corresponding to the hand gesture in the database and displaying the result as images. The software uses a webcam to capture the hand gesture made by the user. This project aims to ease the process of learning sign language and to support hearing-impaired people in conversing without difficulty.
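As a rough cross-language analogue of the imhist() comparison step described above, the following Python/OpenCV sketch matches a captured gesture against stored images by comparing normalized grayscale histograms; the file names and the choice of correlation as the similarity metric are illustrative assumptions, not the project's MATLAB implementation.

```python
# Histogram-based matching sketch, analogous to comparing imhist() outputs.
import cv2

def gray_hist(path):
    """256-bin grayscale histogram, normalized so image sizes don't matter."""
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    hist = cv2.calcHist([img], [0], None, [256], [0, 256])
    return cv2.normalize(hist, hist).flatten()

# Hypothetical database of reference gesture images, one per letter.
database = {letter: gray_hist(f"db/{letter}.png") for letter in "ABC"}
query = gray_hist("capture.png")  # hypothetical webcam capture

# Higher correlation = better match.
best = max(database, key=lambda L: cv2.compareHist(database[L], query,
                                                   cv2.HISTCMP_CORREL))
print("recognized letter:", best)
```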


Author(s):  
Victoria Adewale ◽  
Adejoke Olamiti

Introduction: Communication with hearing-impaired (deaf/mute) people is a great challenge in our society today; this can be attributed to the fact that their means of communication (sign language or hand gestures at a local level) requires an interpreter at every instance. Conversion of sign images to text as well as speech can be of great benefit to hearing and hearing-impaired (deaf/mute) people alike in their everyday interactions. To achieve this, this research aimed at converting sign language (ASL, American Sign Language) images to text as well as speech. Methodology: The techniques of image segmentation and feature detection played a crucial role in implementing this system. We formulate the interaction between image segmentation and object recognition in the framework of the FAST and SURF algorithms. The system goes through various phases such as data capture using a KINECT sensor, image segmentation, feature detection and extraction from the ROI, supervised and unsupervised classification of images with the K-Nearest Neighbour (KNN) algorithm, and text-to-speech (TTS) conversion. The combination of FAST and SURF with KNN (k = 10) also showed that unsupervised learning classification could determine the best-matched feature from the existing database; in turn, the best match was converted to text as well as speech. Result: The introduced system achieved 78% accuracy of unsupervised feature learning. Conclusion: The success of this work can be attributed to the effective classification that improved the unsupervised feature learning of different images. The pre-determination of the ROI of each image using SURF and FAST demonstrated the ability of the proposed algorithm to limit image modelling to the relevant region within the image.
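A hedged sketch of the feature-detection and k-nearest-neighbour matching step is given below. SURF is patent-encumbered and absent from stock OpenCV builds, so ORB descriptors stand in for SURF on top of FAST keypoints here; the image paths and the ratio-test threshold are illustrative assumptions.

```python
# FAST keypoints + KNN descriptor matching (k = 10, as in the paper).
# ORB descriptors substitute for the non-free SURF descriptor.
import cv2

fast = cv2.FastFeatureDetector_create()
orb = cv2.ORB_create()

def features(path):
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    keypoints = fast.detect(img, None)              # FAST finds corners
    keypoints, desc = orb.compute(img, keypoints)   # ORB describes them
    return desc

db_desc = features("db/sign_A.png")   # hypothetical database entry
query_desc = features("capture.png")  # hypothetical KINECT capture

matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
matches = matcher.knnMatch(query_desc, db_desc, k=10)

# Lowe-style ratio test on the two closest neighbours of each query feature.
good = [m[0] for m in matches
        if len(m) > 1 and m[0].distance < 0.75 * m[1].distance]
print(f"{len(good)} strong matches against this database entry")
```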


Sign language is a visual language that uses body postures and facial expressions. It is generally used by hearing-impaired people as a means of communication. According to the World Health Organization (WHO), around 466 million people (5% of the world's population) live with hearing and speech impairment. Hearing people generally do not understand sign language, and hence there is a communication gap between hearing-impaired people and others. Different phonemic scripts were developed, such as the HamNoSys notation, which describes sign language using symbols. With developments in the field of artificial intelligence, we are now able to overcome many limitations of communication between people using different languages. A sign language translating system converts sign to text or speech, whereas a sign language generating system converts speech or text to sign language. Sign language generating systems were developed so that hearing people can use them to display signs to hearing-impaired people. This survey consists of a comparative study of the approaches and techniques used to generate sign language. We also discuss the general architecture and applications of sign language generating systems.
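As a toy illustration of the generating direction the survey describes, the following sketch maps English text to a sequence of sign glosses by dictionary lookup, falling back to fingerspelling for unknown words; the gloss lexicon and naming scheme are invented for illustration and stand in for the far richer pipelines real systems use.

```python
# Naive text-to-gloss lookup, the simplest core of rule-based generation.
GLOSS_LEXICON = {
    "hello": "HELLO",
    "my": "MY",
    "name": "NAME",
}

def text_to_glosses(sentence):
    """Return known glosses in order; unknown words fall back to
    letter-by-letter fingerspelling, as real systems do for proper nouns."""
    glosses = []
    for word in sentence.lower().split():
        if word in GLOSS_LEXICON:
            glosses.append(GLOSS_LEXICON[word])
        else:
            glosses.extend(f"FS-{ch.upper()}" for ch in word)
    return glosses

print(text_to_glosses("Hello my name Alice"))
# -> ['HELLO', 'MY', 'NAME', 'FS-A', 'FS-L', 'FS-I', 'FS-C', 'FS-E']
```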

