A Mobile Application That Allows People Who Do Not Know Sign Language to Teach Hearing-Impaired People by Using Speech-to-Text Procedures

Author(s):  
Emre BİÇEK ◽  
M. Nuri ALMALI

This work develops an Android app that replicates the function of a hearing aid. The majority of hearing-impaired people cannot afford hearing aids because of the high cost of the instruments, and pure-tone audiometry tests for assessing the degree of deafness are also costly. However, affordable smartphones are available to a majority of hearing-impaired people in India. We therefore propose a mobile application with the following three features: it enables the phone's earphones to function as a hearing aid for people with hearing disability; it converts speech to text so that hearing-impaired people can follow what others are saying without using sign language; and it provides a pure-tone audiometry test to assess the level of hearing loss.
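The pure-tone audiometry feature described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes tones are generated digitally at standard audiometric frequencies, with amplitude scaled exponentially in dB (every +6 dB doubles amplitude); the reference amplitude and function names are hypothetical.

```python
import numpy as np

# Standard audiometric test frequencies (Hz)
AUDIOMETRIC_FREQS = [250, 500, 1000, 2000, 4000, 8000]

def pure_tone(freq_hz, level_db, duration_s=1.0, sample_rate=44100,
              ref_amplitude=0.001):
    """Generate a sine tone whose amplitude grows exponentially with the
    requested level in dB (amplitude doubles every +6 dB)."""
    t = np.arange(int(sample_rate * duration_s)) / sample_rate
    amplitude = ref_amplitude * 10 ** (level_db / 20.0)
    return amplitude * np.sin(2 * np.pi * freq_hz * t)

def threshold_sweep(freq_hz, levels_db=range(0, 70, 10)):
    """Sweep one frequency from quiet to loud; the quietest level the
    listener acknowledges is their threshold at that frequency."""
    return [(level, pure_tone(freq_hz, level)) for level in levels_db]
```

In an actual app, each tone would be played to one ear at a time and the user's responses recorded to build an audiogram across `AUDIOMETRIC_FREQS`.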


2021 ◽  
Vol 11 (8) ◽  
pp. 3439
Author(s):  
Debashis Das Chakladar ◽  
Pradeep Kumar ◽  
Shubham Mandal ◽  
Partha Pratim Roy ◽  
Masakazu Iwamura ◽  
...  

Sign language is a visual language used for communication by hearing-impaired people by means of hand and finger movements. Indian Sign Language (ISL) is a well-developed, standard means of communication for hearing-impaired people living in India. However, people who use spoken language often face difficulty when communicating with a hearing-impaired person due to a lack of sign language knowledge. In this study, we have developed a 3D avatar-based sign language learning system that converts input speech or text into the corresponding sign movements in ISL. The system consists of three modules. First, the input speech is converted into an English sentence. Then, that English sentence is converted into the corresponding ISL sentence using Natural Language Processing (NLP) techniques. Finally, the motion of the 3D avatar is defined based on the ISL sentence. The translation module achieves a Sign Error Rate (SER) of 10.50.
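The English-to-ISL translation step can be illustrated with a toy sketch. This is not the paper's NLP module: it only encodes two well-known properties of ISL grammar, namely that ISL tends to follow subject-object-verb order and omits articles and forms of "be"; the function name and word lists are assumptions for illustration.

```python
# Function words that ISL glosses typically omit (illustrative subset).
DROP_WORDS = {"a", "an", "the", "is", "are", "am", "was", "were"}

def english_to_isl_gloss(subject, verb, obj):
    """Reorder a simple SVO clause into SOV gloss order and uppercase the
    tokens, as sign glosses are conventionally written."""
    tokens = [w for w in (subject + obj + [verb]) if w.lower() not in DROP_WORDS]
    return " ".join(t.upper() for t in tokens)

print(english_to_isl_gloss(["I"], "eat", ["an", "apple"]))  # I APPLE EAT
```

A real system would parse arbitrary sentences (e.g. with a dependency parser) rather than take pre-split clause roles, then hand the gloss sequence to the avatar-animation module.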


Author(s):  
Abdul Rahim Razalli ◽  
Nordin Mamat ◽  
Normah Razali ◽  
Mohd Hanafi Mohd Yasin ◽  
Modi Lakulu ◽  
...  

Author(s):  
Akib Khan ◽  
Rohan Bagde ◽  
Arish Sheikh ◽  
Asrar Ahmed ◽  
Rahib Khan ◽  
...  

India has the largest population of hearing-impaired people in the world, numbering 18 million. Only 0.25 per cent of this population currently has access to bilingual education, in which sign language is the primary language and a local language the secondary one. Providing a platform that helps the entire hearing-impaired community, along with their interpreters, families, and educators, to understand sign language will go a long way towards facilitating easier conversations and exchange of ideas, thereby enabling a more inclusive society.


2021 ◽  
Author(s):  
Ishika Godage ◽  
Ruvan Weerasignhe ◽  
Damitha Sandaruwan

There is no doubt that communication plays a vital role in human life. There is, however, a significant population of hearing-impaired people who use non-verbal techniques for communication, which the majority of people cannot understand. The predominant such technique is sign language, the main communication protocol among hearing-impaired people. In this research, we propose a method to bridge the communication gap between hearing-impaired people and others by translating signed gestures into text. Most existing solutions, based on technologies such as Kinect, Leap Motion, computer vision, EMG, and IMU sensors, try to recognize and translate individual signs. The few approaches to sentence-level sign language recognition suffer from being impractical or unfriendly to users owing to the devices they require. The proposed system is designed to give the user full freedom to sign an uninterrupted, complete sentence at a time. For this purpose, we employ two Myo armbands for gesture capture. Using signal processing and supervised learning on a vocabulary of 49 words and 346 sentences for training with a single signer, we achieved 75-80% word-level accuracy and 45-50% sentence-level accuracy using gestural (EMG) and spatial (IMU) features in our signer-dependent experiment.
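The signal-processing stage of such an EMG pipeline commonly extracts time-domain features from sliding windows of each channel. The sketch below shows two standard features (root-mean-square and mean absolute value); the window and step sizes are assumptions, not the values used in the study.

```python
import numpy as np

def window_features(emg, window=200, step=100):
    """Slide a window over a 1-D EMG channel and compute two common
    time-domain features per window: RMS and mean absolute value (MAV).
    Returns an array of shape (num_windows, 2)."""
    feats = []
    for start in range(0, len(emg) - window + 1, step):
        seg = emg[start:start + window]
        rms = np.sqrt(np.mean(seg ** 2))   # signal energy
        mav = np.mean(np.abs(seg))         # average rectified amplitude
        feats.append((rms, mav))
    return np.array(feats)
```

Feature vectors like these, concatenated across the Myo's eight EMG channels (plus IMU orientation features), would then be fed to a supervised classifier trained on the labeled vocabulary.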


The growth of technology has influenced development in various fields and has helped people achieve their goals over the past years. One such field is aiding people with hearing and speech impairments. The barrier between hearing people and people with hearing and speech disabilities can be reduced by using current technology to develop an environment in which the two groups easily communicate with one another. The ASL Interpreter aims to facilitate communication with hearing- and speech-impaired individuals. This project focuses on the development of software that can convert American Sign Language to communicative English and vice versa. This is accomplished via image processing: a set of operations performed on an image to obtain an improved image or to extract useful information from it. Image processing in this project is done using MATLAB, software by MathWorks, programmed to capture a live image of the hand gesture. The captured gestures are highlighted by being distinctively coloured in contrast with a black background. The contrasted hand gesture is stored in the database as a binary equivalent of the location of each pixel, and the interpreter links that binary value to its equivalent translation stored in the database, which is integrated into the main image-processing interface. The Image Processing Toolbox, an inbuilt toolkit provided by MATLAB, is used in the development of the software: histogram equivalents of the images are stored in the database, and each extracted image is converted to a histogram using the imhist() function and compared against them. The concluding phase of the project, translation of speech to sign language, is designed by matching the letter equivalent to the hand gesture in the database and displaying the result as images. The software uses a webcam to capture the hand gesture made by the user. This venture aims to ease the process of learning sign language and to support hearing-impaired people in conversing without difficulty.
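The histogram-matching idea described above (MATLAB's imhist() followed by comparison against stored histograms) can be sketched in Python with NumPy. This is an illustrative analogue, not the project's MATLAB code; the similarity measure (histogram intersection) and function names are assumptions.

```python
import numpy as np

def gray_histogram(image, bins=256):
    """Analogue of MATLAB's imhist(): count pixels per intensity level."""
    hist, _ = np.histogram(image, bins=bins, range=(0, 256))
    return hist

def histogram_intersection(h1, h2):
    """Similarity in [0, 1]: 1 means identical intensity distributions."""
    return np.minimum(h1, h2).sum() / max(h1.sum(), 1)

def best_match(query_image, database):
    """Return the key of the stored gesture histogram that best matches
    the query image's histogram."""
    qh = gray_histogram(query_image)
    return max(database, key=lambda k: histogram_intersection(qh, database[k]))
```

In the project's setting, `database` would map each letter or gesture label to the histogram of its reference image, and `best_match` would yield the interpreter's translation for a captured webcam frame.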


Sign language is a visual language that uses body postures and facial expressions. It is generally used by hearing-impaired people as a means of communication. According to the World Health Organization (WHO), around 466 million people (5% of the world's population) have hearing and speech impairments. Hearing people generally do not understand sign language, and hence there is a communication gap between hearing-impaired and other people. Phonemic scripts such as the HamNoSys notation were developed to describe sign language using symbols. With developments in the field of artificial intelligence, we are now able to overcome the limitations of communication between people using different languages. A sign language translating system converts sign to text or speech, whereas a sign language generating system converts speech or text to sign language. Sign language generating systems were developed so that hearing people can use them to display signs to hearing-impaired people. This survey presents a comparative study of the approaches and techniques used to generate sign language, and discusses the general architecture and applications of sign language generating systems.
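The general architecture such a survey describes (text in, sign representation out) can be sketched as a lookup pipeline: tokenize the input, map each word to a stored sign entry (for example a HamNoSys string or an animation clip), and fall back to fingerspelling for out-of-vocabulary words. All names and lexicon entries below are hypothetical.

```python
# Entirely illustrative lexicon: word -> stored animation clip id.
SIGN_LEXICON = {
    "hello": "clip_hello.anim",
    "thank": "clip_thank.anim",
    "you": "clip_you.anim",
}

def text_to_signs(sentence):
    """Plan a sign sequence for a sentence, fingerspelling any word
    that has no entry in the lexicon."""
    plan = []
    for word in sentence.lower().split():
        if word in SIGN_LEXICON:
            plan.append(SIGN_LEXICON[word])
        else:
            # Fingerspell letter by letter when no sign exists.
            plan.extend(f"fingerspell_{c}" for c in word)
    return plan

print(text_to_signs("Hello you"))  # ['clip_hello.anim', 'clip_you.anim']
```

Real generating systems replace the naive word-by-word lookup with grammar-aware translation into sign-language word order before rendering the planned signs with an avatar.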

