Evaluation of Chinese Sign Language Animation for Mammography Inspection of Hearing-Impaired People

Author(s):  
Ou Yang ◽  
Kazunari Morimoto ◽  
Noriaki Kuwahara
2021 ◽  
Vol 11 (8) ◽  
pp. 3439
Author(s):  
Debashis Das Chakladar ◽  
Pradeep Kumar ◽  
Shubham Mandal ◽  
Partha Pratim Roy ◽  
Masakazu Iwamura ◽  
...  

Sign language is a visual language used by hearing-impaired people for communication with the help of hand and finger movements. Indian Sign Language (ISL) is a well-developed and standard way of communication for hearing-impaired people living in India. However, people who use spoken language often have difficulty communicating with a hearing-impaired person owing to a lack of sign language knowledge. In this study, we have developed a 3D avatar-based sign language learning system that converts input speech/text into the corresponding sign movements for ISL. The system consists of three modules. First, the input speech is converted into an English sentence. Then, that English sentence is converted into the corresponding ISL sentence using Natural Language Processing (NLP) techniques. Finally, the motion of the 3D avatar is defined based on the ISL sentence. The translation module achieves a Sign Error Rate (SER) of 10.50.
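The three-module pipeline described above can be sketched as follows. This is a minimal, hypothetical illustration: the stop-word list, the gloss reordering, and the motion-database lookup are illustrative assumptions, not the paper's actual NLP module or avatar engine.

```python
# Hypothetical sketch of a speech/text -> ISL gloss -> avatar motion pipeline.
# The simplification rules below are assumptions for illustration only.

STOP_WORDS = {"is", "are", "the", "a", "an", "to"}

def english_to_isl_gloss(sentence: str) -> list[str]:
    """Convert an English sentence into a simplified ISL gloss sequence
    by dropping function words and upper-casing content words."""
    words = [w.lower().strip(".?!") for w in sentence.split()]
    return [w.upper() for w in words if w not in STOP_WORDS]

def gloss_to_motions(gloss: list[str], motion_db: dict) -> list[str]:
    """Map each gloss token to a stored avatar motion clip; fall back to
    fingerspelling when no clip exists for the token."""
    return [motion_db.get(g, "FINGERSPELL:" + g) for g in gloss]

motion_db = {"YOU": "clip_you", "SCHOOL": "clip_school", "GO": "clip_go"}
gloss = english_to_isl_gloss("You are going to the school")
print(gloss)                              # ['YOU', 'GOING', 'SCHOOL']
print(gloss_to_motions(gloss, motion_db))
```

A real system would replace the stop-word heuristic with proper ISL grammar rules (e.g. subject-object-verb reordering) and drive skeletal animation rather than named clips.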


Author(s):  
Akib Khan ◽  
Rohan Bagde ◽  
Arish Sheikh ◽  
Asrar Ahmed ◽  
Rahib Khan ◽  
...  

India has the world's largest population of hearing-impaired people, numbering 18 million. Only 0.25 per cent of them presently have access to bilingual education, in which sign language is the primary language and a local language the secondary one. Providing a platform that helps the entire hearing-impaired community, along with their interpreters, families, and educators, to understand sign language will go a long way toward facilitating easier conversation and the exchange of ideas, thereby enabling a more inclusive society.


2021 ◽  
Author(s):  
Ishika Godage ◽  
Ruvan Weerasinghe ◽  
Damitha Sandaruwan

There is no doubt that communication plays a vital role in human life. There is, however, a significant population of hearing-impaired people who use non-verbal techniques for communication, which the majority of people cannot understand. The predominant of these techniques is sign language, the main communication protocol among hearing-impaired people. In this research, we propose a method that translates signed gestures into text, bridging the communication gap between hearing-impaired people and others. Most existing solutions, based on technologies such as Kinect, Leap Motion, computer vision, EMG, and IMU, try to recognize and translate individual signs. The few approaches to sentence-level sign language recognition are neither user-friendly nor practical owing to the devices they use. The proposed system is designed to give the user full freedom to sign an uninterrupted full sentence at a time. For this purpose, we employ two Myo armbands for gesture capture. Using signal processing and supervised learning on a vocabulary of 49 words and 346 sentences, trained with a single signer, we achieved 75-80% word-level accuracy and 45-50% sentence-level accuracy using gestural (EMG) and spatial (IMU) features in our signer-dependent experiment.
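A typical first step for the gestural (EMG) and spatial (IMU) features mentioned above is windowed feature extraction. The sketch below is an illustrative assumption, not the authors' design: the window length, channel counts, and the mean-absolute-value/variance features are common defaults in EMG work, chosen here only to make the idea concrete.

```python
# Illustrative windowed feature extraction for EMG + IMU streams.
# Window size, channel counts and feature choices are assumptions.
import numpy as np

def window_features(signal: np.ndarray, win: int) -> np.ndarray:
    """Split a (samples, channels) signal into fixed-length windows and
    compute mean-absolute-value and variance per channel per window."""
    n = signal.shape[0] // win
    wins = signal[: n * win].reshape(n, win, -1)
    mav = np.abs(wins).mean(axis=1)
    var = wins.var(axis=1)
    return np.hstack([mav, var])          # (n_windows, 2 * channels)

rng = np.random.default_rng(0)
emg = rng.normal(size=(400, 8))           # 8 EMG channels (two Myo armbands)
imu = rng.normal(size=(400, 10))          # accelerometer + gyroscope channels
features = np.hstack([window_features(emg, 50), window_features(imu, 50)])
print(features.shape)                     # (8, 36)
```

These per-window feature vectors would then feed a supervised classifier trained on the labelled word and sentence vocabulary.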


The growth of technology has influenced development in various fields and has helped people achieve their goals over the past years. One such field is aiding people with hearing and speech impairments. The barrier between these individuals and others can be reduced by using current technology to build an environment in which the two groups communicate easily with one another. The ASL Interpreter aims to facilitate such communication. This project focuses on developing software that converts American Sign Language to communicative English and vice versa. This is accomplished via image processing: a set of operations performed on a picture to obtain an improved image or to extract useful information from it. Image processing in this project is done in MATLAB (MathWorks), which is programmed to capture a live image of the hand gesture. The captured gesture is highlighted by being distinctly colored in contrast with a black background. The contrasted hand gesture is stored in the database as a binary equivalent of each pixel's location, and the interpreter links that binary value to its corresponding translation in the database, which is integrated into the main image-processing interface. The Image Processing Toolbox, an inbuilt toolkit provided by MATLAB, is used in developing the software: histogram equivalents of the images are stored in the database, and each extracted image is converted to a histogram using the imhist() function and compared against them. The concluding phase of the project, translation of speech to sign language, is designed by matching the letter corresponding to the hand gesture in the database and displaying the result as images. The software uses a webcam to capture the hand gesture made by the user. This venture aims to ease the process of learning gesture-based communication and to support hearing-impaired people in conversing without difficulty.
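The histogram-matching step described above (MATLAB's imhist() comparison) can be sketched in Python as follows. This is a minimal analogue under stated assumptions: the bin count, the L1 distance metric, and the toy two-letter database are illustrative choices, not the project's actual implementation.

```python
# Minimal Python analogue of the imhist()-based gesture matching described
# above: histogram each grayscale image and match against stored histograms.
import numpy as np

def gray_histogram(img: np.ndarray, bins: int = 256) -> np.ndarray:
    """Normalised intensity histogram of an 8-bit grayscale image."""
    hist, _ = np.histogram(img, bins=bins, range=(0, 256))
    return hist / hist.sum()

def match_gesture(img: np.ndarray, database: dict) -> str:
    """Return the database letter whose stored histogram is closest
    to the query image's histogram (L1 distance)."""
    h = gray_histogram(img)
    return min(database, key=lambda k: np.abs(database[k] - h).sum())

# Toy database: a dark reference image for 'A', a bright one for 'B'.
rng = np.random.default_rng(0)
dark = rng.integers(0, 80, size=(64, 64)).astype(np.uint8)
bright = rng.integers(170, 250, size=(64, 64)).astype(np.uint8)
db = {"A": gray_histogram(dark), "B": gray_histogram(bright)}

query = rng.integers(0, 80, size=(64, 64)).astype(np.uint8)
print(match_gesture(query, db))  # A
```

In practice, segmentation against the black background would precede histogramming, and a more discriminative descriptor than a raw intensity histogram would likely be needed to separate visually similar letters.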


Sign language is a visual language that uses body postures and facial expressions. It is generally used by hearing-impaired people as a means of communication. According to the World Health Organization (WHO), around 466 million people (5% of the world's population) have hearing and speech impairments. Hearing people generally do not understand sign language, and hence there is a communication gap between hearing-impaired people and others. Different phonemic scripts, such as the HamNoSys notation, were developed to describe sign language using symbols. With developments in the field of artificial intelligence, we are now able to overcome the limitations of communication between people using different languages. A sign language translating system converts sign to text or speech, whereas a sign language generating system converts speech or text to sign language. Sign language generating systems were developed so that hearing people can use them to display signs to hearing-impaired people. This survey presents a comparative study of approaches and techniques used to generate sign language. We also discuss the general architecture and applications of sign language generating systems.


Sign language is the only method of communication for hearing- and speech-impaired people around the world. Most speech- and hearing-impaired people understand only a single sign language. Thus, there is an increasing demand for sign language interpreters. For hearing people, learning sign language is difficult, and for a speech- and hearing-impaired person, learning a spoken language is impossible. Much research is being done in the domain of automatic sign language recognition. Different methods, such as computer vision, data gloves, and depth sensors, can be used to train a computer to interpret sign language. Interpretation is performed from sign to text, text to sign, speech to sign, and sign to speech. Different countries use different sign languages, so signers of different sign languages are unable to communicate with each other. Analyzing the characteristic features of gestures provides insights into a sign language, and common features across sign language gestures will help in designing a sign language recognition system. Such a system will help reduce the communication gap between sign language users and spoken language users.


Sensors ◽  
2021 ◽  
Vol 21 (17) ◽  
pp. 5843
Author(s):  
Ilias Papastratis ◽  
Christos Chatzikonstantinou ◽  
Dimitrios Konstantinidis ◽  
Kosmas Dimitropoulos ◽  
Petros Daras

AI technologies can play an important role in breaking down the communication barriers between deaf or hearing-impaired people and other communities, contributing significantly to their social inclusion. Recent advances in both sensing technologies and AI algorithms have paved the way for the development of various applications aimed at fulfilling the needs of deaf and hearing-impaired communities. To this end, this survey aims to provide a comprehensive review of state-of-the-art methods in sign language capturing, recognition, translation, and representation, pinpointing their advantages and limitations. In addition, the survey presents a number of applications and discusses the main challenges in the field of sign language technologies. Future research directions are also proposed to assist prospective researchers in further advancing the field.


Sensors ◽  
2020 ◽  
Vol 20 (21) ◽  
pp. 6256
Author(s):  
Boon Giin Lee ◽  
Teak-Wei Chong ◽  
Wan-Young Chung

Sign language was designed to allow hearing-impaired people to interact with others. Nonetheless, knowledge of sign language is uncommon in society, which leads to a communication barrier with the hearing-impaired community. Many studies of sign language recognition utilizing computer vision (CV) have been conducted worldwide to reduce such barriers. However, this approach is restricted by the visual angle and is highly affected by environmental factors. In addition, CV usually involves machine learning, which requires the collaboration of a team of experts and high-cost hardware; this increases the application cost in real-world situations. Thus, this study aims to design and implement a smart wearable American Sign Language (ASL) interpretation system using deep learning, which applies sensor fusion to "fuse" six inertial measurement units (IMUs). The IMUs are attached to all fingertips and the back of the hand to recognize sign language gestures; thus, the proposed method is not restricted by the field of view. The study reveals that this model achieves an average recognition rate of 99.81% for dynamic ASL gestures. Moreover, the proposed ASL recognition system can be further integrated with ICT and IoT technology to provide a feasible solution for assisting hearing-impaired people in communicating with others and improving their quality of life.
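The sensor-fusion idea above, combining six IMU streams into one input for a gesture classifier, can be sketched as below. The channel counts and window length are assumptions for illustration; the paper's actual deep-learning architecture is not reproduced here.

```python
# Rough sketch of fusing six fingertip/back-of-hand IMU windows into one
# feature vector for a gesture classifier. Shapes are assumptions.
import numpy as np

NUM_IMUS = 6          # five fingertips + the back of the hand
CHANNELS = 6          # 3-axis accelerometer + 3-axis gyroscope per IMU
WINDOW = 100          # samples per gesture window

def fuse_window(imu_windows: list) -> np.ndarray:
    """Concatenate synchronised per-IMU windows of shape (WINDOW, CHANNELS)
    into a single flat vector suitable as classifier input."""
    assert len(imu_windows) == NUM_IMUS
    return np.concatenate([w.ravel() for w in imu_windows])

rng = np.random.default_rng(1)
windows = [rng.normal(size=(WINDOW, CHANNELS)) for _ in range(NUM_IMUS)]
fused = fuse_window(windows)
print(fused.shape)    # (3600,)
```

In the deep-learning setting described in the abstract, such fused windows (or the stacked 2D window itself) would be fed to a recurrent or convolutional network trained on labelled dynamic gestures.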

