Machine Learning Approach for Gesture Based Arabic Sign Language Recognition for Impaired People

Author(s):  
Mahyudin Ritonga ◽  
Rasha M.Abd El-Aziz ◽  
Varsha Dr. ◽  
Maulik Bader Alazzam ◽  
Fawaz Alassery ◽  
...  

Abstract. Arabic Sign Language has attracted exceptional research activity aimed at recognizing gestures and hand signs with deep learning models. Sign languages are the gesture systems that hearing-impaired people use for communication, and these gestures are difficult for hearing people to understand. Because Arabic Sign Language (ArSL) varies from one territory to another and between countries, ArSL recognition has become an arduous research problem. ArSL recognition has been studied and implemented with many traditional and intelligent approaches, but few attempts have been made to enhance the process with deep learning networks. The system proposed here encapsulates a Convolutional Neural Network (CNN) based machine learning technique that uses wearable sensors to recognize ArSL. The model accommodates all local Arabic gestures used by hearing-impaired people of the local Arabic community and achieves reasonable, moderate accuracy. First, a deep convolutional network is built to extract features from the data collected by the wearable sensors, which are used to accurately recognize the 30 hand-sign letters of the Arabic sign language. DG5-V hand gloves embedded with wearable sensors capture the hand movements in the dataset, and the CNN performs the classification. The hand gestures of Arabic Sign Language are the input of the proposed system, and vocalized speech is its output. The system achieved a recognition rate of 90% and proved highly efficient at translating ArSL hand gestures into speech and writing.
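As a minimal sketch of the kind of classifier this abstract describes, the following 1D CNN classifies windows of glove-sensor time series into the 30 ArSL letter classes. The channel count, window length, and layer sizes are illustrative assumptions; the paper does not specify the exact architecture.

```python
# Minimal sketch of a 1D-CNN classifier for glove-sensor sequences.
# Channel count (14) and window length (100) are illustrative assumptions;
# the abstract does not state the exact tensor shapes.
import torch
import torch.nn as nn

class GloveCNN(nn.Module):
    def __init__(self, n_channels=14, n_classes=30):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),   # collapse the time axis
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x):              # x: (batch, channels, time)
        return self.classifier(self.features(x).squeeze(-1))

model = GloveCNN()
logits = model(torch.randn(8, 14, 100))   # 8 windows of sensor data
print(logits.shape)                        # torch.Size([8, 30])
```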

Sensors ◽  
2020 ◽  
Vol 20 (21) ◽  
pp. 6256
Author(s):  
Boon Giin Lee ◽  
Teak-Wei Chong ◽  
Wan-Young Chung

Sign language was designed to allow hearing-impaired people to interact with others. Nonetheless, knowledge of sign language is uncommon in society, which leads to a communication barrier with the hearing-impaired community. Many studies of sign language recognition utilizing computer vision (CV) have been conducted worldwide to reduce such barriers. However, this approach is restricted by the visual angle and highly affected by environmental factors. In addition, CV usually involves machine learning, which requires the collaboration of a team of experts and high-cost hardware, increasing the application cost in real-world situations. Thus, this study aims to design and implement a smart wearable American Sign Language (ASL) interpretation system using deep learning, which applies sensor fusion that “fuses” six inertial measurement units (IMUs). The IMUs are attached to all fingertips and the back of the hand to recognize sign language gestures; thus, the proposed method is not restricted by the field of view. The study reveals that this model achieves an average recognition rate of 99.81% for dynamic ASL gestures. Moreover, the proposed ASL recognition system can be further integrated with ICT and IoT technology to provide a feasible solution to assist hearing-impaired people in communicating with others and improve their quality of life.
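As a rough illustration of the fusion idea, the sketch below concatenates the six IMU streams along the feature axis and classifies the resulting sequence with a recurrent network. The axis counts, gesture count, and the choice of an LSTM are assumptions; the abstract does not disclose the actual deep learning architecture.

```python
# Sketch of early sensor fusion for six IMUs: streams are concatenated
# along the feature axis and fed to a recurrent classifier. The channel
# count (6 IMUs x 6 axes) and gesture count are assumed for illustration.
import torch
import torch.nn as nn

class FusionLSTM(nn.Module):
    def __init__(self, n_imus=6, axes_per_imu=6, n_gestures=27):
        super().__init__()
        self.lstm = nn.LSTM(n_imus * axes_per_imu, 128, batch_first=True)
        self.head = nn.Linear(128, n_gestures)

    def forward(self, x):            # x: (batch, time, 36) fused IMU frames
        _, (h, _) = self.lstm(x)     # final hidden state summarizes the gesture
        return self.head(h[-1])

model = FusionLSTM()
print(model(torch.randn(4, 120, 36)).shape)   # torch.Size([4, 27])
```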


2020 ◽  
Vol 2020 ◽  
pp. 1-9 ◽  
Author(s):  
M. M. Kamruzzaman

Sign language encompasses the movement of the arms and hands as a means of communication for people with hearing disabilities. An automated sign recognition system requires two main courses of action: the detection of particular features and the categorization of particular input data. In the past, many approaches for classifying and detecting sign languages have been put forward to improve system performance. However, recent progress in computer vision has geared us towards further exploration of hand sign/gesture recognition with the aid of deep neural networks. Arabic sign language has witnessed unprecedented research activity to recognize hand signs and gestures using deep learning models. This paper proposes a vision-based system that applies a CNN to recognize Arabic hand sign-based letters and translate them into Arabic speech. The proposed system automatically detects hand-sign letters and speaks the result in Arabic through a deep learning model. The system recognizes the Arabic hand sign-based letters with 90% accuracy, which makes it a highly dependable system. The accuracy can be further improved by using more advanced hand-gesture recognition devices such as the Leap Motion or Xbox Kinect. After the Arabic hand sign-based letters are recognized, the outcome is fed to a text-to-speech engine, which produces audio in the Arabic language as output.
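The recognition-to-speech pipeline can be pictured roughly as below: a trained CNN predicts a letter index, which is mapped to an Arabic character and vocalized. The `model` argument, the letter table, and the use of gTTS are hypothetical stand-ins for whichever trained classifier and text-to-speech engine the authors used.

```python
# Sketch of the recognition-to-speech pipeline. `model` is assumed to be a
# trained image classifier; gTTS stands in for the actual speech engine.
import torch
from gtts import gTTS

ARABIC_LETTERS = ["ا", "ب", "ت"]  # truncated for brevity; one entry per sign class

def sign_to_speech(model, image_tensor, out_path="letter.mp3"):
    model.eval()
    with torch.no_grad():
        idx = model(image_tensor.unsqueeze(0)).argmax(dim=1).item()
    letter = ARABIC_LETTERS[idx]
    gTTS(text=letter, lang="ar").save(out_path)   # write spoken Arabic audio
    return letter
```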


Author(s):  
D. Ivanko ◽  
D. Ryumin ◽  
A. Karpov

Abstract. Inability to use speech interfaces greatly limits deaf and hearing-impaired people in the possibility of human-machine interaction. To solve this problem and to increase the accuracy and reliability of automatic Russian sign language recognition, it is proposed to use lip-reading in addition to hand-gesture recognition. Deaf and hearing-impaired people use sign language as their main way of communicating in everyday life. Sign language is a structured form of hand gestures and lip movements involving visual motions and signs, used as a communication system. Since sign language includes not only hand gestures but also lip movements that mimic vocalized pronunciation, it is of interest to investigate how accurately such visual speech can be recognized by a lip-reading system, especially considering that the visual speech of hearing-impaired people is often characterized by hyper-articulation, which should potentially facilitate its recognition. For this purpose, the thesaurus of Russian sign language (TheRusLan), collected at SPIIRAS in 2018–19, was used. The database consists of color optical FullHD video recordings of 13 native Russian sign language signers (11 females and 2 males) from the “Pavlovsk boarding school for the hearing impaired”. Each signer demonstrated 164 phrases 5 times. This work covers the initial stages of the research, including data collection, data labeling, region-of-interest detection, and methods for informative feature extraction. The results of this study can later be used to create assistive technologies for deaf and hearing-impaired people.
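A simple way to picture the region-of-interest stage is the sketch below: detect the face, then crop the lower third as the mouth region for lip-reading features. The Haar cascade and the lower-third heuristic are assumptions chosen for brevity; the study's actual detector may differ.

```python
# Sketch of region-of-interest extraction for lip-reading: detect the face,
# then crop its lower third as the mouth region. A Haar cascade is used here
# for brevity; it is an assumed stand-in for the study's detector.
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def mouth_roi(frame):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = faces[0]
    return frame[y + 2 * h // 3 : y + h, x : x + w]   # lower third of the face
```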


Author(s):  
Sukhendra Singh ◽  
G. N. Rathna ◽  
Vivek Singhal

Introduction: Sign language is the only way for speech-impaired people to communicate. However, sign language is not known to most hearing people, which creates a communication barrier; this is the problem faced by speech-impaired people. In this paper, we present a solution that captures hand gestures with a Kinect camera and classifies each hand gesture into its correct symbol. Method: We used a Kinect camera rather than an ordinary web camera because an ordinary camera does not capture the 3D orientation or depth of an image, whereas the Kinect can capture 3D images, making classification more accurate. Result: The Kinect camera produces different images for the hand gestures ‘2’ and ‘V’, and similarly for ‘1’ and ‘I’, whereas a normal web camera cannot distinguish between them. We used hand gestures from Indian sign language, and our dataset had 46,339 RGB images and 46,339 depth images. 80% of the total images were used for training and the remaining 20% for testing. In total, 36 hand gestures were considered: 26 for the alphabets A-Z and 10 for the digits 0-9. Conclusion: Along with a real-time implementation, we compare the performance of various machine learning models and find that a CNN on depth images gives the most accurate performance. All these results were obtained on a PYNQ Z2 board.
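The depth-image experiment can be sketched as follows: single-channel inputs, 36 gesture classes, and the 80/20 train/test split described above. The image size, layer sizes, and the small stand-in dataset are assumptions for illustration.

```python
# Sketch of the depth-image setup: single-channel inputs, 36 classes,
# 80/20 split. Image size and network depth are assumed; the stand-in
# arrays are much smaller than the real 46,339-image dataset.
import numpy as np
import torch
import torch.nn as nn
from sklearn.model_selection import train_test_split

X = np.random.rand(2000, 1, 64, 64).astype("float32")  # stand-in depth images
y = np.random.randint(0, 36, size=2000)                # 26 letters + 10 digits
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, stratify=y)

model = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(), nn.Linear(32 * 16 * 16, 36),
)
print(model(torch.from_numpy(X_tr[:4])).shape)   # torch.Size([4, 36])
```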


Author(s):  
Ala Addin I. Sidig ◽  
Hamzah Luqman ◽  
Sabri Mahmoud ◽  
Mohamed Mohandes

Sign language is the major means of communication for the deaf community. It uses body language and gestures such as hand shapes, lip patterns, and facial expressions to convey a message. Sign language is geography-specific, as it differs from one country to another. Arabic Sign Language is used in all Arab countries. The availability of a comprehensive benchmarking database for ArSL is one of the challenges of the automatic recognition of Arabic Sign Language. This article introduces the KArSL database for ArSL, consisting of 502 signs that cover 11 chapters of the ArSL dictionary. Signs in the KArSL database are performed by three professional signers, and each sign is repeated 50 times by each signer. The database is recorded using the state-of-the-art multi-modal Microsoft Kinect V2. We also propose three approaches for sign language recognition using this database: Hidden Markov Models, a deep learning image-classification model applied to an image composed of shots from the sign's video, and an attention-based deep learning captioning system. The recognition accuracies of these systems indicate their suitability for such a large number of Arabic signs. The techniques are also tested on a publicly available database. The KArSL database will be made freely available to interested researchers.
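The second approach, classifying an image composed of shots of the sign video, might look roughly like the sketch below: sample a fixed number of frames and tile them into one grid image for a 2D classifier. The frame count and grid shape are assumptions; the abstract does not give the exact composition scheme.

```python
# Sketch of the "image of shots" idea: sample frames evenly from the sign
# video and tile them into one grid image. Grid shape is assumed.
import numpy as np

def shots_to_grid(frames, rows=2, cols=4):
    """frames: list of HxWx3 arrays; returns one (rows*H)x(cols*W)x3 image."""
    idx = np.linspace(0, len(frames) - 1, rows * cols).astype(int)
    sampled = [frames[i] for i in idx]
    row_imgs = [np.concatenate(sampled[r * cols:(r + 1) * cols], axis=1)
                for r in range(rows)]
    return np.concatenate(row_imgs, axis=0)

video = [np.zeros((120, 160, 3), dtype=np.uint8) for _ in range(50)]
print(shots_to_grid(video).shape)   # (240, 640, 3)
```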


2021 ◽  
Author(s):  
Ishika Godage ◽  
Ruvan Weerasignhe ◽  
Damitha Sandaruwan

There is no doubt that communication plays a vital role in human life. There is, however, a significant population of hearing-impaired people who use non-verbal techniques for communication, which the majority of people cannot understand. The predominant of these techniques is sign language, the main communication protocol among hearing-impaired people. In this research, we propose a method to bridge the communication gap between hearing-impaired people and others by translating signed gestures into text. Most existing solutions, based on technologies such as Kinect, Leap Motion, computer vision, EMG, and IMU, try to recognize and translate the individual signs of hearing-impaired people. The few approaches to sentence-level sign language recognition suffer from being neither user-friendly nor practical owing to the devices they use. The proposed system is designed to give the user full freedom to sign an uninterrupted full sentence at a time. For this purpose, we employ two Myo armbands for gesture capturing. Using signal processing and supervised learning based on a vocabulary of 49 words and 346 sentences for training with a single signer, we achieved 75-80% word-level accuracy and 45-50% sentence-level accuracy using gestural (EMG) and spatial (IMU) features in our signer-dependent experiment.
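A plausible sketch of the gestural-feature stage is shown below: per-window EMG descriptors (root mean square and mean absolute value per channel) feed a standard supervised classifier. The window length, the feature set, and the choice of an SVM are assumptions; the abstract only states that signal processing and supervised learning were used.

```python
# Sketch of per-window EMG features (RMS and mean absolute value per
# channel) feeding a supervised classifier. Window length and the SVM
# are assumed; the Myo armband provides 8 EMG channels.
import numpy as np
from sklearn.svm import SVC

def emg_features(window):            # window: (samples, 8 Myo channels)
    rms = np.sqrt(np.mean(window ** 2, axis=0))
    mav = np.mean(np.abs(window), axis=0)
    return np.concatenate([rms, mav])   # 16 features per window

# Stand-in data: 200 windows of 50 samples from one armband.
X = np.stack([emg_features(np.random.randn(50, 8)) for _ in range(200)])
y = np.random.randint(0, 49, size=200)   # 49-word vocabulary
clf = SVC().fit(X, y)
print(clf.predict(X[:3]))
```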


Sign language is the only method of communication for hearing- and speech-impaired people around the world, and most speech- and hearing-impaired people understand a single sign language. Thus, there is an increasing demand for sign language interpreters. For hearing people, learning sign language is difficult, and for a speech- and hearing-impaired person, learning a spoken language is impossible. A lot of research is being done in the domain of automatic sign language recognition. Different methods, such as computer vision, data gloves, and depth sensors, can be used to train a computer to interpret sign language. Interpretation is done from sign to text, text to sign, speech to sign, and sign to speech. Different countries use different sign languages, so signers of different sign languages are unable to communicate with each other. Analyzing the characteristic features of gestures provides insights about a sign language, and common features across sign language gestures can help in designing a sign language recognition system. Such a system would help reduce the communication gap between sign language users and spoken language users.


Author(s):  
Wael Suliman ◽  
Mohamed Deriche ◽  
Hamzah Luqman ◽  
Mohamed Mohandes
