Arabic Sign Language Recognition and Generating Arabic Speech Using Convolutional Neural Network

2020 ◽  
Vol 2020 ◽  
pp. 1-9 ◽  
Author(s):  
M. M. Kamruzzaman

Sign language encompasses the movement of the arms and hands as a means of communication for people with hearing disabilities. An automated sign recognition system requires two main stages: the detection of particular features and the classification of the input data. Many approaches for detecting and classifying sign languages have been put forward to improve system performance, and recent progress in computer vision has encouraged further exploration of hand sign/gesture recognition with the aid of deep neural networks. Arabic Sign Language has witnessed unprecedented research activity in recognizing hand signs and gestures using deep learning models. This paper proposes a vision-based system that applies a CNN to recognize Arabic hand-sign letters and translate them into Arabic speech. The proposed system automatically detects hand-sign letters and speaks the result in Arabic using a deep learning model. The system achieves 90% accuracy in recognizing Arabic hand-sign letters, which indicates it is a highly dependable system. Accuracy could be further improved by using more advanced hand-gesture sensing devices such as the Leap Motion or Xbox Kinect. After the Arabic hand-sign letters are recognized, the outcome is fed into a text-to-speech engine, which produces Arabic audio as output.
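The recognize-then-speak pipeline the abstract describes can be sketched as below. This is a minimal illustration, not the authors' implementation: the classifier is a trivial stand-in for the trained CNN, the letter mapping is truncated, and the final stage only prepares the string a real Arabic text-to-speech engine would receive.

```python
# Hypothetical mapping from CNN output class index to an Arabic letter
# (the real system covers the full Arabic alphabet).
CLASS_TO_LETTER = {0: "ا", 1: "ب", 2: "ت"}

def classify_hand_sign(image):
    """Stand-in for the trained CNN: returns a class index.

    A trivial image statistic fakes the prediction so the pipeline is
    runnable end to end; a real system would run a forward pass here.
    """
    brightness = sum(sum(row) for row in image) / (len(image) * len(image[0]))
    return int(brightness) % len(CLASS_TO_LETTER)

def letters_to_speech_text(letters):
    """Stand-in for the TTS stage: joins recognized letters into the
    string that would be handed to an Arabic text-to-speech engine."""
    return "".join(letters)

def recognize_and_speak(images):
    """Full pipeline: images of hand signs -> letters -> speech text."""
    letters = [CLASS_TO_LETTER[classify_hand_sign(img)] for img in images]
    return letters_to_speech_text(letters)

dark = [[0, 0], [0, 0]]  # mean brightness 0 -> class 0 -> "ا"
mid = [[1, 1], [1, 1]]   # mean brightness 1 -> class 1 -> "ب"
print(recognize_and_speak([dark, mid]))  # prints "اب"
```

The two stages are deliberately decoupled: the recognition model can be retrained or swapped without touching the speech stage.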

2021 ◽  
Vol 12 (1) ◽  
Author(s):  
Feng Wen ◽  
Zixuan Zhang ◽  
Tianyiyi He ◽  
Chengkuo Lee

Abstract: Sign language recognition, especially sentence recognition, is of great significance for lowering the communication barrier between the hearing/speech impaired and non-signers. General glove solutions, which detect the motions of our dexterous hands, only recognize discrete single gestures (i.e., numbers, letters, or words) rather than sentences, far from meeting the needs of signers' daily communication. Here, we propose an artificial-intelligence-enabled sign language recognition and communication system comprising sensing gloves, a deep learning block, and a virtual reality interface. Non-segmentation and segmentation-assisted deep learning models achieve the recognition of 50 words and 20 sentences. Significantly, the segmentation approach splits entire sentence signals into word units; the deep learning model then recognizes all word elements and reconstructs them back into recognized sentences. Furthermore, new/never-seen sentences created by recombining word elements in new orders can be recognized with an average correct rate of 86.67%. Finally, the sign language recognition results are projected into virtual space and translated into text and audio, allowing remote and bidirectional communication between signers and non-signers.
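The segmentation idea above — split a sentence-level glove signal into word units at low-activity gaps, classify each unit, and reassemble the sentence — can be sketched as follows. The threshold, gap length, and the toy word classifier are illustrative assumptions, not the authors' actual parameters or model.

```python
def segment_words(signal, threshold=0.2, min_gap=3):
    """Split a 1-D activity signal into word segments separated by
    runs of at least `min_gap` samples below `threshold`."""
    segments, current, quiet = [], [], 0
    for sample in signal:
        if sample < threshold:
            quiet += 1
            if quiet >= min_gap and current:
                segments.append(current)  # gap confirmed: close the word
                current = []
        else:
            quiet = 0
            current.append(sample)
    if current:
        segments.append(current)  # trailing word with no closing gap
    return segments

def classify_word(segment):
    """Stand-in for the deep learning word classifier: labels a
    segment by its mean activity (purely illustrative)."""
    return "HIGH" if sum(segment) / len(segment) > 0.5 else "LOW"

# Toy sentence signal: two bursts of activity separated by a quiet gap.
signal = [0.9, 0.8, 0.9, 0.0, 0.0, 0.0, 0.3, 0.4, 0.3]
sentence = " ".join(classify_word(seg) for seg in segment_words(signal))
print(sentence)  # prints "HIGH LOW": two word units from one sentence
```

Recombining word-level classifications like this is what lets the system recognize never-seen sentences built from known words in new orders.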


Author(s):  
Ala Addin I. Sidig ◽  
Hamzah Luqman ◽  
Sabri Mahmoud ◽  
Mohamed Mohandes

Sign language is the major means of communication for the deaf community. It uses body language and gestures such as hand shapes, lip patterns, and facial expressions to convey a message. Sign language is geography-specific, as it differs from one country to another. Arabic Sign Language (ArSL) is used in all Arab countries. The lack of a comprehensive benchmarking database for ArSL is one of the challenges for the automatic recognition of Arabic Sign Language. This article introduces the KArSL database for ArSL, consisting of 502 signs that cover 11 chapters of the ArSL dictionary. Signs in the KArSL database are performed by three professional signers, and each sign is repeated 50 times by each signer. The database is recorded using the state-of-the-art multi-modal Microsoft Kinect V2. We also propose three approaches for sign language recognition using this database: Hidden Markov Models, a deep learning image-classification model applied to an image composed of shots of the sign video, and an attention-based deep learning captioning system. The recognition accuracies of these systems indicate their suitability for such a large number of Arabic signs. The techniques are also tested on a publicly available database. The KArSL database will be made freely available to interested researchers.
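The "image composed of shots of the sign video" approach can be sketched as: sample k evenly spaced frames from the video and tile them into one composite image that an ordinary CNN image classifier can consume. The frame count and toy frame size are illustrative assumptions, not the article's actual configuration.

```python
def sample_frame_indices(num_frames, k):
    """Pick k evenly spaced frame indices from a video of num_frames."""
    if k == 1:
        return [0]
    return [round(i * (num_frames - 1) / (k - 1)) for i in range(k)]

def compose_shots(frames, k):
    """Concatenate k sampled frames side by side into one composite
    image; each frame is a list of pixel rows of equal height."""
    indices = sample_frame_indices(len(frames), k)
    height = len(frames[0])
    composite = [[] for _ in range(height)]
    for idx in indices:
        for r in range(height):
            composite[r].extend(frames[idx][r])  # stitch rows horizontally
    return composite

# Toy video: 10 frames of 2x2 "pixels", each filled with its frame index.
video = [[[t, t], [t, t]] for t in range(10)]
composite = compose_shots(video, k=4)
print(composite[0])  # prints [0, 0, 3, 3, 6, 6, 9, 9]
```

The appeal of this representation is that the temporal structure of the sign is flattened into spatial structure, so standard 2-D image classifiers apply without any video-specific architecture.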


2021 ◽  
Author(s):  
Mahyudin Ritonga ◽  
Rasha M.Abd El-Aziz ◽  
Varsha Dr. ◽  
Maulik Bader Alazzam ◽  
Fawaz Alassery ◽  
...  

Abstract: Arabic Sign Language has seen exceptional research activity in recognizing gestures and hand signs using deep learning models. Sign languages are the gestures used by hearing-impaired people for communication; these gestures are difficult for hearing people to understand. Because Arabic Sign Language (ArSL) varies from one territory to another and between countries, its recognition has become an arduous research problem. ArSL recognition has been studied and implemented using multiple traditional and intelligent approaches, but few attempts have been made to enhance the process with deep learning networks. The proposed system encapsulates a Convolutional Neural Network (CNN) based machine learning technique that uses wearable sensors to recognize ArSL. The model covers all local Arabic gestures used by the hearing-impaired people of the local Arabic community and achieves reasonable, moderate accuracy. First, a deep convolutional network is built for feature extraction from the data collected by the wearable sensors, which are used to accurately recognize the 30 hand-sign letters of the Arabic sign language. DG5-V hand gloves embedded with wearable sensors capture the hand movements in the dataset, and the CNN performs the classification. Hand gestures of Arabic Sign Language are the input of the proposed system, and vocalized speech is the output. The system achieved a recognition rate of 90% and was found highly efficient for translating ArSL hand gestures into speech and writing.
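The sensor-to-features step implied above can be sketched as: slice each glove sensor channel into fixed windows and compute a per-window statistic, yielding the feature grid a CNN classifier would consume. The window size, channel count, and mean statistic are illustrative assumptions, not the DG5-V specifics or the authors' preprocessing.

```python
def window_means(channel, window):
    """Mean of each non-overlapping window of one sensor channel."""
    return [sum(channel[i:i + window]) / window
            for i in range(0, len(channel) - window + 1, window)]

def glove_features(channels, window=4):
    """Feature grid for the CNN: one row of window-means per channel."""
    return [window_means(ch, window) for ch in channels]

# Toy recording: two sensor channels, 8 samples each.
channels = [
    [0, 0, 0, 0, 4, 4, 4, 4],  # e.g. a finger-flex sensor firing late
    [1, 1, 1, 1, 1, 1, 1, 1],  # e.g. a steady wrist-orientation sensor
]
print(glove_features(channels))  # prints [[0.0, 4.0], [1.0, 1.0]]
```

Each gesture then becomes a fixed-size channels-by-windows grid, which is exactly the 2-D input shape a convolutional classifier expects.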

