Arabic Sign Language Recognition System for Alphabets Using Machine Learning Techniques

2021 ◽  
Vol 2021 ◽  
pp. 1-17
Author(s):  
Gamal Tharwat ◽  
Abdelmoty M. Ahmed ◽  
Belgacem Bouallegue

In recent years, the role of pattern recognition in systems based on human-computer interaction (HCI) has grown across computer vision and machine learning applications. One of the most important of these applications is recognizing the hand gestures used to communicate with deaf people, in particular the dashed letters in surahs of the Quran. In this paper, we propose an Arabic Alphabet Sign Language Recognition System (AArSLRS) using a vision-based approach. The proposed system consists of four stages: data acquisition, preprocessing, feature extraction, and classification. The system deals with three types of datasets: bare hands against a dark background, bare hands against a light background, and hands wearing dark-colored gloves. AArSLRS begins by capturing an image of an alphabet gesture, then detects the hand in the image and isolates it from the background using one of the proposed methods, after which the hand features are extracted according to the selected feature-extraction method. For the classification stage, we used supervised learning techniques to classify the 28-letter Arabic alphabet using 9240 images. We focused on the classification of the 14 alphabetic letters that represent the opening surahs of the Quran in Quranic sign language (QSL). AArSLRS achieved an accuracy of 99.5% with the K-Nearest Neighbor (KNN) classifier.
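The abstract's final stage is KNN classification over extracted hand-feature vectors. As a minimal illustration of that classification step only (the feature values, label names, and dimensionality below are hypothetical stand-ins, not the paper's dataset or feature-extraction method), a small majority-vote KNN can be sketched as:

```python
import numpy as np

def knn_predict(train_X, train_y, query, k=1):
    """Classify a query feature vector by majority vote among its
    k nearest training vectors under Euclidean distance."""
    dists = np.linalg.norm(train_X - query, axis=1)  # distance to every training sample
    nearest = np.argsort(dists)[:k]                  # indices of the k closest samples
    labels, counts = np.unique(train_y[nearest], return_counts=True)
    return labels[np.argmax(counts)]                 # most frequent label among neighbors

# Toy stand-in for hand-shape feature vectors (hypothetical values and labels)
train_X = np.array([[0.10, 0.20], [0.15, 0.22], [0.90, 0.80], [0.88, 0.85]])
train_y = np.array(["alif", "alif", "ba", "ba"])

print(knn_predict(train_X, train_y, np.array([0.12, 0.21]), k=3))  # prints "alif"
```

In practice the feature vectors would come from the paper's hand-segmentation and feature-extraction stages, and k would be tuned on a validation split.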

Author(s):  
Paul D. Rosero-Montalvo ◽  
Pamela Godoy-Trujillo ◽  
Edison Flores-Bosmediano ◽  
Jorge Carrascal-Garcia ◽  
Santiago Otero-Potosi ◽  
...  

Electronics ◽  
2021 ◽  
Vol 10 (14) ◽  
pp. 1739
Author(s):  
Hamzah Luqman ◽  
El-Sayed M. El-Alfy

Sign languages are the main visual communication medium between deaf and hard-of-hearing people and their societies. Like spoken languages, they are not universal and vary from region to region, and they are relatively under-resourced. Arabic sign language (ArSL) is one such language that has attracted increasing attention in the research community. However, most existing work on sign language recognition focuses on manual gestures, ignoring the non-manual information, such as facial expressions, needed for other language signals. One of the main obstacles to considering these modalities is the lack of suitable datasets. In this paper, we propose a new multi-modality ArSL dataset that integrates various types of modalities. It consists of 6748 video samples of fifty signs performed by four signers and collected using Kinect V2 sensors. This dataset will be freely available for researchers to develop and benchmark their techniques for further advancement of the field. In addition, we evaluated the fusion of spatial and temporal features of different modalities, manual and non-manual, for sign language recognition using state-of-the-art deep learning techniques. This fusion boosted the accuracy of the recognition system in signer-independent mode by 3.6% compared with using manual gestures alone.
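The abstract reports fusing features from manual and non-manual modalities, but does not specify the fusion architecture here. One common scheme is late fusion, where per-modality descriptors (e.g. outputs of separate spatial/temporal network branches) are concatenated before the final classification layer. A sketch under that assumption (all names and dimensions are hypothetical):

```python
import numpy as np

def fuse_features(manual_feat, non_manual_feat):
    """Late fusion by concatenation: per-modality feature vectors are
    joined into a single descriptor fed to the final classifier."""
    return np.concatenate([manual_feat, non_manual_feat])

# Hypothetical 4-D manual (hand) and 3-D non-manual (face) descriptors
manual = np.array([0.2, 0.7, 0.1, 0.4])
face = np.array([0.9, 0.3, 0.5])
fused = fuse_features(manual, face)
print(fused.shape)  # prints (7,)
```

Other fusion points are possible (e.g. averaging per-modality class scores); concatenation is shown only because it is the simplest to illustrate.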


IEEE Access ◽  
2021 ◽  
Vol 9 ◽  
pp. 59612-59627
Author(s):  
Mohamed A. Bencherif ◽  
Mohammed Algabri ◽  
Mohamed A. Mekhtiche ◽  
Mohammed Faisal ◽  
Mansour Alsulaiman ◽  
...  
