Sign Language Semantic Translation System using Ontology and Deep Learning

Author(s):  
Eman K Elsayed ◽  
Doaa R.
2021 ◽  
Vol 1 (1) ◽  
pp. 71-80

Author(s):  
Febri Damatraseta ◽  
Rani Novariany ◽  
Muhammad Adlan Ridhani

BISINDO is one of Indonesia's sign languages, yet few facilities exist to support it, which makes daily life difficult for deaf people. This research therefore offers a system that recognizes the BISINDO alphabet and translates it into text, with the aim of helping deaf people communicate in both directions. The main problem encountered in this study is the small dataset. The research therefore tests hand gesture recognition by comparing two CNN architectures, LeNet-5 and AlexNet, to determine which classification technique performs better when each class contains fewer than 1,000 images. Testing showed that the CNN with the AlexNet architecture is the better choice: when evaluated on still images with the models produced in training, AlexNet achieved a prediction accuracy of 76%, while LeNet reached only 19%. When the AlexNet model was used in the proposed system, it predicted correctly only 60% of the time.

Keywords: Sign Language, BISINDO, Computer Vision, Hand Gesture Recognition, Skin Segmentation, CIELab, Deep Learning, CNN.
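
As a rough illustration of the comparison described in the abstract above, a minimal Keras sketch of the two architectures might look like this. This is not the authors' code; the class count (26 alphabet letters) and input sizes are assumptions.

```python
# Minimal sketch (assumed, not the authors' code): a LeNet-5-style and an
# AlexNet-style CNN for small hand-gesture datasets, built in Keras.
from tensorflow.keras import layers, models

NUM_CLASSES = 26  # assumption: one class per BISINDO alphabet letter

def lenet5(input_shape=(32, 32, 3)):
    return models.Sequential([
        layers.Conv2D(6, 5, activation="tanh", input_shape=input_shape),
        layers.AveragePooling2D(),
        layers.Conv2D(16, 5, activation="tanh"),
        layers.AveragePooling2D(),
        layers.Flatten(),
        layers.Dense(120, activation="tanh"),
        layers.Dense(84, activation="tanh"),
        layers.Dense(NUM_CLASSES, activation="softmax"),
    ])

def alexnet(input_shape=(227, 227, 3)):
    return models.Sequential([
        layers.Conv2D(96, 11, strides=4, activation="relu",
                      input_shape=input_shape),
        layers.MaxPooling2D(3, strides=2),
        layers.Conv2D(256, 5, padding="same", activation="relu"),
        layers.MaxPooling2D(3, strides=2),
        layers.Conv2D(384, 3, padding="same", activation="relu"),
        layers.Conv2D(384, 3, padding="same", activation="relu"),
        layers.Conv2D(256, 3, padding="same", activation="relu"),
        layers.MaxPooling2D(3, strides=2),
        layers.Flatten(),
        layers.Dense(4096, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(4096, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(NUM_CLASSES, activation="softmax"),
    ])

# Compile both models identically so the comparison is fair.
for model in (lenet5(), alexnet()):
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
```

Training both models on the same small dataset and comparing validation accuracy reproduces the kind of head-to-head test the abstract describes.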


2021 ◽  
Vol 14 (1) ◽  
pp. 316-325
Author(s):  
Eman Elsayed ◽  
Doaa Fathy

Dynamic sign language recognition aims to recognize the hand gestures of any person. Dynamic sign language recognition systems face challenges in recognizing the semantics of hand gestures, which arise from personal differences in hand signs from one person to another. Real-life video gesture frames cannot be treated at the frame level the way static signs are. This research proposes a semantic translation system for dynamic hand gestures using deep learning and ontology. We used the proposed MSLO (Multi Sign Language Ontology) in the semantic translation step, and any user can retrain the system to make it a personal one. We used three-dimensional convolutional neural networks (3D-CNNs) followed by convolutional long short-term memory (ConvLSTM) to improve recognition accuracy in dynamic sign language recognition. We applied the proposed system to three dynamic gesture datasets of color videos, and the average recognition accuracy was 97.4%. All training and testing were done on a Graphics Processing Unit with the support of Google Colab, which decreased the average run time by about 87.9%. In addition to adding semantics to dynamic sign language translation, the proposed system achieves good results compared to some existing dynamic sign language recognition systems.
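
A minimal sketch of that 3D-CNN followed by ConvLSTM backbone is shown below. It is not the authors' released code; the clip length, frame size, and class count are assumptions.

```python
# Minimal sketch (assumed shapes, not the authors' code): 3D convolutions
# capture short-range spatio-temporal patterns, then a ConvLSTM layer models
# longer-range temporal dependencies across the remaining frames.
from tensorflow.keras import layers, models

NUM_GESTURES = 20          # assumption: number of dynamic gesture classes
FRAMES, H, W = 16, 64, 64  # assumption: clip length and frame size

model = models.Sequential([
    layers.Conv3D(32, (3, 3, 3), padding="same", activation="relu",
                  input_shape=(FRAMES, H, W, 3)),
    layers.MaxPooling3D((1, 2, 2)),   # downsample space, keep all frames
    layers.Conv3D(64, (3, 3, 3), padding="same", activation="relu"),
    layers.MaxPooling3D((2, 2, 2)),   # downsample space and time
    layers.ConvLSTM2D(64, (3, 3), padding="same", return_sequences=False),
    layers.GlobalAveragePooling2D(),
    layers.Dense(NUM_GESTURES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```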


A recent surge of interest in creating translation systems inclusive of sign languages has been engendered not only by the rapid development of various approaches in the field of machine translation, but also by increased awareness of the struggles of the deaf community to comprehend written English. This paper describes the workings of SILANT (SIgn LANguage Translator), a machine translation system that converts English to American Sign Language (ASL) using the principles of Natural Language Processing (NLP) and deep learning. The translation of English text is based on transformational rules that generate an intermediate representation, which in turn spawns appropriate ASL animations. Although this kind of rule-based translation is notorious for being accurate yet narrow, this system broadens the scope of the translation using a synonym network and a paraphrasing module that implement deep learning algorithms. In doing so, it achieves both the accuracy of a rule-based approach and the scale of a deep learning one.
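
To make the rule-plus-synonym idea concrete, the following toy sketch uses a hypothetical gloss lexicon and synonym table standing in for SILANT's transformational rules and learned synonym network; none of these names or rules come from the paper.

```python
# Toy sketch (hypothetical lexicon and rules, not the SILANT code): a
# rule-based English -> ASL-gloss pass with a synonym fallback for words
# outside the gloss lexicon.
GLOSS_LEXICON = {"i": "ME", "store": "STORE", "go": "GO",
                 "yesterday": "YESTERDAY"}
SYNONYMS = {"shop": "store", "went": "go"}  # stand-in for a synonym network

def to_gloss(sentence: str) -> list[str]:
    tokens = [t.strip(".,!?").lower() for t in sentence.split()]
    # Toy rule: ASL commonly fronts time markers (topic-comment order).
    tokens.sort(key=lambda t: 0 if t == "yesterday" else 1)
    glosses = []
    for t in tokens:
        t = SYNONYMS.get(t, t)  # synonym fallback widens rule coverage
        if t in GLOSS_LEXICON:
            glosses.append(GLOSS_LEXICON[t])
    return glosses

print(to_gloss("I went to the shop yesterday."))
# -> ['YESTERDAY', 'ME', 'GO', 'STORE']
```

A real system would replace the hard-coded tables with a parser and a learned paraphrasing model, but the control flow is the same: rules first, then a synonym fallback for out-of-lexicon words.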


Author(s):  
Safayet Anowar Shurid ◽  
Khandaker Habibul Amin ◽  
Md. Shahnawaz Mirbahar ◽  
Dolan Karmaker ◽  
Mohammad Tanvir Mahtab ◽  
...  

Author(s):  
Ala Addin I. Sidig ◽  
Hamzah Luqman ◽  
Sabri Mahmoud ◽  
Mohamed Mohandes

Sign language is the major means of communication for the deaf community. It uses body language and gestures such as hand shapes, lip patterns, and facial expressions to convey a message. Sign language is geography-specific, as it differs from one country to another. Arabic Sign Language (ArSL) is used in all Arab countries. The lack of a comprehensive benchmarking database is one of the challenges of the automatic recognition of Arabic Sign Language. This article introduces the KArSL database for ArSL, consisting of 502 signs that cover 11 chapters of the ArSL dictionary. Signs in the KArSL database are performed by three professional signers, and each sign is repeated 50 times by each signer. The database is recorded using the state-of-the-art multi-modal Microsoft Kinect V2. We also propose three approaches for sign language recognition using this database: Hidden Markov Models, a deep learning image classification model applied to an image composed of shots from the video of the sign, and an attention-based deep learning captioning system. The recognition accuracies of these systems indicate their suitability for such a large number of Arabic signs. The techniques were also tested on a publicly available database. The KArSL database will be made freely available to interested researchers.
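
Of the three proposed approaches, the second, classifying a single image composed of shots from the sign's video, can be sketched as follows. The frame count, grid layout, and image size here are assumptions rather than the paper's parameters.

```python
# Minimal sketch (assumption, not the KArSL authors' pipeline): sample frames
# evenly from a sign video and tile them into one grid image, so an ordinary
# 2D image classifier can be applied to the whole sign.
import cv2
import numpy as np

def video_to_grid(path: str, rows: int = 2, cols: int = 4,
                  size: tuple = (112, 112)) -> np.ndarray:
    cap = cv2.VideoCapture(path)
    total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    # Pick rows*cols frame indices spread evenly across the video.
    idxs = np.linspace(0, max(total - 1, 0), rows * cols).astype(int)
    shots = []
    for i in idxs:
        cap.set(cv2.CAP_PROP_POS_FRAMES, int(i))
        ok, frame = cap.read()
        if not ok:  # pad with a black frame if a read fails
            frame = np.zeros((size[1], size[0], 3), np.uint8)
        shots.append(cv2.resize(frame, size))
    cap.release()
    # Tile the shots into a rows x cols grid image.
    grid = np.vstack([np.hstack(shots[r * cols:(r + 1) * cols])
                      for r in range(rows)])
    return grid  # feed this image to any 2D CNN classifier
```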

