Development of Arabic Sign Language Dictionary Using 3D Avatar Technologies

Author(s): Ahmed H. Aliwy, Ahmed A. Alethary

Arabic sign language (ArSL) is the natural language of the deaf community in Arabic countries. ArSL suffers from a lack of resources such as unified dictionaries and corpora. In this work, a dictionary from Arabic to ArSL has been constructed as part of a translation system. Arabic words are converted into the Hamburg Notation System (HamNoSys) using the eSign editor software. HamNoSys is used to encode the manual parameters (handshape, hand orientation, hand location, and hand movement), while the non-manual parameters (facial expressions, shoulder raising, mouthing gestures, head tilting, and body movement) are added through the mouth, face, and limbs features of the eSign editor. Each sign is then converted to a Signing Gesture Markup Language (SiGML) file, and a 3D avatar interprets the SiGML script to produce the animated sign. The constructed dictionary contains three thousand signs; it can therefore be adopted by the translation system, in which written text is transformed into sign language, and it can also be used for the education of deaf people. The dictionary will be made available as a free resource for researchers. Building it is hard and time-consuming work, but it is an essential step toward machine translation of full Arabic text into ArSL with 3D animations.
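As an illustration of how such a word-to-sign dictionary might be consumed programmatically, the Python sketch below maps Arabic words to the SiGML scripts that drive the avatar. The dictionary entries, file paths, and helper names are hypothetical and are not taken from the paper.

```python
import xml.etree.ElementTree as ET
from typing import Optional

# Hypothetical mapping from an Arabic word to the SiGML file produced with
# the eSign editor (HamNoSys manual + non-manual parameters).
SIGML_DICTIONARY = {
    "بيت": "signs/house.sigml",     # "house"
    "مدرسة": "signs/school.sigml",  # "school"
}

def lookup_sign(word: str) -> Optional[ET.Element]:
    """Return the parsed SiGML script for an Arabic word, if the dictionary has it."""
    path = SIGML_DICTIONARY.get(word)
    if path is None:
        return None  # out-of-dictionary word: a real system might fall back to fingerspelling
    return ET.parse(path).getroot()

def signs_for_sentence(words):
    """Collect the SiGML scripts for each word so the avatar can play them in order."""
    signs = []
    for w in words:
        sign = lookup_sign(w)
        if sign is not None:
            signs.append(sign)
    return signs
```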

2018, Vol. 3(2), pp. 57-70
Author(s): Nadia Aouiti, Mohamed Jemni

This research paper presents our ongoing project, which aims to translate Arabic text into Arabic Sign Language (ArSL) in real time. The project is part of a Web application [1] based on avatar technology (animation in a virtual world). The input of the system is a text in natural language; the output is a real-time, online interpretation in sign language [2]. Our work focuses on Arabic as the input language, which requires considerable processing because of its particularities. Our solution starts with the linguistic treatment of the Arabic sentence, passes through the definition and generation of an Arabic annotation gloss system, and finally generates an animated sentence using avatar technology.
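A highly simplified sketch of the text-to-gloss stage described above is given below. The gloss lexicon, whitespace tokenization, and file naming are illustrative assumptions rather than the project's actual components; a real system would use a morphological analyzer and grammatical reordering rules.

```python
# Illustrative sketch only: Arabic text -> gloss annotation -> SiGML scripts
# for the avatar.

GLOSS_LEXICON = {
    "الولد": "BOY",       # hypothetical gloss entries
    "يذهب": "GO",
    "المدرسة": "SCHOOL",
}

def text_to_glosses(sentence):
    """Tokenize the Arabic sentence and map each word to a sign gloss."""
    tokens = sentence.split()
    return [GLOSS_LEXICON.get(t, "FINGERSPELL:" + t) for t in tokens]

def glosses_to_sigml_paths(glosses):
    """Resolve each known gloss to the SiGML file that animates it on the avatar."""
    return ["glosses/" + g + ".sigml" for g in glosses if not g.startswith("FINGERSPELL:")]

if __name__ == "__main__":
    glosses = text_to_glosses("الولد يذهب المدرسة")
    print(glosses)                       # ['BOY', 'GO', 'SCHOOL']
    print(glosses_to_sigml_paths(glosses))
```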


2020, Vol. 2020, pp. 1-9
Author(s): M. M. Kamruzzaman

Sign language encompasses the movement of the arms and hands as a means of communication for people with hearing disabilities. An automated sign recognition system requires two main courses of action: the detection of particular features and the categorization of particular input data. In the past, many approaches for classifying and detecting sign languages have been put forward to improve system performance. However, recent progress in computer vision has driven further exploration of hand sign and gesture recognition with the aid of deep neural networks. Arabic sign language has witnessed unprecedented research activity on recognizing hand signs and gestures using deep learning models. This paper proposes a vision-based system that applies a CNN to recognize Arabic hand sign-based letters and translate them into Arabic speech. The proposed system automatically detects the hand sign letters and speaks out the result in Arabic using a deep learning model. The system achieves 90% accuracy in recognizing Arabic hand sign-based letters, which indicates that it is highly dependable. The accuracy could be further improved by using more advanced hand gesture recognition devices such as Leap Motion or Xbox Kinect. After the Arabic hand sign-based letters are recognized, the outcome is fed into a text-to-speech engine, which produces audio in Arabic as the output.
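For readers unfamiliar with such classifiers, a minimal Keras sketch of a CNN of this kind is shown below. The input resolution, number of classes, and layer sizes are assumptions, not the architecture reported in the paper.

```python
# Minimal sketch (assumed sizes, not the paper's architecture) of a CNN that
# classifies Arabic hand-sign letter images, built with TensorFlow/Keras.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 32         # assumed number of Arabic hand-sign letter classes
IMG_SHAPE = (64, 64, 1)  # assumed grayscale input resolution

def build_letter_classifier() -> tf.keras.Model:
    model = models.Sequential([
        layers.Input(shape=IMG_SHAPE),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(NUM_CLASSES, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# After prediction, the recognized letters would be concatenated into text and
# passed to an Arabic text-to-speech engine to produce the spoken output.
```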

