A new extension of FDOSM based on Pythagorean fuzzy environment for evaluating and benchmarking sign language recognition systems

Authors: Mohammed S. Al-Samarraay, Mahmood M. Salih, Mohamed A. Ahmed, A. A. Zaidan, O. S. Albahri, …

Authors: Soraia Silva Prietch, Polianna dos Santos Paim, Ivan Olmos-Pineda, Josefina Guerrero García, Juan Manuel Gonzalez Calleros

Electronics, 2021, Vol. 10 (14), 1739
Authors: Hamzah Luqman, El-Sayed M. El-Alfy

Sign languages are the main visual communication medium between hard-of-hearing people and their societies. Like spoken languages, they are not universal and vary from region to region, yet they remain relatively under-resourced. Arabic sign language (ArSL) is one such language that has attracted increasing attention in the research community. However, most existing work on sign language recognition focuses on manual gestures and ignores the non-manual signals, such as facial expressions, that also carry linguistic information. One of the main obstacles to considering these modalities is the lack of suitable datasets. In this paper, we propose a new multi-modality ArSL dataset that integrates various types of modalities. It consists of 6748 video samples of fifty signs performed by four signers and collected using Kinect V2 sensors. The dataset will be freely available for researchers to develop and benchmark their techniques and further advance the field. In addition, we evaluate the fusion of spatial and temporal features from different modalities, manual and non-manual, for sign language recognition using state-of-the-art deep learning techniques. This fusion boosted the accuracy of the recognition system in the signer-independent mode by 3.6% compared with using manual gestures alone.
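The abstract does not spell out the fusion architecture, but the general idea of combining per-modality spatial and temporal features can be illustrated with a short sketch. Below is a minimal, hypothetical PyTorch example: each modality gets its own small CNN for per-frame spatial features and an LSTM for temporal modeling, and the resulting embeddings are concatenated (late fusion) before classification over fifty signs. The layer sizes, two-modality setup, and concatenation-based fusion are illustrative assumptions, not the paper's published model.

```python
# Hypothetical sketch of multi-modality spatial-temporal fusion for sign
# recognition. The architecture details (CNN/LSTM sizes, late fusion by
# concatenation) are assumptions for illustration, not the paper's method.
import torch
import torch.nn as nn

class ModalityStream(nn.Module):
    """Per-frame spatial CNN followed by a temporal LSTM for one modality."""
    def __init__(self, in_channels: int, feat_dim: int = 128):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(in_channels, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),   # -> (16, 4, 4) per frame
            nn.Flatten(),              # -> 256 features per frame
        )
        self.lstm = nn.LSTM(256, feat_dim, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, channels, height, width)
        b, t = x.shape[:2]
        frames = self.cnn(x.flatten(0, 1)).view(b, t, -1)
        _, (h, _) = self.lstm(frames)
        return h[-1]                   # last hidden state: (batch, feat_dim)

class FusionClassifier(nn.Module):
    """Late fusion: concatenate per-modality embeddings, then classify."""
    def __init__(self, channels_per_modality, num_classes: int = 50):
        super().__init__()
        self.streams = nn.ModuleList(
            ModalityStream(c) for c in channels_per_modality
        )
        self.head = nn.Linear(128 * len(self.streams), num_classes)

    def forward(self, inputs):
        fused = torch.cat([s(x) for s, x in zip(self.streams, inputs)], dim=1)
        return self.head(fused)

# Toy check with two assumed modalities, e.g. a manual (hands) stream and a
# non-manual (face) stream, each a 10-frame RGB clip.
model = FusionClassifier(channels_per_modality=[3, 3], num_classes=50)
hands = torch.randn(2, 10, 3, 64, 64)  # (batch, time, C, H, W)
face = torch.randn(2, 10, 3, 64, 64)
print(model([hands, face]).shape)       # torch.Size([2, 50])
```

Keeping a separate stream per modality and fusing late is one simple way to let manual and non-manual cues contribute independently before the classifier combines them; the reported 3.6% signer-independent gain suggests the non-manual stream adds complementary information.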

