3D Technologies and Applications in Sign Language

Author(s):  
Kiriakos Stefanidis ◽  
Dimitrios Konstantinidis ◽  
Athanasios Kalvourtzis ◽  
Kosmas Dimitropoulos ◽  
Petros Daras

Millions of people with partial or complete hearing loss use variants of sign language to communicate with each other or with hearing people in their everyday life. It is therefore imperative to develop systems that assist these people by removing the barriers that hinder their social inclusion. Such systems should aim to capture sign language accurately, classify signs into natural language words, and represent sign language by having avatars or synthesized videos execute the exact moves that convey meaning in the sign language. This chapter reviews current state-of-the-art approaches to sign language recognition and representation and analyzes the challenges they face. Furthermore, it presents a novel AI-based solution to the problem of robust sign language capturing and representation, as well as a solution to the unavailability of annotated sign language datasets, before discussing limitations and directions for future work.

2021 ◽  
Vol 5 (2 (113)) ◽  
pp. 44-54
Author(s):  
Chingiz Kenshimov ◽  
Samat Mukhanov ◽  
Timur Merembayev ◽  
Didar Yedilkhan

For people with hearing disabilities, sign language is the most important means of communication, so researchers around the world are proposing intelligent hand gesture recognition systems. Such a system is aimed not only at those who wish to understand a sign language, but also at those who wish to speak through gesture recognition software. In this paper, a new benchmark dataset for Kazakh fingerspelling, suitable for training deep neural networks, is introduced. The dataset contains more than 10,122 gesture samples covering 42 letters. The alphabet has its own peculiarities, as some characters are signed in motion, which may influence recognition. The paper describes the research, analysis, comparison, and testing of convolutional neural networks: LeNet, AlexNet, ResNet, and EfficientNet (EfficientNetB7). EfficientNet is a state-of-the-art (SOTA) architecture and the most recent of those under consideration. On this dataset, we show that the LeNet and EfficientNet networks outperform the other competing algorithms; moreover, EfficientNet can achieve state-of-the-art performance on other hand gesture datasets. The architecture and operating principles of these algorithms reflect the effectiveness of their application to sign language recognition. The CNN models are evaluated using accuracy and a penalty matrix. Across training epochs, LeNet and EfficientNet showed the best results, with closely matching accuracy and loss curves. The EfficientNet results were interpreted with the SHapley Additive exPlanations (SHAP) framework, which probes the model to detect complex relationships between image features. Building on the SHAP analysis may help to further improve the model's accuracy.
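As a rough illustration of the kind of pipeline the paper evaluates, the sketch below fine-tunes a Keras EfficientNetB7 classifier on a 42-class fingerspelling image dataset. Only the class count comes from the abstract; the directory name, image size, and hyperparameters are illustrative assumptions, not the authors' settings.

```python
# Minimal sketch (not the authors' code): fine-tuning EfficientNetB7 on a
# 42-class fingerspelling image dataset.
import tensorflow as tf

NUM_CLASSES = 42          # Kazakh fingerspelling letters (from the abstract)
IMG_SIZE = (224, 224)     # assumed input resolution

# Hypothetical directory of per-class image folders.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "kazakh_fingerspelling/train", image_size=IMG_SIZE, batch_size=32)

base = tf.keras.applications.EfficientNetB7(
    include_top=False, weights="imagenet", input_shape=IMG_SIZE + (3,))
base.trainable = False    # start by training only the classification head

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, epochs=10)
```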


2021 ◽  
Vol 2021 ◽  
pp. 1-10
Author(s):  
Youngmin Na ◽  
Hyejin Yang ◽  
Jihwan Woo

Recognition and understanding of sign language can aid communication between deaf and hearing people. Recently, research groups have developed sign language recognition algorithms using multiple sensors. However, for everyday use the number of sensors must be minimized; otherwise, a sign language interpreter would still be required. In this study, a sign language classification method was developed using a single accelerometer to recognize the Korean sign language alphabet. The accelerometer is worn on the proximal phalanx of the index finger of the dominant hand. Triaxial accelerometer signals were used to segment each sign gesture (i.e., the time period when a user is performing a sign) and to recognize the 31 Korean sign language letters (chance level: 3.2%). The vector sum of the accelerometer signals was used to segment the sign gesture with 98.9% segmentation accuracy, comparable to that of previous multisensor systems (99.49%). The system classified the Korean sign language alphabet with 92.2% accuracy, higher than that of a previous work on the same sign language alphabet classification task. The findings demonstrate that a single accelerometer with simple features can be reliably used for Korean sign language alphabet recognition in everyday life.
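A minimal sketch of the segmentation idea described above, assuming a NumPy array of raw samples: the vector sum (magnitude) of the three accelerometer axes is thresholded to find time spans of active signing. The threshold and minimum segment length are illustrative assumptions, not the paper's values.

```python
# Sketch (assumed details, not the paper's implementation): segmenting sign
# gestures from triaxial accelerometer samples via the vector sum of the axes.
import numpy as np

def segment_gestures(acc_xyz, threshold=1.2, min_len=20):
    """acc_xyz: (N, 3) array of accelerometer samples in g.
    Returns (start, end) index pairs of candidate gesture segments."""
    magnitude = np.linalg.norm(acc_xyz, axis=1)   # vector sum of the 3 axes
    active = magnitude > threshold                # motion above rest level
    segments, start = [], None
    for i, flag in enumerate(active):
        if flag and start is None:
            start = i                             # gesture onset
        elif not flag and start is not None:
            if i - start >= min_len:              # discard brief spikes
                segments.append((start, i))
            start = None
    if start is not None and len(active) - start >= min_len:
        segments.append((start, len(active)))
    return segments
```

In practice the threshold would be calibrated against the at-rest baseline of the worn sensor, which sits near 1 g due to gravity.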


Sensors ◽  
2018 ◽  
Vol 18 (7) ◽  
pp. 2208 ◽  
Author(s):  
Mohamed Aktham Ahmed ◽  
Bilal Bahaa Zaidan ◽  
Aws Alaa Zaidan ◽  
Mahmood Maher Salih ◽  
Muhammad Modi bin Lakulu

Author(s):  
Aleksejs Zorins ◽  
Pēteris Grabusts

The goal of the paper is to review several aspects of Sign Language Recognition problems, focusing on the Artificial Neural Network approach. The lack of an automated Latvian Sign Language recognition system has been identified, and proposals on how to develop such a system have been made. The authors use analytical and statistical methods as well as practical experiments with neural network software. The main results of the paper are a description of the main Sign Language Recognition methods based on Artificial Neural Networks and directions for future work based on the authors' previous expertise.


2019 ◽  
Vol 7 (2) ◽  
pp. 43
Author(s):  
Malhotra Pooja ◽  
K. Maniar Chirag ◽  
V. Sankpal Nikhil ◽  
R. Thakkar Hardik ◽  
...

2016 ◽  
Vol 3 (3) ◽  
pp. 13
Author(s):  
Verma Versha ◽  
Patil Sandeep B.

2020 ◽  
Vol 14 ◽  
Author(s):  
Vasu Mehra ◽  
Dhiraj Pandey ◽  
Aayush Rastogi ◽  
Aditya Singh ◽  
Harsh Preet Singh

Background: People suffering from hearing and speaking disabilities have few ways of communicating with others. One of these is to communicate through the use of sign language.

Objective: Developing a system for sign language recognition is essential for deaf as well as mute people. The recognition system acts as a translator between a disabled and an able person, eliminating hindrances to the exchange of ideas. Most existing systems are poorly designed, with limited support for users' day-to-day needs.

Methods: The proposed system, embedded with gesture recognition capability, extracts signs from a video sequence and displays them on screen. In addition, speech-to-text and text-to-speech modules are introduced to further assist affected users. To get the best out of the human-computer relationship, the proposed solution combines various cutting-edge technologies with machine learning-based sign recognition models trained using the TensorFlow and Keras libraries.

Results: The proposed architecture works better than several gesture recognition techniques, such as background elimination and conversion to HSV, because of the sharply defined images provided to the model for classification. Testing indicates a reliable recognition system with high accuracy that covers most of the essential features a deaf or mute person needs in day-to-day tasks.

Conclusion: Current technological advances call for reliable solutions that can be deployed to help deaf and mute people adjust to everyday life. Instead of focusing on a standalone technology, this work brings a plethora of them together. The proposed sign recognition system is based on feature extraction and classification; the trained model helps identify different gestures.
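For context, a minimal sketch of the HSV-conversion baseline the abstract compares against (not the authors' pipeline): a frame is converted to HSV color space and thresholded to isolate a skin-colored hand region before classification. The HSV bounds below are common illustrative values, not values from the paper.

```python
# Sketch of an HSV skin-color segmentation baseline (illustrative bounds).
import cv2
import numpy as np

def extract_hand_region(frame_bgr):
    """Return the input frame with everything but skin-colored pixels masked out."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    lower = np.array([0, 40, 60], dtype=np.uint8)     # assumed skin-tone range
    upper = np.array([25, 255, 255], dtype=np.uint8)
    mask = cv2.inRange(hsv, lower, upper)             # binary skin mask
    # Morphological opening removes small speckle noise from the mask.
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    return cv2.bitwise_and(frame_bgr, frame_bgr, mask=mask)
```

Such fixed color thresholds are sensitive to lighting and skin tone, which is consistent with the abstract's claim that sharply defined model inputs outperform this kind of preprocessing.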

