Sign Language Gesture Recognition Using Doppler Radar and Deep Learning

Author(s):  
Hovannes Kulhandjian ◽  
Prakshi Sharma ◽  
Michel Kulhandjian ◽  
Claude D'Amours
Sensors ◽  
2021 ◽  
Vol 21 (11) ◽  
pp. 3937


Author(s):  
Seungeon Song ◽  
Bongseok Kim ◽  
Sangdong Kim ◽  
Jonghun Lee

Recently, Doppler radar-based foot gesture recognition has attracted attention as a hands-free interface, but recognizing a variety of foot gestures remains very challenging, and no prior studies have dealt deeply with this problem using Doppler radar and a deep learning model. In this paper, we propose a foot gesture recognition method that combines a new high-compression radar signature image with deep learning. The high-compression radar signature is created by extracting dominant features via Singular Value Decomposition (SVD), and a deep learning AlexNet model then recognizes four different foot gestures: kicking, swinging, sliding, and tapping. By using the high-compression signature instead of the original radar signature, the proposed method reduces the memory required for deep learning training. Original radar images and images reconstructed at compression levels of 90%, 95%, and 99% were applied to the AlexNet model. In experiments, all four foot gestures, as well as the movement of a rolling baseball, were recognized with an accuracy of approximately 98.64%. Owing to radar's inherent robustness to the surrounding environment, this foot gesture recognition sensor based on Doppler radar and deep learning should prove widely useful in future automotive and smart home applications.
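The key idea in this abstract is SVD-based compression of the radar signature image before training. Below is a minimal Python sketch of that general technique; the array size, and the assumption that a 99% compression level corresponds to keeping roughly 1% of the singular values, are illustrative guesses rather than details taken from the paper.

```python
# SVD-based image compression sketch (illustrative; not the paper's exact pipeline).
import numpy as np

def compress_signature(image: np.ndarray, compression: float) -> np.ndarray:
    """Reconstruct an image from only its top singular values.

    compression=0.99 keeps roughly 1% of the singular values, loosely
    matching the 90%/95%/99% compression levels named in the abstract.
    """
    U, s, Vt = np.linalg.svd(image, full_matrices=False)
    k = max(1, int(round(len(s) * (1.0 - compression))))
    return (U[:, :k] * s[:k]) @ Vt[:k, :]

# Example: a stand-in 224x224 "radar signature" reconstructed at 99% compression.
signature = np.random.rand(224, 224)
reconstructed = compress_signature(signature, compression=0.99)
```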


2021 ◽  
Vol 1 (1) ◽  
pp. 71-80
Author(s):  
Febri Damatraseta ◽  
Rani Novariany ◽  
Muhammad Adlan Ridhani

BISINDO is one of Indonesia's sign languages, and few facilities exist to support it, which makes daily life difficult for deaf people. This research therefore proposes a system that recognizes the BISINDO alphabet and translates it into text, with the aim of enabling two-way communication for deaf people. The main problem encountered in this study is the small dataset. We therefore test hand gesture recognition by comparing two CNN architectures, LeNet-5 and AlexNet, to determine which classification technique performs better when each class contains fewer than 1,000 images. Testing on still images showed that the AlexNet model trained in this study is the better choice: it achieved a prediction accuracy of 76%, whereas the LeNet-5 model reached only 19%. When the AlexNet model was deployed in the proposed system, it predicted correctly 60% of the time.

Keywords: Sign language, BISINDO, Computer Vision, Hand Gesture Recognition, Skin Segmentation, CIELab, Deep Learning, CNN.
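The keywords suggest CIELab-based skin segmentation as a preprocessing step ahead of the CNN. The following Python sketch shows one common way to do this with OpenCV; the threshold ranges are illustrative assumptions, not values from the paper.

```python
# CIELab skin segmentation sketch (thresholds are assumed, not from the paper).
import cv2
import numpy as np

def segment_skin(bgr_image: np.ndarray) -> np.ndarray:
    """Return a binary mask of likely skin pixels in CIELab space."""
    lab = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2LAB)
    # OpenCV scales L to [0, 255] and offsets a/b by 128; these bounds
    # loosely bracket common skin tones and would need tuning in practice.
    lower = np.array([20, 135, 130], dtype=np.uint8)
    upper = np.array([250, 180, 175], dtype=np.uint8)
    mask = cv2.inRange(lab, lower, upper)
    # Light morphological opening removes speckle noise from the mask.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    return cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
```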


IEEE Access ◽  
2020 ◽  
Vol 8 ◽  
pp. 192527-192542
Author(s):  
Muneer Al-Hammadi ◽  
Ghulam Muhammad ◽  
Wadood Abdul ◽  
Mansour Alsulaiman ◽  
Mohammed A. Bencherif ◽  
...  

2020 ◽  
Vol 38 (6A) ◽  
pp. 926-937
Author(s):  
Abdulwahab A. Abdulhussein ◽  
Firas A. Raheem

American Sign Language (ASL) is a complex language built on a standard set of gestures, formed by the hands and supported by facial expressions and body posture. It is the main communication language of deaf and hard-of-hearing people in North America and other parts of the world. In this paper, gesture recognition of static ASL using deep learning is proposed. The contribution consists of two solutions. The first is preprocessing: static ASL binary images are resized with bicubic interpolation, and the hand boundary is detected with the Roberts edge detection method, which yields good recognition results. The second is classifying the 24 static alphabet characters of ASL using a Convolutional Neural Network (CNN) and deep learning. The classification accuracy is 99.3% with a loss of 0.0002, reached after 100 iterations in 36 minutes and 15 seconds of elapsed time. The training is fast and gives very good results in comparison with related CNN, SVM, and ANN approaches.
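The first solution described above is a preprocessing pipeline: bicubic resizing of the binary ASL images followed by Roberts edge detection of the hand boundary. A minimal Python sketch of such a pipeline follows; the target size and binarization threshold are assumptions, as the abstract does not state them.

```python
# Bicubic resize + Roberts edge detection sketch (size/threshold assumed).
import cv2
from skimage.filters import roberts

def preprocess_asl(gray_image, size=(64, 64), threshold=127):
    """Resize a grayscale ASL image bicubically, binarize it, and
    extract the hand boundary with the Roberts cross operator."""
    resized = cv2.resize(gray_image, size, interpolation=cv2.INTER_CUBIC)
    _, binary = cv2.threshold(resized, threshold, 255, cv2.THRESH_BINARY)
    edges = roberts(binary / 255.0)  # gradient magnitude along the boundary
    return edges
```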


ACTA IMEKO ◽  
2021 ◽  
Vol 10 (4) ◽  
pp. 97
Author(s):  
Emanuele Buchicchio ◽  
Francesco Santoni ◽  
Alessio De Angelis ◽  
Antonio Moschitta ◽  
Paolo Carbone

Gesture recognition is a fundamental step toward efficient communication for the deaf through automated translation of sign language. This work proposes the use of a high-precision magnetic positioning system for 3D positioning and orientation tracking of the fingers and the palm of the hand. The gesture is reconstructed by the MagIK (magnetic and inverse kinematics) method and then processed by a deep learning classification model trained to recognize the gestures of the sign language alphabet. Results confirm the limits of vision-based systems and show that the proposed method, based on hand skeleton reconstruction, has good generalization properties. The proposed system, which combines sensor-based gesture acquisition with deep learning techniques for gesture recognition, achieves 100% signer-independent classification accuracy after a few hours of training, using transfer learning on the well-known ResNet CNN architecture. The proposed classification model training method can be applied to other sensor-based gesture tracking systems and other applications, regardless of the specific data acquisition technology.
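The classification stage here is transfer learning on a ResNet backbone. The sketch below shows the standard recipe in PyTorch: freeze the pretrained network and retrain only a new classification head. The ResNet depth (18), class count, and optimizer are assumptions; the abstract does not specify them.

```python
# Transfer-learning sketch on ResNet (depth, classes, optimizer are assumed).
import torch
import torch.nn as nn
from torchvision import models

num_classes = 26  # assumed: one class per sign-language alphabet letter

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in model.parameters():
    param.requires_grad = False          # freeze the pretrained backbone
model.fc = nn.Linear(model.fc.in_features, num_classes)  # new trainable head

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()
```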

