Development of a Verbal Robot Hand Gesture Recognition System

2021 ◽  
Vol 16 ◽  
pp. 573-583
Author(s):  
Chingis Kenshimov ◽  
Talgat Sundetov ◽  
Murat Kunelbayev ◽  
Zhazira Amirgaliyeva ◽  
Didar Yedilkhan ◽  
...  

This article analyzes the most widely used sign languages and the correlations among them, and describes the development of a verbal robot hand gesture recognition system for the Kazakh language. The proposed system contains a touch sensor that measures the electrical properties of the user's skin on contact, providing more accurate information for simulating and indicating the gestures of the robot hand. Within the framework of the system, the speed and accuracy of recognizing each gesture of the verbal robot are calculated. The average recognition accuracy was over 98%, and the detection time was 3 ms on a 1.9 GHz Jetson Nano processor, which is sufficient for a robot that displays natural-language gestures. A complete fingerspelling alphabet of Kazakh sign language for a verbal robot is also proposed. To improve the quality of gesture recognition, a machine learning method was used. The operability of the developed gesture recognition technique was tested, and computational experiments evaluated the effectiveness of the algorithms and software with which the verbal robot responds to voice commands, based on automatic recognition of multilingual human speech. The authors thus propose an intelligent verbal complex implemented in Python, with a CMUSphinx communication module, a PyOpenGL simulator for graphical command execution, and a robot manipulation module based on 3D modeling from ABB.
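The per-gesture speed and accuracy figures reported above roll up into overall metrics in a straightforward way. A minimal sketch of that aggregation, using hypothetical per-gesture results (the counts and latencies below are illustrative, not the paper's data):

```python
# Hypothetical per-gesture evaluation results: gesture -> (correct, total, avg_ms).
# The figures are illustrative placeholders, not the paper's measurements.
results = {
    "A": (99, 100, 2.8),
    "B": (98, 100, 3.1),
    "C": (97, 100, 3.0),
}

def summarize(results):
    """Return overall recognition accuracy and mean per-gesture detection latency."""
    correct = sum(c for c, _, _ in results.values())
    total = sum(t for _, t, _ in results.values())
    mean_ms = sum(ms for _, _, ms in results.values()) / len(results)
    return correct / total, mean_ms

accuracy, latency_ms = summarize(results)
```

With these placeholder numbers, the overall accuracy is 294/300 = 98% and the mean latency about 2.97 ms, in the same range as the figures the abstract reports.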

Hearing-impaired individuals use sign languages to communicate with others in their community. Because sign language is widely used within that community, hard-of-hearing individuals understand it easily, but many hearing people do not know it. In this paper, a hand gesture recognition system is developed to overcome this problem, allowing those who do not know sign language to communicate simply with hard-of-hearing individuals. A computer-vision-based system is designed to detect sign language. The datasets used in this paper are binary images, which are given to a convolutional neural network (CNN). The model extracts image features, classifies the images, and recognizes the gestures. The gestures used in this paper are from American Sign Language. In the real-time system, the images are converted to binary images using the Hue, Saturation, Value (HSV) color model. In this model, 87.5% of the data is used for training and 12.5% for testing, and the accuracy obtained is 97%.
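The HSV-to-binary step described above amounts to an in-range threshold on the three HSV channels. A minimal NumPy sketch of that masking step (the skin-tone range below is an illustrative assumption, not taken from the paper; the function mirrors the behavior of OpenCV's `cv2.inRange`):

```python
import numpy as np

def hsv_to_binary(hsv, lower, upper):
    """Threshold an HSV image to a binary mask: 1 where every channel lies
    inside [lower, upper], 0 elsewhere (analogous to cv2.inRange)."""
    lower = np.asarray(lower)
    upper = np.asarray(upper)
    mask = np.all((hsv >= lower) & (hsv <= upper), axis=-1)
    return mask.astype(np.uint8)

# Illustrative skin-tone range in OpenCV's convention (H: 0-179, S/V: 0-255);
# real systems tune these bounds per lighting conditions.
SKIN_LOWER, SKIN_UPPER = (0, 40, 60), (25, 255, 255)
```

The resulting binary mask is what would be resized and fed to the CNN in place of the raw camera frame.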


Sensors ◽  
2021 ◽  
Vol 21 (7) ◽  
pp. 2540
Author(s):  
Zhipeng Yu ◽  
Jianghai Zhao ◽  
Yucheng Wang ◽  
Linglong He ◽  
Shaonan Wang

In recent years, surface electromyography (sEMG)-based human–computer interaction has been developed to improve the quality of life for people. Gesture recognition based on the instantaneous values of sEMG has the advantages of accurate prediction and low latency. However, the low generalization ability of the hand gesture recognition method limits its application to new subjects and new hand gestures, and brings a heavy training burden. For this reason, based on a convolutional neural network, a transfer learning (TL) strategy for instantaneous gesture recognition is proposed to improve the generalization performance of the target network. CapgMyo and NinaPro DB1 are used to evaluate the validity of our proposed strategy. Compared with the non-transfer learning (non-TL) strategy, our proposed strategy improves the average accuracy of new subject and new gesture recognition by 18.7% and 8.74%, respectively, when up to three repeated gestures are employed. The TL strategy reduces the training time by a factor of three. Experiments verify the transferability of spatial features and the validity of the proposed strategy in improving the recognition accuracy of new subjects and new gestures, and reducing the training burden. The proposed TL strategy provides an effective way of improving the generalization ability of the gesture recognition system.
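The core of the transfer learning strategy described above is reusing the pretrained feature layers unchanged and retraining only the classifier on the new subject's (or new gesture's) data. A self-contained NumPy sketch of that idea, with a fixed random projection standing in for the frozen convolutional layers and a logistic-regression head trained on synthetic data (everything here is an illustrative toy, not the paper's network):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the pretrained convolutional layers: a fixed projection whose
# weights are transferred as-is and never updated (the "frozen" part of TL).
W_frozen = 0.3 * rng.standard_normal((16, 8))

def features(x):
    """Frozen feature extractor: reused across subjects, never retrained."""
    return np.tanh(x @ W_frozen)

def train_head(x, y, epochs=200, lr=0.5):
    """Retrain only the classifier head on the new subject's small dataset."""
    f = features(x)                                  # transferred features
    w, b = np.zeros(f.shape[1]), 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(f @ w + b)))       # sigmoid predictions
        grad = p - y                                 # logistic-loss gradient
        w -= lr * f.T @ grad / len(y)
        b -= lr * grad.mean()
    return w, b

# Synthetic two-class "sEMG" data for a hypothetical new subject.
x = rng.standard_normal((64, 16))
y = (x[:, 0] + 0.1 * rng.standard_normal(64) > 0).astype(float)
w, b = train_head(x, y)
acc = ((1.0 / (1.0 + np.exp(-(features(x) @ w + b))) > 0.5) == y).mean()
```

Because only the small head is optimized, far fewer parameters are updated per step, which is the mechanism behind the reduced training burden the abstract reports.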


2012 ◽  
Vol 6 ◽  
pp. 98-107 ◽  
Author(s):  
Amit Gupta ◽  
Vijay Kumar Sehrawat ◽  
Mamta Khosla

Author(s):  
Vijayalakshmi G V ◽  
Ajay J ◽  
Pavithra S ◽  
Pooja Eronisha A ◽  
Vanijayam K
