sEMG-Based Hand-Gesture Classification Using a Generative Flow Model

Sensors ◽  
2019 ◽  
Vol 19 (8) ◽  
pp. 1952 ◽  
Author(s):  
Wentao Sun ◽  
Huaxin Liu ◽  
Rongyu Tang ◽  
Yiran Lang ◽  
Jiping He ◽  
...  

Conventional pattern-recognition algorithms for surface electromyography (sEMG)-based hand-gesture classification have difficulty capturing the complexity and variability of sEMG. The deep structures of deep learning enable a method to learn high-level features of the data, improving both the accuracy and robustness of classification. However, the features learned through deep learning are incomprehensible, and this issue has precluded the use of deep learning in clinical applications where model comprehension is required. In this paper, a generative flow model (GFM), a recent flourishing branch of deep learning, is used with a softmax classifier for hand-gesture classification. The proposed approach achieves 63.86 ± 5.12% accuracy in classifying 53 different hand gestures from the NinaPro database 5. The distribution of all 53 hand gestures is modelled by the GFM, and each dimension of the feature learned by the GFM is comprehensible using the reverse flow of the GFM. Moreover, the feature appears to be related to muscle synergy to some extent.
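The key property the abstract relies on is that a flow model is invertible, so each latent dimension can be traced back through the reverse flow. The paper's actual GFM architecture is not reproduced here; the following is a minimal sketch, assuming a single affine coupling layer (a standard flow building block) feeding a softmax classifier head, with toy dimensions (8-D feature vectors, 53 gesture classes) chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def coupling_forward(x, w, b):
    """One affine coupling layer: the first half of x conditions a
    scale/shift applied to the second half (invertible by design)."""
    d = x.shape[1] // 2
    x1, x2 = x[:, :d], x[:, d:]
    s = np.tanh(x1 @ w + b)          # log-scale, conditioned on x1
    z2 = x2 * np.exp(s) + x1         # affine transform of x2
    return np.concatenate([x1, z2], axis=1)

def coupling_reverse(z, w, b):
    """Exact inverse: recover x from z using the same parameters,
    which is what makes each latent dimension inspectable."""
    d = z.shape[1] // 2
    z1, z2 = z[:, :d], z[:, d:]
    s = np.tanh(z1 @ w + b)
    x2 = (z2 - z1) * np.exp(-s)
    return np.concatenate([z1, x2], axis=1)

def softmax(logits):
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

# Toy "sEMG feature" batch: 4 samples, 8 dimensions, 53 gesture classes.
x = rng.normal(size=(4, 8))
w, b = rng.normal(size=(4, 4)), np.zeros(4)
z = coupling_forward(x, w, b)                  # latent features
probs = softmax(z @ rng.normal(size=(8, 53)))  # classifier head

# Invertibility check: the reverse flow recovers the input exactly.
assert np.allclose(coupling_reverse(z, w, b), x)
```

In a trained flow, one would perturb a single latent dimension of `z` and run `coupling_reverse` to see what input pattern that dimension encodes, which is the comprehensibility argument the abstract makes.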

2019 ◽  
Vol 2019 ◽  
pp. 1-7 ◽  
Author(s):  
Peng Liu ◽  
Xiangxiang Li ◽  
Haiting Cui ◽  
Shanshan Li ◽  
Yafei Yuan

Hand gesture recognition is an intuitive and effective way for humans to interact with a computer due to its high processing speed and recognition accuracy. This paper proposes a novel approach to identifying hand gestures in complex scenes using the Single-Shot MultiBox Detector (SSD) deep learning algorithm with a 19-layer neural network. A benchmark gesture database is used, and general hand gestures in complex scenes are chosen as the processing objects. A real-time hand gesture recognition system based on the SSD algorithm is constructed and tested. The experimental results show that the algorithm quickly identifies human hands and accurately distinguishes different types of gestures. Furthermore, the maximum accuracy is 99.2%, which is of significant importance for human-computer interaction applications.
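A detector like SSD emits many overlapping candidate boxes per hand, which are pruned with non-maximum suppression (NMS) before reporting detections. The abstract does not describe this stage, so the following is only a sketch of that standard post-processing step, with hypothetical box coordinates; the 19-layer network itself is not reproduced.

```python
import numpy as np

def iou(box, boxes):
    """Intersection-over-union of one box against many ([x1, y1, x2, y2])."""
    x1 = np.maximum(box[0], boxes[:, 0])
    y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2])
    y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area = lambda b: (b[..., 2] - b[..., 0]) * (b[..., 3] - b[..., 1])
    return inter / (area(box) + area(boxes) - inter)

def nms(boxes, scores, thresh=0.5):
    """Greedy non-maximum suppression: keep the highest-scoring box,
    drop candidates that overlap it too much, repeat on the rest."""
    order = np.argsort(scores)[::-1]
    keep = []
    while order.size:
        i = order[0]
        keep.append(i)
        rest = order[1:]
        order = rest[iou(boxes[i], boxes[rest]) < thresh]
    return keep

# Two overlapping detections of the same hand plus one distinct hand:
boxes = np.array([[0, 0, 10, 10], [1, 1, 11, 11], [50, 50, 60, 60]], float)
scores = np.array([0.9, 0.8, 0.7])
kept = nms(boxes, scores)   # the duplicate lower-scoring box is suppressed
```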


Sensors ◽  
2020 ◽  
Vol 20 (10) ◽  
pp. 2972
Author(s):  
Qinghua Gao ◽  
Shuo Jiang ◽  
Peter B. Shull

Hand gesture classification and finger angle estimation are both critical for intuitive human–computer interaction. However, most approaches study them in isolation. We thus propose a dual-output deep learning model to enable simultaneous hand gesture classification and finger angle estimation. Data augmentation and deep learning were used to detect spatial-temporal features via a wristband with ten modified barometric sensors. Ten subjects performed experimental testing by flexing/extending each finger at the metacarpophalangeal joint while the proposed model was used to classify each hand gesture and estimate continuous finger angles simultaneously. A data glove was worn to record ground-truth finger angles. Overall hand gesture classification accuracy was 97.5% and finger angle estimation R² was 0.922, both of which were significantly higher than existing shallow learning approaches used in isolation. The proposed method could be used in human–computer interaction applications and in control environments with both discrete and continuous variables.
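The dual-output idea is a shared feature extractor feeding two heads: a softmax head for the discrete gesture class and a regression head for the continuous finger angles. The paper's actual network (and its input encoding of the ten barometric channels) is not specified here; this is a minimal sketch with hypothetical layer sizes, using one shared hidden layer so both outputs come from a single forward pass.

```python
import numpy as np

rng = np.random.default_rng(1)

def relu(a):
    return np.maximum(a, 0.0)

def softmax(logits):
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

# Hypothetical sizes: 10 barometric channels in, 11 gesture classes
# (e.g. rest plus ten single-finger gestures), 10 finger-angle outputs.
n_in, n_hidden, n_classes, n_angles = 10, 32, 11, 10

W1 = rng.normal(scale=0.1, size=(n_in, n_hidden))       # shared trunk
Wc = rng.normal(scale=0.1, size=(n_hidden, n_classes))  # classification head
Wa = rng.normal(scale=0.1, size=(n_hidden, n_angles))   # angle-regression head

def dual_output(x):
    """Shared features feed two heads: discrete gesture probabilities
    and continuous finger-angle estimates, computed simultaneously."""
    h = relu(x @ W1)
    return softmax(h @ Wc), h @ Wa

x = rng.normal(size=(5, n_in))          # a batch of 5 sensor frames
gesture_probs, angles = dual_output(x)  # (5, 11) and (5, 10)
```

Training such a model typically sums a cross-entropy loss on the classification head and a mean-squared-error loss on the angle head, so the shared trunk learns features useful for both tasks.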


2021 ◽  
Vol 10 (4) ◽  
pp. 2223-2230
Author(s):  
Aseel Ghazi Mahmoud ◽  
Ahmed Mudheher Hasan ◽  
Nadia Moqbel Hassan

Recently, the recognition of human hand gestures has become a valuable technology for various applications like sign language recognition, virtual games, robotics control, video surveillance, and home automation. Owing to the recent development of deep learning and its excellent performance, deep learning-based hand gesture recognition systems can provide promising results. However, accurate recognition of hand gestures remains a substantial challenge for most recently existing recognition systems. In this paper, a convolutional neural network (CNN) framework with multiple layers for accurate, effective, and less complex human hand gesture recognition has been proposed. Since infrared hand-gesture images can provide accurate gesture information in low-illumination environments, the proposed system is tested and evaluated on a near-infrared hand gesture database that includes ten gesture poses. Extensive experiments prove that the proposed system provides excellent results in accuracy, precision, sensitivity (recall), and F1-score. Furthermore, a comparison with recently existing systems is reported.


2021 ◽  
Vol 70 (11) ◽  
pp. 1714-1721
Author(s):  
Ik-Jin Kim ◽  
Su-Yeol Kim ◽  
Yong-Chan Lee ◽  
Yun-Jung Lee

Author(s):  
Sukhendra Singh ◽  
G. N. Rathna ◽  
Vivek Singhal

Introduction: Sign language is the only way for speech-impaired people to communicate, but it is not understood by most hearing people, which creates a communication barrier. In this paper, we present a solution that captures hand gestures with a Kinect camera and classifies each gesture into its correct symbol. Method: We used a Kinect camera rather than an ordinary web camera because an ordinary camera does not capture the 3D orientation or depth of an image, whereas the Kinect captures 3D images, making classification more accurate. Result: The Kinect camera produces different images for the hand gestures '2' and 'V', and similarly for '1' and 'I', whereas a normal web camera cannot distinguish between them. We used hand gestures from Indian sign language; our dataset had 46,339 RGB images and 46,339 depth images. 80% of the images were used for training and the remaining 20% for testing. In total, 36 hand gestures were considered: 26 for the alphabets A-Z and 10 for the digits 0-9. Conclusion: Along with a real-time implementation, we also compare the performance of various machine learning models and find that a CNN on depth images gives the most accurate performance. All results were obtained on a PYNQ Z2 board.
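The 80/20 split described above can be sketched directly from the stated dataset size; the paper's exact splitting procedure (e.g. per-class stratification) is not specified, so this is a minimal shuffled-index split over the 46,339 images.

```python
import numpy as np

n_images = 46339                    # RGB images stated in the abstract
rng = np.random.default_rng(42)
idx = rng.permutation(n_images)     # shuffle indices before splitting

n_train = int(0.8 * n_images)       # 80% for training: 37,071 images
train_idx, test_idx = idx[:n_train], idx[n_train:]   # 9,268 test images
```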


Author(s):  
Sruthy Skaria ◽  
Da Huang ◽  
Akram Al-Hourani ◽  
Robin J. Evans ◽  
Margaret Lech
