Surface-Electromyography-Based Gesture Recognition Using a Multistream Fusion Strategy

IEEE Access ◽  
2021 ◽  
pp. 1-1
Author(s):  
Zhouping Chen ◽  
Jianyu Yang ◽  
Hualong Xie

Sensors ◽
2020 ◽  
Vol 20 (3) ◽  
pp. 672 ◽  
Author(s):  
Lin Chen ◽  
Jianting Fu ◽  
Yuheng Wu ◽  
Haochen Li ◽  
Bin Zheng

By training a deep neural network model, the hidden features in surface electromyography (sEMG) signals can be extracted, and the motion intention of a human can be predicted from the sEMG analysis. However, the models recently proposed by researchers often have a large number of parameters. Therefore, we designed a compact Convolutional Neural Network (CNN) model, which not only improves the classification accuracy but also reduces the number of parameters in the model. Our proposed model was validated on the Ninapro DB5 dataset and the Myo dataset, achieving good gesture recognition accuracy on both.
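The abstract does not give the architecture, but the parameter-budget argument can be illustrated with a small accounting sketch. The layer sizes below (a two-layer conv stack over an 8-channel, 50-sample sEMG window, 10 gesture classes, and a dense baseline with a 128-unit hidden layer) are hypothetical, chosen only to show why convolutional weight sharing keeps a model compact compared with a fully connected network over the same input.

```python
def conv2d_params(in_ch, out_ch, kernel_h, kernel_w, bias=True):
    """Parameter count of a 2D convolution layer: one kernel per
    output channel, shared across all spatial positions."""
    return (in_ch * kernel_h * kernel_w + (1 if bias else 0)) * out_ch

def dense_params(in_features, out_features, bias=True):
    """Parameter count of a fully connected layer."""
    return (in_features + (1 if bias else 0)) * out_features

# Hypothetical compact CNN: two small conv layers, then global average
# pooling and a dense classifier over 10 gestures.
compact = (
    conv2d_params(1, 16, 3, 3)       # 160
    + conv2d_params(16, 32, 3, 3)    # 4,640
    + dense_params(32, 10)           # 330 (after global average pooling)
)

# Fully connected baseline over the flattened 8 x 50 input with one
# 128-unit hidden layer.
dense_baseline = dense_params(8 * 50, 128) + dense_params(128, 10)

print(compact, dense_baseline)  # 5130 vs. 52618
```

Even at this toy scale the conv stack needs roughly a tenth of the weights, which is the effect the authors exploit to shrink their model.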


Author(s):  
Yangwei Cheng ◽  
Gongfa Li ◽  
Mingchao Yu ◽  
Du Jiang ◽  
Juntong Yun ◽  
...  

2021 ◽  
Vol 2021 ◽  
pp. 1-13
Author(s):  
Wentao Wei ◽  
Hong Hong ◽  
Xiaoli Wu

Hand gesture recognition based on surface electromyography (sEMG) plays an important role in the field of biomedical and rehabilitation engineering. Recently, there has been remarkable progress in gesture recognition using high-density surface electromyography (HD-sEMG) recorded by sensor arrays. However, robust gesture recognition using multichannel sEMG recorded by sparsely placed sensors remains a major challenge. In the context of multiview deep learning, this paper presents a hierarchical view pooling network (HVPN) framework, which improves multichannel sEMG-based gesture recognition by learning not only view-specific deep features but also view-shared deep features from hierarchically pooled multiview feature spaces. Extensive intrasubject and intersubject evaluations were conducted on the large-scale noninvasive adaptive prosthetics (NinaPro) database to comprehensively evaluate our proposed HVPN framework. Results showed that, when using 200 ms sliding windows to segment the data, the proposed HVPN framework achieved intrasubject gesture recognition accuracies of 88.4%, 85.8%, 68.2%, 72.9%, and 90.3% and intersubject gesture recognition accuracies of 84.9%, 82.0%, 65.6%, 70.2%, and 88.9% on the first five subdatabases of NinaPro, respectively, outperforming state-of-the-art methods.
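The 200 ms sliding-window segmentation mentioned in the results can be sketched as follows. The abstract specifies only the window length, so the sampling rate (2 kHz) and the 100 ms stride used here are assumptions for illustration.

```python
def sliding_windows(signal, fs, win_ms=200, step_ms=100):
    """Segment a multichannel sEMG recording (a sequence of per-sample
    channel tuples) into overlapping fixed-length analysis windows.

    fs: sampling rate in Hz; win_ms / step_ms: window length and stride
    in milliseconds. Trailing samples that do not fill a window are dropped.
    """
    win = int(fs * win_ms / 1000)
    step = int(fs * step_ms / 1000)
    return [signal[i:i + win]
            for i in range(0, len(signal) - win + 1, step)]

# Example: 2 s of 8-channel data at an assumed 2 kHz sampling rate.
fs = 2000
signal = [(0.0,) * 8 for _ in range(2 * fs)]
windows = sliding_windows(signal, fs)
print(len(windows), len(windows[0]))  # 19 windows of 400 samples each
```

Each resulting 400-sample window would then be fed to the classifier, so the reported accuracies are per-window decisions at this temporal resolution.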
