Learn the Temporal-Spatial Feature of sEMG via Dual-Flow Network

2019 ◽  
Vol 16 (04) ◽  
pp. 1941004 ◽  
Author(s):  
Runze Tong ◽  
Yue Zhang ◽  
Hongfeng Chen ◽  
Honghai Liu

Surface electromyography (sEMG) signals are widely used in human–machine interaction, offering a more natural way to control external devices. However, because sEMG is unstable, it is difficult to extract consistent sEMG patterns for motion recognition. This paper proposes a dual-flow network that extracts the temporal-spatial features of sEMG for gesture recognition. The proposed model uses a convolutional neural network (CNN) and a long short-term memory (LSTM) network in parallel to extract the spatial and temporal features of sEMG, respectively. The features extracted by the CNN and LSTM branches are merged into a temporal-spatial feature, forming an end-to-end network. A dataset was constructed to test the network's performance. On this dataset, the average recognition accuracy of the dual-flow model reached 78.31%, an improvement of 6.69% over the baseline CNN (71.67%). In addition, NinaPro DB1 was used to evaluate the proposed method, yielding 1.86% higher recognition accuracy than the baseline CNN classifier. These results suggest that the proposed dual-flow network extracts stable sEMG features for gesture recognition and can be applied in practical settings.
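
Below is a minimal sketch of the dual-flow idea described above: a CNN branch for spatial features and an LSTM branch for temporal features, concatenated before the classifier. The channel count, window length, and layer sizes are illustrative assumptions, not the authors' published configuration.

```python
# A minimal sketch of a dual-flow (CNN + LSTM) sEMG classifier, assuming
# 8-channel sEMG windows of 200 samples and 10 gesture classes.
import torch
import torch.nn as nn

class DualFlowNet(nn.Module):
    def __init__(self, n_channels=8, n_classes=10):
        super().__init__()
        # Spatial flow: 1-D convolutions across the electrode channels.
        self.cnn = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),          # -> (batch, 32, 1)
        )
        # Temporal flow: LSTM over the raw time steps.
        self.lstm = nn.LSTM(input_size=n_channels, hidden_size=64,
                            batch_first=True)
        self.head = nn.Linear(32 + 64, n_classes)

    def forward(self, x):                     # x: (batch, channels, time)
        spatial = self.cnn(x).squeeze(-1)     # (batch, 32)
        _, (h, _) = self.lstm(x.transpose(1, 2))
        temporal = h[-1]                      # (batch, 64)
        fused = torch.cat([spatial, temporal], dim=1)
        return self.head(fused)               # gesture logits

logits = DualFlowNet()(torch.randn(4, 8, 200))  # (4, 10)
```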

2021 ◽  
pp. 107754632110093
Author(s):  
Zhenying Gong ◽  
Tao Wang ◽  
Zhen Zhao ◽  
Xin Liu ◽  
Yina Guo ◽  
...  

Motor-based brain–computer interfaces are widely used in exoskeleton rehabilitation for patients with muscle weakness and, by combining movements with electroencephalography signals, in enhancing the experience of somatosensory game users. However, the recognition algorithms in traditional motor-based brain–computer interfaces suffer from problems such as "brain–computer interface blindness" (recognition accuracy below 70%) and "one person, one model." In this study, a regularized long short-term memory algorithm and a hardware platform for gesture recognition with a motor-based brain–computer interface are proposed. Experimental results show that the gesture recognition accuracy of the motor brain–computer interface reaches 95.69%, which is significantly better than that of other algorithms. The proposed model enhances the applicability and generalization ability of the brain–computer interface, and its practicality and effectiveness are verified.
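
As a rough illustration of an LSTM gesture decoder with explicit regularization, the sketch below uses dropout and L2 weight decay; the paper's specific regularization scheme and signal dimensions are not detailed in the abstract, so all sizes and regularizers here are assumptions.

```python
# A minimal sketch of a regularized LSTM classifier for EEG gesture decoding,
# assuming 32-channel epochs of 250 samples and 4 gesture classes. Dropout and
# L2 weight decay are used purely as illustrative regularizers.
import torch
import torch.nn as nn

class RegularizedLSTM(nn.Module):
    def __init__(self, n_channels=32, n_classes=4):
        super().__init__()
        self.lstm = nn.LSTM(n_channels, 128, num_layers=2,
                            batch_first=True, dropout=0.5)
        self.drop = nn.Dropout(0.5)
        self.fc = nn.Linear(128, n_classes)

    def forward(self, x):                 # x: (batch, time, channels)
        _, (h, _) = self.lstm(x)
        return self.fc(self.drop(h[-1]))

model = RegularizedLSTM()
# L2 weight decay as the explicit regularization term in the optimizer.
optim = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=1e-4)
logits = model(torch.randn(8, 250, 32))  # (8, 4)
```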


2018 ◽  
Vol 14 (7) ◽  
pp. 155014771879075 ◽  
Author(s):  
Kiwon Rhee ◽  
Hyun-Chool Shin

In electromyogram-based hand gesture recognition, accuracy can degrade in practical applications for various reasons, such as electrode positioning bias and differences between subjects. The change in electromyogram signals across different arm postures, even for identical hand gestures, is another important issue. We propose an electromyogram-based hand gesture recognition technique that is robust to diverse arm postures. The proposed method uses accelerometer and electromyogram signals simultaneously to recognize hand gestures correctly across various arm postures. For gesture recognition, the electromyogram signals are statistically modeled with the arm posture taken into account. In the experiments, we compared cases that accounted for arm posture with cases that disregarded it. When varied arm postures were disregarded, the recognition accuracy for hand gestures was 54.1%, whereas the proposed method achieved an average accuracy of 85.7%, an improvement of 31.6 percentage points. Using accelerometer and electromyogram signals together compensated for the effect of arm posture on the electromyogram signals and therefore improved hand gesture recognition accuracy.
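
A minimal sketch of the posture-aware idea, under strong assumptions: the accelerometer features pick the arm posture by nearest centroid, and gestures are then scored under per-posture diagonal-Gaussian EMG models. The feature choices and model form are illustrative, not the authors' exact statistical model.

```python
# Hypothetical posture-conditioned gesture classification sketch.
import numpy as np

def fit_models(emg_feats, acc_feats, gestures, postures):
    """Fit mean/variance per (posture, gesture) pair and posture centroids."""
    models, centroids = {}, {}
    for p in np.unique(postures):
        centroids[p] = acc_feats[postures == p].mean(axis=0)
        for g in np.unique(gestures):
            sel = (postures == p) & (gestures == g)
            models[(p, g)] = (emg_feats[sel].mean(axis=0),
                              emg_feats[sel].var(axis=0) + 1e-6)
    return models, centroids

def classify(emg_x, acc_x, models, centroids):
    # 1. Estimate arm posture from the accelerometer feature vector.
    p = min(centroids, key=lambda k: np.linalg.norm(acc_x - centroids[k]))
    # 2. Score gestures under that posture's Gaussian EMG model.
    def loglik(mu, var):
        return -0.5 * np.sum((emg_x - mu) ** 2 / var + np.log(var))
    scores = {g: loglik(*models[(q, g)]) for (q, g) in models if q == p}
    return p, max(scores, key=scores.get)
```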


2021 ◽  
Vol 11 (15) ◽  
pp. 6824
Author(s):  
Jin-Su Kim ◽  
Min-Gu Kim ◽  
Sung-Bum Pan

Electromyogram (EMG) signals cannot be forged, and because their waveform varies with the performed gesture, the registered template can be changed, which is an advantage for biometrics. In this paper, a two-step biometrics method using EMG signals based on a convolutional neural network–long short-term memory (CNN-LSTM) network is proposed. After preprocessing the EMG signals, time-domain features and an LSTM network were used to check whether the gesture matched, and single biometrics was performed only if it did. In single biometrics, the EMG signals were converted into a two-dimensional spectrogram, and training and classification were performed with the CNN-LSTM network. The gesture recognition and single biometrics decisions were fused with an AND rule. The experiments used Ninapro EMG data to evaluate the proposed two-step method, which achieved 83.91% gesture recognition performance and 99.17% single biometrics performance. In addition, the false acceptance rate (FAR) was reduced by 64.7% through the data fusion.
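
The two-step decision flow can be sketched as follows. The `gesture_model` and `identity_model` callables are hypothetical placeholders standing in for the LSTM and CNN-LSTM classifiers; only the spectrogram conversion and the AND fusion logic are concrete here.

```python
# Sketch of the two-step verification: gesture check, then identity check on a
# spectrogram, fused with an AND rule. Sampling rate and STFT sizes are assumed.
import numpy as np
from scipy.signal import spectrogram

def emg_to_spectrogram(emg, fs=2000):
    """Convert a 1-D EMG channel into a 2-D time-frequency image."""
    _, _, sxx = spectrogram(emg, fs=fs, nperseg=128, noverlap=64)
    return np.log1p(sxx)                      # (freq_bins, time_frames)

def two_step_verify(emg, gesture_model, identity_model,
                    claimed_gesture, claimed_identity):
    # Step 1: does the performed gesture match the claimed one?
    if gesture_model(emg) != claimed_gesture:
        return False                          # reject early
    # Step 2: single biometrics on the spectrogram representation.
    identity_ok = identity_model(emg_to_spectrogram(emg)) == claimed_identity
    # AND fusion: accept only if both checks pass (lowers the FAR).
    return identity_ok
```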


2015 ◽  
Vol 40 (2) ◽  
pp. 191-195 ◽  
Author(s):  
Łukasz Brocki ◽  
Krzysztof Marasek

Abstract This paper describes a Deep Belief Neural Network (DBNN) and Bidirectional Long Short-Term Memory (BLSTM) hybrid used as an acoustic model for speech recognition. Many independent researchers have demonstrated that DBNNs outperform other known machine learning frameworks in speech recognition accuracy; their advantage comes from being deep learning networks. However, a trained DBNN is simply a feed-forward network with no internal memory, unlike Recurrent Neural Networks (RNNs), which are Turing complete and do possess internal memory, allowing them to exploit longer context. In this paper, an experiment is performed in which a DBNN is hybridized with an advanced bidirectional RNN that processes its output. Results show that using the new DBNN-BLSTM hybrid as the acoustic model for Large Vocabulary Continuous Speech Recognition (LVCSR) increases word recognition accuracy. However, the new model has many parameters and may in some cases suffer performance issues in real-time applications.
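
A minimal sketch of the hybrid idea, with a plain feed-forward stack standing in for the pretrained DBNN and a BLSTM re-scoring its frame-level output; feature dimensions and layer sizes are illustrative, not the paper's configuration.

```python
# Sketch of a feed-forward -> bidirectional LSTM acoustic model producing
# per-frame acoustic state logits; no RBM pretraining is shown here.
import torch
import torch.nn as nn

class DNNBLSTM(nn.Module):
    def __init__(self, n_feats=40, n_states=120):
        super().__init__()
        self.dnn = nn.Sequential(
            nn.Linear(n_feats, 512), nn.ReLU(),
            nn.Linear(512, 512), nn.ReLU(),
        )
        self.blstm = nn.LSTM(512, 256, batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * 256, n_states)

    def forward(self, x):          # x: (batch, frames, features)
        h = self.dnn(x)            # frame-wise feed-forward encoding
        h, _ = self.blstm(h)       # add bidirectional temporal context
        return self.out(h)         # per-frame acoustic state logits

logits = DNNBLSTM()(torch.randn(2, 300, 40))  # (2, 300, 120)
```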


2014 ◽  
Vol 926-930 ◽  
pp. 1534-1537
Author(s):  
Qing Zhu ◽  
Tong Zhong Liu

Kinect, as a somatosensory peripheral, not only changes users' entertainment experience but also provides a new way for users to interact with machines. To keep the interface easy to use while improving gesture recognition accuracy, this paper presents a method in which the device's sensors capture information on more than 20 joint points, and a specific pose is distinguished by the Euclidean distances and angles between the joint points. The results show that the recognition rate is higher when this method is used to identify poses, and an extensible action library is used to meet the need to determine different poses at any time. On this basis, the paper explores possible applications of the approach in various fields.
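
A small sketch of pose discrimination from joint coordinates using Euclidean distances and joint angles, as the abstract describes; the joint names and the example rule are hypothetical placeholders.

```python
# Distance and angle features from 3-D joint coordinates, plus a toy pose rule.
import numpy as np

def distance(a, b):
    return float(np.linalg.norm(np.asarray(a) - np.asarray(b)))

def angle(a, b, c):
    """Angle at joint b (degrees) formed by segments b->a and b->c."""
    v1 = np.asarray(a) - np.asarray(b)
    v2 = np.asarray(c) - np.asarray(b)
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

def is_raised_arm(joints):
    """Example rule: elbow nearly straight and hand above the shoulder."""
    elbow_angle = angle(joints["shoulder"], joints["elbow"], joints["hand"])
    return elbow_angle > 150 and joints["hand"][1] > joints["shoulder"][1]

pose = {"shoulder": (0.0, 1.4, 0.0), "elbow": (0.1, 1.6, 0.0),
        "hand": (0.2, 1.9, 0.0)}
print(is_raised_arm(pose))  # True for this raised-arm example
```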

