A Hierarchical View Pooling Network for Multichannel Surface Electromyography-Based Gesture Recognition

2021 ◽  
Vol 2021 ◽  
pp. 1-13
Author(s):  
Wentao Wei ◽  
Hong Hong ◽  
Xiaoli Wu

Hand gesture recognition based on surface electromyography (sEMG) plays an important role in biomedical and rehabilitation engineering. Recently, remarkable progress has been made in gesture recognition using high-density surface electromyography (HD-sEMG) recorded by sensor arrays. However, robust gesture recognition using multichannel sEMG recorded by sparsely placed sensors remains a major challenge. In the context of multiview deep learning, this paper presents a hierarchical view pooling network (HVPN) framework, which improves multichannel sEMG-based gesture recognition by learning not only view-specific deep features but also view-shared deep features from hierarchically pooled multiview feature spaces. Extensive intrasubject and intersubject evaluations were conducted on the large-scale noninvasive adaptive prosthetics (NinaPro) database to comprehensively evaluate the proposed HVPN framework. Results showed that, when 200 ms sliding windows were used to segment the data, the proposed HVPN framework achieved intrasubject gesture recognition accuracies of 88.4%, 85.8%, 68.2%, 72.9%, and 90.3% and intersubject gesture recognition accuracies of 84.9%, 82.0%, 65.6%, 70.2%, and 88.9% on the first five subdatabases of NinaPro, respectively, outperforming state-of-the-art methods.
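
The abstract describes learning both view-specific and view-shared features from pooled multiview feature spaces. As a rough illustration of that idea (not the authors' HVPN architecture), the PyTorch sketch below encodes each view separately, pools across the view axis to obtain shared features, and concatenates both for classification; all layer sizes, the max-pooling choice, and the 52-class output are assumptions.

```python
# Minimal PyTorch sketch of the multiview-pooling idea: view-specific encoders
# plus a pooled, view-shared representation. Sizes are illustrative assumptions.
import torch
import torch.nn as nn

class MultiViewPoolingNet(nn.Module):
    def __init__(self, n_views=3, in_dim=128, feat_dim=64, n_classes=52):
        super().__init__()
        # one small encoder per "view" (e.g., a distinct sEMG feature set)
        self.view_encoders = nn.ModuleList(
            [nn.Sequential(nn.Linear(in_dim, feat_dim), nn.ReLU())
             for _ in range(n_views)]
        )
        # classifier consumes view-specific + view-shared (pooled) features
        self.classifier = nn.Linear(feat_dim * (n_views + 1), n_classes)

    def forward(self, views):
        # views: list of tensors, each (batch, in_dim)
        specific = [enc(v) for enc, v in zip(self.view_encoders, views)]
        stacked = torch.stack(specific, dim=1)     # (batch, n_views, feat_dim)
        shared = stacked.max(dim=1).values         # pool across the view axis
        fused = torch.cat(specific + [shared], dim=1)
        return self.classifier(fused)

logits = MultiViewPoolingNet()([torch.randn(8, 128) for _ in range(3)])
print(logits.shape)  # (8, 52)
```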


2021 ◽  
Vol 15 ◽  
Author(s):  
Jianting Fu ◽  
Shizhou Cao ◽  
Linqin Cai ◽  
Lechan Yang

Finger gesture recognition (FGR) plays a crucial role in applications such as artificial limb control and human-computer interaction. Currently, the most common approaches to FGR are vision-based, voice-based, and surface electromyography (EMG)-based. Among them, surface EMG-based FGR is popular and successful because surface EMG is a cumulative bioelectric signal recorded from the surface of the skin that can accurately and intuitively represent the force of the fingers. However, existing surface EMG-based methods still cannot fully satisfy the recognition accuracy required for artificial limb control, owing to the lack of high-precision sensors and highly accurate recognition models. To address this issue, this study proposes a novel FGR model that combines sensing and classification of surface EMG signals (SC-FGR). In the proposed SC-FGR model, high-precision wireless surface EMG sensors are first developed for acquiring multichannel surface EMG signals from the forearm; the resolution is 16 bits, the sampling rate is 2 kHz, the common-mode rejection ratio (CMRR) is less than 70 dB, and the short-circuit noise (SCN) is less than 1.5 μV. In addition, a convolutional neural network (CNN)-based classification algorithm is proposed to achieve FGR from the acquired surface EMG signals. The CNN is trained on spectrum maps obtained from the time-domain surface EMG by continuous wavelet transform (CWT). To evaluate the proposed SC-FGR model, we compared it with seven state-of-the-art models. The experimental results demonstrate that SC-FGR achieves 97.5% recognition accuracy on eight kinds of finger gestures with five subjects, much higher than that of comparable models.
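
The pipeline above trains a CNN on spectrum maps produced by the continuous wavelet transform. The snippet below is a minimal sketch of that preprocessing step using PyWavelets; the 'morl' wavelet, the 64 scales, and the placeholder 200 ms window are assumptions for illustration rather than details from the paper.

```python
# Sketch: map a time-domain sEMG window to a time-frequency scalogram via CWT,
# which would then feed a CNN classifier. Wavelet and scale choices are assumed.
import numpy as np
import pywt

fs = 2000                              # 2 kHz sampling rate, as in the abstract
window = np.random.randn(fs // 5)      # one 200 ms sEMG window (placeholder data)

scales = np.arange(1, 65)              # 64 scales -> 64 "frequency" rows
coeffs, freqs = pywt.cwt(window, scales, 'morl', sampling_period=1 / fs)
scalogram = np.abs(coeffs)             # (64, 400) map handed to the CNN

print(scalogram.shape, freqs[:3])
```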





IEEE Access ◽  
2021 ◽  
pp. 1-1
Author(s):  
Zhouping Chen ◽  
Jianyu Yang ◽  
Hualong Xie


Sensors ◽  
2020 ◽  
Vol 21 (1) ◽  
pp. 222
Author(s):  
Tao Li ◽  
Chenqi Shi ◽  
Peihao Li ◽  
Pengpeng Chen

In this paper, we propose a novel gesture recognition system based on a smartphone. Owing to the limitations of Channel State Information (CSI) extraction equipment, existing WiFi-based gesture recognition is restricted to microcomputer terminals equipped with Intel 5300 or Atheros 9580 network cards, so accurate gesture recognition can only be performed in an area relatively fixed to the transceiver link. The gesture recognition system proposed here breaks this limitation. First, we use nexmon firmware to obtain 256 CSI subcarriers from the bottom layer of the smartphone in IEEE 802.11ac mode over an 80 MHz bandwidth, giving the gesture recognition system mobility. Second, we adopt a cross-correlation method to integrate the extracted CSI features in the time and frequency domains to reduce the influence of changes in smartphone location. Third, we use an improved DTW algorithm to classify and recognize gestures. We conducted extensive experiments to verify the system's recognition accuracy at different distances, in different directions, and in different environments. The results show that the system can effectively improve recognition accuracy.
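
The recognition step matches a gesture's CSI profile against stored templates with a DTW-based classifier. The sketch below implements classic DTW plus nearest-template classification as an illustration; it is not the paper's improved DTW variant, and the template dictionary and test signal are placeholders.

```python
# Hedged sketch: classify a 1-D CSI amplitude profile by DTW distance to templates.
import numpy as np

def dtw_distance(a, b):
    """Classic O(len(a)*len(b)) dynamic time warping on 1-D sequences."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def classify(sample, templates):
    # templates: {gesture_label: reference CSI amplitude sequence}
    return min(templates, key=lambda g: dtw_distance(sample, templates[g]))

templates = {"push": np.sin(np.linspace(0, 3, 80)),
             "swipe": np.cos(np.linspace(0, 3, 90))}
test = np.sin(np.linspace(0, 3, 100)) + 0.05 * np.random.randn(100)
print(classify(test, templates))   # expected: "push"
```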



2021 ◽  
Vol 95 ◽  
pp. 103456
Author(s):  
Tiwana Varrecchia ◽  
Alberto Ranavolo ◽  
Silvia Conforto ◽  
Alessandro Marco De Nunzio ◽  
Michail Arvanitidis ◽  
...  


Electronics ◽  
2021 ◽  
Vol 10 (5) ◽  
pp. 534
Author(s):  
Huogen Wang

The paper proposes an effective continuous gesture recognition method that includes two modules: segmentation and recognition. In the segmentation module, video frames are divided into gesture frames and transitional frames using hand motion and appearance information, and continuous gesture sequences are segmented into isolated sequences. In the recognition module, the method exploits the spatiotemporal information embedded in RGB and depth sequences. For the RGB modality, it adopts Convolutional Long Short-Term Memory networks to learn long-term spatiotemporal features from the short-term spatiotemporal features produced by a 3D convolutional neural network. For the depth modality, it converts each sequence into Dynamic Images and Motion Dynamic Images through weighted rank pooling and feeds them into Convolutional Neural Networks, respectively. The method has been evaluated on both the ChaLearn LAP Large-scale Continuous Gesture Dataset and the Montalbano Gesture Dataset and achieved state-of-the-art performance.
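
For the depth modality, the method summarizes a frame sequence as a Dynamic Image via weighted rank pooling. The sketch below uses the widely cited approximate rank-pooling weights (a linear ramp over time) to collapse a placeholder depth clip into a single image; that weight formula is the standard approximation and is assumed here, not taken from this paper.

```python
# Sketch: collapse a (T, H, W) frame sequence into one "dynamic image" with
# approximate rank-pooling weights that grow over time.
import numpy as np

def dynamic_image(frames):
    T = frames.shape[0]
    t = np.arange(1, T + 1)
    alpha = 2 * t - T - 1                              # approximate rank-pooling weights
    return np.tensordot(alpha, frames, axes=(0, 0))    # weighted sum -> (H, W)

depth_seq = np.random.rand(16, 64, 64)                 # placeholder 16-frame depth clip
di = dynamic_image(depth_seq)
print(di.shape)  # (64, 64), the image fed to the CNN
```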



2018 ◽  
Vol 14 (7) ◽  
pp. 155014771879075 ◽  
Author(s):  
Kiwon Rhee ◽  
Hyun-Chool Shin

In electromyogram-based hand gesture recognition, recognition accuracy may degrade in practical applications for various reasons, such as electrode positioning bias and differences between subjects. In addition, the change in electromyogram signals caused by different arm postures, even for identical hand gestures, is an important issue. We propose an electromyogram-based hand gesture recognition technique that is robust to diverse arm postures. The proposed method uses accelerometer and electromyogram signals simultaneously to recognize hand gestures correctly across various arm postures. For recognition, the electromyogram signals are statistically modeled with the arm postures taken into account. In the experiments, we compared cases that took arm postures into account with cases that disregarded them. When varied arm postures were disregarded, the recognition accuracy for hand gestures was 54.1%, whereas the proposed method achieved an average recognition accuracy of 85.7%, an improvement of 31.6 percentage points. Using accelerometer and electromyogram signals simultaneously compensated for the effect of different arm postures on the electromyogram signals and therefore improved the recognition accuracy of hand gestures.
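
The method statistically models EMG signals per arm posture and uses the accelerometer to resolve posture. The sketch below is a loose stand-in for that idea: a nearest-centroid posture detector on accelerometer vectors selects one of several posture-specific Gaussian naive-Bayes gesture classifiers. The data, the classifier choice, and the posture detector are illustrative assumptions, not the authors' statistical model.

```python
# Hedged sketch: posture-aware gesture recognition with one EMG model per posture.
import numpy as np
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(0)
n_postures, n_gestures, feat_dim = 3, 4, 8

models, posture_centroids = [], []
for p in range(n_postures):
    X = rng.normal(loc=p, size=(200, feat_dim))           # placeholder EMG features
    y = rng.integers(0, n_gestures, size=200)             # placeholder gesture labels
    models.append(GaussianNB().fit(X, y))                 # posture-specific classifier
    posture_centroids.append(rng.normal(loc=p, size=3))   # placeholder mean accel vector
posture_centroids = np.array(posture_centroids)

def recognize(emg_feat, accel):
    # pick the posture whose accelerometer centroid is closest, then classify EMG
    posture = np.argmin(np.linalg.norm(posture_centroids - accel, axis=1))
    return models[posture].predict(emg_feat.reshape(1, -1))[0]

print(recognize(rng.normal(size=feat_dim), rng.normal(size=3)))
```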



2016 ◽  
Author(s):  
George Dimitriadis ◽  
Joana Neto ◽  
Adam R. Kampff

Electrophysiology is entering the era of 'Big Data'. Multiple probes, each with hundreds to thousands of individual electrodes, are now capable of simultaneously recording from many brain regions. The major challenge confronting these new technologies is transforming the raw data into physiologically meaningful signals, i.e., single-unit spikes. Sorting the spike events of individual neurons from a spatiotemporally dense sampling of the extracellular electric field is a problem that has attracted much attention [22, 23] but is still far from solved. Current methods still rely on human input and thus become unfeasible as the size of the data sets grows exponentially. Here we introduce the t-distributed stochastic neighbor embedding (t-SNE) dimensionality reduction method [27] as a visualization tool in the spike sorting process. t-SNE embeds the n-dimensional extracellular spikes (n = number of features by which each spike is decomposed) into a low (usually two-) dimensional space. We show that such embeddings, even starting from different feature spaces, form obvious clusters of spikes that can be easily visualized and manually delineated with a high degree of precision. We propose that these clusters represent single units and test this assertion by applying our algorithm to labeled data sets from both hybrid [23] and paired juxtacellular/extracellular recordings [15]. We have released a graphical user interface (GUI) written in Python as a tool for the manual clustering of the t-SNE-embedded spikes and for an informed overview and fast manual curation of results from other clustering algorithms. Furthermore, the generated visualizations offer evidence in favor of probes with higher density and smaller electrodes. They also graphically demonstrate the diverse nature of the sorting problem when spikes are recorded with different methods and arise from regions with different background spiking statistics.
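
The core visualization step embeds per-spike feature vectors into two dimensions with t-SNE so clusters can be delineated by hand. The sketch below shows that step with scikit-learn on synthetic "spikes"; the PCA feature extraction, the perplexity value, and the data are illustrative assumptions, not the authors' feature pipeline or GUI.

```python
# Minimal sketch: embed n-dimensional spike features in 2-D with t-SNE for
# manual cluster delineation. Data and parameters are placeholders.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE

rng = np.random.default_rng(1)
# placeholder "spikes": 3 units, 500 spikes each, 96 samples per waveform
spikes = np.concatenate([rng.normal(loc=i, scale=0.3, size=(500, 96))
                         for i in range(3)])

features = PCA(n_components=10).fit_transform(spikes)     # per-spike features
embedding = TSNE(n_components=2, perplexity=30).fit_transform(features)
print(embedding.shape)   # (1500, 2) points to scatter-plot and hand-cluster
```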


