Finger Gesture Recognition Using Sensing and Classification of Surface Electromyography Signals With High-Precision Wireless Surface Electromyography Sensors

2021 ◽  
Vol 15 ◽  
Author(s):  
Jianting Fu ◽  
Shizhou Cao ◽  
Linqin Cai ◽  
Lechan Yang

Finger gesture recognition (FGR) plays a crucial role in applications such as artificial limb control and human-computer interaction. Currently, the most common FGR methods are visual-based, voice-based, and surface electromyography (EMG)-based. Among them, surface EMG-based FGR is popular and successful because surface EMG is a cumulative bioelectric signal recorded from the surface of the skin that can accurately and intuitively represent the force of the fingers. However, existing surface EMG-based methods still cannot achieve the recognition accuracy required for artificial limb control, owing to the lack of high-precision sensors and highly accurate recognition models. To address this issue, this study proposes a novel FGR model that consists of sensing and classification of surface EMG signals (SC-FGR). In the proposed SC-FGR model, high-precision wireless surface EMG sensors are first developed to acquire multichannel surface EMG signals from the forearm. Each sensor has a resolution of 16 bits, a sampling rate of 2 kHz, a common-mode rejection ratio (CMRR) of no less than 70 dB, and short-circuit noise (SCN) of less than 1.5 μV. In addition, a convolutional neural network (CNN)-based classification algorithm is proposed to achieve FGR from the acquired surface EMG signals. The CNN is trained on spectrum maps obtained from the time-domain surface EMG via the continuous wavelet transform (CWT). To evaluate the proposed SC-FGR model, we compared it with seven state-of-the-art models. The experimental results demonstrate that SC-FGR achieves 97.5% recognition accuracy on eight kinds of finger gestures across five subjects, much higher than that of comparable models.
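A minimal sketch of the CWT preprocessing step described in this abstract, using PyWavelets; the window length, scales, and wavelet choice are illustrative assumptions, not the authors' exact settings.

```python
import numpy as np
import pywt

FS = 2000          # 2 kHz sampling rate, as reported for the sensor
WINDOW = 400       # 200 ms analysis window (assumption)

def emg_to_scalogram(window, scales=np.arange(1, 65), wavelet="morl"):
    """Transform one single-channel sEMG window into a time-frequency map."""
    coeffs, _ = pywt.cwt(window, scales, wavelet, sampling_period=1.0 / FS)
    return np.abs(coeffs)  # (len(scales), WINDOW) spectrum map for the CNN

# Example: stack the per-channel maps into a multichannel CNN input.
emg = np.random.randn(8, WINDOW)                    # 8 channels of sEMG
x = np.stack([emg_to_scalogram(ch) for ch in emg])  # (8, 64, 400)
```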

2017 ◽  
Vol 4 ◽  
pp. 205566831770873 ◽  
Author(s):  
Joe Sanford ◽  
Rita Patterson ◽  
Dan O Popa

Objective: Surface electromyography has long been a source of control signals for powered prosthetic devices. Force myography is a more recent alternative that has the potential to enhance reliability and avoid the operational challenges of surface electromyography during use. In this paper, we report experiments conducted to assess improvements in the classification of surface electromyography signals through the addition of collocated force myography based on piezo-resistive sensors.

Methods: The force sensors detect intrasocket pressure changes caused by changes in muscle volume upon muscle activation during activities of daily living. A heterogeneous sensor configuration with four surface electromyography–force myography pairs was investigated as a control input for a powered upper-limb prosthetic. Two multilayer perceptron networks were trained for classification on data gathered during experiments simulating socket shift and muscle fatigue.

Results: Intrasocket pressure data used in conjunction with surface electromyography data improved the classification of human intent and the control of a powered prosthetic device compared to traditional systems using surface electromyography alone.

Significance: The additional sensors yield significantly better signal classification during user fatigue, poor socket fit, and radial and ulnar wrist deviation. Results from experimentally obtained training data sets are presented.
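A minimal sketch of fusing collocated sEMG and FMG (intrasocket pressure) features for intent classification, in the spirit of the setup above; the feature choices and network sizes are assumptions, not the authors' exact configuration.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def window_features(emg, fmg):
    """Per-window features from 4 sEMG channels and 4 FMG (pressure) channels."""
    mav = np.mean(np.abs(emg), axis=1)      # mean absolute value per EMG channel
    rms = np.sqrt(np.mean(emg**2, axis=1))  # root mean square per EMG channel
    p = np.mean(fmg, axis=1)                # mean intrasocket pressure per sensor
    return np.concatenate([mav, rms, p])    # 12-dimensional fused feature vector

# X: (n_windows, 12) fused features, y: intent labels (synthetic placeholders)
X = np.vstack([window_features(np.random.randn(4, 200),
                               np.random.rand(4, 200)) for _ in range(100)])
y = np.random.randint(0, 4, size=100)
clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500).fit(X, y)
```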


2021 ◽  
Vol 2021 ◽  
pp. 1-13
Author(s):  
Wentao Wei ◽  
Hong Hong ◽  
Xiaoli Wu

Hand gesture recognition based on surface electromyography (sEMG) plays an important role in the field of biomedical and rehabilitation engineering. Recently, there has been remarkable progress in gesture recognition using high-density surface electromyography (HD-sEMG) recorded by sensor arrays. In contrast, robust gesture recognition using multichannel sEMG recorded by sparsely placed sensors remains a major challenge. In the context of multiview deep learning, this paper presents a hierarchical view pooling network (HVPN) framework, which improves multichannel sEMG-based gesture recognition by learning not only view-specific deep features but also view-shared deep features from hierarchically pooled multiview feature spaces. Extensive intrasubject and intersubject evaluations were conducted on the large-scale noninvasive adaptive prosthetics (NinaPro) database to comprehensively evaluate the proposed HVPN framework. When data were segmented with 200 ms sliding windows, the proposed HVPN framework achieved intrasubject gesture recognition accuracies of 88.4%, 85.8%, 68.2%, 72.9%, and 90.3% and intersubject accuracies of 84.9%, 82.0%, 65.6%, 70.2%, and 88.9% on the first five subdatabases of NinaPro, respectively, outperforming state-of-the-art methods.
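A rough sketch of the multiview idea behind HVPN: view-specific encoders plus a pooled view-shared representation feeding one classifier. The layer sizes and the element-wise max pooling used for fusion are illustrative assumptions, not the published architecture.

```python
import torch
import torch.nn as nn

class TwoViewNet(nn.Module):
    def __init__(self, in_dim=128, hid=64, n_classes=52):
        super().__init__()
        self.view_a = nn.Sequential(nn.Linear(in_dim, hid), nn.ReLU())
        self.view_b = nn.Sequential(nn.Linear(in_dim, hid), nn.ReLU())
        self.head = nn.Linear(3 * hid, n_classes)

    def forward(self, xa, xb):
        fa, fb = self.view_a(xa), self.view_b(xb)   # view-specific features
        shared = torch.max(fa, fb)                  # pooled view-shared feature
        return self.head(torch.cat([fa, fb, shared], dim=1))

# xa, xb: two feature views of the same sEMG window (synthetic placeholders)
logits = TwoViewNet()(torch.randn(8, 128), torch.randn(8, 128))
```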


2016 ◽  
Vol 136 (8) ◽  
pp. 1120-1127 ◽  
Author(s):  
Naoya Ikemoto ◽  
Kenji Terada ◽  
Yuta Takashina ◽  
Akio Nakano

2020 ◽  
Vol 17 (4) ◽  
pp. 497-506
Author(s):  
Sunil Patel ◽  
Ramji Makwana

Automatic classification of dynamic hand gestures is challenging due to the large diversity within each gesture class, low resolution, and the fact that gestures are performed with the fingers. These challenges have drawn many researchers to this area. Recently, deep neural networks have been used for implicit feature extraction, with a softmax layer for classification. In this paper, we propose a method based on a two-dimensional convolutional neural network that performs detection and classification of hand gestures simultaneously from multimodal Red, Green, Blue, Depth (RGBD) and optical flow data, and passes the resulting features to a Long Short-Term Memory (LSTM) recurrent network for frame-to-frame probability generation, with a Connectionist Temporal Classification (CTC) network for loss calculation. We compute optical flow from the Red, Green, Blue (RGB) data to capture the motion information present in the video. The CTC model efficiently evaluates all possible alignments of a hand gesture via dynamic programming and checks frame-to-frame consistency of the visual similarity of the hand gesture in the unsegmented input stream. The CTC network finds the most probable frame sequence for a gesture class, and the frames with the highest probability values are selected by max decoding. The entire network is trained end-to-end by minimizing the CTC loss. We evaluated our approach on the challenging Vision for Intelligent Vehicles and Applications (VIVA) dataset for dynamic hand gesture recognition, captured with RGB and depth data. On this dataset, the proposed technique outperforms competing state-of-the-art algorithms, achieving an accuracy of 86%.
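A minimal sketch of the LSTM + CTC training step described above, using PyTorch's built-in CTC loss; all dimensions are illustrative assumptions.

```python
import torch
import torch.nn as nn

T, N, C, F = 80, 4, 12, 256   # frames, batch, classes (incl. blank), feature dim
lstm = nn.LSTM(F, 128)        # consumes per-frame CNN features
head = nn.Linear(128, C)
ctc = nn.CTCLoss(blank=0)     # class 0 reserved for the CTC blank symbol

feats = torch.randn(T, N, F)              # per-frame CNN features of a clip
h, _ = lstm(feats)
log_probs = head(h).log_softmax(dim=2)    # (T, N, C) frame-wise probabilities

targets = torch.randint(1, C, (N, 5))     # one 5-gesture label sequence per clip
loss = ctc(log_probs, targets,
           torch.full((N,), T),           # input (frame) lengths
           torch.full((N,), 5))           # target (label) lengths
loss.backward()                           # end-to-end training on CTC loss

# Max decoding: take the most probable class per frame, then collapse
# repeats and drop blanks to recover the gesture sequence.
best = log_probs.argmax(dim=2)            # (T, N)
```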


Sensors ◽  
2020 ◽  
Vol 21 (1) ◽  
pp. 222
Author(s):  
Tao Li ◽  
Chenqi Shi ◽  
Peihao Li ◽  
Pengpeng Chen

In this paper, we propose a novel gesture recognition system based on a smartphone. Due to the limitations of Channel State Information (CSI) extraction equipment, existing WiFi-based gesture recognition is restricted to microcomputer terminals equipped with Intel 5300 or Atheros 9580 network cards, so accurate gesture recognition can only be performed in an area relatively fixed with respect to the transceiver link. Our new gesture recognition system breaks this limitation. First, we use the nexmon firmware to obtain 256 CSI subcarriers from the bottom layer of the smartphone in IEEE 802.11ac mode over an 80 MHz bandwidth, making the gesture recognition system mobile. Second, we adopt a cross-correlation method to integrate the extracted CSI features in the time and frequency domains to reduce the influence of changes in the smartphone's location. Third, we use a new, improved dynamic time warping (DTW) algorithm to classify and recognize gestures. We conducted extensive experiments to verify the system's recognition accuracy at different distances, in different directions, and in different environments. The results show that the system can effectively improve recognition accuracy.
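For reference, a minimal sketch of the standard DTW distance used for this kind of template matching; this is the textbook recursion, without the paper's specific improvements.

```python
import numpy as np

def dtw_distance(a, b):
    """DTW distance between two 1-D CSI feature sequences."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j],      # insertion
                                 D[i, j - 1],      # deletion
                                 D[i - 1, j - 1])  # match
    return D[n, m]

# Classify a gesture by its nearest template under DTW (synthetic example).
templates = {"push": np.sin(np.linspace(0, 3, 60)),
             "swipe": np.cos(np.linspace(0, 3, 50))}
sample = np.sin(np.linspace(0, 3, 55)) + 0.05 * np.random.randn(55)
label = min(templates, key=lambda k: dtw_distance(sample, templates[k]))
```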


Author(s):  
Chaoqing Wang ◽  
Junlong Cheng ◽  
Yuefei Wang ◽  
Yurong Qian

A vehicle make and model recognition (VMMR) system is a common requirement in the field of intelligent transportation systems (ITS). However, it is a challenging task because of the subtle differences between vehicle categories. In this paper, we propose a hierarchical scheme for VMMR. Specifically, the scheme consists of (1) a feature extraction framework called weighted mask hierarchical bilinear pooling (WMHBP), based on hierarchical bilinear pooling (HBP), which weakens the influence of invalid background regions by generating a weighted mask while extracting features from discriminative regions, forming a more robust feature descriptor; (2) a hierarchical loss function that learns the appearance differences between vehicle brands and enhances vehicle recognition accuracy; and (3) a collection of vehicle images from the Internet, classified with hierarchical labels, which augments the data to address insufficient data and low image resolution and improves the model's generalization ability and robustness. We evaluated the proposed framework for accuracy and real-time performance on the Stanford Cars public dataset; the experimental results show a recognition accuracy of 95.1% at 107 frames per second (FPS), demonstrating the superiority of the method and its suitability for ITS.
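A minimal sketch of the bilinear pooling operation underlying HBP/WMHBP: the outer product of two feature maps is sum-pooled over spatial locations, then normalized. The feature shapes are illustrative assumptions, and the weighted-mask step is omitted.

```python
import torch
import torch.nn.functional as F

def bilinear_pool(x, y):
    """x: (N, C1, H, W), y: (N, C2, H, W) feature maps from two layers."""
    n, c1, h, w = x.shape
    phi = torch.einsum("nchw,ndhw->ncd", x, y) / (h * w)   # pairwise interactions
    phi = phi.flatten(1)                                   # (N, C1*C2)
    phi = torch.sign(phi) * torch.sqrt(phi.abs() + 1e-10)  # signed square root
    return F.normalize(phi, dim=1)                         # l2 normalization

desc = bilinear_pool(torch.randn(2, 64, 7, 7), torch.randn(2, 64, 7, 7))
```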


Sensors ◽  
2021 ◽  
Vol 21 (7) ◽  
pp. 2540
Author(s):  
Zhipeng Yu ◽  
Jianghai Zhao ◽  
Yucheng Wang ◽  
Linglong He ◽  
Shaonan Wang

In recent years, surface electromyography (sEMG)-based human–computer interaction has been developed to improve people's quality of life. Gesture recognition based on the instantaneous values of sEMG has the advantages of accurate prediction and low latency. However, the low generalization ability of hand gesture recognition methods limits their application to new subjects and new hand gestures and imposes a heavy training burden. For this reason, we propose a convolutional neural network-based transfer learning (TL) strategy for instantaneous gesture recognition that improves the generalization performance of the target network. CapgMyo and NinaPro DB1 are used to evaluate the validity of the proposed strategy. Compared with a non-transfer learning (non-TL) strategy, our strategy improves the average accuracy of new-subject and new-gesture recognition by 18.7% and 8.74%, respectively, when up to three repeated gestures are employed, and it reduces the training time by a factor of three. The experiments verify the transferability of spatial features and the validity of the proposed strategy in improving recognition accuracy for new subjects and new gestures while reducing the training burden. The proposed TL strategy thus provides an effective way to improve the generalization ability of gesture recognition systems.
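A minimal sketch of the transfer learning strategy described above: reuse the convolutional (spatial) layers trained on source subjects, freeze them, and retrain only the classifier head on the new subject or gesture set. The architecture itself is an illustrative assumption, not the paper's network.

```python
import torch
import torch.nn as nn

class GestureCNN(nn.Module):
    def __init__(self, n_classes):
        super().__init__()
        self.features = nn.Sequential(         # spatial feature extractor
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.head = nn.Linear(32, n_classes)   # subject-specific classifier

    def forward(self, x):
        return self.head(self.features(x))

source = GestureCNN(n_classes=8)               # assume pretrained on source subjects
target = GestureCNN(n_classes=8)
target.features.load_state_dict(source.features.state_dict())  # transfer weights
for p in target.features.parameters():
    p.requires_grad = False                    # freeze transferred spatial layers
opt = torch.optim.Adam(target.head.parameters(), lr=1e-3)  # fine-tune head only
```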


2021 ◽  
Vol 11 (11) ◽  
pp. 4922
Author(s):  
Tengfei Ma ◽  
Wentian Chen ◽  
Xin Li ◽  
Yuting Xia ◽  
Xinhua Zhu ◽  
...  

To explore whether the brain exhibits pattern differences in the rock–paper–scissors (RPS) imagery task, this paper attempts to classify this task using fNIRS and deep learning. We designed an RPS task with a total duration of 25 min 40 s and recruited 22 volunteers for the experiment. We used an fNIRS acquisition device (FOIRE-3000) to record the cerebral neural activity of the participants during the RPS task. Time series classification (TSC) algorithms were introduced for classifying the time-domain fNIRS signals. Experiments show that CNN-based TSC methods can achieve 97% accuracy in RPS classification. CNN-based TSC is therefore well suited to classifying fNIRS signals in RPS motor imagery tasks and may open new application directions for brain–computer interfaces (BCI).
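A minimal sketch of a CNN-based time series classifier operating directly on time-domain fNIRS windows; the channel count, window length, and layer sizes are illustrative assumptions, not the study's configuration.

```python
import torch
import torch.nn as nn

n_channels, n_samples, n_classes = 16, 300, 3   # rock / paper / scissors

tsc = nn.Sequential(
    nn.Conv1d(n_channels, 32, kernel_size=7, padding=3), nn.ReLU(),
    nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
    nn.AdaptiveAvgPool1d(1), nn.Flatten(),
    nn.Linear(64, n_classes))

x = torch.randn(8, n_channels, n_samples)       # a batch of fNIRS windows
logits = tsc(x)                                 # (8, 3) class scores
```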

