Investigation on the Sampling Frequency and Channel Number for Force Myography Based Hand Gesture Recognition

Sensors ◽  
2021 ◽  
Vol 21 (11) ◽  
pp. 3872
Author(s):  
Guangtai Lei ◽  
Shenyilang Zhang ◽  
Yinfeng Fang ◽  
Yuxi Wang ◽  
Xuguang Zhang

Force myography (FMG) uses pressure sensors to measure muscle contraction indirectly and is a valuable alternative to the conventional approach to hand gesture recognition based on myoelectric signals. To achieve gesture recognition at minimum cost, the minimum sampling frequency and the minimal number of channels must be determined. To investigate how sampling frequency and channel count affect recognition accuracy, a 16-channel hardware system was designed to capture forearm FMG signals at sampling frequencies of up to 1 kHz. Using this acquisition equipment, an FMG database containing data from 10 subjects was created. This paper reports gesture recognition accuracies under different sampling frequencies and channel counts. At a 1 kHz sampling rate with 16 channels, four of the five tested classifiers reach an accuracy of about 99%. Further experimental results indicate that: (1) the FMG sampling frequency can be as low as 5 Hz for recognizing static movements; (2) reducing the number of channels has a large impact on accuracy, and eight channels are suggested for gesture recognition; and (3) the placement of the sensors on the forearm affects recognition accuracy, so accuracy could be improved by optimizing sensor positions.
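The experiment described above sweeps the sampling frequency down from 1 kHz and reduces the channel count while tracking classification accuracy. The sketch below is not the authors' pipeline: it uses synthetic data in place of the 16-channel forearm recordings, plain decimation for downsampling, per-channel mean features, and an SVM as one of many plausible classifiers, purely to illustrate the shape of such a study.

```python
# Hedged sketch: effect of sampling frequency and channel count on FMG
# gesture classification, using synthetic data (not the paper's database).
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
FS_FULL, N_CHANNELS, N_GESTURES, TRIALS = 1000, 16, 6, 40  # assumed sizes

def make_trial(gesture):
    """Synthetic 1 s FMG window: per-channel pressure offsets depending on the gesture."""
    base = rng.normal(loc=gesture, scale=0.3, size=(N_CHANNELS, 1))
    return base + 0.1 * rng.standard_normal((N_CHANNELS, FS_FULL))

X_raw = np.stack([make_trial(g) for g in range(N_GESTURES) for _ in range(TRIALS)])
y = np.repeat(np.arange(N_GESTURES), TRIALS)

def accuracy(fs, channels):
    """Decimate to `fs` Hz, keep `channels`, use per-channel means as features."""
    step = FS_FULL // fs                      # simple decimation, no anti-aliasing
    feats = X_raw[:, channels, ::step].mean(axis=2)
    return cross_val_score(SVC(kernel="rbf"), feats, y, cv=5).mean()

for fs in (1000, 100, 10, 5):
    print(f"{fs:>4} Hz, 16 ch:", round(accuracy(fs, list(range(16))), 3))
print("1000 Hz,  8 ch:", round(accuracy(1000, list(range(0, 16, 2))), 3))
```

With synthetic data the accuracies are not meaningful in themselves; the point is the sweep structure, where sampling frequency and channel subset are varied independently and accuracy is re-estimated by cross-validation at each setting.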

2020 ◽  
Vol 17 (4) ◽  
pp. 497-506
Author(s):  
Sunil Patel ◽  
Ramji Makwana

Automatic classification of dynamic hand gestures is challenging because of the large diversity across gesture classes, low resolution, and the fact that many gestures are performed with the fingers. These challenges have drawn many researchers to the area. Recently, deep neural networks have been used for implicit feature extraction, with a softmax layer used for classification. In this paper, we propose a method based on a two-dimensional convolutional neural network that performs detection and classification of hand gestures simultaneously from multimodal RGB, depth (RGB-D), and optical-flow data, and passes these features to a Long Short-Term Memory (LSTM) recurrent network that generates frame-by-frame probabilities, with a Connectionist Temporal Classification (CTC) layer computing the loss. Optical flow is computed from the RGB data to capture the motion information present in the video. The CTC model efficiently evaluates all possible alignments of a hand gesture via dynamic programming and checks frame-to-frame consistency of visually similar gestures in the unsegmented input stream. The CTC network finds the most probable frame sequence for a gesture class, and the frame with the highest probability is selected by max decoding. The entire network is trained end-to-end by minimizing the CTC loss. We evaluate on the challenging Vision for Intelligent Vehicles and Applications (VIVA) dataset for dynamic hand gesture recognition, captured with RGB and depth data. On this dataset, the proposed technique outperforms competing state-of-the-art algorithms and achieves an accuracy of 86%.
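The pipeline described above (per-frame 2D-CNN features, an LSTM producing frame-by-frame probabilities, and a CTC loss over the unsegmented stream) can be sketched compactly. The snippet below is an illustrative assumption, not the authors' implementation: the input channel layout (RGB + depth + two-channel optical flow = 6 channels), layer sizes, and 19-class label set are choices made for the example.

```python
# Hedged sketch of a 2D-CNN -> LSTM -> CTC gesture model (PyTorch), with
# assumed channel counts and layer sizes; not the paper's exact architecture.
import torch
import torch.nn as nn

class GestureCTC(nn.Module):
    def __init__(self, in_channels=6, n_classes=19, hidden=256):
        super().__init__()
        # n_classes gesture labels + 1 CTC "blank" symbol
        self.cnn = nn.Sequential(
            nn.Conv2d(in_channels, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.lstm = nn.LSTM(64, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes + 1)

    def forward(self, clips):
        # clips: (batch, time, channels, height, width)
        b, t, c, h, w = clips.shape
        feats = self.cnn(clips.reshape(b * t, c, h, w)).reshape(b, t, -1)
        out, _ = self.lstm(feats)                     # frame-by-frame features
        return self.head(out).log_softmax(dim=-1)     # (batch, time, classes+1)

model = GestureCTC()
clips = torch.randn(2, 40, 6, 64, 64)                 # 2 clips, 40 frames each
log_probs = model(clips).permute(1, 0, 2)              # CTCLoss expects (T, N, C)
targets = torch.tensor([3, 11])                        # one gesture label per clip
loss = nn.CTCLoss(blank=19)(log_probs, targets,
                            input_lengths=torch.full((2,), 40),
                            target_lengths=torch.full((2,), 1))
loss.backward()                                        # end-to-end training step
```

Max decoding then amounts to taking the argmax class at each frame of `log_probs` and collapsing repeats and blanks to obtain the predicted gesture for the unsegmented stream.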


2020 ◽  
Vol 29 (6) ◽  
pp. 1153-1164
Author(s):  
Qianyi Xu ◽  
Guihe Qin ◽  
Minghui Sun ◽  
Jie Yan ◽  
Huiming Jiang ◽  
...  

2021 ◽  
pp. 108044
Author(s):  
Fangtai Guo ◽  
Zaixing He ◽  
Shuyou Zhang ◽  
Xinyue Zhao ◽  
Jinhui Fang ◽  
...  

Author(s):  
Sruthy Skaria ◽  
Da Huang ◽  
Akram Al-Hourani ◽  
Robin J. Evans ◽  
Margaret Lech
