Improving sEMG-Based Hand Gesture Recognition Using Maximal Overlap Discrete Wavelet Transform and an Autoencoder Neural Network

Author(s):  
Fernando Henrique Cruz de Andrade ◽  
Flavio Garcia Pereira ◽  
Cassius Zanetti Resende ◽  
Daniel Cruz Cavalieri


2020 ◽  
Vol 9 (3) ◽  
pp. 996-1004 ◽  
Author(s):  
Muhammad Biyan Priatama ◽  
Ledya Novamizanti ◽  
Suci Aulia ◽  
Erizka Banuwati Candrasari

Public services are available to all communities, including people with disabilities. One obstacle that prevents persons with disabilities from participating in various community activities and enjoying the public services available to the community is the barrier to information and communication. One way to communicate with people with disabilities is through hand gestures. Hand gesture recognition technology is therefore needed to make it easier for the public to interact with people with disabilities. This study proposes a reliable hand gesture recognition system based on a convolutional neural network. The first step is pre-processing, which separates the foreground from the background. The foreground is then transformed using the discrete wavelet transform (DWT), and the most significant subband is retained. The last step is image classification with a convolutional neural network. The training and test sets contain 400 and 100 images, respectively, spanning five classes: A, B, C, #5, and pointing. The resulting hand gesture recognition system achieved an accuracy of 100% on dataset A and 90% on dataset B.
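As a rough illustration of the pipeline described above, the sketch below combines a simple threshold-based foreground separation, a single-level 2-D DWT whose approximation subband is kept, and a small CNN classifier. The wavelet ('haar'), image size, threshold, and layer sizes are illustrative assumptions using PyWavelets and PyTorch, not the authors' exact configuration.

```python
# Sketch of the DWT-subband + CNN pipeline described above.
# Assumptions: grayscale 64x64 inputs, 'haar' wavelet, and a small
# illustrative CNN; these are not the authors' exact settings.
import numpy as np
import pywt
import torch
import torch.nn as nn

def preprocess(image: np.ndarray, threshold: int = 128) -> np.ndarray:
    """Separate foreground from background with a simple global threshold."""
    return (image > threshold).astype(np.float32)

def dwt_subband(image: np.ndarray, wavelet: str = "haar") -> np.ndarray:
    """Single-level 2-D DWT; keep the approximation (most significant) subband."""
    cA, (cH, cV, cD) = pywt.dwt2(image, wavelet)
    return cA.astype(np.float32)

class GestureCNN(nn.Module):
    """Small CNN classifier for the five gesture classes (A, B, C, #5, pointing)."""
    def __init__(self, num_classes: int = 5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 8 * 8, num_classes)  # for 32x32 subbands

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

# Usage: a 64x64 image -> 32x32 approximation subband -> class logits.
img = np.random.randint(0, 256, (64, 64))
sub = dwt_subband(preprocess(img))
logits = GestureCNN()(torch.from_numpy(sub)[None, None])
```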


2018 ◽  
Vol 5 (1) ◽  
pp. 41-46
Author(s):  
Rosalina Rosalina ◽  
Hendra Jayanto

The aim of this paper is to achieve high accuracy in stock market forecasting in order to produce signals that support decision making in trading. Several methodologies have been applied to the stock market forecasting problem. A traditional linear model such as the autoregressive integrated moving average (ARIMA) has been used, but the results are not satisfactory because it is not suitable for modeling financial series. Experts have therefore turned to another approach based on artificial neural networks. Artificial neural networks (ANNs) are more effective at realizing the input-output mapping and can approximate any continuous function to an arbitrarily desired accuracy. Specifically, this paper uses the maximal overlap discrete wavelet transform (MODWT) and graph theory to separate low and high frequencies, which here act as the fundamental and technical components of stock market trading. After the processed dataset is formed, the training process generates the final result: buy or sell signals derived from whether the stock price is expected to go up or down.
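A minimal sketch of the frequency-separation step is given below. PyWavelets does not ship a function named MODWT; the stationary wavelet transform (SWT) used here is a closely related undecimated transform and stands in for MODWT purely for illustration. The wavelet ('db4') and decomposition level are assumptions, and the graph-theory step is omitted.

```python
# Sketch of splitting a price series into low- and high-frequency parts.
# SWT is used as a stand-in for MODWT (both are undecimated transforms);
# wavelet and level are illustrative assumptions.
import numpy as np
import pywt

def split_frequencies(prices: np.ndarray, wavelet: str = "db4", level: int = 2):
    """Return (low_freq, high_freq) components of a 1-D price series."""
    n = len(prices)
    pad = (-n) % (2 ** level)               # SWT needs length divisible by 2**level
    padded = np.pad(prices, (0, pad), mode="edge")
    coeffs = pywt.swt(padded, wavelet, level=level)  # [(cA_level, cD_level), ..., (cA_1, cD_1)]
    low = coeffs[0][0][:n]                  # coarsest approximation ~ fundamental trend
    high = sum(cD for _, cD in coeffs)[:n]  # detail coefficients ~ technical fluctuations
    return low, high

# Usage: the low band can feed a "fundamental" model, the high band a "technical" one.
prices = np.cumsum(np.random.randn(250)) + 100.0
low, high = split_frequencies(prices)
```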


2020 ◽  
Vol 17 (4) ◽  
pp. 497-506
Author(s):  
Sunil Patel ◽  
Ramji Makwana

Automatic classification of dynamic hand gestures is challenging due to the large diversity across gesture classes, low resolution, and the fact that gestures are performed with the fingers. Because of these challenges, many researchers focus on this area. Recently, deep neural networks have been used for implicit feature extraction, with a softmax layer for classification. In this paper, we propose a method based on a two-dimensional convolutional neural network that performs detection and classification of hand gestures simultaneously from multimodal Red, Green, Blue, Depth (RGBD) and optical flow data and passes these features to a Long Short-Term Memory (LSTM) recurrent network for frame-to-frame probability generation, with a Connectionist Temporal Classification (CTC) network for loss calculation. Optical flow is computed from the Red, Green, Blue (RGB) data to capture the motion information present in the video. The CTC model efficiently evaluates all possible alignments of a hand gesture via dynamic programming and checks frame-to-frame consistency of the visual similarity of the hand gesture in the unsegmented input stream. The CTC network finds the most probable sequence of frames for a gesture class; the frame with the highest probability value is selected from the CTC network by max decoding. The entire network is trained end-to-end with the CTC loss for gesture recognition. We use the challenging Vision for Intelligent Vehicles and Applications (VIVA) dataset for dynamic hand gesture recognition, captured with RGB and depth data. On this VIVA dataset, our proposed hand gesture recognition technique outperforms competing state-of-the-art algorithms and achieves an accuracy of 86%.
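The sketch below illustrates the frame-wise CNN, LSTM, and CTC arrangement described above, written in PyTorch. The channel count (stacked RGB-D plus optical flow), hidden sizes, and the 19-class output (plus a CTC blank) are illustrative assumptions rather than the paper's exact architecture.

```python
# Sketch of a per-frame 2-D CNN -> LSTM -> CTC pipeline.
# Channel counts, hidden sizes, and class count are illustrative assumptions.
import torch
import torch.nn as nn

class CNNLSTMCTC(nn.Module):
    def __init__(self, in_channels: int = 8, num_classes: int = 19):
        super().__init__()
        # 2-D CNN applied frame by frame to stacked RGB-D + optical-flow channels.
        self.cnn = nn.Sequential(
            nn.Conv2d(in_channels, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.lstm = nn.LSTM(64, 128, batch_first=True)
        self.fc = nn.Linear(128, num_classes + 1)        # +1 for the CTC "blank" label
        self.ctc_loss = nn.CTCLoss(blank=num_classes)

    def forward(self, frames):                           # frames: (B, T, C, H, W)
        b, t = frames.shape[:2]
        feats = self.cnn(frames.flatten(0, 1)).flatten(1)  # (B*T, 64)
        seq, _ = self.lstm(feats.view(b, t, -1))            # (B, T, 128)
        return self.fc(seq).log_softmax(-1)                 # frame-wise class log-probs

# Usage: CTC loss over unsegmented streams; max decoding = argmax per frame.
model = CNNLSTMCTC()
frames = torch.randn(2, 16, 8, 64, 64)                   # batch of 2 clips, 16 frames each
log_probs = model(frames)                                 # (B, T, num_classes + 1)
targets = torch.tensor([3, 7])                            # one gesture label per clip
loss = model.ctc_loss(log_probs.transpose(0, 1),          # CTC expects (T, B, classes)
                      targets,
                      input_lengths=torch.full((2,), 16),
                      target_lengths=torch.full((2,), 1))
```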


Sensors ◽  
2021 ◽  
Vol 21 (7) ◽  
pp. 2540
Author(s):  
Zhipeng Yu ◽  
Jianghai Zhao ◽  
Yucheng Wang ◽  
Linglong He ◽  
Shaonan Wang

In recent years, surface electromyography (sEMG)-based human–computer interaction has been developed to improve people's quality of life. Gesture recognition based on the instantaneous values of sEMG has the advantages of accurate prediction and low latency. However, the low generalization ability of such hand gesture recognition methods limits their application to new subjects and new hand gestures and brings a heavy training burden. For this reason, a convolutional-network-based transfer learning (TL) strategy for instantaneous gesture recognition is proposed to improve the generalization performance of the target network. CapgMyo and NinaPro DB1 are used to evaluate the validity of the proposed strategy. Compared with a non-transfer-learning (non-TL) strategy, the proposed strategy improves the average accuracy of new-subject and new-gesture recognition by 18.7% and 8.74%, respectively, when up to three repeated gestures are employed, and it reduces the training time by a factor of three. Experiments verify the transferability of spatial features and the validity of the proposed strategy in improving recognition accuracy for new subjects and new gestures while reducing the training burden. The proposed TL strategy thus provides an effective way to improve the generalization ability of a gesture recognition system.
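The following sketch illustrates the transfer-learning idea in the abstract: convolutional (spatial) features pretrained on source subjects are copied and frozen, and only a new classification head is trained on the few repetitions available from a new subject or for new gestures. The network layout and the 16x8 electrode grid are illustrative assumptions, not the authors' exact model.

```python
# Sketch of transfer learning for instantaneous sEMG gesture recognition:
# reuse frozen convolutional features, retrain only the classifier head.
# Layer shapes and the 16x8 electrode grid are illustrative assumptions.
import torch
import torch.nn as nn

class InstantSEMGNet(nn.Module):
    """Classifier over a single instantaneous sEMG frame (assumed 16x8 electrode grid)."""
    def __init__(self, num_gestures: int = 8):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.BatchNorm2d(32), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.BatchNorm2d(64), nn.ReLU(),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(), nn.Linear(64 * 16 * 8, 128), nn.ReLU(),
            nn.Linear(128, num_gestures),
        )

    def forward(self, x):
        return self.classifier(self.features(x))

def transfer(source_model: InstantSEMGNet, num_new_gestures: int) -> InstantSEMGNet:
    """Copy the pretrained convolutional features, freeze them, and attach a new head."""
    target = InstantSEMGNet(num_new_gestures)
    target.features.load_state_dict(source_model.features.state_dict())
    for p in target.features.parameters():
        p.requires_grad = False                      # keep the transferable spatial features
    return target

# Usage: only the (unfrozen) classifier parameters are optimized on the new data.
source = InstantSEMGNet(num_gestures=8)              # pretrained on source subjects
target = transfer(source, num_new_gestures=12)
optim = torch.optim.Adam((p for p in target.parameters() if p.requires_grad), lr=1e-3)
```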

