Live Demonstration: Event-Driven Hand Gesture Recognition for Wearable Human-Machine Interface

Author(s):
Martina Becchio
Niccolo Voster
Andrea Prestia
Andrea Mongardi
Fabio Rossi
...

Author(s):
Shweta K. Yewale
Pankaj K. Bharne

Gesture is one of the most natural and expressive means of communication between humans and computers in a real system. We naturally use a variety of gestures to express our intentions in everyday life, and the hand gesture is one of the most important forms of non-verbal communication for human beings. Man-machine interfaces based on hand gesture recognition have been developed vigorously in recent years. This paper gives an overview of different methods for recognizing hand gestures using MATLAB, and describes the workings of a recognition process built on edge detection and skin detection algorithms.
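
As a rough illustration of the two stages surveyed above, the sketch below masks skin tones in HSV space and then applies Canny edge detection to the masked region. The paper itself works in MATLAB; the Python/OpenCV library choice, threshold values, and file names here are illustrative assumptions, not the paper's implementation.

import cv2
import numpy as np

# Load a frame containing the hand (path is illustrative).
frame = cv2.imread("hand.jpg")
if frame is None:
    raise FileNotFoundError("hand.jpg")

# Skin detection: threshold in HSV space. The hue/saturation bounds
# below are common heuristics, not values taken from the paper.
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
lower = np.array([0, 40, 60], dtype=np.uint8)
upper = np.array([25, 255, 255], dtype=np.uint8)
skin_mask = cv2.inRange(hsv, lower, upper)

# Clean the mask with morphological opening and closing.
kernel = np.ones((5, 5), np.uint8)
skin_mask = cv2.morphologyEx(skin_mask, cv2.MORPH_OPEN, kernel)
skin_mask = cv2.morphologyEx(skin_mask, cv2.MORPH_CLOSE, kernel)

# Edge detection: Canny on the skin region only. The resulting edge
# map outlines the hand contour for downstream shape matching.
skin = cv2.bitwise_and(frame, frame, mask=skin_mask)
gray = cv2.cvtColor(skin, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(gray, 50, 150)
cv2.imwrite("hand_edges.png", edges)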


2020, Vol 05 (01n02), pp. 2041001
Author(s):
Elahe Rahimian
Soheil Zabihi
Seyed Farokh Atashzar
Amir Asif
Arash Mohammadi

Motivated by the potential of deep learning models to significantly improve myoelectric control of neuroprosthetic robotic limbs, this paper proposes two novel deep learning architectures, the HRM and the TCNM, for performing Hand Gesture Recognition (HGR) from multi-channel surface electromyography (sEMG) signals. The work aims to enhance the accuracy of myoelectric systems, which can be used to realize an accurate and resilient man-machine interface for myocontrol of neurorobotic systems. The HRM is built on an innovative, unconventional hybridization of two parallel paths (one convolutional and one recurrent) coupled via a fully connected multilayer network that acts as a fusion center, providing robustness across different scenarios. The hybrid design treats temporal and spatial features in two parallel processing pipelines and augments the discriminative power of the model, reducing the required computational complexity and yielding a compact HGR model. The TCNM is designed as a second, more compact architecture; in practice, the efficiency of a deep model, especially its memory usage and number of parameters, is as important as its achievable accuracy. The TCNM requires significantly less memory during training than the HRM because it implements novel dilated causal convolutions that gradually increase the receptive field of the network while sharing filter parameters. The NinaPro DB2 dataset is used for evaluation. The proposed HRM significantly outperforms its counterparts, achieving an exceptionally high HGR accuracy of [Formula: see text]%. The TCNM, with an accuracy of [Formula: see text]%, also outperforms existing solutions while maintaining low computational requirements.
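
A minimal PyTorch sketch of the two ideas described above: parallel convolutional and recurrent paths fused by a fully connected network (as in the HRM), with the convolutional path built from dilated causal convolutions that grow the receptive field while sharing filter parameters (as in the TCNM). The channel count (12, matching NinaPro DB2's electrode setup), class count, layer widths, and dilation schedule are illustrative assumptions, not the paper's exact architectures.

import torch
import torch.nn as nn

class CausalDilatedConv1d(nn.Module):
    """Causal dilated 1-D convolution: left-padding ensures the output
    at time t depends only on inputs at times <= t."""
    def __init__(self, in_ch, out_ch, kernel_size, dilation):
        super().__init__()
        self.pad = (kernel_size - 1) * dilation
        self.conv = nn.Conv1d(in_ch, out_ch, kernel_size, dilation=dilation)

    def forward(self, x):                         # x: (batch, channels, time)
        return self.conv(nn.functional.pad(x, (self.pad, 0)))

class HybridFusionNet(nn.Module):
    """Two parallel paths (convolutional and recurrent) whose outputs
    are fused by a fully connected classifier."""
    def __init__(self, n_channels=12, n_classes=17, hidden=64):
        super().__init__()
        # Convolutional path: stacked dilated causal convolutions whose
        # receptive field grows with dilations 1, 2, 4 over small kernels.
        self.conv_path = nn.Sequential(
            CausalDilatedConv1d(n_channels, hidden, 3, dilation=1), nn.ReLU(),
            CausalDilatedConv1d(hidden, hidden, 3, dilation=2), nn.ReLU(),
            CausalDilatedConv1d(hidden, hidden, 3, dilation=4), nn.ReLU(),
        )
        # Recurrent path over the raw multi-channel sEMG window.
        self.rnn = nn.LSTM(n_channels, hidden, batch_first=True)
        # Fully connected fusion center combining both paths.
        self.fusion = nn.Sequential(
            nn.Linear(2 * hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, n_classes),
        )

    def forward(self, x):            # x: (batch, time, channels) sEMG window
        c = self.conv_path(x.transpose(1, 2)).mean(dim=2)  # temporal average
        _, (h, _) = self.rnn(x)                            # last hidden state
        return self.fusion(torch.cat([c, h[-1]], dim=1))   # class logits

# A batch of eight 200-sample windows from 12 sEMG channels.
logits = HybridFusionNet()(torch.randn(8, 200, 12))
print(logits.shape)                                        # torch.Size([8, 17])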


Author(s):
Stefano Sapienza
Paolo Motto Ros
David Alejandro Fernandez Guzman
Fabio Rossi
Rossana Terracciano
...

2020, Vol 17 (4), pp. 497-506
Author(s):
Sunil Patel
Ramji Makwana

Automatic classification of dynamic hand gestures is challenging due to the large diversity across gesture classes, low resolution, and the fine finger movements involved; these challenges have drawn many researchers to the area. Recently, deep neural networks have been used for implicit feature extraction, with a softmax layer for classification. In this paper, we propose a method based on a two-dimensional convolutional neural network that performs detection and classification of hand gestures simultaneously from multimodal Red, Green, Blue, Depth (RGBD) and optical-flow data, and passes these features to a Long Short-Term Memory (LSTM) recurrent network for frame-to-frame probability generation, with a Connectionist Temporal Classification (CTC) network for loss calculation. We compute optical flow from the Red, Green, Blue (RGB) data to capture the motion information present in the video. The CTC model efficiently evaluates all possible alignments of a hand gesture via dynamic programming and checks frame-to-frame visual similarity of the gesture in the unsegmented input stream. The CTC network finds the most probable frame sequence for a gesture class, and the frame with the highest probability value is selected from the CTC network by max decoding. The entire network is trained end to end with the CTC loss for gesture recognition. We evaluate on the challenging Vision for Intelligent Vehicles and Applications (VIVA) dataset for dynamic hand gesture recognition, captured with RGB and depth data. On this VIVA dataset, our proposed hand gesture recognition technique outperforms competing state-of-the-art algorithms, achieving an accuracy of 86%.
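
A minimal PyTorch sketch of the pipeline described above: per-frame 2-D CNN features feed an LSTM that emits frame-to-frame class probabilities, trained end to end with CTC loss over the unsegmented stream. Layer sizes, clip length, and class count are illustrative assumptions, and the multimodal RGBD/optical-flow fusion is reduced to a single stacked-channel input for brevity.

import torch
import torch.nn as nn

class CNNLSTMCTC(nn.Module):
    """Per-frame 2-D CNN features -> LSTM -> per-frame log-probabilities,
    trained with CTC loss over the unsegmented frame sequence."""
    def __init__(self, in_ch=4, n_classes=20, hidden=128):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),                 # -> (N, 32, 1, 1)
        )
        self.lstm = nn.LSTM(32, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes + 1)  # +1 for the CTC blank

    def forward(self, clips):                        # (batch, time, ch, H, W)
        b, t = clips.shape[:2]
        f = self.cnn(clips.flatten(0, 1)).flatten(1)  # (b*t, 32) frame features
        f, _ = self.lstm(f.view(b, t, -1))            # (b, t, hidden)
        return self.head(f).log_softmax(dim=-1)       # (b, t, classes+1)

model = CNNLSTMCTC()
ctc = nn.CTCLoss(blank=0)
clips = torch.randn(2, 40, 4, 64, 64)      # e.g. RGB-D (or RGB + flow) stacks
log_probs = model(clips).transpose(0, 1)   # CTCLoss expects (time, batch, C)
targets = torch.tensor([3, 7])             # one gesture label per clip
loss = ctc(log_probs, targets,
           input_lengths=torch.full((2,), 40, dtype=torch.long),
           target_lengths=torch.ones(2, dtype=torch.long))
loss.backward()                            # end-to-end training step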

