Dynamic Hand Gesture Recognition Using LMC for Flower and Plant Interaction

Author(s):  
Long Liu ◽  
Yongjian Huai

As novel somatosensory devices become more pervasive, dynamic hand gesture recognition algorithms have attracted substantial research attention and are widely used in human–computer interaction (HCI). This paper aims to develop a low-complexity, real-time solution for dynamic hand gesture recognition using the Leap Motion Controller (LMC) in flower and plant interactive applications. We use two LMCs to capture gesture data from different angles for fusion processing, and then propose a novel feature vector well suited to representing dynamic hand gestures. We then propose an improved Hidden Markov Model (HMM) algorithm to obtain the final recognition results, applying Particle Swarm Optimization (PSO) to avoid the complex parameter estimation of a conventional HMM and thereby improve recognition performance. Experimental results on the test datasets show that the proposed algorithm achieves average recognition rates of 96.5% on Leap-Gesture and 97.3% on Manipulation-Gesture. Moreover, in a flower and plant interaction experiment, our dynamic gesture recognition solution helps users carry out interactive operations accurately and efficiently. In contrast to previous studies, our prototype system offers users a new dimension of experience and changes the research model of traditional forestry.
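The abstract gives no implementation details for the PSO step, so the following is only a minimal sketch of a global-best PSO loop in Python; the objective here is a hypothetical stand-in for the negative log-likelihood an HMM would assign to the training gesture sequences, and `pso_minimize`, `toy_objective`, and all hyperparameters are illustrative rather than the authors' code.

```python
import numpy as np

def pso_minimize(fitness, dim, n_particles=30, n_iters=100,
                 w=0.7, c1=1.5, c2=1.5, bounds=(-1.0, 1.0), seed=0):
    """Minimize `fitness` over R^dim with a basic global-best PSO."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, size=(n_particles, dim))   # particle positions
    v = np.zeros_like(x)                               # particle velocities
    pbest = x.copy()                                   # per-particle best positions
    pbest_f = np.array([fitness(p) for p in x])        # per-particle best scores
    g = pbest[np.argmin(pbest_f)].copy()               # global best position

    for _ in range(n_iters):
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        # Standard velocity update: inertia + cognitive + social terms.
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        f = np.array([fitness(p) for p in x])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        g = pbest[np.argmin(pbest_f)].copy()
    return g, pbest_f.min()

# Stand-in objective: in the paper's setting this would be the negative
# log-likelihood of the training gestures under an HMM whose parameters
# are decoded from the particle's position vector.
def toy_objective(theta):
    return float(np.sum((theta - 0.3) ** 2))

best_theta, best_f = pso_minimize(toy_objective, dim=8)
print(best_theta, best_f)
```

Each particle encodes a candidate HMM parameter set, so the swarm replaces the iterative re-estimation of conventional HMM training with a direct search over parameter space.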

Sensors ◽  
2020 ◽  
Vol 20 (7) ◽  
pp. 2106 ◽  
Author(s):  
Linchu Yang ◽  
Ji’an Chen ◽  
Weihang Zhu

Dynamic hand gesture recognition is one of the most significant tools for human–computer interaction. To improve its accuracy, this paper proposes a two-layer Bidirectional Recurrent Neural Network for recognizing dynamic hand gestures captured by a Leap Motion Controller (LMC). In addition, an efficient way to capture dynamic hand gestures with the LMC is identified. Dynamic hand gestures are represented as sets of feature vectors extracted from the LMC. The proposed system is tested on American Sign Language (ASL) datasets with 360 and 480 samples, and on the Handicraft-Gesture dataset. On the 360-sample ASL dataset, the system achieves accuracies of 100% on the training set and 96.3% on the testing set; on the 480-sample ASL dataset, 100% and 95.2%; and on the Handicraft-Gesture dataset, 100% and 96.7%. In addition, 5-fold, 10-fold, and leave-one-out cross-validation are performed on these datasets, yielding accuracies of 93.33%, 94.1%, and 98.33% (ASL, 360 samples); 93.75%, 93.5%, and 98.13% (ASL, 480 samples); and 88.66%, 90%, and 92% (Handicraft-Gesture), respectively. The developed system demonstrates similar or better performance compared to other approaches in the literature.
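The abstract does not specify the recurrent cell type or layer sizes, so below is a minimal PyTorch sketch of a two-layer bidirectional recurrent classifier over per-frame LMC feature vectors; the LSTM cells and the sizes `FEAT_DIM`, `HIDDEN`, and `N_CLASSES` are assumptions, not values from the paper.

```python
import torch
import torch.nn as nn

FEAT_DIM, HIDDEN, N_CLASSES = 30, 128, 28   # assumed sizes, not from the paper

class BiRNNClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        # Two stacked bidirectional LSTM layers over the per-frame LMC features.
        self.rnn = nn.LSTM(FEAT_DIM, HIDDEN, num_layers=2,
                           bidirectional=True, batch_first=True)
        # Concatenated forward/backward states -> gesture class scores.
        self.fc = nn.Linear(2 * HIDDEN, N_CLASSES)

    def forward(self, x):                 # x: (batch, frames, FEAT_DIM)
        out, _ = self.rnn(x)              # out: (batch, frames, 2*HIDDEN)
        return self.fc(out[:, -1, :])     # classify from the last time step

model = BiRNNClassifier()
logits = model(torch.randn(4, 60, FEAT_DIM))   # 4 gestures, 60 frames each
print(logits.shape)                            # torch.Size([4, 28])
```

The bidirectional stack lets the per-frame representation at each time step draw on both earlier and later frames of the gesture, which is what distinguishes this design from a unidirectional recurrent classifier.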


2020 ◽  
Vol 17 (4) ◽  
pp. 497-506
Author(s):  
Sunil Patel ◽  
Ramji Makwana

Automatic classification of dynamic hand gestures is challenging due to the large diversity within each gesture class, low resolution, and the fact that gestures are performed with the fingers. These challenges have drawn many researchers to the area. Recently, deep neural networks have been used for implicit feature extraction, with a softmax layer for classification. In this paper, we propose a method based on a two-dimensional convolutional neural network that performs detection and classification of hand gestures simultaneously from multimodal Red, Green, Blue, Depth (RGBD) and optical flow data, and passes the resulting features to a Long Short-Term Memory (LSTM) recurrent network for frame-to-frame probability generation, with a Connectionist Temporal Classification (CTC) network for loss calculation. Optical flow is computed from the Red, Green, Blue (RGB) data to capture the motion information present in the video. The CTC model efficiently evaluates all possible alignments of a hand gesture via dynamic programming and checks frame-to-frame consistency of visual similarity in the unsegmented input stream. The CTC network finds the most probable frame sequence for a gesture class, and the frame with the highest probability value is selected from the CTC network by max decoding. The entire network is trained end-to-end with the CTC loss for gesture recognition. We use the challenging Vision for Intelligent Vehicles and Applications (VIVA) dataset for dynamic hand gesture recognition, captured with RGB and depth data. On this VIVA dataset, our proposed hand gesture recognition technique outperforms competing state-of-the-art algorithms, achieving an accuracy of 86%.
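As a rough illustration of the pipeline described above (per-frame 2D CNN features, an LSTM over time, and a CTC loss), here is a minimal PyTorch sketch; the layer sizes and input channels are assumptions, optical flow is simply folded into the input channels for brevity, and PyTorch's built-in nn.CTCLoss stands in for the paper's CTC network.

```python
import torch
import torch.nn as nn

N_CLASSES = 19      # gesture classes (VIVA defines 19); blank gets the last index
IN_CH = 4           # assumed per-frame channels, e.g. RGB + depth

class CNNLSTMCTC(nn.Module):
    def __init__(self):
        super().__init__()
        # Small per-frame 2D CNN; the paper's exact architecture is not given.
        self.cnn = nn.Sequential(
            nn.Conv2d(IN_CH, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.lstm = nn.LSTM(32, 64, batch_first=True)
        self.head = nn.Linear(64, N_CLASSES + 1)     # +1 output for the CTC blank

    def forward(self, clips):                        # clips: (B, T, C, H, W)
        b, t = clips.shape[:2]
        feats = self.cnn(clips.flatten(0, 1)).flatten(1)   # (B*T, 32)
        out, _ = self.lstm(feats.view(b, t, -1))           # (B, T, 64)
        return self.head(out).log_softmax(-1)              # per-frame log-probs

model = CNNLSTMCTC()
clips = torch.randn(2, 16, IN_CH, 64, 64)            # 2 clips of 16 frames
log_probs = model(clips).permute(1, 0, 2)            # nn.CTCLoss expects (T, B, C)
targets = torch.tensor([[3], [7]])                   # one gesture label per clip
loss = nn.CTCLoss(blank=N_CLASSES)(
    log_probs,
    targets,
    input_lengths=torch.full((2,), 16),
    target_lengths=torch.ones(2, dtype=torch.long),
)
loss.backward()                                      # end-to-end training step
```

The CTC loss marginalizes over all frame-to-label alignments via dynamic programming, which is what allows the network to be trained on unsegmented gesture streams without per-frame annotations.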


2015 ◽  
Vol 75 (22) ◽  
pp. 14991-15015 ◽  
Author(s):  
Giulio Marin ◽  
Fabio Dominio ◽  
Pietro Zanuttigh
