Leap Motion Hand Gesture Recognition Based on Deep Neural Network

Author(s):  
Qinglian Yang ◽  
Weikang Ding ◽  
Xingwen Zhou ◽  
Dongdong Zhao ◽  
Shi Yan


2021 ◽  
Vol 102 ◽  
pp. 04009
Author(s):  
Naoto Ageishi ◽  
Fukuchi Tomohide ◽  
Abderazek Ben Abdallah

Hand gestures are a form of nonverbal communication in which visible bodily actions convey important messages. Recently, hand gesture recognition has received significant attention from the research community for various applications, including advanced driver assistance systems, prosthetics, and robotic control. Accurate and fast classification of hand gestures is therefore required. In this research, we created a deep neural network as the first step toward a real-time, camera-only hand gesture recognition system that does not rely on electroencephalogram (EEG) signals. We present the system software architecture in a fair amount of detail. The proposed system was able to recognize hand signs with an accuracy of 97.31%.
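The abstract does not describe the network's architecture. Purely as an illustration of a camera-only hand-sign classifier of the kind described, the following is a minimal convolutional network sketch in PyTorch; the layer sizes, the 64x64 grayscale input, and the ten-class output are assumptions, not details from the paper.

```python
# Hypothetical hand-sign CNN sketch (PyTorch); architecture details are assumed.
import torch
import torch.nn as nn

class HandSignCNN(nn.Module):
    def __init__(self, num_classes: int = 10):  # number of signs is an assumption
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, 128), nn.ReLU(),  # 64x64 input pooled twice -> 16x16
            nn.Linear(128, num_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))

model = HandSignCNN()
frame = torch.randn(1, 1, 64, 64)   # one 64x64 grayscale camera frame (assumed size)
logits = model(frame)               # shape: (1, 10)
```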


Sensors ◽  
2020 ◽  
Vol 20 (11) ◽  
pp. 3226
Author(s):  
Radu Mirsu ◽  
Georgiana Simion ◽  
Catalin Daniel Caleanu ◽  
Ioana Monica Pop-Calimanu

Gesture recognition is an intensively researched area for several reasons. One of the most important is this technology's numerous applications in various domains (e.g., robotics, games, medicine, automotive). Additionally, the introduction of three-dimensional (3D) image acquisition techniques (e.g., stereovision, projected light, time-of-flight) overcomes the limitations of traditional two-dimensional (2D) approaches. Combined with the wider availability of 3D sensors (e.g., Microsoft Kinect, Intel RealSense, photonic mixer device (PMD) CamCube), this has sparked renewed interest in the domain. Moreover, in many computer vision tasks, traditional statistical approaches have been outperformed by deep neural network-based solutions. In view of these considerations, we propose a deep neural network solution employing the PointNet architecture for hand gesture recognition using depth data produced by a time-of-flight (ToF) sensor. We created a custom hand gesture dataset and designed a multistage hand segmentation pipeline consisting of filtering, clustering, locating the hand within a volume of interest, and hand-forearm segmentation. For comparison purposes, two equivalent datasets were tested: a 3D point cloud dataset and a 2D image dataset, both obtained from the same stream. Beyond the inherent advantages of 3D technology, the 3D method using PointNet is shown to outperform the 2D method in all circumstances, even when the 2D method employs a deep neural network.
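As a rough illustration of the PointNet idea this paper builds on (a shared per-point MLP followed by an order-invariant max pooling over the point cloud), here is a minimal PyTorch sketch. The input/feature transform (T-Net) blocks of the full PointNet are omitted, and the point count and number of gesture classes are assumptions.

```python
# Minimal PointNet-style point cloud classifier sketch (PyTorch).
import torch
import torch.nn as nn

class MiniPointNet(nn.Module):
    def __init__(self, num_classes: int = 8):  # class count is an assumption
        super().__init__()
        # Shared MLP applied independently to every (x, y, z) point,
        # implemented as 1x1 convolutions over the point dimension.
        self.point_mlp = nn.Sequential(
            nn.Conv1d(3, 64, 1), nn.ReLU(),
            nn.Conv1d(64, 128, 1), nn.ReLU(),
            nn.Conv1d(128, 1024, 1), nn.ReLU(),
        )
        self.head = nn.Sequential(
            nn.Linear(1024, 256), nn.ReLU(),
            nn.Linear(256, num_classes),
        )

    def forward(self, points):                      # points: (batch, 3, num_points)
        per_point = self.point_mlp(points)          # (batch, 1024, num_points)
        global_feat = per_point.max(dim=2).values   # symmetric, order-invariant pooling
        return self.head(global_feat)

cloud = torch.randn(1, 3, 1024)        # one segmented hand point cloud (size assumed)
logits = MiniPointNet()(cloud)         # shape: (1, 8)
```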


Sensors ◽  
2020 ◽  
Vol 20 (7) ◽  
pp. 2106 ◽  
Author(s):  
Linchu Yang ◽  
Ji’an Chen ◽  
Weihang Zhu

Dynamic hand gesture recognition is one of the most significant tools for human–computer interaction. In order to improve the accuracy of dynamic hand gesture recognition, this paper proposes a two-layer bidirectional recurrent neural network for recognizing dynamic hand gestures captured by a Leap Motion Controller (LMC). In addition, an efficient way to capture dynamic hand gestures with the LMC is identified. Dynamic hand gestures are represented as sets of feature vectors obtained from the LMC. The proposed system was tested on two American Sign Language (ASL) datasets, with 360 and 480 samples respectively, and on the Handicraft-Gesture dataset. On the ASL dataset with 360 samples, the system achieves accuracies of 100% on the training set and 96.3% on the testing set; on the ASL dataset with 480 samples, 100% and 95.2%; and on the Handicraft-Gesture dataset, 100% and 96.7%. In addition, 5-fold, 10-fold, and leave-one-out cross-validation were performed, yielding accuracies of 93.33%, 94.1%, and 98.33% on the 360-sample ASL dataset, 93.75%, 93.5%, and 98.13% on the 480-sample ASL dataset, and 88.66%, 90%, and 92% on the Handicraft-Gesture dataset, respectively. The developed system demonstrates similar or better performance compared to other approaches in the literature.
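A minimal sketch of a two-layer bidirectional recurrent classifier over per-frame LMC feature vectors is shown below (PyTorch). The abstract does not specify the cell type, hidden size, or feature dimension, so the LSTM cell, a hidden size of 128, 30 features per frame, and the class count are assumptions for illustration.

```python
# Two-layer bidirectional recurrent gesture classifier sketch (PyTorch);
# cell type, dimensions, and class count are assumptions.
import torch
import torch.nn as nn

class BiRNNGestureClassifier(nn.Module):
    def __init__(self, feat_dim: int = 30, hidden: int = 128, num_classes: int = 10):
        super().__init__()
        self.rnn = nn.LSTM(feat_dim, hidden, num_layers=2,
                           batch_first=True, bidirectional=True)
        self.fc = nn.Linear(2 * hidden, num_classes)  # concatenated forward/backward states

    def forward(self, seq):             # seq: (batch, frames, feat_dim)
        out, _ = self.rnn(seq)          # (batch, frames, 2 * hidden)
        return self.fc(out[:, -1, :])   # classify from the last time step

frames = torch.randn(1, 60, 30)         # one gesture: 60 LMC frames of 30 features each
logits = BiRNNGestureClassifier()(frames)
```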


Author(s):  
Muhammad Ilhamdi Rusydi ◽  
Syafii Syafii ◽  
Rizka Hadelina ◽  
Elmiyasna Kimin ◽  
Agung W. Setiawan ◽  
...  

Hand gesture recognition remains an actively investigated topic because of its many useful applications. This research investigated hand gestures for the sign language numbers zero to nine. Recognition was based on finger direction patterns, with finger directions detected by a Leap Motion Controller (LMC). The finger direction patterns were modeled with two methods: thresholding and an artificial neural network (ANN). Threshold model 1 contained 15 rules based on the range of finger directions on each axis. Threshold model 2 was developed from model 1 based on the behavior of finger movements as the subjects performed the gestures. The ANN model was designed with 15 neurons in the input layer, seven neurons in the first hidden layer, five neurons in the second hidden layer, and four neurons in the output layer, using the logsig activation function. The results show that the first threshold model has the lowest accuracy because its rules are too complicated and rigid. Threshold model 2 improves on model 1, but still needs further development to reach better accuracy. The ANN model gave the best result among the developed models, with 98% accuracy. The LMC produces useful biometric data for hand gesture recognition.
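The ANN topology above (15 inputs, hidden layers of seven and five neurons, four outputs, logsig activation) can be sketched as follows. Logsig is MATLAB's name for the logistic sigmoid; this PyTorch rendering, the reading of the four outputs as a code for the ten digit classes, and the random example input are assumptions for illustration.

```python
# Sketch of the described ANN topology: 15-7-5-4 with logistic sigmoid activations.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(15, 7), nn.Sigmoid(),   # first hidden layer (7 neurons)
    nn.Linear(7, 5), nn.Sigmoid(),    # second hidden layer (5 neurons)
    nn.Linear(5, 4), nn.Sigmoid(),    # 4 output neurons (assumed to encode digits 0-9)
)

features = torch.rand(1, 15)          # one set of finger-direction features from the LMC
digit_code = model(features)          # 4 sigmoid outputs, e.g. thresholded at 0.5
```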

