A Robust Hand Silhouette Orientation Detection Method for Hand Gesture Recognition

2021 ◽  
Vol 56 (3) ◽  
pp. 43-52
Author(s):  
Andi Sunyoto

The computer vision approach is the most widely used in research on hand gesture recognition, and detecting image orientation has proven to be one of the keys to its success. A hand's degrees of freedom determine the shape and orientation of a gesture, which complicates recognition. This article evaluates orientation detection for static silhouette hand gestures with different poses and orientations, without considering the forearm. Two popular methods, the longest chord and the ellipse fit, were compared. The angle formed by two wrist points, measured from the horizontal axis, served as ground truth. Performance was analyzed using the error between the ground-truth angle and each method's result; errors closer to zero were rated better. The methods were evaluated on 1187 images, divided into four groups based on forearm presence, and the results showed the forearm's effect on orientation detection. The ellipse method proved better than the longest chord. These results can be used to select a hand gesture orientation detection method to increase accuracy in the hand gesture recognition process.
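The ellipse method the abstract favors is commonly implemented by fitting an equivalent ellipse to the silhouette via second-order central image moments; the major-axis angle then gives the orientation relative to the horizontal axis. A minimal sketch of that standard moment-based fit (an assumption about the paper's exact implementation, which is not specified here):

```python
import numpy as np

def silhouette_orientation(mask):
    """Estimate silhouette orientation in degrees from the horizontal axis
    by fitting an equivalent ellipse using second-order central moments.
    `mask` is a 2-D binary array where nonzero pixels belong to the hand."""
    ys, xs = np.nonzero(mask)
    xc, yc = xs.mean(), ys.mean()
    mu20 = ((xs - xc) ** 2).mean()   # variance along x
    mu02 = ((ys - yc) ** 2).mean()   # variance along y
    mu11 = ((xs - xc) * (ys - yc)).mean()  # covariance
    # Major-axis angle of the equivalent ellipse.
    theta = 0.5 * np.arctan2(2.0 * mu11, mu20 - mu02)
    return np.degrees(theta)
```

Evaluated against a ground-truth wrist angle, the error metric described in the abstract is simply the absolute difference between this angle and the ground-truth angle.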

2020 ◽  
Vol 17 (4) ◽  
pp. 497-506
Author(s):  
Sunil Patel ◽  
Ramji Makwana

Automatic classification of dynamic hand gestures is challenging due to the large diversity within gesture classes, low resolution, and the fact that gestures are performed with the fingers; these challenges have drawn many researchers to the area. Recently, deep neural networks have been used for implicit feature extraction, with a softmax layer for classification. In this paper, we propose a method based on a two-dimensional convolutional neural network that performs detection and classification of hand gestures simultaneously from multimodal Red, Green, Blue, Depth (RGBD) and optical-flow data, and passes these features to a Long Short-Term Memory (LSTM) recurrent network for frame-to-frame probability generation, with a Connectionist Temporal Classification (CTC) network for loss calculation. Optical flow is computed from the Red, Green, Blue (RGB) data to capture the motion information present in the video. The CTC model efficiently evaluates all possible alignments of a hand gesture via dynamic programming and checks frame-to-frame consistency of the gesture's visual appearance in the unsegmented input stream. The CTC network finds the most probable frame sequence for a gesture class, and the frames with the highest probability are selected by max decoding. The entire network is trained end-to-end with CTC loss for gesture recognition. We evaluate on the challenging Vision for Intelligent Vehicles and Applications (VIVA) dataset for dynamic hand gesture recognition, captured with RGB and depth data. On the VIVA dataset, the proposed hand gesture recognition technique outperforms competing state-of-the-art algorithms and achieves an accuracy of 86%.
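The "max decoding" step the abstract mentions is typically greedy (best-path) CTC decoding: take the most probable class per frame, collapse consecutive repeats, then drop blanks. A minimal sketch under the common assumption that the blank label is index 0 (the paper's exact decoding details are not given here):

```python
import numpy as np

BLANK = 0  # assumed index of the CTC blank label

def ctc_greedy_decode(frame_probs):
    """Greedy (best-path) CTC decoding of per-frame class probabilities.
    `frame_probs` has shape (num_frames, num_classes). Picks the argmax
    class per frame, collapses consecutive repeats, and removes blanks,
    yielding the decoded gesture label sequence."""
    best = frame_probs.argmax(axis=1)  # most probable class per frame
    collapsed = [best[0]] + [c for prev, c in zip(best, best[1:]) if c != prev]
    return [int(c) for c in collapsed if c != BLANK]
```

Training, by contrast, uses the full CTC loss, which sums over all frame-to-label alignments via dynamic programming rather than committing to the single best path.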


2020 ◽  
Vol 29 (6) ◽  
pp. 1153-1164
Author(s):  
Qianyi Xu ◽  
Guihe Qin ◽  
Minghui Sun ◽  
Jie Yan ◽  
Huiming Jiang ◽  
...  

2021 ◽  
pp. 108044
Author(s):  
Fangtai Guo ◽  
Zaixing He ◽  
Shuyou Zhang ◽  
Xinyue Zhao ◽  
Jinhui Fang ◽  
...  

Author(s):  
Sruthy Skaria ◽  
Da Huang ◽  
Akram Al-Hourani ◽  
Robin J. Evans ◽  
Margaret Lech
