User adaptive hand gesture recognition using multivariate fuzzy decision tree and fuzzy garbage model

Author(s): Moon-Jin Jeon, Seung-Eun Yang, Zeungnam Bien

2011 · Vol 1 (3) · pp. 15-31
Author(s): Moon-Jin Jeon, Sang Wan Lee, Zeungnam Bien

As an emerging human-computer interaction (HCI) technology, recognition of human hand gestures is considered a very powerful means of reading human intention. To construct a system with a reliable and robust hand gesture recognition algorithm, it is necessary to resolve several major difficulties of hand gesture recognition, such as inter-person variation, intra-person variation, and false positive errors caused by meaningless hand gestures. This paper proposes a learning algorithm and a classification technique based on a multivariate fuzzy decision tree (MFDT). Efficient control of the fuzzified decision boundary in the MFDT reduces intra-person variation, while proper selection of a user dependent (UD) recognition model minimizes inter-person variation. The proposed method is tested first on two benchmark data sets from the UCI Machine Learning Repository and then on a hand gesture data set obtained from 10 people over 15 days. The experimental results show discernibly enhanced classification performance as well as the user adaptation capability of the proposed algorithm.
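The fuzzified decision boundary the abstract credits with absorbing intra-person variation can be illustrated with a minimal sketch. This is not the paper's actual membership function (which is not given here); it assumes a sigmoid membership with a hypothetical `steepness` parameter controlling the width of the fuzzy region around a split threshold.

```python
import math

def fuzzy_membership(x, threshold, steepness=4.0):
    """Degree (0..1) to which x belongs to the 'right' branch of a
    fuzzified split node, falling off smoothly near the threshold."""
    return 1.0 / (1.0 + math.exp(-steepness * (x - threshold)))

def fuzzy_split(x, threshold, left_label, right_label, steepness=4.0):
    """Return (label, membership) for both branches of the split.
    Unlike a crisp decision tree, both branches receive nonzero weight."""
    mu_right = fuzzy_membership(x, threshold, steepness)
    return [(left_label, 1.0 - mu_right), (right_label, mu_right)]

# A sample exactly on the boundary belongs equally to both branches;
# lowering `steepness` widens the fuzzy region, softening the effect
# of small feature shifts between repetitions of the same gesture.
print(fuzzy_split(5.0, 5.0, "gesture_A", "gesture_B"))
```

In a full MFDT these per-node memberships would be combined down each root-to-leaf path, so a sample near a boundary contributes to several leaves instead of switching class abruptly.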


2013 · Vol 16 (12) · pp. 1393-1402
Author(s): Guochao Chang, Jaewan Park, Chimin Oh, Chilwoo Lee

2020 · Vol 17 (4) · pp. 497-506
Author(s): Sunil Patel, Ramji Makwana

Automatic classification of dynamic hand gestures is challenging due to the large diversity within each gesture class, low resolution, and the fact that gestures are performed with the fingers. These challenges have drawn many researchers to the area. Recently, deep neural networks have been used for implicit feature extraction, with a softmax layer for classification. In this paper, we propose a method based on a two-dimensional convolutional neural network that performs detection and classification of hand gestures simultaneously from multimodal Red, Green, Blue, Depth (RGBD) and optical-flow data, and passes these features to a Long Short-Term Memory (LSTM) recurrent network for frame-to-frame probability generation, with a Connectionist Temporal Classification (CTC) network for loss calculation. We calculate optical flow from the Red, Green, Blue (RGB) data to capture the motion information present in the video. The CTC model efficiently evaluates all possible alignments of a hand gesture via dynamic programming and checks frame-to-frame consistency of visual similarity in the unsegmented input stream. The CTC network finds the most probable frame sequence for a gesture class; the frame with the highest probability value is selected from the CTC network by max decoding. The entire network is trained end-to-end with CTC loss for gesture recognition. We use the challenging Vision for Intelligent Vehicles and Applications (VIVA) dataset for dynamic hand gesture recognition, captured with RGB and depth data. On the VIVA dataset, our proposed hand gesture recognition technique outperforms competing state-of-the-art algorithms, achieving an accuracy of 86%.
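The "max decoding" step described above can be sketched as standard greedy CTC decoding: take the highest-probability symbol per frame, collapse consecutive repeats, and drop the CTC blank. The class indices and blank id below are illustrative, not taken from the paper.

```python
def ctc_max_decode(frame_probs, blank=0):
    """Greedy CTC decoding over per-frame probability rows: argmax each
    frame, collapse consecutive repeats, remove the blank symbol."""
    best = [max(range(len(row)), key=row.__getitem__) for row in frame_probs]
    decoded, prev = [], None
    for sym in best:
        if sym != prev and sym != blank:
            decoded.append(sym)
        prev = sym
    return decoded

# Frames: blank, class 2, class 2, blank, class 1
probs = [
    [0.9, 0.05, 0.05],
    [0.1, 0.2, 0.7],
    [0.1, 0.1, 0.8],
    [0.8, 0.1, 0.1],
    [0.2, 0.7, 0.1],
]
print(ctc_max_decode(probs))  # -> [2, 1]
```

Full CTC training instead sums over all alignments via dynamic programming, which is what makes end-to-end training on unsegmented streams tractable; greedy max decoding is only the inference-time shortcut.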


2020 · Vol 29 (6) · pp. 1153-1164
Author(s): Qianyi Xu, Guihe Qin, Minghui Sun, Jie Yan, Huiming Jiang, ...
