Model-based articulated hand motion tracking for gesture recognition

1998 ◽  
Vol 16 (2) ◽  
pp. 121-134 ◽  
Author(s):  
Cheng-Chang Lien ◽  
Chung-Lin Huang


Author(s):  
Jen-Hsuan Hsiao ◽  
Yu-Heng Deng ◽  
Tsung-Ying Pao ◽  
Hsin-Rung Chou ◽  
Jen-Yuan (James) Chang

Hand motion tracking and gesture recognition are of crucial interest to the development of virtual reality systems and controllers. In this paper, a wireless data glove that can accurately sense a hand's dynamic movements and gestures in different modes is proposed. The data glove was custom-built from flex and inertial sensors and a microcontroller with a multi-channel ADC (analog-to-digital converter). For the classification algorithm, a hierarchical gesture-recognition system using a Naïve Bayes classifier was built. This low-training-time recognition algorithm categorizes all input signals into commands such as clicking, pointing, dragging, rotating, and switching functions when performing computer control. The glove provides a more intuitive way to interact with human-computer interfaces. Preliminary experimental results are presented in this paper, and the data glove was also operated as a controller in a First-Person Shooter (FPS) game to demonstrate its usability.
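The classification step described above can be illustrated with a minimal Gaussian Naïve Bayes sketch. The feature vectors, gesture labels, and training samples below are hypothetical stand-ins; the paper's actual sensor features (flex and inertial readings acquired via the ADC) and gesture hierarchy are not specified here.

```python
import math
from collections import defaultdict

def fit(samples):
    """samples: list of (feature_tuple, label). Returns per-class
    means, variances, and priors for a Gaussian Naive Bayes model."""
    by_label = defaultdict(list)
    for x, y in samples:
        by_label[y].append(x)
    model, total = {}, len(samples)
    for y, xs in by_label.items():
        n, dims = len(xs), len(xs[0])
        means = [sum(x[d] for x in xs) / n for d in range(dims)]
        # Small floor on the variance avoids division by zero.
        varis = [sum((x[d] - means[d]) ** 2 for x in xs) / n + 1e-6
                 for d in range(dims)]
        model[y] = (means, varis, n / total)
    return model

def predict(model, x):
    """Return the label with the highest log-posterior for x."""
    best, best_lp = None, float("-inf")
    for y, (means, varis, prior) in model.items():
        lp = math.log(prior)
        for d, v in enumerate(x):
            lp += (-0.5 * math.log(2 * math.pi * varis[d])
                   - (v - means[d]) ** 2 / (2 * varis[d]))
        if lp > best_lp:
            best, best_lp = y, lp
    return best

# Hypothetical 2-D features: (mean flex reading, mean angular rate).
train = [((0.9, 0.1), "click"), ((0.8, 0.2), "click"),
         ((0.1, 0.9), "drag"), ((0.2, 0.8), "drag")]
model = fit(train)
print(predict(model, (0.85, 0.15)))  # prints "click"
```

Naïve Bayes fits the paper's low-training-time claim: fitting reduces to computing per-class means and variances in a single pass over the data.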


Author(s):  
Hansol Rheem ◽  
David V. Becker ◽  
Scotty D. Craig

Electronics ◽  
2021 ◽  
Vol 10 (5) ◽  
pp. 534
Author(s):  
Huogen Wang

The paper proposes an effective continuous gesture recognition method, which includes two modules: segmentation and recognition. In the segmentation module, the video frames are divided into gesture frames and transitional frames using hand motion and appearance information, and continuous gesture sequences are segmented into isolated sequences. In the recognition module, the method exploits the spatiotemporal information embedded in RGB and depth sequences. For the RGB modality, it adopts Convolutional Long Short-Term Memory networks to learn long-term spatiotemporal features from the short-term spatiotemporal features produced by a 3D convolutional neural network. For the depth modality, it converts each sequence into Dynamic Images and Motion Dynamic Images through weighted rank pooling and feeds them into Convolutional Neural Networks, respectively. The method has been evaluated on both the ChaLearn LAP Large-scale Continuous Gesture Dataset and the Montalbano Gesture Dataset, achieving state-of-the-art performance.
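The dynamic-image step can be sketched with approximate rank pooling, which collapses a frame sequence into a single weighted sum. The coefficients below follow the common approximation alpha_t = 2t - T - 1 (earlier frames weighted negatively, later ones positively); the paper's exact weighting scheme is not given here, so this is an illustrative stand-in, with frames represented as flat pixel lists.

```python
def approximate_rank_pooling(frames):
    """Collapse a sequence of equal-length flat pixel lists into one
    pooled frame (a "dynamic image") via approximate rank pooling."""
    T = len(frames)
    pooled = [0.0] * len(frames[0])
    for t, frame in enumerate(frames, start=1):
        alpha = 2 * t - T - 1  # early frames negative, late frames positive
        for i, px in enumerate(frame):
            pooled[i] += alpha * px
    return pooled

# Three 2-pixel "frames" with rising intensity; motion shows up as
# positive values in the pooled image.
pooled = approximate_rank_pooling([[1.0, 1.0], [2.0, 2.0], [3.0, 3.0]])
print(pooled)  # prints [4.0, 4.0]
```

The pooled frame encodes the temporal ordering of the sequence in a single image, which is what lets an ordinary 2-D CNN consume it downstream.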


Author(s):  
JAMES DAVIS ◽  
MUBARAK SHAH

This paper presents a glove-free method for tracking hand movements using a set of 3-D models. In this approach, the hand is represented by five cylindrical models fitted to the third phalangeal segments of the fingers. Six 3-D motion parameters are calculated for each model, corresponding to the movement of the fingertips in the image plane. Trajectories of the moving models are then established to show the 3-D nature of the hand motion.
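The six motion parameters per model (three rotations, three translations) amount to a rigid transform. A minimal sketch of applying such a transform to a fingertip point follows; the rotation order (x, then y, then z) and the function names are illustrative assumptions, not the paper's notation.

```python
import math

def rot_x(p, a):
    x, y, z = p
    return (x, y * math.cos(a) - z * math.sin(a),
            y * math.sin(a) + z * math.cos(a))

def rot_y(p, a):
    x, y, z = p
    return (x * math.cos(a) + z * math.sin(a), y,
            -x * math.sin(a) + z * math.cos(a))

def rot_z(p, a):
    x, y, z = p
    return (x * math.cos(a) - y * math.sin(a),
            x * math.sin(a) + y * math.cos(a), z)

def apply_motion(p, rx, ry, rz, tx, ty, tz):
    """Apply the six rigid-motion parameters to point p:
    rotate about x, y, z in turn, then translate."""
    p = rot_z(rot_y(rot_x(p, rx), ry), rz)
    return (p[0] + tx, p[1] + ty, p[2] + tz)

# A fingertip at (1, 0, 0) rotated 90 degrees about z and lifted by 1.
tip = apply_motion((1.0, 0.0, 0.0), 0.0, 0.0, math.pi / 2, 0.0, 0.0, 1.0)
```

Accumulating such per-frame transforms over time yields the fingertip trajectories the abstract refers to.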

