Hybrid Approach Based on Machine Learning for Hand Shape and Key Point’s Estimation

Author(s):  
Chen Zhongshan ◽  
Feng Xinning ◽  
Oscar Sanjuán Martínez ◽  
Rubén González Crespo

Hand pose estimation is essential in human-computer interaction and virtual reality. Experimental analysis on public datasets with different biometrics shows that a well-designed system produces low estimation errors and creates significant opportunities for new hand pose estimation applications. However, recovering hand structure from photographs is difficult because of pose fluctuations, self-occlusion, and other specific variations. Hence, this paper proposes a hybrid approach based on machine learning (HABoML) to improve hand shape and key point estimation and to strengthen the competitiveness and performance of current methods. The machine learning algorithm is combined with a hybrid pipeline to handle self-occlusion better and to estimate unusual hand shapes and poses. The experimental results define a set of follow-up experiments for the proposed system, which achieves a high level of efficiency and performance. The HABoML strategy reduced the estimation error by 9.33% and is the better solution.
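The abstract does not detail the HABoML architecture. The sketch below is not the authors' implementation; it is a minimal illustration of how a single model might jointly estimate hand key points and shape parameters from an image, assuming a shared convolutional backbone with two regression heads. All class, parameter, and variable names are hypothetical.

```python
import torch
import torch.nn as nn

class HandShapeKeypointNet(nn.Module):
    """Shared image backbone with two heads: one regresses 2D key points,
    the other predicts low-dimensional hand-shape parameters."""

    def __init__(self, num_keypoints=21, num_shape_params=10):
        super().__init__()
        self.num_keypoints = num_keypoints
        # Small convolutional encoder (placeholder for any image backbone).
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.keypoint_head = nn.Linear(128, num_keypoints * 2)  # (x, y) per joint
        self.shape_head = nn.Linear(128, num_shape_params)      # shape parameters

    def forward(self, image):
        feat = self.backbone(image)
        keypoints = self.keypoint_head(feat).view(-1, self.num_keypoints, 2)
        return keypoints, self.shape_head(feat)

# Both heads are trained jointly from the same shared feature.
model = HandShapeKeypointNet()
images = torch.randn(4, 3, 128, 128)                 # batch of hand crops
kp_pred, shape_pred = model(images)
kp_gt, shape_gt = torch.randn(4, 21, 2), torch.randn(4, 10)
loss = nn.functional.mse_loss(kp_pred, kp_gt) + nn.functional.mse_loss(shape_pred, shape_gt)
loss.backward()
```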

2009 ◽  
Vol 21 (6) ◽  
pp. 739-748 ◽  
Author(s):  
Albert Causo ◽  
Etsuko Ueda ◽  
Kentaro Takemura ◽  
Yoshio Matsumoto ◽  
...  

Hand pose estimation with a multi-camera system allows natural, non-contact interfacing, unlike bulky data gloves. To let any user operate the system regardless of gender or physical differences such as hand size, we propose hand model individualization using only multiple cameras. From a calibration motion, our method estimates the finger link lengths as well as the hand shape by minimizing the gap between the hand model and the observation. We confirmed the feasibility of our proposal by comparing 1) actual and estimated link lengths and 2) hand pose estimation results obtained with our calibrated hand model, a prior hand model, and data glove measurements.
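The abstract describes fitting link lengths by minimizing the gap between the hand model and the observation. Below is a minimal sketch of that idea under simplifying assumptions: a planar single-finger kinematic chain and synthetic calibration frames standing in for real multi-camera observations. All names and values are illustrative, not the paper's actual formulation.

```python
import numpy as np
from scipy.optimize import least_squares

def finger_joints(link_lengths, joint_angles):
    """Planar forward kinematics: joint positions of one finger given
    its link lengths and cumulative joint angles."""
    pts, pos, ang = [], np.zeros(2), 0.0
    for length, theta in zip(link_lengths, joint_angles):
        ang += theta
        pos = pos + length * np.array([np.cos(ang), np.sin(ang)])
        pts.append(pos)
    return np.array(pts)                       # shape (3, 2)

# Synthetic "calibration motion": frames with known joint angles and
# noisy observed joint positions (stand-in for multi-camera data).
true_lengths = np.array([4.0, 2.5, 1.8])
rng = np.random.default_rng(0)
frames = [rng.uniform(0.1, 1.2, size=3) for _ in range(20)]
observations = [finger_joints(true_lengths, a) + rng.normal(0, 0.02, (3, 2))
                for a in frames]

def residuals(lengths):
    # Gap between the parameterized hand model and the observations,
    # stacked over all calibration frames.
    return np.concatenate([(finger_joints(lengths, a) - obs).ravel()
                           for a, obs in zip(frames, observations)])

fit = least_squares(residuals, x0=np.ones(3), bounds=(0.1, 10.0))
print("estimated link lengths:", fit.x)        # close to true_lengths
```

The same least-squares structure extends to a full hand model: the parameter vector grows to all finger link lengths and shape parameters, and the residual compares projected model joints against observations from every calibrated camera.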


IEEE Access ◽  
2021 ◽  
Vol 9 ◽  
pp. 10533-10547
Author(s):  
Marek Hruz ◽  
Jakub Kanis ◽  
Zdenek Krnoul

IEEE Access ◽  
2021 ◽  
Vol 9 ◽  
pp. 35824-35833
Author(s):  
Jae-Hun Song ◽  
Suk-Ju Kang

Sensors ◽  
2021 ◽  
Vol 21 (3) ◽  
pp. 1007
Author(s):  
Chi Xu ◽  
Yunkai Jiang ◽  
Jun Zhou ◽  
Yi Liu

Hand gesture recognition and hand pose estimation are two closely correlated tasks. In this paper, we propose a deep-learning-based approach that jointly learns an intermediate-level shared feature for the two tasks, so that hand gesture recognition can benefit from hand pose estimation. In the training process, a semi-supervised training scheme is designed to address the lack of proper annotations. Our approach detects the foreground hand, recognizes the hand gesture, and estimates the corresponding 3D hand pose simultaneously. To evaluate the hand gesture recognition performance of state-of-the-art methods, we propose a challenging hand gesture recognition dataset collected in unconstrained environments. Experimental results show that our gesture recognition accuracy is significantly boosted by leveraging the knowledge learned from the hand pose estimation task.
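A minimal sketch of the joint-learning idea described here, assuming a shared feature extractor feeding a gesture-classification head and a 3D pose-regression head, with a masked pose loss standing in for the semi-supervised handling of missing annotations. The actual network and training details are not specified in this abstract; all names and sizes are hypothetical.

```python
import torch
import torch.nn as nn

# Shared backbone feeds both a gesture classifier and a 3D hand-pose regressor.
backbone = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 256), nn.ReLU())
gesture_head = nn.Linear(256, 10)        # 10 hypothetical gesture classes
pose_head = nn.Linear(256, 21 * 3)       # 21 joints x (x, y, z)

def multitask_loss(images, gesture_labels, pose_labels, has_pose):
    """Joint loss: every sample carries a gesture label, but the 3D pose
    term is only applied where pose annotation exists."""
    feat = backbone(images)
    gesture_loss = nn.functional.cross_entropy(gesture_head(feat), gesture_labels)
    pose_pred = pose_head(feat).view(-1, 21, 3)
    # Mask out unannotated samples instead of discarding them from the batch.
    if has_pose.any():
        pose_loss = nn.functional.mse_loss(pose_pred[has_pose], pose_labels[has_pose])
    else:
        pose_loss = torch.zeros((), device=images.device)
    return gesture_loss + pose_loss

images = torch.randn(8, 3, 64, 64)
gestures = torch.randint(0, 10, (8,))
poses = torch.randn(8, 21, 3)
has_pose = torch.tensor([True, False, True, True, False, True, False, True])
loss = multitask_loss(images, gestures, poses, has_pose)
loss.backward()
```

Sharing the backbone forces both heads to rely on the same intermediate feature, which is how the gesture branch can pick up structural cues learned from the pose-supervised samples.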

