User-Adaptable Hand Pose Estimation Technique for Human-Robot Interaction

2009 ◽  
Vol 21 (6) ◽  
pp. 739-748 ◽  
Author(s):  
Albert Causo ◽  
Etsuko Ueda ◽  
Kentaro Takemura ◽  
Yoshio Matsumoto ◽  
...  

Hand pose estimation using a multi-camera system allows natural, non-contact interfacing, unlike bulky data gloves. To enable any user to use the system regardless of gender or physical differences such as hand size, we propose hand model individualization using only multiple cameras. From a calibration motion, our method estimates the finger link lengths as well as the hand shape by minimizing the gap between the hand model and the observation. We confirmed the feasibility of our proposal by comparing 1) actual and estimated link lengths and 2) hand pose estimation results obtained with our calibrated hand model, a prior hand model, and data glove measurements.
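The link-length estimation described above can be illustrated with a minimal sketch. This is not the paper's algorithm: it assumes a hypothetical planar two-link finger whose joint angles are known per calibration frame, in which case the fingertip position is linear in the link lengths and they can be recovered by least squares against the observed positions.

```python
import math

def fingertip(lengths, angles):
    # Forward kinematics of a planar chain: sum link vectors at
    # cumulative joint angles.
    x = y = 0.0
    cum = 0.0
    for L, a in zip(lengths, angles):
        cum += a
        x += L * math.cos(cum)
        y += L * math.sin(cum)
    return x, y

def calibrate(frames, observations):
    # Minimize the model-vs-observation gap in closed form by solving
    # the 2x2 normal equations (A^T A) l = A^T b for two link lengths.
    ata = [[0.0, 0.0], [0.0, 0.0]]
    atb = [0.0, 0.0]
    for angles, (ox, oy) in zip(frames, observations):
        a1, a2 = angles[0], angles[0] + angles[1]
        rows = [(math.cos(a1), math.cos(a2), ox),
                (math.sin(a1), math.sin(a2), oy)]
        for c1, c2, b in rows:
            ata[0][0] += c1 * c1; ata[0][1] += c1 * c2
            ata[1][0] += c2 * c1; ata[1][1] += c2 * c2
            atb[0] += c1 * b;     atb[1] += c2 * b
    det = ata[0][0] * ata[1][1] - ata[0][1] * ata[1][0]
    l1 = (atb[0] * ata[1][1] - ata[0][1] * atb[1]) / det
    l2 = (ata[0][0] * atb[1] - atb[0] * ata[1][0]) / det
    return l1, l2

# Synthetic calibration motion with true link lengths (4.0, 3.0).
true_len = (4.0, 3.0)
frames = [(0.1 * k, 0.2 * k) for k in range(1, 6)]
obs = [fingertip(true_len, f) for f in frames]
print(calibrate(frames, obs))  # recovers values close to (4.0, 3.0)
```

In the paper the observation is multi-camera image data and the model is a full 3D hand, so the real objective is nonlinear; the sketch only shows the calibrate-by-minimizing-the-gap idea in its simplest linear form.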

2008 ◽  
Vol 2008 (0) ◽  
pp. _2P2-E16_1-_2P2-E16_2
Author(s):  
Mai MATSUO ◽  
Etsuko UEDA ◽  
Yoshio MATSUMOTO ◽  
Tsukasa OGASAWARA

Author(s):  
Chen Zhongshan ◽  
Feng Xinning ◽  
Oscar Sanjuán Martínez ◽  
Rubén González Crespo

Hand pose estimation is essential in human-computer interaction and virtual reality. Experimental analysis on public biometric datasets shows that a well-designed system yields low pose estimation errors and opens significant opportunities for new hand pose estimation applications. However, the structure of hand images is difficult to analyze owing to fluctuations, self-occlusion, and specific modulations. Hence, this paper proposes a hybrid approach based on machine learning (HABoML) to improve the competitiveness, runtime performance, and experimental analysis of hand shape and key point estimation. The machine learning algorithm is combined with a hybrid approach to better compensate for self-occlusion and to improve hand shape and pose estimation. The experimental results define a set of follow-up experiments for the proposed system, which achieved a high level of efficiency and performance. The HABoML strategy reduced analysis error by 9.33% and is therefore a better solution.


2021 ◽  
Author(s):  
Digang Sun ◽  
Ping Zhang ◽  
Mingxuan Chen ◽  
Jiaxin Chen

As an increasing number of robots are employed in manufacturing, a human-robot interaction method that can teach robots in a natural, accurate, and rapid manner is needed. In this paper, we propose a novel human-robot interface based on the combination of static hand gestures and hand poses. In our proposed interface, the pointing direction of the index finger and the orientation of the whole hand are extracted to indicate the moving direction and orientation of the robot in a fast-teaching mode. A set of hand gestures is designed according to their usage in humans' daily life and recognized to control the position and orientation of the robot in a fine-teaching mode. We employ the feature extraction ability of the hand pose estimation network via transfer learning and utilize attention mechanisms to improve the performance of the hand gesture recognition network. The inputs of the hand pose estimation and hand gesture recognition networks are monocular RGB images, making our method independent of depth information and applicable to more scenarios. In regular shape reconstruction experiments on the UR3 robot, the mean error of the reconstructed shape is less than 1 mm, which demonstrates the effectiveness and efficiency of our method.
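The two teaching modes described above can be sketched as follows. This is a hypothetical illustration, not the paper's implementation: the keypoint names, the gesture-to-command mapping, and the `teach` helper are all assumptions, showing only how a pointing direction could be derived from two index-finger keypoints (fast-teaching) and how recognized static gestures could map to commands (fine-teaching).

```python
import math

# Illustrative gesture-to-command table (not the paper's gesture set).
GESTURE_COMMANDS = {
    "fist": "stop",
    "thumb_up": "step +z",
    "thumb_down": "step -z",
}

def pointing_direction(mcp, tip):
    # Unit vector from the index-finger MCP joint to the fingertip,
    # used as the robot's moving direction in fast-teaching mode.
    v = [t - m for m, t in zip(mcp, tip)]
    n = math.sqrt(sum(c * c for c in v))
    return [c / n for c in v]

def teach(mode, **kw):
    # Fast-teaching: continuous direction from hand pose keypoints.
    # Fine-teaching: discrete command from a recognized static gesture.
    if mode == "fast":
        return pointing_direction(kw["mcp"], kw["tip"])
    return GESTURE_COMMANDS.get(kw["gesture"], "hold")

print(teach("fast", mcp=(0, 0, 0), tip=(0, 3, 4)))  # [0.0, 0.6, 0.8]
print(teach("fine", gesture="fist"))                # stop
```

In the actual system the keypoints would come from the monocular RGB hand pose estimation network and the gesture label from the attention-based recognition network; the sketch only shows how their outputs could drive the two modes.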




Author(s):  
Albert Causo ◽  
Mai Matsuo ◽  
Etsuko Ueda ◽  
Kentaro Takemura ◽  
Yoshio Matsumoto ◽  
...  

2009 ◽  
Vol 21 (6) ◽  
pp. 749-757 ◽  
Author(s):  
Kiyoshi Hoshino ◽  
Motomasa Tomida

The three-dimensional hand pose estimation method proposed in this paper uses a single camera to search a large database for the hand image most similar to the input. It starts with a coarse screening based on proportional information of the hand image, corresponding roughly to forearm or hand rotation and thumb or finger bending. Next, a detailed similarity search is made among the selected candidates. No separate processes are needed to estimate the joint angles describing the wrist's rotation, flexion/extension, and abduction/adduction motions. By estimating sequential hand images this way, we kept joint angle estimation error within several degrees, even while the wrist was freely rotating, at 80 fps using only a notebook PC and a high-speed camera, regardless of hand size and shape.
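The coarse-to-fine database search described above can be sketched in a few lines. This is a hypothetical simplification, not the paper's system: it assumes each database entry carries a cheap scalar proportion feature for screening, a full feature vector for the detailed comparison, and the associated pose label to return.

```python
def coarse_fine_search(query, database, coarse_k=3):
    # database: list of (proportion_feature, full_feature, pose_label).
    # Coarse screening: keep the entries whose cheap proportion feature
    # is closest to the query's, avoiding a full scan at fine detail.
    ranked = sorted(database, key=lambda e: abs(e[0] - query[0]))
    candidates = ranked[:coarse_k]

    # Fine search: squared Euclidean distance on the full feature
    # vector, computed only for the surviving candidates.
    def dist(e):
        return sum((a - b) ** 2 for a, b in zip(e[1], query[1]))

    best = min(candidates, key=dist)
    return best[2]  # pose associated with the most similar image

# Toy database of (proportion feature, full feature, pose label).
db = [
    (0.2, [0.2, 0.1, 0.9], "open hand"),
    (0.5, [0.5, 0.4, 0.2], "half fist"),
    (0.9, [0.9, 0.8, 0.1], "fist"),
]
print(coarse_fine_search((0.55, [0.5, 0.45, 0.25]), db))  # half fist
```

The real system compares hand images rather than toy vectors and returns joint angles rather than labels, but the two-stage screen-then-refine structure is the same.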

