Whole-Body Human-to-Humanoid Motion Imitation

2013 ◽  
Vol 479-480 ◽  
pp. 617-621
Author(s):  
Hsien I Lin ◽  
Zan Sheng Chen

Human-to-humanoid motion imitation is an intuitive way to teach a humanoid robot how to act by human demonstration: for example, teaching a robot how to stand is simply a matter of showing it how a human stands. Much previous work on motion imitation focuses on either upper-body or lower-body imitation. In this paper, we propose a novel approach for a humanoid robot to imitate whole-body human motion. The main problem is how to control the robot's balance while simultaneously keeping its motion as similar as possible to the taught human motion. We therefore propose a balance criterion to assess how well the robot can balance, and use this criterion together with a genetic algorithm to search for a sub-optimal solution that keeps the robot balanced and its motion similar to the human's. We have validated the proposed work on an Aldebaran Robotics NAO robot with 25 degrees of freedom. The experimental results show that the robot can imitate human postures and autonomously keep itself balanced.
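The balance-criterion-plus-genetic-algorithm idea can be sketched with a toy example. Everything below is illustrative, not the authors' implementation: the demonstrated pose, the center-of-mass proxy, the fitness weights, and the GA parameters are all assumptions.

```python
import random

# Hypothetical demonstrated joint angles and support-polygon center.
DEMO_POSE = [0.3, -0.5, 0.4]
SUPPORT_CENTER = 0.0

def com_x(q):
    # Crude stand-in for a forward-kinematics CoM projection.
    return sum(q) / len(q)

def fitness(q, w_balance=10.0, w_similarity=1.0):
    # Lower is better: penalize CoM drift plus deviation from the demo pose.
    balance_err = (com_x(q) - SUPPORT_CENTER) ** 2
    similarity_err = sum((a - b) ** 2 for a, b in zip(q, DEMO_POSE))
    return w_balance * balance_err + w_similarity * similarity_err

def evolve(pop_size=60, generations=100, sigma=0.05, seed=1):
    rng = random.Random(seed)
    pop = [[rng.uniform(-1, 1) for _ in DEMO_POSE] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        parents = pop[: pop_size // 2]          # elitist selection
        children = []
        while len(children) < pop_size - len(parents):
            p1, p2 = rng.sample(parents, 2)
            cut = rng.randrange(len(DEMO_POSE))  # one-point crossover
            child = [g + rng.gauss(0, sigma)     # Gaussian mutation
                     for g in p1[:cut] + p2[cut:]]
            children.append(child)
        pop = parents + children
    return min(pop, key=fitness)

best = evolve()
```

Because the elites are carried over unchanged, the best fitness never worsens; the search settles on a pose close to the demonstration whose CoM proxy sits near the support center.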

Author(s):  
Eiichi Yoshida

This article provides a brief overview of the technology of humanoid robots. First, historical development and hardware progress are presented, mainly for human-size full-body biped humanoid robots, together with progress in pattern generation for biped locomotion. Then "whole-body motion", the coordination of leg and arm movements to fully leverage a humanoid's high degrees of freedom, is presented, followed by its applications in fields such as device evaluation and large-scale assembly. Upper-body humanoids with a mobile base, mainly used for research on human-robot interaction and cognitive robotics, are also introduced before current issues and perspectives are addressed.


Robotica ◽  
2004 ◽  
Vol 22 (5) ◽  
pp. 577-586 ◽  
Author(s):  
Hun-ok Lim ◽  
Akinori Ishii ◽  
Atsuo Takanishi

This paper describes emotion-based walking for a biped humanoid robot. Three emotions are considered: happiness, sadness, and anger. These emotions are expressed through the walking styles of the biped humanoid robot, which are preset by parameterizing its whole-body motion. To keep the robot balanced during emotional expression, trunk motion is employed, calculated by compensatory motion control based on the motions of the head, arms, and legs. We have constructed a biped humanoid robot, WABIAN-RII (WAseda BIpedal humANoid robot-Revised II), to explore emotional walking motion for smooth and natural communication. WABIAN-RII has forty-three mechanical degrees of freedom and four passive degrees of freedom; its height is about 1.84 m and its total weight is 127 kg. Using WABIAN-RII, the three emotional expressions are tested through biped walking, including whole-body motion, and evaluated.


2019 ◽  
Vol 9 (4) ◽  
pp. 752 ◽  
Author(s):  
Junhua Gu ◽  
Chuanxin Lan ◽  
Wenbai Chen ◽  
Hu Han

While remarkable progress has been made in pedestrian detection in recent years, robust pedestrian detection in the wild, e.g., under surveillance scenarios with occlusions, remains a challenging problem. In this paper, we present a novel approach for joint pedestrian and body part detection via semantic relationship learning under unconstrained scenarios. Specifically, we propose a Body Part Indexed Feature (BPIF) representation to encode the semantic relationship between individual body parts (i.e., head, head-shoulder, upper body, and whole body) and highlight per-body-part features, providing robustness against partial occlusions of the whole body. We also propose an Adaptive Joint Non-Maximum Suppression (AJ-NMS) algorithm to replace the original NMS widely used in object detection, leading to higher precision and recall when detecting overlapping pedestrians. Experimental results on the public-domain CUHK-SYSU Person Search Dataset show that the proposed approach outperforms state-of-the-art methods for joint pedestrian and body part detection in the wild.
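For context, the standard greedy NMS that AJ-NMS replaces can be sketched in a few lines. The box format, scores, and IoU threshold below are illustrative defaults, not values from the paper.

```python
def iou(a, b):
    # Boxes as (x1, y1, x2, y2); returns intersection-over-union.
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def nms(boxes, scores, iou_thresh=0.5):
    # Greedily keep the highest-scoring box, drop boxes that overlap it.
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order if iou(boxes[best], boxes[i]) < iou_thresh]
    return keep
```

This greedy scheme is exactly what causes missed detections for heavily overlapped pedestrians: a true neighboring person whose box overlaps a kept box beyond the threshold is discarded, which is the failure mode an adaptive joint NMS aims to address.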


Author(s):  
Kondalarao Bhavanibhatla ◽  
Sulthan Suresh-Fazeela ◽  
Dilip Kumar Pratihar

In this paper, a novel algorithm is presented to achieve coordinated motion planning of a Legged Mobile Manipulator (LMM) for tracking a given end-effector trajectory. An LMM robotic system is obtained by mounting a manipulator on top of a multi-legged platform, combining the capabilities of manipulation and mobility. To exploit both capabilities, the manipulator should be able to accomplish its task while the hexapod platform moves simultaneously. In the presented approach, whole-body motion planning is achieved in two steps. In the first step, the robotic system is treated as a mobile manipulator in which the manipulator has two additional translational degrees of freedom at the base, and the redundancy of this system is resolved by treating it as an optimization problem. In the second step, the omnidirectional motion of the legged platform is achieved with a combination of forward and crab (sideways) motions. The proposed algorithm is tested in a numerical simulation in MATLAB and then validated on a virtual model of the robot in the multibody dynamics simulation software MSC ADAMS. Multiple end-effector trajectories have been tested, and the results show that the proposed algorithm accomplishes the given task successfully while providing singularity-free whole-body motion.
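The first-step idea, resolving a redundant system by optimization, can be illustrated on a toy 3-link planar arm: a 2-D target leaves one extra degree of freedom, so a solution is picked by penalizing deviation from a reference posture. The link lengths, cost weights, and plain finite-difference gradient descent are all assumptions for illustration, not the authors' formulation.

```python
import math

L = [1.0, 1.0, 1.0]  # link lengths (assumed)

def fk(q):
    # Planar forward kinematics: end-effector (x, y) for joint angles q.
    x = y = s = 0.0
    for qi, li in zip(q, L):
        s += qi
        x += li * math.cos(s)
        y += li * math.sin(s)
    return x, y

def cost(q, target, q_ref, w=0.01):
    # Tracking error plus a small penalty for leaving the reference posture.
    x, y = fk(q)
    track = (x - target[0]) ** 2 + (y - target[1]) ** 2
    posture = sum((a - b) ** 2 for a, b in zip(q, q_ref))
    return track + w * posture

def resolve_redundancy(target, q_ref, lr=0.02, iters=4000, eps=1e-6):
    # Pick one of the infinitely many IK solutions by gradient descent.
    q = list(q_ref)
    for _ in range(iters):
        base = cost(q, target, q_ref)
        grad = []
        for j in range(len(q)):
            qj = q[:]
            qj[j] += eps
            grad.append((cost(qj, target, q_ref) - base) / eps)
        q = [qi - lr * g for qi, g in zip(q, grad)]
    return q
```

The small posture weight keeps the selected solution unique and close to a preferred configuration, which is the usual way an optimization-based formulation disambiguates a redundant kinematic chain.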


Robotics ◽  
2020 ◽  
Vol 9 (2) ◽  
pp. 33
Author(s):  
Elisa Digo ◽  
Mattia Antonelli ◽  
Valerio Cornagliotto ◽  
Stefano Pastorelli ◽  
Laura Gastaldi

(1) Background: The technologies of Industry 4.0 are increasingly promoting human motion prediction as a means of improving collaboration between workers and robots. The purposes of this study were to fuse spatial and inertial data of the human upper limbs during typical industrial pick-and-place movements and to analyze the collected features from the perspective of future collaborative robotic applications and human motion prediction algorithms. (2) Methods: Inertial Measurement Units and a stereophotogrammetric system were adopted to track the upper-body motion of 10 healthy young subjects performing pick-and-place operations at three different heights. From the resulting database, 10 features were selected and used to distinguish among pick-and-place gestures at the different heights. Classification performance was evaluated by estimating confusion matrices and F1-scores. (3) Results: Values on the matrices' diagonals were markedly greater than those elsewhere. Furthermore, F1-scores were very high in most cases. (4) Conclusions: Upper-arm longitudinal acceleration and the marker coordinates of the wrists and elbows can be considered representative features of pick-and-place gestures at different heights, and they are consequently suitable for defining a human motion prediction algorithm for effective collaborative robotics applications in industry.
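The evaluation step, per-class F1 from a confusion matrix, reduces to a few lines. The 3x3 example matrix in the usage below is made up to mirror the three pick heights; it is not data from the study.

```python
def per_class_f1(confusion):
    # confusion[i][j] = count of samples with true class i predicted as class j.
    n = len(confusion)
    scores = []
    for k in range(n):
        tp = confusion[k][k]
        fp = sum(confusion[i][k] for i in range(n)) - tp   # column minus diagonal
        fn = sum(confusion[k]) - tp                        # row minus diagonal
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1 = (2 * precision * recall / (precision + recall)
              if precision + recall else 0.0)
        scores.append(f1)
    return scores

# Hypothetical matrix for three gesture classes (e.g., low/mid/high pick):
example = [[8, 1, 1],
           [0, 9, 1],
           [1, 0, 9]]
f1s = per_class_f1(example)
```

A dominant diagonal, as reported in the abstract, directly yields high per-class F1-scores, since both the row (recall) and column (precision) mass concentrate on the true class.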

