Design of a soft upper body robot for physical human-robot interaction

Author(s):  
Alexander Alspach ◽  
Joohyung Kim ◽  
Katsu Yamane
2013 ◽  
Vol 10 (02) ◽  
pp. 1350017 ◽  
Author(s):  
Ho Seok Ahn ◽  
Dong-Wook Lee ◽  
Dongwoon Choi ◽  
Duk-Yeon Lee ◽  
Ho-Gil Lee ◽  
...  

Human-like appearance and movement of social robots are important in human–robot interaction. This paper presents the hardware mechanism and software architecture of an incarnate announcing robot system called EveR-1. EveR-1 is a robot platform for implementing and testing emotional expressions and human–robot interactions. EveR-1 is not bipedal but sits on a chair and communicates information by moving its upper body. The skin of the head and upper body is made of silicone jelly to give a human-like texture. To express human-like emotion, it uses body gestures as well as facial expressions determined by a personality model. EveR-1 provides guidance services at an exhibition, narrates fairy tales, and holds simple conversations with humans.


Author(s):  
Eiichi Yoshida

This article provides a brief overview of the technology of humanoid robots. First, historical development and hardware progress are presented, mainly for human-size full-body biped humanoid robots, together with progress in pattern generation for biped locomotion. Then, "whole-body motion" – coordinating leg and arm movements to fully leverage humanoids' high degrees of freedom – is presented, followed by its applications in fields such as device evaluation and large-scale assembly. Upper-body humanoids with a mobile base, which are mainly used for research on human-robot interaction and cognitive robotics, are also introduced before current issues and perspectives are addressed.


2011 ◽  
Vol 08 (01) ◽  
pp. 127-146 ◽  
Author(s):  
Ilaria Renna ◽  
Ryad Chellali ◽  
Catherine Achard

This article presents an algorithm for 3D upper body tracking. The algorithm combines two well-known methods: the annealed particle filter and belief propagation. It is worth underlining that 3D body tracking is a challenging problem because of the high dimensionality of the state space and the resulting computational cost. In this work, we show that our algorithm can tackle this problem. Experiments on both real and synthetic human gesture sequences demonstrate that this combined approach leads to reliable results, as it reduces computational time without losing robustness.
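For context, the following is a minimal Python sketch of a single annealed-particle-filter update over a flat pose vector, assuming a user-supplied log-likelihood function comparing a pose hypothesis to the current image; the belief-propagation half of the authors' combination is omitted, and all names, parameters, and the annealing schedule are illustrative assumptions rather than details from the article.

```python
import numpy as np

def apf_update(particles, image, log_likelihood,
               n_layers=5, noise_scale=0.05, rng=None):
    """One frame of annealed particle filtering over flat pose vectors."""
    rng = np.random.default_rng() if rng is None else rng
    betas = np.linspace(0.2, 1.0, n_layers)   # smooth -> peaked observation model
    n, dim = particles.shape
    weights = np.full(n, 1.0 / n)
    for layer, beta in enumerate(betas):
        # Annealed weights: the likelihood is progressively sharpened per layer.
        log_w = np.array([beta * log_likelihood(p, image) for p in particles])
        log_w -= log_w.max()                  # numerical stabilization
        weights = np.exp(log_w)
        weights /= weights.sum()
        # Resample proportionally to the annealed weights.
        idx = rng.choice(n, size=n, p=weights)
        particles = particles[idx]
        # Diffuse the survivors; noise shrinks on later, more peaked layers.
        sigma = noise_scale * (1.0 - layer / n_layers)
        particles = particles + rng.normal(0.0, sigma, size=(n, dim))
    return particles, weights
```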


2019 ◽  
Vol 2019 ◽  
pp. 1-12
Author(s):  
José Carlos Castillo ◽  
Fernando Alonso-Martín ◽  
David Cáceres-Domínguez ◽  
María Malfaz ◽  
Miguel A. Salichs

Human communication relies on several aspects beyond speech. One of them is gestures, which express intentions, interests, feelings, or ideas and complement speech. Social robots need to interpret these messages to allow a more natural human-robot interaction. In this sense, our aim is to study the effect of position and speed features in dynamic gesture recognition. We use 3D information to extract the user's skeleton and calculate the normalized position of all of its joints, and from the temporal variation of these positions we calculate their speeds. Our three datasets comprise 1355 samples from 30 users. We consider 14 gestures common in HRI involving upper body movements. A set of classification techniques is evaluated on these three datasets to find which features perform better. Results indicate that the union of speed and position achieves the best results among the three possibilities, with an F-score of 0.999. The combination that performs best at detecting dynamic gestures in real time is finally integrated into our social robot with a simple HRI application, running a proof-of-concept test to check how the proposal behaves in a realistic scenario.
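As a rough illustration of the position and speed features described above, the sketch below assumes each gesture sample arrives as a (frames × joints × 3) skeleton array; the normalization choice (centering on a torso joint and scaling by the torso-head distance), the temporal pooling, and the random-forest classifier are assumptions made for the sketch, not the authors' exact pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def gesture_features(frames, root_idx=0, head_idx=3, fps=30.0):
    """Normalized joint positions plus per-joint speeds for one gesture sample."""
    frames = np.asarray(frames, dtype=float)        # shape (T, J, 3)
    root = frames[:, root_idx:root_idx + 1, :]      # torso/root joint, per frame
    scale = np.linalg.norm(frames[:, head_idx] - frames[:, root_idx], axis=1).mean()
    pos = (frames - root) / (scale + 1e-8)          # translation/scale invariant
    vel = np.diff(pos, axis=0) * fps                # temporal variation of position
    speed = np.linalg.norm(vel, axis=2)             # shape (T-1, J) scalar speeds
    # Pool over time so gestures of different durations give a fixed-size vector.
    return np.concatenate([pos.mean(0).ravel(), pos.std(0).ravel(),
                           speed.mean(0), speed.max(0)])

# Hypothetical usage with a list of skeleton sequences and gesture labels:
# X = np.stack([gesture_features(s) for s in samples])
# clf = RandomForestClassifier(n_estimators=200).fit(X, labels)
```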


2019 ◽  
Vol 9 (3) ◽  
pp. 361
Author(s):  
Antonio López ◽  
Juan Alvarez ◽  
Diego Álvarez

Prediction of walking turns makes it possible to improve human factors such as comfort and perceived safety in human-robot interaction. The current state of the art suggests that upper body kinematics can be used for that purpose and contains evidence about the reliability and the quantitative anticipation that can be expected from different variables. However, the experimental methodology has not been consistent across works, and the related data has not always been given explicitly, with different studies containing partial, complementary, or even contradictory results. In this paper, with the purpose of providing a uniform view of the topic that can trigger new developments in the field, we performed a systematic review of the relevant literature addressing three main questions: (i) Which upper body kinematic variables permit anticipation of a walking turn? (ii) How long in advance can we anticipate the turn from them? (iii) What is the expected contribution of walking turn prediction from upper body kinematics to human-robot interaction? We found that head yaw was the most reliable upper body kinematic variable for predicting walking turns, about 200 ms in advance. Trunk roll anticipates walking turns by a similar amount of time, but with less reliability. Both approaches may benefit human-robot interaction in close proximity, helping the robot to exhibit appropriate proxemic behavior when interacting at intimate, personal, or social distances. From the point of view of safety, they have to be considered with caution. Trunk yaw is not valid for anticipating turns. Gaze yaw seems to be the earliest predictor, although existing evidence is still inconclusive.
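A minimal sketch of how a robot might anticipate a walking turn from head yaw, in the spirit of the review's finding; the deviation threshold, persistence window, and signal names are illustrative assumptions, not values reported by any of the reviewed studies.

```python
import numpy as np

def anticipate_turn(head_yaw_deg, heading_deg, fps=100.0,
                    yaw_threshold_deg=15.0, hold_ms=100.0):
    """Return the first frame at which a turn is anticipated, or None."""
    # Deviation of the head from the current walking heading, in degrees.
    head = np.unwrap(np.radians(np.asarray(head_yaw_deg, dtype=float)))
    path = np.unwrap(np.radians(np.asarray(heading_deg, dtype=float)))
    deviation = np.degrees(head - path)
    hold = max(1, int(round(hold_ms * fps / 1000.0)))   # frames yaw must persist
    above = np.abs(deviation) > yaw_threshold_deg
    for i in range(len(above) - hold + 1):
        if above[i:i + hold].all():
            return i          # expected to precede the body turn by roughly 200 ms
    return None
```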


Author(s):  
Luis A. Fuente ◽  
Hannah Ierardi ◽  
Michael Pilling ◽  
Nigel T. Crook

2014 ◽  
Vol 23 (2) ◽  
pp. 133-154 ◽  
Author(s):  
Yang Xiao ◽  
Zhijun Zhang ◽  
Aryel Beck ◽  
Junsong Yuan ◽  
Daniel Thalmann

In this paper, a human–robot interaction system based on a novel combination of sensors is proposed. It allows one person to interact with a humanoid social robot using natural body language. The robot understands the meaning of human upper body gestures and expresses itself through a combination of body movements, facial expressions, and verbal language. A set of 12 upper body gestures, including gestures with human-object interactions, is used for communication. The gestures are characterized by head, arm, and hand posture information. The wearable Immersion CyberGlove II is employed to capture hand posture, and this information is combined with the head and arm posture captured by a Microsoft Kinect. This is a new sensor solution for human gesture capture. Based on the posture data from the CyberGlove II and the Kinect, an effective and real-time human gesture recognition method is proposed. The gesture understanding approach based on this innovative combination of sensors is the main contribution of this paper. To verify the effectiveness of the proposed gesture recognition method, a human body gesture dataset is built. The experimental results demonstrate that our approach can recognize upper body gestures with high accuracy in real time. In addition, for robot motion generation and control, a novel online motion planning method is proposed. In order to generate appropriate dynamic motion, a quadratic programming (QP)-based dual-arm kinematic motion generation scheme is proposed, and a simplified recurrent neural network is employed to solve the QP problem. The integration of a handshake within the HRI system illustrates the effectiveness of the proposed online generation method.
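A minimal sketch of velocity-level dual-arm motion generation posed as a QP, assuming known arm Jacobians, desired hand velocities, and joint-velocity limits; here an off-the-shelf convex solver (cvxpy) stands in for the paper's simplified recurrent-neural-network solver, and all names and weights are illustrative.

```python
import numpy as np
import cvxpy as cp

def dual_arm_qp_step(J_left, J_right, xdot_left, xdot_right,
                     qdot_max, task_weight=1e3):
    """Joint velocities for both arms tracking desired hand velocities."""
    # Block-diagonal Jacobian for the two kinematic chains.
    zl = np.zeros((J_left.shape[0], J_right.shape[1]))
    zr = np.zeros((J_right.shape[0], J_left.shape[1]))
    J = np.block([[J_left, zl], [zr, J_right]])
    xdot = np.concatenate([xdot_left, xdot_right])
    qdot = cp.Variable(J.shape[1])
    # Track the task velocities while keeping joint motion small and bounded.
    cost = task_weight * cp.sum_squares(J @ qdot - xdot) + cp.sum_squares(qdot)
    prob = cp.Problem(cp.Minimize(cost), [cp.abs(qdot) <= qdot_max])
    prob.solve()
    return qdot.value
```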

