Modeling and Analysis of Physical Human-Robot Interaction of an Upper Body Exoskeleton in Assistive Applications

2021 ◽  
Vol 42 (4) ◽  
pp. 159-172
Author(s):  
Simon Christensen ◽  
Xuerong Li ◽  
Shaoping Bai
2013 ◽  
Vol 10 (02) ◽  
pp. 1350017 ◽  
Author(s):  
Ho Seok Ahn ◽  
Dong-Wook Lee ◽  
Dongwoon Choi ◽  
Duk-Yeon Lee ◽  
Ho-Gil Lee ◽  
...  

Human-like appearance and movement of social robots are important in human–robot interaction. This paper presents the hardware mechanism and software architecture of an incarnate announcing robot system called EveR-1. EveR-1 is a robot platform for implementing and testing emotional expressions and human–robot interactions. It is not bipedal but sits on a chair and communicates information by moving its upper body. The skin of the head and upper body is made of silicone jelly to give a human-like texture. To express human-like emotion, it uses body gestures as well as facial expressions, both decided by a personality model. EveR-1 provides guidance services at exhibitions, narrates fairy tales, and holds simple conversations with humans.
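The abstract does not detail the personality model itself. Purely as an illustration of the idea, the toy sketch below maps an emotion state to a facial expression and an optional body gesture modulated by a personality trait; all states, mappings, and weights are invented for demonstration, not taken from EveR-1.

```python
# Illustrative only: a toy "personality model" that selects a facial
# expression and, depending on an expressiveness trait, a body gesture.
# All states, mappings and weights here are invented, not from the paper.
from dataclasses import dataclass
import random

EXPRESSIONS = {
    "joy":      {"face": "smile",     "gesture": "open_arms"},
    "sadness":  {"face": "frown",     "gesture": "lowered_head"},
    "surprise": {"face": "wide_eyes", "gesture": "lean_back"},
}

@dataclass
class Personality:
    expressiveness: float  # 0..1; how readily gestures accompany the face

def choose_expression(emotion: str, personality: Personality):
    """Pick a facial expression and optionally a body gesture for an emotion."""
    entry = EXPRESSIONS[emotion]
    use_gesture = random.random() < personality.expressiveness
    return entry["face"], entry["gesture"] if use_gesture else None

print(choose_expression("joy", Personality(expressiveness=0.8)))
```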


2020 ◽  
pp. 1-15
Author(s):  
Qiaolian Xie ◽  
Qiaoling Meng ◽  
Yue Dai ◽  
Qingxin Zeng ◽  
Yuanjie Fan ◽  
...  

BACKGROUND: Upper limb rehabilitation robots have become an important piece of equipment in stroke rehabilitation. Human-robot coupling (HRC) dynamics play a key role in the control of rehabilitation robots to improve human-robot interaction. OBJECTIVE: This study aims to develop methods for modeling and analyzing HRC dynamics to realize more accurate dynamic control of upper limb rehabilitation robots. METHODS: Based on an analysis of the force interaction between the human arm and the upper limb rehabilitation robot, the HRC torque is obtained by summing the robot torque and the human arm torque. The HRC torque and robot torque of a 2-DOF upper limb rehabilitation robot (FLEXO-Arm) are derived using the Lagrangian equation and a step-by-step dynamic parameter identification method. RESULTS: The root mean square (RMS) error is used to evaluate the accuracy of the HRC torque and the robot torque obtained from parameter identification; the error of both is about 10%. Moreover, the HRC torque and the robot torque are compared with the actual torque measured by torque sensors: the error of the robot torque is more than twice that of the HRC torque, so the HRC torque tracks the measured torque more closely. CONCLUSIONS: The proposed HRC dynamics effectively enable more accurate dynamic control of upper limb rehabilitation robots.
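For concreteness, here is a minimal sketch of the idea under stated assumptions: the torque of a planar 2-DOF arm is computed from the Lagrangian-derived closed form tau = M(q)*ddq + C(q,dq)*dq + G(q), the HRC torque is formed by summing robot and human-arm contributions evaluated on the same joint trajectory, and an RMS error against measured torque is computed. All link parameters are placeholders, not the identified FLEXO-Arm values.

```python
# Hypothetical sketch of the HRC torque idea from the abstract: for a
# 2-DOF planar arm, the coupled torque is the robot's inverse-dynamics
# torque plus the human arm's, evaluated on the same trajectory.
import numpy as np

G_ACC = 9.81  # gravitational acceleration [m/s^2]

def inverse_dynamics_2dof(q, dq, ddq, m1, m2, l1, lc1, lc2, I1, I2):
    """tau = M(q)*ddq + C(q,dq)*dq + G(q) for a planar 2-link arm."""
    q1, q2 = q
    c2 = np.cos(q2)
    # Inertia matrix M(q)
    M11 = m1*lc1**2 + m2*(l1**2 + lc2**2 + 2*l1*lc2*c2) + I1 + I2
    M12 = m2*(lc2**2 + l1*lc2*c2) + I2
    M = np.array([[M11, M12], [M12, m2*lc2**2 + I2]])
    # Coriolis / centrifugal matrix C(q, dq)
    h = -m2*l1*lc2*np.sin(q2)
    C = np.array([[h*dq[1], h*(dq[0] + dq[1])], [-h*dq[0], 0.0]])
    # Gravity vector G(q)
    g1 = (m1*lc1 + m2*l1)*G_ACC*np.cos(q1) + m2*lc2*G_ACC*np.cos(q1 + q2)
    g2 = m2*lc2*G_ACC*np.cos(q1 + q2)
    return M @ ddq + C @ dq + np.array([g1, g2])

def hrc_torque(q, dq, ddq, robot_params, human_params):
    """HRC torque: robot torque plus human-arm torque on the same motion."""
    return (inverse_dynamics_2dof(q, dq, ddq, **robot_params)
            + inverse_dynamics_2dof(q, dq, ddq, **human_params))

def rms_error(tau_model, tau_measured):
    """RMS error between model torque and sensor-measured torque."""
    return np.sqrt(np.mean((tau_model - tau_measured)**2))
```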


Author(s):  
Eiichi Yoshida

This article provides a brief overview of the technology of humanoid robots. First, historical development and hardware progress are presented, focusing mainly on human-size, full-body biped humanoid robots, together with progress in pattern generation for biped locomotion. Then, "whole-body motion" – the coordination of leg and arm movements to fully leverage a humanoid's high degrees of freedom – is presented, followed by its applications in fields such as device evaluation and large-scale assembly. Upper-body humanoids with a mobile base, which are mainly used for research on human-robot interaction and cognitive robotics, are also introduced before current issues and perspectives are addressed.


2011 ◽  
Vol 08 (01) ◽  
pp. 127-146 ◽  
Author(s):  
Ilaria Renna ◽  
Ryad Chellali ◽  
Catherine Achard

This article presents an algorithm for 3D upper body tracking. The algorithm combines two well-known methods: the annealed particle filter and belief propagation. It is worth underlining that 3D body tracking is a challenging problem because of the high dimensionality of the state space and the resulting computational cost. In this work, we show that our algorithm can tackle this problem. Experiments on both real and synthetic human gesture sequences demonstrate that this combined approach leads to reliable results, reducing computational time without losing robustness.
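As a rough illustration of the annealing half of the method (the belief-propagation coupling over body parts is omitted), the sketch below runs one frame of a generic annealed particle filter: early layers weight particles with a smoothed likelihood so broad, multi-modal posteriors survive, and later layers refine locally with shrinking diffusion noise. Here `log_likelihood` is a stand-in for an image-based pose likelihood, not the authors' observation model.

```python
# Generic annealed-particle-filter step; an illustrative sketch, not the
# paper's exact combined algorithm.
import numpy as np

def annealed_pf_step(particles, log_likelihood, betas, noise_scale, rng):
    """Run one frame of annealed particle filtering.

    particles      : (N, D) array of pose hypotheses (joint-angle vectors)
    log_likelihood : maps an (N, D) array of poses to (N,) log-likelihoods
    betas          : increasing annealing exponents, e.g. [0.2, 0.4, 0.7, 1.0]
    """
    n = len(particles)
    for layer, beta in enumerate(betas):
        # Smoothed weights: likelihood raised to a power < 1 in early layers.
        logw = beta * log_likelihood(particles)
        w = np.exp(logw - logw.max())
        w /= w.sum()
        # Resample proportionally to the smoothed weights.
        idx = rng.choice(n, size=n, p=w)
        particles = particles[idx]
        # Diffuse with shrinking noise so later layers refine locally.
        sigma = noise_scale / (layer + 1)
        particles = particles + rng.normal(0.0, sigma, particles.shape)
    return particles
```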


2019 ◽  
Vol 2019 ◽  
pp. 1-12
Author(s):  
José Carlos Castillo ◽  
Fernando Alonso-Martín ◽  
David Cáceres-Domínguez ◽  
María Malfaz ◽  
Miguel A. Salichs

Human communication relies on several aspects beyond speech. One of them is gestures, which express intentions, interests, feelings, or ideas and complement speech. Social robots need to interpret these messages to allow more natural human-robot interaction. In this sense, our aim is to study the effect of position and speed features in dynamic gesture recognition. We use 3D information to extract the user's skeleton and calculate the normalized positions of all of its joints, and from the temporal variation of these positions we calculate their speeds. Our three datasets comprise 1355 samples from 30 users. We consider 14 gestures common in HRI involving upper body movements. A set of classification techniques is evaluated on these three datasets to find which features perform best. Results indicate that the union of speed and position achieves the best results among the three possibilities, with an F-score of 0.999. The best-performing combination for detecting dynamic gestures in real time is finally integrated into our social robot with a simple HRI application, and a proof-of-concept test is run to check how the proposal behaves in a realistic scenario.
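A hedged sketch of such a feature pipeline is shown below: joint positions are normalized per frame, speeds are obtained from their temporal variation, and the concatenated features feed an off-the-shelf classifier. The normalization choice (centering on a root joint, scaling by a body-size reference) and the random-forest classifier are assumptions, not the authors' exact recipe.

```python
# Position + speed features from 3D skeleton sequences; illustrative only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def gesture_features(skeleton_seq, fps=30.0):
    """skeleton_seq: (T, J, 3) array of 3D joint positions over T frames."""
    root = skeleton_seq[:, 0:1, :]                    # assumed root joint (e.g. torso)
    scale = np.linalg.norm(skeleton_seq[:, 1] - skeleton_seq[:, 0],
                           axis=-1, keepdims=True)    # body-size reference
    pos = (skeleton_seq - root) / scale[:, None, :]   # normalized positions
    vel = np.gradient(pos, 1.0 / fps, axis=0)         # speeds from temporal variation
    # Flatten positions and speeds into one fixed-length feature vector
    # (assumes gesture clips of equal length T).
    return np.concatenate([pos.ravel(), vel.ravel()])

# Hypothetical training on a list of fixed-length gesture clips:
# X = np.stack([gesture_features(seq) for seq in clips])
# clf = RandomForestClassifier().fit(X, labels)
```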


2019 ◽  
Vol 9 (3) ◽  
pp. 361
Author(s):  
Antonio López ◽  
Juan Alvarez ◽  
Diego Álvarez

Prediction of walking turns makes it possible to improve human factors such as comfort and perceived safety in human-robot interaction. The current state of the art suggests that upper body kinematics can be used for that purpose and contains evidence about the reliability and the quantitative anticipation that can be expected from different variables. However, the experimental methodology has not been consistent across studies, and the related data have not always been given explicitly, with different studies containing partial, complementary, or even contradictory results. In this paper, with the purpose of providing a uniform view of the topic that can trigger new developments in the field, we performed a systematic review of the relevant literature addressing three main questions: (i) Which upper body kinematic variables permit anticipating a walking turn? (ii) How long in advance can we anticipate the turn from them? (iii) What is the expected contribution of walking-turn prediction systems based on upper body kinematics to human-robot interaction? We found that head yaw was the most reliable kinematic variable from the upper body for predicting walking turns, about 200 ms in advance. Trunk roll anticipates walking turns by a similar amount of time, but with less reliability. Both approaches may benefit human-robot interaction in close proximity, helping the robot to exhibit appropriate proxemic behavior when interacting at intimate, personal, or social distances. From the point of view of safety, they have to be considered with caution. Trunk yaw is not valid for anticipating turns. Gaze yaw seems to be the earliest predictor, although existing evidence is still inconclusive.
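Purely as an illustration of how the review's main finding might be operationalized, the sketch below flags an upcoming turn when head yaw (relative to trunk heading) deviates beyond a threshold for a sustained interval. The threshold and hold time are arbitrary assumptions for demonstration, not values taken from the reviewed studies.

```python
# Illustrative turn-anticipation cue from head yaw; parameters are assumed.
import numpy as np

def predict_turn_from_head_yaw(head_yaw_deg, fps=100.0,
                               threshold_deg=15.0, hold_s=0.1):
    """Return the first sample index at which a turn is anticipated.

    head_yaw_deg : head yaw relative to trunk heading, degrees, per frame
    threshold_deg: yaw deviation treated as a turn precursor (assumed)
    hold_s       : deviation must persist this long to suppress glances
    """
    hold = int(hold_s * fps)
    above = np.abs(np.asarray(head_yaw_deg)) > threshold_deg
    run = 0
    for i, flag in enumerate(above):
        run = run + 1 if flag else 0
        if run >= hold:
            return i  # a turn is expected roughly 200 ms after this cue
    return None  # no turn anticipated in this window
```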


Author(s):  
Luis A. Fuente ◽  
Hannah Ierardi ◽  
Michael Pilling ◽  
Nigel T. Crook
