Walking Turn Prediction from Upper Body Kinematics: A Systematic Review with Implications for Human-Robot Interaction

2019 ◽  
Vol 9 (3) ◽  
pp. 361 ◽
Author(s):  
Antonio López ◽  
Juan Alvarez ◽  
Diego Álvarez

Prediction of walking turns makes it possible to improve human factors such as comfort and perceived safety in human-robot interaction. The current state of the art suggests that upper body kinematics can be used for this purpose and contains evidence about the reliability and the quantitative anticipation that can be expected from different variables. However, the experimental methodology has not been consistent across studies, and the relevant data have not always been reported explicitly, with different studies containing partial, complementary, or even contradictory results. In this paper, with the aim of providing a uniform view of the topic that can trigger new developments in the field, we performed a systematic review of the relevant literature addressing three main questions: (i) Which upper body kinematic variables allow a walking turn to be anticipated? (ii) How far in advance can the turn be anticipated from them? (iii) What is the expected contribution of walking turn prediction systems based on upper body kinematics to human-robot interaction? We found that head yaw was the most reliable upper body kinematic variable for predicting walking turns, anticipating them by about 200 ms. Trunk roll anticipates walking turns by a similar amount of time, but with less reliability. Both approaches may benefit human-robot interaction in close proximity, helping the robot exhibit appropriate proxemic behavior when interacting at intimate, personal, or social distances. From a safety point of view, they must be treated with caution. Trunk yaw is not a valid predictor of turns. Gaze yaw appears to be the earliest predictor, although the existing evidence is still inconclusive.
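As an illustration of how such a predictor might feed a robot's planner, below is a minimal sketch of a threshold-based turn-onset detector over a sampled head-yaw stream. The sampling rate, smoothing window, and yaw threshold are illustrative assumptions, not values taken from the reviewed studies.

```python
import numpy as np

def predict_turn_onset(head_yaw_deg, fs_hz=100.0, yaw_threshold_deg=10.0):
    """Return the time (s) of the first sample where smoothed head yaw
    deviates from straight ahead by more than the threshold, else None."""
    # Smooth with a ~100 ms moving average to suppress gait-cycle oscillation.
    win = max(1, int(0.1 * fs_hz))
    smoothed = np.convolve(head_yaw_deg, np.ones(win) / win, mode="same")
    # Take the first threshold crossing as the predicted turn onset.
    crossings = np.flatnonzero(np.abs(smoothed) > yaw_threshold_deg)
    return crossings[0] / fs_hz if crossings.size else None
```

Since head yaw was found to lead the turn by roughly 200 ms, a planner consuming such a signal gains about that much time to adjust the robot's proxemic behavior before the pedestrian's path actually changes.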

2021 ◽  
Vol 8 ◽  
Author(s):  
Connor Esterwood ◽  
Lionel P. Robert

Robots have become vital to the delivery of health care, and their personalities are often important to understanding their effectiveness as health care providers. Despite this, there is no systematic, overarching understanding of personality in health care human-robot interaction (H-HRI). This makes it difficult to establish what we do and do not know about the impact of personality in H-HRI. As a result, our understanding of personality in H-HRI has not kept pace with the deployment of robots in various health care environments. To address this, the authors conducted a literature review that identified 18 studies on personality in H-HRI. This paper expands, refines, and further explicates the systematic review presented in an earlier conference paper [see: Esterwood (Proceedings of the 8th International Conference on Human-Agent Interaction, 2020, 87–95)]. The review results: 1) highlight major thematic research areas, 2) derive and present major conclusions from the literature, 3) identify gaps in the literature, and 4) offer guidance for future H-HRI researchers. Overall, this paper reflects on the existing literature and provides an important starting point for future research on personality in H-HRI.


2020 ◽  
Vol 7 ◽  
Author(s):  
Matteo Spezialetti ◽  
Giuseppe Placidi ◽  
Silvia Rossi

A fascinating challenge in the field of human–robot interaction is the possibility of endowing robots with emotional intelligence in order to make the interaction more intuitive, genuine, and natural. To achieve this, a critical point is the capability of the robot to infer and interpret human emotions. Emotion recognition has been widely explored in the broader fields of human–machine interaction and affective computing. Here, we report recent advances in emotion recognition, with particular regard to the human–robot interaction context. Our aim is to review the state of the art of currently adopted emotional models, interaction modalities, and classification strategies, and to offer our point of view on future developments and critical issues. We focus on facial expressions, body poses and kinematics, voice, brain activity, and peripheral physiological responses, also providing a list of available datasets containing data from these modalities.
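To make the surveyed classification strategies concrete, here is a minimal sketch of a conventional single-modality pipeline of the kind such reviews categorize: a standard classifier trained on pre-extracted features. The feature matrix and labels below are random placeholders standing in for, e.g., facial-landmark or prosodic features.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 32))    # placeholder feature vectors per sample
y = rng.integers(0, 4, size=200)  # placeholder labels for 4 emotion classes

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))  # scale, then classify
clf.fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")
```

In practice the interesting work lies upstream of this snippet, in extracting robust features from each modality and in fusing modalities, which is where the surveyed approaches differ most.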


2019 ◽  
Vol 75 (1) ◽  
pp. 279-282 ◽  
Author(s):  
Xi Yu Leung

Purpose – Information technologies are changing the service paradigm. This paper aims to provide a concise review of technology-enabled service evolution in tourism. Design/methodology/approach – The paper is based on a review of the relevant literature. Findings – The past evolution of service delivery is summarized in three stages: service, e-service and m-service. The fourth stage of service evolution is predicted to be "a-service," with three features: service automation and human–robot interaction, artificial intelligence and big data, and smart travel experience. Originality/value – This paper provides a brief overview of service evolution under the impact of technology. Its original contribution is the identification of four stages of service evolution in tourism.


Author(s):  
Stefanie Tellex ◽  
Nakul Gopalan ◽  
Hadas Kress-Gazit ◽  
Cynthia Matuszek

This article surveys the use of natural language in robotics from a robotics point of view. To use human language, robots must map words to aspects of the physical world, mediated by the robot's sensors and actuators. This problem differs from other natural language processing domains due to the need to ground the language to noisy percepts and physical actions. Here, we describe central aspects of language use by robots, including understanding natural language requests, using language to drive learning about the physical world, and engaging in collaborative dialogue with a human partner. We describe common approaches, roughly divided into learning methods, logic-based methods, and methods that focus on questions of human–robot interaction. Finally, we describe several application domains for language-using robots.
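As a toy illustration of the grounding problem described above, the sketch below binds the words of a request to perceived objects. All class names and the scoring rule are invented for illustration; real systems must ground language to noisy percepts and physical actions rather than to clean symbolic labels.

```python
from dataclasses import dataclass

@dataclass
class Percept:
    label: str        # e.g., the output class of an object detector
    color: str
    confidence: float

def ground_request(request, percepts):
    """Pick the percept whose attributes best overlap the request's words."""
    words = set(request.lower().split())
    def score(p):
        # Count word matches, weighted by how confident the detector was.
        return len(words & {p.label, p.color}) * p.confidence
    best = max(percepts, key=score, default=None)
    return best if best is not None and score(best) > 0 else None

scene = [Percept("block", "red", 0.9), Percept("mug", "blue", 0.8)]
print(ground_request("pick up the red block", scene))
# -> Percept(label='block', color='red', confidence=0.9)
```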


2013 ◽  
Vol 10 (02) ◽  
pp. 1350017 ◽  
Author(s):  
Ho Seok Ahn ◽  
Dong-Wook Lee ◽  
Dongwoon Choi ◽  
Duk-Yeon Lee ◽  
Ho-Gil Lee ◽  
...  

Human-like appearance and movement are important for social robots in human–robot interaction. This paper presents the hardware mechanism and software architecture of an incarnate announcing robot system called EveR-1. EveR-1 is a robot platform for implementing and testing emotional expressions and human–robot interactions. EveR-1 is not bipedal; it sits on a chair and communicates information by moving its upper body. The skin of the head and upper body is made of silicone jelly to give a human-like texture. To express human-like emotion, it uses body gestures as well as facial expressions determined by a personality model. EveR-1 provides guidance services at an exhibition, narrates fairy tales, and holds simple conversations with humans.
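The paper's abstract does not detail the personality model's internals, but a hypothetical sketch of how such a model might modulate expression selection could look as follows; the traits, emotion set, and expression table are invented for illustration and are not taken from the EveR-1 system.

```python
# Illustrative personality traits and an emotion-to-expression lookup table;
# none of these values come from the EveR-1 paper.
PERSONALITY = {"extraversion": 0.8, "neuroticism": 0.2}

EXPRESSIONS = {
    "joy":      {"face": "smile", "gesture": "open_arms"},
    "surprise": {"face": "raised_brows", "gesture": "lean_back"},
    "sadness":  {"face": "frown", "gesture": "lower_head"},
}

def express(emotion, intensity):
    """Combine a facial expression and body gesture, scaled by personality."""
    # In this toy model, extraverted personalities amplify outward expression.
    gain = 0.5 + 0.5 * PERSONALITY["extraversion"]
    out = dict(EXPRESSIONS[emotion])
    out["intensity"] = round(min(1.0, intensity * gain), 3)
    return out

print(express("joy", 0.7))
# -> {'face': 'smile', 'gesture': 'open_arms', 'intensity': 0.63}
```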


Author(s):  
Sarra Jlassi ◽  
Sami Tliba ◽  
Yacine Chitour

Purpose – The problem of robotic co-manipulation is often addressed using impedance-control-based methods, in which the authors seek to establish a mathematical relation between the velocity of the human-robot interaction point and the force applied by the human operator (HO) at this point. This paper aims to address the problem of co-manipulation for handling tasks, viewed as a constrained optimal control problem. Design/methodology/approach – The proposed point of view relies on the implementation of a specific online trajectory generator (OTG) associated with a kinematic feedback loop. The OTG is designed to translate the HO's intentions into ideal trajectories that the robot must follow. It works as an automaton with two states of motion whose transitions are controlled by comparing the magnitude of the force to an adjustable threshold, enabling the operator to keep authority over the robot's states of motion. Findings – To ensure the smoothness of the interaction, the authors propose generating a velocity profile collinear with the force applied at the interaction point. The feedback control loop is then used to satisfy the requirements of stability and trajectory tracking, guaranteeing assistance and operator safety. The overall strategy is applied to the penducobot problem. Originality/value – The approach stands out for the nature of the problem tackled (heavy-load handling tasks) and for its vision of co-manipulation. It is based on two main ingredients. The first is the online generation of an appropriate trajectory for the interaction point, located at the end-effector, that describes the HO's intention. The second is the design of a control structure that allows good tracking of the generated trajectory.
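A minimal sketch of the two-state OTG idea as described: the automaton rests until the operator's force exceeds an adjustable threshold, then commands a reference velocity collinear with the applied force. The gain and threshold values are illustrative assumptions, not the paper's.

```python
import numpy as np

class TwoStateOTG:
    REST, MOTION = 0, 1

    def __init__(self, force_threshold=5.0, gain=0.02):
        self.state = self.REST
        self.f_th = force_threshold  # adjustable threshold [N]
        self.gain = gain             # admittance-like gain [m/s per N]

    def step(self, force):
        """Return the reference velocity for the interaction point."""
        # The operator keeps authority: force magnitude drives transitions.
        self.state = self.MOTION if np.linalg.norm(force) > self.f_th else self.REST
        if self.state == self.REST:
            return np.zeros(3)
        # Reference velocity collinear with the applied force.
        return self.gain * force

otg = TwoStateOTG()
print(otg.step(np.array([0.0, 8.0, 0.0])))  # -> [0.   0.16 0.  ]
```

A kinematic feedback loop (not shown) would then track this reference to meet the stability and trajectory-tracking requirements the authors describe.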


2020 ◽  
Vol 1 ◽  
pp. 133-158 ◽  
Author(s):  
Honson Ling ◽  
Elin Björling

With the prevalence of mental health problems today, designing human-robot interaction for mental health intervention is not only possible but critical. The current experiment examined how three types of robot disclosure (emotional, technical, and by-proxy) affect robot perception and human disclosure behavior during a stress-sharing activity. Emotional robot disclosure resulted in the lowest perceived robot safety. Post-hoc analysis revealed that increased perceived stress predicted reduced human disclosure, user satisfaction, robot likability, and intention of future robot use. Negative attitudes toward robots also predicted reduced intention of future robot use. This work informs the design of robot disclosure, as well as how individual attributes, such as perceived stress, can impact human-robot interaction in a mental health context.

