A Taxonomy in Robot-Assisted Training: Current Trends, Needs and Challenges

Technologies ◽  
2018 ◽  
Vol 6 (4) ◽  
pp. 119 ◽  
Author(s):  
Konstantinos Tsiakas ◽  
Maria Kyrarini ◽  
Vangelis Karkaletsis ◽  
Fillia Makedon ◽  
Oliver Korn

In this article, we present a taxonomy of Robot-Assisted Training, a growing body of research in Human–Robot Interaction that focuses on how robotic agents and devices can be used to enhance a user’s performance during a cognitive or physical training task. Robot-Assisted Training systems have been successfully deployed to enhance the effects of a training session in various contexts, e.g., rehabilitation systems, educational environments, and vocational settings. The proposed taxonomy suggests a set of categories and parameters that can be used to characterize such systems, considering the current research trends and needs for the design, development and evaluation of Robot-Assisted Training systems. To this end, we review recent works and applications in Robot-Assisted Training systems, as well as related taxonomies in Human–Robot Interaction. The goal is to identify and discuss open challenges, highlighting the different aspects of a Robot-Assisted Training system, considering both robot perception and behavior control.
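
To make the idea of characterizing systems along taxonomy axes concrete, here is a minimal Python sketch; the dimension and value names (TrainingDomain, Setting, the perception and behavior fields) are illustrative assumptions drawn from the abstract's wording, not the paper's actual category set.

```python
from dataclasses import dataclass, field
from enum import Enum, auto

# Illustrative taxonomy axes only; the paper's actual categories
# and parameters are richer than this sketch.
class TrainingDomain(Enum):
    COGNITIVE = auto()
    PHYSICAL = auto()

class Setting(Enum):
    REHABILITATION = auto()
    EDUCATIONAL = auto()
    VOCATIONAL = auto()

@dataclass
class RobotAssistedTrainingSystem:
    """One surveyed system, characterized along hypothetical axes."""
    name: str
    domain: TrainingDomain
    setting: Setting
    perception_inputs: list[str] = field(default_factory=list)
    behavior_controls: list[str] = field(default_factory=list)

# Example: grouping surveyed systems by one taxonomy axis,
# as a survey comparison table might.
systems = [
    RobotAssistedTrainingSystem("ExampleRehabRobot", TrainingDomain.PHYSICAL,
                                Setting.REHABILITATION,
                                ["joint angles", "force"], ["task difficulty"]),
]
by_setting: dict[Setting, list[str]] = {}
for s in systems:
    by_setting.setdefault(s.setting, []).append(s.name)
```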

Author(s):  
Shiyang Dong ◽  
Takafumi Matsumaru

This paper presents a novel walking training system for foot-eye coordination. To allow customizable trajectories to be designed conveniently for different users, we developed a new system that can track and record the actual walking trajectories of a tutor and use these trajectories for the walking training of a trainee. We set four items as its human-robot interaction design concept: feedback, synchronization, ingenuity and adaptability. A foot model is proposed to define the position and direction of a foot. The errors of the detection method used in the system are less than 40 mm in position and 15 deg in direction. On this basis, three parts are structured to achieve the system functions: Trajectory Designer, Trajectory Viewer and Mobile Walking Trainer. According to the experimental results, we have confirmed that the system works as intended and designed: the steps recorded in Trajectory Designer could be used successfully as the footmarks projected in Mobile Walking Trainer, and foot-eye coordination training could be conducted smoothly.
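
As a rough illustration of the reported detection accuracy, the sketch below models a foot pose as planar position plus heading and checks a detection against the error bounds quoted above; the FootPose representation is an assumption for illustration, since the paper's actual foot model is not reproduced here.

```python
import math
from dataclasses import dataclass

# Tolerances reported in the abstract: <40 mm in position, <15 deg in direction.
POS_TOL_MM = 40.0
DIR_TOL_DEG = 15.0

@dataclass
class FootPose:
    """Hypothetical foot model: planar position (mm) plus heading (deg)."""
    x_mm: float
    y_mm: float
    heading_deg: float

def within_tolerance(detected: FootPose, ground_truth: FootPose) -> bool:
    """Check a detection against the error bounds quoted in the paper."""
    pos_err = math.hypot(detected.x_mm - ground_truth.x_mm,
                         detected.y_mm - ground_truth.y_mm)
    # Wrap the angular difference into [-180, 180) before taking its magnitude.
    dir_err = abs((detected.heading_deg - ground_truth.heading_deg + 180)
                  % 360 - 180)
    return pos_err < POS_TOL_MM and dir_err < DIR_TOL_DEG
```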


2007 ◽  
Vol 8 (3) ◽  
pp. 391-410 ◽  
Author(s):  
Justine Cassell ◽  
Andrea Tartaro

What is the hallmark of success in human–agent interaction? In animation and robotics, many have concentrated on the looks of the agent — whether the appearance is realistic or lifelike. We present an alternative benchmark that lies in the dyad and not the agent alone: Does the agent’s behavior evoke intersubjectivity from the user? That is, in both conscious and unconscious communication, do users react to behaviorally realistic agents in the same way they react to other humans? Do users appear to attribute similar thoughts and actions? We discuss why we distinguish between appearance and behavior, why we use the benchmark of intersubjectivity, our methodology for applying this benchmark to embodied conversational agents (ECAs), and why we believe this benchmark should be applied to human–robot interaction.


AI Magazine ◽  
2015 ◽  
Vol 36 (3) ◽  
pp. 107-112 ◽
Author(s):  
Adam B. Cohen ◽  
Sonia Chernova ◽  
James Giordano ◽  
Frank Guerin ◽  
Kris Hauser ◽  
...  

The AAAI 2014 Fall Symposium Series was held Thursday through Saturday, November 13–15, at the Westin Arlington Gateway in Arlington, Virginia, adjacent to Washington, DC. The titles of the seven symposia were: Artificial Intelligence for Human-Robot Interaction; Energy Market Prediction; Expanding the Boundaries of Health Informatics Using AI; Knowledge, Skill, and Behavior Transfer in Autonomous Robots; Modeling Changing Perspectives: Reconceptualizing Sensorimotor Experiences; Natural Language Access to Big Data; and The Nature of Humans and Machines: A Multidisciplinary Discourse. The highlights of each symposium are presented in this report.


Sensors ◽  
2020 ◽  
Vol 20 (1) ◽  
pp. 296 ◽  
Author(s):  
Caroline P. C. Chanel ◽  
Raphaëlle N. Roy ◽  
Frédéric Dehais ◽  
Nicolas Drougard

The design of human–robot interactions is a key challenge in optimizing operational performance. A promising approach is to consider mixed-initiative interactions, in which the tasks and authority of the human and artificial agents are dynamically defined according to their current abilities. An important issue for the implementation of mixed-initiative systems is monitoring human performance to dynamically drive task allocation between the human and artificial agents (i.e., robots). We therefore designed an experimental scenario involving missions in which participants had to cooperate with a robot to fight fires while facing hazards. Two levels of robot automation (manual vs. autonomous) were randomly manipulated to assess their impact on the participants’ performance across missions. Cardiac activity, eye-tracking data, and participants’ actions on the user interface were collected. The participants performed differently enough that we could identify high- and low-score mission groups, which also exhibited different behavioral, cardiac and ocular patterns. More specifically, our findings indicated that the higher level of automation could be beneficial to low-scoring participants but detrimental to high-scoring ones, and vice versa. In addition, inter-subject single-trial classification results showed that the studied behavioral and physiological features were relevant for predicting mission performance. The highest average balanced accuracy (74%) was reached using the features extracted from all input devices. These results suggest that an adaptive HRI system aiming to maximize performance could analyze such physiological and behavioral markers online and change the level of automation when relevant for the mission purpose.
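
For readers unfamiliar with inter-subject single-trial classification, here is a minimal sketch using scikit-learn; the feature matrix, labels, classifier choice and trial counts are placeholders rather than the study's actual pipeline, and only the leave-one-subject-out scheme and the balanced-accuracy metric follow the abstract.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

# X: one row per mission trial; columns stand in for cardiac, ocular and
# UI-action features. y: 1 = high-score mission, 0 = low-score mission.
# All values below are synthetic placeholders.
rng = np.random.default_rng(0)
X = rng.normal(size=(60, 12))           # placeholder feature matrix
y = rng.integers(0, 2, size=60)         # placeholder labels
subjects = np.repeat(np.arange(10), 6)  # 10 participants, 6 trials each

# Inter-subject evaluation: every fold tests on a participant
# whose trials were never seen during training.
scores = cross_val_score(
    RandomForestClassifier(random_state=0),
    X, y,
    groups=subjects,
    cv=LeaveOneGroupOut(),
    scoring="balanced_accuracy",  # the metric the abstract reports (74%)
)
print(f"mean balanced accuracy: {scores.mean():.2f}")
```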


2021 ◽  
Vol 8 ◽  
Author(s):  
Sebastian Zörner ◽  
Emy Arts ◽  
Brenda Vasiljevic ◽  
Ankit Srivastava ◽  
Florian Schmalzl ◽  
...  

As robots become more advanced and capable, developing trust is an important factor in human-robot interaction and cooperation. However, as multiple environmental and social factors can influence trust, more elaborate scenarios and methods are needed to measure human-robot trust. A widely used measurement of trust in social science is the investment game. In this study, we propose a scaled-up, immersive, science-fiction Human-Robot Interaction (HRI) scenario built upon the investment game, designed to elicit intrinsic motivation for human-robot collaboration and to adapt the investment game to measuring human-robot trust. For this purpose, we utilize two Neuro-Inspired COmpanion (NICO) robots and a projected scenery. We investigate the applicability of our space-mission experiment design to measuring trust and the impact of non-verbal communication. We observe a correlation of 0.43 (p=0.02) between self-assessed trust and trust measured from the game, and a positive impact of non-verbal communication on trust (p=0.0008) and on robot perception for anthropomorphism (p=0.007) and animacy (p=0.00002). We conclude that our scenario is an appropriate method to measure trust in human-robot interaction and to study how non-verbal communication influences a human’s trust in robots.
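
A minimal sketch of how a behavioral trust score can be derived from an investment game and correlated with questionnaire trust follows; the endowment value, the per-participant data and the use of Pearson correlation are assumptions for illustration, since the abstract reports the correlation but not the computation details.

```python
import numpy as np
from scipy.stats import pearsonr

# Assumed protocol, following the classic investment game the abstract
# builds on: the fraction of the endowment a participant sends to the
# (robot) trustee is taken as the behavioral trust score.
ENDOWMENT = 10.0

def game_trust_score(amount_invested: float) -> float:
    return amount_invested / ENDOWMENT

# Hypothetical per-participant data; the study's real values are not given.
invested = np.array([7.0, 4.0, 9.0, 5.0, 8.0, 3.0])
self_assessed = np.array([4.5, 3.0, 5.0, 3.5, 4.0, 2.5])  # questionnaire trust

r, p = pearsonr([game_trust_score(a) for a in invested], self_assessed)
print(f"r={r:.2f}, p={p:.3f}")  # the paper reports r=0.43, p=0.02
```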


2019 ◽  
Vol 16 (06) ◽  
pp. 1950028 ◽
Author(s):  
Stefano Borgo ◽  
Enrico Blanzieri

Robots might not act according to human expectations if they cannot anticipate how people make sense of a situation and what behavior they consider appropriate in given circumstances. In many cases, understanding, expectations and behavior are constrained, if not driven, by culture, and a robot that knows about human culture could improve the quality of human–robot interaction. Can we share human culture with a robot? Can we provide robots with formal representations of different cultures? In this paper, we discuss the (elusive) notion of culture and propose an approach based on the notion of trait which, we argue, permits us to build formal modules suitable for representing culture (broadly understood) in a robot architecture. We distinguish the types of traits that such modules should contain, namely behavior, knowledge, rule and interpretation traits, and describe how they could be organized. We identify the interpretation process that maps situations to specific knowledge traits, called scenarios, as a key component of the trait-based culture module. Finally, we describe how culture modules can be integrated into an existing architecture, and discuss three use cases that exemplify the advantages of having a culture module in a robot architecture, highlighting surprising potentialities.
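
To illustrate how the four trait types and the interpretation process might fit together, here is a toy Python sketch; the class layout and the string-matching interpretation are illustrative assumptions, far simpler than the formal modules proposed in the paper.

```python
from dataclasses import dataclass, field
from enum import Enum, auto

# The four trait types named in the paper.
class TraitType(Enum):
    BEHAVIOR = auto()
    KNOWLEDGE = auto()
    RULE = auto()
    INTERPRETATION = auto()

@dataclass
class Trait:
    trait_type: TraitType
    name: str
    content: dict  # representation left open; the paper's formalism is richer

@dataclass
class CultureModule:
    """A pluggable culture module holding the traits of one culture."""
    culture: str
    traits: list[Trait] = field(default_factory=list)

    def interpret(self, situation: str) -> list[Trait]:
        # Toy interpretation process: map a situation to the knowledge
        # traits ("scenarios") that list it; the paper's mapping is richer.
        return [t for t in self.traits
                if t.trait_type is TraitType.KNOWLEDGE
                and situation in t.content.get("situations", [])]

# Usage: a greeting scenario retrieved for a concrete situation.
greeting = Trait(TraitType.KNOWLEDGE, "greeting_scenario",
                 {"situations": ["meeting_guest"], "action": "bow"})
module = CultureModule(culture="example_culture", traits=[greeting])
scenarios = module.interpret("meeting_guest")  # -> [greeting]
```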

