The impact of adding perspective-taking to spatial referencing during human–robot interaction

2020 ◽  
Vol 134 ◽  
pp. 103654
Author(s):  
Fethiye Irmak Doğan ◽  
Sarah Gillet ◽  
Elizabeth J. Carter ◽  
Iolanda Leite
Robotica ◽  
2010 ◽  
Vol 29 (3) ◽  
pp. 421-432 ◽  
Author(s):  
R. E. Mohan ◽  
W. S. Wijesoma ◽  
C. A. A. Calderon ◽  
C. J. Zhou

SUMMARY: Estimating robot performance in human–robot teams is a vital problem in the human–robot interaction community. In previous work, we presented an extended neglect tolerance model for estimating robot performance, in which the human operator switches control between robots sequentially based on acceptable performance levels, taking into account any false alarms in human–robot interactions. Task complexity is a key parameter that directly impacts both robot performance and the occurrence of false alarms. In this paper, we validate the extended neglect tolerance model on two robot tasks of differing complexity. We also present the impact of task complexity on robot performance estimates and false alarm demands. Experiments were performed with real and virtual humanoid soccer robots in tele-operated and semi-autonomous modes. Measured false alarm demands and robot performances were largely consistent with the extended neglect tolerance model's predictions for both the real and virtual robot experiments. The experiments also showed that task complexity is directly proportional to false alarm demands and inversely proportional to robot performance.
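The core idea behind neglect tolerance can be sketched as follows. This is a minimal illustration only: the linear performance-decay model and all parameter values are assumptions made for this sketch, not the authors' extended model.

```python
# Illustrative sketch of the neglect-tolerance idea: a robot's
# performance degrades while the operator's attention is elsewhere,
# so the operator must switch back before performance drops below an
# acceptable level. The linear decay model and parameters here are
# assumptions for illustration, not the paper's extended model.

def neglect_tolerance(initial_perf: float, decay_rate: float,
                      threshold: float) -> float:
    """Time a robot can be neglected before its performance falls
    below the acceptable threshold, assuming linear decay."""
    if initial_perf <= threshold:
        return 0.0
    return (initial_perf - threshold) / decay_rate

# Example: performance starts at 1.0, decays at 0.1/s, threshold 0.6
print(neglect_tolerance(1.0, 0.1, 0.6))  # 4.0 seconds
```

Under this toy model, raising task complexity would show up as a faster decay rate, shrinking the time available before the operator must intervene, which is consistent with the abstract's observed inverse relationship between complexity and performance.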


2021 ◽  
Vol 8 ◽  
Author(s):  
Sebastian Zörner ◽  
Emy Arts ◽  
Brenda Vasiljevic ◽  
Ankit Srivastava ◽  
Florian Schmalzl ◽  
...  

As robots become more advanced and capable, developing trust is an important factor in human-robot interaction and cooperation. However, as multiple environmental and social factors can influence trust, it is important to develop more elaborate scenarios and methods to measure human-robot trust. A widely used measurement of trust in social science is the investment game. In this study, we propose a scaled-up, immersive, science fiction Human-Robot Interaction (HRI) scenario for intrinsic motivation in human-robot collaboration, built upon the investment game and aimed at adapting it to measure human-robot trust. For this purpose, we utilize two Neuro-Inspired COmpanion (NICO) robots and a projected scenery. We investigate the applicability of our space mission experiment design to measure trust and the impact of non-verbal communication. We observe a correlation of 0.43 (p=0.02) between self-assessed trust and trust measured from the game, and a positive impact of non-verbal communication on trust (p=0.0008) and on robot perception for anthropomorphism (p=0.007) and animacy (p=0.00002). We conclude that our scenario is an appropriate method to measure trust in human-robot interaction and also to study how non-verbal communication influences a human’s trust in robots.
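How the investment game yields a behavioral trust score that can be correlated with self-reports might be sketched as below. The data, variable names, and scoring rule are invented for illustration; they are not the study's actual measurements or analysis pipeline.

```python
# Hedged sketch of trust measurement via an investment game: the
# fraction of available credits a participant entrusts to the robot
# serves as a behavioral trust score, which can then be correlated
# with questionnaire-based self-assessed trust. All data below are
# invented for illustration.
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Behavioral trust: fraction of 10 credits invested per participant
invested = [7, 4, 9, 5, 8, 3]
behavioral_trust = [i / 10 for i in invested]
# Self-assessed trust from a questionnaire (e.g., a 1-7 Likert scale)
self_report = [5, 3, 6, 4, 6, 2]
print(round(pearson(behavioral_trust, self_report), 2))
```

In practice one would also compute a p-value for the correlation (e.g., with `scipy.stats.pearsonr`) before drawing conclusions, as the abstract does.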


2019 ◽  
Author(s):  
Jairo Pérez-Osorio ◽  
Davide De Tommaso ◽  
Ebru Baykara ◽  
Agnieszka Wykowska

Robots will soon enter social environments shared with humans. We need robots that are able to efficiently convey social signals during interactions. At the same time, we need to understand the impact of robots’ behavior on the human brain. For this purpose, human behavioral and neural responses to robot behavior should be quantified, offering feedback on how to improve and adjust robot behavior. Under this premise, our approach is to use methods of experimental psychology and cognitive neuroscience to assess the human’s reception of a robot in human-robot interaction protocols. As an example of this approach, we report an adaptation of a classical paradigm of experimental cognitive psychology to a naturalistic human-robot interaction scenario. We show the feasibility of such an approach with a validation pilot study, which demonstrated that our design yielded a pattern of data similar to what has been previously observed in experiments within the area of cognitive psychology. Our approach allows for addressing specific mechanisms of human cognition that are elicited during human-robot interaction, and thereby, in a longer-term perspective, it will allow for designing robots that are well-attuned to the workings of the human brain.


2020 ◽  
Vol 142 (6) ◽  
Author(s):  
Yu She ◽  
Siyang Song ◽  
Hai-Jun Su ◽  
Junmin Wang

Abstract In this paper, we study the effects of mechanical compliance on safety in physical human–robot interaction (pHRI). More specifically, we compare the effects of joint compliance and link compliance on the impact force, assuming contact occurs between a robot and a human head. We first establish pHRI system models that are composed of robot dynamics, an impact contact model, and head dynamics. These models are validated by Simscape simulation. By comparing impact results between a robotic arm with a compliant link (CL) and one with a compliant joint (CJ), we conclude that the CL design produces a smaller maximum impact force given the same lateral stiffness and otherwise identical physical and geometric parameters. Furthermore, we compare the variable stiffness joint (VSJ) with the variable stiffness link (VSL) across various actuation and design parameters. While decreasing the stiffness of CJs cannot effectively reduce the maximum impact force, the CL design is more effective in reducing impact force by varying the link stiffness. We conclude that the CL design potentially outperforms the CJ design in addressing safety in pHRI and can serve as a promising alternative solution for meeting safety constraints in pHRI.
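The qualitative relationship between stiffness and peak impact force can be illustrated with a much simpler model than the paper's full dynamics: an undamped linear-spring collision between an effective robot mass and a head mass, for which the peak force is F_max = v·sqrt(k·m_eff). This toy model and all its parameter values are assumptions for illustration, not the authors' validated pHRI model.

```python
# Toy illustration of why compliance reduces peak impact force: model
# the collision as a linear contact spring (stiffness k) between an
# effective robot mass and a head mass. For an undamped linear spring,
# peak force is F_max = v * sqrt(k * m_eff), with m_eff the reduced
# mass. Parameters below are invented for illustration only.
import math

def peak_impact_force(m_robot: float, m_head: float,
                      k_contact: float, v_rel: float) -> float:
    """Peak force (N) of an undamped linear-spring collision."""
    m_eff = (m_robot * m_head) / (m_robot + m_head)  # reduced mass, kg
    return v_rel * math.sqrt(k_contact * m_eff)

# Lower structural stiffness -> lower peak force at the same speed
stiff = peak_impact_force(10.0, 4.5, 5e4, 1.0)
compliant = peak_impact_force(10.0, 4.5, 5e3, 1.0)
print(stiff > compliant)  # True
```

Note that in this model the force scales with the square root of stiffness, which is one intuition for why merely reducing joint stiffness (while the heavy link remains rigidly coupled, keeping m_eff large) is less effective than making the link itself compliant.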


Robotics ◽  
2019 ◽  
Vol 8 (1) ◽  
pp. 18 ◽  
Author(s):  
Younsse Ayoubi ◽  
Med Laribi ◽  
Said Zeghloul ◽  
Marc Arsicault

Unlike “classical” industrial robots, collaborative robots, known as cobots, implement compliant behavior. Cobots ensure safe force control in physical interaction scenarios within unknown environments. In this paper, we propose to make serial robots intrinsically compliant to guarantee safe physical human–robot interaction (pHRI), via our novel device called V2SOM, which stands for Variable Stiffness Safety-Oriented Mechanism. As its name indicates, V2SOM aims at making physical human–robot interaction safe, thanks to its two basic functioning modes: a high stiffness mode and a low stiffness mode. The first mode is employed for normal operational routines. In contrast, the low stiffness mode is suitable for the safe absorption of any potential blunt shock with a human. The transition between the two modes is continuous, to maintain good control of the V2SOM-based cobot in the case of a fast collision. V2SOM presents a high inertia decoupling capacity, which is a necessary condition for safe pHRI without compromising the robot’s dynamic performance. Two pHRI safety criteria were considered for performance evaluation: the impact force (ImpF) criterion and the head injury criterion (HIC), which assess external and internal damage during blunt shocks, respectively.
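The head injury criterion mentioned above has a standard definition: HIC = max over t1 < t2 of (t2 − t1)·[(1/(t2 − t1))·∫a(t)dt]^2.5, with a(t) the head acceleration in g and the window length capped (commonly 15 or 36 ms). A brute-force sketch over sampled data might look like the following; the acceleration trace is invented for illustration and is not data from the paper.

```python
# Brute-force sketch of the Head Injury Criterion (HIC):
#   HIC = max over t1 < t2 of (t2 - t1) * [mean of a(t) over (t1, t2)]^2.5
# with a(t) the head acceleration in g and the window capped (here 15 ms).
# The sample acceleration pulse below is invented for illustration.

def hic(accel_g, dt, max_window=0.015):
    """HIC over all windows up to max_window seconds, given
    acceleration samples in g at a fixed time step dt."""
    n = len(accel_g)
    # cumulative integral of a(t) dt via the rectangle rule
    cum = [0.0]
    for a in accel_g:
        cum.append(cum[-1] + a * dt)
    best = 0.0
    for i in range(n):
        for j in range(i + 1, n + 1):
            t = (j - i) * dt
            if t > max_window:
                break
            avg = (cum[j] - cum[i]) / t  # mean acceleration in g
            best = max(best, t * avg ** 2.5)
    return best

# Triangular 10 ms pulse peaking at 60 g, sampled at 1 kHz
pulse = [0, 12, 24, 36, 48, 60, 48, 36, 24, 12, 0]
print(round(hic(pulse, 0.001), 1))
```

For reference, a HIC value around 1000 is the conventional injury threshold in automotive safety testing, so gentle pulses like the one above score far below it; a device such as V2SOM aims to keep collision pulses in that low region.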


2020 ◽  
Vol 10 (1) ◽  
pp. 1-7 ◽  
Author(s):  
David Feil-Seifer ◽  
Kerstin S. Haring ◽  
Silvia Rossi ◽  
Alan R. Wagner ◽  
Tom Williams

2021 ◽  
Vol 10 (2) ◽  
pp. 1-32
Author(s):  
Leimin Tian ◽  
Sharon Oviatt

Robotic applications have entered various aspects of our lives, such as health care and educational services. In such Human-robot Interaction (HRI), trust and mutual adaptation are established and maintained through a positive social relationship between a user and a robot. This social relationship relies on the perceived competence of a robot on the social-emotional dimension. However, because of technical limitations and user heterogeneity, current HRI is far from error-free, especially when a system leaves controlled lab environments and is applied in in-the-wild conditions. Errors in HRI may either degrade a user’s perception of a robot’s capability at achieving a task (defined as performance errors in this work) or degrade a user’s perception of a robot’s socio-affective competence (defined as social errors in this work). The impact of these errors, and effective strategies to handle that impact, remain open questions. We focus on social errors in HRI in this work. In particular, we identify the major attributes of perceived socio-affective competence by reviewing human social interaction studies and HRI error studies. This motivates us to propose a taxonomy of social errors in HRI. We then discuss the impact of social errors situated in three representative HRI scenarios. This article provides foundations for a systematic analysis of the social-emotional dimension of HRI. The proposed taxonomy of social errors encourages the development of user-centered HRI systems, designed to offer positive and adaptive interaction experiences and improved interaction outcomes.

