Human-Robot Interaction II: The Impact of Human Factors on Unmanned Systems

Author(s): Douglas J. Gillan
Robotica, 2010, Vol 29 (3), pp. 421-432
Author(s): R. E. Mohan, W. S. Wijesoma, C. A. A. Calderon, C. J. Zhou

SUMMARY: Estimating robot performance in human-robot teams is a vital problem for the human-robot interaction community. In previous work, we presented an extended neglect tolerance model for estimating robot performance, in which the human operator switches control between robots sequentially based on acceptable performance levels, taking into account any false alarms in human-robot interactions. Task complexity is a key parameter that directly affects both robot performance and the occurrence of false alarms. In this paper, we validate the extended neglect tolerance model on two robot tasks of differing complexity. We also present the impact of task complexity on robot performance estimates and false alarm demands. Experiments were performed with real and virtual humanoid soccer robots in tele-operated and semi-autonomous modes of autonomy. Measured false alarm demand and robot performance were largely consistent with the extended neglect tolerance model predictions for both the real and virtual robot experiments. The experiments also showed that task complexity is directly proportional to false alarm demand and inversely proportional to robot performance.
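The relationship the abstract describes can be illustrated with a toy neglect-tolerance simulation. This is an illustrative sketch, not the authors' extended model: robot effectiveness decays while the robot is neglected, the operator services a robot once effectiveness drops below an acceptable threshold, and task complexity both accelerates the decay and raises the rate of false alarms. All parameter values are assumptions chosen for illustration.

```python
import random

def simulate(task_complexity, steps=200, threshold=0.5, seed=0):
    """Toy neglect-tolerance run: returns (mean performance, interactions, false alarms)."""
    rng = random.Random(seed)
    effectiveness = 1.0
    decay = 0.01 * task_complexity            # effectiveness lost per neglected step (assumed)
    false_alarm_rate = 0.02 * task_complexity  # chance of a spurious alert per step (assumed)
    interactions, false_alarms, total_perf = 0, 0, 0.0

    for _ in range(steps):
        total_perf += effectiveness
        if effectiveness < threshold:           # genuine demand: operator services the robot
            effectiveness = 1.0
            interactions += 1
        elif rng.random() < false_alarm_rate:   # alert raised although no service was needed
            false_alarms += 1
        else:
            effectiveness -= decay              # robot degrades while neglected

    return total_perf / steps, interactions, false_alarms

for complexity in (1, 2, 4):
    perf, n_int, n_fa = simulate(complexity)
    print(f"complexity={complexity}: mean performance={perf:.2f}, "
          f"interactions={n_int}, false alarms={n_fa}")
```

Running the sketch shows the qualitative trend reported in the abstract: higher task complexity yields more false alarms and lower average performance.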


2021, Vol 8
Author(s): Sebastian Zörner, Emy Arts, Brenda Vasiljevic, Ankit Srivastava, Florian Schmalzl, ...

As robots become more advanced and capable, developing trust is an important factor in human-robot interaction and cooperation. However, since multiple environmental and social factors can influence trust, more elaborate scenarios and methods are needed to measure human-robot trust. A widely used measure of trust in social science is the investment game. In this study, we propose a scaled-up, immersive, science-fiction Human-Robot Interaction (HRI) scenario that provides intrinsic motivation for human-robot collaboration, built upon the investment game and adapted to measure human-robot trust. For this purpose, we utilize two Neuro-Inspired COmpanion (NICO) robots and projected scenery. We investigate the applicability of our space mission experiment design for measuring trust and the impact of non-verbal communication. We observe a correlation of 0.43 (p=0.02) between self-assessed trust and trust measured from the game, and a positive impact of non-verbal communication on trust (p=0.0008) and on the perception of the robots in terms of anthropomorphism (p=0.007) and animacy (p=0.00002). We conclude that our scenario is an appropriate method to measure trust in human-robot interaction and also to study how non-verbal communication influences a human's trust in robots.
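As a rough illustration of how a game-based trust measure can be related to self-assessed trust, the sketch below treats the mean fraction of the endowment invested per participant as a behavioral trust score and correlates it with questionnaire scores. The arrays are fabricated placeholders; only the reported r=0.43 (p=0.02) comes from the paper.

```python
import numpy as np
from scipy import stats

# Fabricated placeholder data, for illustration only.
endowment = 10.0
invested = np.array([
    [4, 5, 6, 6],   # per-participant amounts invested across rounds
    [2, 3, 3, 4],
    [8, 7, 9, 8],
    [5, 5, 6, 7],
    [1, 2, 2, 3],
])
self_assessed_trust = np.array([3.5, 2.0, 4.5, 3.8, 1.5])  # e.g. questionnaire scores

# Behavioral trust score: mean fraction of the endowment handed over per participant.
game_trust = invested.mean(axis=1) / endowment

r, p = stats.pearsonr(game_trust, self_assessed_trust)
print(f"Pearson r = {r:.2f}, p = {p:.3f}")
```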


2019
Author(s): Jairo Pérez-Osorio, Davide De Tommaso, Ebru Baykara, Agnieszka Wykowska

Robots will soon enter social environments shared with humans. We need robots that are able to efficiently convey social signals during interactions, and at the same time we need to understand the impact of robot behavior on the human brain. For this purpose, human behavioral and neural responses to robot behavior should be quantified, offering feedback on how to improve and adjust robot behavior. Under this premise, our approach is to use methods of experimental psychology and cognitive neuroscience to assess how humans receive a robot in human-robot interaction protocols. As an example of this approach, we report an adaptation of a classical paradigm of experimental cognitive psychology to a naturalistic human-robot interaction scenario. We show the feasibility of such an approach with a validation pilot study, which demonstrated that our design yielded a pattern of data similar to what has previously been observed in experiments within the area of cognitive psychology. Our approach allows for addressing specific mechanisms of human cognition that are elicited during human-robot interaction and thereby, in a longer-term perspective, will allow for designing robots that are well-attuned to the workings of the human brain.
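The abstract does not name the adapted paradigm, so the following sketch assumes a generic within-subject reaction-time design (e.g., congruent vs. incongruent trials) purely to show how data from such classical paradigms are typically scored; the numbers are fabricated placeholders for illustration.

```python
import numpy as np
from scipy import stats

# Fabricated per-participant mean reaction times (ms), for illustration only.
rt_congruent   = np.array([412, 398, 455, 430, 441, 405, 420, 436])
rt_incongruent = np.array([438, 420, 471, 452, 460, 418, 447, 455])

# Paired t-test: does the interaction with the robot produce the classic condition effect?
t, p = stats.ttest_rel(rt_incongruent, rt_congruent)
effect = (rt_incongruent - rt_congruent).mean()
print(f"mean condition effect = {effect:.1f} ms, t = {t:.2f}, p = {p:.4f}")
```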


2020, Vol 142 (6)
Author(s): Yu She, Siyang Song, Hai-Jun Su, Junmin Wang

Abstract: In this paper, we study the effects of mechanical compliance on safety in physical human-robot interaction (pHRI). More specifically, we compare the effects of joint compliance and link compliance on the impact force, assuming contact occurs between a robot and a human head. We first establish pHRI system models composed of robot dynamics, an impact contact model, and head dynamics; these models are validated by Simscape simulation. By comparing impact results for a robotic arm with a compliant link (CL) and one with a compliant joint (CJ), we conclude that the CL design produces a smaller maximum impact force given the same lateral stiffness and otherwise identical physical and geometric parameters. Furthermore, we compare the variable stiffness joint (VSJ) with the variable stiffness link (VSL) for various actuation and design parameters. While decreasing the stiffness of the CJ cannot effectively reduce the maximum impact force, the CL design reduces the impact force more effectively as the link stiffness is varied. We conclude that the CL design potentially outperforms the CJ design in addressing safety in pHRI and can serve as a promising alternative solution to satisfy the safety constraints in pHRI.
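A much simpler model than the paper's Simscape setup already captures why compliance at the contact matters. For a linear-spring impact between the arm's effective moving mass and the head, energy conservation gives a peak force of F_max = v * sqrt(k_eff * m_eff), where m_eff is the reduced mass of the pair, so lowering the effective lateral stiffness seen at the contact point lowers the peak force. The sketch below is this simplified model with assumed numerical values, not the paper's full dynamics.

```python
import math

def peak_impact_force(m_robot, m_head, v_rel, k_eff):
    """Peak force of a linear-spring impact between two free masses."""
    m_eff = (m_robot * m_head) / (m_robot + m_head)  # reduced mass of the colliding pair
    return v_rel * math.sqrt(k_eff * m_eff)

m_robot = 10.0   # kg, assumed effective mass of the arm at the contact point
m_head = 4.5     # kg, approximate mass of a human head
v_rel = 1.0      # m/s, assumed relative velocity at impact

for k_eff in (5e4, 2e4, 5e3):  # N/m, assumed effective lateral stiffness at the contact
    f_max = peak_impact_force(m_robot, m_head, v_rel, k_eff)
    print(f"k = {k_eff:8.0f} N/m -> F_max ~ {f_max:7.1f} N")
```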

