Up from the Rubble: Lessons Learned about HRI from Search and Rescue

Author(s): Robin R. Murphy, Jennifer L. Burke

The Center for Robot-Assisted Search and Rescue has collected data at three responses (the World Trade Center, Hurricane Charley, and the La Conchita mudslide) and nine high-fidelity field exercises. Our results can be distilled into four lessons. First, building situation awareness, not autonomous navigation, is the major bottleneck in robot autonomy. Most of the robotics literature assumes a single operator controlling a single robot (SOSR), while our work shows that two operators working together are nine times more likely to find a victim. Second, human-robot interaction should be thought of not as how to control the robot but as how a team of experts can exploit the robot as an active information source. The third lesson is that team members use shared visual information to build shared mental models and facilitate team coordination, which suggests that high-bandwidth, reliable communications will be necessary for effective teamwork. Fourth, victims and rescuers in close proximity to the robots respond to them socially. We conclude with observations about the general challenges in human-robot interaction.

2021, Vol. 8
Author(s): Alisha Bevins, Brittany A. Duncan

This article presents an understanding of naive users’ perception of the communicative nature of unmanned aerial vehicle (UAV) motions, refined through an iterative series of studies. This includes both what people believe the UAV is trying to communicate and how they expect to respond through physical action or emotional response. Previous work in this area prioritized gestures from participants to the vehicle, or augmented the vehicle with additional communication modalities, rather than communicating through motion alone, and it often lacked clear definitions of the states to be conveyed. In an attempt to elicit more concrete states and better understand the perception of specific motions, this work includes multiple iterations of state creation, flight path refinement, and label assignment. The lessons learned will be broadly applicable to those interested in defining flight paths, and to the human-robot interaction community as a whole, as they provide a base for anyone seeking to communicate using non-anthropomorphic robots. We found that the Negative Attitudes towards Robots Scale (NARS) can indicate how a person is likely to react to a UAV, the emotional content they are likely to perceive in a conveyed message, and the personality characteristics they are likely to project onto the UAV. We also see that people commonly map motions from other non-verbal communication situations onto UAVs. Flight-specific recommendations are to use a dynamic retreating motion away from a person to encourage following, a motion perpendicular to their field of view for blocking, a simple descending motion for landing, and either no motion or large altitude changes to encourage watching. Overall, this research explores communication from the UAV to the bystander through motion, examining how people respond physically and emotionally.
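The flight-specific recommendations above amount to a small mapping from communicative intent to motion primitive. The following Python sketch encodes that mapping as a lookup table; the `MotionPrimitive` class, the intent labels, and the `select_motion` helper are illustrative assumptions for exposition, not artifacts of the study.

```python
from dataclasses import dataclass

@dataclass
class MotionPrimitive:
    """Illustrative description of a UAV flight motion (hypothetical type)."""
    name: str
    description: str

# Intent-to-motion lookup following the study's flight-specific
# recommendations; the keys are assumed labels, not the paper's terms.
INTENT_TO_MOTION = {
    "follow_me": MotionPrimitive(
        "dynamic_retreat",
        "Retreat dynamically away from the person to encourage following."),
    "block": MotionPrimitive(
        "perpendicular_sweep",
        "Move perpendicular to the person's field of view to block."),
    "landing": MotionPrimitive(
        "simple_descent",
        "Descend simply and smoothly to signal landing."),
    "watch_me": MotionPrimitive(
        "altitude_change",
        "Hold position or make large altitude changes to draw watching."),
}

def select_motion(intent: str) -> MotionPrimitive:
    """Return the recommended motion primitive for a communicative intent."""
    return INTENT_TO_MOTION[intent]

if __name__ == "__main__":
    print(select_motion("follow_me").description)
```

A table-driven design like this keeps the intent vocabulary easy to extend as further states and flight paths are validated with users.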


2019, Vol. 374 (1771), pp. 20180026
Author(s): Hatice Gunes, Oya Celiktutan, Evangelos Sariyanidi

Communication with humans is a multi-faceted phenomenon in which emotions, personality, and non-verbal as well as verbal behaviours play a significant role, and human–robot interaction (HRI) technologies should respect this complexity to achieve efficient and seamless communication. In this paper, we describe the design and execution of five public demonstrations of two HRI systems that aimed to automatically sense and analyse human participants’ non-verbal behaviour and to predict their facial action units, facial expressions, and personality in real time while they interacted with a small humanoid robot. We present an overview of the challenges faced, together with the lessons learned from these demonstrations, to help the science and engineering communities design and build robots with more purposeful interaction capabilities. This article is part of the theme issue ‘From social brains to social robots: applying neurocognitive insights to human–robot interaction’.
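The demonstrations described above follow a sense-analyse-respond pattern: capture a frame, estimate facial action units, classify the expression, update a running personality estimate, and drive the robot's reaction. Below is a minimal Python sketch of such a real-time loop; every component function here is a hypothetical stub standing in for the paper's unnamed models, not the authors' actual system.

```python
import time
from typing import Dict, List

def capture_frame() -> bytes:
    """Hypothetical camera grab; returns a raw frame (stubbed here)."""
    return b""

def detect_action_units(frame: bytes) -> Dict[str, float]:
    """Hypothetical facial action unit detector (stubbed intensities)."""
    return {"AU12": 0.8}  # AU12: lip-corner puller

def classify_expression(aus: Dict[str, float]) -> str:
    """Toy expression classifier over action unit intensities."""
    return "happiness" if aus.get("AU12", 0.0) > 0.5 else "neutral"

def update_personality_estimate(history: List[str], expression: str) -> None:
    """Accumulate observations for a running personality prediction."""
    history.append(expression)

def robot_respond(expression: str) -> None:
    """Placeholder for the humanoid robot's behavioural response."""
    print(f"Robot reacts to perceived {expression}")

def interaction_loop(duration_s: float = 1.0, hz: float = 10.0) -> None:
    """Run the sense-analyse-respond cycle at a fixed rate."""
    history: List[str] = []
    period = 1.0 / hz
    end = time.monotonic() + duration_s
    while time.monotonic() < end:
        frame = capture_frame()
        aus = detect_action_units(frame)
        expression = classify_expression(aus)
        update_personality_estimate(history, expression)
        robot_respond(expression)
        time.sleep(period)

if __name__ == "__main__":
    interaction_loop()
```

In a live public demonstration, the real constraint is that each pass through this loop must complete within the frame period, which is part of why real-time prediction is highlighted as a challenge.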


Author(s): James Ballantyne, Edward Johns, Salman Valibeik, Charence Wong, Guang-Zhong Yang

interactions, 2005, Vol. 12 (2), pp. 39-41
Author(s): Jill L. Drury, Holly A. Yanco, Jean Scholtz
