High-Level Commands in Human-Robot Interaction for Search and Rescue

Author(s):  
Alain Caltieri ◽  
Francesco Amigoni
Complexity ◽  
2019 ◽  
Vol 2019 ◽  
pp. 1-16
Author(s):  
Maurice Lamb ◽  
Patrick Nalepka ◽  
Rachel W. Kallen ◽  
Tamara Lorenz ◽  
Steven J. Harrison ◽  
...  

Interactive or collaborative pick-and-place tasks occur during all kinds of daily activities, for example, when two or more individuals pass plates, glasses, and utensils back and forth when setting a dinner table or loading a dishwasher together. In the near future, robotic assistants could also participate in these collaborative pick-and-place tasks. For human-machine and human-robot interaction, however, interactive pick-and-place tasks present a unique set of challenges. A key challenge is that high-level task-representational algorithms and preplanned action or motor programs quickly become intractable, even for simple interaction scenarios. Here we address this challenge by introducing a bioinspired behavioral dynamic model of free-flowing cooperative pick-and-place behaviors based on low-dimensional dynamical movement primitives and nonlinear action selection functions. Further, we demonstrate that this model can be successfully implemented as an artificial-agent control architecture to produce effective and robust human-like behavior during human-agent interactions. Participants were unable to explicitly detect whether they were working with an artificial (model-controlled) agent or another human co-actor, further illustrating the potential of the proposed modeling approach for developing robust, embodied human-robot interaction systems more generally.
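
The control architecture the abstract refers to combines low-dimensional movement dynamics with a nonlinear decision rule. A minimal sketch of that kind of structure is given below, assuming a damped point-attractor primitive for each reach and a sigmoidal pick/place selection function; the equations, gains, and threshold are illustrative assumptions, not the authors' published model.

```python
import numpy as np

def dmp_step(x, v, goal, k=25.0, b=10.0, dt=0.01):
    # One Euler step of a damped point-attractor movement primitive:
    # the effector state (x, v) is pulled toward `goal` by a spring-damper.
    a = k * (goal - x) - b * v
    v = v + a * dt
    x = x + v * dt
    return x, v

def pick_probability(d_self, d_partner, beta=8.0):
    # Sigmoidal action selection: the probability of committing to the pick
    # rises sharply once the agent is closer to the object than the co-actor.
    return 1.0 / (1.0 + np.exp(beta * (d_self - d_partner)))

# Hypothetical usage: reach for the object only while selection favours this agent.
x, v, obj = np.zeros(2), np.zeros(2), np.array([0.4, 0.1])
for _ in range(200):
    if pick_probability(np.linalg.norm(obj - x), d_partner=0.6) > 0.5:
        x, v = dmp_step(x, v, obj)
```

Because both pieces are low-dimensional ordinary differential equations and pointwise nonlinearities, the agent's behavior stays tractable even as the number of objects and co-actors grows, which is the property the abstract emphasizes.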


2021 ◽  
Author(s):  
Stefano Dalla Gasperina ◽  
Valeria Longatelli ◽  
Francesco Braghin ◽  
Alessandra Laura Giulia Pedrocchi ◽  
Marta Gandolla

Background: Appropriate training modalities for post-stroke upper-limb rehabilitation are key to effective recovery after the acute event. This work presents a novel human-robot cooperative control framework that promotes compliant motion and renders different high-level human-robot interaction rehabilitation modalities under a unified low-level control scheme.
Methods: The presented control law is based on a loadcell-based impedance controller provided with positive-feedback compensation terms for disturbance rejection and dynamics compensation. We developed an elbow flexion-extension experimental setup and conducted experiments to evaluate the controller's performance. Seven high-level modalities, characterized by different levels of (i) impedance-based corrective assistance, (ii) weight counterbalance assistance, and (iii) resistance, were defined and tested with 14 healthy volunteers.
Results: The unified controller proved suitable for promoting good transparency and for rendering both compliant and high-impedance behavior at the joint. Surface electromyography results showed different muscular activation patterns according to the rehabilitation modality. The results suggested avoiding weight counterbalance assistance, since it could induce motor relearning different from that obtained with purely impedance-based corrective strategies.
Conclusion: We showed that the proposed control framework can implement different physical human-robot interaction modalities and promote the assist-as-needed paradigm, helping the user accomplish the task while maintaining physiological muscular activation patterns. Future work involves the extension to robots with multiple degrees of freedom and the investigation of an adaptive control law that lets the controller learn and adapt in a therapist-like manner.
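
As a rough illustration of the unified low-level scheme described above, a loadcell-based impedance law can be written as a spring-damper corrective term plus a measured-force positive-feedback term and a dynamics-compensation term. The single-joint sketch below uses made-up gains and a placeholder compensation value; it is an assumption-laden simplification, not the authors' controller.

```python
def impedance_torque(q, q_dot, q_ref, q_ref_dot, f_meas,
                     K=15.0, D=1.5, g_comp=0.0, alpha=0.3):
    # Corrective impedance term: a virtual spring-damper between the measured
    # joint state (q, q_dot) and the reference trajectory (q_ref, q_ref_dot).
    corrective = K * (q_ref - q) + D * (q_ref_dot - q_dot)
    # Positive feedback of the loadcell-measured interaction torque (alpha * f_meas)
    # plus a gravity/dynamics compensation term, intended to improve transparency.
    return corrective + alpha * f_meas + g_comp

# Hypothetical usage for one control cycle of an elbow joint (radians, Nm).
tau = impedance_torque(q=0.50, q_dot=0.0, q_ref=0.55, q_ref_dot=0.1, f_meas=0.2)
```

Different rehabilitation modalities can then be obtained from the same law by rescaling K and D (corrective assistance or resistance) and the compensation terms (weight counterbalance), which is what a unified low-level scheme of this kind makes convenient.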


interactions ◽  
2005 ◽  
Vol 12 (2) ◽  
pp. 39-41 ◽  
Author(s):  
Jill L. Drury ◽  
Holly A. Yanco ◽  
Jean Scholtz

2020 ◽  
Vol 17 (3) ◽  
pp. 172988142092529
Author(s):  
Junhao Xiao ◽  
Pan Wang ◽  
Huimin Lu ◽  
Hui Zhang

Human–robot interaction is a vital part of human–robot collaborative space exploration, as it bridges the high-level decision and path-planning intelligence of the human with the accurate sensing and modelling abilities of the robot. However, most conventional human–robot interaction approaches rely on video streams for the operator to understand the robot's surroundings, which provides limited situational awareness and leaves the operator stressed and fatigued. This research aims to improve efficiency and promote a more natural level of interaction for human–robot collaboration. We present a human–robot interaction method based on real-time mapping and online virtual reality visualization, implemented and verified for rescue robotics. At the robot side, a dense point cloud map is built in real time by tightly coupled LiDAR-IMU fusion; the resulting map is then converted into a three-dimensional normal distributions transform (NDT) representation. Wireless communication is used to transmit the three-dimensional NDT map to the remote control station incrementally. At the remote control station, the received map is rendered in virtual reality using parameterized ellipsoid cells. The operator controls the robot through three modes. In complex areas, the operator can use interactive devices to give low-level motion commands. In more structured regions, the operator can instead specify a path or even a target point, and the robot then follows the path or navigates to the target point autonomously; these two modes rely more on the robot's autonomy. Thanks to the virtual reality visualization, the operator gains a more comprehensive understanding of the space to be explored, so the high-level decision and path-planning intelligence of the human and the accurate sensing and modelling abilities of the robot can be integrated as a whole. Although the method is proposed for rescue robots, it can also be used in other out-of-sight, teleoperation-based human–robot collaboration systems, including but not limited to manufacturing, space, undersea, surgery, agriculture, and military operations.
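
The three-dimensional normal distributions transform representation mentioned above summarizes the point cloud as one Gaussian per occupied cell, and it is the covariance ellipsoid of each Gaussian that can be rendered in virtual reality. The sketch below shows that conversion under an assumed cell size and minimum point count; the parameter values and the random test scan are illustrative, not taken from the paper.

```python
import numpy as np
from collections import defaultdict

def build_ndt_map(points, cell_size=0.5, min_points=5):
    # Voxelize an N x 3 point cloud and fit one Gaussian (mean, covariance)
    # per occupied cell; each Gaussian corresponds to one ellipsoid cell.
    cells = defaultdict(list)
    for p in points:
        key = tuple(np.floor(p / cell_size).astype(int))
        cells[key].append(p)
    ndt = {}
    for key, pts in cells.items():
        pts = np.asarray(pts)
        if len(pts) >= min_points:  # enough points for a stable covariance
            ndt[key] = (pts.mean(axis=0), np.cov(pts.T))
    return ndt

# Hypothetical usage: compress a scan before transmitting it to the control station.
scan = np.random.rand(10000, 3) * 10.0
ndt_map = build_ndt_map(scan)
```

Because each cell is reduced to a mean and a 3x3 covariance, the map is far smaller than the raw point cloud, which is what makes incremental wireless transmission to the remote station practical.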


Author(s):  
Robin R. Murphy ◽  
Jennifer L. Burke

The Center for Robot-Assisted Search and Rescue has collected data at three responses (the World Trade Center, Hurricane Charley, and the La Conchita mudslide) and nine high-fidelity field exercises. Our results can be distilled into four lessons. First, building situation awareness, not autonomous navigation, is the major bottleneck in robot autonomy. Most of the robotics literature assumes a single-operator, single-robot (SOSR) setting, while our work shows that two operators working together are nine times more likely to find a victim. Second, human-robot interaction should be thought of not as how to control the robot but as how a team of experts can exploit the robot as an active information source. The third lesson is that team members use shared visual information to build shared mental models and facilitate team coordination, which suggests that high-bandwidth, reliable communications will be necessary for effective teamwork. Fourth, victims and rescuers in close proximity to the robots respond to them socially. We conclude with observations about the general challenges in human-robot interaction.

