Architecture for Safe Human-Robot Collaboration: Multi-Modal Communication in Virtual Reality for Efficient Task Execution

Author(s):  
Beibei Shu ◽  
Gabor Sziebig ◽  
Roel Pieters


Author(s):
Roberta Etzi ◽  
Siyuan Huang ◽  
Giulia Wally Scurati ◽  
Shilei Lyu ◽  
Francesco Ferrise ◽  
...  

Abstract: The use of collaborative robots in the manufacturing industry has spread widely over the last decade. To be efficient, human-robot collaboration needs to be properly designed, taking into account the operator's psychophysiological reactions. Virtual Reality can be used as a tool to simulate human-robot collaboration in a safe and inexpensive way. Here, we present a virtual collaborative platform in which the human operator and a simulated robot coordinate their actions to accomplish a simple assembly task. In this study, the robot moved either slowly or quickly in order to assess the effect of its velocity on the human's responses. Ten participants tested this application using an Oculus Rift head-mounted display; ARTracking cameras and a Kinect system were used to track the operator's right arm movements and hand gestures, respectively. Performance, user experience, and physiological responses were recorded. The results showed that while participants' performance and evaluations varied as a function of the robot's velocity, no differences were found in the physiological responses. Taken together, these data highlight the relevance of the kinematic aspects of the robot's motion within a human-robot collaboration and provide valuable insights to further develop our virtual human-machine interactive platform.
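A minimal sketch of how such a velocity manipulation could be driven in a simulation loop; all names, the velocity values, and the trial structure are illustrative assumptions, not details from the paper:

```python
import random
import time

# Minimal sketch, not the authors' implementation: a trial loop that drives
# the simulated robot at one of two velocities and logs task performance.
VELOCITIES = {"slow": 0.1, "fast": 0.4}  # assumed end-effector speeds, m/s

def run_assembly_trial(participant_id: int, condition: str) -> dict:
    """Run one assembly trial and return the logged measures."""
    speed = VELOCITIES[condition]
    start = time.time()
    # ... drive the simulated robot at `speed` until the part is assembled ...
    completion_time = time.time() - start
    return {"participant": participant_id,
            "condition": condition,
            "speed_mps": speed,
            "completion_time_s": completion_time}

# Counterbalance the two velocity conditions across a participant's trials.
conditions = ["slow", "fast"] * 5
random.shuffle(conditions)
log = [run_assembly_trial(1, c) for c in conditions]
```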


2019 ◽  
Vol 10 (1) ◽  
pp. 318-329 ◽  
Author(s):  
Alexandre Angleraud ◽  
Quentin Houbre ◽  
Roel Pieters

Abstract: Recent advances in robotics allow for collaboration between humans and machines in performing tasks at home or in industrial settings without endangering the user. While humans can easily adapt to each other and work in teams, this is not as trivial for robots, whose interaction skills typically come at the cost of extensive programming and teaching. Moreover, understanding the semantics of a task is necessary to work efficiently and to react to changes during task execution. Seamless collaboration therefore requires appropriate reasoning, learning skills, and interaction capabilities. For us humans, a cornerstone of communication is language, which we use to teach, coordinate, and communicate. In this paper we thus propose a system that allows (i) teaching new action semantics based on already available knowledge and (ii) using natural language communication to resolve ambiguities that could arise while giving commands to the robot. Reasoning then allows new skills to be performed either autonomously or in collaboration with a human. Teaching occurs through a web application, and motions are learned through physical demonstration on the robotic arm. We demonstrate the utility of our system in two scenarios and reflect upon the challenges it introduces.
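The two mechanisms the abstract names, teaching action semantics from existing knowledge and resolving ambiguous commands through dialogue, can be illustrated with a toy sketch; this is not the authors' implementation, and every name in it is an assumption:

```python
# Illustrative sketch of (i) composing a new action from already known ones
# and (ii) asking a clarifying question when a command is ambiguous.
PRIMITIVES = {"reach", "close_gripper", "lift", "move_over", "lower", "open_gripper"}

known_actions: dict[str, list[str]] = {
    "pick": ["reach", "close_gripper", "lift"],
    "place": ["move_over", "lower", "open_gripper"],
}

def teach_action(name: str, steps: list[str]) -> None:
    """Define a new action in terms of primitives or already taught actions."""
    unknown = [s for s in steps if s not in PRIMITIVES and s not in known_actions]
    if unknown:
        raise ValueError(f"cannot teach '{name}': unknown steps {unknown}")
    known_actions[name] = steps

def resolve_reference(verb: str, candidates: list[str]) -> str:
    """Ask a clarifying question when a command could refer to several objects."""
    if len(candidates) == 1:
        return candidates[0]
    return input(f"Which object should I {verb}? {candidates} ")

teach_action("transfer", ["pick", "place"])   # new skill built on known ones
target = resolve_reference("pick", ["red_block", "blue_block"])
```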


2018 ◽  
Vol 19 ◽  
pp. 164-170 ◽  
Author(s):  
Patrick Rückert ◽  
Laura Wohlfromm ◽  
Kirsten Tracht

2020 ◽  
Vol 17 (3) ◽  
Art. 172988142092529
Author(s):  
Junhao Xiao ◽  
Pan Wang ◽  
Huimin Lu ◽  
Hui Zhang

Human–robot interaction is a vital part of human–robot collaborative space exploration: it bridges the human's high-level decision-making and path-planning intelligence and the robot's accurate sensing and modelling ability. However, most conventional human–robot interaction approaches rely on video streams for the operator to understand the robot's surroundings, which provides poor situational awareness and leaves the operator stressed and fatigued. This research aims to improve the efficiency and naturalness of interaction in human–robot collaboration. We present a human–robot interaction method based on real-time mapping and online virtual reality visualization, implemented and verified for rescue robotics. On the robot side, a dense point cloud map is built in real time by tightly coupled LiDAR-IMU fusion; the resulting map is then transformed into a three-dimensional normal distributions transform (NDT) representation. The NDT map is transmitted incrementally over a wireless link to the remote control station, where it is rendered in virtual reality using parameterized ellipsoid cells. The operator controls the robot in three modes. In complex areas, the operator can use interactive devices to give low-level motion commands. In less complex regions, the operator can instead specify a path or even a target point, after which the robot follows the path or navigates to the target autonomously; in other words, these two modes rely more on the robot's autonomy. By virtue of the virtual reality visualization, the operator gains a more comprehensive understanding of the space to be explored, so that the human's high-level decision-making and path-planning intelligence and the robot's accurate sensing and modelling ability are well integrated as a whole. Although the method is proposed for rescue robots, it can also be used in other out-of-sight teleoperation-based human–robot collaboration systems, including but not limited to manufacturing, space, undersea, surgery, agriculture, and military operations.
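The core data structure here, the 3D NDT map, summarizes each occupied voxel of the point cloud by the mean and covariance of its points; the covariance's eigen-decomposition then yields the axes of the ellipsoid rendered in VR. A minimal sketch under an assumed cell size and assumed function names (not the paper's code):

```python
import numpy as np

# Minimal sketch of a 3D NDT map: voxelize the cloud, then summarize each
# occupied voxel by the mean and covariance of its points. Each pair is what
# the remote station can render as a parameterized ellipsoid.
def build_ndt_map(points: np.ndarray, cell_size: float = 0.5) -> dict:
    """points: (N, 3) array -> {voxel index: (mean, 3x3 covariance)}."""
    cells = {}
    keys = np.floor(points / cell_size).astype(int)
    for key in np.unique(keys, axis=0):
        in_cell = points[np.all(keys == key, axis=1)]
        if len(in_cell) < 4:   # too few points for a stable covariance
            continue
        cells[tuple(key)] = (in_cell.mean(axis=0),
                             np.cov(in_cell, rowvar=False))
    return cells

# Transmitting only new or changed cells keeps the wireless update incremental.
cloud = np.random.rand(10000, 3) * 10.0   # stand-in for an accumulated LiDAR map
ndt_map = build_ndt_map(cloud)
```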


2021 ◽  
Vol 11 (16) ◽  
Art. 7340
Author(s):  
Dana Gutman ◽  
Samuel Olatunji ◽  
Yael Edan

This study explored how levels of automation (LOA) influence human–robot collaboration under different levels of workload. Two LOA modes were designed, implemented, and evaluated in an experimental collaborative assembly task across four levels of workload, composed of a secondary task and task complexity. A user study involving 80 participants was assessed through two constructs designed specifically for the evaluation (quality of task execution and usability) and through user preferences regarding the LOA modes. Results revealed that quality of task execution and usability were better at high LOA under low workload. Most participants also preferred high LOA as the workload increased. However, when the workload involved task complexity, most participants preferred the low LOA. These results reveal the benefits of high and low LOA in different workload situations. This study provides insights for shared control designs and reveals the importance of considering different levels of workload, as induced by secondary tasks and task complexity, when designing LOA in human–robot collaboration.
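What high versus low LOA means in practice can be sketched in a few lines; the mode names, step list, and confirm-per-step behaviour below are illustrative assumptions rather than the study's actual design:

```python
# Hypothetical sketch of two LOA modes: at low LOA the operator confirms
# every step, at high LOA the robot proceeds on its own and only reports.
def execute_step(step: str, loa: str) -> None:
    if loa == "low" and input(f"Proceed with '{step}'? [y/n] ") != "y":
        print(f"skipped: {step}")   # operator retains per-step authority
        return
    print(f"robot executing: {step}")

assembly_steps = ["fetch part", "align part", "fasten part"]
for step in assembly_steps:
    execute_step(step, loa="high")  # high LOA: no per-step confirmation
```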

