Understanding human-robot interaction in virtual reality

Author(s):  
Oliver Liu ◽  
Daniel Rakita ◽  
Bilge Mutlu ◽  
Michael Gleicher


Author(s):  
Roberta Etzi ◽  
Siyuan Huang ◽  
Giulia Wally Scurati ◽  
Shilei Lyu ◽  
Francesco Ferrise ◽  
...  

Abstract: The use of collaborative robots in the manufacturing industry has spread widely over the last decade. To be efficient, human-robot collaboration needs to be properly designed, taking into account the operator’s psychophysiological reactions. Virtual Reality can be used as a tool to simulate human-robot collaboration safely and inexpensively. Here, we present a virtual collaborative platform in which the human operator and a simulated robot coordinate their actions to accomplish a simple assembly task. In this study, the robot moved at either a slow or a fast velocity in order to assess the effect of its speed on the human’s responses. Ten participants tested this application using an Oculus Rift head-mounted display; ART tracking cameras and a Kinect system were used to track the operator’s right-arm movements and hand gestures, respectively. Performance, user experience, and physiological responses were recorded. The results showed that while participants’ performance and evaluations varied as a function of the robot’s velocity, no differences were found in the physiological responses. Taken together, these data highlight the relevance of the kinematic aspects of the robot’s motion within a human-robot collaboration and provide valuable insights for further developing our virtual human-machine interactive platform.
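A minimal sketch of the velocity manipulation described in this abstract, written in Python. This is not the authors' implementation; the speed values, trial structure, and the names run_session, run_trial, and move_robot are illustrative assumptions for how a slow versus fast condition could be run and logged per participant.

```python
import time

# Assumed joint speeds (m/s) for the two within-subjects conditions.
VELOCITY_CONDITIONS = {"slow": 0.1, "fast": 0.4}

def run_trial(participant_id, condition, move_robot, assembly_steps):
    """Run one assembly trial and return simple performance measures."""
    start = time.time()
    errors = 0
    for step in assembly_steps:
        # move_robot is a placeholder callback into the simulated robot;
        # it returns False if the step was not completed correctly.
        ok = move_robot(step, speed=VELOCITY_CONDITIONS[condition])
        if not ok:
            errors += 1
    return {
        "participant": participant_id,
        "condition": condition,
        "completion_time_s": time.time() - start,
        "errors": errors,
    }

def run_session(participant_id, move_robot, assembly_steps):
    """Counterbalance condition order across participants."""
    order = ["slow", "fast"] if participant_id % 2 == 0 else ["fast", "slow"]
    return [run_trial(participant_id, c, move_robot, assembly_steps) for c in order]

if __name__ == "__main__":
    # Toy stand-in for the simulated robot: every step succeeds.
    print(run_session(0, lambda step, speed: True, assembly_steps=range(5)))
```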


2020 ◽  
Author(s):  
Thomas Williams

In previous work, researchers in Human-Robot Interaction (HRI) have demonstrated that user trust in robots depends on effective and transparent communication. This may be particularly true for robots used for transportation, since users rely on such robots for physical movement and safety. In this paper, we present the design of an experiment examining the importance of proactive communication by robotic wheelchairs, as compared to non-vehicular mobile robots, within a Virtual Reality (VR) environment. Furthermore, we describe the specific advantages and limitations of conducting this type of HRI experiment in VR.


2020 ◽  
Vol 17 (3) ◽  
pp. 172988142092529
Author(s):  
Junhao Xiao ◽  
Pan Wang ◽  
Huimin Lu ◽  
Hui Zhang

Human–robot interaction is a vital part of human–robot collaborative space exploration, bridging the high-level decision-making and path-planning intelligence of the human and the accurate sensing and modelling ability of the robot. However, most conventional human–robot interaction approaches rely on video streams for the operator to understand the robot’s surroundings, which limits situational awareness and leaves the operator stressed and fatigued. This research aims to improve efficiency and promote a more natural level of interaction for human–robot collaboration. We present a human–robot interaction method based on real-time mapping and online virtual reality visualization, which is implemented and verified for rescue robotics. At the robot side, a dense point cloud map is built in real time by tightly coupled LiDAR-IMU fusion; the resulting map is then transformed into a three-dimensional normal distributions transform (NDT) representation. Wireless communication is used to transmit the three-dimensional NDT map to the remote control station incrementally. At the remote control station, the received map is rendered in virtual reality using parameterized ellipsoid cells. The operator controls the robot in three modes. In complex areas, the operator can use interactive devices to give low-level motion commands. In less complex regions, the operator can instead specify a path or even a target point, and the robot then follows the path or navigates to the target point autonomously; these two modes rely more on the robot’s autonomy. By virtue of the virtual reality visualization, the operator gains a more comprehensive understanding of the space to be explored, so the high-level decision-making and path-planning intelligence of the human and the accurate sensing and modelling ability of the robot can be integrated as a whole. Although the method is proposed for rescue robots, it can also be used in other out-of-sight, teleoperation-based human–robot collaboration systems, including but not limited to manufacturing, space, undersea, surgical, agricultural, and military operations.
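A minimal Python sketch of the NDT-map idea described in this abstract: points are bucketed into voxels, each voxel keeps only the mean and covariance of its points, and each cell can be shipped to the VR client and rendered as an ellipsoid. This is not the authors' code; the voxel size, minimum point count, eigenvalue-based ellipsoid scaling, and the delta-update criterion are illustrative assumptions.

```python
import numpy as np
from collections import defaultdict

VOXEL_SIZE = 0.5  # metres; assumed cell resolution

def build_ndt_cells(points):
    """Group an (N x 3) point cloud into voxels and compute per-voxel statistics."""
    points = np.asarray(points, dtype=float)
    buckets = defaultdict(list)
    for p in points:
        key = tuple(np.floor(p / VOXEL_SIZE).astype(int))
        buckets[key].append(p)

    cells = {}
    for key, pts in buckets.items():
        pts = np.asarray(pts)
        if len(pts) < 5:  # too few points for a stable covariance estimate
            continue
        cells[key] = (pts.mean(axis=0), np.cov(pts, rowvar=False))
    return cells

def cell_to_ellipsoid(mean, cov, scale=2.0):
    """Convert one NDT cell into ellipsoid parameters for VR rendering:
    centre, semi-axis lengths, and rotation (eigenvectors of the covariance)."""
    eigvals, eigvecs = np.linalg.eigh(cov)
    radii = scale * np.sqrt(np.maximum(eigvals, 1e-9))
    return {"centre": mean, "radii": radii, "rotation": eigvecs}

def delta_update(prev_cells, new_cells, tol=1e-3):
    """Incremental transmission: send only cells that are new or whose mean moved."""
    changed = {}
    for key, (mean, cov) in new_cells.items():
        if key not in prev_cells or np.linalg.norm(mean - prev_cells[key][0]) > tol:
            changed[key] = (mean, cov)
    return changed
```

The intent of the delta step is that the wireless link carries only changed cells rather than the full map on every update, which is one plausible reading of the "incremental manner" mentioned above.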


2020 ◽  
Vol 11 (1) ◽  
pp. 379-389
Author(s):  
Angelina Aleksandrovich ◽  
Leonardo Mariano Gomes

Abstract: This research explores multisensory sexual arousal in men and women and how it can be implemented and shared between multiple individuals in Virtual Reality (VR). This is achieved by stimulating the human senses with immersive technology, including visual, olfactory, auditory, and haptic triggers. Participants are invited into VR to test various sensory triggers and assess whether they are sexually arousing. Conclusions are drawn from a literature review of VR experiments related to sexuality, the concepts of perception and multisensory experiments, and data collected from self-reports. The goal of this research is to establish that sexual arousal is a multisensory event that may or may not be linked to the presence or thought of the intended object of desire (a sexual partner). By examining what stimulates arousal, we better understand the multisensory capacity of humans, leading not only to richer sexual experiences but also to the further development of wearable sextech products, soft robotics, and multisensory learning machines. This understanding also supports other research on human-robot interaction and on affection detection and transmission in both physical and virtual realities, and on how VR technology can help design a new generation of sex robots.

