I Am the Robot: Teen Collaboration in an Asymmetric, Virtual Reality Game

2022 ◽  
Vol 2 ◽  
Author(s):  
Elin A. Björling ◽  
Ada Kim ◽  
Katelynn Oleson ◽  
Patrícia Alves-Oliveira

Virtual reality (VR) offers potential as a collaborative tool for both technology design and human-robot interaction. We utilized a participatory, human-centered design (HCD) methodology to develop a collaborative, asymmetric VR game to explore teens' perceptions of, and interactions with, social robots. Our paper illustrates three stages of our design process: ideation, prototyping, and usability testing. Through these stages we identified important design requirements for our mid-fidelity environment. We then describe findings from our pilot test of the mid-fidelity VR game with teens. Thanks to the unique asymmetric virtual reality design, we observed successful collaborations and interesting collaboration styles across teens. This study highlights the potential of asymmetric VR as a collaborative design tool as well as an appropriate medium for successful teen-to-teen collaboration.
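To make the "asymmetric" design concrete, the sketch below shows the underlying pattern in Python: two players share one game state, but each role is shown a different view of it. The role names, state fields, and visibility rules here are hypothetical illustrations, not details from the study.

```python
# Minimal sketch of asymmetric-information play: one shared state, two views.
# Roles, fields, and visibility rules are assumptions for illustration only.
from dataclasses import dataclass, field

@dataclass
class SharedGameState:
    robot_position: tuple = (0, 0)
    goal_position: tuple = (4, 3)
    obstacle_map: list = field(default_factory=lambda: [(1, 1), (2, 3)])

def view_for(role: str, state: SharedGameState) -> dict:
    """Return only the information the given role is allowed to see."""
    if role == "robot":        # the embodied player: local sensing only
        px, py = state.robot_position
        return {"position": state.robot_position,
                "nearby_obstacles": [o for o in state.obstacle_map
                                     if abs(o[0] - px) <= 1 and abs(o[1] - py) <= 1]}
    if role == "operator":     # the guiding player: global map, no local detail
        return {"goal": state.goal_position, "obstacle_map": state.obstacle_map}
    raise ValueError(f"unknown role: {role}")

state = SharedGameState()
print(view_for("robot", state))     # sees only what is adjacent to it
print(view_for("operator", state))  # sees the whole map but not local detail
```

Because neither view is complete, the players must communicate to succeed, which is what makes the asymmetry a driver of collaboration.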

Author(s):  
Roberta Etzi ◽  
Siyuan Huang ◽  
Giulia Wally Scurati ◽  
Shilei Lyu ◽  
Francesco Ferrise ◽  
...  

The use of collaborative robots in the manufacturing industry has spread widely over the last decade. To be efficient, human-robot collaboration needs to be properly designed, taking into account the operator's psychophysiological reactions. Virtual Reality can be used as a tool to simulate human-robot collaboration safely and cheaply. Here, we present a virtual collaborative platform in which the human operator and a simulated robot coordinate their actions to accomplish a simple assembly task. In this study, the robot moved at either a slower or a faster velocity in order to assess the effect of its speed on the human's responses. Ten participants tested this application using an Oculus Rift head-mounted display; ARTracking cameras and a Kinect system were used to track the operator's right-arm movements and hand gestures, respectively. Performance, user experience, and physiological responses were recorded. The results showed that while participants' performance and evaluations varied as a function of the robot's velocity, no differences were found in the physiological responses. Taken together, these data highlight the relevance of the kinematic aspects of the robot's motion within a human-robot collaboration and provide valuable insights to further develop our virtual human-machine interactive platform.
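To make the velocity manipulation concrete, here is a toy Python sketch of a constant-speed end-effector approach under a slow and a fast condition. The speeds, target coordinates, and timing loop are illustrative assumptions, not the authors' implementation.

```python
# Toy sketch (not the study's code): step a simulated end-effector toward a
# handover point at constant speed and log the elapsed time per condition.
import math

def handover_duration(start, target, velocity_m_s, tick_s=0.01):
    """Step toward the target at constant speed; return elapsed time in seconds."""
    pos = list(start)
    t = 0.0
    while math.dist(pos, target) > 1e-3:
        remaining = math.dist(pos, target)
        step = min(velocity_m_s * tick_s, remaining)   # do not overshoot
        pos = [a + (b - a) / remaining * step for a, b in zip(pos, target)]
        t += tick_s
    return t

for label, v in [("slow", 0.2), ("fast", 0.6)]:        # hypothetical speeds (m/s)
    print(label, round(handover_duration((0, 0, 0), (0.5, 0.3, 0.2), v), 2), "s")
```

In an actual experiment the condition speeds would be calibrated to the robot's real kinematic limits rather than chosen ad hoc as here.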


2020 ◽  
Author(s):  
Thomas Williams

In previous work, researchers in Human-Robot Interaction (HRI) have demonstrated that user trust in robots depends on effective and transparent communication. This may be particularly true for robots used for transportation, given user reliance on such robots for physical movement and safety. In this paper, we present the design of an experiment examining the importance of proactive communication by robotic wheelchairs, as compared to non-vehicular mobile robots, within a Virtual Reality (VR) environment. Furthermore, we describe the specific advantages and limitations of conducting this type of HRI experiment in VR.
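The abstract implies a two-factor comparison (robot platform × communication style). Below is a hedged sketch of one plausible way to counterbalance such conditions across participants; the factor levels echo the abstract, but the rotation scheme and names are our own illustration, not the published protocol.

```python
# Illustrative counterbalancing for a 2x2 HRI design: platform x communication.
# Factor levels follow the abstract; the assignment scheme is an assumption.
from itertools import product

platforms = ["wheelchair", "non-vehicular"]
communication = ["proactive", "non-proactive"]
conditions = list(product(platforms, communication))   # 4 cells

def condition_order(participant_id: int):
    """Rotate the condition sequence per participant to spread order effects."""
    offset = participant_id % len(conditions)
    return conditions[offset:] + conditions[:offset]

for pid in range(4):
    print(pid, condition_order(pid))
```

A full study would likely use a balanced Latin square rather than this simple rotation, but the structure of the comparison is the same.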


Author(s):  
Robert E. Wendrich ◽  
Kris-Howard Chambers ◽  
Wadee Al-Halabi ◽  
Eric J. Seibel ◽  
Olaf Grevenstuk ◽  
...  

Hybrid Design Tool Environments (HDTE) allow designers and engineers to use real tangible tools and physical objects and/or artifacts to create real-time virtual representations and presentations on the fly. Manipulations of the real tangible objects (e.g., wire mesh, clay, sketches) are translated into 2-D and/or 3-D digital CAD software and/or virtual instances. The HDTE is equipped with a Loosely Fitted Design Synthesizer (NXt-LFDS) to support this multi-user interaction and design processing. The current study explores, for the first time, the feasibility of using an NXt-LFDS in a networked, immersive, multi-participant social virtual reality environment (VRE). Using Oculus Rift goggles and PC computers at each location linked via Skype, team members physically located in several countries had the illusion of being co-located in a single virtual world, where they used rawshaping technologies (RST) to design a woman's purse as a 3-D virtual representation, with the possibility of printing the purse out on the spot (i.e., anywhere within the networked loop) on a 2-D or 3-D printer. Affordable immersive Virtual Reality (VR) technology (and 3-D additive manufacturing) is becoming commercially available and widely used by mainstream consumers, a development that could transform the collaborative design process. The results of the current feasibility study suggest that product design may become considerably more individualized within collaborative multi-user settings, and less inhibited, in the coming 'Diamond Age' [1] of VR and collaborative networks, with profound implications for the design (e.g., fashion) and engineering industries. This paper presents the proposed system architecture, a collaborative use-case scenario, and preliminary results on interaction, coordination, cooperation, and communication within immersive VR.
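One way to picture the "networked loop" is as a stream of small update messages that every remote peer applies to its local copy of the shared 3-D model. The minimal Python sketch below assumes a hypothetical message schema (a point id plus coordinates); it is not the NXt-LFDS protocol, only an illustration of the synchronization idea.

```python
# Sketch of networked shape synchronization: each tangible manipulation is
# serialized as a small update that all peers apply to their local model.
# The message schema and class names are hypothetical.
import json

class SharedModel:
    def __init__(self):
        self.control_points = {}                 # point id -> (x, y, z)

    def apply(self, message: str):
        """Apply one broadcast update to the local copy of the model."""
        update = json.loads(message)
        self.control_points[update["id"]] = tuple(update["xyz"])

def encode_manipulation(point_id: str, xyz) -> str:
    """Serialize one wire-mesh/clay manipulation as a broadcastable update."""
    return json.dumps({"id": point_id, "xyz": list(xyz)})

# Two peers, e.g. in different countries, converge on the same model state.
peer_a, peer_b = SharedModel(), SharedModel()
msg = encode_manipulation("handle_top", (0.12, 0.40, 0.05))
for peer in (peer_a, peer_b):
    peer.apply(msg)
print(peer_a.control_points == peer_b.control_points)  # True
```

Because every location holds the full model, any peer in the loop can hand the current state to a local 2-D or 3-D printer, which is what makes "printing on the spot" possible.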


2020 ◽  
Vol 17 (3) ◽  
pp. 172988142092529
Author(s):  
Junhao Xiao ◽  
Pan Wang ◽  
Huimin Lu ◽  
Hui Zhang

Human–robot interaction is a vital part of human–robot collaborative space exploration: it bridges the high-level decision-making and path-planning intelligence of the human and the accurate sensing and modelling ability of the robot. However, most conventional human–robot interaction approaches rely on video streams for the operator to understand the robot's surroundings, which limits situational awareness and leaves the operator stressed and fatigued. This research aims to improve efficiency and promote a natural level of interaction for human–robot collaboration. We present a human–robot interaction method based on real-time mapping and online virtual reality visualization, implemented and verified for rescue robotics. On the robot side, a dense point cloud map is built in real time by tightly coupled LiDAR-IMU fusion; the resulting map is further transformed into a three-dimensional normal distributions transform (NDT) representation. Wireless communication is employed to transmit the three-dimensional NDT map to the remote control station incrementally. At the remote control station, the received map is rendered in virtual reality using parameterized ellipsoid cells. The operator controls the robot in three modes. In complex areas, the operator can use interactive devices to give low-level motion commands. In less complex regions, the operator can specify a path or even a target point; the robot then follows the path or navigates to the target point autonomously. In other words, these two modes rely more on the robot's autonomy. By virtue of the virtual reality visualization, the operator gains a more comprehensive understanding of the space to be explored, so the high-level decision-making and path-planning intelligence of the human and the accurate sensing and modelling ability of the robot can be well integrated as a whole. Although the method is proposed for rescue robots, it can also be used in other out-of-sight teleoperation-based human–robot collaboration systems, including but not limited to manufacturing, space, undersea, surgery, agriculture and military operations.
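The core of the map representation is easy to sketch: bucket LiDAR points into voxels, keep each voxel's sample mean and covariance (the NDT cell), and derive ellipsoid axes from the covariance eigendecomposition for rendering. The Python below illustrates that idea under assumed parameters (voxel size, 2-sigma radii, synthetic points); it is a sketch of the representation, not the authors' pipeline.

```python
# Sketch of a 3-D NDT map: per-voxel Gaussian cells, rendered as ellipsoids
# whose axes come from the covariance eigendecomposition. Parameters assumed.
from collections import defaultdict
import numpy as np

def ndt_cells(points: np.ndarray, voxel_size: float = 0.5):
    """Group points by voxel; return {voxel index: (mean, covariance)}."""
    buckets = defaultdict(list)
    for p in points:
        buckets[tuple(np.floor(p / voxel_size).astype(int))].append(p)
    cells = {}
    for idx, pts in buckets.items():
        pts = np.asarray(pts)
        if len(pts) >= 4:                  # enough points for a stable covariance
            cells[idx] = (pts.mean(axis=0), np.cov(pts.T))
    return cells

def ellipsoid_params(mean, cov):
    """Eigendecompose the covariance: center, axis lengths, and orientation."""
    eigvals, eigvecs = np.linalg.eigh(cov)
    radii = 2.0 * np.sqrt(np.maximum(eigvals, 0.0))  # ~2-sigma extents
    return mean, radii, eigvecs

rng = np.random.default_rng(0)
cloud = rng.normal(0.0, 0.3, size=(500, 3))          # stand-in for a LiDAR scan
for idx, (mu, cov) in list(ndt_cells(cloud).items())[:2]:
    center, radii, rot = ellipsoid_params(mu, cov)
    print(idx, np.round(center, 2), np.round(radii, 2))
```

Transmitting one mean and covariance per occupied voxel, rather than the raw points, is what makes the incremental wireless transfer to the control station cheap.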

