The Vicarios Virtual Reality Interface for Remote Robotic Teleoperation

2021 ◽  
Vol 101 (4) ◽  
Author(s):  
Abdeldjallil Naceri ◽  
Dario Mazzanti ◽  
Joao Bimbo ◽  
Yonas T. Tefera ◽  
Domenico Prattichizzo ◽  
...  

Intuitive interaction is the cornerstone of accurate and effective performance in remote robotic teleoperation. It requires high fidelity in control actions as well as in perception (vision, haptics, and other sensory feedback) of the remote environment. This paper presents Vicarios, a Virtual Reality (VR) based interface that aims to facilitate intuitive real-time remote teleoperation while exploiting the inherent benefits of VR: immersive visualization, freedom of viewpoint selection, and fluid interaction through natural action interfaces. Vicarios aims to enhance situational awareness using the concept of viewpoint-independent mapping between the operator and the remote scene, thereby giving the operator better control in the perception-action loop. The article describes the overall Vicarios system, including its software, hardware, and communication framework. A comparative user study quantifies the impact of the interface and its features, including immersion and instantaneous viewpoint changes, termed "teleporting", on users' performance. The results show that users' performance with the VR-based interface was similar to or better than the baseline condition of traditional stereo video feedback, confirming the realism of the Vicarios interface. Furthermore, including the teleporting feature in VR significantly improved participants' performance and their appreciation of the interface, as was evident in the post-questionnaire results. Vicarios capitalizes on the intuitiveness and flexibility of VR to improve accuracy in remote teleoperation.
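The viewpoint-independent mapping described above can be illustrated as re-expressing the operator's motion command in the fixed frame of the remote scene, so that an instantaneous viewpoint change (a "teleport") does not alter the commanded world-space direction. The 2-D rotation below is a minimal sketch of this idea under assumed conventions, not the authors' implementation:

```python
import math

def to_scene_frame(dx, dy, viewpoint_yaw_deg):
    """Rotate a motion command (dx, dy) from the operator's current
    viewpoint frame into the fixed remote-scene frame (2-D sketch).
    Because the command is re-expressed in the scene frame, changing
    viewpoint_yaw_deg (a teleport) keeps the commanded direction
    consistent with what the operator currently sees."""
    a = math.radians(viewpoint_yaw_deg)
    return (dx * math.cos(a) - dy * math.sin(a),
            dx * math.sin(a) + dy * math.cos(a))
```

A full system would use 3-D rotations (e.g., quaternions) and would map haptic feedback back into the operator's frame in the same viewpoint-independent way.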

Electronics ◽  
2020 ◽  
Vol 9 (11) ◽  
pp. 1814
Author(s):  
Yuzhao Liu ◽  
Yuhan Liu ◽  
Shihui Xu ◽  
Kelvin Cheng ◽  
Soh Masuko ◽  
...  

Despite the convenience offered by e-commerce, online apparel shopping presents various product-related risks, as consumers can neither physically see nor try on products. Augmented reality (AR) and virtual reality (VR) technologies have been used to improve the online shopping experience. We therefore propose an AR- and VR-based try-on system that offers users a novel shopping experience in which they can view garments fitted onto their personalized virtual body. Recorded personalized motions allow users to dynamically interact with their dressed virtual body in AR. We conducted two user studies to compare the different roles of VR- and AR-based try-ons and to validate the impact of personalized motions on the virtual try-on experience. In the first user study, a mobile application with AR- and VR-based try-on is compared to a traditional e-commerce interface. In the second user study, personalized avatars with pre-defined and personalized motions are compared to a personalized avatar without motion in AR-based try-on. The results show that AR- and VR-based try-ons can positively influence the shopping experience compared with the traditional e-commerce interface. Overall, AR-based try-on provides better and more realistic garment visualization than VR-based try-on. In addition, we found that personalized motions do not directly affect the user's shopping experience.


2021 ◽  
Author(s):  
Byeol Kim ◽  
Phong Nguyen ◽  
Yue-Hin Loke ◽  
Vincent Cleveland ◽  
Paige Mass ◽  
...  

BACKGROUND Patients with single ventricle heart defects receive three staged surgeries culminating in the Fontan surgery. During the Fontan surgery, a vascular graft is sutured between the inferior vena cava and the pulmonary artery to divert deoxygenated blood flow to the lungs via passive flow. Customizing the graft configuration can maximize the long-term benefits of Fontan surgery. However, planning patient-specific surgery poses several challenges, including enabling physicians to customize grafts and to evaluate their hemodynamic performance. OBJECTIVE The aim of this study was to develop virtual reality (VR) Fontan graft modeling and evaluation software for physicians. A user study was performed to achieve three additional goals: 1) evaluate the software when used by medical doctors and engineers, 2) identify whether doctors have a baseline intuition about the hemodynamic performance of Fontan grafts in a VR setting, and 3) explore the impact of viewing hemodynamic simulation results in numerical and graphical formats. METHODS A total of 5 medical professionals, including 4 physicians (1 fourth-year resident, 1 third-year cardiac fellow, 1 pediatric intensivist, and 1 pediatric cardiac surgeon) and 1 biomedical engineer, voluntarily participated in the study. The study was pre-scripted to minimize variability in the interactions between the experimenter and the participants. Unless a participant was already familiar with the Fontan surgery, a quick information session was provided at the start. All participants were then trained to use the VR gear and our software, CorFix. Each participant designed one bifurcated and one tube-shaped Fontan graft for a single patient. A hemodynamic performance evaluation followed, allowing the participants to further modify their tube-shaped design. The design time and hemodynamic performance of each graft design were recorded. At the end of the study, all participants completed surveys evaluating the usability and learnability of the software and rating the intensity of VR sickness. RESULTS The average times for creating one bifurcated and one tube-shaped graft after a single 10-minute training session were 13.40 and 5.49 minutes, respectively. Three out of 5 bifurcated and 1 out of 5 tube-shaped graft designs were within the benchmark range of hepatic flow distribution. Reviewing hemodynamic performance results and modifying the tube-shaped design took an average of 2.92 minutes. Participants who modified their tube-shaped graft designs improved the non-physiologic wall shear stress percentage by 7.02%. All tube-shaped graft designs improved wall shear stress compared to the patient's native surgical case. None of the designs met the benchmark indexed power loss. CONCLUSIONS VR graft design software can quickly be taught to physicians without any engineering background or VR experience. Improving CorFix could further help users customize and optimize grafts for patients. With graphical visualization, physicians were able to improve the wall shear stress of a tube-shaped graft, lowering the chance of thrombosis. Bifurcated graft designs showed potential strength in a better flow split to the lungs, reducing the risk of pulmonary arteriovenous malformations.


Author(s):  
Robin Horst ◽  
Ramtin Naraghi-Taghi-Off ◽  
Linda Rau ◽  
Ralf Dörner

Every Virtual Reality (VR) experience has to end at some point. While concepts already exist for designing transitions that bring users into a virtual world, their return to the physical world should be considered as well, since it is part of the overall VR experience. We call the latter outro-transitions. In contrast to offboarding of VR experiences, which takes place after taking off VR hardware (e.g., HMDs), outro-transitions are still part of the immersive experience. Such transitions occur more frequently when VR is experienced periodically and only for short times. One example where transition techniques are necessary is an auditorium where the audience has individual VR headsets available, for example in a presentation using PowerPoint slides together with brief VR experiences sprinkled between the slides. The audience must put on and take off HMDs every time they switch from common presentation media to VR and back. In such a one-to-many VR scenario, it is challenging for presenters to manage multiple people coming back from the virtual to the physical world at once. Direct communication may be constrained while VR users are wearing an HMD. Presenters need a tool to signal VR users to end the VR session and switch back to the slide presentation. Virtual visual cues can help presenters or other external entities (e.g., automated/scripted events) request that VR users end a VR session. Such transitions become part of the audience's overall experience and thus must be considered. This paper explores visual cues as outro-transitions from a virtual world back to the physical world and their utility in enabling presenters to request that VR users end a VR session. We propose and investigate eight transition techniques, focusing on their usage in short consecutive VR experiences and including both established and novel techniques.
The transition techniques are evaluated in a user study to draw conclusions on the effects of outro-transitions on participants' overall experience and presence. We also consider how long an outro-transition may take and how comfortable participants perceived the proposed techniques to be. The study shows that participants preferred non-interactive outro-transitions over interactive ones, except for a transition that allowed VR users to communicate with presenters. Furthermore, we explore the presenter-VR user relationship within a presentation scenario that uses short VR experiences. The study indicates that involving presenters who can end a VR session was not only acceptable but preferred by our participants.


Sensors ◽  
2021 ◽  
Vol 21 (14) ◽  
pp. 4663
Author(s):  
Janaina Cavalcanti ◽  
Victor Valls ◽  
Manuel Contero ◽  
David Fonseca

An effective warning attracts attention, elicits knowledge, and enables compliance behavior. Game mechanics, which are directly linked to human desires, stand out as training, evaluation, and improvement tools. Immersive virtual reality (VR) facilitates training without risk to participants, makes it possible to evaluate the impact of an incorrect action or decision, and creates a smart training environment. The present study analyzes the user experience in a gamified virtual risk environment using the HTC Vive head-mounted display. The game was developed in the Unreal game engine and consisted of a walk-through maze containing evident dangers and different signaling variables, while user action data were recorded. To determine which aspects provide better interaction, experience, perception, and memory, three warning configurations (dynamic, static, and smart) and two levels of danger (low and high) were presented. To properly assess the impact of the experience, we conducted a survey about personality and knowledge before and after the game. We complemented this with a qualitative approach, using bipolar laddering questions whose answers were compared with the data recorded during the game. The findings indicate that when users are engaged in VR, they tend to test the consequences of their actions rather than maintain safety. The results also reveal that textual signal variables are not accessed when users face the stress factor of time. Progress is needed in implementing new technologies for warnings and advance notifications to improve the evaluation of human behavior in high-risk virtual environments.


Author(s):  
Jassim Happa ◽  
Ioannis Agrafiotis ◽  
Martin Helmhout ◽  
Thomas Bashford-Rogers ◽  
Michael Goldsmith ◽  
...  

In recent years, many tools that make use of visualization have been developed to understand attacks, but few aim to predict real-world consequences. We have developed a visualization tool that aims to improve decision support during attacks. Our tool visualizes the propagation of risks from intrusion detection system (IDS) and antivirus (AV) alert data by relating sensor alerts to Business Process (BP) tasks and machine assets, an important capability gap in many Security Operation Centres (SOCs) today. In this paper we present a user study in which we evaluate the tool's usability and its ability to deliver situational awareness to the analyst. Ten analysts from seven SOCs performed carefully designed tasks related to understanding risks and prioritising recovery decisions. The study was conducted in laboratory conditions with simulated attacks and used a mixed-method approach to collect data from questionnaires, eye tracking, and voice-recorded interviews. The findings suggest that providing analysts with situational awareness relating to business priorities can help them prioritise response strategies. Finally, we provide an in-depth discussion of the wider questions related to user studies in similar conditions, as well as lessons learned from our user study and from developing a visualization tool of this type.
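The core capability described, relating sensor alerts to machine assets and the Business Process tasks that depend on them, can be sketched as two lookup tables and a propagation step. All identifiers below are hypothetical and not taken from the tool itself:

```python
# Map each sensor to the machine asset it monitors, and each asset to
# the Business Process (BP) tasks that depend on it. An alert's risk
# then propagates: sensor -> asset -> affected BP tasks.
ASSET_OF_SENSOR = {"ids-7": "web-server-1", "av-3": "db-server-1"}
TASKS_ON_ASSET = {
    "web-server-1": ["order-intake"],
    "db-server-1": ["order-intake", "invoicing"],
}

def tasks_at_risk(alert_sensor_ids):
    """Return the set of BP tasks put at risk by the given alerts."""
    at_risk = set()
    for sensor in alert_sensor_ids:
        asset = ASSET_OF_SENSOR.get(sensor)
        if asset is not None:
            at_risk.update(TASKS_ON_ASSET.get(asset, []))
    return at_risk
```

In practice the mappings would come from an asset inventory and a BP model, and each task would carry a risk score rather than a binary at-risk flag.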


2021 ◽  
Author(s):  
Valentin Holzwarth ◽  
Johannes Schneider ◽  
Joshua Handali ◽  
Joy Gisler ◽  
Christian Hirt ◽  
...  

Inferring users' perceptions of Virtual Environments (VEs) is essential for Virtual Reality (VR) research. Traditionally, this is achieved by assessing users' affective states before and after exposure to a VE, using standardized self-assessment questionnaires. The main disadvantage of questionnaires is their sequential administration: a user's affective state is measured asynchronously to its generation within the VE. A synchronous measurement of users' affective states would be highly favorable, e.g., in the context of adaptive systems. Drawing on nonverbal behavior research, we argue that behavioral measures could be a powerful approach to assessing users' affective states in VR. In this paper, we contribute methods and measures, evaluated in a user study involving 42 participants, for assessing a user's affective state by measuring head movements during VR exposure. We show that head yaw significantly correlates with presence, mental and physical demand, perceived performance, and system usability. We also exploit the identified relationships for two practical tasks based on head yaw: (1) predicting a user's affective state, and (2) detecting manipulated questionnaire answers, i.e., answers that are possibly non-truthful. We found that affective states can be predicted significantly better than by a naive estimate for mental demand, physical demand, perceived performance, and usability. Further, manipulated or non-truthful answers can also be detected significantly better than by a naive approach. These findings mark an initial step in the development of novel methods to assess user perception of VEs.
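The prediction task can be sketched as extracting simple summary features from a head-yaw trace and feeding one of them into a one-variable linear model, to be compared against the naive mean-score baseline. The feature choices and model form below are illustrative assumptions, not the paper's actual measures:

```python
import statistics

def yaw_features(yaw_deg_samples):
    """Summarize a head-yaw trace (degrees over the VR exposure)
    into simple features; the feature choice is an assumption."""
    return {
        "yaw_range": max(yaw_deg_samples) - min(yaw_deg_samples),
        "yaw_std": statistics.pstdev(yaw_deg_samples),
    }

def naive_estimate(training_scores):
    """Baseline: predict the mean questionnaire score for every user."""
    return statistics.mean(training_scores)

def linear_predict(feature_value, slope, intercept):
    """One-variable linear model from a yaw feature to a score."""
    return slope * feature_value + intercept
```

Detecting a possibly manipulated answer could then amount to flagging questionnaire scores that deviate from the model's prediction by more than some tolerance.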


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Géraldine Fauville ◽  
Anna C. M. Queiroz ◽  
Erika S. Woolsey ◽  
Jonathan W. Kelly ◽  
Jeremy N. Bailenson

Research about vection (illusory self-motion) has investigated a wide range of sensory cues and employed various methods and equipment, including virtual reality (VR). However, there is currently no vection research on the impact of floating in water while experiencing VR. Aquatic immersion presents a new and interesting way to potentially enhance vection by reducing the conflicting sensory information that is usually experienced when standing or sitting on a stable surface. This study compares vection, visually induced motion sickness, and presence among participants experiencing VR while standing on the ground or floating in water. Results show that vection was significantly enhanced for participants in the Water condition, whose judgments of self-displacement were larger than those of participants in the Ground condition. No differences in visually induced motion sickness or presence were found between conditions. We discuss the implications of this new type of VR experience for the fields of VR and vection, and consider future research questions that emerge from our findings.


Sensors ◽  
2021 ◽  
Vol 21 (2) ◽  
pp. 397
Author(s):  
Qimeng Zhang ◽  
Ji-Su Ban ◽  
Mingyu Kim ◽  
Hae Won Byun ◽  
Chang-Hun Kim

We propose a low-asymmetry interface to improve the presence of non-head-mounted-display (non-HMD) users in shared virtual reality (VR) experiences with HMD users. The low-asymmetry interface ensures that HMD and non-HMD users perceive the VR environment in nearly the same way; that is, the point-of-view (PoV) asymmetry and behavior asymmetry between HMD and non-HMD users are reduced. Our system comprises a portable mobile device as a visual display, providing a changing PoV for the non-HMD user, and a walking simulator as an in-place walking detection sensor, enabling the same level of realistic and unrestricted physical-walking-based locomotion for all users. Because this allows non-HMD users to experience the same level of visualization and free movement as HMD users, both can engage as main actors in movement scenarios. Our user study revealed that the low-asymmetry interface enables non-HMD users to feel a presence similar to that of HMD users when performing equivalent locomotion tasks in a virtual environment. Furthermore, our system enables one HMD user and multiple non-HMD users to participate together in a virtual world; our experiments show that non-HMD users' satisfaction increases with the number of non-HMD participants owing to increased presence and enjoyment.
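In-place walking detection of the kind the walking simulator performs can be sketched as counting rising threshold crossings in a vertical-acceleration trace. The threshold value and signal shape are illustrative assumptions, not details of the authors' sensor:

```python
def count_steps(vertical_accel, threshold=1.5):
    """Count steps as rising crossings of a vertical-acceleration
    threshold (gravity-removed, m/s^2). Each time the signal rises
    above the threshold after having been at or below it, one step
    is registered; sustained high values count only once."""
    steps, above = 0, False
    for a in vertical_accel:
        if a > threshold and not above:
            steps += 1
            above = True
        elif a <= threshold:
            above = False
    return steps
```

A production detector would additionally filter the signal and enforce a minimum interval between steps to reject sensor noise.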


2020 ◽  
Vol 4 (4) ◽  
pp. 78
Author(s):  
Andoni Rivera Pinto ◽  
Johan Kildal ◽  
Elena Lazkano

In the context of industrial production, a worker who wants to program a robot using the hand-guidance technique requires the robot to be available for programming and not in operation, which means that production with that robot is stopped during that time. A way around this constraint is to perform the same manual guidance steps on a holographic representation of the robot's digital twin, using augmented reality technologies. However, this approach lacks tangibility: the visual holograms that the user tries to grab offer no physical contact. We present an interface in which some of this tangibility is provided through ultrasound-based mid-air haptic actuation. We report a user study that evaluates the impact of such haptic feedback on a pick-and-place task involving the wrist of a holographic robot arm, and we found the feedback to be beneficial.

