3D Virtual Reality vs. 2D Desktop Registration User Interface Comparison

PLoS ONE ◽  
2021 ◽  
Vol 16 (10) ◽  
pp. e0258103
Author(s):  
Andreas Bueckle ◽  
Kilian Buehling ◽  
Patrick C. Shih ◽  
Katy Börner

Working with organs and extracted tissue blocks is an essential task in many medical surgery and anatomy environments. In order to prepare specimens from human donors for further analysis, wet-bench workers must properly dissect human tissue and collect metadata for downstream analysis, including information about the spatial origin of tissue. The Registration User Interface (RUI) was developed to allow stakeholders in the Human Biomolecular Atlas Program (HuBMAP) to register tissue blocks—i.e., to record the size, position, and orientation of human tissue data with regard to reference organs. The RUI has been used by tissue mapping centers across the HuBMAP consortium to register a total of 45 kidney, spleen, and colon tissue blocks, with planned support for 17 organs in the near future. In this paper, we compare three setups for registering one 3D tissue block object to another 3D reference organ (target) object. The first setup is a 2D Desktop implementation featuring a traditional screen, mouse, and keyboard interface. The remaining setups are both virtual reality (VR) versions of the RUI: VR Tabletop, where users sit at a physical desk which is replicated in virtual space; VR Standup, where users stand upright while performing their tasks. All three setups were implemented using the Unity game engine. We then ran a user study for these three setups involving 42 human subjects completing 14 increasingly difficult and then 30 identical tasks in sequence and reporting position accuracy, rotation accuracy, completion time, and satisfaction. All study materials were made available in support of future study replication, alongside videos documenting our setups. We found that while VR Tabletop and VR Standup users are about three times as fast and about a third more accurate in terms of rotation than 2D Desktop users (for the sequence of 30 identical tasks), there are no significant differences between the three setups for position accuracy when normalized by the height of the virtual kidney across setups. When extrapolating from the 2D Desktop setup with a 113-mm-tall kidney, the absolute performance values for the 2D Desktop version (22.6 seconds per task, 5.88 degrees rotation, and 1.32 mm position accuracy after 8.3 tasks in the series of 30 identical tasks) confirm that the 2D Desktop interface is well-suited for allowing users in HuBMAP to register tissue blocks at a speed and accuracy that meets the needs of experts performing tissue dissection. In addition, the 2D Desktop setup is cheaper, easier to learn, and more practical for wet-bench environments than the VR setups.
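
As a concrete illustration of the metrics reported above, the sketch below (not the authors' code; function names and the example coordinates are invented) computes position accuracy in millimeters, the same error normalized by organ height, and rotation accuracy as the angle between two unit quaternions.

```python
# A minimal sketch, assuming positions in millimeters and rotations as
# unit quaternions (x, y, z, w); not the RUI's actual implementation.
import numpy as np

def position_error_mm(placed: np.ndarray, target: np.ndarray) -> float:
    """Euclidean distance between block centers, both in millimeters."""
    return float(np.linalg.norm(placed - target))

def normalized_position_error(placed, target, organ_height_mm: float) -> float:
    """Position error as a fraction of the reference organ's height,
    which makes scores comparable across setups with different scales."""
    return position_error_mm(placed, target) / organ_height_mm

def rotation_error_deg(q1: np.ndarray, q2: np.ndarray) -> float:
    """Smallest angle (degrees) rotating unit quaternion q1 onto q2."""
    dot = abs(float(np.dot(q1, q2)))          # abs() handles the double cover
    return float(np.degrees(2.0 * np.arccos(min(1.0, dot))))

# Example with the paper's 113-mm-tall kidney (coordinates invented):
placed = np.array([10.0, 52.3, -4.1])
target = np.array([10.9, 51.8, -4.0])
print(position_error_mm(placed, target))                 # ~1.03 mm
print(normalized_position_error(placed, target, 113.0))  # ~0.009
q_identity = np.array([0.0, 0.0, 0.0, 1.0])
q_small = np.array([0.0, 0.0, np.sin(np.radians(2.5)), np.cos(np.radians(2.5))])
print(rotation_error_deg(q_identity, q_small))           # ~5 degrees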

2019 ◽  
Vol 9 (2) ◽  
Author(s):  
Muhammad Nur Affendy Nor'a ◽  
Ajune Wanis Ismail

Applications that adopt a collaborative system allow multiple users to interact with one another in the same virtual space, whether in Virtual Reality (VR) or Augmented Reality (AR). This paper aims to integrate VR and AR spaces in a collaborative user interface that enables users to cooperate across different interface types within a single shared space. A gesture interaction technique is proposed as the interaction tool in both virtual spaces, as it provides more natural interaction with virtual objects. The integration of the VR and AR spaces provides cross-discipline shared data interchange through the network protocol of a client-server architecture.
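
The abstract describes a client-server protocol for keeping one shared space synchronized across a VR and an AR interface, but the paper's actual protocol is not given here. The Python sketch below only illustrates the general pattern: a relay server forwarding JSON pose updates between connected clients. The message schema and all names are hypothetical.

```python
# A minimal sketch of a pose-relay server; schema and names hypothetical.
import json
import socket
import threading

HOST, PORT = "0.0.0.0", 9000

def make_pose_message(object_id, position, rotation, sender):
    """Serialize one shared-object pose update as JSON."""
    return json.dumps({
        "object": object_id,
        "position": position,    # [x, y, z]
        "rotation": rotation,    # quaternion [x, y, z, w]
        "sender": sender,        # "vr" or "ar"
    }).encode("utf-8")

def relay_server():
    """Forward every pose update to all other connected clients."""
    clients = []
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind((HOST, PORT))
    srv.listen()

    def handle(conn):
        clients.append(conn)
        while data := conn.recv(4096):
            for c in clients:
                if c is not conn:
                    c.sendall(data)   # push update to the other interface(s)

    while True:
        conn, _ = srv.accept()
        threading.Thread(target=handle, args=(conn,), daemon=True).start()

# Client side: send a pose update whenever the local user moves the object,
# e.g. sock.sendall(make_pose_message("shared-cube", [0, 1, 2], [0, 0, 0, 1], "vr"))
```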


2020 ◽  
Vol 36 (10-12) ◽  
pp. 2117-2128
Author(s):  
Ryo Shimamura ◽  
Qi Feng ◽  
Yuki Koyama ◽  
Takayuki Nakatsuka ◽  
Satoru Fukayama ◽  
...  

We present a novel concept, audio-visual object removal in 360-degree videos, in which a target object in a 360-degree video is removed in both the visual and auditory domains synchronously. Previous methods have focused solely on the visual aspect of object removal using video inpainting techniques, resulting in videos with unreasonable remaining sounds corresponding to the removed objects. We propose a solution that incorporates directional information acquired during the video inpainting process into the audio removal process. More specifically, our method identifies the sound corresponding to the visually tracked target object and then synthesizes a three-dimensional sound field by subtracting the identified sound from the input 360-degree video. We conducted a user study showing that our multi-modal object removal supporting both visual and auditory domains could significantly improve the virtual reality experience, and that our method could generate sufficiently synchronous, natural, and satisfactory 360-degree videos.
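
To make the subtraction step concrete: a common representation for a 360-degree sound field is first-order ambisonics (B-format). The sketch below is a simplified stand-in for the method described above; it assumes the mono source signal has already been identified, then re-encodes it at the recovered direction and subtracts it from the field.

```python
# A simplified sketch of direction-aware source subtraction in first-order
# ambisonics; the identification of the source signal itself is assumed.
import numpy as np

def encode_foa(source: np.ndarray, azimuth: float, elevation: float) -> np.ndarray:
    """Encode a mono signal into first-order ambisonics (W, X, Y, Z)."""
    w = source / np.sqrt(2.0)                           # omnidirectional part
    x = source * np.cos(azimuth) * np.cos(elevation)
    y = source * np.sin(azimuth) * np.cos(elevation)
    z = source * np.sin(elevation)
    return np.stack([w, x, y, z])

def remove_source(field_wxyz: np.ndarray, source: np.ndarray,
                  azimuth: float, elevation: float) -> np.ndarray:
    """Subtract the identified source, placed at the visually tracked
    direction, from the B-format sound field of the 360-degree video."""
    return field_wxyz - encode_foa(source, azimuth, elevation)
```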


2019 ◽  
Vol 9 (22) ◽  
pp. 4861 ◽  
Author(s):  
Hind Kharoub ◽  
Mohammed Lataifeh ◽  
Naveed Ahmed

This work presents a novel design of a new 3D user interface for an immersive virtual reality desktop and a new empirical analysis of the proposed interface using three interaction modes. The proposed dual-layer 3D user interface allows for user interactions with multiple screens portrayed within a curved 360-degree effective field of view available to the user. A downward gaze allows the user to raise the interaction layer that facilitates several traditional desktop tasks. The 3D user interface is analyzed using three different interaction modes: point-and-click, controller-based direct manipulation, and a gesture-based user interface. A comprehensive user study is performed within a mixed-methods approach for the usability and user experience analysis of all three interaction modes. Each interaction mode is quantitatively and qualitatively analyzed for simple and compound tasks in both standing and seated positions. The mixed approach crafted for this study allows us to collect, evaluate, and validate the viability of the new 3D user interface. The results are used to draw conclusions about the suitability of the interaction modes for a variety of tasks in an immersive virtual reality 3D desktop environment.
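
The downward-gaze mechanism lends itself to a small sketch. The following is a minimal illustration, not the authors' implementation; the pitch thresholds and the hysteresis between raising and lowering the layer are assumptions.

```python
# A minimal sketch of a gaze-pitch trigger for the interaction layer;
# thresholds hypothetical, y-up coordinate convention assumed.
import math

PITCH_DOWN_DEG = -30.0   # look further down than this to raise the layer
PITCH_UP_DEG = -15.0     # hysteresis: look back above this to lower it

class LayerController:
    def __init__(self):
        self.layer_raised = False

    def update(self, head_forward):
        """head_forward: unit vector of the gaze direction (x, y, z)."""
        pitch = math.degrees(math.asin(head_forward[1]))
        if not self.layer_raised and pitch < PITCH_DOWN_DEG:
            self.layer_raised = True    # gaze dropped: show the desktop layer
        elif self.layer_raised and pitch > PITCH_UP_DEG:
            self.layer_raised = False   # gaze returned: hide the layer
        return self.layer_raised
```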


Author(s):  
Robin Horst ◽  
Ramtin Naraghi-Taghi-Off ◽  
Linda Rau ◽  
Ralf Dörner

Every Virtual Reality (VR) experience has to end at some point. While concepts already exist for designing transitions that let users enter a virtual world, their return to the physical world should be considered as well, as it is part of the overall VR experience. We call the latter outro-transitions. In contrast to offboarding of VR experiences, which takes place after taking off VR hardware (e.g., HMDs), outro-transitions are still part of the immersive experience. Such transitions occur more frequently when VR is experienced periodically and only for short times. One example where transition techniques are necessary is an auditorium where the audience has individual VR headsets available, for example, in a presentation using PowerPoint slides together with brief VR experiences sprinkled between the slides. The audience must put on and take off HMDs frequently every time they switch from common presentation media to VR and back. In such a one-to-many VR scenario, it is challenging for presenters to manage the process of multiple people coming back from the virtual to the physical world at once. Direct communication may be constrained while VR users are wearing an HMD. Presenters need a tool to signal VR users to stop the VR session and switch back to the slide presentation. Virtual visual cues can help presenters or other external entities (e.g., automated/scripted events) request that VR users end a VR session. Such transitions become part of the overall experience of the audience and thus must be considered. This paper explores visual cues as outro-transitions from a virtual world back to the physical world and their utility in enabling presenters to request that VR users end a VR session. We propose and investigate eight transition techniques. We focus on their usage in short consecutive VR experiences and include both established and novel techniques. The transition techniques are evaluated within a user study to draw conclusions on the effects of outro-transitions on the overall experience and presence of participants. We also take into account how long an outro-transition may take and how comfortable our participants found the proposed techniques. The study points out that participants preferred non-interactive outro-transitions over interactive ones, except for a transition that allowed VR users to communicate with presenters. Furthermore, we explore the presenter-VR user relation within a presentation scenario that uses short VR experiences. The study indicates that involving presenters who can stop a VR session was not only unobtrusive but was in fact preferred by our participants.
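
As one hedged illustration of how an externally requested outro-transition might be driven (a generic fade-out, not one of the paper's eight techniques), the sketch below dims the virtual scene over a fixed duration once a presenter's end-session request arrives.

```python
# A minimal sketch of a presenter-triggered fade-out; duration and callback
# names hypothetical, frame pacing approximated with sleep().
import time

FADE_SECONDS = 3.0

def run_outro(set_scene_opacity, on_finished):
    """Fade scene opacity 1 -> 0, then hand control back to the presenter."""
    start = time.monotonic()
    while (t := time.monotonic() - start) < FADE_SECONDS:
        set_scene_opacity(1.0 - t / FADE_SECONDS)
        time.sleep(1.0 / 60.0)     # roughly one update per frame
    set_scene_opacity(0.0)
    on_finished()                  # e.g., display "please remove your headset"
```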


2021 ◽  
Vol 11 (7) ◽  
pp. 3090
Author(s):  
Sangwook Yoo ◽  
Cheongho Lee ◽  
Seongah Chin

To experience a real soap bubble show, materials and tools are required, as are skilled performers who produce the show. In a virtual space, however, where spatial and temporal constraints do not exist, bubble art can be performed without real materials and tools while still giving a sense of immersion. The realistic expression of soap bubbles is therefore an interesting topic for virtual reality (VR). However, the current rendering of VR soap bubbles does not satisfy the high expectations of users. Therefore, in this study, we propose a physically based approach that reproduces the shape of a bubble by calculating measured parameters required for bubble modeling together with the physical motion of bubbles. In addition, we applied changes in the surface flow of soap bubbles, measured in practice, to the VR rendering. To improve users' VR experience, we present the bubble show in a VR HMD (head-mounted display) environment.
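
The characteristic colors of a soap bubble come from thin-film interference, presumably among the physical effects such a rendering must capture; the sketch below shows the standard two-beam approximation, with the film thickness and wavelengths as example inputs rather than the paper's measured values.

```python
# A simplified sketch of two-beam thin-film interference for a soap film;
# the measured parameters (thickness, surface flow) would drive d over time.
import numpy as np

def thin_film_reflectance(wavelength_nm, thickness_nm, n_film=1.33,
                          cos_theta_t=1.0):
    """Relative reflected intensity for a soap film.

    The ray reflected at the first surface picks up a half-wave phase shift,
    so the two reflections interfere with total phase
    delta_total = 4*pi*n*d*cos(theta_t)/lambda + pi,
    giving an intensity proportional to sin^2(delta/2).
    """
    delta = 4.0 * np.pi * n_film * thickness_nm * cos_theta_t / wavelength_nm
    return np.sin(delta / 2.0) ** 2    # 0 = destructive, 1 = constructive

# Approximate RGB tint of a 400-nm-thick film viewed head-on:
rgb = [thin_film_reflectance(lam, 400.0) for lam in (650.0, 510.0, 475.0)]
```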


2021 ◽  
Author(s):  
Valentin Holzwarth ◽  
Johannes Schneider ◽  
Joshua Handali ◽  
Joy Gisler ◽  
Christian Hirt ◽  
...  

Inferring users' perceptions of Virtual Environments (VEs) is essential for Virtual Reality (VR) research. Traditionally, this is achieved by assessing users' affective states before and after being exposed to a VE, based on standardized self-assessment questionnaires. The main disadvantage of questionnaires is their sequential administration: a user's affective state is measured asynchronously to its generation within the VE. A synchronous measurement of users' affective states would be highly favorable, e.g., in the context of adaptive systems. Drawing from nonverbal behavior research, we argue that behavioral measures could be a powerful approach to assessing users' affective states in VR. In this paper, we contribute methods and measures, evaluated in a user study involving 42 participants, that assess a user's affective state by measuring head movements during VR exposure. We show that head yaw significantly correlates with presence, mental and physical demand, perceived performance, and system usability. We also exploit the identified relationships for two practical tasks based on head yaw: (1) predicting a user's affective state, and (2) detecting manipulated questionnaire answers, i.e., answers that are possibly non-truthful. We found that affective states can be predicted significantly better than by a naive estimate for mental demand, physical demand, perceived performance, and usability. Further, manipulated or non-truthful answers can also be detected significantly better than by a naive approach. These findings mark an initial step in the development of novel methods to assess user perception of VEs.
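
The head-yaw analysis suggests a simple pipeline: summarize each participant's yaw trace into a movement feature and regress questionnaire scores on it. The sketch below uses synthetic data and a plain linear regression as stand-ins (the paper's actual features and models may differ) and compares the model's error against the naive mean estimate mentioned above.

```python
# A minimal sketch with synthetic data; feature and model choices assumed.
import numpy as np
from sklearn.linear_model import LinearRegression

def total_yaw_rotation(yaw_deg: np.ndarray) -> float:
    """Accumulated absolute yaw rotation (degrees) over one session."""
    unwrapped = np.unwrap(np.radians(yaw_deg))
    return float(np.degrees(np.sum(np.abs(np.diff(unwrapped)))))

# Synthetic stand-ins for 42 participants' yaw traces and questionnaire scores:
rng = np.random.default_rng(0)
yaw_traces = [rng.normal(0, 20, size=600).cumsum() % 360 for _ in range(42)]
scores = rng.uniform(1, 7, size=42)        # e.g., mental demand ratings

features = np.array([[total_yaw_rotation(t)] for t in yaw_traces])
model = LinearRegression().fit(features, scores)
baseline_error = np.mean(np.abs(scores - scores.mean()))   # naive estimate
model_error = np.mean(np.abs(scores - model.predict(features)))
```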


2021 ◽  
pp. 104687812110082
Author(s):  
Omamah Almousa ◽  
Ruby Zhang ◽  
Meghan Dimma ◽  
Jieming Yao ◽  
Arden Allen ◽  
...  

Objective. Although simulation-based medical education is fundamental for the acquisition and maintenance of knowledge and skills, simulators are often located in urban centers and are not easily accessible due to cost, time, and geographic constraints. Our objective is to develop a proof-of-concept innovative prototype using virtual reality (VR) technology for clinical telesimulation training to facilitate access and global academic collaborations. Methodology. Our project is a VR-based system using Oculus Quest as a standalone, portable, and wireless head-mounted device, along with a digital platform to deliver immersive clinical simulation sessions. The Instructor's Control Panel (ICP) application is designed to create VR clinical scenarios remotely, live-stream sessions, communicate with learners, and control VR clinical training in real time. Results. The Virtual Clinical Simulation (VCS) system offers realistic clinical training in a virtual space that mimics hospital environments. The VR clinical scenarios are customizable to suit training needs, with high-fidelity, lifelike characters designed to deliver an interactive and immersive learning experience. The real-time connection and live-stream between the ICP and the VR training system enables interactive academic learning and facilitates access to telesimulation training. Conclusions. The VCS system provides innovative solutions to major challenges associated with conventional simulation training, such as access, cost, personnel, and curriculum. VCS facilitates the delivery of academic and interactive clinical training that is similar to real-life settings. Tele-clinical simulation systems like VCS facilitate necessary academic-community partnerships, as well as a global education network between resource-rich and low-income countries.
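
The ICP-to-headset connection described above implies some real-time control protocol; the paper does not publish one, so the following sketch only illustrates the idea with a hypothetical JSON event schema an instructor's panel might send to learners' headsets.

```python
# A minimal sketch of an instructor-to-headset control event; the schema,
# scenario name, and event types are all hypothetical.
import json
import time

def make_control_event(scenario, event, params):
    """Serialize one real-time control event as JSON."""
    return json.dumps({
        "timestamp": time.time(),
        "scenario": scenario,      # e.g., "sepsis-ward"
        "event": event,            # e.g., "start", "pause", "set_vitals"
        "params": params,
    })

# Instructor drops the virtual patient's blood pressure mid-session:
msg = make_control_event("sepsis-ward", "set_vitals",
                         {"bp_systolic": 82, "heart_rate": 128})
# ...sent over the live connection to every learner's headset.
```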


2021 ◽  
Vol 5 (4) ◽  
pp. 15
Author(s):  
Jingyi Li ◽  
Ceenu George ◽  
Andrea Ngao ◽  
Kai Holländer ◽  
Stefan Mayer ◽  
...  

Ubiquitous technology lets us work in flexible and decentralised ways. Passengers can already use travel time to be productive, and we envision even better performance and experience in vehicles with emerging technologies, such as virtual reality (VR) headsets. However, the confined physical space constrains interactions while the virtual space may be conceptually borderless. We therefore conducted a VR study (N = 33) to examine the influence of physical restraints and virtual working environments on performance, presence, and the feeling of safety. Our findings show that virtual borders make passengers touch the car interior less, while performance and presence are comparable across conditions. Although passengers prefer a secluded and unlimited virtual environment (nature), they are more productive in a shared and limited one (office). We further discuss choices for virtual borders and environments, social experience, and safety responsiveness. Our work highlights opportunities and challenges for future research and design of rear-seat VR interaction.
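
The finding that virtual borders reduce contact with the car interior suggests a proximity-driven visualization. The sketch below is a hypothetical illustration, not the study's implementation: it fades a border in as a tracked hand approaches the nearest measured interior point, with both distance thresholds assumed.

```python
# A minimal sketch of a proximity-faded virtual border; thresholds assumed,
# interior_points standing in for a scanned car-interior point cloud.
import numpy as np

WARN_DISTANCE_M = 0.25     # start fading the border in
TOUCH_DISTANCE_M = 0.05    # treat as contact with the interior

def border_opacity(hand_pos: np.ndarray, interior_points: np.ndarray) -> float:
    """0 = border invisible, 1 = fully visible, from the nearest point."""
    d = float(np.min(np.linalg.norm(interior_points - hand_pos, axis=1)))
    if d >= WARN_DISTANCE_M:
        return 0.0
    return min(1.0, (WARN_DISTANCE_M - d) / (WARN_DISTANCE_M - TOUCH_DISTANCE_M))
```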


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Mario Zanfardino ◽  
Rossana Castaldo ◽  
Katia Pane ◽  
Ornella Affinito ◽  
Marco Aiello ◽  
...  

Analysis of large-scale omics data along with biomedical images has gained huge interest for predicting phenotypic conditions towards personalized medicine. Multiple layers of investigation, such as genomics, transcriptomics, and proteomics, have led to high dimensionality and heterogeneity of data. Multi-omics data integration can make a meaningful contribution to early diagnosis and to an accurate estimate of prognosis and treatment in cancer. Some multi-layer data structures have been developed to integrate multi-omics biological information, but none of these has been developed and evaluated to include radiomic data. We proposed to use MultiAssayExperiment (MAE) as an integrated data structure to combine multi-omics data, facilitating the exploration of heterogeneous data. We improved the usability of MAE by developing a Multi-omics Statistical Approaches (MuSA) tool that uses a Shiny graphical user interface to simplify the management and analysis of radiogenomic datasets. The capabilities of MuSA were shown using public breast cancer datasets from the TCGA-TCIA databases. The MuSA architecture is modular and can be divided into pre-processing and downstream analysis. The pre-processing section allows data filtering and normalization. The downstream analysis section contains modules for data science such as correlation, clustering (e.g., heatmaps), and feature selection methods. The results are shown dynamically in MuSA. The MuSA tool provides an easy-to-use way to create, manage, and analyze radiogenomic data. The application is specifically designed to guide non-programmer researchers through the different computational steps. Integration analysis is implemented in a modular structure, making MuSA easily extensible open-source software.
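
MultiAssayExperiment and MuSA are R/Bioconductor and Shiny software; as a language-neutral illustration of the core idea (several assays kept aligned on shared sample IDs, plus one downstream module such as cross-assay correlation), the Python sketch below uses pandas stand-ins. It is not the MuSA code.

```python
# A minimal sketch of an MAE-like container in pandas; names hypothetical.
import pandas as pd

class MultiAssay:
    """Assays are (features x samples) DataFrames sharing sample IDs,
    e.g. {'rna': ..., 'protein': ..., 'radiomics': ...}."""
    def __init__(self, assays: dict):
        self.assays = assays

    def common_samples(self) -> list:
        ids = [set(a.columns) for a in self.assays.values()]
        return sorted(set.intersection(*ids))

    def subset(self) -> dict:
        """Restrict every assay to the samples present in all assays."""
        ids = self.common_samples()
        return {name: a[ids] for name, a in self.assays.items()}

def cross_assay_correlation(a: pd.DataFrame, b: pd.DataFrame) -> pd.DataFrame:
    """Feature-by-feature Pearson correlation across shared samples
    (assumes feature names are unique across the two assays)."""
    return pd.concat([a.T, b.T], axis=1).corr().loc[a.index, b.index]
```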

