The Shared View Paradigm in Asymmetric Virtual Reality Setups

i-com, 2020, Vol 19 (2), pp. 87-101
Author(s): Robin Horst, Fabio Klonowski, Linda Rau, Ralf Dörner

Abstract Asymmetric Virtual Reality (VR) applications are a substantial subclass of multi-user VR in which not all participants have the same possibilities for interacting with the virtual scene. While one user might be immersed using a VR head-mounted display (HMD), another might experience the VR through a common desktop PC. In an educational scenario, for example, learners can use immersive VR technology to inform themselves at different exhibits within a virtual scene. Educators can use a desktop PC setup to follow and guide learners through the virtual exhibits while still paying attention to safety aspects in the real world (e.g., preventing learners from bumping into a wall). In such scenarios, educators must ensure that learners have explored the entire scene and have been informed about all virtual exhibits in it. Suitable visualization techniques can support educators and facilitate conducting such VR-enhanced lessons. One common technique is to render the learners' view on the 2D screen available to the educators. We refer to this solution as the shared view paradigm. However, this straightforward visualization involves challenges. For example, educators have no control over the scene, and collaboration in the learning scenario can be tedious. In this paper, we differentiate between two classes of visualizations that can help educators in asymmetric VR setups. First, we investigate five techniques that visualize the view direction or field of view of users (view visualizations) within virtual environments. Second, we propose three techniques that can help educators understand which parts of the scene learners have already explored (exploration visualizations). In a user study, we show that our participants preferred a volume-based rendering and a view-in-view overlay solution for view visualizations. Furthermore, we show that our participants tended to use combinations of different view visualizations.
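The abstract does not describe the five view visualizations in detail. As a minimal sketch of the general idea, a user's horizontal field of view can be drawn on a top-down minimap as a triangle computed from position, yaw, and FOV; the function name and parameters below are illustrative, not the paper's implementation:

```python
import math

def frustum_footprint(pos, yaw_deg, fov_deg, reach):
    """Return the 2D triangle (apex plus two far corners) representing a
    user's horizontal field of view, for drawing on a top-down minimap."""
    half = math.radians(fov_deg) / 2.0
    yaw = math.radians(yaw_deg)
    # Far corners lie `reach` units from the apex, at yaw +/- half the FOV.
    left = (pos[0] + reach * math.cos(yaw - half),
            pos[1] + reach * math.sin(yaw - half))
    right = (pos[0] + reach * math.cos(yaw + half),
             pos[1] + reach * math.sin(yaw + half))
    return [pos, left, right]
```

An educator-side minimap could redraw this triangle each frame from the learner's tracked pose to show where the learner is looking.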

2021, Vol 3 (1)
Author(s): Raquel Gil Rodríguez, Florian Bayer, Matteo Toscani, Dar'ya Guarnera, Giuseppe Claudio Guarnera, ...

Abstract Virtual reality (VR) technology offers vision researchers the opportunity to conduct immersive studies in simulated real-world scenes. However, an accurate colour calibration of the VR head-mounted display (HMD), in terms of both luminance and chromaticity, is required to precisely control the presented stimuli. Such a calibration presents significant new challenges, for example due to the large field of view of the HMD, or due to the software implementation used for scene rendering, which might alter the colour appearance of objects. Here, we propose a framework for calibrating an HMD using an imaging colorimeter, the I29 (Radiant Vision Systems, Redmond, WA, USA). We examine two scenarios, with and without rendering software for visualisation. In addition, we present a colour constancy experiment design for VR using a game engine, Unreal Engine 4. The colours of the objects of study are chosen according to the previously defined calibration. Results show high colour constancy performance among participants, in agreement with recent studies performed in real-world scenarios. Our studies show that our methodology allows us to control and measure the colours presented in the HMD, effectively enabling the use of VR technology for colour vision research.
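The calibration pipeline itself is not given in the abstract. One common building block of display luminance calibration is fitting a per-channel gamma to colorimeter measurements; the sketch below assumes the standard power-law model L = k·v^γ (not necessarily the authors' model) and fits γ by least squares in log-log space:

```python
import math

def fit_gamma(drive_levels, luminances):
    """Estimate display gamma from paired (normalized drive level, measured
    luminance) samples, via least squares on log L = gamma * log v + log k."""
    xs = [math.log(v) for v in drive_levels]
    ys = [math.log(L) for L in luminances]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    # Slope of the log-log regression line is the gamma exponent.
    gamma = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return gamma
```

A full HMD calibration would additionally measure chromaticity per primary and account for spatial non-uniformity across the wide field of view, which the imaging colorimeter captures in a single shot.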


Sensors, 2021, Vol 21 (2), pp. 397
Author(s): Qimeng Zhang, Ji-Su Ban, Mingyu Kim, Hae Won Byun, Chang-Hun Kim

We propose a low-asymmetry interface to improve the presence of non-head-mounted-display (non-HMD) users in shared virtual reality (VR) experiences with HMD users. The low-asymmetry interface ensures that the HMD and non-HMD users' perception of the VR environment is almost identical; that is, the point-of-view (PoV) asymmetry and behavior asymmetry between HMD and non-HMD users are reduced. Our system comprises a portable mobile device as a visual display to provide a changing PoV for the non-HMD user, and a walking simulator as an in-place walking detection sensor to enable the same level of realistic and unrestricted physical-walking-based locomotion for all users. Because this allows non-HMD users to experience the same level of visualization and free movement as HMD users, both can engage as main actors in movement scenarios. Our user study revealed that the low-asymmetry interface enables non-HMD users to feel a presence similar to that of HMD users when performing equivalent locomotion tasks in a virtual environment. Furthermore, our system enables one HMD user and multiple non-HMD users to participate together in a virtual world; moreover, our experiments show that non-HMD user satisfaction increases with the number of non-HMD participants owing to increased presence and enjoyment.
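The abstract leaves the walking simulator's detection logic unspecified. In-place walking is commonly detected by counting peaks in a vertical-acceleration signal; the deliberately simple threshold-crossing sketch below is a stand-in for such logic, and the threshold value and signal format are assumptions:

```python
def count_steps(accel, threshold=1.5):
    """Count in-place walking steps as upward crossings of a vertical-
    acceleration threshold (signal in g, one sample per reading)."""
    steps, above = 0, False
    for a in accel:
        if a > threshold and not above:
            steps += 1     # rising edge: a new step begins
            above = True
        elif a <= threshold:
            above = False  # signal dropped back below threshold
    return steps
```

A production detector would typically add smoothing and a minimum inter-step interval to reject sensor noise.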


2019, Vol 39 (6), pp. 0612002
Author(s): Chihao Lu, Haifeng Li, Tao Gao, Liang Xu, Haili Li

2020, Vol 33 (4-5), pp. 479-503
Author(s): Lukas Hejtmanek, Michael Starrett, Emilio Ferrer, Arne D. Ekstrom

Abstract Past studies suggest that learning a spatial environment by navigating on a desktop computer can lead to significant acquisition of spatial knowledge, although typically less than navigating in the real world. Exactly how this might differ when learning in immersive virtual interfaces that offer a rich set of multisensory cues remains to be fully explored. In this study, participants learned a campus building environment by navigating (1) the real-world version, (2) an immersive version involving an omnidirectional treadmill and head-mounted display, or (3) a version navigated on a desktop computer with a mouse and a keyboard. Participants first navigated the building in one of the three interfaces and afterward navigated the real-world building to assess information transfer. To determine how well they learned the spatial layout, we measured path length, visitation errors, and pointing errors. Both virtual conditions resulted in significant learning and transfer to the real world, suggesting their efficacy in mimicking some aspects of real-world navigation. Overall, real-world navigation outperformed both immersive and desktop navigation, with effects particularly pronounced early in learning. This was also suggested in a second experiment involving transfer from the real world to immersive virtual reality (VR). Analysis of effect sizes when going from the virtual conditions to the real world suggested a slight advantage for immersive VR over desktop in terms of transfer, although at the cost of an increased likelihood of dropout. Our findings suggest that virtual navigation results in significant learning regardless of the interface, with immersive VR providing some advantage when transferring to the real world.


2008, Vol 41 (1), pp. 161-181
Author(s): Beatriz Sousa Santos, Paulo Dias, Angela Pimentel, Jan-Willem Baggerman, Carlos Ferreira, ...

2005, Vol 32 (5), pp. 777-785
Author(s): Ebru Cubukcu, Jack L. Nasar

Discrepancies between perceived and actual distance may affect people's spatial behavior. In a previous study, Nasar, using self-reports of behavior, found that segmentation (measured through the number of buildings) along a route affected the choice of parking garage and the path from the parking garage to a destination. We recreated that same environment in a three-dimensional virtual environment and conducted a test to see whether the same factors emerged under these more controlled conditions and whether spatial behavior in the virtual environment accurately reflected behavior in the real environment. The results confirmed similar patterns of response in the virtual and real environments. This supports the use of virtual reality as a tool for predicting behavior in the real world and confirms that increases in segmentation are related to increases in perceived distance.


2011, Vol 2 (2), pp. 1
Author(s): Roy A. Ruddle, David J. Duke

Research by the Visualization & Virtual Reality Research Group (School of Computing, University of Leeds, UK) includes themes that focus on navigation, collaborative interaction, and gigapixel displays. The group also carries out research into visualization techniques and systems, including new systems technologies for visualization and tools for investigating features within large datasets. This article summarizes that research and describes current projects: virtual trails to aid real-world navigation, mobile geophysics, communication breakdown in collaborative VR, cancer diagnosis with a VR microscope, visual analytic interfaces for optimization, and overlays for graph exploration.


2020, Vol 10 (7), pp. 2248
Author(s): Syed Hammad Hussain Shah, Kyungjin Han, Jong Weon Lee

We propose a novel authoring and viewing system for generating multiple experiences from a single 360° video and efficiently transferring these experiences to the user. An immersive video contains much more interesting information within the 360° environment than normal videos, and there can be multiple interesting areas within a 360° frame at the same time. Because of the narrow field of view of virtual reality head-mounted displays, a user can only view a limited area of a 360° video. Hence, our system aims to generate multiple experiences based on interesting information in different regions of a 360° video and to transfer these experiences efficiently to prospective users. The proposed system generates experiences using two approaches: (1) recording the user's experience as the user watches a panoramic video with a virtual reality head-mounted display, and (2) tracking an arbitrary interesting object in a 360° video selected by the user. For tracking an arbitrary object of interest, we developed a pipeline around an existing simple object tracker to adapt it to 360° videos. The tracking algorithm runs in real time on a CPU with high precision. Moreover, to the best of our knowledge, no existing system can generate a variety of different experiences from a single 360° video and enable the viewer to watch one piece of 360° visual content from various interesting perspectives in immersive virtual reality. Furthermore, we provide an adaptive focus assistance technique for efficiently transferring the generated experiences to other users in virtual reality. In this study, a technical evaluation of the system along with a detailed user study was performed to assess the system's applicability. Findings from the evaluation showed that a single piece of 360° multimedia content can generate multiple experiences that can be transferred among users. Moreover, sharing the 360° experiences enabled viewers to watch multiple interesting contents with less effort.
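The abstract does not detail how the planar tracker was adapted to 360° video. One standard ingredient of such a pipeline is converting between equirectangular pixel coordinates and unit view directions, so that a tracked box can be followed consistently across the sphere (including across the left/right frame seam). A sketch under that assumption, with hypothetical function names:

```python
import math

def pixel_to_dir(u, v, width, height):
    """Map an equirectangular pixel (u, v) to a unit view direction."""
    lon = (u / width) * 2.0 * math.pi - math.pi    # longitude: -pi .. pi
    lat = math.pi / 2.0 - (v / height) * math.pi   # latitude: +pi/2 (top) .. -pi/2
    x = math.cos(lat) * math.sin(lon)
    y = math.sin(lat)
    z = math.cos(lat) * math.cos(lon)
    return (x, y, z)

def dir_to_pixel(d, width, height):
    """Inverse mapping: unit view direction back to an equirectangular pixel."""
    x, y, z = d
    lon = math.atan2(x, z)
    lat = math.asin(max(-1.0, min(1.0, y)))
    u = (lon + math.pi) / (2.0 * math.pi) * width
    v = (math.pi / 2.0 - lat) / math.pi * height
    return (u, v)
```

With these mappings, the tracker's output box centre can be stored as a direction on the sphere and re-projected into whatever viewport the viewer is currently looking through.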


Author(s): Hannah M. Solini, Ayush Bhargava, Christopher C. Pagano

It is often questioned whether task performance attained in a virtual environment can be transferred appropriately and accurately to the same task in the real world. With advancements in virtual reality (VR) technology, recent research has focused on individuals' abilities to transfer calibration achieved in a virtual environment to a real-world environment. Little research, however, has examined whether transfer of calibration from a virtual environment to the real world is similar to transfer of calibration from one virtual environment to another. As such, the present study investigated differences in calibration transfer to real-world and virtual environments. In either a real-world or virtual environment, participants completed blind walking estimates before and after experiencing perturbed virtual optic flow via a head-mounted display (HMD). Results showed that individuals calibrated to the perturbed virtual optic flow and that this calibration carried over to both real-world and virtual environments in a similar manner.


Author(s): Simon Riches, Lisa Azevedo, Leanne Bird, Sara Pisani, Lucia Valmaggia

Abstract Purpose Relaxation has significant restorative properties and implications for public health. However, modern, busy lives leave limited time for relaxation. Virtual reality (VR) experiences of pleasant and calming virtual environments, accessed with a head-mounted display (HMD), appear to promote relaxation. This study aimed to provide a systematic review of the feasibility, acceptability, and effectiveness of studies that use VR to promote relaxation in the general population (PROSPERO 195804). Methods Web of Science, PsycINFO, Embase, and MEDLINE were searched up to 29th June 2020. Studies were included in the review if they used HMD technology to present virtual environments that aimed to promote or measure relaxation, or relaxation-related variables. The Effective Public Health Practice Project (EPHPP) quality assessment tool was used to assess the methodological quality of the studies. Results 6403 articles were identified through database searching. Nineteen studies published between 2007 and 2020, with 1278 participants, were included in the review. Of these, thirteen were controlled studies. Studies predominantly used natural audio-visual stimuli to promote relaxation. Findings indicate the feasibility, acceptability, and short-term effectiveness of VR for increasing relaxation and reducing stress. Six studies received an EPHPP rating of 'strong', seven were 'moderate', and six were 'weak'. Conclusions VR may be a useful tool to promote relaxation in the general population, especially during the COVID-19 pandemic, when stress is increasing worldwide. However, methodological limitations, such as few randomised controlled trials and a lack of longer-term evidence, mean that these conclusions should be drawn with caution. More robust studies are needed to support this promising area of VR relaxation.

