Distance Perception in the Oculus Quest and Oculus Quest 2

2022 ◽  
Author(s):  
Jonathan Kelly ◽  
Taylor Doty ◽  
Morgan Ambourn ◽  
Lucia Cherep

Distances in virtual environments (VEs) viewed on a head-mounted display (HMD) are typically underperceived relative to the intended distance. This paper presents an experiment comparing perceived egocentric distance in a real environment with that in a matched VE presented in the Oculus Quest and Oculus Quest 2. Participants made verbal judgments and blind walking judgments to an object on the ground. Both the Quest and Quest 2 produced underperception compared to the real environment. Verbal judgments in the VE were 86% and 79% of real-world judgments in the Quest and Quest 2, respectively. Blind walking judgments were 78% and 79% of real-world judgments in the Quest and Quest 2, respectively. This project shows that significant underperception of distance persists even in modern HMDs.
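The percentages reported above are ratios of mean VE judgments to mean real-world judgments. A minimal sketch of that computation (function name and sample values are illustrative, not the study's data):

```python
def ve_to_real_ratio(ve_judgments, real_judgments):
    """Mean VE distance judgment as a fraction of the mean real-world judgment."""
    mean_ve = sum(ve_judgments) / len(ve_judgments)
    mean_real = sum(real_judgments) / len(real_judgments)
    return mean_ve / mean_real

# Illustrative values only: mean VE judgment 4.1 m vs. mean real 5.0 m -> ratio 0.82
ratio = ve_to_real_ratio([4.3, 3.9, 4.1], [5.0, 5.2, 4.8])
```

A ratio below 1.0 indicates underperception in the VE relative to the real environment, as found for both headsets.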

Perception ◽  
2020 ◽  
Vol 49 (9) ◽  
pp. 940-967
Author(s):  
Ilja T. Feldstein ◽  
Felix M. Kölsch ◽  
Robert Konrad

Virtual reality systems are a popular tool in behavioral sciences. The participants’ behavior is, however, a response to cognitively processed stimuli. Consequently, researchers must ensure that virtually perceived stimuli resemble those present in the real world to ensure the ecological validity of collected findings. Our article provides a literature review relating to distance perception in virtual reality. Furthermore, we present a new study that compares verbal distance estimates within real and virtual environments. The virtual space—a replica of a real outdoor area—was displayed using a state-of-the-art head-mounted display. Investigated distances ranged from 8 to 13 m. Overall, the results show no significant difference between egocentric distance estimates in real and virtual environments. However, a more in-depth analysis suggests that the order in which participants were exposed to the two environments may affect the outcome. Furthermore, the study suggests that a rising experience of immersion leads to an alignment of the estimated virtual distances with the real ones. The results also show that the discrepancy between estimates of real and virtual distances increases with the incongruity between virtual and actual eye heights, demonstrating the importance of an accurately set virtual eye height.


Author(s):  
Donald R. Lampton ◽  
Daniel P. McDonald ◽  
Michael Singer ◽  
James P. Bliss

This paper describes an experiment to evaluate a procedure for measuring distance perception in immersive VEs. Forty-eight subjects viewed a VE with a Head Mounted Display (HMD), a Binocular Omni-Oriented Monitor (BOOM), or a computer monitor. Subjects estimated the distance to a figure of known height that was initially 40 ft away. As the figure moved forward, subjects indicated when the figure was perceived to be 30, 20, 10, 5, and 2.5 ft away. A separate group of 36 subjects performed the task in a real-world setting roughly comparable to the VE. VE distance estimation was highly variable across subjects. For distance perception involving a moving figure, in the VE conditions most subjects called out before the figure had closed to the specified distances. Distance estimation was least accurate with the monitor. In the real world, most subjects called out after the figure had closed to or passed the specified distances. Ways to improve the procedure are discussed.


2019 ◽  
Vol 9 (9) ◽  
pp. 1797
Author(s):  
Chen ◽  
Lin

Augmented reality (AR) is an emerging technology that allows users to interact with simulated environments, including those emulating scenes in the real world. Most current AR technologies involve the placement of virtual objects within these scenes. However, difficulties in modeling real-world objects greatly limit the scope of the simulation, and thus the depth of the user experience. In this study, we developed a process by which to realize virtual environments that are based entirely on scenes in the real world. In modeling the real world, the proposed scheme divides scenes into discrete objects, which are then replaced with virtual objects. This enables users to interact in and with virtual environments without limitations. An RGB-D camera is used in conjunction with simultaneous localization and mapping (SLAM) to obtain the movement trajectory of the user and derive information related to the real environment. In modeling the environment, graph-based segmentation is used to segment point clouds and perform object segmentation to enable the subsequent replacement of objects with equivalent virtual entities. Superquadrics are used to derive shape parameters and location information from the segmentation results in order to ensure that the scale of the virtual objects matches the original objects in the real world. Only after the objects have been replaced with their virtual counterparts is the real environment converted into a virtual scene. Experiments involving the emulation of real-world locations demonstrated the feasibility of the proposed rendering scheme. A rock-climbing application scenario is finally presented to illustrate the potential use of the proposed system in AR applications.
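Superquadric fitting, as used above to recover shape and scale parameters from segmented point clouds, is built on an implicit inside-outside function: a point lies inside the shape where the function is below 1, on the surface at exactly 1, and outside above 1. A minimal sketch of that function (parameter names follow the common superquadric convention; the paper's exact fitting procedure is not given here):

```python
def superquadric_f(x, y, z, a1, a2, a3, e1, e2):
    """Superquadric inside-outside function.

    a1, a2, a3: semi-axis lengths along x, y, z (object scale).
    e1, e2: shape exponents (e1 = e2 = 1 gives an ellipsoid).
    Returns < 1 inside the shape, 1 on the surface, > 1 outside.
    """
    xa = abs(x / a1) ** (2.0 / e2)
    ya = abs(y / a2) ** (2.0 / e2)
    za = abs(z / a3) ** (2.0 / e1)
    return (xa + ya) ** (e2 / e1) + za
```

Fitting then amounts to choosing the parameters that minimize this function's deviation from 1 over the segmented points. For a unit sphere (all semi-axes 1, e1 = e2 = 1), the surface point (1, 0, 0) evaluates to exactly 1.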


2002 ◽  
Vol 11 (1) ◽  
pp. 19-32 ◽  
Author(s):  
William B. Lathrop ◽  
Mary K. Kaiser

Two experiments examined perceived spatial orientation in a small environment as a function of experiencing that environment under three conditions: real-world, desktop-display (DD), and head-mounted display (HMD). Across the three conditions, participants acquired two targets located on a perimeter surrounding them, and attempted to remember the relative locations of the targets. Subsequently, participants were tested on how accurately and consistently they could point in the remembered direction of a previously seen target. Results showed that participants were significantly more consistent in the real-world and HMD conditions than in the DD condition. Further, it is shown that the advantages observed in the HMD and real-world conditions were not simply due to nonspatial response strategies. These results suggest that the additional idiothetic information afforded in the real-world and HMD conditions is useful for orientation purposes in our presented task domain. Our results are relevant to interface design issues concerning tasks that require spatial search, navigation, and visualization.


2004 ◽  
Vol 4 (2) ◽  
pp. 109-113 ◽  
Author(s):  
Thomas Reuding ◽  
Pamela Meil

The predictive value and the reliability of evaluations made in immersive projection environments are limited when compared to the real world. As in other applications of numerical simulation, the acceptance of such techniques depends not only on the stability of the methods, but also on the quality and credibility of the results obtained. In this paper, we investigate the predictive value of virtual reality and virtual environments when used for engineering assessment tasks. We examine the ergonomics evaluation of a vehicle interior, a complex activity relying heavily on know-how gained from personal experience, and compare performance in a VE with performance in the real world. If one assumes that within complex engineering processes certain types of work will be performed by more or less the same personnel, one can infer that a fairly consistent base of experience-based knowledge exists. Under such premises, and if evaluations are conducted as comparisons within the VE, we believe that the reliability of the assessments is suitable for conceptual design work. Despite a number of unanswered questions at this time, we believe this study leads to a better understanding of what determines the reliability of results obtained in virtual environments, making it useful for optimizing virtual prototyping processes and for better exploiting the potential of VR and VEs in company work processes.


2020 ◽  
Vol 33 (4-5) ◽  
pp. 479-503 ◽  
Author(s):  
Lukas Hejtmanek ◽  
Michael Starrett ◽  
Emilio Ferrer ◽  
Arne D. Ekstrom

Abstract Past studies suggest that learning a spatial environment by navigating on a desktop computer can lead to significant acquisition of spatial knowledge, although typically less than navigating in the real world. Exactly how this might differ when learning in immersive virtual interfaces that offer a rich set of multisensory cues remains to be fully explored. In this study, participants learned a campus building environment by navigating (1) the real-world version, (2) an immersive version involving an omnidirectional treadmill and head-mounted display, or (3) a version navigated on a desktop computer with a mouse and a keyboard. Participants first navigated the building in one of the three different interfaces and, afterward, navigated the real-world building to assess information transfer. To determine how well they learned the spatial layout, we measured path length, visitation errors, and pointing errors. Both virtual conditions resulted in significant learning and transfer to the real world, suggesting their efficacy in mimicking some aspects of real-world navigation. Overall, real-world navigation outperformed both immersive and desktop navigation, effects particularly pronounced early in learning. This was also suggested in a second experiment involving transfer from the real world to immersive virtual reality (VR). Analysis of effect sizes of going from virtual conditions to the real world suggested a slight advantage for immersive VR compared to desktop in terms of transfer, although at the cost of increased likelihood of dropout. Our findings suggest that virtual navigation results in significant learning, regardless of the interface, with immersive VR providing some advantage when transferring to the real world.
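Pointing error, one of the spatial-knowledge measures named above, is conventionally scored as the smallest angular difference between the direction a participant points and the true bearing to the target. A minimal sketch of that scoring (the study's exact procedure is not specified here):

```python
def pointing_error_deg(pointed_deg, true_deg):
    """Smallest absolute angular difference between two bearings, in degrees (0-180).

    Wraps around the circle, so pointing at 350 degrees when the target
    bearing is 10 degrees scores a 20-degree error, not 340.
    """
    diff = abs(pointed_deg - true_deg) % 360.0
    return 360.0 - diff if diff > 180.0 else diff
```

The wrap-around step matters: without it, errors near due north would be wildly overstated, inflating the apparent difference between interfaces.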


2020 ◽  
Vol 88 ◽  
pp. 103145 ◽  
Author(s):  
Susanna Aromaa ◽  
Antti Väätänen ◽  
Iina Aaltonen ◽  
Vladimir Goriachev ◽  
Kaj Helin ◽  
...  

2019 ◽  
Vol 27 (1) ◽  
pp. 88-100 ◽  
Author(s):  
Rafa Rahman ◽  
Matthew E. Wood ◽  
Long Qian ◽  
Carrie L. Price ◽  
Alex A. Johnson ◽  
...  

Purpose. We analyzed the literature to determine (1) the surgically relevant applications for which head-mounted display (HMD) use is reported; (2) the types of HMD most commonly reported; and (3) the surgical specialties in which HMD use is reported. Methods. The PubMed, Embase, Cochrane Library, and Web of Science databases were searched through August 27, 2017, for publications describing HMD use during surgically relevant applications. We identified 120 relevant English-language, non-opinion publications for inclusion. HMD types were categorized as “heads-up” (nontransparent HMD display and direct visualization of the real environment), “see-through” (visualization of the HMD display overlaid on the real environment), or “non–see-through” (visualization of only the nontransparent HMD display). Results. HMDs were used for image guidance and augmented reality (70 publications), data display (63 publications), communication (34 publications), and education/training (18 publications). See-through HMDs were described in 55 publications, heads-up HMDs in 41 publications, and non–see-through HMDs in 27 publications. Google Glass, a see-through HMD, was the most frequently used model, reported in 32 publications. The specialties with the highest frequency of published HMD use were urology (20 publications), neurosurgery (17 publications), and unspecified surgical specialty (20 publications). Conclusion. Image guidance and augmented reality were the most commonly reported applications for which HMDs were used. See-through HMDs were the most commonly reported type used in surgically relevant applications. Urology and neurosurgery were the specialties with greatest published HMD use.


2003 ◽  
Vol 36 (12) ◽  
pp. 105-110
Author(s):  
Omar A.A. Orqueda ◽  
José Figueroa ◽  
Osvaldo E. Agamennoni

2012 ◽  
Vol 9 (4) ◽  
pp. 1-17 ◽  
Author(s):  
Marc Rébillat ◽  
Xavier Boutillon ◽  
Étienne Corteel ◽  
Brian F. G. Katz
