Limited Field of View of Head-Mounted Displays Is Not the Cause of Distance Underestimation in Virtual Environments

2004 ◽  
Vol 13 (5) ◽  
pp. 572-577 ◽  
Author(s):  
Joshua M. Knapp ◽  
Jack M. Loomis

Observers binocularly viewed a target placed in a large open field under two viewing conditions: unrestricted field of view and reduced field of view, as effected using a simulated head-mounted display. Observers indicated the perceived distance of the target, which ranged from 2 to 15 m, using both verbal report and blind walking. For neither response was there a reliable effect of limiting the field of view on the perception of distance. This result indicates that the significant underperception of distance observed in several studies on distance perception in virtual environments is not caused by the limited field of view of the head-mounted display.

2008 ◽  
Vol 17 (1) ◽  
pp. 91-101 ◽  
Author(s):  
Peter Willemsen ◽  
Amy A. Gooch ◽  
William B. Thompson ◽  
Sarah H. Creem-Regehr

Several studies from different research groups investigating perception of absolute, egocentric distances in virtual environments have reported a compression of the intended size of the virtual space. One potential explanation for the compression is that inaccuracies and cue conflicts involving stereo viewing conditions in head mounted displays result in an inaccurate absolute scaling of the virtual world. We manipulate stereo viewing conditions in a head mounted display and show the effects of using both measured and fixed inter-pupillary distances, as well as bi-ocular and monocular viewing of graphics, on absolute distance judgments. Our results indicate that the amount of compression of distance judgments is unaffected by these manipulations. The equivalent performance with stereo, bi-ocular, and monocular viewing suggests that the limitations on the presentation of stereo imagery that are inherent in head mounted displays are likely not the source of distance compression reported in previous virtual environment studies.


2010 ◽  
Vol 19 (6) ◽  
pp. 527-543 ◽  
Author(s):  
Eric D. Ragan

Researchers have proposed that immersion could have advantages for tasks involving abstract mental activities, such as conceptual learning; however, there are few empirical results that support this idea. We hypothesized that higher levels of immersion would benefit such tasks if the mental activity could be mapped to objects or locations in a 3D environment. To investigate this hypothesis, we performed an experiment in which participants memorized procedures in a virtual environment and then attempted to recall those procedures. We aimed to understand the effects of three components of immersion on performance. The results demonstrate that a matched software field of view (SFOV), a higher physical field of view (FOV), and a higher field of regard (FOR) all contributed to more effective memorization. The best performance was achieved with a matched SFOV and either a high FOV or a high FOR, or both. In addition, our experiment demonstrated that memorization in a virtual environment could be transferred to the real world. The results suggest that, for procedure memorization tasks, increasing the level of immersion even to moderate levels, such as those found in head mounted displays (HMDs) and display walls, can improve performance significantly compared to lower levels of immersion. Hypothesizing that the performance improvements provided by higher levels of immersion can be attributed to enhanced spatial cues, we discuss the values and limitations of supplementing conceptual information with spatial information in educational VR.


Perception ◽  
2020 ◽  
Vol 49 (9) ◽  
pp. 940-967
Author(s):  
Ilja T. Feldstein ◽  
Felix M. Kölsch ◽  
Robert Konrad

Virtual reality systems are a popular tool in the behavioral sciences. Participants' behavior is, however, a response to cognitively processed stimuli. Consequently, researchers must ensure that virtually perceived stimuli resemble those present in the real world in order for collected findings to be ecologically valid. Our article provides a literature review on distance perception in virtual reality. Furthermore, we present a new study that compares verbal distance estimates in real and virtual environments. The virtual space, a replica of a real outdoor area, was displayed using a state-of-the-art head-mounted display. Investigated distances ranged from 8 to 13 m. Overall, the results show no significant difference between egocentric distance estimates in real and virtual environments. However, a more in-depth analysis suggests that the order in which participants were exposed to the two environments may affect the outcome. Furthermore, the study suggests that an increasing sense of immersion brings virtual distance estimates into line with real ones. The results also show that the discrepancy between estimates of real and virtual distances grows with the incongruity between virtual and actual eye heights, demonstrating the importance of an accurately set virtual eye height.


2003 ◽  
Vol 12 (3) ◽  
pp. 268-276 ◽  
Author(s):  
Caroline Jay ◽  
Roger Hubbold

The head-mounted display (HMD) is a popular form of virtual display due to its ability to immerse users visually in virtual environments (VEs). Unfortunately, the user's virtual experience is compromised by the narrow field of view (FOV) it affords, which is less than half that of normal human vision. This paper explores a solution to some of the problems caused by the narrow FOV by amplifying the head movement made by the user when wearing an HMD, so that the view direction changes by a greater amount in the virtual world than it does in the real world. Tests conducted on the technique show a significant improvement in performance on a visual search task, and questionnaire data indicate that the altered visual parameters that the user receives may be preferable to those in the baseline condition in which amplification of movement was not implemented. The tests also show that the user cannot interact normally with the VE if corresponding body movements are not amplified to the same degree as head movements, which may limit the implementation's versatility. Although not suitable for every application, the technique shows promise, and alterations to aspects of the implementation could extend its use in the future.
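The amplification described above can be sketched as a gain applied to the tracked head yaw, so the virtual view rotates farther than the head does. The gain value, the clamping limit, and the function name below are illustrative assumptions, not the paper's actual implementation:

```python
def amplified_yaw(real_yaw_deg: float, gain: float = 1.6,
                  limit_deg: float = 180.0) -> float:
    """Map a tracked head yaw (degrees) to a virtual view yaw.

    A gain > 1 lets the user survey the full virtual surroundings
    with a smaller real head rotation, compensating for the HMD's
    narrow field of view. The gain and clamp are hypothetical values.
    """
    virtual = real_yaw_deg * gain
    # Clamp so the virtual view stops at directly behind the user
    # rather than wrapping around.
    return max(-limit_deg, min(limit_deg, virtual))
```

With a gain of 1.6, a 50-degree head turn yields an 80-degree virtual turn; the paper's finding that body movements must be amplified to the same degree suggests the same gain would be applied to torso rotation as well.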


2022 ◽  
Author(s):  
Jonathan Kelly ◽  
Taylor Doty ◽  
Morgan Ambourn ◽  
Lucia Cherep

Distances in virtual environments (VEs) viewed on a head-mounted display (HMD) are typically underperceived relative to the intended distance. This paper presents an experiment comparing perceived egocentric distance in a real environment with that in a matched VE presented in the Oculus Quest and Oculus Quest 2. Participants made verbal judgments and blind walking judgments to an object on the ground. Both the Quest and Quest 2 produced underperception compared to the real environment. Verbal judgments in the VE were 86% and 79% of real world judgments in the Quest and Quest 2, respectively. Blind walking judgments were 78% and 79% of real world judgments in the Quest and Quest 2, respectively. This project shows that significant underperception of distance persists even in modern HMDs.


Author(s):  
Paul B. Kline ◽  
Bob G. Witmer

This study investigated the effects of three system-related cues on estimates of near distances. Subjects (N=28) viewed a simple VE and used a magnitude estimation procedure to generate distance estimates to a wall at the end of a corridor 1 to 12 feet away. Independent variables included type of wall texture (2 levels), resolution of wall texture (3 levels), display FOV (2 levels), and distance (12 levels). Dependent variables included distance estimates, response latency, and relative error of estimates. Subjects consistently underestimated distances judged using a wide FOV and overestimated distances judged with a narrow FOV. Distance estimates were significantly affected by both FOV and texture type. Significant interactions of distance with FOV, texture type, and texture resolution revealed that these variables had greater effects at the closer distances. The most accurate estimates occurred with a wide FOV and a rich, fine-resolution pattern.


1999 ◽  
Vol 8 (6) ◽  
pp. 657-670 ◽  
Author(s):  
David Waller

Two experiments collectively explored four factors that may influence people's judgments of exocentric (interobject) distances in virtual environments. Participants freely navigated in a simple virtual environment and repeatedly made magnitude estimations of exocentric distances. Distances were generally overestimated. An exponential model (Stevens' power law) fit the data, and exponent estimates were generally less than unity. Geometric field of view (GFOV) and the presence of error-corrective feedback were found to have the strongest effect on accuracy. In fact, distance perception was nearly veridical when made with an 80 deg. GFOV and when receiving feedback. Display type (head-mounted versus desktop) and the presence of additional perspective cues were less influential.
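The power-law model mentioned above (Stevens' power law, judged = k · d^n, where an exponent n below unity indicates compressive estimates) can be fit by least squares in log-log space, since log(judged) = log(k) + n · log(d) is linear. The helper and the synthetic data below are illustrative, not the study's:

```python
import math

def fit_power_law(actual, judged):
    """Fit judged = k * actual**n by ordinary least squares on the
    log-transformed data. Returns (k, n); n < 1 means estimates grow
    more slowly than the true distances (compression)."""
    xs = [math.log(a) for a in actual]
    ys = [math.log(j) for j in judged]
    count = len(xs)
    mean_x = sum(xs) / count
    mean_y = sum(ys) / count
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return math.exp(intercept), slope

# Synthetic compressive data: judged = 1.2 * d**0.9 (hypothetical values).
actual = [2.0, 4.0, 6.0, 8.0, 10.0]
judged = [1.2 * d ** 0.9 for d in actual]
k, n = fit_power_law(actual, judged)
```

Because the synthetic data follow the power law exactly, the fit recovers k = 1.2 and n = 0.9; with real magnitude-estimation data, the exponent summarizes how steeply perceived distance scales with physical distance.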


2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Narmin Abouelkhier ◽  
Doaa Shawky ◽  
Mohamed Marzouk

Purpose
Immersive virtual environments (IVEs) aid in perceiving spaces by providing a platform for all stakeholders to make better decisions at early design stages. Nevertheless, they are not widely used in the architecture, engineering and construction (AEC) industry. This paper aims to illustrate the impact of the level of detail (LOD) on participants' perception of architectural design alternatives in IVEs.

Design/methodology/approach
This paper presents an approach to estimating how distance perception varies between real and virtual environments when different design alternatives are implemented. First, a full three-dimensional (3D) model of a replica meeting room was created, and the LOD inside the IVE was gradually modified. Second, a questionnaire was designed to collect responses on how the perceived experience of an IVE compares to that of the physical environment, where the two environments have the same dimensions. Twenty-six participants were recruited to estimate eight distances in the IVEs while wearing a head-mounted display.

Findings
The results show that decreasing the LOD has a negative effect on users' perception: when all of the available detail was added to the IVE, distance perception was significantly enhanced. These findings emphasize the relation between physical detail and distance perception in IVEs and shed light on how to design virtual reality architectural models efficiently.

Originality/value
Experiments were conducted to analyze perception differences associated with factors such as LOD, gender and whether participants wear glasses.


i-com ◽  
2020 ◽  
Vol 19 (2) ◽  
pp. 87-101
Author(s):  
Robin Horst ◽  
Fabio Klonowski ◽  
Linda Rau ◽  
Ralf Dörner

Asymmetric Virtual Reality (VR) applications are a substantial subclass of multi-user VR in which not all participants have the same interaction possibilities with the virtual scene. While one user might be immersed using a VR head-mounted display (HMD), another might experience the VR through a common desktop PC. In an educational scenario, for example, learners can use immersive VR technology to inform themselves at different exhibits within a virtual scene. Educators can use a desktop PC setup to follow and guide learners through virtual exhibits while still being able to pay attention to safety aspects in the real world (e.g., keeping learners from bumping into a wall). In such scenarios, educators must ensure that learners have explored the entire scene and have been informed about all virtual exhibits in it. Appropriate visualization techniques can support educators and facilitate conducting such VR-enhanced lessons. One common technique is to render the learners' view on the 2D screen available to the educators; we refer to this solution as the shared view paradigm. However, this straightforward visualization involves challenges: educators have no control over the scene, and collaboration in the learning scenario can be tedious. In this paper, we differentiate between two classes of visualizations that can help educators in asymmetric VR setups. First, we investigate five techniques that visualize the view direction or field of view of users (view visualizations) within virtual environments. Second, we propose three techniques that can help educators understand which parts of the scene learners have already explored (exploration visualizations). In a user study, we show that our participants preferred a volume-based rendering and a view-in-view overlay solution for view visualizations. Furthermore, we show that our participants tended to use combinations of different view visualizations.

