Amplifying Head Movements with Head-Mounted Displays

2003 ◽  
Vol 12 (3) ◽  
pp. 268-276 ◽  
Author(s):  
Caroline Jay ◽  
Roger Hubbold

The head-mounted display (HMD) is a popular form of virtual display due to its ability to immerse users visually in virtual environments (VEs). Unfortunately, the user's virtual experience is compromised by the narrow field of view (FOV) it affords, which is less than half that of normal human vision. This paper explores a solution to some of the problems caused by the narrow FOV by amplifying the head movement made by the user when wearing an HMD, so that the view direction changes by a greater amount in the virtual world than it does in the real world. Tests conducted on the technique show a significant improvement in performance on a visual search task, and questionnaire data indicate that the altered visual parameters that the user receives may be preferable to those in the baseline condition in which amplification of movement was not implemented. The tests also show that the user cannot interact normally with the VE if corresponding body movements are not amplified to the same degree as head movements, which may limit the implementation's versatility. Although not suitable for every application, the technique shows promise, and alterations to aspects of the implementation could extend its use in the future.
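The core idea of the amplification technique, a rotational gain mapping real head yaw to a larger virtual view yaw, can be sketched as follows. This is a minimal illustration under assumed conventions, not the authors' implementation; the gain value of 1.5 is hypothetical.

```python
def amplified_view_yaw(real_head_yaw_deg: float, gain: float = 1.5) -> float:
    """Map a tracked (real-world) head yaw to an amplified virtual view yaw.

    With gain > 1, the virtual view direction rotates further than the
    user's head, letting a narrow-FOV HMD sweep a wider virtual scene.
    A gain of 1.0 reproduces ordinary one-to-one head tracking
    (the paper's baseline condition).
    """
    return real_head_yaw_deg * gain


# Turning the head 60 degrees in the real world yields a 90-degree
# virtual view rotation at a gain of 1.5.
print(amplified_view_yaw(60.0))  # 90.0
```

The paper's finding that body movements must be amplified by the same factor suggests the gain would have to be applied to the whole tracked pose, not the head orientation alone.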

2008 ◽  
Vol 17 (1) ◽  
pp. 91-101 ◽  
Author(s):  
Peter Willemsen ◽  
Amy A. Gooch ◽  
William B. Thompson ◽  
Sarah H. Creem-Regehr

Several studies from different research groups investigating perception of absolute, egocentric distances in virtual environments have reported a compression of the intended size of the virtual space. One potential explanation for the compression is that inaccuracies and cue conflicts involving stereo viewing conditions in head-mounted displays result in an inaccurate absolute scaling of the virtual world. We manipulate stereo viewing conditions in a head-mounted display and show the effects of using both measured and fixed inter-pupillary distances, as well as bi-ocular and monocular viewing of graphics, on absolute distance judgments. Our results indicate that the amount of compression of distance judgments is unaffected by these manipulations. The equivalent performance with stereo, bi-ocular, and monocular viewing suggests that the limitations on the presentation of stereo imagery that are inherent in head-mounted displays are likely not the source of distance compression reported in previous virtual environment studies.
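The viewing conditions compared above differ only in how the two eyes' camera positions are derived from an inter-pupillary distance (IPD). A schematic sketch of those conditions follows; the function name and the IPD values are hypothetical, not taken from the study's software.

```python
def eye_offsets(condition: str, measured_ipd_m: float = 0.063,
                fixed_ipd_m: float = 0.065):
    """Return (left, right) horizontal camera offsets in metres from the
    head centre for a given viewing condition; None means that eye is
    shown no image.

    'stereo_measured': per-user measured IPD.
    'stereo_fixed':    one fixed IPD for all users.
    'biocular':        identical image to both eyes (zero separation).
    'monocular':       image presented to one eye only.
    """
    if condition == "stereo_measured":
        half = measured_ipd_m / 2
        return (-half, +half)
    if condition == "stereo_fixed":
        half = fixed_ipd_m / 2
        return (-half, +half)
    if condition == "biocular":
        return (0.0, 0.0)
    if condition == "monocular":
        return (None, 0.0)
    raise ValueError(f"unknown condition: {condition}")
```

The study's result, that distance judgments were equally compressed under all four conditions, implies that varying these offsets does not change the perceived scale of the scene.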


1986 ◽  
Vol 55 (4) ◽  
pp. 696-714 ◽  
Author(s):  
J. van der Steen ◽  
I. S. Russell ◽  
G. O. James

We studied the effects of unilateral frontal eye-field (FEF) lesions on eye-head coordination in monkeys that were trained to perform a visual search task. Eye and head movements were recorded with the scleral search coil technique using phase angle detection in a homogeneous electromagnetic field. In the visual search task all three animals showed a neglect for stimuli presented in the field contralateral to the lesion. In two animals the neglect disappeared within 2-3 wk. One animal had a lasting deficit. We found that FEF lesions that are restricted to area 8 cause only temporary deficits in eye and head movements. Up to a week after the lesion the animals had a strong preference to direct gaze and head to the side ipsilateral to the lesion. Animals tracked objects in contralateral space with combined eye and head movements, but failed to do this with the eyes alone. It was found that within a few days after the lesion, eye and head movements in the direction of the target were initiated, but they were inadequate and had long latencies. Within 1 wk latencies had regained preoperative values. Parallel with the recovery on the behavioral task, head movements became more prominent than before the lesion. Four weeks after the lesion, peak velocity of the head movement had increased by a factor of two, whereas the duration showed a twofold decrease compared with head movements before the lesion. No effects were seen on the duration and peak velocity of gaze. After the recovery on the behavioral task had stabilized, a relative neglect in the hemifield contralateral to the lesion could still be demonstrated by simultaneously presenting two stimuli in the left and right visual hemifields. The neglect is not due to a sensory deficit, but to a disorder of programming. The recovery from unilateral neglect after a FEF lesion is the result of a different orienting behavior, in which head movements become more important. 
It is concluded that the FEF plays an important role in the organization and coordination of eye and head movements and that lesions of this area result in subtle but permanent changes in eye-head coordination.


2021 ◽  
Author(s):  
Yuki Harada ◽  
Junji Ohyama

A head-mounted display cannot cover a visual field as wide as that of natural vision (the out-of-view problem). To enhance visual cognition in an immersive environment, previous studies have developed various guidance designs that visualize the location or direction of items in the user's surroundings. However, two issues regarding guidance effects remain unresolved: how do the effects differ with each guided direction, and how much cognitive load does the guidance require? To investigate these issues, we performed a visual search task in an immersive environment and measured the search time for a target and the time spent recognizing a guidance design. In this task, participants searched for a target presented on a head-mounted display and reported its color while using a guidance design. The guidance designs (a moving window, 3D arrow, radiation, spherical gradation, and 3D radar) and target directions were manipulated. Search times showed an interaction between guidance design and guided direction; for example, the 3D arrow and radar shortened search times for targets presented behind users. Recognition times showed that participants needed little time to recognize the details of the moving window and radiation, but longer times for the 3D arrow, spherical gradation, and 3D radar. These results suggest that the moving window and radiation are effective with respect to cognitive load, whereas the 3D arrow and radar are effective for guiding users' attention to items presented out of view.


2021 ◽  
Vol 12 ◽  
Author(s):  
Chloe Callahan-Flintoft ◽  
Christian Barentine ◽  
Jonathan Touryan ◽  
Anthony J. Ries

Using head-mounted displays (HMDs) in conjunction with virtual reality (VR), vision researchers are able to capture more naturalistic vision in an experimentally controlled setting. Namely, eye movements can be accurately tracked as they occur in concert with head movements as subjects navigate virtual environments. A benefit of this approach is that, unlike other mobile eye-tracking (ET) set-ups in unconstrained settings, the experimenter has precise control over the location and timing of stimulus presentation, making it easier to compare findings between HMD studies and those that use monitor displays, which account for the bulk of previous work in eye movement research and the vision sciences more generally. Here, a visual discrimination paradigm is presented as a proof of concept to demonstrate the applicability of collecting eye- and head-tracking data from an HMD in VR for vision research. The current work's contribution is threefold: first, results demonstrating both the strengths and the weaknesses of recording and classifying eye- and head-tracking data in VR; second, a highly flexible graphical user interface (GUI) used to generate the current experiment, offered to lower the software development start-up cost for future researchers transitioning to a VR space; and finally, the dataset analyzed here, comprising behavioral, eye-, and head-tracking data synchronized with environmental variables from a task specifically designed to elicit a variety of eye and head movements, which could be an asset in testing future eye-movement classification algorithms.


2004 ◽  
Vol 13 (5) ◽  
pp. 572-577 ◽  
Author(s):  
Joshua M. Knapp ◽  
Jack M. Loomis

Observers binocularly viewed a target placed in a large open field under two viewing conditions: unrestricted field of view and reduced field of view, as effected using a simulated head-mounted display. Observers indicated the perceived distance of the target, which ranged from 2 to 15 m, using both verbal report and blind walking. For neither response was there a reliable effect of limiting the field of view on the perception of distance. This result indicates that the significant underperception of distance observed in several studies on distance perception in virtual environments is not caused by the limited field of view of the head-mounted display.


Author(s):  
Craig Reynolds ◽  
Louis J. Everett ◽  
Richard A. Volz

Head-mounted displays (HMDs) are common hardware in a virtual reality environment. Sensors are commonly installed on the helmet to provide information about where the operator is looking. If the operator rotates the helmet, the virtual scene should rotate correspondingly. If the point of rotation (POR) of the operator's head differs from the POR used to transform the image, the display may behave in bizarre ways. In "enhanced reality" and similar applications, the virtual scene must also accurately correspond to the real world, which makes it necessary to accurately match the virtual display to the real scene. A method for automatically collecting data to align these scenes is described and demonstrated in this paper. A theoretical basis for the method and experimental data are presented. Results indicate that the underlying assumptions of the theory are reasonable and that the errors in the method are acceptable.
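The POR mismatch described above can be illustrated geometrically: when the head yaws about a pivot behind the eyes, the eyes translate along an arc, and a renderer that rotates the virtual camera about the wrong pivot introduces a translational viewpoint error. The following 2-D sketch, with hypothetical distances and not the paper's calibration method, quantifies that error.

```python
import math


def eye_displacement(theta_rad: float, por_dist: float):
    """Displacement (x, z) of the eye when the head yaws theta radians
    about a point of rotation por_dist metres behind the eye; the eye
    starts at the origin looking along +z."""
    return (por_dist * math.sin(theta_rad),
            por_dist * (math.cos(theta_rad) - 1.0))


def por_error(theta_rad: float, true_dist: float, assumed_dist: float) -> float:
    """Translational viewpoint error (metres) when the renderer rotates
    the virtual camera about an assumed POR distance instead of the true
    one. Zero when the assumed pivot matches the true pivot."""
    tx, tz = eye_displacement(theta_rad, true_dist)
    ax, az = eye_displacement(theta_rad, assumed_dist)
    return math.hypot(tx - ax, tz - az)


# A 90-degree yaw with a 10 cm eye-to-POR offset, rendered as if the eye
# itself were the pivot, displaces the virtual viewpoint by about 14 cm.
err = por_error(math.pi / 2, true_dist=0.10, assumed_dist=0.0)
```

The error grows with both the yaw angle and the pivot mismatch (it equals |true - assumed| * 2 sin(theta/2)), which is why even small POR misestimates can make the display "behave in bizarre ways" during large head turns.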


1999 ◽  
Vol 8 (4) ◽  
pp. 469-473 ◽  
Author(s):  
Jeffrey S. Pierce ◽  
Randy Pausch ◽  
Christopher B. Sturgill ◽  
Kevin D. Christiansen

For entertainment applications, a successful virtual experience based on a head-mounted display (HMD) needs to overcome some or all of the following problems: entering a virtual world is a jarring experience, people do not naturally turn their heads or talk to each other while wearing an HMD, putting on the equipment is hard, and people do not realize when the experience is over. In the Electric Garden at SIGGRAPH 97, we presented the Mad Hatter's Tea Party, a shared virtual environment experienced by more than 1,500 SIGGRAPH attendees. We addressed these HMD-related problems with a combination of back story, see-through HMDs, virtual characters, continuity of real and virtual objects, and the layout of the physical and virtual environments.


2021 ◽  
Vol 11 (2) ◽  
pp. 495 ◽  
Author(s):  
Yuto Kimura ◽  
Asako Kimura ◽  
Fumihisa Shibata

In this study, we propose two methods for representing virtual transparent objects convincingly on an optical see-through head-mounted display without using an attenuation function or shielding environmental light. The first method represents the shadows and caustics of virtual transparent objects as illusionary images. Using this illusion-based approach, shadows can be represented without blocking the luminance produced by the real environment, and caustics are represented by adding the luminance of the environment to the produced shadow. In the second method, the visual effects that occur in each individual image of a transparent object are represented as surface, refraction, and reflection images by considering human binocular movement. The visual effects produced by this method reflect the binocular disparity associated with vergence and the defocus associated with accommodation for the respective images. To reproduce the disparity, each parallax image is calculated in real time using a polygon-based method; to reproduce the defocus, image processing is applied to blur each image according to the user's gaze. To validate these approaches, we conducted experiments to evaluate the realism of the virtual transparent objects produced by each method. The results revealed that both methods produced virtual transparent objects with improved realism.


2018 ◽  
Vol 23 (1) ◽  
pp. 1-15 ◽  
Author(s):  
Eric Krokos ◽  
Catherine Plaisant ◽  
Amitabh Varshney

Virtual reality displays, such as head-mounted displays (HMDs), afford us superior spatial awareness by leveraging our vestibular and proprioceptive senses, compared to traditional desktop displays. Since classical times, people have used memory palaces as a spatial mnemonic to help remember information by organizing it spatially and associating it with salient features in that environment. In this paper, we explore whether using virtual memory palaces in a head-mounted display with head tracking (HMD condition) would allow a user to recall information better than when using a traditional desktop display with mouse-based interaction (desktop condition). We found that virtual memory palaces in the HMD condition provide superior memory recall compared to the desktop condition. We believe this is a first step in using virtual environments to create more memorable experiences that enhance productivity through better recall of large amounts of information organized using the idea of virtual memory palaces.

