Visual Exploration of Omni-Directional Panoramic Scenes

2020 ◽  
Author(s):  
Walter F. Bischof ◽  
Nicola C Anderson ◽  
Michael T. Doswell ◽  
Alan Kingstone

How do we explore the visual environment around us, and how are head and eye movements coordinated during our exploration? To investigate this question, we had observers look at omni-directional panoramic scenes, composed of both landscape and fractal images, using a virtual-reality (VR) viewer while their eye and head movements were tracked. We analyzed the spatial distribution of eye fixations and the distribution of saccade directions; the spatial distribution of head positions and the distribution of head shifts; as well as the relation between eye and head movements. The results show that, for landscape scenes, eye and head behaviour best fit the allocentric frame defined by the scene horizon, especially when head tilt (i.e., head rotation around the view axis) is considered. For fractal scenes, which have an isotropic texture, eye and head movements were executed primarily along the cardinal directions in world coordinates. The results also show that eye and head movements are closely linked in space and time in a complementary way, with stimulus-driven eye movements predominantly leading the head movements. Our study is the first to systematically examine eye and head movements in a panoramic VR environment, and the results demonstrate that a VR environment constitutes a powerful and informative research alternative to traditional methods for investigating looking behaviour.
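For readers unfamiliar with this kind of analysis, the sketch below shows one way a saccade-direction distribution might be computed in the scene (allocentric) frame by removing head tilt around the view axis. It is a minimal illustration under assumed inputs, not the authors' pipeline; the function name, the input layout, and the small-angle treatment of spherical coordinates are all assumptions.

```python
import numpy as np

def saccade_direction_histogram(lon_deg, lat_deg, head_tilt_deg, n_bins=36):
    """Distribution of saccade directions in the scene (allocentric) frame.

    lon_deg, lat_deg : fixation positions in degrees; directions computed
                       from simple differences are a small-angle approximation
    head_tilt_deg    : head rotation around the view axis at each fixation
    """
    directions = np.degrees(np.arctan2(np.diff(lat_deg), np.diff(lon_deg)))
    # Subtract head tilt so each direction is measured relative to the scene
    # horizon rather than the (possibly tilted) head.
    directions = (directions - np.asarray(head_tilt_deg)[:-1]) % 360.0
    return np.histogram(directions, bins=n_bins, range=(0.0, 360.0))
```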

Author(s):  
Arne F. Meyer ◽  
John O’Keefe ◽  
Jasper Poort

Summary

Animals actively interact with their environment to gather sensory information. There is conflicting evidence about how mice use vision to sample their environment. During head restraint, mice make rapid eye movements strongly coupled between the eyes, similar to conjugate saccadic eye movements in humans. However, when mice are free to move their heads, eye movement patterns are more complex and often non-conjugate, with the eyes moving in opposite directions. Here, we combined eye tracking with head motion measurements in freely moving mice and found that both observations can be explained by the existence of two distinct types of coupling between eye and head movements. The first type comprised non-conjugate eye movements which systematically compensated for changes in head tilt to maintain approximately the same visual field relative to the horizontal ground plane. The second type of eye movements was conjugate and coupled to head yaw rotation to produce a "saccade and fixate" gaze pattern. During head-initiated saccades, the eyes moved together in the same direction as the head, but during subsequent fixation moved in the opposite direction to the head to compensate for head rotation. This "saccade and fixate" pattern is similar to that seen in humans, who use eye movements (with or without head movement) to rapidly shift gaze, but in mice it relies on combined eye and head movements. Indeed, the two types of eye movements very rarely occurred in the absence of head movements. Even in head-restrained mice, eye movements were invariably associated with attempted head motion. Both types of eye-head coupling were seen in freely moving mice during social interactions and a visually guided object-tracking task. Our results reveal that mice use a combination of head and eye movements to sample their environment and highlight the similarities and differences between eye movements in mice and humans.

Highlights

- Tracking of eyes and head in freely moving mice reveals two types of eye-head coupling
- Eye/head tilt coupling aligns gaze to the horizontal plane
- Rotational eye and head coupling produces a "saccade and fixate" gaze pattern with the head leading the eye
- Both types of eye-head coupling are maintained during visually guided behaviors
- Eye movements in head-restrained mice are related to attempted head movements
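As a rough illustration of how the two coupling types could be quantified from tracking data, the sketch below correlates a non-conjugate (vergence-like) eye component with head tilt and a conjugate component with head yaw velocity. It is a crude index under assumed inputs and units, not the authors' analysis.

```python
import numpy as np

def coupling_indices(left_eye, right_eye, head_tilt, head_yaw_vel):
    """Crude correlation indices for the two eye-head coupling types.

    left_eye, right_eye : horizontal eye-in-head positions (deg)
    head_tilt           : head tilt relative to the horizontal plane (deg)
    head_yaw_vel        : head yaw velocity (deg/s)
    """
    conjugate = 0.5 * (np.asarray(left_eye) + np.asarray(right_eye))
    vergence = np.asarray(left_eye) - np.asarray(right_eye)  # non-conjugate part
    # Type 1: non-conjugate eye movements compensating for head tilt.
    tilt_coupling = np.corrcoef(vergence, head_tilt)[0, 1]
    # Type 2: conjugate eye movements coupled to head yaw rotation.
    yaw_coupling = np.corrcoef(conjugate, head_yaw_vel)[0, 1]
    return tilt_coupling, yaw_coupling
```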


2019 ◽  
Vol 12 (7) ◽  
Author(s):  
Nicola C. Anderson ◽  
Walter F. Bischof

Video stream: https://vimeo.com/356859979 (production and publication of the video stream was sponsored by SCIANS Ltd, http://www.scians.ch/)

We examined the extent to which image shape (square vs. circle), image rotation, and image content (landscapes vs. fractal images) influenced eye and head movements. Both the eyes and head were tracked while observers looked at natural scenes in a virtual reality (VR) environment. In line with previous work, we found a horizontal bias in saccade directions, but this was affected by both the image shape and its content. Interestingly, when viewing landscapes (but not fractals), observers rotated their head in line with the image rotation, presumably to make saccades in cardinal, rather than oblique, directions. We discuss our findings in relation to current theories on eye movement control, and how insights from VR might inform traditional eye-tracking studies.

Part 2: Observers looked at panoramic, 360-degree scenes using VR goggles while eye and head movements were tracked. Fixations were determined using IDT (Salvucci & Goldberg, 2000) adapted to a spherical coordinate system. We then analyzed (a) the spatial distribution of fixations and the distribution of saccade directions, (b) the spatial distribution of head positions and the distribution of head movements, and (c) the relation between gaze and head movements. We found that, for landscape scenes, gaze and head best fit the allocentric frame defined by the scene horizon, especially when taking head tilt (i.e., head rotation around the view axis) into account. For fractal scenes, which are isotropic on average, the bias toward a body-centric frame is weak for gaze and strong for the head. Furthermore, our data show that eye and head movements are closely linked in space and time in stereotypical ways, with volitional eye movements predominantly leading the head. We discuss our results in terms of models of visual exploratory behavior in panoramic scenes, both in virtual and real environments.
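A minimal sketch of how I-DT might be adapted to spherical coordinates, as mentioned above: gaze samples are converted to unit vectors so dispersion can be measured as an angle rather than in pixels. The greedy windowing and all parameter values are simplifications, not the authors' implementation.

```python
import numpy as np

def idt_spherical(lon, lat, t, disp_thresh_deg=1.0, min_dur_s=0.1):
    """Simplified I-DT fixation detection on spherical gaze data.

    lon, lat : gaze direction in radians (longitude, latitude)
    t        : sample timestamps in seconds
    Dispersion is the largest angular deviation from the window centroid,
    so the threshold is in visual degrees rather than screen units.
    """
    # Unit gaze vectors: angular distances become dot-product angles.
    xyz = np.column_stack((np.cos(lat) * np.cos(lon),
                           np.cos(lat) * np.sin(lon),
                           np.sin(lat)))
    fixations, start = [], 0
    for end in range(1, len(t)):
        window = xyz[start:end + 1]
        centroid = window.mean(axis=0)
        centroid /= np.linalg.norm(centroid)
        disp = np.degrees(np.max(np.arccos(np.clip(window @ centroid,
                                                   -1.0, 1.0))))
        if disp > disp_thresh_deg:
            # Dispersion exceeded: close the fixation before this sample.
            if t[end - 1] - t[start] >= min_dur_s:
                fixations.append((t[start], t[end - 1]))
            start = end
    if t[-1] - t[start] >= min_dur_s:
        fixations.append((t[start], t[-1]))
    return fixations  # list of (onset, offset) times
```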


PLoS ONE ◽  
2021 ◽  
Vol 16 (1) ◽  
pp. e0245777
Author(s):  
Fanny Poncet ◽  
Robert Soussignan ◽  
Margaux Jaffiol ◽  
Baptiste Gaudelus ◽  
Arnaud Leleu ◽  
...  

Recognizing facial expressions of emotion is a fundamental ability for adaptation to the social environment. To date, it remains unclear whether the spatial distribution of eye movements predicts accurate recognition or, on the contrary, confusion in the recognition of facial emotions. In the present study, we asked participants to recognize facial emotions while monitoring their gaze behavior using eye-tracking technology. In Experiment 1a, 40 participants (20 women) performed a classic facial emotion recognition task with a 5-choice procedure (anger, disgust, fear, happiness, sadness). In Experiment 1b, a second group of 40 participants (20 women) was exposed to the same materials and procedure, except that they were instructed to indicate (Yes/No response) whether the face expressed a specific emotion (e.g., anger), with the five emotion categories tested in distinct blocks. In Experiment 2, two groups of 32 participants performed the same task as in Experiment 1a while exposed to partial facial expressions composed of action units (AUs) present or absent in some parts of the face (top, middle, or bottom). The coding of the AUs produced by the models showed complex facial configurations for most emotional expressions, with several AUs in common. Eye-tracking data indicated that relevant facial actions were actively gazed at by the decoders during both accurate recognition and errors. False recognition was mainly associated with the additional visual exploration of less relevant facial actions in regions containing ambiguous AUs or AUs relevant to other emotional expressions. Finally, the recognition of facial emotions from partial expressions showed that no single facial action was necessary to effectively communicate an emotional state. Rather, the recognition of facial emotions relied on the integration of a complex set of facial cues.
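As an illustration of the kind of gaze measure such analyses rest on, the sketch below computes the proportion of fixation time spent in rectangular facial regions of interest. The AOI layout, names, and input format are assumptions for illustration, not the authors' method.

```python
import numpy as np

def dwell_proportions(fix_x, fix_y, fix_dur, aois):
    """Proportion of total fixation time falling in each facial AOI.

    aois : dict mapping a region name (e.g., 'eyes', 'mouth') to a
           bounding box (x0, y0, x1, y1) in the same units as fix_x/fix_y.
    """
    fix_x, fix_y = np.asarray(fix_x), np.asarray(fix_y)
    fix_dur = np.asarray(fix_dur, dtype=float)
    total = fix_dur.sum()
    return {name: fix_dur[(fix_x >= x0) & (fix_x <= x1) &
                          (fix_y >= y0) & (fix_y <= y1)].sum() / total
            for name, (x0, y0, x1, y1) in aois.items()}
```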


2001 ◽  
Vol 86 (4) ◽  
pp. 1546-1554 ◽  
Author(s):  
S. Glasauer ◽  
M. Dieterich ◽  
Th. Brandt

To find an explanation of the mechanisms of central positional nystagmus in neurological patients with posterior fossa lesions, we developed a three-dimensional (3-D) mathematical model to simulate head-position-dependent changes in eye position control relative to gravity. This required a model implementation of saccadic burst generation, of the neural velocity-to-eye-position integrator, which includes the experimentally demonstrated leakage in the torsional component, and of otolith-dependent neural control of Listing's plane. The validity of the model was first tested by simulating saccadic eye movements in different head positions. The model was then used to simulate central positional nystagmus in off-vertical head positions. The model simulated lesions of assumed otolith inputs to the burst generator or the neural integrator, both of which resulted in different types of torsional-vertical nystagmus that occurred only during head tilt in the roll plane. The model data qualitatively fit clinical observations of central positional nystagmus. Quantitative comparison with patient data was not possible, since no 3-D analyses of eye movements in various head positions have been reported in the literature on patients with positional nystagmus. The present model, prompted by an open clinical question, proposes a new hypothesis about the generation of pathological nystagmus and about neural control of Listing's plane.
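The leaky velocity-to-position integrator at the heart of such models can be sketched in a few lines: a forward-Euler discretization of dE/dt = v - E/τ, where a short time constant τ makes eye position drift back toward zero. The sketch below is a single-component illustration with invented parameter values, not the 3-D model described above.

```python
import numpy as np

def leaky_integrator(vel_cmd, dt=0.001, tau=2.0):
    """Leaky neural velocity-to-position integrator, one component.

    Forward-Euler discretization of dE/dt = v - E/tau. A short tau (assumed
    here, as for the leaky torsional component) lets eye position decay
    toward zero; horizontal/vertical components would use a much longer tau.
    """
    pos = np.zeros_like(vel_cmd, dtype=float)
    for i in range(1, len(vel_cmd)):
        pos[i] = pos[i - 1] + dt * (vel_cmd[i] - pos[i - 1] / tau)
    return pos
```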


2002 ◽  
Vol 87 (2) ◽  
pp. 912-924 ◽  
Author(s):  
H. Rambold ◽  
A. Churchland ◽  
Y. Selig ◽  
L. Jasmin ◽  
S. G. Lisberger

The vestibuloocular reflex (VOR) generates compensatory eye movements to stabilize visual images on the retina during head movements. The amplitude of the reflex is calibrated continuously throughout life and undergoes adaptation, also called motor learning, when head movements are persistently associated with image motion. Although the floccular complex of the cerebellum is necessary for VOR adaptation, it is not known whether this function is localized in its anterior or posterior portions, which comprise the ventral paraflocculus and flocculus, respectively. The present paper reports the effects of partial lesions of the floccular complex in five macaque monkeys, made either surgically or with stereotaxic injection of 3-nitropropionic acid (3-NP). Before and after the lesions, smooth pursuit eye movements were tested during sinusoidal and step-ramp target motion. Cancellation of the VOR was tested by moving a target exactly with the monkey during sinusoidal head rotation. The control VOR was tested during sinusoidal head rotation in the dark and during 30°/s pulses of head velocity. VOR adaptation was studied by having the monkeys wear ×2 or ×0.25 optics for 4–7 days. In two monkeys, bilateral lesions removed all of the flocculus except for parts of folia 1 and 2 but did not produce any deficits in smooth pursuit, VOR adaptation, or VOR cancellation. We conclude that the flocculus alone probably is not necessary for either pursuit or VOR learning. In two monkeys, unilateral lesions including a large fraction of the ventral paraflocculus produced small deficits in horizontal and vertical smooth pursuit, and mild impairments of VOR adaptation and VOR cancellation. We conclude that the ventral paraflocculus contributes to both behaviors. In one monkey, a bilateral lesion of the flocculus and ventral paraflocculus produced severe deficits in smooth pursuit and VOR cancellation, and a complete loss of VOR adaptation. Considering all five cases together, there was a strong correlation between the size of the deficits in VOR learning and pursuit. We found the strongest correlation between the behavioral deficits and the size of the lesion of the ventral paraflocculus, a weaker but significant correlation for the full floccular complex, and no correlation with the size of the lesion of the flocculus. We conclude that 1) lesions of the floccular complex cause linked deficits in smooth pursuit and VOR adaptation, and 2) the relevant portions of the structure are primarily in the ventral paraflocculus, although the flocculus may participate.
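For reference, VOR gain during sinusoidal rotation is conventionally estimated as the ratio of fitted eye- and head-velocity amplitudes at the stimulus frequency. The sketch below shows one standard least-squares version of that estimate; it is a generic illustration, not necessarily the analysis used in this study.

```python
import numpy as np

def vor_gain(eye_vel, head_vel, t, freq_hz):
    """Estimate VOR gain as the amplitude ratio of sinusoidal fits.

    Fits a*sin(wt) + b*cos(wt) + c to each velocity trace at the stimulus
    frequency and returns |eye amplitude| / |head amplitude|.
    """
    w = 2.0 * np.pi * freq_hz
    X = np.column_stack((np.sin(w * t), np.cos(w * t), np.ones_like(t)))

    def amplitude(y):
        coef, *_ = np.linalg.lstsq(X, y, rcond=None)
        return np.hypot(coef[0], coef[1])

    return amplitude(eye_vel) / amplitude(head_vel)
```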


2020 ◽  
Vol 123 (1) ◽  
pp. 243-258 ◽  
Author(s):  
Kristin N. Hageman ◽  
Margaret R. Chow ◽  
Dale Roberts ◽  
Charles C. Della Santina

Head rotation, translation, and tilt with respect to a gravitational field elicit reflexive eye movements that partially stabilize images of Earth-fixed objects on the retinas of humans and other vertebrates. Compared with the angular vestibulo-ocular reflex, responses to translation and tilt, collectively called the otolith-ocular reflex (OOR), are less completely characterized, typically smaller, generally disconjugate (different for the 2 eyes) and more complicated in their relationship to the natural stimuli that elicit them. We measured binocular 3-dimensional OOR responses of 6 alert normal chinchillas in darkness during whole body tilts around 16 Earth-horizontal axes and translations along 21 axes in horizontal, coronal, and sagittal planes. Ocular countertilt responses to 40-s whole body tilts about Earth-horizontal axes grew linearly with head tilt amplitude, but responses were disconjugate, with each eye's response greatest for whole body tilts about axes near the other eye's resting line of sight. OOR response magnitude during 1-Hz sinusoidal whole body translations along Earth-horizontal axes also grew with stimulus amplitude. Translational OOR responses were similarly disconjugate, with each eye's response greatest for whole body translations along its resting line of sight. Responses to Earth-horizontal translation were similar to those that would be expected for tilts that would cause a similar peak deviation of the gravitoinertial acceleration (GIA) vector with respect to the head, consistent with the "perceived tilt" model of the OOR. However, that model poorly fit responses to translations along non-Earth-horizontal axes and was insufficient to explain why responses are larger for the eye toward which the GIA vector deviates.

NEW & NOTEWORTHY: As the first in a pair of papers on Binocular 3D Otolith-Ocular Reflexes, this paper characterizes binocular 3D eye movements in normal chinchillas during tilts and translations. The eye movement responses were used to create a data set to fully define the normal otolith-ocular reflexes in chinchillas. This data set provides the foundation to use otolith-ocular reflexes to back-project direction and magnitude of eye movement to predict tilt axis as discussed in the companion paper.
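The "perceived tilt" equivalence mentioned above has a simple geometric core: a horizontal linear acceleration a deviates the gravitoinertial acceleration from gravity by arctan(|a|/g), the same deviation a static tilt of that angle would produce. A minimal sketch of that relationship, for illustration only:

```python
import numpy as np

def equivalent_tilt_deg(linear_acc_mps2, g=9.81):
    """Deviation of the gravitoinertial acceleration (GIA) vector from
    gravity produced by a horizontal linear acceleration, in degrees.

    Under the "perceived tilt" model, a translation would evoke roughly
    the same otolith-ocular response as a static tilt of this angle.
    """
    return np.degrees(np.arctan2(np.linalg.norm(linear_acc_mps2), g))
```

For example, a sinusoidal translation with a peak acceleration of 2.4 m/s² corresponds to a peak equivalent tilt of about 13.7°.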


1991 ◽  
Vol 1 (3) ◽  
pp. 263-277 ◽  
Author(s):  
J.L. Demer ◽  
J. Goldberg ◽  
F.I. Porter ◽  
H.A. Jenkins ◽  
K. Schmidt

Vestibularly and visually driven eye movements interact to compensate for head movements, maintaining the retinal image stability necessary for clear vision. Wearing highly magnifying telescopic spectacles requires that such compensatory visual-vestibular interaction operate in a quantitatively much more demanding regime than is normally encountered. We employed electro-oculography to investigate the effect of wearing 2×, 4×, and 6× binocular telescopic spectacles on visual-vestibular interactions during sinusoidal head rotation in 43 normal subjects. All telescopic spectacle powers produced a large, immediate increase in the gain (eye velocity/head velocity) of compensatory eye movements, called the visual-vestibulo-ocular reflex (VVOR). However, the amount of VVOR gain augmentation became limited as spectacle magnification and the amplitude of head velocity increased. Optokinetic responses during wearing of telescopic spectacles exhibited a similar nonlinearity with respect to stimulus amplitude and spectacle magnification. Computer simulation was used to demonstrate that the nonlinear response of the VVOR with telescopic spectacles results from nonlinearities in visually guided tracking movements. Immediate augmentation of VVOR gain by telescopic spectacles declined significantly with increasing age in the subject pool studied. Presentation of an unmagnified visual field peripheral to the telescopic spectacles reduced the immediate VVOR gain-enhancing effect of central magnified vision. These results imply that the VVOR may not be adequate to maintain retinal image stability during head movements when strongly magnifying telescopic spectacles are worn.
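The reported saturation can be illustrated with a toy model in which the visually driven component augmenting the basic VOR has a velocity ceiling, so the achieved gain falls increasingly short of the required gain (equal to the spectacle magnification) as magnification and head velocity grow. All parameter values below are invented for illustration; this is not the simulation used in the study.

```python
import numpy as np

def vvor_gain(head_vel_amp, magnification, vor_gain=1.0, visual_limit=60.0):
    """Toy model: achieved VVOR gain with magnifying spectacles.

    Perfect compensation requires eye velocity = magnification * head
    velocity. The basic VOR supplies vor_gain * head velocity, and the
    visually driven remainder is assumed to saturate at visual_limit
    (deg/s), so achieved gain falls short of the magnification as head
    velocity and magnification increase.
    """
    required = magnification * head_vel_amp
    visual = np.minimum(required - vor_gain * head_vel_amp, visual_limit)
    return (vor_gain * head_vel_amp + visual) / head_vel_amp
```

With these invented numbers, a 2× lens at a 20 deg/s head velocity yields the full required gain of 2, whereas a 6× lens at 50 deg/s yields only (50 + 60)/50 = 2.2, far below the required 6.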


2020 ◽  
Author(s):  
Nicola C Anderson ◽  
Walter F. Bischof ◽  
Tom Foulsham ◽  
Alan Kingstone

Research investigating gaze in natural scenes has identified a number of spatial biases in where people look, but it is unclear whether these are partly due to constrained testing environments (e.g., a participant with their head restrained, looking at a landscape image framed within a computer monitor). We examined the extent to which image shape (square vs. circle), image rotation, and image content (landscapes vs. fractal images) influenced eye and head movements in virtual reality (VR). Both the eyes and head were tracked while observers looked at natural scenes in a virtual environment. In line with previous work, we found a bias for saccade directions parallel to the image horizon, regardless of image shape or content. We found that, when allowed to do so, observers moved both their eyes and head to explore images. Head rotation, however, was idiosyncratic; some observers rotated a lot, while others did not. Interestingly, the head rotated in line with the rotation of landscape images, but not fractal images. That head rotation and gaze direction respond differently to image content suggests that they may be under different control systems. We discuss our findings in relation to current theories on head and eye movement control, and how insights from VR might inform more traditional eye-tracking studies.
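One simple way to quantify the horizon-parallel bias described above is the fraction of saccades whose direction lies within some tolerance of the (possibly rotated) image horizon. The sketch below is illustrative only, with an arbitrary tolerance; it is not the authors' measure.

```python
import numpy as np

def horizon_parallel_fraction(sacc_dir_deg, image_rot_deg=0.0, tol_deg=15.0):
    """Fraction of saccades directed within tol_deg of the image horizon.

    Directions are folded modulo 180 so leftward and rightward saccades
    count equally; image_rot_deg rotates the reference horizon.
    """
    rel = (np.asarray(sacc_dir_deg) - image_rot_deg) % 180.0
    return float(np.mean((rel <= tol_deg) | (rel >= 180.0 - tol_deg)))
```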


2003 ◽  
Vol 3 ◽  
pp. 122-137 ◽  
Author(s):  
George K. Hung

The objective of this article is to determine the effect of three different putting grips (conventional, cross-hand, and one-handed) on variations in eye and head movements during the putting stroke. Seven volunteer novice players, ranging in age from 21 to 22 years, participated in the study. During each experimental session, the subject stood on a specially designed platform covered with artificial turf and putted golf balls towards a standard golf hole. The three different types of grips were tested at two distances: 3 and 9 ft. For each condition, 20 putts were attempted. For each putt, data were recorded over a 3-s interval at a sampling rate of 100 Hz. Eye movements were recorded using a helmet-mounted eye movement monitor. Head rotation about an imaginary axis through the top of the head and its center of rotation was measured by means of a potentiometer mounted on a fixed frame and coupled to the helmet. Putter-head motion was measured using a linear array of infrared phototransistors embedded in the platform. The standard deviation (STD, relative to the initial level) was calculated for eye and head movements over the duration of the putt (i.e., from the beginning of the backstroke, through the forward stroke, to impact). The averaged STD for the attempted putts was calculated for each subject. Then, the averaged STDs and other data for the seven subjects were statistically compared across the three grip conditions. The STD of eye movements was greater (p < 0.1) for the conventional grip than for the cross-hand (9 ft) and one-handed (3 and 9 ft) grips. Also, the STD of head movements was greater (p < 0.1; 3 ft) for the conventional grip than for the cross-hand and one-handed grips. Vestibulo-ocular responses associated with head rotations could be observed in many 9-ft and some 3-ft putts. The duration of the putt was significantly longer (p < 0.05; 3 and 9 ft) for the one-handed grip than for the conventional and cross-hand grips. Finally, performance, or percentage of putts made, was significantly better (p < 0.05; 9 ft) for the cross-hand than the conventional grip. The smaller variations, both in eye movements during longer putts and head movements during shorter putts, using cross-hand and one-handed grips may explain why some golfers, based on their playing experience, prefer these over the conventional grip. Also, the longer duration for the one-handed grip, which improves tempo, may explain why some senior players prefer the long-shaft (effectively one-handed grip) putter.
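The variability measure described above can be read as the root-mean-square deviation of each trace from its initial level over the putt. The sketch below implements that reading; it is one plausible interpretation with an assumed baseline window, not necessarily the authors' exact computation.

```python
import numpy as np

def putt_variation(trace, n_baseline=10):
    """RMS deviation of an eye or head trace from its initial level.

    One reading of the "STD relative to the initial level": the first
    n_baseline samples (0.1 s at 100 Hz, an assumption) define the level
    at the beginning of the backstroke.
    """
    baseline = np.mean(trace[:n_baseline])
    return float(np.sqrt(np.mean((np.asarray(trace) - baseline) ** 2)))
```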


Author(s):  
Giuditta Battistoni ◽  
Diana Cassi ◽  
Marisabel Magnifico ◽  
Giuseppe Pedrazzi ◽  
Marco Di Blasio ◽  
...  

This study investigates the reliability and precision of anthropometric measurements collected from 3D images acquired under different conditions of head rotation. Various sources of error were examined, and the equivalence between craniofacial data generated from alternative head positions was assessed. 3D captures of a mannequin head were obtained with a stereophotogrammetric system (Face Shape 3D MaxiLine). Image acquisition was performed with no rotations and with various pitch, roll, and yaw angulations. Fourteen linear distances were measured on the 3D images. Various indices were used to quantify error magnitude, among them the acquisition error, the mean and the maximum intra- and inter-operator measurement error, the repeatability and reproducibility error, the standard deviation, and the standard error of errors. Two one-sided tests (TOST) were performed to assess the equivalence between measurements recorded in different head angulations. The maximum intra-operator error was very low (0.336 mm), closely followed by the acquisition error (0.496 mm). The maximum inter-operator error was 0.532 mm, and the highest degree of error was found in reproducibility (0.890 mm). TOST showed anthropometric measurements from the alternative acquisition conditions to be statistically equivalent, with the exception of the Zygion (l)–Tragion (l) and Cheek (l)–Tragion (l) distances measured with pitch angulation compared to the no-rotation position. Face Shape 3D MaxiLine has sufficient accuracy for orthodontic and surgical use. Precision was not altered by head orientation, which makes acquisition simpler and less constrained by the precise head positioning required for 2D photographs.
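For readers unfamiliar with TOST, the sketch below shows the paired-sample version: two one-sided t-tests against an equivalence margin, with equivalence claimed when the larger of the two p-values falls below alpha. The margin and input layout are assumptions for illustration, not the study's settings.

```python
import numpy as np
from scipy import stats

def tost_paired(a, b, margin_mm):
    """Two one-sided tests (TOST) for equivalence of paired measurements.

    a, b      : the same distances measured under two head positions
    margin_mm : equivalence margin
    Returns the larger one-sided p-value; equivalence is claimed when it
    falls below the chosen alpha.
    """
    diff = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
    n = len(diff)
    se = diff.std(ddof=1) / np.sqrt(n)
    t_low = (diff.mean() + margin_mm) / se   # H0: mean difference <= -margin
    t_high = (diff.mean() - margin_mm) / se  # H0: mean difference >= +margin
    p_low = 1.0 - stats.t.cdf(t_low, n - 1)
    p_high = stats.t.cdf(t_high, n - 1)
    return max(p_low, p_high)
```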

