P3-8: Eye Position Distribution Depending on Head Orientation in Natural Scene Viewing

i-Perception ◽  
10.1068/if719 ◽  
2012 ◽  
Vol 3 (9) ◽  
pp. 719-719 ◽  
Author(s):  
Ryoichi Nakashima ◽  
Yu Fang ◽  
Kazumichi Matsumiya ◽  
Rumi Tokunaga ◽  
Ichiro Kuriki ◽  
...  

2015 ◽  
Vol 3 (2) ◽  
pp. 149-154 ◽  
Author(s):  
Yu Fang ◽  
Masaki Emoto ◽  
Ryoichi Nakashima ◽  
Kazumichi Matsumiya ◽  
Ichiro Kuriki ◽  
...  

2012 ◽  
Vol 12 (9) ◽  
pp. 1248-1248 ◽  
Author(s):  
Y. Fang ◽  
R. Nakashima ◽  
K. Matsumiya ◽  
R. Tokunaga ◽  
I. Kuriki ◽  
...  

2017 ◽  
Vol 114 (10) ◽  
pp. 2771-2776 ◽  
Author(s):  
Hildward Vandormael ◽  
Santiago Herce Castañón ◽  
Jan Balaguer ◽  
Vickie Li ◽  
Christopher Summerfield

Humans move their eyes to gather information about the visual world. However, saccadic sampling has largely been explored in paradigms that involve searching for a lone target in a cluttered array or natural scene. Here, we investigated the policy that humans use to overtly sample information in a perceptual decision task that required information from across multiple spatial locations to be combined. Participants viewed a spatial array of numbers and judged whether the average was greater or smaller than a reference value. Participants preferentially sampled items that were less diagnostic of the correct answer (“inlying” elements; that is, elements closer to the reference value). This preference to sample inlying items was linked to decisions, enhancing the tendency to give more weight to inlying elements in the final choice (“robust averaging”). These findings contrast with a large body of evidence indicating that gaze is directed preferentially to deviant information during natural scene viewing and visual search, and suggest that humans may sample information “robustly” with their eyes during perceptual decision-making.
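The robust-averaging pattern described above can be illustrated with a small sketch. The inverse-distance weighting function, numbers, and parameter below are illustrative assumptions, not the model fitted in the study: elements closer to the reference receive larger decision weights, so the weighted estimate is pulled toward inlying values.

```python
# Illustrative sketch of "robust averaging": down-weighting outlying
# elements when averaging a sample against a reference value.
# The weighting function is an assumption for illustration only.

def robust_average(values, reference, softness=1.0):
    """Weighted mean in which weight falls off with distance from the reference."""
    weights = [1.0 / (softness + abs(v - reference)) for v in values]
    total = sum(weights)
    return sum(w * v for w, v in zip(weights, values)) / total

values = [2, 4, 5, 5, 6, 9]        # hypothetical number array
reference = 5                      # reference value to compare against
plain = sum(values) / len(values)  # ordinary (equal-weight) mean
robust = robust_average(values, reference)
# The robust estimate lies closer to the reference than the plain mean,
# because inlying elements (4, 5, 5, 6) carry more weight than 2 and 9.
```

Any weighting that decays with distance from the reference produces the same qualitative effect; the study itself infers the weighting from choices and gaze, rather than assuming one.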


1997 ◽  
Vol 78 (4) ◽  
pp. 2203-2216 ◽  
Author(s):  
Bernhard J. M. Hess ◽  
Dora E. Angelaki

Hess, Bernhard J. M. and Dora E. Angelaki. Kinematic principles of primate rotational vestibulo-ocular reflex. II. Gravity-dependent modulation of primary eye position. J. Neurophysiol. 78: 2203–2216, 1997. The kinematic constraints of three-dimensional eye positions were investigated in rhesus monkeys during passive head and body rotations relative to gravity. We studied fast and slow phase components of the vestibulo-ocular reflex (VOR) elicited by constant-velocity yaw rotations and sinusoidal oscillations about an earth-horizontal axis. We found that the spatial orientation of both fast and slow phase eye positions could be described locally by a planar surface with torsional variation of <2.0 ± 0.4° (displacement planes) that systematically rotated and/or shifted relative to Listing's plane. In supine/prone positions, displacement planes pitched forward/backward; in left/right ear-down positions, displacement planes were shifted in parallel along the positive/negative torsional axis. Dynamically changing primary eye positions were computed from displacement planes. Torsional and vertical components of primary eye position modulated as a sinusoidal function of head orientation in space. The torsional component was maximal in ear-down positions and approximately zero in supine/prone orientations. The opposite was observed for the vertical component. Modulation of the horizontal component of primary eye position exhibited a more complex dependence. In contrast to the torsional component, which was relatively independent of rotational speed, modulation of the vertical and horizontal components of primary position depended strongly on the speed of head rotation (i.e., on the frequency of oscillation of the gravity vector component): the faster the head rotated relative to gravity, the larger was the modulation. 
Corresponding results were obtained when a model based on a sinusoidal dependence of instantaneous displacement planes (and primary eye position) on head orientation relative to gravity was fitted to VOR fast phase positions. When VOR fast phase positions were expressed relative to primary eye position estimated from the model fits, they were confined approximately to a single plane with a small torsional standard deviation (∼1.4–2.6°). This reduced torsional variation was in contrast to the large torsional spread (well >10–15°) of fast phase positions when expressed relative to Listing's plane. We conclude that primary eye position depends dynamically on head orientation relative to space rather than being fixed to the head. It defines a gravity-dependent coordinate system relative to which the torsional variability of eye positions is minimized even when the head is moved passively and vestibulo-ocular reflexes are evoked. In this general sense, Listing's law is preserved with respect to an otolith-controlled reference system that is defined dynamically by gravity.
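The reported phase relation can be sketched with a minimal model. The amplitudes below are invented for illustration; the paper fits a model of this sinusoidal form to measured VOR fast phase positions. With head orientation theta defined so that 0° is supine and 90° is ear-down, torsion varies as sin(theta) (zero supine/prone, maximal ear-down) and the vertical component as cos(theta) (the opposite pattern).

```python
# Sketch of the sinusoidal dependence of primary eye position on head
# orientation relative to gravity. Amplitudes (5 deg) are illustrative
# assumptions, not fitted values from the study.
import math

def primary_eye_position(theta_deg, a_torsion=5.0, a_vertical=5.0):
    """Hypothetical torsional/vertical components (deg) vs head orientation.

    theta_deg = 0 corresponds to supine, 90 to ear-down.
    """
    t = math.radians(theta_deg)
    torsion = a_torsion * math.sin(t)    # ~0 supine/prone, maximal ear-down
    vertical = a_vertical * math.cos(t)  # maximal supine/prone, ~0 ear-down
    return torsion, vertical

# Ear-down (theta = 90 deg): torsion maximal, vertical ~ 0.
tor_ed, ver_ed = primary_eye_position(90)
# Supine (theta = 0 deg): torsion ~ 0, vertical maximal.
tor_su, ver_su = primary_eye_position(0)
```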


Author(s):  
R. Calen Walshe ◽  
Antje Nuthmann

Research on eye-movement control during natural scene viewing has investigated the degree to which the duration of individual fixations can be immediately adjusted to ongoing visual-cognitive processing demands. Results from several studies using the fixation-contingent scene quality paradigm suggest that the timing of fixations adapts to stimulus changes that occur on a fixation-to-fixation basis. Analysis of fixation-duration distributions has revealed that saccade-contingent degradations and enhancements of the scene stimulus have two qualitatively distinct types of influence. The surprise effect begins early in a fixation and is tied to surprising visual events such as unexpected stimulus changes. The encoding effect is tied to difficulties in visual-cognitive processing and occurs relatively late within a fixation. Here, we formalize an existing descriptive account of these two effects (referred to as the dual-process account) by using stochastic simulations. In the computational model, surprise- and encoding-related influences are implemented as time-dependent changes in the rate at which saccade timing and programming are completed during critical fixations. The model was tested on data from two experiments in which the luminance of the scene image was either decreased or increased during selected critical fixations (Walshe & Nuthmann, Vision Research, 100, 38–46, 2014). A counterfactual method was used to remove model components and to identify their specific influence on the fixation-duration distributions. The results suggest that the computational dual-process model provides a good account for the data from the luminance-change studies. We describe how the simulations can be generalized to explain a diverse set of experimental results.
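The core mechanism, a saccade timer whose completion rate changes within a fixation, can be sketched as a toy stochastic simulation. All rates, thresholds, and time windows below are illustrative assumptions, not parameters of the published model: a "surprise" window slows accumulation early in the fixation, an "encoding" window slows it late, and either lengthens the simulated fixation.

```python
# Minimal stochastic sketch of the dual-process idea: a saccade timer
# accumulates noisily toward a threshold; modulation windows scale the
# accumulation rate during specified portions of the fixation.
import random

def simulate_fixation(rate=1.0, threshold=50.0, dt=1.0,
                      surprise=None, encoding=None, rng=random):
    """Return a fixation duration (ms) under optional rate modulations.

    surprise / encoding: (start_ms, end_ms, rate_scale) windows in which
    the timer's accumulation rate is multiplied by rate_scale.
    """
    t, acc = 0.0, 0.0
    while acc < threshold:
        r = rate
        for window in (surprise, encoding):
            if window and window[0] <= t < window[1]:
                r *= window[2]
        acc += rng.expovariate(1.0 / (r * dt))  # noisy accumulation step
        t += dt
    return t

rng = random.Random(0)
baseline = [simulate_fixation(rng=rng) for _ in range(500)]
rng = random.Random(0)
surprised = [simulate_fixation(surprise=(0, 80, 0.5), rng=rng)
             for _ in range(500)]
# Halving the rate early in the fixation lengthens the mean duration.
```

An encoding-type effect would be modeled the same way with a late window, e.g. `encoding=(60, 120, 0.5)`; the distinct timing of the two windows is what produces the qualitatively different distributional signatures the account describes.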


2021 ◽  
Vol 57 (7) ◽  
pp. 1025-1041
Author(s):  
Katherine I. Pomaranski ◽  
Taylor R. Hayes ◽  
Mee-Kyoung Kwon ◽  
John M. Henderson ◽  
Lisa M. Oakes

2020 ◽  
Vol 20 (4) ◽  
pp. 15
Author(s):  
Wolfgang Einhäuser ◽  
Charlotte Atzert ◽  
Antje Nuthmann

Perception ◽  
1997 ◽  
Vol 26 (1_suppl) ◽  
pp. 237-237
Author(s):  
J Li ◽  
M M Cohen ◽  
C W DeRoshia ◽  
L T Guzy

Perceived eye position and/or the perceived location of visual targets are altered when the orientation of the surrounding visual environment (Cohen et al, 1995 Perception & Psychophysics 57 433) or that of the observer (Cohen and Guzy, 1995 Aviation, Space, and Environmental Medicine 66 505) is changed. Fourteen subjects used biteboards as they lay on a rotary bed that was oriented head-down −15°, −7.5°, supine, head-up +7.5°, and +15°. In the dark, subjects directed their gaze and set a target to the apparent zenith (exocentric location); they also gazed at a subjective ‘straight ahead’ position with respect to their head (egocentric location). Angular deviations of target settings and changes in vertical eye position were recorded with an ISCAN infrared tracking system. Results indicated that, for exocentric locations, the eyes deviate systematically from the true zenith. The gain for compensating changes in head orientation was 0.69 and 0.73 for gaze direction and target settings, respectively. In contrast, ‘straight ahead’ eye positions were not significantly affected by changes in the subject's orientation. We conclude that subjects make systematic errors when directing their gaze to an exocentric location in near-supine positions. This suggests a systematic bias in the integration of extra-ocular signals with information regarding head orientation. The bias may result from underestimating changes in the orientation of the head in space. In contrast, for egocentric locations, where head orientation information can potentially be discarded, gaze directions were unaffected by head orientation near supine.
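A compensation gain like the 0.69 reported above is simply the slope relating the compensatory eye deviation to the imposed head tilt. The data points below are invented for illustration (chosen to yield a gain of 0.69); only the computation itself reflects the measure.

```python
# Illustrative computation of a compensation gain: least-squares slope
# (through the origin) of eye deviation against head tilt.
# The tilt/deviation values are hypothetical, not the study's data.

def compensation_gain(head_tilts, eye_deviations):
    """Zero-intercept least-squares slope of eye deviation vs head tilt."""
    num = sum(h * e for h, e in zip(head_tilts, eye_deviations))
    den = sum(h * h for h in head_tilts)
    return num / den

tilts = [-15.0, -7.5, 0.0, 7.5, 15.0]        # bed orientations (deg)
eyes = [-10.35, -5.175, 0.0, 5.175, 10.35]   # hypothetical gaze shifts (deg)
gain = compensation_gain(tilts, eyes)        # slope of the fitted line
```

A gain of 1.0 would mean the eyes fully compensate for head tilt; the sub-unity gains reported are consistent with the underestimation of head orientation the authors propose.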

