Eye position effects in saccadic adaptation

2011 ◽  
Vol 106 (5) ◽  
pp. 2536-2545 ◽  
Author(s):  
Katharina Havermann ◽  
Eckart Zimmermann ◽  
Markus Lappe

Saccades are used by the visual system to explore visual space with the high accuracy of the fovea. The visual error after the saccade is used to adapt the control of subsequent eye movements of the same amplitude and direction in order to keep saccades accurate. Saccadic adaptation is thus specific to saccade amplitude and direction. In the present study we show that saccadic adaptation is also specific to the initial position of the eye in the orbit. This is useful because saccades are normally accompanied by head movements and the control of combined head and eye movements depends on eye position. Many parts of the saccadic system contain eye position information. Using the intrasaccadic target step paradigm, we adaptively reduced the amplitude of reactive saccades to a suddenly appearing target at a selected position of the eyes in the orbits and tested the resulting amplitude changes for the same saccade vector at other starting positions. For central adaptation positions the saccade amplitude reduction transferred completely to eccentric starting positions. However, for adaptation at eccentric starting positions, there was a reduced transfer to saccades from central starting positions or from eccentric starting positions in the opposite hemifield. Thus eye position information modifies the transfer of saccadic amplitude changes in the adaptation of reactive saccades. A gain field mechanism may explain this eye position dependence.
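The gain field mechanism invoked at the end of the abstract can be sketched as a toy model (all function names and parameter values below are hypothetical illustrations, not the authors'): a retinotopic adaptation signal is multiplicatively scaled by a planar function of eye position, so the change expressed at a test position depends on the gain-field activation at the test position relative to the adaptation position.

```python
# Toy gain-field sketch (illustrative parameters only): the amplitude
# change trained at one eye position is rescaled by a planar
# eye-position gain when expressed from another starting position.

def gain_field(eye_pos_deg, slope=0.02, offset=1.0):
    """Planar eye-position gain: scaling applied to a retinotopic response."""
    return offset + slope * eye_pos_deg

def expressed_change(adapt_pos, test_pos, trained_change=-2.0):
    """Amplitude change (deg) expressed at test_pos after training at adapt_pos."""
    # Assumption: transfer scales with the relative gain-field activation.
    return trained_change * gain_field(test_pos) / gain_field(adapt_pos)

full = expressed_change(adapt_pos=0.0, test_pos=0.0)     # full effect at the trained position
scaled = expressed_change(adapt_pos=0.0, test_pos=10.0)  # rescaled at 10 deg eccentric
```

Under this sketch, transfer depends on where on the gain field the adaptation was induced; fitting such a model to real data would of course require the actual gain-field shape rather than the placeholder planar form used here.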

2008 ◽  
Vol 99 (5) ◽  
pp. 2470-2478 ◽  
Author(s):  
André Kaminiarz ◽  
Bart Krekelberg ◽  
Frank Bremmer

The mechanisms underlying visual perceptual stability are usually investigated using voluntary eye movements. In such studies, errors in perceptual stability during saccades and pursuit are commonly interpreted as mismatches between actual eye position and eye-position signals in the brain. The generality of this interpretation could in principle be tested by investigating spatial localization during reflexive eye movements whose kinematics are very similar to those of voluntary eye movements. Accordingly, in this study, we determined mislocalization of flashed visual targets during optokinetic afternystagmus (OKAN). These eye movements are quite unique in that they occur in complete darkness and are generated by subcortical control mechanisms. We found that during horizontal OKAN slow phases, subjects mislocalize targets away from the fovea in the horizontal direction. This corresponds to a perceived expansion of visual space and is unlike mislocalization found for any other voluntary or reflexive eye movement. Around the OKAN fast phases, we found a bias in the direction of the fast phase prior to its onset and opposite to the fast-phase direction thereafter. Such a biphasic modulation has also been reported in the temporal vicinity of saccades and during optokinetic nystagmus (OKN). A direct comparison, however, showed that the modulation during OKAN was much larger and occurred earlier relative to fast-phase onset than during OKN. A simple mismatch between the current eye position and the eye-position signal in the brain is unlikely to explain such disparate results across similar eye movements. Instead, these data support the view that mislocalization arises from errors in eye-centered position information.


1991 ◽  
Vol 1 (2) ◽  
pp. 161-170
Author(s):  
Jean-Louis Vercher ◽  
Gabriel M. Gauthier

To maintain clear vision, the images on the retina must remain reasonably stable. Head movements are generally dealt with successfully by counter-rotation of the eyes induced by the combined actions of the vestibulo-ocular reflex (VOR) and the optokinetic reflex. A problem of importance relates to the value of the so-called intrinsic gain of the VOR (VORG) in man, and how this gain is modulated to provide appropriate eye movements. We have studied these problems in two situations: (1) fixation of a stationary object in visual space while the head moves; (2) fixation of an object moving with the head. These two situations were compared to a basic condition in which no visual target was presented, in order to induce "pure" VOR. Eye movements were recorded in seated subjects during stationary sinusoidal and transient rotations around the vertical axis. Subjects were in total darkness (DARK condition) and engaged in mental arithmetic. Alternatively, they were provided with a small foveal target, either fixed with respect to earth (earth-fixed target: EFT condition) or moving with them (chair-fixed target: CFT condition). The stationary rotation experiment served as a baseline for the ensuing experiment and yielded control data in agreement with the literature. In all three visual conditions, typical responses to transient rotations were virtually identical during the first 200 ms. They showed, sequentially, a 16-ms delay of the eye behind the head and a rapid increase in eye velocity over 75 to 80 ms, after which the average VORG was 0.9 ± 0.15. During the following 50 to 100 ms, the gain remained around 0.9 in all three conditions. Beyond 200 ms, the VORG remained around 0.9 in DARK and increased slowly towards 1 or decreased towards zero in the EFT and CFT conditions, respectively. The time course of the later events suggests that visual tracking mechanisms came into play to reduce retinal slip through smooth pursuit, and position error through saccades.
Our data also show that in total darkness VORG is set to 0.9 in man. Lower values reported in the literature essentially reflect predictive properties of the vestibulo-ocular mechanism, particularly evident when the input signal is a sinewave.
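The gain measurement described above can be illustrated with a minimal sketch (simulated traces, not the study's recordings): VOR gain is estimated as the sign-flipped least-squares slope of eye velocity against head velocity, since compensatory eye movements oppose head movement.

```python
import math

# Sketch with simulated data (hypothetical numbers): estimate VOR gain
# from sinusoidal head and eye velocity traces. The compensatory eye
# movement has the opposite sign to the head movement, so the fitted
# slope is negated to report a positive gain.

def vor_gain(head_vel, eye_vel):
    """Sign-flipped least-squares slope of eye velocity vs. head velocity."""
    num = sum(h * e for h, e in zip(head_vel, eye_vel))
    den = sum(h * h for h in head_vel)
    return -num / den

# Simulated 0.5-Hz rotation, peak head velocity 40 deg/s, ideal gain 0.9.
t = [i / 100.0 for i in range(200)]
head = [40.0 * math.sin(2 * math.pi * 0.5 * s) for s in t]
eye = [-0.9 * h for h in head]

gain = vor_gain(head, eye)  # recovers the simulated gain of 0.9
```

On real recordings one would first strip saccades and fast phases from the eye-velocity trace; the regression itself is unchanged.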


2019 ◽  
Vol 122 (5) ◽  
pp. 1909-1917
Author(s):  
Svenja Gremmler ◽  
Markus Lappe

We investigated whether the proprioceptive eye position signal after the execution of a saccadic eye movement is used to estimate the accuracy of the movement. If so, saccadic adaptation, the mechanism that maintains saccade accuracy, could use this signal in a similar way as it uses visual feedback after the saccade. To manipulate the availability of the proprioceptive eye position signal, we exploited the finding that proprioceptive eye position information builds up gradually after a saccade, over a time interval comparable to typical saccade latencies. We limited the retention time of gaze at the saccade landing point by asking participants to make fast return saccades to the fixation point, which preempt the usability of proprioceptive eye position signals. In five experimental conditions we measured the influence of the visual and proprioceptive feedback, together and separately, on the development of adaptation. We found that, when visual feedback was unavailable after the saccade, the adaptation of the previously shortened saccades was significantly weaker if the use of proprioceptive eye position information was impaired by fast return saccades. We conclude that adaptation can be driven by proprioceptive eye position feedback. NEW & NOTEWORTHY We show that proprioceptive eye position information is used after a saccade to estimate motor error and adapt saccade control. Previous studies on saccadic adaptation focused on visual feedback about saccade accuracy. A multimodal error signal combining visual and proprioceptive information is likely more robust. Moreover, combining proprioceptive and visual measures of saccade performance can help keep vision, proprioception, and motor control in alignment and produce a coherent representation of space.
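The "multimodal error signal" suggested in the summary is commonly modeled as reliability-weighted cue combination; a minimal sketch under that assumption (the weighting scheme and all numbers are illustrative, not fitted to this study):

```python
# Hedged sketch: combine visual and proprioceptive estimates of the
# post-saccadic error with inverse-variance weights, a standard cue-
# combination scheme. Parameter values are placeholders.

def combine_errors(visual_err, visual_var, prop_err, prop_var):
    """Inverse-variance weighted combination of two error estimates (deg)."""
    w_v = 1.0 / visual_var
    w_p = 1.0 / prop_var
    return (w_v * visual_err + w_p * prop_err) / (w_v + w_p)

# Setting the visual variance to infinity mimics the no-visual-feedback
# conditions: proprioception alone then drives the error estimate.
prop_only = combine_errors(visual_err=0.0, visual_var=float('inf'),
                           prop_err=-1.5, prop_var=1.0)
```

With both cues available and equally reliable, the combined error is simply their mean; degrading one cue shifts the estimate toward the other, which is one way to read the weaker adaptation observed when proprioceptive use was impaired.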


2012 ◽  
Vol 108 (10) ◽  
pp. 2819-2826 ◽  
Author(s):  
Svenja Wulff ◽  
Annalisa Bosco ◽  
Katharina Havermann ◽  
Giacomo Placenti ◽  
Patrizia Fattori ◽  
...  

The saccadic amplitude of humans and monkeys can be adapted using intrasaccadic target steps in the McLaughlin paradigm. It is generally believed that, as a result of a purely retinal reference frame, after adaptation of a saccade of a certain amplitude and direction, saccades of the same amplitude and direction are all adapted to the same extent, independently of the initial eye position. However, recent studies in humans have cast doubt on purely retinal coding by revealing that the initial eye position has an effect on the transfer of adaptation to saccades with different starting points. Since humans and monkeys show some species differences in adaptation, we tested the eye position dependence in monkeys. Two trained Macaca fascicularis performed reactive rightward saccades from five equally spaced horizontal starting positions. All saccades were made to targets with the same retinotopic motor vector. In each session, the saccades that started at one particular initial eye position, the adaptation position, were adapted to a shorter amplitude, and the adaptation of the saccades starting at the other four positions was measured. The results show that saccades that started at the other positions were less adapted than saccades that started at the adaptation position. With increasing distance between the starting position of the test saccade and the adaptation position, the amplitude change of the test saccades decreased with a Gaussian profile. We conclude that gain-decreasing saccadic adaptation in macaques is specific to the initial eye position at which the adaptation has been induced.
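The Gaussian transfer profile reported here can be written as a one-line function; in this sketch the peak change and width are placeholders, not the paper's fitted values:

```python
import math

# Gaussian transfer of adaptation with distance from the adaptation
# position (peak_change and sigma are illustrative placeholders).

def transfer(test_pos, adapt_pos, peak_change=-1.5, sigma=10.0):
    """Expected amplitude change (deg) at test_pos after adapting at adapt_pos."""
    d = test_pos - adapt_pos
    return peak_change * math.exp(-d * d / (2.0 * sigma * sigma))

at_trained = transfer(0.0, 0.0)   # full adaptation at the adaptation position
far_away = transfer(20.0, 0.0)    # strongly reduced 20 deg away
```

Fitting this function to the measured amplitude changes at the five starting positions would yield the profile's width, i.e., how local the eye-position specificity is.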


2012 ◽  
Vol 5 (4) ◽  
Author(s):  
Antoine Coutrot ◽  
Nathalie Guyader ◽  
Gelu Ionescu ◽  
Alice Caplier

Models of visual attention rely on visual features such as orientation, intensity, or motion to predict which regions of complex scenes attract the gaze of observers. So far, sound has never been considered as a possible feature that might influence eye movements. Here, we evaluate the impact of non-spatial sound on the eye movements of observers watching videos. We recorded the eye movements of 40 participants watching assorted videos with and without their related soundtracks. We found that sound affects eye position, fixation duration, and saccade amplitude. The effect of sound is not constant across time but becomes significant around one second after the beginning of each video shot.


2008 ◽  
Vol 100 (6) ◽  
pp. 3375-3393 ◽  
Author(s):  
Edward G. Freedman

When the head is free to move, changes in the direction of the line of sight (gaze shifts) can be accomplished using coordinated movements of the eyes and head. During repeated gaze shifts between the same two targets, the amplitudes of the saccadic eye movements and movements of the head vary inversely as a function of the starting positions of the eyes in the orbits. In addition, as head-movement amplitudes and velocities increase, saccade velocities decline. Taken together, these observations lead to a reversal in the expected correlation between saccade duration and amplitude: small-amplitude saccades associated with large head movements can have longer durations than larger-amplitude saccades associated with small head movements. The data in this report indicate that this reversal occurs during gaze shifts along the horizontal meridian and also for the horizontal component of oblique saccades made when the eyes are initially deviated only along the horizontal meridian. Under these conditions, it is possible to determine whether the variability in the duration of the constant-amplitude vertical component of oblique saccades is accounted for better by increases in horizontal saccade amplitude or by increases in horizontal saccade duration. Results show that vertical saccade duration can be inversely related to horizontal saccade amplitude (or unrelated to it), but that horizontal saccade duration is an excellent predictor of vertical saccade duration. Modifications to existing hypotheses of gaze control are assessed in light of these new observations, and a mechanism is proposed that can account for these data.


1988 ◽  
Vol 1 (2) ◽  
pp. 239-244 ◽  
Author(s):  
James T. McIlwain

The trajectories of saccadic eye movements evoked electrically from many brain structures are dependent to some degree on the initial position of the eye. Under certain conditions, likely to occur in stimulation experiments, local feedback models of the saccadic system can yield eye movements which behave in this way. The models in question assume that an early processing stage adds an internal representation of eye position to retinal error to yield a signal representing target position with respect to the head. The saccadic system is driven by the difference between this signal and one representing the current position of the eye. Albano & Wurtz (1982) pointed out that lesions perturbing the computation of eye position with respect to the head can result in initial position dependence of visually evoked saccades. It is shown here that position-dependent saccades will also result if electrical stimulation evokes a signal equivalent to retinal error but fails to effect a complete addition of eye position to this signal. Also, when multiple or staircase saccades are produced, as during long stimulus trains, they will have identical directions but decrease progressively in amplitude by a factor related to the fraction of added eye position.
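McIlwain's argument can be simulated directly. In this sketch (parameter values hypothetical), stimulation injects a fixed retinal-error signal R, only a fraction k of eye position E is added when forming the head-centered goal, and each saccade is driven by the difference between that goal and the current eye position:

```python
# Partial eye-position addition in a local feedback model: the drive
# for each evoked saccade is (R + k*E) - E = R - (1 - k)*E, so during a
# stimulus train the successive amplitudes shrink geometrically by k.
# All numbers below are illustrative.

def staircase(R, E0, k, n):
    """Amplitudes of n successive stimulation-evoked saccades."""
    amps, E = [], E0
    for _ in range(n):
        a = R - (1.0 - k) * E  # drive = head-centered goal minus eye position
        amps.append(a)
        E += a                 # eye lands where the drive pointed
    return amps

half_added = staircase(R=10.0, E0=0.0, k=0.5, n=4)  # each amplitude halves
fully_added = staircase(R=10.0, E0=0.0, k=1.0, n=3)  # constant-amplitude staircase
```

With k = 0.5 the amplitudes are 10, 5, 2.5, 1.25 deg, reproducing the predicted geometric decrease with identical direction; with complete addition (k = 1) every saccade has the same amplitude R, i.e., no initial-position dependence.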


1986 ◽  
Vol 56 (1) ◽  
pp. 196-207 ◽  
Author(s):  
A. McKenzie ◽  
S. G. Lisberger

Monkeys were trained to make saccades to briefly flashed targets. We presented the flash during smooth pursuit of another target, so that there was a smooth change in eye position after the flash. We could then determine whether the flash-evoked saccades compensated for the intervening smooth eye movements to point the eyes at the position of the flash in space. We defined the "retinal error" as the vector from the position of the eye at the time of the flash to the position of the target. We defined "spatial error" as the vector from the position of the eye at the time of the saccade to the position of the flashed target in space. The direction of the saccade (in polar coordinates) was more highly correlated with the direction of the retinal error than with the direction of the spatial error. Saccade amplitude was also better correlated with the amplitude of the retinal error. We obtained the same results whether the flash was presented during pursuit with the head fixed or during pursuit with combined eye-head movements. Statistical analysis demonstrated that the direction of the saccade was determined only by the retinal error in two of the three monkeys. In the third monkey saccade direction was determined primarily by retinal error but had a consistent bias toward spatial error. The bias can be attributed to this monkey's earlier practice in which the flashed target was reilluminated so he could ultimately make a saccade to the correct position in space. These data suggest that the saccade generator does not normally use nonvisual feedback about smooth changes in eye or gaze position. In two monkeys we also provided sequential target flashes during pursuit with the second flash timed so that it occurred just before the first saccade. As above, the first saccade was appropriate for the retinal error provided by the first flash. The second saccade compensated for the first and pointed the eyes at the position of the second target in space. 
We conclude, as others have before (12, 21), that the saccade generator receives feedback about its own output, saccades. Our results require revision of existing models of the neural network that generates saccades. We suggest two models that retain the use of internal feedback suggested by others. We favor a model that accounts for our data by assuming that internal feedback originates directly from the output of the saccade generator and reports only saccadic changes in eye position.
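The two error definitions used in this analysis can be made concrete with a short sketch (coordinates illustrative): retinal error is measured from eye position at flash time, spatial error from eye position at saccade onset, and the smooth pursuit between the two events makes them differ.

```python
# Retinal vs. spatial error as defined in the abstract, in 2-D degrees
# of visual angle (all coordinates are illustrative).

def retinal_error(eye_at_flash, target):
    """Vector from eye position at flash time to the target."""
    return (target[0] - eye_at_flash[0], target[1] - eye_at_flash[1])

def spatial_error(eye_at_saccade, target):
    """Vector from eye position at saccade onset to the target."""
    return (target[0] - eye_at_saccade[0], target[1] - eye_at_saccade[1])

# Pursuit carries the eye 3 deg rightward between flash and saccade,
# so the two error vectors differ.
eye_flash, eye_sacc, target = (0.0, 0.0), (3.0, 0.0), (10.0, 5.0)
r = retinal_error(eye_flash, target)   # (10.0, 5.0)
s = spatial_error(eye_sacc, target)    # (7.0, 5.0)
```

The finding was that the saccades matched the retinal-error vector, i.e., they did not compensate for the smooth displacement between flash and saccade onset.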


1984 ◽  
Vol 52 (4) ◽  
pp. 724-742 ◽  
Author(s):  
M. C. Chubb ◽  
A. F. Fuchs ◽  
C. A. Scudder

To elucidate how information is processed in the vestibuloocular reflex (VOR) pathways subserving vertical eye movements, extracellular single-unit recordings were obtained from the vestibular nuclei of alert monkeys trained to track a visual target with their eyes while undergoing sinusoidal pitch oscillations (0.2-1.0 Hz). Units with activity related to vertical vestibular stimulation and/or eye movements were classified as either vestibular units (n = 53), vestibular plus eye-position units (n = 30), pursuit units (n = 10), or miscellaneous units (n = 5), which had various combinations of head- and eye-movement sensitivities. Vestibular units discharged in relation to head rotation, but not to smooth eye movements. On average, these units fired approximately in phase with head velocity; however, a broad range of phase shifts was observed. The activities of 8% of the vestibular units were related to saccades. Vestibular plus eye-position units fired in relation to head velocity and eye position and, in addition, usually to eye velocity. Their discharge rates increased for eye and head movements in opposite directions. During combined head and eye movements, the modulation in unit activity was not significantly different from the sum of the modulations during each alone. For saccades, the unit firing rate either decreased to zero or was unaffected. Pursuit units discharged in relation to eye position, eye velocity, or both, but not to head movements alone. For saccades, unit activity usually either paused or was unaffected. The eye-movement-related activities of the vestibular plus eye-position and pursuit units were not significantly different. A quantitative comparison of their firing patterns suggests that vestibular, vestibular plus eye-position, and pursuit neurons in the vestibular nucleus could provide mossy fiber inputs to the flocculus. 
In addition, the vertical vestibular plus eye-position neurons have discharge patterns similar to those of fibers recorded rostrally in the medial longitudinal fasciculus. Therefore, our data support the view that vertical vestibular plus eye-position neurons are interneurons of the VOR.


1999 ◽  
Vol 81 (6) ◽  
pp. 2720-2736 ◽  
Author(s):  
H.H.L.M. Goossens ◽  
A. J. van Opstal

Influence of head position on the spatial representation of acoustic targets

Sound localization in humans relies on binaural differences (azimuth cues) and monaural spectral shape information (elevation cues) and is therefore the result of a neural computational process. Despite the fact that these acoustic cues are referenced with respect to the head, accurate eye movements can be generated to sounds in complete darkness. This ability necessitates the use of eye position information. So far, however, sound localization has been investigated mainly with a fixed head position, usually straight ahead. Yet the auditory system may rely on head motor information to maintain a stable and spatially accurate representation of acoustic targets in the presence of head movements. We therefore studied the influence of changes in eye-head position on auditory-guided orienting behavior of human subjects. In the first experiment, we used a visual-auditory double-step paradigm. Subjects made saccadic gaze shifts in total darkness toward brief broadband sounds presented before an intervening eye-head movement that was evoked by an earlier visual target. The data show that the preceding displacements of both eye and head are fully accounted for, resulting in spatially accurate responses. This suggests that auditory target information may be transformed into a spatial (or body-centered) frame of reference. To further investigate this possibility, we exploited the unique property of the auditory system that sound elevation is extracted independently from pinna-related spectral cues. In the absence of such cues, accurate elevation detection is not possible, even when head movements are made. This is shown in a second experiment where pure tones were localized at a fixed elevation that depended on the tone frequency rather than on the actual target elevation, both under head-fixed and -free conditions.
To test, in a third experiment, whether the perceived elevation of tones relies on a head- or space-fixed target representation, eye movements were elicited toward pure tones while subjects kept their head in different vertical positions. It appeared that each tone was localized at a fixed, frequency-dependent elevation in space that shifted to a limited extent with changes in head elevation. Hence information about head position is used under static conditions too. Interestingly, the influence of head position also depended on the tone frequency. Thus tone-evoked ocular saccades typically showed a partial compensation for changes in static head position, whereas noise-evoked eye-head saccades fully compensated for intervening changes in eye-head position. We propose that the auditory localization system combines the acoustic input with head-position information to encode targets in a spatial (or body-centered) frame of reference. In this way, accurate orienting responses may be programmed despite intervening eye-head movements. A conceptual model, based on the tonotopic organization of the auditory system, is presented that may account for our findings.

