Saccades to remembered targets: the effects of smooth pursuit and illusory stimulus motion

1996 ◽  
Vol 76 (6) ◽  
pp. 3617-3632 ◽  
Author(s):  
A. Z. Zivotofsky ◽  
K. G. Rottach ◽  
L. Averbuch-Heller ◽  
A. A. Kori ◽  
C. W. Thomas ◽  
...  

1. Measurements were made in four normal human subjects of the accuracy of saccades to remembered locations of targets that were flashed on a 20 x 30 deg random dot display that was either stationary or moving horizontally and sinusoidally at +/-9 deg at 0.3 Hz. During the interval between the target flash and the memory-guided saccade, the “memory period” (1.4 s), subjects either fixated a stationary spot or pursued a spot moving vertically sinusoidally at +/-9 deg at 0.3 Hz. 2. When saccades were made toward the location of targets previously flashed on a stationary background as subjects fixated the stationary spot, median saccadic error was 0.93 deg horizontally and 1.1 deg vertically. These errors were greater than for saccades to visible targets, which had median values of 0.59 deg horizontally and 0.60 deg vertically. 3. When targets were flashed as subjects smoothly pursued a spot that moved vertically across the stationary background, median saccadic error was 1.1 deg horizontally and 1.2 deg vertically, similar in accuracy to saccades made when targets were flashed during fixation. In addition, the vertical component of the memory-guided saccade was much more closely correlated with the “spatial error” than with the “retinal error”; this indicated that, when programming the saccade, the brain had taken into account eye movements that occurred during the memory period. 4. When saccades were made to targets flashed during attempted fixation of a stationary spot on a horizontally moving background, a condition that produces a weak Duncker-type illusion of horizontal movement of the primary target, median saccadic error increased horizontally to 3.2 deg but was 1.1 deg vertically. 5. 
When targets were flashed as subjects smoothly pursued a spot that moved vertically on the horizontally moving background, a condition that induces a strong illusion of diagonal target motion, median saccadic error was 4.0 deg horizontally and 1.5 deg vertically; thus the horizontal error was greater than under any other experimental condition. 6. In most trials, the initial saccade to the remembered target was followed by additional saccades while the subject was still in darkness. These secondary saccades, which were executed in the absence of visual feedback, brought the eye closer to the target location. During paradigms involving horizontal background movement, these corrections were more prominent horizontally than vertically. 7. Further measurements were made in two subjects to determine whether inaccuracy of memory-guided saccades, in the horizontal plane, was due to mislocalization at the time that the target flashed, misrepresentation of the trajectory of the pursuit eye movement during the memory period, or both. 8. The saccadic error, both with and without corrections made in darkness, corresponded to mislocalization of the target by approximately 30% of the displacement of the background at the time that the target flashed. The error also was influenced by net movement of the background during the memory period, corresponding to approximately 25% of net background movement for the initial saccade and approximately 13% for the final eye position achieved in darkness. 9. We formulated simple linear models to test specific hypotheses about which combinations of signals best describe the observed saccadic amplitudes. We tested the possibilities that the brain made an accurate memory of target location and a reliable representation of the eye movement during the memory period, or that one or both of these was corrupted by the illusory visual stimulus. 
Our data were best accounted for by a model in which both the working memory of target location and the internal representation of the horizontal eye movements were corrupted by the illusory visual stimulus. We conclude that extraretinal signals played only a minor role, in comparison with visual estimates of the direction of gaze, in planning eye movements to remembered targets.
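The linear-model analysis in point 9 can be illustrated with a small sketch. In the snippet below (all coefficient values and data are invented for illustration, not taken from the study), the horizontal saccade amplitude is modeled as a weighted sum of the remembered retinal target location, the eye displacement during the memory period, and a background-motion term that stands in for the corrupting influence of the illusory stimulus; the gains are then recovered by least squares:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 40
target = rng.uniform(-10, 10, n)     # retinal target eccentricity at flash (deg)
eye_move = rng.uniform(-9, 9, n)     # eye displacement during the memory period (deg)
background = rng.uniform(-9, 9, n)   # net background displacement (deg)

# Simulated "observed" amplitudes: the required saccade is the retinal error
# minus the intervening eye movement, with both signals partially corrupted
# by background motion (illustrative 30% + 25% terms) plus measurement noise.
observed = (target - eye_move
            + 0.30 * background
            + 0.25 * background
            + rng.normal(0, 0.3, n))

# Least-squares fit of the three gains (no intercept, as the model has none).
X = np.column_stack([target, eye_move, background])
gains, *_ = np.linalg.lstsq(X, observed, rcond=None)
g_target, g_eye, g_background = gains
```

With enough trials the fit recovers a gain near +1 on the target signal, near -1 on the eye-movement signal, and a positive gain on the background term, which is the signature of a model in which both stored signals are corrupted by the illusory motion.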

2019 ◽  
Vol 116 (6) ◽  
pp. 2027-2032 ◽  
Author(s):  
Jasper H. Fabius ◽  
Alessio Fracasso ◽  
Tanja C. W. Nijboer ◽  
Stefan Van der Stigchel

Humans move their eyes several times per second, yet we perceive the outside world as continuous despite the sudden disruptions created by each eye movement. To date, the mechanism that the brain employs to achieve visual continuity across eye movements remains unclear. While it has been proposed that the oculomotor system quickly updates and informs the visual system about the upcoming eye movement, behavioral studies investigating the time course of this updating suggest the involvement of a slow mechanism, estimated to take more than 500 ms to operate effectively. This is a surprisingly slow estimate, because both the visual system and the oculomotor system process information faster. If spatiotopic updating is indeed this slow, it cannot contribute to perceptual continuity, because it is outside the temporal regime of typical oculomotor behavior. Here, we argue that the behavioral paradigms that have been used previously are suboptimal to measure the speed of spatiotopic updating. In this study, we used a fast gaze-contingent paradigm, using high phi as a continuous stimulus across eye movements. We observed fast spatiotopic updating within 150 ms after stimulus onset. The results suggest the involvement of a fast updating mechanism that predictively influences visual perception after an eye movement. The temporal characteristics of this mechanism are compatible with the rate at which saccadic eye movements are typically observed in natural viewing.


2021 ◽  
Author(s):  
Peyman Shokrollahi

Measures of sleep physiology, not obvious to the human eye, may provide important clues to disease states and responses to therapy. A significant amount of eye movement data is not attended to clinically in routine sleep studies because the recordings are long, about six to eight hours in duration, and are mixed with many unknown artifacts, usually produced by EEG signals or other activities. This research describes how eye movements differed between depressed patients who used antidepressant medications and those who did not. The goal is to track antidepressant medications' effects on sleep eye movements. Clinically used SSRIs such as Prozac (Fluoxetine), Celexa (Citalopram), and Zoloft (Sertraline), and the SNRI Effexor (Venlafaxine), were considered in this study to assess possible connections between eye movements recorded during sleep and serotonin activity. The novelty of this research is in the assessment of sleep eye movements in order to track the antidepressant medications' effect on the brain through EOG channels. EOG analysis is valuable because it is a noninvasive method, and this research looks for findings that are invisible to the eyes of professional clinicians. This thesis focuses on quantifying sleep eye movements with two techniques: autoregressive modeling and wavelet analysis. Eye movement detection software (EMDS), comprising more than 1,500 lines of code, was developed for detecting sleep eye movements. AR coefficients were derived from the sleep eye movements of patients who were exposed to antidepressant medications and those who were not, and then classified by means of linear discriminant analysis. For the wavelet analysis, discrete wavelet coefficients were used to classify the sleep eye movements of patients who were exposed to medication and those who were not.
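The AR-plus-LDA pipeline described above can be sketched in a few lines. In the snippet below, synthetic AR(2) signals stand in for EOG segments (all coefficients are invented), Yule-Walker estimation replaces whatever AR-fitting routine the thesis used, and a hand-rolled Fisher discriminant stands in for the linear discriminant analysis step:

```python
import numpy as np

def yule_walker_ar(x, order):
    """Estimate AR coefficients of a 1-D signal via the Yule-Walker equations."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    n = len(x)
    # Biased autocorrelation estimates r[0..order].
    r = np.array([np.dot(x[:n - k], x[k:]) / n for k in range(order + 1)])
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    return np.linalg.solve(R, r[1:])

rng = np.random.default_rng(1)

def make_segment(a1, a2, n=500):
    """Synthetic AR(2) signal standing in for one EOG eye-movement segment."""
    x = np.zeros(n)
    for t in range(2, n):
        x[t] = a1 * x[t - 1] + a2 * x[t - 2] + rng.normal()
    return x

# Two groups with different (arbitrary) AR dynamics: "medicated" vs. "control".
X0 = np.array([yule_walker_ar(make_segment(0.6, -0.3), 2) for _ in range(30)])
X1 = np.array([yule_walker_ar(make_segment(0.2, 0.5), 2) for _ in range(30)])

# Fisher linear discriminant: project onto w, threshold midway between means.
m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
Sw = np.cov(X0.T) * (len(X0) - 1) + np.cov(X1.T) * (len(X1) - 1)
w = np.linalg.solve(Sw, m1 - m0)
thresh = w @ (m0 + m1) / 2

accuracy = ((X0 @ w <= thresh).sum() + (X1 @ w > thresh).sum()) / (len(X0) + len(X1))
```

The AR coefficients act as a compact feature vector per segment; when the two groups have genuinely different dynamics, the discriminant separates them cleanly.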




2016 ◽  
pp. S365-S371 ◽  
Author(s):  
F. JAGLA

It is accepted that the formulation of a motor program in the brain involves not only perceptual and motor functions but also cognitive ones. It is therefore not surprising that the execution of saccadic eye movements can be substantially affected by the ongoing mental activity of a given person. Not only the distribution of attention but also its focusing may influence the gain of saccades, i.e., their accuracy. Patients suffering from mental disorders have their attention strongly focused on their own mental processes; the nature of their problems may be linked to perceptual and/or analytical processing. Such a so-called mental set may significantly affect their oculomotor activity in the course of saccadic eye movement examinations. This short commentary points out the influence on saccadic accuracy not only of contextually guided and generated saccades but also of the distribution and focusing of attention. The effect of functional brain asymmetry on visually generated saccades and the possible effect of biologically active substances on voluntarily generated saccades are briefly mentioned. All these influences should be taken into account when planning a saccadic eye movement task. It may be concluded that repetition of the same oculomotor task in a given person should be introduced; this may help, in particular, to follow the effect of complex therapy.


2007 ◽  
Vol 98 (1) ◽  
pp. 537-541 ◽  
Author(s):  
Eliana M. Klier ◽  
Dora E. Angelaki ◽  
Bernhard J. M. Hess

As we move our bodies in space, we often undergo head and body rotations about different axes—yaw, pitch, and roll. The order in which we rotate about these axes is an important factor in determining the final position of our bodies in space because rotations, unlike translations, do not commute. Does our brain keep track of the noncommutativity of rotations when computing changes in head and body orientation and then use this information when planning subsequent motor commands? We used a visuospatial updating task to investigate whether saccades to remembered visual targets are accurate after intervening, whole-body rotational sequences. The sequences were reversed, either yaw then roll or roll then yaw, such that the final required eye movements to reach the same space-fixed target were different in each case. While each subject performed consistently irrespective of target location and rotational combination, we found great intersubject variability in their capacity to update. The distance between the noncommutative endpoints was, on average, half of that predicted by perfect noncommutativity. Nevertheless, most subjects did make eye movements to distinct final endpoint locations and not to one unique location in space as predicted by a commutative model. In addition, their noncommutative performance significantly improved when their less than ideal updating performance was taken into account. Thus the brain can produce movements that are consistent with the processing of noncommutative rotations, although it is often poor in using internal estimates of rotation for updating.
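The noncommutativity at the heart of this study is easy to demonstrate numerically. In the sketch below (angles and the target location are chosen arbitrarily, not taken from the experiments), applying yaw then roll leaves a space-fixed target at a different head-centered location than roll then yaw:

```python
import numpy as np

def yaw(deg):
    """Rotation about the vertical (z) axis."""
    a = np.radians(deg)
    return np.array([[np.cos(a), -np.sin(a), 0.0],
                     [np.sin(a),  np.cos(a), 0.0],
                     [0.0,        0.0,       1.0]])

def roll(deg):
    """Rotation about the forward (x) axis."""
    a = np.radians(deg)
    return np.array([[1.0, 0.0,        0.0],
                     [0.0, np.cos(a), -np.sin(a)],
                     [0.0, np.sin(a),  np.cos(a)]])

# A space-fixed target straight ahead (along x).
target = np.array([1.0, 0.0, 0.0])

# The head-centered target location depends on the order of rotations:
yaw_then_roll = roll(-30) @ yaw(-45) @ target
roll_then_yaw = yaw(-45) @ roll(-30) @ target

# The two endpoints differ, so the required corrective saccades differ too.
separation = np.linalg.norm(yaw_then_roll - roll_then_yaw)
```

A commutative model would predict zero separation between the two endpoints; the nonzero separation here is the analogue of the distinct saccade endpoints that most subjects produced.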


Author(s):  
Fiona Mulvey

This chapter introduces the basics of eye anatomy, eye movements and vision. It will explain the concepts behind human vision sufficiently for the reader to understand later chapters in the book on human perception and attention, and their relationship to (and potential measurement with) eye movements. We will first describe the path of light from the environment through the structures of the eye and on to the brain, as an introduction to the physiology of vision. We will then describe the image registered by the eye, and the types of movements the eye makes in order to perceive the environment as a cogent whole. This chapter explains how eye movements can be thought of as the interface between the visual world and the brain, and why eye movement data can be analysed not only in terms of the environment, or what is looked at, but also in terms of the brain, or subjective cognitive and emotional states. These two aspects broadly define the scope and applicability of eye movement technology in research and in human–computer interaction, as discussed in later sections of the book.


2008 ◽  
Vol 99 (5) ◽  
pp. 2281-2290 ◽  
Author(s):  
Stan Van Pelt ◽  
W. Pieter Medendorp

We tested between two coding mechanisms that the brain may use to retain distance information about a target for a reaching movement across vergence eye movements. If the brain was to encode a retinal disparity representation (retinal model), i.e., target depth relative to the plane of fixation, each vergence eye movement would require an active update of this representation to preserve depth constancy. Alternatively, if the brain was to store an egocentric distance representation of the target by integrating retinal disparity and vergence signals at the moment of target presentation, this representation should remain stable across subsequent vergence shifts (nonretinal model). We tested between these schemes by measuring errors of human reaching movements (n = 14 subjects) to remembered targets, briefly presented before a vergence eye movement. For comparison, we also tested their directional accuracy across version eye movements. With intervening vergence shifts, the memory-guided reaches showed an error pattern that was based on the new eye position and on the depth of the remembered target relative to that position. This suggests that target depth is recomputed after the gaze shift, supporting the retinal model. Our results also confirm earlier literature showing retinal updating of target direction. Furthermore, regression analyses revealed updating gains close to one for both target depth and direction, suggesting that the errors arise after the updating stage during the subsequent reference frame transformations that are involved in reaching.
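The two coding schemes can be contrasted with back-of-the-envelope arithmetic. In this sketch (all distances invented for illustration), the retinal scheme stores depth relative to the fixation plane, so the stored value must be recomputed after a vergence shift, whereas the egocentric scheme stores a distance that needs no update:

```python
# Illustrative distances (cm), not the paper's data.
target_distance = 50.0    # egocentric distance of the flashed target
fixation_before = 40.0    # fixation distance at target presentation
fixation_after = 60.0     # fixation distance after the vergence shift

# Retinal model: depth is stored relative to the fixation plane, so the
# representation changes sign here when fixation moves past the target.
depth_before = target_distance - fixation_before   # target beyond fixation
depth_after = target_distance - fixation_after     # target nearer than fixation

# Nonretinal model: disparity and vergence are combined once, at presentation,
# into an egocentric distance that is unaffected by the later vergence shift.
egocentric = fixation_before + depth_before
```

The observed reach errors, tied to the new eye position, are what one expects if the brain re-derives `depth_after` rather than holding on to the fixed `egocentric` value.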


1995 ◽  
Vol 81 (3) ◽  
pp. 755-762 ◽  
Author(s):  
Eugene H. Galluscio ◽  
Pamela Paradzinski

Conjugate lateral eye movements induced by task-specific reflective thought were examined in 10 dextral men. Verbal and spatial stimuli designed to activate reflective thought in the left (verbal) and right (spatial) cerebral hemispheres of the brain were presented tachistoscopically in a darkened environment. Eye movements during reflective thought were monitored and scored using an infrared eye-tracking device. Reflective thought induced by the spatial task produced significantly more leftward conjugate lateral eye movement. The verbal task tended to produce more rightward and upward movements. The results are viewed as consistent with a task-specific brain-hemispheric activation model of contralateral conjugate eye movements during reflective thought.


2016 ◽  
Vol 6 (1) ◽  
Author(s):  
Hamidreza Namazi ◽  
Vladimir V. Kulish ◽  
Amin Akrami

One of the major challenges in vision research is to analyze the effect of visual stimuli on human vision. However, no relationship has yet been discovered between the structure of a visual stimulus and the structure of fixational eye movements. This study reveals the plasticity of human fixational eye movements in relation to ‘complex’ visual stimuli. We demonstrated that the fractal temporal structure of visual dynamics shifts towards the fractal dynamics of the visual stimulus (image). The results showed that images with higher complexity (higher fractality) cause fixational eye movements with lower fractality. Considering the brain as the main part of the nervous system engaged in eye movements, we also analyzed the electroencephalogram (EEG) signal recorded during fixation. We found a coupling between the fractality of the image, the EEG, and the fixational eye movements. The capability observed in this research can be further investigated and applied to the treatment of different vision disorders.
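Fractality of a time series can be quantified in several ways; one common choice for 1-D signals is the Higuchi fractal dimension. The sketch below is a generic implementation (not the authors' method), with synthetic signals standing in for eye-movement traces: an irregular signal yields a higher estimated dimension than a smooth one.

```python
import numpy as np

def higuchi_fd(x, kmax=8):
    """Higuchi fractal dimension of a 1-D signal (between 1 and 2)."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    curve_lengths = []
    for k in range(1, kmax + 1):
        lengths = []
        for m in range(k):
            idx = np.arange(m, N, k)        # subsample with lag k, offset m
            if len(idx) < 2:
                continue
            diff = np.abs(np.diff(x[idx])).sum()
            norm = (N - 1) / ((len(idx) - 1) * k)
            lengths.append(diff * norm / k)
        curve_lengths.append(np.mean(lengths))
    k_vals = np.arange(1, kmax + 1)
    # Slope of log L(k) against log(1/k) estimates the fractal dimension.
    slope, _ = np.polyfit(np.log(1.0 / k_vals), np.log(curve_lengths), 1)
    return slope

rng = np.random.default_rng(2)
smooth = np.sin(np.linspace(0, 8 * np.pi, 2000))   # regular, low complexity
rough = rng.standard_normal(2000)                  # irregular, high complexity

fd_smooth = higuchi_fd(smooth)
fd_rough = higuchi_fd(rough)
```

A smooth curve gives a dimension near 1 and white noise a dimension near 2, which is the kind of complexity ordering the study relates across images, EEG, and fixational eye movements.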


2019 ◽  
Author(s):  
Saad Idrees ◽  
Matthias P. Baumann ◽  
Felix Franke ◽  
Thomas A. Münch ◽  
Ziad M. Hafed

Visual sensitivity, probed through perceptual detectability of very brief visual stimuli, is strongly impaired around the time of rapid eye movements. This robust perceptual phenomenon, called saccadic suppression, is frequently attributed to active suppressive signals that are directly derived from eye movement commands. Here we show instead that visual-only mechanisms, activated by saccade-induced image shifts, can account for all perceptual properties of saccadic suppression that we have investigated. Such mechanisms start at, but are not necessarily exclusive to, the very first stage of visual processing in the brain, the retina. Critically, neural suppression originating in the retina outlasts perceptual suppression around the time of saccades, suggesting that extra-retinal movement-related signals, rather than causing suppression, may instead act to shorten it. Our results demonstrate a far-reaching contribution of visual processing mechanisms to perceptual saccadic suppression, starting in the retina, without the need to invoke explicit motor-based suppression commands.

