Decoupling eye movements from retinal image motion reveals active fixation control

2019 ◽  
Vol 19 (10) ◽  
pp. 148
Author(s):  
Michele A Cox ◽  
Norick R Bowers ◽  
Janis Intoy ◽  
Martina Poletti ◽  
Michele Rucci

2011 ◽  
Vol 105 (4) ◽  
pp. 1531-1545 ◽  
Author(s):  
Naoko Inaba ◽  
Kenichiro Miura ◽  
Kenji Kawano

When tracking a moving target in the natural world with pursuit eye movements, our visual system must compensate for the self-induced retinal slip of the visual features in the background to enable us to perceive their actual motion. We previously reported that the speed of the background stimulus in space is represented by dorsal medial superior temporal (MSTd) neurons in the monkey cortex, which compensate for retinal image motion resulting from eye movements when the directions of the pursuit and the background motion are parallel to the preferred direction of each neuron. To further characterize the compensation observed in the MSTd responses to the background motion, we recorded single-unit activity in cortical areas middle temporal (MT) and MSTd, and we selected neurons responsive to a large-field visual stimulus. We studied their responses to the large-field stimulus in the background while monkeys pursued a moving target and while they fixated a stationary target. We investigated whether compensation for retinal image motion of the background depended on the speed of pursuit. We also asked whether the directional selectivity of each neuron in relation to the external world remained the same during pursuit and whether compensation for retinal image motion occurred irrespective of the direction of the pursuit. We found that the majority of the MSTd neurons responded to the visual motion in space by compensating for the image motion on the retina resulting from the pursuit, regardless of pursuit speed and direction, whereas most of the MT neurons responded in relation to the genuine retinal image motion.
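The compensation this abstract describes amounts to simple velocity bookkeeping: the background's motion on the retina is its motion in space minus the eye's own motion, so a neuron encoding motion in space must add the pursuit velocity back to the retinal slip. A minimal one-dimensional sketch (the function names are mine, not the paper's):

```python
def retinal_slip(background_velocity, eye_velocity):
    """Image motion on the retina (deg/s) produced by a background
    moving in space while the eye pursues a target."""
    return background_velocity - eye_velocity

def reconstructed_world_velocity(slip, eye_velocity):
    """MSTd-like compensation: add the eye's velocity back to the
    retinal slip to recover motion in external coordinates."""
    return slip + eye_velocity

# A stationary background (0 deg/s) viewed during 10 deg/s pursuit
# slips at -10 deg/s on the retina; compensation recovers 0 deg/s.
slip = retinal_slip(0.0, 10.0)                     # -10.0
world = reconstructed_world_velocity(slip, 10.0)   # 0.0
```

On this account an MT-like cell reports `slip` while an MSTd-like cell reports `world`.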


2009 ◽  
Vol 9 (1) ◽  
pp. 33-33 ◽  
Author(s):  
T. C. A. Freeman ◽  
R. A. Champion ◽  
J. H. Sumnall ◽  
R. J. Snowden

2009 ◽  
Vol 102 (6) ◽  
pp. 3225-3233 ◽  
Author(s):  
Leanne Chukoskie ◽  
J. Anthony Movshon

Retinal image motion is produced with each eye movement, yet we usually do not perceive this self-produced “reafferent” motion, nor are motion judgments much impaired when the eyes move. To understand the neural mechanisms involved in processing reafferent motion and distinguishing it from the motion of objects in the world, we studied the visual responses of single cells in middle temporal (MT) and medial superior temporal (MST) areas during steady fixation and smooth-pursuit eye movements in awake, behaving macaques. We measured neuronal responses to random-dot patterns moving at different speeds in a stimulus window that moved with the pursuit target and the eyes. This allowed us to control retinal image motion at all eye velocities. We found the expected high proportion of cells selective for the direction of visual motion. Pursuit tracking changed both response amplitude and preferred retinal speed for some cells. The changes in preferred speed were on average weakly but systematically related to the speed of pursuit for area MST cells, as would be expected if the shifts in speed selectivity were compensating for reafferent input. In area MT, speed tuning did not change systematically during pursuit. Many cells in both areas also changed response amplitude during pursuit; the most common form of modulation was response suppression when pursuit was opposite in direction to the cell's preferred direction. These results suggest that some cells in area MST encode retinal image motion veridically during eye movements, whereas others in both MT and MST contribute to the suppression of visual responses to reafferent motion.
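The tuning-shift prediction tested here can be stated compactly: if a cell's preferred speed is fixed in world coordinates, its preferred retinal speed should shift by minus the pursuit velocity (full compensation), whereas a purely retinal cell's tuning should not move. An illustrative sketch of those two limiting cases (parameter values and names are mine, not fitted data):

```python
def preferred_retinal_speed(preferred_world_speed, pursuit_speed, compensation):
    """Preferred retinal speed (deg/s) of a direction-selective cell.

    compensation = 1.0 models an MST-like cell tuned in world
    coordinates; compensation = 0.0 models an MT-like cell tuned
    purely to retinal image motion.
    """
    return preferred_world_speed - compensation * pursuit_speed

# During 8 deg/s pursuit, a fully compensating cell preferring
# 20 deg/s in the world prefers 12 deg/s on the retina; a purely
# retinal cell still prefers 20 deg/s.
mst_like = preferred_retinal_speed(20.0, 8.0, compensation=1.0)  # 12.0
mt_like = preferred_retinal_speed(20.0, 8.0, compensation=0.0)   # 20.0
```

The abstract's finding corresponds to intermediate, on-average-weak compensation values in MST and roughly zero in MT.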


2000 ◽  
Vol 78 (2) ◽  
pp. 131-142 ◽  
Author(s):  
James W. Ness ◽  
Harry Zwick ◽  
Bruce E. Stuck ◽  
David J. Lund ◽  
Brian J. Lund ◽  
...  

1990 ◽  
Vol 63 (5) ◽  
pp. 999-1009 ◽  
Author(s):  
Z. Kapoula ◽  
L. M. Optican ◽  
D. A. Robinson

1. In these experiments, postsaccadic ocular drift was induced by postsaccadic motion of the visual scene. In the most important case, the scene was moved in one eye but not the other. Six human subjects viewed the interior of a full-field hemisphere filled with a random-dot pattern. During training, eye movements were recorded by the electrooculogram. A computer detected the end of every saccade and immediately moved the pattern horizontally in the same direction as the saccade or, in different experiments, in the opposite direction. The pattern motion was exponential with an amplitude of 25% of the size of the antecedent saccade and a time constant of 50 ms. Before and after 3-4 h of such training, movements of both eyes were measured simultaneously by the eye coil-magnetic field method while subjects looked between stationary targets for calibration, explored the visual pattern with saccades, or made saccades in the dark to measure the effects of adaptation on postsaccadic ocular drift. The amplitude of this drift was expressed as a percentage of the size of the antecedent saccade. 2. In monocular experiments, subjects viewed the random-dot pattern with one eye. The other eye was patched. With two subjects, the pattern drifted backward in the direction opposite to the saccade; with the third, it drifted onward. The induced ocular drift was exponential, always in the direction to reduce retinal image motion, had zero latency, and persisted in the dark. After training, drift in the dark changed by 6.7%, in agreement with our prior study with binocular vision, which produced a change of 6.0%. 3. In a dichoptic arrangement, one eye regarded the movable random-dot pattern; the other, through mirrors, saw a different random-dot pattern (with similar spacing, contrast, and distance) that was stationary. These visual patterns were not fuseable and did not evoke subjective diplopia. In this case, the induced change in postsaccadic drift in the same three subjects was only 4.8%. 
In all cases the changes in postsaccadic drift were conjugate; they obeyed Hering's law. 4. Normal human saccades are characterized by essentially no postsaccadic drift in the abducting eye and a pronounced onward drift (approximately 4%) in the adducting eye. After training, this abduction-adduction asymmetry was preserved in the light and dark with monocular or dichoptic viewing, indicating again that all adaptive changes were conjugate. 5. When the subjects viewed the adapting stimulus after training, the zero-latency, postsaccadic drift always increased from levels in the dark. (ABSTRACT TRUNCATED AT 400 WORDS)
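The adapting stimulus described above reduces to a one-line formula, assuming the stated 25% amplitude is the asymptote of the exponential displacement (function and parameter names are mine):

```python
import math

def pattern_displacement(t, saccade_amplitude, gain=0.25, tau=0.050):
    """Postsaccadic pattern displacement (deg) at time t (s) after a
    saccade of the given amplitude (deg): an exponential approach to
    `gain` (25%) of the saccade's size with a 50-ms time constant."""
    return gain * saccade_amplitude * (1.0 - math.exp(-t / tau))

# After a 10-deg saccade, the pattern asymptotically ends up
# displaced 2.5 deg; at one time constant (50 ms) it has covered
# about 63% of that distance (~1.58 deg).
final = pattern_displacement(1.0, 10.0)     # ~2.5
at_tau = pattern_displacement(0.050, 10.0)  # ~1.58
```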


1981 ◽  
Vol 374 (1 Vestibular an) ◽  
pp. 312-329 ◽  
Author(s):  
H. Collewijn ◽  
A. J. Martins ◽  
R. M. Steinman

Perception ◽  
1996 ◽  
Vol 25 (7) ◽  
pp. 797-814 ◽  
Author(s):  
Michiteru Kitazaki ◽  
Shinsuke Shimojo

The generic-view principle (GVP) states that, given a 2-D image, the visual system interprets it as a generic view of a 3-D scene whenever possible. The GVP was applied to 3-D-motion perception to show how the visual system decomposes retinal image motion into three components of 3-D motion: stretch/shrinkage, rotation, and translation. First, the optical process generating retinal image motion was analyzed, and predictions were made based on the GVP for the inverse-optical process. Then experiments were conducted in which the subject judged perception of stretch/shrinkage, rotation in depth, and translation in depth for a moving bar stimulus. The retinal-image parameters (2-D stretch/shrinkage, 2-D rotation, and 2-D translation) were manipulated categorically and exhaustively. The results were highly consistent with the predictions. The GVP seems to offer a broad and general framework for understanding the ambiguity-solving process in motion perception. Its relationship to other constraints, such as that of rigidity, is discussed.
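One standard way to make such a decomposition concrete (a textbook construction, not necessarily the paper's own formalism) is to split the gradient of a 2-D image-velocity field into divergence, which captures uniform stretch/shrinkage, and curl, which captures 2-D rotation; translation is the field's constant part and does not appear in the gradient:

```python
def decompose_velocity_gradient(du_dx, du_dy, dv_dx, dv_dy):
    """Split the 2x2 gradient of a 2-D image-velocity field (u, v)
    into divergence (uniform stretch/shrinkage, > 0 for expansion)
    and curl (rigid 2-D rotation rate)."""
    divergence = du_dx + dv_dy
    curl = dv_dx - du_dy
    return divergence, curl

# A field u = 0.1*x, v = 0.1*y expands uniformly about the origin:
# pure divergence, no rotation.
div, rot = decompose_velocity_gradient(0.1, 0.0, 0.0, 0.1)  # (0.2, 0.0)
```

Under the GVP, each such 2-D component constrains which 3-D motions (stretch/shrinkage, rotation in depth, translation in depth) are generic interpretations of the image.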

