Perceptual stability and eye movements

Author(s): Nicholas J. Wade, Benjamin W. Tatler
2010, Vol 10 (7), pp. 518-518
Author(s): F. Ostendorf, J. Kilias, C. Ploner

NeuroImage, 2001, Vol 14 (1), pp. S33-S39
Author(s): Peter Thier, Thomas Haarmeier, Subhojit Chakraborty, Axel Lindner, Alexander Tikhonov

2008, Vol 29 (3), pp. 300-311
Author(s): Maja U. Trenner, Manfred Fahle, Oliver Fasold, Hauke R. Heekeren, Arno Villringer, ...

2017, Vol 117 (2), pp. 808-817
Author(s): Kyriaki Mikellidou, Marco Turi, David C. Burr

Humans effortlessly maintain a stable representation of the visual world despite constant movements of the eyes, head, and body across multiple planes. Whereas visual stability in the face of saccadic eye movements has been intensely researched, fewer studies have investigated the retinal image transformations induced by head movements, especially in the frontal plane. Unlike head rotations in the horizontal and sagittal planes, tilting the head in the frontal plane is only partially counteracted by torsional eye movements and consequently induces a distortion of the retinal image to which we seem to be completely oblivious. One possible mechanism aiding perceptual stability is an active reconstruction of a spatiotopic map of the visual world, anchored in allocentric coordinates. To explore this possibility, we measured the positional motion aftereffect (PMAE; the apparent change in position after adaptation to motion) with head tilts of ∼42° between adaptation and test (to dissociate retinal from allocentric coordinates). The aftereffect was shown to have both a retinotopic and a spatiotopic component. When tested with unpatterned Gaussian blobs rather than sinusoidal grating stimuli, the retinotopic component was greatly reduced, whereas the spatiotopic component remained. The results suggest that perceptual stability may be maintained at least partially through mechanisms involving spatiotopic coding.

NEW & NOTEWORTHY Given that spatiotopic coding could play a key role in maintaining visual stability, we look for evidence of spatiotopic coding after retinal image transformations caused by head tilt. To this end, we measure the strength of the positional motion aftereffect (PMAE; previously shown to be largely spatiotopic after saccades) after large head tilts. We find that, as with eye movements, the spatial selectivity of the PMAE has a large spatiotopic component after head rotation.
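The logic of the head-tilt manipulation can be sketched numerically: in a purely retinotopic frame the adapted location is unchanged on the retina, whereas in a spatiotopic (world-anchored) frame it is rotated relative to the retina by the tilt angle. A minimal sketch, not from the paper (helper names are illustrative, and the small torsional counter-roll of the eyes is ignored):

```python
import math

def rotate(point, angle_deg):
    """Rotate a 2-D point about the origin (fixation) by angle_deg degrees."""
    a = math.radians(angle_deg)
    x, y = point
    return (x * math.cos(a) - y * math.sin(a),
            x * math.sin(a) + y * math.cos(a))

def predicted_test_location(adapt_retinal, head_tilt_deg, frame):
    """Predicted retinal location of the aftereffect at test.

    'retinotopic' -- the adapted region is fixed on the retina, so the
                     prediction is unchanged by the head tilt.
    'spatiotopic' -- the adapted region is fixed in the world, so on the
                     tilted retina it appears rotated by -head_tilt_deg
                     (taking a positive head tilt to rotate the retinal
                     image negatively; the sign convention is arbitrary).
    """
    if frame == "retinotopic":
        return adapt_retinal
    return rotate(adapt_retinal, -head_tilt_deg)
```

With a ∼42° tilt the two predictions are separated by 42° of rotation about fixation, which is what lets the retinotopic and spatiotopic components of the PMAE be measured independently.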


2008, Vol 99 (5), pp. 2470-2478
Author(s): André Kaminiarz, Bart Krekelberg, Frank Bremmer

The mechanisms underlying visual perceptual stability are usually investigated using voluntary eye movements. In such studies, errors in perceptual stability during saccades and pursuit are commonly interpreted as mismatches between actual eye position and eye-position signals in the brain. The generality of this interpretation could in principle be tested by investigating spatial localization during reflexive eye movements whose kinematics are very similar to those of voluntary eye movements. Accordingly, in this study, we determined mislocalization of flashed visual targets during optokinetic afternystagmus (OKAN). These eye movements are unusual in that they occur in complete darkness and are generated by subcortical control mechanisms. We found that during horizontal OKAN slow phases, subjects mislocalize targets away from the fovea in the horizontal direction. This corresponds to a perceived expansion of visual space and is unlike mislocalization found for any other voluntary or reflexive eye movement. Around the OKAN fast phases, we found a bias in the direction of the fast phase prior to its onset and opposite to the fast-phase direction thereafter. Such a biphasic modulation has also been reported in the temporal vicinity of saccades and during optokinetic nystagmus (OKN). A direct comparison, however, showed that the modulation during OKAN was much larger and occurred earlier relative to fast-phase onset than during OKN. A simple mismatch between the current eye position and the eye-position signal in the brain is unlikely to explain such disparate results across similar eye movements. Instead, these data support the view that mislocalization arises from errors in eye-centered position information.
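The contrast between the two accounts can be made concrete: a mismatched eye-position signal displaces all perceived locations by the same amount, whereas an error in eye-centered (eccentricity) coding grows with distance from the fovea and so produces a perceived expansion of visual space, as reported for OKAN slow phases. A toy sketch under those assumptions (function names are illustrative, not from the paper):

```python
def uniform_shift_error(target_ecc_deg, eye_signal_error_deg):
    """Mismatch account: one erroneous eye-position signal displaces all
    perceived locations by the same amount, regardless of the target's
    eccentricity."""
    return eye_signal_error_deg

def eccentricity_gain_error(target_ecc_deg, gain):
    """Eye-centered account: perceived eccentricity is scaled, so the
    localization error grows with distance from the fovea -- a perceived
    expansion of visual space."""
    return gain * target_ecc_deg
```

Only the second model predicts larger errors for more eccentric flashes, which is the signature the authors use to argue against a simple eye-position mismatch.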


2005, Vol 94 (5), pp. 3249-3258
Author(s): Laura M. Heiser, Rebecca A. Berman, Richard C. Saunders, Carol L. Colby

With each eye movement, a new image impinges on the retina, yet we do not notice any shift in visual perception. This perceptual stability indicates that the brain must be able to update visual representations to take our eye movements into account. Neurons in the lateral intraparietal area (LIP) update visual representations when the eyes move. The circuitry that supports these updated representations remains unknown, however. In this experiment, we asked whether the forebrain commissures are necessary for updating in area LIP when stimulus representations must be updated from one visual hemifield to the other. We addressed this question by recording from LIP neurons in split-brain monkeys during two conditions: stimulus traces were updated either across or within hemifields. Our expectation was that across-hemifield updating activity in LIP would be reduced or abolished after transection of the forebrain commissures. Our principal finding is that LIP neurons can update stimulus traces from one hemifield to the other even in the absence of the forebrain commissures. This finding provides the first evidence that representations in parietal cortex can be updated without the use of direct cortico-cortical links. The second main finding is that updating activity in LIP is modified in the split-brain monkey: across-hemifield signals are reduced in magnitude and delayed in onset compared with within-hemifield signals, which indicates that the pathways for across-hemifield updating are less effective in the absence of the forebrain commissures. Together these findings reveal a dynamic circuit that contributes to updating spatial representations.
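The updating computation itself can be sketched as a vector subtraction: after a saccade, a remembered eye-centered stimulus trace must be shifted by the inverse of the saccade vector, and when the horizontal coordinate changes sign the trace has crossed from one hemifield to the other. A minimal sketch (illustrative names, not the authors' model of LIP circuitry):

```python
def remap(trace_retinal, saccade_vector):
    """Shift a remembered, eye-centered stimulus trace by the inverse of
    the saccade vector so it remains spatially accurate after the eye
    movement (spatial updating / remapping)."""
    return (trace_retinal[0] - saccade_vector[0],
            trace_retinal[1] - saccade_vector[1])

# A 10 deg rightward saccade moves a trace at +5 deg (right hemifield)
# to -5 deg (left hemifield): an across-hemifield update, the case in
# which the forebrain commissures were expected to be necessary.
```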


1994, Vol 17 (2), pp. 274-275
Author(s): Claude Prablanc

The question of how the brain constructs a stable representation of the external world despite eye movements is a very old one. Although some statements of the problem have been misguided (such as the inverted retinal image), others are less naive and have led to analytic solutions that the brain may plausibly adopt to counteract the spurious effects of eye movements. Following MacKay's (1973) objections to the analytic view of perceptual stability, Bridgeman et al. claim that the idea that signals canceling the effects of saccadic eye movements are needed is also a misconception, as is the claim that stability and position encoding are two distinct problems. It must be remembered, however, that what made the theory of "cancellation" formulated by von Holst and Mittelstaedt (1950) so appealing was the clinical observation of perceptual instability following ocular paralysis. Building on the concept of corollary discharge, the theory of efference copy had the advantage of simultaneously solving three problems: the stability of the visual world during the saccade, the same visual stability across saccades, and the visual constancy problem of allowing the subject to know where an object is in space.
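The cancellation scheme of von Holst and Mittelstaedt can be written in one line: perceived world motion is the retinal image shift minus the shift predicted from the motor command (the efference copy). The same equation reproduces the clinical observation cited above, since under ocular paralysis the command is issued but the image does not move, so cancellation fails and the stationary world appears to jump. A schematic sketch (sign conventions are illustrative):

```python
def perceived_world_shift(retinal_shift_deg, efference_copy_deg):
    """von Holst & Mittelstaedt cancellation: perceived motion of the
    world is the retinal image shift minus the shift predicted from the
    efference copy of the eye-movement command."""
    return retinal_shift_deg - efference_copy_deg

# Normal 10 deg saccade: the image shifts -10 deg on the retina and the
# efference copy predicts exactly that shift, so perceived motion is 0
# and the world appears stable.
# Ocular paralysis: the copy is issued (-10 deg) but the image does not
# move, so the stationary world appears to jump.
```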

