Spatiotopic coding during dynamic head tilt

2017 ◽ Vol 117 (2) ◽ pp. 808-817
Author(s): Kyriaki Mikellidou, Marco Turi, David C. Burr

Humans maintain a stable representation of the visual world effortlessly, despite constant movements of the eyes, head, and body across multiple planes. Whereas visual stability in the face of saccadic eye movements has been intensely researched, fewer studies have investigated retinal image transformations induced by head movements, especially in the frontal plane. Unlike head rotations in the horizontal and sagittal planes, tilting the head in the frontal plane is only partially counteracted by torsional eye movements and consequently induces a distortion of the retinal image to which we seem to be completely oblivious. One possible mechanism aiding perceptual stability is an active reconstruction of a spatiotopic map of the visual world, anchored in allocentric coordinates. To explore this possibility, we measured the positional motion aftereffect (PMAE; the apparent change in position after adaptation to motion) with head tilts of ∼42° between adaptation and test (to dissociate retinal from allocentric coordinates). The aftereffect was shown to have both a retinotopic and spatiotopic component. When tested with unpatterned Gaussian blobs rather than sinusoidal grating stimuli, the retinotopic component was greatly reduced, whereas the spatiotopic component remained. The results suggest that perceptual stability may be maintained at least partially through mechanisms involving spatiotopic coding.

NEW & NOTEWORTHY Given that spatiotopic coding could play a key role in maintaining visual stability, we look for evidence of spatiotopic coding after retinal image transformations caused by head tilt. To this end, we measure the strength of the positional motion aftereffect (PMAE; previously shown to be largely spatiotopic after saccades) after large head tilts. We find that, as with eye movements, the spatial selectivity of the PMAE has a large spatiotopic component after head rotation.
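The geometry behind this dissociation is simple enough to sketch. The toy calculation below (plain Python/NumPy; the adaptor position, the ~10% counter-roll gain, and all variable names are illustrative assumptions, not the paper's parameters) shows where an adapted location lands on the screen under a purely retinotopic versus a purely spatiotopic prediction after a ∼42° head tilt that torsion only partially counteracts.

```python
import numpy as np

def rotate(point, deg):
    """Rotate a 2-D screen position counterclockwise by deg degrees."""
    t = np.deg2rad(deg)
    rot = np.array([[np.cos(t), -np.sin(t)],
                    [np.sin(t),  np.cos(t)]])
    return rot @ point

# Illustrative values only, not the paper's parameters.
adapt_screen = np.array([5.0, 0.0])  # adaptor position (deg of visual angle)
head_tilt = 42.0                     # head roll between adaptation and test
counter_roll = 0.1 * head_tilt       # assumed: torsion compensates ~10%
net_retinal_tilt = head_tilt - counter_roll

# Spatiotopic prediction: the aftereffect stays at the same screen location.
spatiotopic_test = adapt_screen

# Retinotopic prediction: the adapted *retinal* location, re-expressed in
# screen coordinates after the retina has rotated with the head.
retinotopic_test = rotate(adapt_screen, net_retinal_tilt)

print("spatiotopic test location:", spatiotopic_test)
print("retinotopic test location:", retinotopic_test)
```

Because the two predicted test locations differ by roughly 38° of rotation about fixation, an aftereffect measured at each location can be attributed to its respective coordinate frame.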

2006 ◽ Vol 95 (3) ◽ pp. 1936-1948
Author(s): Ronald G. Kaptein, Jan A.M. Van Gisbergen

Using vestibular sensors to maintain visual stability during changes in head tilt, crucial when panoramic cues are not available, presents a computational challenge. Reliance on the otoliths requires a neural strategy for resolving their tilt/translation ambiguity, such as canal–otolith interaction or frequency segregation. The canal signal is subject to bandwidth limitations. In this study, we assessed the relative contribution of canal and otolith signals and investigated how they might be processed and combined. The experimental approach was to explore conditions with and without otolith contributions in a frequency range with various degrees of canal activation. We tested the perceptual stability of visual line orientation in six human subjects during passive sinusoidal roll tilt in the dark at frequencies from 0.05 to 0.4 Hz (30° peak to peak). Because subjects were constantly monitoring spatial motion of a visual line in the frontal plane, the paradigm required moment-to-moment updating for ongoing ego motion. Their task was to judge the total spatial sway of the line when it rotated sinusoidally at various amplitudes. From the responses we determined how the line had to be rotated to be perceived as stable in space. Tests were taken both with (subject upright) and without (subject supine) gravity cues. Analysis of these data showed that the compensation for body rotation in the computation of line orientation in space, although always incomplete, depended on vestibular rotation frequency and on the availability of gravity cues. In the supine condition, the compensation for ego motion showed a steep increase with frequency, compatible with an integrated canal signal. The improvement of performance in the upright condition, afforded by graviceptive cues from the otoliths, showed low-pass characteristics. Simulations showed that a linear combination of an integrated canal signal and a gravity-based signal can account for these results.
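The linear-combination model in the final sentence can be rendered as a small frequency-domain simulation. In the sketch below (plain Python/NumPy; the time constants and weights are assumed for illustration, not the authors' fitted values), the integrated canal signal acts as a high-pass filter on head tilt and the gravity-based otolith signal as a low-pass filter, reproducing the qualitative pattern reported: supine compensation rises steeply with frequency, and the upright advantage is largest at low frequencies.

```python
import numpy as np

# Toy frequency-domain version of the proposed model: tilt compensation is
# a linear combination of an integrated canal signal (high-pass on head
# tilt) and a gravity-based otolith signal (low-pass). All constants below
# are illustrative assumptions, not the paper's fitted values.
T_CANAL = 1.0               # canal/integrator time constant (s), assumed
T_OTO = 2.0                 # otolith low-pass time constant (s), assumed
W_CANAL, W_OTO = 1.0, 1.0   # linear combination weights, assumed

freqs = np.array([0.05, 0.1, 0.2, 0.4])   # stimulus frequencies (Hz)
s = 1j * 2.0 * np.pi * freqs              # Laplace variable on the jw axis

h_canal = (s * T_CANAL) / (1.0 + s * T_CANAL)   # high-pass characteristic
h_oto = 1.0 / (1.0 + s * T_OTO)                 # low-pass characteristic

# Compensation gain for ego motion (1.0 = line perceived as stable).
gain_supine = np.abs(W_CANAL * h_canal)                   # canals only
gain_upright = np.abs(W_CANAL * h_canal + W_OTO * h_oto)  # canals + otoliths

for f, g_sup, g_up in zip(freqs, gain_supine, gain_upright):
    print(f"{f:4.2f} Hz   supine {g_sup:.2f}   upright {g_up:.2f}")
```

With these assumed constants, the otolith term adds roughly 0.5 to the compensation gain at 0.05 Hz but essentially nothing at 0.4 Hz, mirroring the low-pass improvement the authors report for the upright condition.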


1994 ◽ Vol 17 (2) ◽ pp. 274-275
Author(s): Claude Prablanc

The question of how the brain constructs a stable representation of the external world despite eye movements is a very old one. Although some statements of the problem have been misguided (such as the puzzle of the inverted retinal image), others are less naive and have led to analytic solutions that the brain may plausibly have adopted to counteract the spurious effects of eye movements. Following MacKay's (1973) objections to the analytic view of perceptual stability, Bridgeman et al. claim that the idea that signals are needed to cancel the effects of saccadic eye movements is also a misconception, as is the claim that stability and position encoding are two distinct problems. It must be remembered, however, that what made the theory of "cancellation" formulated by von Holst and Mittelstaedt (1950) so appealing was the clinical observation of perceptual instability following ocular paralysis. Together with the concept of corollary discharge, the theory of efference copy had the advantage of simultaneously solving three problems: the stability of the visual world during a saccade, its stability across saccades, and the constancy problem of allowing the subject to know where an object lies in space.


1994 ◽ Vol 17 (2) ◽ pp. 247-258
Author(s): Bruce Bridgeman, A. H. C. Van der Heijden, Boris M. Velichkovsky

We identify two aspects of the problem of maintaining perceptual stability despite an observer's eye movements. The first, visual direction constancy, is the (egocentric) stability of apparent positions of objects in the visual world relative to the perceiver. The second, visual position constancy, is the (exocentric) stability of positions of objects relative to each other. We analyze the constancy of visual direction despite saccadic eye movements.

Three information sources have been proposed to enable the visual system to achieve stability: the structure of the visual field, proprioceptive inflow, and a copy of neural efference or outflow to the extraocular muscles. None of these sources by itself provides adequate information to achieve visual direction constancy; present evidence indicates that all three are used.

Our final question concerns how information processing operations result in a stable world. The three traditionally suggested means have been elimination, translation, or evaluation. All are rejected. From a review of the physiological and psychological evidence we conclude that no subtraction, compensation, or evaluation need take place. The problem for which these solutions were developed turns out to be a false one. We propose a "calibration" solution: correct spatiotopic positions are calculated anew for each fixation. Inflow, outflow, and retinal sources are used in this calculation; saccadic suppression of displacement bridges the errors between these sources and the actual extent of movement.
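The "calibration" proposal can be made concrete with a short sketch. The code below (plain Python; the weights and function names are illustrative assumptions, since the target article argues only that inflow, outflow, and retinal sources all contribute) recomputes each object's spatiotopic position from scratch at every fixation, with no cancellation signal applied during the saccade itself.

```python
# Toy rendering of the "calibration" solution: spatiotopic position is
# recomputed anew at each fixation from retinal position plus an
# eye-position estimate. The weights are illustrative assumptions.
W_OUTFLOW = 0.6   # weight on the efference copy of the motor command
W_INFLOW = 0.4    # weight on proprioceptive inflow from the eye muscles

def eye_position_estimate(outflow_deg, inflow_deg):
    """Combine outflow and inflow into a single eye-position estimate."""
    return W_OUTFLOW * outflow_deg + W_INFLOW * inflow_deg

def spatiotopic_position(retinal_deg, outflow_deg, inflow_deg):
    """Recompute an object's position in space for the current fixation."""
    return retinal_deg + eye_position_estimate(outflow_deg, inflow_deg)

# Fixation 1: eye near 0 deg, object at +8 deg on the retina.
print(spatiotopic_position(8.0, outflow_deg=0.0, inflow_deg=0.0))    # 8.0

# Fixation 2, after a 10 deg saccade: the retinal position has changed,
# but the freshly calibrated spatiotopic position comes out the same here
# because the toy inflow and outflow signals agree; in practice, small
# mismatches are bridged by saccadic suppression of displacement.
print(spatiotopic_position(-2.0, outflow_deg=10.0, inflow_deg=10.0))  # 8.0
```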


2011 ◽ Vol 105 (1) ◽ pp. 1-3
Author(s): Martin Rolfs, Sven Ohl

Miniature eye movements jitter the retinal image unceasingly, raising the question of how perceptual continuity is achieved during visual fixation. Recent work discovered suppression of visual bursts in the superior colliculus around the time of microsaccades, tiny jerks of the eyes that support visual perception while gaze is fixed. This finding suggests that corollary discharge, supporting visual stability when rapid eye movements drastically shift the retinal image, may also exist for the smallest saccades.


2010 ◽ Vol 10 (7) ◽ pp. 518-518
Author(s): F. Ostendorf, J. Kilias, C. Ploner

Author(s): Michael K. Tanenhaus

Recently, eye movements have become a widely used response measure for studying spoken language processing in both adults and children, in situations where participants comprehend and generate utterances about a circumscribed “Visual World” while fixation is monitored, typically using a free-view eye-tracker. Psycholinguists now use the Visual World eye-movement method to study both language production and language comprehension, in studies that run the gamut of current topics in language processing. Eye movements are a response measure of choice for addressing many classic questions about spoken language processing in psycholinguistics. This article reviews the burgeoning Visual World literature on language comprehension, highlighting some of the seminal studies and examining how the Visual World approach has contributed new insights to our understanding of spoken word recognition, parsing, reference resolution, and interactive conversation. It considers some of the methodological issues that come to the fore when psycholinguists use eye movements to examine spoken language comprehension.


Author(s): Nicholas J. Wade, Benjamin W. Tatler
