The idea that space perception involves more than eye movement signals and the position of the retinal image has come up before

1994 ◽ Vol 17 (2) ◽ pp. 331-332 ◽ Author(s): Alexander A. Skavenski
Physiology ◽ 2001 ◽ Vol 16 (5) ◽ pp. 234-238 ◽ Author(s): Bernhard J. M. Hess

The central vestibular system receives afferent information about head position as well as head rotation and translation. This information is used not only to prevent blurring of the retinal image but also to control self-orientation and self-motion in space. Vestibular signal processing in the brain stem appears to be linked to an internal model of head motion in space.


Perception ◽ 1995 ◽ Vol 24 (9) ◽ pp. 1075-1081 ◽ Author(s): Susan J Blackmore, Gavin Brelstaff, Kay Nelson, Tom Trościanko

Our construction of a stable visual world, despite the presence of saccades, is discussed. A computer-graphics method was used to explore transsaccadic memory for complex images. Images of real-life scenes were presented under four conditions: they stayed still or moved in an unpredictable direction (forcing an eye movement), while simultaneously changing or staying the same. Changes were the appearance, disappearance, or rotation of an object in the scene. Subjects detected the changes easily when the image did not move, but when it moved their performance fell to chance. A grey-out period was introduced to mimic the one that occurs during a saccade; this also reduced performance, but not to chance levels. These results reveal the poverty of transsaccadic memory for real-life complex scenes. They are discussed with respect to Dennett's view that much less information is available in vision than our subjective impression leads us to believe. Our stable visual world may be constructed out of a brief retinal image and a very sketchy, higher-level representation, along with a pop-out mechanism to redirect attention. The richness of our visual world is, to this extent, an illusion.


1998 ◽ Vol 10 (4) ◽ pp. 464-471 ◽ Author(s): Thomas Haarmeier, Peter Thier

It is usually held that perceptual spatial stability, despite smooth pursuit eye movements, is accomplished by comparing a signal reflecting retinal image slip with an internal reference signal encoding the eye movement. The important consequence of this concept is that our subjective percept of visual motion reflects the outcome of this comparison rather than retinal image slip itself. In an attempt to localize the cortical networks underlying this comparison, and therefore our subjective percept of visual motion, we exploited an imperfection inherent in it, which results in a movement illusion. If smooth pursuit is carried out across a stationary background, we perceive a tiny degree of illusory background motion (the Filehne illusion, or FI), rather than experiencing the ecologically optimal percept of stationarity. We have recently shown that this illusion can be modified substantially and predictably under laboratory conditions by visual motion unrelated to the eye movement. By making use of this finding, we were able to compare cortical potentials evoked by pursuit-induced retinal image slip under two conditions that differed perceptually while being physically identical. This approach allowed us to discern a pair of potentials, a parieto-occipital negativity (N300) followed by a frontal positivity (P300), whose amplitudes were solely determined by the subjective perception of visual motion, irrespective of the physical attributes of the situation. This finding strongly suggests that subjective awareness of visual motion depends on neuronal activity in a parieto-occipito-frontal network that excludes the early stages of visual processing.
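The comparison described in this abstract can be captured in a toy linear model: perceived background motion is the sum of retinal image slip and an internal reference signal estimating eye velocity, and an imperfect reference gain yields the Filehne illusion. The sketch below is illustrative only; the gain and velocity values are hypothetical, not the authors' measurements.

```python
# Toy model (not the authors' model) of the standard inference account:
# perceived background velocity = retinal image slip + internal reference
# signal encoding the eye movement. Velocities in deg/s; positive values
# mean motion in the pursuit direction. All numbers are hypothetical.

def perceived_background_velocity(eye_velocity, background_velocity, gain=0.9):
    """Perceived world motion of the background during smooth pursuit.

    gain: how completely the reference signal registers the eye movement.
    A gain < 1 leaves residual counter-directed motion for a stationary
    background -- the Filehne illusion.
    """
    retinal_slip = background_velocity - eye_velocity  # image motion on the retina
    reference = gain * eye_velocity                    # internal eye-movement estimate
    return retinal_slip + reference

# Stationary background, 10 deg/s pursuit: a perfect reference (gain=1)
# yields the ecologically optimal percept of stationarity; an imperfect
# one yields illusory background motion opposite to the pursuit.
print(perceived_background_velocity(10.0, 0.0, gain=1.0))  # 0.0
print(perceived_background_velocity(10.0, 0.0, gain=0.9))  # -1.0 (illusory motion)
```

The laboratory manipulation the abstract mentions amounts to changing the effective slip or reference term independently of the physical display, which is what lets the percept differ across physically identical conditions.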


Perception ◽ 1993 ◽ Vol 22 (1) ◽ pp. 61-76 ◽ Author(s): Irvin Rock, Christopher M Linnett

Although the processing of phenomenal shape might be supposed to begin at an early stage, with the shape of the retinal image of an object, it is possible that it does not begin until a later stage at which the locations of the parts of the object have been perceived. Such perceived locations are based on a compensation or constancy mechanism that takes account of eye position. Ordinarily these two possible bases of shape perception—retinal image and perceived location—are confounded. To separate them, the parts of a shape were presented sequentially, during which time the eyes were in motion. The eye movement did not alter the phenomenal locations of the parts vis-à-vis one another but did yield an entirely different composite retinal image of the parts. Another method employed was to change the location of each sequentially presented part with respect to a displacing frame of reference. By and large, the results indicate that the composite shape perceived is based on the perceived location of the parts of the object with respect to one another, rather than on the composite retinal image.


1997 ◽ Vol 84 (1) ◽ pp. 107-113 ◽ Author(s): Shinji Nakamura

The effect of a background stimulus on eye-movement information was investigated by analyzing the underestimation of target velocity during pursuit eye movement (the Aubert-Fleischl paradox). In the experiment, a striped pattern with various brightness contrasts and spatial frequencies was used as a background stimulus and was moved at various velocities. Analysis showed that the perceived velocity of the pursuit target, which indicates the magnitude of the eye-movement information, decreased when the background stripes moved in the same direction as the eye movement at higher velocities and increased when the background moved in the opposite direction. The results suggest that the eye-movement information varies as a linear function of the velocity of the background retinal image motion (optic flow). In addition, the effectiveness of optic flow on eye-movement information was determined by attributes of the background stimulus such as the brightness contrast or the spatial frequency of the striped pattern.
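The linear relation reported here can be sketched as a toy model in which the eye-movement signal, and hence the perceived velocity of the pursuit target, shifts linearly with background retinal-image motion. The parameters below (base_gain, k) are hypothetical placeholders, not Nakamura's fitted coefficients; in the study the slope also depended on the background's contrast and spatial frequency.

```python
# Toy linear model of the reported pattern. Velocities in deg/s; positive
# values mean motion in the pursuit direction. base_gain < 1 captures the
# baseline underestimation (Aubert-Fleischl paradox); the k term captures
# the linear modulation by background optic flow. Both are hypothetical.

def perceived_target_velocity(eye_velocity, background_velocity,
                              base_gain=0.8, k=0.1):
    """Perceived velocity of the pursuit target against a moving background."""
    optic_flow = background_velocity - eye_velocity  # background slip on the retina
    # Counter-directed optic flow boosts the eye-movement signal; co-directed
    # flow weakens it, hence the minus sign on the linear term.
    return base_gain * eye_velocity - k * optic_flow

v_eye = 10.0
print(perceived_target_velocity(v_eye, 0.0))   # 9.0 (baseline underestimation)
print(perceived_target_velocity(v_eye, 5.0))   # 8.5 (background with pursuit: lower)
print(perceived_target_velocity(v_eye, -5.0))  # 9.5 (background against pursuit: higher)
```

A contrast- or frequency-dependent k would reproduce the abstract's final point that the effectiveness of optic flow depends on the attributes of the background stimulus.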

