Calibration of retinal image size with distance in the Mongolian gerbil: Rapid adjustment of calibrations in different contexts

1991 ◽ Vol 49 (1) ◽ pp. 38-42 ◽ Author(s): Colin G. Ellard, Darlene G. Chapman, Karen A. Cameron

1980 ◽ Vol 51 (3_suppl2) ◽ pp. 1307-1330 ◽ Author(s): Willard L. Brigner

A model for the determination of retinal-image size is presented. The size analysis is based on the range of orientation detectors activated by a stimulus. The model is applied to size aftereffects and is also used to predict changes in perceived size in configurations that may be expected to affect the range of orientation detectors activated. The relevance of the model to illusions of direction and to the perceived length of lines forming angles is also discussed.
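The abstract does not give the model's equations, but the core idea can be illustrated with a toy read-out: suppose the range of orientation detectors that a line activates narrows as the line gets longer, so that retinal-image size can be recovered from the reciprocal of that range. The inverse-proportional relation, the 30-degree base bandwidth, and the function names below are illustrative assumptions of this sketch, not Brigner's actual formulation.

    def activated_range_deg(line_length_deg, base_bandwidth_deg=30.0):
        """Toy assumption: a longer line activates a narrower range of
        orientation detectors (its orientation is better defined)."""
        return base_bandwidth_deg / line_length_deg

    def size_readout_deg(activated_range, base_bandwidth_deg=30.0):
        """Invert the toy relation: read retinal-image size back out of the
        range of activated orientation detectors."""
        return base_bandwidth_deg / activated_range

    # Adaptation that artificially narrows or broadens the activated range
    # would shift the size read-out, yielding a size aftereffect in this toy.
    for length in (1.0, 2.0, 4.0):
        r = activated_range_deg(length)
        print(f"length {length} deg -> range {r:.1f} deg -> read-out {size_readout_deg(r):.1f} deg")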


2019 ◽ Author(s): Akihito Maruya, Qasim Zaidi

Abstract: Judging the poses, sizes, and shapes of objects accurately is necessary for organisms and machines to operate successfully in the world. Retinal images of 3D objects are mapped by the rules of projective geometry and preserve the invariants of that geometry. Since Plato, it has been debated whether geometry is innate to the human brain, and Poincaré and Einstein thought it worth examining whether formal geometry arises from experience with the world. We examine whether humans have learned to exploit projective geometry to estimate the sizes and shapes of objects in 3D scenes.

Numerous studies have examined size invariance as a function of physical distance, which changes scale on the retina, but surprisingly, the possible constancy or inconstancy of relative size seems not to have been investigated for object pose, which changes retinal image size differently along different axes. We show systematic underestimation of length for extents pointing towards or away from the observer, both for static objects and for dynamically rotating objects. Observers do correct for projected shortening according to the optimal back-transform, obtained by inverting the projection function, but the correction falls short by a multiplicative factor. The clue is provided by the greater underestimation for longer objects, together with the observation that they appear more slanted towards the observer. Adding a multiplicative factor for perceived slant to the back-transform model provides good fits to the corrections used by observers. We quantify the slant illusion with relative slant measurements, and use a dynamic demonstration to show its power.

In biological and mechanical objects, distortions of shape are manifold, and changes in aspect ratio and relative limb sizes are functionally important. Our model shows that observers try to keep these aspects of shape invariant to 3D rotation by correcting retinal image distortions due to perspective projection, but the corrections can fall short. We discuss how these results imply that humans have internalized particular aspects of projective geometry through evolution or learning, and how assuming that images preserve the continuity, collinearity, and convergence invariances of projective geometry supplements the Generic Viewpoint assumption and simply explains other illusions, such as Ames' Chair.
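A minimal sketch of the geometry the abstract invokes, under simplifying assumptions of my own: orthographic projection of a rod lying in the ground plane, viewed from elevation angle phi, with pose omega measured from the line of sight (omega = 90 degrees is frontoparallel). The back-transform below inverts this simplified projection; it is not necessarily the exact formulation in the paper, and the extra multiplicative slant factor the authors add to fit observers' corrections is omitted.

    import numpy as np

    def projected_length(length, omega, phi):
        """Image length of a rod of physical `length` lying in the ground plane
        at pose angle omega (0 = pointing at the observer, pi/2 = frontoparallel),
        under orthographic projection from elevation angle phi (radians)."""
        return length * np.sqrt(np.sin(omega) ** 2 +
                                np.cos(omega) ** 2 * np.sin(phi) ** 2)

    def back_transform(image_length, image_angle, phi):
        """Invert the projection: recover 3D pose and physical length from the
        rod's image length and its orientation measured in the image."""
        omega = np.arctan(np.tan(image_angle) * np.sin(phi))  # image angle -> 3D pose
        length = image_length / np.sqrt(np.sin(omega) ** 2 +
                                        np.cos(omega) ** 2 * np.sin(phi) ** 2)
        return length, omega

    # A 1 m rod pointing straight at the observer, seen from 15 deg elevation,
    # projects to ~0.26 m; the optimal back-transform recovers the full metre.
    phi = np.radians(15)
    print(projected_length(1.0, 0.0, phi))                            # ~0.259
    print(back_transform(projected_length(1.0, 0.0, phi), 0.0, phi))  # (~1.0, 0.0)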


2018 ◽ Author(s): Juan Chen, Irene Sperandio, Molly J. Henry, Melvyn A. Goodale

Abstract: Our visual system affords a distance-invariant percept of object size by integrating retinal image size with viewing distance (size constancy). Single-unit studies in animals have shown that real changes in distance can modulate the firing rate of neurons in primary visual cortex and even in subcortical structures, raising the intriguing possibility that the integration required for size constancy occurs during initial visual processing in V1 or even earlier. In humans, however, EEG and brain-imaging studies have typically manipulated the apparent (not real) distance of stimuli using pictorial illusions, in which the cues to distance are sparse and incongruent. Here, we physically moved the monitor to different distances from the observer, a more ecologically valid paradigm that emulates what happens in everyday life. Using this paradigm in combination with electroencephalography (EEG), we were able for the first time to examine how the computation of size constancy unfolds in real time under real-world viewing conditions. We showed that even when all distance cues were available and congruent, size constancy took about 150 ms to emerge in the activity of visual cortex. This 150-ms interval exceeds the time required for visual signals to reach V1, but is consistent with the time typically associated with later processing within V1 or with recurrent processing from higher-level visual areas. This finding therefore provides unequivocal evidence that size constancy does not occur during the initial signal processing in V1 or earlier, but requires subsequent processing, much like other feature-binding mechanisms.
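The integration the abstract refers to is the textbook size-distance relation: the retinal angle shrinks with distance, and scaling it back by viewing distance restores a distance-invariant size estimate. The sketch below is just that geometry, not the EEG analysis; the function names and the 10 cm / 0.5 m / 2.0 m values are illustrative.

    import numpy as np

    def retinal_angle(size_m, distance_m):
        """Visual angle (radians) subtended by an object of a given physical size."""
        return 2 * np.arctan(size_m / (2 * distance_m))

    def constancy_estimate(angle_rad, distance_m):
        """Size constancy: combine the retinal angle with viewing distance to
        recover physical size (the inverse of retinal_angle)."""
        return 2 * distance_m * np.tan(angle_rad / 2)

    # The same 10 cm object at 0.5 m and at 2.0 m subtends very different angles,
    # but combining each angle with its distance returns the same physical size.
    for d in (0.5, 2.0):
        theta = retinal_angle(0.10, d)
        print(f"{np.degrees(theta):.2f} deg at {d} m -> {constancy_estimate(theta, d):.2f} m")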


2006 ◽ Vol 85 (1) ◽ pp. 92-98 ◽ Author(s): Achim Langenbucher, Anja Viestenz, Berthold Seitz, Holger Brünner

2000 ◽ Author(s): Joseph A. Zuclich, David J. Lund, Peter R. Edsall, Richard C. Hollins, Peter A. Smith, ...

Perception ◽ 1996 ◽ Vol 25 (1_suppl) ◽ pp. 23-23 ◽ Author(s): E Brenner, J B J Smeets, W J van Damme

Because of the inverse relationship between an object's distance and its retinal image size, visual judgments of size require information about distance. Holding an object can obviously influence where one considers it to be. Does kinesthetic information about the posture of one's arm influence visual judgments of the object's size? Subjects were given a 5 cm cube, which they were asked to look at before the experiment started and to hold under the table in their left hand during the experiment. In their right hand, they held a rod behind a mirror. A simulated cube was presented binocularly, at the tip of the rod, via the mirror. Each presentation started with the subject placing the rod somewhere on a surface behind the mirror. The simulated cube appeared at that position (or 2.5 cm closer or further away) for 4 s, after which the subject had to indicate whether the cube he or she had seen was larger than, the same as, or smaller than the reference. The size of the simulated cube was varied between trials. Whether the simulated cube was closer than, at the same position as, or further away than the rod influenced the point of subjective equality (the size of the simulation at which subjects judged it to match the reference). However, the average distance between the subject and the simulation also differed between these conditions. When this difference was taken into account (by selecting data with the same average distance between the subject and the simulation), the influence of the distance between the rod and the simulated cube disappeared.
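The opening sentence is the key geometric point: with the retinal image fixed, the size percept scales with the distance the observer attributes to the object. A minimal sketch of that scaling, using the 2.5 cm offsets from the abstract and an assumed 50 cm rod distance (the viewing distance and function name are illustrative, not taken from the study):

    def attributed_size(simulated_size_cm, true_distance_cm, assumed_distance_cm):
        """If the retinal image is produced by a cube of simulated_size_cm at
        true_distance_cm, but the observer attributes it to assumed_distance_cm
        (e.g. the felt position of the hand-held rod), the apparent size scales
        by the ratio of the two distances."""
        return simulated_size_cm * assumed_distance_cm / true_distance_cm

    # A 5 cm simulated cube presented 2.5 cm nearer or further than a rod held
    # at 50 cm would appear about 5.26 cm or 4.76 cm if judged at the rod's
    # felt distance, which is the kind of shift the experiment looks for.
    print(attributed_size(5.0, 47.5, 50.0))  # ~5.26
    print(attributed_size(5.0, 52.5, 50.0))  # ~4.76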

