The Aperture Problem

Author(s):  
Maggie Shiffrar

The accurate visual perception of an object’s motion requires the simultaneous integration of motion information arising from that object along with the segmentation of motion information from other objects. When moving objects are seen through apertures, or viewing windows, the resultant illusions highlight some of the challenges that the visual system faces as it balances motion segmentation with motion integration. One example is the barber pole illusion, in which lines appear to translate orthogonally to their true direction of motion. Another is the illusory perception of incoherence when simple rectilinear objects translate or rotate behind disconnected apertures. Studies of these illusions suggest that visual motion processes frequently rely on simple form cues.
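A standard way to formalize the aperture problem is the intersection-of-constraints (IOC) computation: each aperture reveals only the component of motion along the normal of the line it contains, so a single aperture is consistent with a whole family of velocities, but two or more non-parallel constraints pin down the true pattern velocity. A minimal sketch (the specific velocity and orientations are invented for illustration):

```python
import numpy as np

# Each aperture i exposes only the speed s_i of the line along its
# normal n_i, so the true velocity v must satisfy v . n_i = s_i.
# With >= 2 non-parallel constraints, least squares recovers v.

true_v = np.array([3.0, 1.0])                      # hidden pattern velocity
angles = np.deg2rad([20.0, 75.0, 130.0])           # line-normal orientations
normals = np.stack([np.cos(angles), np.sin(angles)], axis=1)
normal_speeds = normals @ true_v                   # what each aperture "sees"

# Intersection of constraints as a least-squares solve
v_hat, *_ = np.linalg.lstsq(normals, normal_speeds, rcond=None)
print(v_hat)   # recovers [3.0, 1.0]
```

With a single constraint the system is underdetermined, which is exactly the ambiguity the barber pole illusion exploits.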

2001 ◽  
Vol 13 (6) ◽  
pp. 1243-1253 ◽  
Author(s):  
Rajesh P. N. Rao ◽  
David M. Eagleman ◽  
Terrence J. Sejnowski

When a flash is aligned with a moving object, subjects perceive the flash to lag behind the moving object. Two different models have been proposed to explain this “flash-lag” effect. In the motion extrapolation model, the visual system extrapolates the location of the moving object to counteract neural propagation delays, whereas in the latency difference model, it is hypothesized that moving objects are processed and perceived more quickly than flashed objects. However, recent psychophysical experiments suggest that neither of these interpretations is feasible; Eagleman and Sejnowski (2000a, 2000b, 2000c) instead hypothesized that the visual system uses data from the future of an event before committing to an interpretation. We formalize this idea in terms of the statistical framework of optimal smoothing and show that a model based on smoothing accounts for the shape of psychometric curves from a flash-lag experiment involving random reversals of motion direction. The smoothing model demonstrates how the visual system may enhance perceptual accuracy by relying not only on data from the past but also on data collected from the immediate future of an event.
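The filtering-versus-smoothing distinction at the heart of this account can be illustrated with a toy estimator (not the authors' model; the trajectory, noise level, and window sizes are invented): a causal filter estimates position from past samples only, while a smoother also uses samples collected just after each moment, which both reduces error and delays the commitment to an interpretation.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(100)
true_pos = 0.5 * t                          # constant-velocity trajectory
obs = true_pos + rng.normal(0, 2.0, t.size)  # noisy position samples

def causal_mean(x, w):
    # filter: average over the last w samples (past data only)
    return np.array([x[max(0, i - w + 1):i + 1].mean() for i in range(x.size)])

def smoothed_mean(x, w):
    # smoother: average over a window centered on each sample,
    # i.e., using data from both the past AND the immediate future
    return np.array([x[max(0, i - w):i + w + 1].mean() for i in range(x.size)])

interior = slice(20, -20)  # ignore edge effects
err_filter = np.abs(causal_mean(obs, 9)[interior] - true_pos[interior]).mean()
err_smooth = np.abs(smoothed_mean(obs, 4)[interior] - true_pos[interior]).mean()
print(err_filter > err_smooth)   # the smoothed estimate is more accurate
```

The causal filter also lags behind the true trajectory, while the centered smoother does not; in the smoothing account, it is this retrospective estimation that shapes where the moving object is perceived relative to the flash.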


2010 ◽  
Vol 104 (5) ◽  
pp. 2886-2899 ◽  
Author(s):  
Thaddeus B. Czuba ◽  
Bas Rokers ◽  
Alexander C. Huk ◽  
Lawrence K. Cormack

Two binocular cues are thought to underlie the visual perception of three-dimensional (3D) motion: a disparity-based cue, which relies on changes in disparity over time, and a velocity-based cue, which relies on interocular velocity differences. The respective building blocks of these cues, instantaneous disparity and retinal motion, exhibit very distinct spatial and temporal signatures. Although these two cues are synchronous in naturally moving objects, disparity-based and velocity-based mechanisms can be dissociated experimentally. We therefore investigated how the relative contributions of these two cues change across a range of viewing conditions. We measured direction-discrimination sensitivity for motion through depth across a wide range of eccentricities and speeds for disparity-based stimuli, velocity-based stimuli, and “full cue” stimuli containing both changing disparities and interocular velocity differences. Surprisingly, the pattern of sensitivity for velocity-based stimuli was nearly identical to that for full cue stimuli across the entire extent of the measured spatiotemporal surface and both were clearly distinct from those for the disparity-based stimuli. These results suggest that for direction discrimination outside the fovea, 3D motion perception primarily relies on the velocity-based cue with little, if any, contribution from the disparity-based cue.
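The statement that the two cues are synchronous for naturally moving objects follows from a simple identity: since binocular disparity is the difference between the two eyes' retinal positions, its rate of change equals the interocular velocity difference. A short numeric check on a toy trajectory (the retinal paths here are invented for illustration):

```python
import numpy as np

# For natural motion in depth, d(disparity)/dt = v_left - v_right,
# so the changing-disparity (CD) and interocular-velocity-difference
# (IOVD) signals necessarily agree.

t = np.linspace(0, 1, 101)
x_left = 2.0 * t          # retinal position in the left eye (deg)
x_right = -1.0 * t        # retinal position in the right eye (deg)

disparity = x_left - x_right
cd = np.gradient(disparity, t)                            # CD signal
iovd = np.gradient(x_left, t) - np.gradient(x_right, t)   # IOVD signal
print(np.allclose(cd, iovd))   # True: the cues agree for natural motion
```

Because the cues covary in natural viewing, dissociating them requires artificial stimuli that carry one signal while silencing the other, as in the experiments described above.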


Perception ◽  
2018 ◽  
Vol 47 (7) ◽  
pp. 735-750 ◽  
Author(s):  
Lindsey M. Shain ◽  
J. Farley Norman

An experiment required younger and older adults to estimate coherent visual motion direction from multiple motion signals, where each motion signal was locally ambiguous with respect to the true direction of pattern motion. Thus, accurate performance required the successful integration of motion signals across space (i.e., required solution of the aperture problem). The observers viewed arrays of either 64 or 9 moving line segments; because these lines moved behind apertures, their individual local motions were ambiguous with respect to direction (i.e., were subject to the aperture problem). Following 2.4 seconds of pattern motion on each trial (true motion directions ranged over the entire range of 360° in the fronto-parallel plane), the observers estimated the coherent direction of motion. There was an effect of direction, such that cardinal directions of pattern motion were judged with less error than oblique directions. In addition, a large effect of aging occurred: the average absolute errors of the older observers were 46% and 30.4% higher in magnitude than those exhibited by the younger observers for the 64 and 9 aperture conditions, respectively. Finally, the observers’ precision markedly deteriorated as the number of apertures was reduced from 64 to 9.
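Because the true directions span the full 360° range, scoring the absolute error of a direction estimate requires handling wraparound (an estimate of 350° is only 20° from a true direction of 10°). A small hypothetical helper of the kind such an analysis needs:

```python
# Hypothetical helper (not from the paper): absolute angular error
# between an estimated and a true motion direction, in degrees,
# with wraparound at 360.

def angular_error(estimate_deg, true_deg):
    diff = (estimate_deg - true_deg) % 360.0
    return min(diff, 360.0 - diff)

print(angular_error(350.0, 10.0))   # 20.0, not 340.0
print(angular_error(90.0, 90.0))    # 0.0
```

Averaging this quantity over trials gives the average absolute error compared across age groups and aperture conditions above.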


Author(s):  
Maggie Shiffrar ◽  
Christina Joseph

The phenomenon of apparent motion, or the illusory perception of movement from rapidly displayed static images, provides an excellent platform for the study of how perceptual systems analyze input over time and space. Studies of the human body in apparent motion further suggest that the visual system is also influenced by an observer’s motor experience with his or her own body. As a result, the human visual system sometimes processes human movement differently from object movement. For example, under apparent motion conditions in which inanimate objects appear to traverse the shortest possible paths of motion, human motion instead appears to follow longer, biomechanically plausible paths of motion. Psychophysical and brain imaging studies converge in supporting the hypothesis that the visual analysis of human movement differs from the visual analysis of nonhuman movements whenever visual motion cues are consistent with an observer’s motor repertoire of possible human actions.


2008 ◽  
Vol 31 (2) ◽  
pp. 220-221 ◽  
Author(s):  
David Whitney

Accurate perception of moving objects would be useful; accurate visually guided action is crucial. Visual motion across the scene influences perceived object location and the trajectory of reaching movements to objects. In this commentary, I propose that the visual system assigns the position of any object based on the predominant motion present in the scene, and that this is used to guide reaching movements to compensate for delays in visuomotor processing.
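The compensation scheme the commentary proposes can be sketched as a position assignment shifted along the scene's predominant motion by an amount matching the processing delay (the delay, velocity, and position values below are illustrative, not taken from the commentary):

```python
# Sketch: assign an object's position shifted along the predominant
# scene motion to compensate a visuomotor processing delay.
delay_s = 0.1                  # assumed visuomotor delay (s)
scene_velocity = (5.0, 0.0)    # predominant scene motion (deg/s)
retinal_position = (2.0, 3.0)  # measured position (deg)

assigned = (retinal_position[0] + delay_s * scene_velocity[0],
            retinal_position[1] + delay_s * scene_velocity[1])
print(assigned)   # (2.5, 3.0): shifted along the scene motion
```

A reach guided toward the shifted position arrives where the object actually is once the delay has elapsed, assuming the motion persists.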


1991 ◽  
Vol 66 (3) ◽  
pp. 651-673 ◽  
Author(s):  
D. S. Yamasaki ◽  
R. H. Wurtz

1. Ibotenic acid lesions in the monkey's middle temporal area (MT) and the medial superior temporal area (MST) in the superior temporal sulcus (STS) have previously been shown to produce a deficit in initiation of smooth-pursuit eye movements to moving visual targets. The deficits, however, recover within a few days. In the present experiments we investigated the factors that influence that recovery. 2. We tested two aspects of the monkey's ability to use motion information to acquire moving targets. We used eye-position error as a measure of the monkey's ability to make accurate initial saccades to the moving target. We measured eye speed within the first 100 ms after the saccade to evaluate the monkey's initial smooth pursuit. 3. We determined that pursuit recovery was not dependent specifically on the use of neurotoxic lesions. Although the rate of recovery was slightly altered by replacing the usual neurotoxin (ibotenic acid) with another neurotoxin [alpha-amino-3-hydroxy-5-methyl-4-isoxazolepropionic acid (AMPA)] or with an electrolytic lesion, pursuit recovery still occurred within a period of days to weeks. 4. There was a relationship between the size and location of the lesion and the recovery time. The time to recovery for eye-position error and initial eye speed increased with the fraction of MT removed. Whether the rate of recovery and size of lesions within regions on the anterior bank were related was unresolved. 5. We found that a large AMPA lesion within the STS that removed all of MT and nearly all of MST drastically altered the rate of recovery. Recovery was incomplete more than 7 mo after the lesion. Even with this lesion, however, the monkey's ability to use motion information for pursuit was not completely eliminated. 6. The large lesion also included parts of areas V1, V2, V3, and V4, but analysis of the visual fields associated with this lesion indicated that these areas probably did not have a substantial effect on recovery. 7. 
We tested whether visual motion experience of the monkey after a lesion was necessary for recovery by limiting the monkey's experience either by using a mask or by using 4-Hz stroboscopic illumination. In one monkey the eye-position error component of pursuit was prolonged to greater than 2 wk, but recovery of eye speed was not. Reduced motion experience had little effect on recovery in the other two monkeys. These results suggest that such visual motion experience is not necessary for the recovery of pursuit.


2012 ◽  
Vol 5 (1) ◽  
pp. 1-10
Author(s):  
Mateusz Woźniak

The brain system responsible for visual perception has been extensively studied. The visual system analyses a wide variety of stimuli in order to let us create an adaptive representation of the surrounding world. But among the vast amounts of processed information are visual cues describing our own bodies. These cues constitute our so-called body image. We tend to perceive it as a relatively stable structure, but recent research, especially within the domain of virtual reality, casts doubt on this assumption. New problems arise concerning how we perceive others' and our own bodies in virtual space, and how this influences our experience of ourselves and of physical reality. Recent studies show that how we see our avatars influences how we behave in artificial worlds. This introduces a brand new way of thinking about human embodiment. Virtual reality allows us to transcend the ordinary visual-sensory-motor integration and create new ways to experience embodiment, temporarily replacing the permanent body image with almost any imaginable digital one.


Perception ◽  
1997 ◽  
Vol 26 (1_suppl) ◽  
pp. 132-132
Author(s):  
S Edelman ◽  
S Duvdevani-Bar

To recognise a previously seen object, the visual system must overcome the variability in the object's appearance caused by factors such as illumination and pose. It is possible to counter the influence of these factors by learning to interpolate between stored views of the target object, taken under representative combinations of viewing conditions. Routine visual tasks, however, typically require not so much recognition as categorisation, that is, making sense of objects not seen before. Despite persistent practical difficulties, theorists in computer vision and visual perception traditionally favour the structural route to categorisation, according to which forming a description of a novel shape in terms of its parts and their spatial relationships is a prerequisite to the ability to categorise it. In contrast, we demonstrate that knowledge of instances of each of several representative categories can provide the necessary computational substrate for the categorisation of their new instances, as well as for representation and processing of radically novel shapes not belonging to any of the familiar categories. The representational scheme underlying this approach, according to which objects are encoded by their similarities to entire reference shapes (S Edelman, 1997 Behavioral and Brain Sciences in press), is computationally viable, and is readily mapped onto the mechanisms of biological vision revealed by recent psychophysical and physiological studies.
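The similarity-to-reference-shapes scheme can be sketched in a few lines: a novel object is encoded by its similarities to a handful of stored exemplars and assigned to the nearest category. The feature vectors and category names below are invented purely for illustration, not taken from the paper:

```python
import numpy as np

# Toy reference shapes, each encoded as a small feature vector
references = {
    "cup":   np.array([0.9, 0.1, 0.3]),
    "chair": np.array([0.2, 0.8, 0.7]),
}

def categorise(novel):
    # Encode the novel object by its similarity (negative distance)
    # to each stored reference shape, then pick the most similar one.
    sims = {name: -np.linalg.norm(novel - ref)
            for name, ref in references.items()}
    return max(sims, key=sims.get)

print(categorise(np.array([0.8, 0.2, 0.25])))   # "cup"
```

The vector of similarities itself, not just the winning label, serves as the representation, which is what lets the scheme also describe radically novel shapes that match no stored category well.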

