Motion extrapolation in the flash-lag effect depends on perceived, rather than physical speed

2021 ◽  
Author(s):  
Jane Yook ◽  
Lysha Lee ◽  
Simone Vossel ◽  
Ralph Weidner ◽  
Hinze Hogendoorn

In the flash-lag effect (FLE), a flash in spatiotemporal alignment with a moving object is often misperceived as lagging behind the moving object. One proposed explanation for the illusion is based on predictive extrapolation of motion trajectories. In this interpretation, observers require an estimate of the object's velocity to anticipate future positions, implying that the FLE depends on a neural representation of perceived velocity. By contrast, alternative models of the FLE based on differential latencies or temporal averaging should not rely on such a representation of velocity. Here, we test the extrapolation account by investigating whether the FLE is sensitive to illusory changes in perceived speed while physical speed remains constant. This was tested using rotational wedge stimuli with variable noise texture (Experiment 1) and luminance contrast (Experiment 2). We show that, for both manipulations, differences in perceived speed corresponded to differences in the FLE: dynamic (versus static) noise and low (versus high) contrast stimuli led to increases in perceived speed and in FLE magnitude. These effects were consistent across different textures and were not due to low-level factors. Our results support the idea that the FLE depends on a neural representation of velocity, which is consistent with mechanisms of motion extrapolation. Hence, the faster the perceived speed, the larger the extrapolation and the stronger the flash-lag.
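
On this account, the predicted lag is simply the represented speed multiplied by the compensated delay. A minimal sketch of that relationship, assuming a fixed compensated latency (the 80 ms delay and the speed values below are illustrative assumptions, not figures from the study):

```python
# Minimal sketch of the motion-extrapolation account of the flash-lag effect.
# Assumption: the visual system compensates a fixed neural delay by shifting
# the moving object forward along its trajectory at its *perceived* speed.
# The delay and speeds are illustrative values, not taken from the study.

NEURAL_DELAY_S = 0.080  # assumed compensated latency (80 ms), illustrative

def flash_lag_offset(perceived_speed_deg_per_s: float) -> float:
    """Predicted flash-lag magnitude (deg) under pure extrapolation."""
    return perceived_speed_deg_per_s * NEURAL_DELAY_S

physical_speed = 10.0                      # deg/s, held constant
for label, perceived in [("static noise / high contrast", 10.0),
                         ("dynamic noise / low contrast", 12.0)]:
    print(f"{label}: perceived {perceived} deg/s -> "
          f"FLE ~ {flash_lag_offset(perceived):.2f} deg")
# Same physical speed, larger perceived speed -> larger predicted lag.
```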

2019 ◽  
Author(s):  
Steven Wiesner ◽  
Ian W. Baumgart ◽  
Xin Huang

ABSTRACT
Natural scenes often contain multiple objects and surfaces. However, how neurons in the visual cortex represent multiple visual stimuli is not well understood. Previous studies have shown that, when multiple stimuli compete in one feature domain, the evoked neuronal response is biased toward the stimulus that has a stronger signal strength. Here we investigate how neurons in the middle temporal (MT) cortex of macaques represent multiple stimuli that compete in more than one feature domain. Visual stimuli were two random-dot patches moving in different directions. One stimulus had low luminance contrast and moved with high coherence, whereas the other had high contrast and moved with low coherence. We found that how MT neurons represent multiple stimuli depended on the spatial arrangement of the stimuli. When two stimuli were overlapping, MT responses were dominated by the stimulus component that had high contrast. When two stimuli were spatially separated within the receptive fields, the contrast dominance was abolished. We found the same results when using contrast to compete with motion speed. Our neural data and computer simulations using a V1-MT model suggest that the contrast dominance found with overlapping stimuli is due to normalization occurring at an input stage fed to MT, and MT neurons cannot overturn this bias based on their own feature selectivity. The interaction between spatially separated stimuli can largely be explained by normalization within MT. Our results revealed new rules on stimulus competition and highlighted the impact of hierarchical processing on representing multiple stimuli in the visual cortex.

SIGNIFICANCE STATEMENT
Previous studies have shown that the neural representation of multiple visual stimuli can be accounted for by a divisive normalization model. By using multiple stimuli that compete in more than one feature domain, we found that luminance contrast has a dominant effect in determining competition between multiple stimuli when they were overlapping but not spatially separated. Our results revealed that neuronal responses to multiple stimuli in a given cortical area cannot be simply predicted by the population neural responses elicited in that area by the individual stimulus components. To understand the neural representation of multiple stimuli, rather than considering response normalization only within the area of interest, one must consider the computations, including normalization, occurring along the hierarchical visual pathway.
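
The normalization account invoked here has a compact standard form: each component's drive is weighted by its signal strength and divided by the pooled strength. A hedged sketch of a Carandini-Heeger-style rule for two components (the parameter values are illustrative, not fitted to the MT data):

```python
import numpy as np

def normalized_response(drives, contrasts, sigma=0.1):
    """Divisive normalization over stimulus components.

    drives:    responses the neuron would give to each stimulus alone
    contrasts: signal strengths that weight each component in the pool
    sigma:     semi-saturation constant
    Illustrative sketch; parameters are not fitted to the MT data.
    """
    drives = np.asarray(drives, dtype=float)
    contrasts = np.asarray(contrasts, dtype=float)
    return np.sum(contrasts * drives) / (np.sum(contrasts) + sigma)

# Overlapping stimuli: if normalization happens at an input stage, the
# high-contrast component dominates the pooled response even when the
# neuron itself prefers the low-contrast component's direction.
pref_drive, null_drive = 1.0, 0.2   # neuron prefers the low-contrast direction
low_c, high_c = 0.1, 0.8
print(normalized_response([pref_drive, null_drive], [low_c, high_c]))
# -> 0.26, pulled toward the high-contrast (non-preferred) component's drive
```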


2008 ◽  
Vol 19 (11) ◽  
pp. 1087-1091 ◽  
Author(s):  
Gerrit W. Maus ◽  
Romi Nijhawan

The flash-lag effect, in which a moving object is perceived ahead of a colocalized flash, has led to keen empirical and theoretical debates. To test the proposal that a predictive mechanism overcomes neural delays in vision by shifting objects spatially, we asked observers to judge the final position of a bar moving into the retinal blind spot. The bar was perceived to disappear in positions well inside the unstimulated area. Given that photoreceptors are absent in the blind spot, the perceived shift must be based on the history of the moving object. Such predictive overshoots are suppressed when a moving object disappears abruptly from the retina, triggering retinal transient signals. No such transient-driven suppression occurs when the object disappears by virtue of moving into the blind spot. The extrapolated position of the moving bar revealed in this manner provides converging support for visual prediction.


1968 ◽  
Vol 26 (2) ◽  
pp. 407-416 ◽  
Author(s):  
Horace N. Reynolds

Ss were shown a rectangular object which moved transversely across their field of view and passed behind an opaque screen. The purpose was to investigate some of the factors affecting estimates of the time required for the occluded moving object to travel a given distance behind the screen. The factors selected for study were (1) method of viewing the moving object (pursuit, static fixation), (2) background structure (homogeneous, textured), and (3) object size. According to previous studies, these variables affect the perceived speed of a moving object and might therefore be expected to affect estimates of the duration of occluded traversal. The results did not show statistically significant differences among experimental groups, although data trends are discussed. An additional finding was that Ss significantly overestimated the duration of occluded traversal, consistent with a tendency to overestimate traversal distance. The experiment is related to Michotte's studies of “amodal perception” and discussed in terms of Gibson's stimulus information approach to perception.
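
The implicit computation being probed is an extrapolation: estimated occlusion time is roughly the occluded distance divided by the speed perceived at disappearance. A toy sketch of that prediction (all values illustrative, not from the experiment):

```python
# If observers extrapolate at the perceived speed, any manipulation that
# slows perceived speed should lengthen the estimated occluded-traversal
# time. Values are illustrative, not taken from the experiment.

def estimated_occlusion_time(screen_width_deg, perceived_speed_deg_per_s):
    return screen_width_deg / perceived_speed_deg_per_s

print(estimated_occlusion_time(8.0, 4.0))  # 2.0 s if perceived speed is veridical
print(estimated_occlusion_time(8.0, 3.2))  # 2.5 s if perceived speed drops by 20%
```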


2009 ◽  
Vol 71 (6) ◽  
pp. 1313-1324 ◽  
Author(s):  
Stefanie I. Becker ◽  
Ulrich Ansorge ◽  
Massimo Turatto

2008 ◽  
Vol 276 (1657) ◽  
pp. 781-786 ◽  
Author(s):  
Martin Stevens ◽  
Isabel S Winney ◽  
Abi Cantor ◽  
Julia Graham

Camouflage is an important strategy in animals to prevent predation. This includes disruptive coloration, where high-contrast markings placed at an animal's edge break up the true body shape. Successful disruption may also involve non-marginal markings found away from the body outline that create ‘false edges’ more salient than the true body form (‘surface disruption’). However, previous work has focused on breaking up the true body outline, not on surface disruption. Furthermore, while high contrast may enhance disruption, it is untested where on the body different contrasts should be placed for maximum effect. We used artificial prey presented to wild avian predators in the field to determine the effectiveness of surface disruption, and of different luminance contrasts placed at different locations on the prey. Disruptive coloration was no more effective when comprising high luminance contrast per se, but its effectiveness was dramatically increased with high-contrast markings placed away from the body outline, creating effective surface disruption. A model of avian visual edge processing showed that surface disruption does not make object detection more difficult simply by creating false edges away from the true body outline; its effect may also be based on a different visual mechanism. Our study has implications for whether animals can combine disruptive coloration with other ‘conspicuous’ signalling strategies.
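
The edge-processing model is the authors'; as a loose illustration of the kind of computation involved, a gradient-energy measure can index how salient the edges in an image are. A Sobel filter is my stand-in assumption here, not the avian vision model used in the paper:

```python
import numpy as np
from scipy.ndimage import sobel

def edge_energy(image: np.ndarray) -> float:
    """Total gradient magnitude; a crude proxy for edge salience."""
    gx, gy = sobel(image, axis=0), sobel(image, axis=1)
    return float(np.sqrt(gx**2 + gy**2).sum())

rng = np.random.default_rng(0)
prey = rng.random((64, 64))   # toy texture standing in for a prey target
print(edge_energy(prey))

# Surface disruption would be indexed by how much edge energy falls on
# internal false edges relative to the true outline, e.g. by masking the
# outline region and comparing the two sums.
```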


Perception ◽  
1996 ◽  
Vol 25 (5) ◽  
pp. 583-590 ◽  
Author(s):  
Jeroen B J Smeets ◽  
Eli Brenner ◽  
Sonia Trébuchet ◽  
Daniel R Mestre

An investigation was undertaken into whether judgments of time-to-contact between a laterally moving object and a bar are based on the direct perception of an optical variable (tau), or on the ratio between the perceived distance and perceived velocity of the object. A moving background was used to induce changes in the perceived velocities without changing the optical variables that specify time-to-contact. Background motion induced large systematic errors in the estimated time-to-contact. It is concluded that the judgment of time-to-contact is primarily based on the ratio between the perceived distance and the perceived velocity, and not on tau.
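
The two candidate computations make different predictions under induced motion. A toy numerical contrast (all values illustrative): tau, the relevant optical variable (the angular gap over its rate of closure), is fixed by the optics and untouched by a shift in perceived velocity, whereas the distance-over-velocity ratio inherits the induced bias.

```python
# Two candidate time-to-contact (TTC) computations. Illustrative values only.

distance = 2.0        # m, perceived distance from object to bar
velocity = 1.0        # m/s, physical lateral velocity
induced_bias = 0.25   # m/s shift in perceived velocity from background motion

ttc_tau = distance / velocity                     # tau-based estimate: set by the
                                                  # optics, immune to the bias
ttc_ratio = distance / (velocity - induced_bias)  # ratio estimate inherits bias

print(f"tau-based TTC:   {ttc_tau:.2f} s")    # 2.00 s
print(f"ratio-based TTC: {ttc_ratio:.2f} s")  # 2.67 s: a systematic error, as observed
```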


2001 ◽  
Vol 13 (6) ◽  
pp. 1243-1253 ◽  
Author(s):  
Rajesh P. N. Rao ◽  
David M. Eagleman ◽  
Terrence J. Sejnowski

When a flash is aligned with a moving object, subjects perceive the flash to lag behind the moving object. Two different models have been proposed to explain this “flash-lag” effect. In the motion extrapolation model, the visual system extrapolates the location of the moving object to counteract neural propagation delays, whereas in the latency difference model, it is hypothesized that moving objects are processed and perceived more quickly than flashed objects. However, recent psychophysical experiments suggest that neither of these interpretations is feasible (Eagleman & Sejnowski, 2000a, 2000b, 2000c); these authors hypothesized instead that the visual system uses data from the future of an event before committing to an interpretation. We formalize this idea in terms of the statistical framework of optimal smoothing and show that a model based on smoothing accounts for the shape of psychometric curves from a flash-lag experiment involving random reversals of motion direction. The smoothing model demonstrates how the visual system may enhance perceptual accuracy by relying not only on data from the past but also on data collected from the immediate future of an event.
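
Optimal smoothing estimates the state at time t from measurements both before and after t. A minimal sketch using a fixed-interval (Rauch-Tung-Striebel) smoother on a one-dimensional constant-velocity model, applied to motion that reverses direction; this is a generic textbook smoother under assumed noise parameters, not the authors' fitted model:

```python
import numpy as np

dt, q, r = 0.01, 5.0, 0.05          # step (s), process noise, measurement noise
F = np.array([[1, dt], [0, 1]])     # state transition: [position, velocity]
H = np.array([[1.0, 0.0]])          # only position is measured
Q = q * np.array([[dt**3 / 3, dt**2 / 2], [dt**2 / 2, dt]])
R = np.array([[r]])

def rts_smooth(zs):
    """Forward Kalman filter followed by a backward RTS smoothing pass."""
    n = len(zs)
    x, P = np.zeros(2), np.eye(2)
    xf, Pf, xp, Pp = [], [], [], []
    for z in zs:                                   # forward filtering pass
        x_pred, P_pred = F @ x, F @ P @ F.T + Q
        K = P_pred @ H.T @ np.linalg.inv(H @ P_pred @ H.T + R)
        x = x_pred + (K @ (z - H @ x_pred)).ravel()
        P = (np.eye(2) - K @ H) @ P_pred
        xf.append(x); Pf.append(P); xp.append(x_pred); Pp.append(P_pred)
    xs = [None] * n
    xs[-1] = xf[-1]
    for t in range(n - 2, -1, -1):                 # backward smoothing pass
        G = Pf[t] @ F.T @ np.linalg.inv(Pp[t + 1])
        xs[t] = xf[t] + G @ (xs[t + 1] - xp[t + 1])
    return np.array(xs)

# Motion that reverses direction mid-stream, as in the flash-lag experiments:
t = np.arange(0, 2, dt)
true_pos = np.where(t < 1, t, 2 - t)               # up, then back down
zs = true_pos + 0.05 * np.random.default_rng(1).standard_normal(len(t))
smoothed = rts_smooth(zs[:, None])
# Around the reversal, the smoothed position estimate is pulled back toward
# the turn point because future samples inform it: the flash-lag signature.
```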


Author(s):  
Katherine L. Hermann ◽  
Shridhar R. Singh ◽  
Isabelle A. Rosenthal ◽  
Dimitrios Pantazis ◽  
Bevil R. Conway

Hue and luminance contrast are the most basic visual features, emerging in early layers of convolutional neural networks trained to perform object categorization. In human vision, the timing of the neural computations that extract these features, and the extent to which they are determined by the same or separate neural circuits, is unknown. We addressed these questions using multivariate analyses of human brain responses measured with magnetoencephalography. We report four discoveries. First, it was possible to decode hue tolerant to changes in luminance contrast, and luminance contrast tolerant to changes in hue, consistent with the existence of separable neural mechanisms for these features. Second, the decoding time course for luminance contrast peaked 16-24 ms before hue and showed a more prominent secondary peak corresponding to decoding of stimulus cessation. These results support the idea that the brain uses luminance contrast as an updating signal to separate events within the constant stream of visual information. Third, neural representations of hue generalized to a greater extent across time, providing a neural correlate of the preeminence of hue over luminance contrast in perceptual grouping and memory. Finally, decoding of luminance contrast was more variable across participants for hues associated with daylight (orange and blue) than for anti-daylight (green and pink), suggesting that color-constancy mechanisms reflect individual differences in assumptions about natural lighting.
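
The decoding logic reduces to a simple recipe: train a linear classifier on the sensor pattern at each time point and track accuracy over time. A hedged sketch with toy data and scikit-learn (the classifier choice and data shapes are my assumptions, not the authors' MEG pipeline):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Time-resolved decoding sketch: one classifier per time point, tracking when
# a feature (e.g., hue vs. luminance contrast) becomes decodable from the
# sensor pattern. Toy random data; not the authors' pipeline.
rng = np.random.default_rng(0)
n_trials, n_sensors, n_times = 200, 64, 50
X = rng.standard_normal((n_trials, n_sensors, n_times))
y = rng.integers(0, 2, n_trials)          # e.g., hue A vs. hue B

accuracy = np.array([
    cross_val_score(LogisticRegression(max_iter=1000), X[:, :, t], y, cv=5).mean()
    for t in range(n_times)
])
# The peak latency of `accuracy` gives the decoding time course; training at
# one time point and testing at another yields the temporal-generalization
# matrix used to assess how well representations generalize across time.
```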


i-Perception ◽  
10.1068/id248 ◽  
2012 ◽  
Vol 3 (4) ◽  
pp. 248-248 ◽  
Author(s):  
Kevin R Brooks ◽  
Kirsten L Challinor

2010 ◽  
Vol 27 (1-2) ◽  
pp. 43-55 ◽  
Author(s):  
MICHAEL L. RISNER ◽  
FRANKLIN R. AMTHOR ◽  
TIMOTHY J. GAWNE

Abstract
Retinal ganglion cells (RGCs) are highly sensitive to changes in contrast, which is crucial for the detection of edges in a visual scene. However, in the natural environment, edges vary not just in contrast but also in the degree of blur, which can be caused by distance from the plane of fixation, motion, and shadows. Hence, blur is as much a characteristic of an edge as luminance contrast, yet its effects on the responses of RGCs are largely unexplored.

We examined the responses of rabbit RGCs to sharp edges varying by contrast and also to high-contrast edges varying by blur. The width of the blur profile ranged from 0.73 to 13.05 deg of visual angle. For most RGCs, blurring a high-contrast edge produced the same pattern of reduction of response strength and increase in latency as decreasing the contrast of a sharp edge. In support of this, we found a significant correlation between the amount of blur required to reduce the response by 50% and the size of the receptive fields, suggesting that blur may operate by reducing the range of luminance values within the receptive field. These RGCs cannot individually encode blur, and blur could only be estimated by comparing the responses of populations of neurons with different receptive field sizes. However, some RGCs showed a different pattern of changes in latency and magnitude with changes in contrast and blur; these neurons could encode blur directly.

We also tested whether the response of an RGC to a blurred edge was linear, that is, whether the response of a neuron to a sharp edge was equal to the response to a blurred edge plus the response to the missing spatial components that were the difference between a sharp and a blurred edge. Brisk-sustained cells were more linear; however, brisk-transient cells exhibited both linear and nonlinear behavior.
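
The linearity test amounts to an additivity check: the response to the sharp edge should equal the response to the blurred edge plus the response to the residual (sharp minus blurred). A toy sketch with a purely linear model cell, which passes the check by construction (real brisk-transient cells did not always):

```python
import numpy as np

# Additivity check sketched with a model cell whose response is a linear
# functional of the stimulus; real brisk-transient RGCs violated this.
rng = np.random.default_rng(0)
weights = rng.standard_normal(256)                 # toy receptive-field weights

def response(stim: np.ndarray) -> float:
    return float(weights @ stim)                   # linear model response

x = np.linspace(-1, 1, 256)
sharp = np.sign(x)                                 # sharp edge profile
blurred = np.tanh(x / 0.2)                         # blurred edge profile
residual = sharp - blurred                         # missing spatial components

lhs = response(sharp)
rhs = response(blurred) + response(residual)
print(np.isclose(lhs, rhs))                        # True for a linear cell
```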

