The time course of visual processing: Backward masking and natural scene categorisation

2005, Vol 45 (11), pp. 1459-1469. Author(s): Nadège Bacon-Macé, Marc J.-M. Macé, Michèle Fabre-Thorpe, Simon J. Thorpe

2001, Vol 13 (4), pp. 454-461. Author(s): Rufin VanRullen, Simon J. Thorpe

Experiments investigating the mechanisms involved in visual processing often fail to separate low-level encoding mechanisms from higher-level, behaviorally relevant ones. Using an alternating dual-task event-related potential (ERP) paradigm (animal or vehicle categorization) in which targets of one task are intermixed among distractors of the other, we show that visual categorization of a natural scene involves different mechanisms with different time courses: a perceptual, task-independent mechanism, followed by a task-related, category-independent process. Although average ERP responses reflect the visual category of the stimulus shortly after visual processing has begun (as early as 75-80 msec), this difference is not correlated with the subject's behavior until 150 msec poststimulus.


1999, Vol 11 (3), pp. 300-311. Author(s): Edmund T. Rolls, Martin J. Tovée, Stefano Panzeri

Backward masking can potentially provide evidence of the time needed for visual processing, a fundamental constraint that must be incorporated into computational models of vision. Although backward masking has been extensively used psychophysically, there is little direct evidence for the effects of visual masking on neuronal responses. Investigating the effects of a backward masking paradigm on the responses of neurons in the temporal visual cortex, we have shown that the mask interrupts the response of the neurons. Under conditions in which humans can just identify the stimulus, with stimulus onset asynchronies (SOAs) of 20 msec, neurons in macaques respond to their best stimulus for approximately 30 msec. Here we quantify the information that is available from the responses of single neurons under backward masking conditions when two to six faces were shown. We show that the information available decreases greatly as the mask is brought closer to the stimulus. The decrease is more marked than the decrease in firing rate, because it is the stimulus-selective part of the firing, not the spontaneous firing, that is especially attenuated by the mask, and also because the neuronal response is more variable at short SOAs. However, even at the shortest SOA of 20 msec, the information available averages 0.1 bits. This compares with 0.3 bits when only the 16-msec target stimulus is shown, and a typical value for such neurons of 0.4 to 0.5 bits with a 500-msec stimulus. The results thus show that considerable information is available from neuronal responses even under backward masking conditions that limit the neurons' main response to 30 msec. This provides evidence for how rapid the processing of visual information is in a cortical area and provides a fundamental constraint for understanding how cortical information processing operates.
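The quantity reported above — bits of information about stimulus identity carried by a single neuron's response — can be illustrated with a minimal plug-in estimator over binned spike counts. This is a sketch only, not the authors' method (Rolls and colleagues used more refined, bias-corrected procedures); the data here are hypothetical Poisson spike counts.

```python
import numpy as np

def stimulus_information(counts_per_stimulus, n_bins=4):
    """Plug-in estimate of I(stimulus; response) in bits from
    single-trial spike counts: discretize responses into common bins,
    then apply I = sum_{s,r} p(s,r) log2( p(s,r) / (p(s) p(r)) ).
    counts_per_stimulus: list of 1-D arrays, one array per stimulus."""
    all_counts = np.concatenate(counts_per_stimulus)
    # Shared bin edges so responses to different stimuli are comparable.
    edges = np.histogram_bin_edges(all_counts, bins=n_bins)
    joint = np.zeros((len(counts_per_stimulus), n_bins))
    for s, counts in enumerate(counts_per_stimulus):
        joint[s], _ = np.histogram(counts, bins=edges)
    joint /= joint.sum()                    # p(s, r)
    p_s = joint.sum(axis=1, keepdims=True)  # marginal p(s)
    p_r = joint.sum(axis=0, keepdims=True)  # marginal p(r)
    nz = joint > 0                          # skip empty cells (0 log 0 = 0)
    return float((joint[nz] * np.log2(joint[nz] / (p_s @ p_r)[nz])).sum())
```

A stimulus-selective neuron (very different count distributions across stimuli) yields close to the 1-bit ceiling for two stimuli, while an unselective one yields near zero — mirroring how masking, by truncating the selective part of the response, collapses the available information. Note that plug-in estimates carry a small positive bias at low trial counts.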


2012, Vol 24 (2), pp. 521-529. Author(s): Frank Oppermann, Uwe Hassler, Jörg D. Jescheniak, Thomas Gruber

The human cognitive system is highly efficient in extracting information from our visual environment. This efficiency is based on acquired knowledge that guides our attention toward relevant events and promotes the recognition of individual objects as they appear in visual scenes. The experience-based representation of such knowledge contains not only information about the individual objects but also about relations between them, such as the typical context in which individual objects co-occur. The present EEG study aimed at exploring the availability of such relational knowledge in the time course of visual scene processing, using oscillatory evoked gamma-band responses as a neural correlate for a currently activated cortical stimulus representation. Participants decided whether two simultaneously presented objects were conceptually coherent (e.g., mouse–cheese) or not (e.g., crown–mushroom). We obtained increased evoked gamma-band responses for coherent scenes compared with incoherent scenes beginning as early as 70 msec after stimulus onset within a distributed cortical network, including the right temporal, the right frontal, and the bilateral occipital cortex. This finding provides empirical evidence for the functional importance of evoked oscillatory activity in high-level vision beyond the visual cortex and, thus, gives new insights into the functional relevance of neuronal interactions. It also indicates the very early availability of experience-based knowledge that might be regarded as a fundamental mechanism for the rapid extraction of the gist of a scene.


2009, Vol 26 (1), pp. 35-49. Author(s): Thorsten Hansen, Karl R. Gegenfurtner

Form vision is traditionally regarded as processing primarily achromatic information. Previous investigations into the statistics of color and luminance in natural scenes have claimed that luminance and chromatic edges are not independent of each other and that any chromatic edge most likely occurs together with a luminance edge of similar strength. Here we computed the joint statistics of luminance and chromatic edges in over 700 calibrated color images of natural scenes. We found that isoluminant edges exist in natural scenes and are no rarer than pure luminance edges. Most edges combined luminance and chromatic information, but to varying degrees, such that luminance and chromatic edges were statistically independent of each other. Independence increased along successive stages of visual processing, from cones via postreceptoral color-opponent channels to edges. The results show that chromatic edge contrast is an independent source of information that can be linearly combined with other cues for the proper segmentation of objects in natural and artificial vision systems. Color vision may have evolved in response to natural scene statistics to gain access to this independent information.
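The kind of joint edge statistic at issue can be sketched in a toy form: compute edge strength separately in a luminance-like channel and a red-green opponent-like channel, then correlate the two maps across pixels (near-zero correlation being consistent with statistical independence). The channel definitions below (r+g and r-g) are crude stand-ins for the calibrated cone-opponent channels the authors actually used, and simple finite differences stand in for their edge operators.

```python
import numpy as np

def edge_strength(channel):
    """Gradient-magnitude edge map via finite differences."""
    gy, gx = np.gradient(channel.astype(float))
    return np.hypot(gx, gy)

def luminance_chromatic_edge_correlation(rgb):
    """Toy proxy for the joint edge statistics: Pearson correlation,
    across pixels, between luminance edge strength and red-green
    opponent edge strength. Values near 0 suggest independence."""
    r, g = rgb[..., 0], rgb[..., 1]
    luminance = r + g      # crude L+M luminance proxy (assumption)
    red_green = r - g      # crude L-M opponent proxy (assumption)
    lum_edges = edge_strength(luminance).ravel()
    chrom_edges = edge_strength(red_green).ravel()
    return float(np.corrcoef(lum_edges, chrom_edges)[0, 1])
```

An image whose chromatic edges all coincide with luminance edges gives a correlation near 1, whereas an image with a pure luminance step in one place and an isoluminant red-green step elsewhere gives a correlation near or below 0 — the latter pattern is what the natural-scene statistics above support.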


2016, Vol 33. Author(s): Filipp Schmidt, Andreas Weber, Anke Haberkamp

Visual perception is not instantaneous; the perceptual representation of our environment builds up over time. This can strongly affect our responses to visual stimuli. Here, we study the temporal dynamics of visual processing by analyzing the time course of priming effects induced by the well-known Ebbinghaus illusion. In slower responses, Ebbinghaus primes produce effects in accordance with their perceptual appearance. In fast responses, however, these effects are reversed. We argue that this dissociation originates from the difference between early, feedforward-mediated "gist of the scene" processing and later, feedback-mediated, more elaborate processing. Indeed, our findings are well explained by the differences between low-frequency representations mediated by the fast magnocellular pathway and high-frequency representations mediated by the slower parvocellular pathway. Our results demonstrate the potentially dramatic effect of response speed on the perception of visual illusions specifically, and on our actions in response to objects in our visual environment generally.


Perception, 1997, Vol 26 (1_suppl), pp. 134-134. Author(s): A Ehrenstein, B G Breitmeyer, K K Pritchard, M Hiscock, J Crisan

When the task is to detect two letter targets in a stream of non-letter (digit) distractors in rapid serial visual presentation, an attentional blink (AB; i.e., a deficit in the detection of a second target when it follows the first by approximately 100-500 ms) is often found to occur. In a series of four experiments with different numbers of display positions, with or without masking, we show that: (1) the AB, which occurs when all items are presented at the same display location, is reduced when targets and distractors are presented randomly dispersed over 4 or 9 adjacent locations; (2) the AB is reduced with the spatially distributed presentation even when backward masks are used in all possible stimulus locations and when the location of the next item in the sequence is predictable; (3) the AB is not due to either a location-specific forward or backward masking effect occurring at early levels of visual processing. We conclude that the AB is primarily a function of the interruption of late visual processing produced when the item following the first target occurs at the same location. It seems that, in order for the AB to occur, the item following the first target must be presented at the same location as that target, so that it can serve both as a distractor and as a mask interrupting or interfering with late visual processing.

