Causal neural mechanisms of context-based object recognition

eLife ◽  
2021 ◽  
Vol 10 ◽  
Author(s):  
Miles Wischnewski ◽  
Marius V Peelen

Objects can be recognized based on their intrinsic features, including shape, color, and texture. In daily life, however, such features are often not clearly visible, for example when objects appear in the periphery, in clutter, or at a distance. Interestingly, object recognition can still be highly accurate under these conditions when objects are seen within their typical scene context. What are the neural mechanisms of context-based object recognition? According to parallel processing accounts, context-based object recognition is supported by the parallel processing of object and scene information in separate pathways. Output of these pathways is then combined in downstream regions, leading to contextual benefits in object recognition. Alternatively, according to feedback accounts, context-based object recognition is supported by (direct or indirect) feedback from scene-selective to object-selective regions. Here, in three pre-registered transcranial magnetic stimulation (TMS) experiments, we tested a key prediction of the feedback hypothesis: that scene-selective cortex causally and selectively supports context-based object recognition before object-selective cortex does. Early visual cortex (EVC), object-selective lateral occipital cortex (LOC), and scene-selective occipital place area (OPA) were stimulated at three time points relative to stimulus onset while participants categorized degraded objects in scenes and intact objects in isolation, in different trials. Results confirmed our predictions: relative to isolated object recognition, context-based object recognition was selectively and causally supported by OPA at 160–200 ms after onset, followed by LOC at 260–300 ms after onset. These results indicate that context-based expectations facilitate object recognition by disambiguating object representations in the visual cortex.
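As a rough illustration of how such a chronometric TMS design can be summarized, the sketch below computes a context-specific stimulation effect per site and time window and compares OPA against the EVC control site. This is a minimal sketch under assumed conventions, not the authors' analysis code; the file name, column names, and window labels are hypothetical.

```python
# Hedged sketch (not the authors' pipeline): summarizing a chronometric TMS design.
# Assumes a hypothetical long-format table `trials.csv` with columns:
# subject, site (EVC/LOC/OPA), window (e.g. "160-200"), condition ("scene"/"isolated"), correct (0/1).
import pandas as pd
from scipy import stats

df = pd.read_csv("trials.csv")

# Mean accuracy per subject x site x window x condition
acc = (df.groupby(["subject", "site", "window", "condition"])["correct"]
         .mean().unstack("condition"))

# Context-specific TMS effect: accuracy for degraded objects in scenes relative to
# intact isolated objects (more negative = larger context-specific disruption)
acc["context_effect"] = acc["scene"] - acc["isolated"]

# Compare the context effect under OPA vs EVC (control) stimulation
# within the 160-200 ms window with a paired t-test
w = acc.xs("160-200", level="window")["context_effect"].unstack("site")
t, p = stats.ttest_rel(w["OPA"], w["EVC"])
print(f"OPA vs EVC, 160-200 ms: t = {t:.2f}, p = {p:.3f}")
```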


2019 ◽  
Vol 31 (9) ◽  
pp. 1354-1367
Author(s):  
Yael Holzinger ◽  
Shimon Ullman ◽  
Daniel Harari ◽  
Marlene Behrmann ◽  
Galia Avidan

Visual object recognition is performed effortlessly by humans notwithstanding the fact that it requires a series of complex computations, which are, as yet, not well understood. Here, we tested a novel account of the representations used for visual recognition and their neural correlates using fMRI. The rationale is based on previous research showing that a set of representations, termed “minimal recognizable configurations” (MIRCs), which are computationally derived and have unique psychophysical characteristics, serve as the building blocks of object recognition. We contrasted the BOLD responses elicited by MIRC images derived from different categories (faces, objects, and places), by sub-MIRCs, which are visually similar to MIRCs but result in poor recognition, and by scrambled, unrecognizable images. Stimuli were presented in blocks, and participants indicated yes/no recognition for each image. We confirmed that MIRCs elicited higher recognition performance than sub-MIRCs for all three categories. Whereas fMRI activation in early visual cortex for both MIRCs and sub-MIRCs of each category did not differ from that elicited by scrambled images, high-level visual regions exhibited overall greater activation for MIRCs compared to sub-MIRCs or scrambled images. Moreover, MIRCs and sub-MIRCs from each category elicited enhanced activation in corresponding category-selective regions, including the fusiform face area and occipital face area (faces), lateral occipital cortex (objects), and the parahippocampal place area and transverse occipital sulcus (places). These findings reveal the psychological and neural relevance of MIRCs and enable us to make progress in developing a more complete account of object recognition.
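A minimal sketch of the kind of ROI-level contrast described here, assuming hypothetical per-subject mean beta estimates for one category-selective region; the numbers are placeholders, not the study's data.

```python
# Hedged sketch (not the authors' pipeline): ROI-level contrasts of MIRC,
# sub-MIRC, and scrambled conditions, using placeholder per-subject betas.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_subjects = 16
# Placeholder data standing in for per-subject ROI beta estimates (arbitrary units)
beta_mirc      = rng.normal(1.2, 0.4, n_subjects)
beta_sub_mirc  = rng.normal(0.7, 0.4, n_subjects)
beta_scrambled = rng.normal(0.3, 0.4, n_subjects)

# Greater activation for recognizable MIRCs than for visually similar sub-MIRCs?
t1, p1 = stats.ttest_rel(beta_mirc, beta_sub_mirc)
# Do sub-MIRCs still exceed the scrambled baseline in the category-selective ROI?
t2, p2 = stats.ttest_rel(beta_sub_mirc, beta_scrambled)
print(f"MIRC > sub-MIRC:      t = {t1:.2f}, p = {p1:.3f}")
print(f"sub-MIRC > scrambled: t = {t2:.2f}, p = {p2:.3f}")
```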


Cortex ◽  
2014 ◽  
Vol 59 ◽  
pp. 1-11 ◽  
Author(s):  
Christianne Jacobs ◽  
Tom A. de Graaf ◽  
Alexander T. Sack

2013 ◽  
Vol 26 (5) ◽  
pp. 483-502 ◽  
Author(s):  
Antonia Thelen ◽  
Micah M. Murray

This review article summarizes evidence that multisensory experiences at one point in time have long-lasting effects on subsequent unisensory visual and auditory object recognition. The efficacy of single-trial exposure to task-irrelevant multisensory events lies in its ability to modulate memory performance and brain activity in response to the unisensory components of these events presented later in time. Object recognition (either visual or auditory) is enhanced if the initial multisensory experience had been semantically congruent and can be impaired if this multisensory pairing was either semantically incongruent or entailed meaningless information in the task-irrelevant modality, when compared to objects encountered exclusively in a unisensory context. Processes active during encoding cannot straightforwardly explain these effects; performance on all initial presentations was indistinguishable despite leading to opposing effects with stimulus repetitions. Brain responses to unisensory stimulus repetitions differ during early processing stages (∼100 ms post-stimulus onset) according to whether or not they had been initially paired in a multisensory context. Moreover, the network exhibiting differential responses varies according to whether memory performance is enhanced or impaired. The collective findings we review indicate that multisensory associations formed via single-trial learning exert influences on later unisensory processing to promote distinct object representations that manifest as differentiable brain networks whose activity is correlated with memory performance. These influences occur incidentally, despite many intervening stimuli, and are distinguishable from the encoding/learning processes during the formation of the multisensory associations. The consequences of multisensory interactions thus persist over time to impact memory retrieval and object discrimination.


2012 ◽  
Vol 24 (4) ◽  
pp. 819-829 ◽  
Author(s):  
Henry Railo ◽  
Niina Salminen-Vaparanta ◽  
Linda Henriksson ◽  
Antti Revonsuo ◽  
Mika Koivisto

Chromatic information is processed by the visual system both at an unconscious level and at a level that results in conscious perception of color. It remains unclear whether both conscious and unconscious processing of chromatic information depend on activity in the early visual cortex or whether unconscious chromatic processing can also rely on other neural mechanisms. In this study, the contribution of early visual cortex activity to conscious and unconscious chromatic processing was studied using single-pulse TMS in three time windows 40–100 msec after stimulus onset in three conditions: conscious color recognition, forced-choice discrimination of consciously invisible color, and unconscious color priming. We found that conscious perception and both measures of unconscious processing of chromatic information depended on activity in early visual cortex 70–100 msec after stimulus presentation. Unconscious forced-choice discrimination was above chance only when participants reported perceiving some stimulus features (but not color).
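As a small illustration of the "above chance" criterion for unconscious forced-choice discrimination mentioned above, the following sketch tests hypothetical trial counts against the 50% chance level; the numbers are illustrative only, not the study's data.

```python
# Hedged sketch: is forced-choice color discrimination of a reportedly invisible
# stimulus above the 50% chance level? Counts below are hypothetical placeholders.
from scipy import stats

n_trials  = 200   # forced-choice trials on which the color was reported unseen
n_correct = 116   # hypothetical number of correct discriminations
result = stats.binomtest(n_correct, n_trials, p=0.5, alternative="greater")
print(f"accuracy = {n_correct / n_trials:.2f}, p = {result.pvalue:.4f}")
```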


2014 ◽  
Vol 26 (8) ◽  
pp. 1629-1643 ◽  
Author(s):  
Yetta Kwailing Wong ◽  
Cynthia Peng ◽  
Kristyn N. Fratus ◽  
Geoffrey F. Woodman ◽  
Isabel Gauthier

Most theories of visual processing propose that object recognition is achieved in higher visual cortex. However, we show that category selectivity for musical notation can be observed in the first ERP component called the C1 (measured 40–60 msec after stimulus onset) with music-reading expertise. Moreover, the C1 note selectivity was observed only when the stimulus category was blocked but not when the stimulus category was randomized. Under blocking, the C1 activity for notes predicted individual music-reading ability, and behavioral judgments of musical stimuli reflected music-reading skill. Our results challenge current theories of object recognition, indicating that the primary visual cortex can be selective for musical notation within the initial feedforward sweep of activity with perceptual expertise and with a testing context that is consistent with the expertise training, such as blocking the stimulus category for music reading.
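A minimal sketch of how a C1 mean amplitude in the 40–60 msec window might be extracted from epoched EEG and related to music-reading ability; the epoch layout, channel indices, and variable names are assumptions for illustration, not the authors' pipeline.

```python
# Hedged sketch (not the authors' pipeline): mean C1 amplitude 40-60 ms
# post-stimulus from epoched EEG, later correlated with a behavioral score.
import numpy as np
from scipy import stats

# Hypothetical inputs: epochs[subject] is an (n_trials, n_channels, n_times)
# array of note-evoked epochs sampled at `sfreq`, time-locked to stimulus onset.
sfreq = 500.0
times = np.arange(-0.1, 0.4, 1.0 / sfreq)          # epoch time axis in seconds
c1_window = (times >= 0.040) & (times <= 0.060)    # 40-60 ms post-stimulus
posterior_channels = [54, 55, 60, 61, 62]          # assumed occipito-parietal sites

def c1_amplitude(epochs_subject: np.ndarray) -> float:
    """Mean amplitude over trials, posterior channels, and the C1 window."""
    return epochs_subject[:, posterior_channels][:, :, c1_window].mean()

# c1 = np.array([c1_amplitude(e) for e in epochs])     # one value per subject
# r, p = stats.pearsonr(c1, reading_ability_scores)    # brain-behavior relation
```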


2018 ◽  
Vol 120 (2) ◽  
pp. 848-853 ◽  
Author(s):  
Daniel Kaiser ◽  
Radoslaw M. Cichy

Natural environments consist of multiple objects, many of which repeatedly occupy similar locations within a scene. For example, hats are seen on people’s heads, while shoes are most often seen close to the ground. Such positional regularities bias the distribution of objects across the visual field: hats are more often encountered in the upper visual field, while shoes are more often encountered in the lower visual field. Here we tested the hypothesis that typical visual field locations of objects facilitate cortical processing. We recorded functional MRI while participants viewed images of objects that were associated with upper or lower visual field locations. Using multivariate classification, we show that object information can be more successfully decoded from response patterns in object-selective lateral occipital cortex (LO) when the objects are presented in their typical location (e.g., shoe in the lower visual field) than when they are presented in an atypical location (e.g., shoe in the upper visual field). In a functional connectivity analysis, we relate this benefit to increased coupling between LO and early visual cortex, suggesting that typical object positioning facilitates information propagation across the visual hierarchy. Together, these results suggest that object representations in occipital visual cortex are tuned to the structure of natural environments. This tuning may support object perception in spatially structured environments.

NEW & NOTEWORTHY: In the real world, objects appear in predictable spatial locations. Hats, commonly appearing on people’s heads, often fall into the upper visual field. Shoes, mostly appearing on people’s feet, often fall into the lower visual field. Here we used functional MRI to demonstrate that such regularities facilitate cortical processing: objects encountered in their typical locations are coded more efficiently, which may allow us to effortlessly recognize objects in natural environments.
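For a concrete picture of the decoding analysis summarized above, the sketch below runs leave-one-run-out classification of object identity from ROI response patterns, separately for typically and atypically positioned objects; the variable names and data layout are assumptions, not the authors' code.

```python
# Hedged sketch (not the authors' code): leave-one-run-out decoding of object
# identity from LO patterns, run separately for typical and atypical positions.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

def decode(patterns: np.ndarray, labels: np.ndarray, runs: np.ndarray) -> float:
    """Mean cross-validated accuracy for object identity from ROI patterns.

    patterns: (n_samples, n_voxels) beta patterns from the ROI (e.g., LO)
    labels:   (n_samples,) object identity per sample
    runs:     (n_samples,) scanner run, used as the cross-validation group
    """
    clf = SVC(kernel="linear")
    scores = cross_val_score(clf, patterns, labels, groups=runs,
                             cv=LeaveOneGroupOut())
    return scores.mean()

# Hypothetical usage, assuming boolean masks for the two position conditions:
# acc_typical  = decode(lo_patterns[typical],  labels[typical],  runs[typical])
# acc_atypical = decode(lo_patterns[atypical], labels[atypical], runs[atypical])
# The positional-regularity benefit is the difference acc_typical - acc_atypical.
```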

