Disentangling Representations of Object Shape and Object Category in Human Visual Cortex: The Animate–Inanimate Distinction

2016 ◽  
Vol 28 (5) ◽  
pp. 680-692 ◽  
Author(s):  
Daria Proklova ◽  
Daniel Kaiser ◽  
Marius V. Peelen

Objects belonging to different categories evoke reliably different fMRI activity patterns in human occipitotemporal cortex, with the most prominent distinction being that between animate and inanimate objects. An unresolved question is whether these categorical distinctions reflect category-associated visual properties of objects or whether they genuinely reflect object category. Here, we addressed this question by measuring fMRI responses to animate and inanimate objects that were closely matched for shape and low-level visual features. Univariate contrasts revealed animate- and inanimate-preferring regions in ventral and lateral temporal cortex even for individually matched object pairs (e.g., snake–rope). Using representational similarity analysis, we mapped out brain regions in which the pairwise dissimilarity of multivoxel activity patterns (neural dissimilarity) was predicted by the objects' pairwise visual dissimilarity and/or their categorical dissimilarity. Visual dissimilarity was measured as the time it took participants to find a unique target among identical distractors in three visual search experiments, where we separately quantified overall dissimilarity, outline dissimilarity, and texture dissimilarity. All three visual dissimilarity structures predicted neural dissimilarity in regions of visual cortex. Interestingly, these analyses revealed several clusters in which categorical dissimilarity predicted neural dissimilarity after regressing out visual dissimilarity. Together, these results suggest that the animate–inanimate organization of human visual cortex is not fully explained by differences in the characteristic shape or texture properties of animals and inanimate objects. Instead, representations of visual object properties and object category may coexist in more anterior parts of the visual system.
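The core representational similarity logic described above (modeling neural dissimilarity from visual and categorical predictors, then testing whether category still predicts the residuals after regressing out visual dissimilarity) can be sketched as follows. All quantities are synthetic stand-ins for illustration, not the study's data:

```python
import numpy as np
from scipy.spatial.distance import squareform
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

n_objects = 8                                  # e.g., 4 animate / 4 inanimate items
category = np.repeat([0, 1], 4)

# Predictor RDMs as condensed upper-triangle vectors (n*(n-1)/2 pairs)
visual_rdm = rng.random(n_objects * (n_objects - 1) // 2)
category_rdm = squareform((category[:, None] != category[None, :]).astype(float))

# Synthetic "neural" RDM mixing visual and categorical structure plus noise
neural_rdm = 0.6 * visual_rdm + 0.4 * category_rdm + 0.1 * rng.random(visual_rdm.size)

# Regress out visual dissimilarity, then ask whether category predicts the residuals
X = np.column_stack([np.ones_like(visual_rdm), visual_rdm])
beta, *_ = np.linalg.lstsq(X, neural_rdm, rcond=None)
residuals = neural_rdm - X @ beta
rho, p = spearmanr(category_rdm, residuals)
print(f"category-residual Spearman rho: {rho:.2f}")
```

A positive residual correlation is the toy analogue of the paper's finding that categorical dissimilarity predicts neural dissimilarity beyond visual dissimilarity; in practice the visual predictor would come from the behavioral search experiments rather than random numbers.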

2015 ◽  
Vol 27 (11) ◽  
pp. 2117-2125 ◽  
Author(s):  
Reshanne R. Reeder ◽  
Francesca Perini ◽  
Marius V. Peelen

Theories of visual selective attention propose that top–down preparatory attention signals mediate the selection of task-relevant information in cluttered scenes. Neuroimaging and electrophysiology studies have provided correlative evidence for this hypothesis, finding increased activity in target-selective neural populations in visual cortex in the period between a search cue and target onset. In this study, we used online TMS to test whether preparatory neural activity in visual cortex is causally involved in naturalistic object detection. In two experiments, participants detected the presence of object categories (cars, people) in a diverse set of photographs of real-world scenes. TMS was applied over a region in posterior temporal cortex identified by fMRI as carrying category-specific preparatory activity patterns. Results showed that TMS applied over posterior temporal cortex before scene onset (−200 and −100 msec) impaired the detection of object categories in subsequently presented scenes, relative to vertex and early visual cortex stimulation. This effect was specific to category level detection and was related to the type of attentional template participants adopted, with the strongest effects observed in participants adopting category level templates. These results provide evidence for a causal role of preparatory attention in mediating the detection of objects in cluttered daily-life environments.


10.1167/8.7.2 ◽  
2008 ◽  
Vol 8 (7) ◽  
pp. 2 ◽  
Author(s):  
Fang Fang ◽  
Daniel Kersten ◽  
Scott O. Murray

2020 ◽  
Author(s):  
Munendo Fujimichi ◽  
Hiroki Yamamoto ◽  
Jun Saiki

Are visual representations in the human early visual cortex necessary for visual working memory (VWM)? Previous studies suggest that VWM is underpinned by distributed representations across several brain regions, including the early visual cortex. Notably, in these studies, participants had to memorize images under consistent visual conditions. However, in our daily lives, we must retain the essential visual properties of objects despite changes in illumination or viewpoint. The role of brain regions—particularly the early visual cortices—in these situations remains unclear. The present study investigated whether the early visual cortex was essential for achieving stable VWM. Focusing on VWM for object surface properties, we conducted fMRI experiments while male and female participants performed a delayed roughness discrimination task in which sample and probe spheres were presented under varying illumination. By applying multi-voxel pattern analysis to brain activity in regions of interest, we found that the ventral visual cortex and intraparietal sulcus were involved in roughness VWM under changing illumination conditions. In contrast, VWM was not supported as robustly by the early visual cortex. These findings show that visual representations in the early visual cortex alone are insufficient for the robust roughness VWM representation required during changes in illumination.
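The cross-illumination decoding contrast reported above can be illustrated with a toy simulation: a classifier trained on roughness under one illumination generalizes to another illumination only if the region's roughness code is illumination-tolerant. The ROI labels and all parameters below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(5)
n_voxels, n_trials = 80, 40
labels = np.repeat([0, 1], n_trials // 2)      # smooth (0) vs rough (1) spheres

def decode_across_illumination(tolerant):
    """Train roughness centroids under illumination A, test under illumination B."""
    rough_code = rng.standard_normal(n_voxels)
    illum_codes = rng.standard_normal((2, n_voxels))

    def trials(illum):
        # tolerant ROI: the same roughness code under both illuminations;
        # early-visual-like ROI: the code is remixed by each illumination
        code = rough_code if tolerant else rng.standard_normal(n_voxels)
        x = (labels[:, None] * code + illum_codes[illum]
             + rng.standard_normal((n_trials, n_voxels)))
        return x - x.mean(0)                   # remove the illumination offset

    train, test = trials(0), trials(1)
    centroids = np.array([train[labels == k].mean(0) for k in (0, 1)])
    pred = np.argmin(((test[:, None] - centroids) ** 2).sum(-1), axis=1)
    return (pred == labels).mean()

acc_ventral = decode_across_illumination(tolerant=True)
acc_early = decode_across_illumination(tolerant=False)
print(f"tolerant ROI: {acc_ventral:.2f}, early-visual-like ROI: {acc_early:.2f}")
```

Under these assumptions only the illumination-tolerant region supports above-chance cross-illumination decoding, mirroring the reported ventral/intraparietal vs. early-visual dissociation.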


2019 ◽  
Author(s):  
Astrid A. Zeman ◽  
J. Brendan Ritchie ◽  
Stefania Bracci ◽  
Hans Op de Beeck

Abstract Deep Convolutional Neural Networks (CNNs) are gaining traction as the benchmark model of visual object recognition, with performance now surpassing humans. While CNNs can accurately assign one image to potentially thousands of categories, network performance could be the result of layers that are tuned to represent the visual shape of objects, rather than object category, since both are often confounded in natural images. Using two stimulus sets that explicitly dissociate shape from category, we correlate these two types of information with each layer of multiple CNNs. We also compare CNN output with fMRI activation along the human visual ventral stream by correlating artificial with biological representations. We find that CNNs encode category information independently from shape, peaking at the final fully connected layer in all tested CNN architectures. When comparing CNNs with fMRI brain data, we find that early visual cortex (V1) and early layers of CNNs encode shape information, whereas anterior ventral temporal cortex encodes category information, which correlates best with the final layer of CNNs. The interaction between shape and category that is found along the human visual ventral pathway is echoed in multiple deep networks. Our results suggest that CNNs represent category information independently from shape, much like the human visual system.
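The layer-wise analysis described here, correlating each layer's representational dissimilarity matrix (RDM) with shape and category model RDMs from a crossed stimulus set, can be sketched with simulated activations standing in for real CNN layers (all weights and sizes below are illustrative assumptions):

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.stats import spearmanr

rng = np.random.default_rng(2)
n_stim = 12
shape_id = np.tile([0, 1, 2, 3], 3)        # 4 shapes, fully crossed with...
category_id = np.repeat([0, 1, 2], 4)      # ...3 categories

# Binary model RDMs: 1 where two stimuli differ on that dimension
shape_rdm = squareform((shape_id[:, None] != shape_id[None, :]).astype(float))
category_rdm = squareform((category_id[:, None] != category_id[None, :]).astype(float))

def layer_rdm(shape_w, cat_w, n_units=100):
    """Simulate a layer mixing shape and category codes; return its RDM."""
    acts = (shape_w * rng.standard_normal((4, n_units))[shape_id]
            + cat_w * rng.standard_normal((3, n_units))[category_id]
            + 0.2 * rng.standard_normal((n_stim, n_units)))
    return pdist(acts, metric="correlation")

early_rdm = layer_rdm(shape_w=1.0, cat_w=0.1)   # early layer: shape-dominated
late_rdm = layer_rdm(shape_w=0.1, cat_w=1.0)    # final layer: category-dominated

for name, rdm in [("early", early_rdm), ("late", late_rdm)]:
    print(name,
          f"shape rho={spearmanr(shape_rdm, rdm)[0]:.2f}",
          f"category rho={spearmanr(category_rdm, rdm)[0]:.2f}")
```

Because shape and category are crossed in the design, the two model RDMs are decorrelated, which is what lets shape and category contributions be measured separately per layer.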


2020 ◽  
Vol 31 (1) ◽  
pp. 603-619 ◽  
Author(s):  
Mona Rosenke ◽  
Rick van Hoof ◽  
Job van den Hurk ◽  
Kalanit Grill-Spector ◽  
Rainer Goebel

Abstract Human visual cortex contains many retinotopic and category-specific regions. These brain regions have been the focus of a large body of functional magnetic resonance imaging research, significantly expanding our understanding of visual processing. As studying these regions requires accurate localization of their cortical location, researchers perform functional localizer scans to identify these regions in each individual. However, it is not always possible to conduct these localizer scans. Here, we developed and validated a functional region of interest (ROI) atlas of early visual and category-selective regions in human ventral and lateral occipito-temporal cortex. Results show that for the majority of functionally defined ROIs, cortex-based alignment results in lower between-subject variability compared to nonlinear volumetric alignment. Furthermore, we demonstrate that (1) the atlas accurately predicts the location of an independent dataset of ventral temporal cortex ROIs and of other atlases of place selectivity, motion selectivity, and retinotopy, and (2) in a left-out-subject cross-validation, the majority of voxels within our atlas respond mostly to the labeled category, demonstrating the utility of this atlas. The functional atlas is publicly available (download.brainvoyager.com/data/visfAtlas.zip) and can help identify the location of these regions in healthy subjects as well as in populations (e.g., blind people, infants) in which functional localizers cannot be run.


2016 ◽  
Vol 115 (4) ◽  
pp. 2246-2250 ◽  
Author(s):  
Daniel Kaiser ◽  
Damiano C. Azzalini ◽  
Marius V. Peelen

Neuroimaging research has identified category-specific neural response patterns to a limited set of object categories. For example, faces, bodies, and scenes evoke activity patterns in visual cortex that are uniquely traceable in space and time. It is currently debated whether these apparently categorical responses truly reflect selectivity for categories or instead reflect selectivity for category-associated shape properties. In the present study, we used a cross-classification approach on functional MRI (fMRI) and magnetoencephalographic (MEG) data to reveal both category-independent shape responses and shape-independent category responses. Participants viewed human body parts (hands and torsos) and pieces of clothing that were closely shape-matched to the body parts (gloves and shirts). Category-independent shape responses were revealed by training multivariate classifiers on discriminating shape within one category (e.g., hands versus torsos) and testing these classifiers on discriminating shape within the other category (e.g., gloves versus shirts). This analysis revealed significant decoding in large clusters in visual cortex (fMRI) starting from 90 ms after stimulus onset (MEG). Shape-independent category responses were revealed by training classifiers on discriminating object category (bodies and clothes) within one shape (e.g., hands versus gloves) and testing these classifiers on discriminating category within the other shape (e.g., torsos versus shirts). This analysis revealed significant decoding in bilateral occipitotemporal cortex (fMRI) and from 130 to 200 ms after stimulus onset (MEG). Together, these findings provide evidence for concurrent shape and category selectivity in high-level visual cortex, including category-level responses that are not fully explicable by two-dimensional shape properties.
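The cross-classification logic described above, training a classifier to discriminate shape within one category and testing it within the other, can be illustrated with simulated activity patterns and a simple nearest-centroid decoder (stimulus names reused from the abstract; all signal parameters are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(3)
n_voxels, n_trials = 60, 30

# Additive "shape" and "category" codes shared across conditions
shape_code = {"hand-like": rng.standard_normal(n_voxels),
              "torso-like": rng.standard_normal(n_voxels)}
category_code = {"body": rng.standard_normal(n_voxels),
                 "clothing": rng.standard_normal(n_voxels)}

def patterns(shape, category, n=n_trials):
    """Simulated trials carrying additive shape and category signals plus noise."""
    base = shape_code[shape] + category_code[category]
    return base + 1.5 * rng.standard_normal((n, n_voxels))

# Train on bodies: hands (label 0) vs torsos (label 1)
train = np.vstack([patterns("hand-like", "body"), patterns("torso-like", "body")])
labels = np.repeat([0, 1], n_trials)
centroids = np.array([train[labels == k].mean(axis=0) for k in (0, 1)])

# Test on clothing: gloves vs shirts share the shape labels, not the category
test = np.vstack([patterns("hand-like", "clothing"), patterns("torso-like", "clothing")])
pred = np.argmin(((test[:, None, :] - centroids[None]) ** 2).sum(-1), axis=1)
acc = (pred == labels).mean()
print(f"cross-category shape decoding accuracy: {acc:.2f}")
```

Above-chance transfer here is possible only because the simulated shape code is shared across categories, which is exactly the inference the cross-classification design licenses; the symmetric analysis (train on one shape, test on the other) isolates a shape-independent category code.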


2017 ◽  
Author(s):  
Daria Proklova ◽  
Daniel Kaiser ◽  
Marius V. Peelen

Abstract Human high-level visual cortex shows a distinction between animate and inanimate objects, as revealed by fMRI. Recent studies have shown that object animacy can similarly be decoded from MEG sensor patterns. Which object properties drive this decoding? Here, we disentangled the influence of perceptual and categorical object properties by presenting perceptually matched objects (e.g., snake and rope) that were nonetheless easily recognizable as being animate or inanimate. In a series of behavioral experiments, three aspects of perceptual dissimilarity of these objects were quantified: overall dissimilarity, outline dissimilarity, and texture dissimilarity. Neural dissimilarity of MEG sensor patterns was modeled using regression analysis, in which perceptual dissimilarity (from the behavioral experiments) and categorical dissimilarity served as predictors of neural dissimilarity. We found that perceptual dissimilarity was strongly reflected in MEG sensor patterns from 80 ms after stimulus onset, with separable contributions of outline and texture dissimilarity. Surprisingly, when controlling for perceptual dissimilarity, MEG patterns did not carry information about object category (animate vs. inanimate) at any time point. Nearly identical results were found in a second MEG experiment that required basic-level object recognition. These results suggest that MEG sensor patterns do not capture object animacy independently of perceptual differences between animate and inanimate objects. This is in contrast to results observed in fMRI using the same stimuli, task, and analysis approach: fMRI showed a highly reliable categorical distinction in visual cortex even when controlling for perceptual dissimilarity. Results thus point to a discrepancy in the information contained in multivariate fMRI and MEG patterns.


2021 ◽  
Vol 17 (8) ◽  
pp. e1009267
Author(s):  
Kshitij Dwivedi ◽  
Michael F. Bonner ◽  
Radoslaw Martin Cichy ◽  
Gemma Roig

The human visual cortex enables visual perception through a cascade of hierarchical computations in cortical regions with distinct functionalities. Here, we introduce an AI-driven approach to discover the functional mapping of the visual cortex. We related human brain responses to scene images measured with functional MRI (fMRI) systematically to a diverse set of deep neural networks (DNNs) optimized to perform different scene perception tasks. We found a structured mapping between DNN tasks and brain regions along the ventral and dorsal visual streams. Low-level visual tasks mapped onto early brain regions, 3-dimensional scene perception tasks mapped onto the dorsal stream, and semantic tasks mapped onto the ventral stream. This mapping was of high fidelity, with more than 60% of the explainable variance in nine key regions being explained. Together, our results provide a novel functional mapping of the human visual cortex and demonstrate the power of the computational approach.
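The fidelity figure quoted above rests on the standard encoding-model practice of expressing a model's explained variance relative to a noise ceiling estimated from measurement reliability. A minimal sketch of that logic, with simulated features and responses standing in for DNN activations and fMRI data, and a simplified split-half ceiling convention:

```python
import numpy as np

rng = np.random.default_rng(4)
n_images, n_feats = 200, 10

features = rng.standard_normal((n_images, n_feats))    # DNN task features per scene
true_w = rng.standard_normal(n_feats)
signal = features @ true_w                             # a voxel's "true" response

# Two measurement repetitions: same signal, independent noise
rep1 = signal + rng.standard_normal(n_images)
rep2 = signal + rng.standard_normal(n_images)

# Split-half reliability as a (simplified) noise ceiling on explainable variance
ceiling = np.corrcoef(rep1, rep2)[0, 1]

# Encoding model: fit features to one repetition, evaluate on the other
w, *_ = np.linalg.lstsq(features, rep1, rcond=None)
model_r = np.corrcoef(features @ w, rep2)[0, 1]
explained_fraction = model_r ** 2 / ceiling
print(f"fraction of explainable variance explained: {explained_fraction:.2f}")
```

In real analyses the ceiling estimate is usually bias-corrected (e.g., Spearman-Brown) and the fit cross-validated across images; the toy version only shows why model performance is benchmarked against reliability rather than against a perfect score of 1.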


2017 ◽  
Vol 118 (6) ◽  
pp. 3194-3214 ◽  
Author(s):  
Rosemary A. Cowell ◽  
Krystal R. Leger ◽  
John T. Serences

Identifying an object and distinguishing it from similar items depends upon the ability to perceive its component parts as conjoined into a cohesive whole, but the brain mechanisms underlying this ability remain elusive. The ventral visual processing pathway in primates is organized hierarchically: Neuronal responses in early stages are sensitive to the manipulation of simple visual features, whereas neuronal responses in subsequent stages are tuned to increasingly complex stimulus attributes. It is widely assumed that feature-coding dominates in early visual cortex whereas later visual regions employ conjunction-coding in which object representations are different from the sum of their simple feature parts. However, no study in humans has demonstrated that putative object-level codes in higher visual cortex cannot be accounted for by feature-coding and that putative feature codes in regions prior to ventral temporal cortex are not equally well characterized as object-level codes. Thus the existence of a transition from feature- to conjunction-coding in human visual cortex remains unconfirmed, and if a transition does occur its location remains unknown. By employing multivariate analysis of functional imaging data, we measure both feature-coding and conjunction-coding directly, using the same set of visual stimuli, and pit them against each other to reveal the relative dominance of one vs. the other throughout cortex. Our results reveal a transition from feature-coding in early visual cortex to conjunction-coding in both inferior temporal and posterior parietal cortices. This novel method enables the use of experimentally controlled stimulus features to investigate population-level feature and conjunction codes throughout human cortex. NEW & NOTEWORTHY We use a novel analysis of neuroimaging data to assess representations throughout visual cortex, revealing a transition from feature-coding to conjunction-coding along both ventral and dorsal pathways. Occipital cortex contains more information about spatial frequency and contour than about conjunctions of those features, whereas inferotemporal and parietal cortices contain conjunction coding sites in which there is more information about the whole stimulus than its component parts.

