Neural coding of fine-grained object knowledge in perirhinal cortex

2017
Author(s): Amy Rose Price, Michael F. Bonner, Jonathan E. Peelle, Murray Grossman

Summary: Over 40 years of research has examined the role of the ventral visual stream in transforming retinal inputs into high-level representations of object identity [1–6]. However, there remains an ongoing debate over the role of the ventral stream in coding abstract semantic content, which relies on stored knowledge, versus perceptual content, which relies only on retinal inputs [7–12]. A major difficulty in adjudicating between these mechanisms is that the semantic similarity of objects is often highly confounded with their perceptual similarity (e.g., animate things are more perceptually similar to other animate things than to inanimate things). To address this problem, we developed a paradigm that exploits the statistical regularities of object colors while perfectly controlling for perceptual shape information, allowing us to dissociate lower-level perceptual features (i.e., color perception) from higher-level semantic knowledge (i.e., color meaning). Using multivoxel-pattern analyses of fMRI data, we observed a striking double dissociation between the processing of color information at a perceptual and at a semantic level along the posterior-to-anterior axis of the ventral visual pathway. Specifically, we found that the visual association region V4 assigned similar representations to objects with similar colors, regardless of object category. In contrast, perirhinal cortex, at the apex of the ventral visual stream, assigned similar representations to semantically similar objects, even when this was in opposition to their perceptual similarity. These findings suggest that perirhinal cortex untangles the representational space of lower-level perceptual features and organizes visual objects according to their semantic interpretations.
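The multivoxel-pattern analysis summarized above is, in essence, a representational similarity analysis: a model of perceptual color similarity and a model of color knowledge are each compared against the neural similarity structure of object-evoked patterns in a region such as V4 or perirhinal cortex. The following is a minimal sketch of that comparison, assuming synthetic placeholder patterns and model matrices rather than the study's actual stimuli or fMRI data.

```python
# Minimal representational similarity analysis (RSA) sketch.
# Hypothetical inputs: roi_patterns stands in for object-evoked voxel
# patterns from one ROI (e.g., V4 or perirhinal cortex); the two model
# RDMs stand in for perceptual color similarity and color knowledge.
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_objects, n_voxels = 24, 200
roi_patterns = rng.standard_normal((n_objects, n_voxels))                 # placeholder fMRI patterns
perceptual_rdm = squareform(pdist(rng.standard_normal((n_objects, 3))))   # placeholder color-space model
semantic_rdm = squareform(pdist(rng.standard_normal((n_objects, 5))))     # placeholder color-knowledge model

# Neural RDM: correlation distance between object-evoked patterns.
neural_rdm = squareform(pdist(roi_patterns, metric="correlation"))

def rdm_fit(neural, model):
    """Spearman correlation between the off-diagonal entries of two RDMs."""
    idx = np.triu_indices_from(neural, k=1)
    return spearmanr(neural[idx], model[idx]).correlation

print("perceptual model fit:", rdm_fit(neural_rdm, perceptual_rdm))
print("semantic model fit:  ", rdm_fit(neural_rdm, semantic_rdm))
```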

Author(s): Maya L. Rosen, Lucy A. Lurie, Kelly A. Sambrook, Andrew N. Meltzoff, Katie A. McLaughlin

2018, Vol 115 (35), pp. 8835-8840
Author(s): Hanlin Tang, Martin Schrimpf, William Lotter, Charlotte Moerman, Ana Paredes, ...

Making inferences from partial information constitutes a critical aspect of cognition. During visual perception, pattern completion enables recognition of poorly visible or occluded objects. We combined psychophysics, physiology, and computational models to test the hypothesis that pattern completion is implemented by recurrent computations, and we present three pieces of evidence consistent with this hypothesis. First, subjects robustly recognized objects even when they were rendered <15% visible, but recognition was largely impaired when processing was interrupted by backward masking. Second, invasive physiological recordings along the human ventral cortex exhibited visually selective responses to partially visible objects that were delayed compared with responses to whole objects, suggesting the need for additional computations. These physiological delays were correlated with the effects of backward masking. Third, state-of-the-art feed-forward computational architectures were not robust to partial visibility. However, recognition performance was recovered when the model was augmented with attractor-based recurrent connectivity. The recurrent model predicted which images of heavily occluded objects were easier or harder for humans to recognize, captured the effect of introducing a backward mask on recognition behavior, and was consistent with the physiological delays along the human ventral visual stream. These results provide a strong plausibility argument for the role of recurrent computations in making visual inferences from partial information.
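As an illustration of the attractor-based recurrence invoked above: recurrent dynamics can pull a heavily degraded feature vector back toward a stored object representation. The following is a minimal Hopfield-style sketch of that idea, assuming toy binary feature vectors; it is not the paper's actual architecture, and all sizes and names are illustrative.

```python
# Toy attractor-based pattern completion on binary feature vectors.
# Hopfield-style sketch of the general idea only; not the paper's model.
import numpy as np

rng = np.random.default_rng(1)
n_units, n_objects = 256, 5
stored = np.sign(rng.standard_normal((n_objects, n_units)))  # stored "whole object" codes

# Hebbian weights storing the object codes as attractors of the dynamics.
W = stored.T @ stored / n_units
np.fill_diagonal(W, 0.0)

def complete(partial, steps=20):
    """Run the recurrent dynamics so the state settles toward an attractor."""
    state = partial.copy()
    for _ in range(steps):
        state = np.sign(W @ state)
        state[state == 0] = 1.0
    return state

# Corrupt 85% of one object's features (loosely mirroring <15% visibility).
target = stored[0]
mask = rng.random(n_units) < 0.85
occluded = target.copy()
occluded[mask] = np.sign(rng.standard_normal(mask.sum()))

recovered = complete(occluded)
print("match before recurrence:", np.mean(occluded == target))
print("match after recurrence: ", np.mean(recovered == target))
```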


eLife, 2018, Vol 7
Author(s): Chris B Martin, Danielle Douglas, Rachel N Newsome, Louisa LY Man, Morgan D Barense

A significant body of research in cognitive neuroscience is aimed at understanding how object concepts are represented in the human brain. However, it remains unknown whether and where the visual and abstract conceptual features that define an object concept are integrated. We addressed this issue by comparing the neural pattern similarities among object-evoked fMRI responses with behavior-based models that independently captured the visual and conceptual similarities among these stimuli. Our results revealed evidence for distinctive coding of visual features in lateral occipital cortex, and conceptual features in the temporal pole and parahippocampal cortex. By contrast, we found evidence for integrative coding of visual and conceptual object features in perirhinal cortex. The neuroanatomical specificity of this effect was highlighted by results from a searchlight analysis. Taken together, our findings suggest that perirhinal cortex uniquely supports the representation of fully specified object concepts through the integration of their visual and conceptual features.
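One way to make the contrast between distinctive and integrative coding concrete is to regress a region's neural dissimilarity structure onto the visual and conceptual model matrices jointly and examine both weights. The sketch below illustrates that kind of analysis with synthetic placeholder matrices, not the study's behaviour-based models or fMRI data.

```python
# Sketch: regress a neural RDM onto visual and conceptual model RDMs jointly,
# one way of asking whether a region integrates both feature types.
# All matrices below are synthetic placeholders, not the study's data.
import numpy as np

rng = np.random.default_rng(2)
n_objects = 40
tri = np.triu_indices(n_objects, k=1)

def random_rdm(dim):
    """Euclidean-distance RDM over random feature vectors (placeholder)."""
    feats = rng.standard_normal((n_objects, dim))
    return np.linalg.norm(feats[:, None] - feats[None, :], axis=-1)

visual_rdm = random_rdm(10)      # stands in for the behaviour-based visual model
conceptual_rdm = random_rdm(10)  # stands in for the behaviour-based conceptual model

# Simulate a "perirhinal-like" neural RDM that reflects both models plus noise.
neural_rdm = 0.5 * visual_rdm + 0.5 * conceptual_rdm + 0.3 * random_rdm(10)

# Least-squares fit of the neural RDM's off-diagonal entries to both models.
X = np.column_stack([np.ones(tri[0].size), visual_rdm[tri], conceptual_rdm[tri]])
betas, *_ = np.linalg.lstsq(X, neural_rdm[tri], rcond=None)
print("visual beta:    ", betas[1])
print("conceptual beta:", betas[2])
```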


2019
Author(s): Sushrut Thorat

A mediolateral gradation in neural responses to images spanning animals to artificial objects is observed in the ventral temporal cortex (VTC). Which information streams drive this organisation remains an ongoing debate. Recently, Proklova et al. (2016) dissociated the visual shape and category (“animacy”) dimensions in a set of stimuli using a behavioural measure of visual feature information. fMRI responses revealed a neural cluster (the extra-visual animacy cluster, xVAC) that encoded category information unexplained by visual feature information, suggesting extra-visual contributions to the organisation in the ventral visual stream. We reassess these findings using Convolutional Neural Networks (CNNs) as models of the ventral visual stream. In contrast to the behavioural measures used in that study, the visual features developed in the CNN layers can categorise the shape-matched stimuli from Proklova et al. (2016). The category organisations in xVAC and VTC are explained to a large degree by these CNN visual feature differences, casting doubt on the suggestion that visual feature differences cannot account for the animacy organisation. To inform the debate further, we designed a set of stimuli with animal images to dissociate the animacy organisation driven by the CNN visual features from the degree of familiarity and agency (thoughtfulness and feelings). Preliminary results from a new fMRI experiment designed to assess the contribution of these non-visual features are presented.
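As an illustration of the CNN-based reanalysis described above: layer features from a pretrained network can be tested for whether they separate animate from inanimate stimuli even when shape is matched. The sketch below assumes a pretrained AlexNet from torchvision and random placeholder images in place of the shape-matched stimuli; the model choice, layer, and decoding setup are illustrative, not the study's exact pipeline.

```python
# Sketch: do CNN layer features separate animate from inanimate images even
# when shape is matched? Model, layer, and images are illustrative stand-ins.
import torch
import torchvision.models as models
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Pretrained AlexNet as a stand-in ventral-stream model (weights are downloaded).
cnn = models.alexnet(weights="DEFAULT").eval()

# Placeholder stimuli: in practice these would be the shape-matched images.
images = torch.rand(40, 3, 224, 224)   # 40 fake RGB stimuli
animacy_labels = [1] * 20 + [0] * 20   # 20 "animate", 20 "inanimate"

with torch.no_grad():
    # Features from the final convolutional stage, flattened per image.
    feats = cnn.features(images).flatten(start_dim=1).numpy()

# Cross-validated linear readout of animacy from the CNN features.
clf = LogisticRegression(max_iter=2000)
scores = cross_val_score(clf, feats, animacy_labels, cv=5)
print("animacy decoding accuracy:", scores.mean())
```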

