Integrative and distinctive coding of perceptual and conceptual object features in the ventral visual stream

eLife, 2018, Vol 7
Author(s): Chris B Martin, Danielle Douglas, Rachel N Newsome, Louisa LY Man, Morgan D Barense

A significant body of research in cognitive neuroscience is aimed at understanding how object concepts are represented in the human brain. However, it remains unknown whether and where the visual and abstract conceptual features that define an object concept are integrated. We addressed this issue by comparing the neural pattern similarities among object-evoked fMRI responses with behavior-based models that independently captured the visual and conceptual similarities among these stimuli. Our results revealed evidence for distinctive coding of visual features in lateral occipital cortex, and conceptual features in the temporal pole and parahippocampal cortex. By contrast, we found evidence for integrative coding of visual and conceptual object features in perirhinal cortex. The neuroanatomical specificity of this effect was highlighted by results from a searchlight analysis. Taken together, our findings suggest that perirhinal cortex uniquely supports the representation of fully specified object concepts through the integration of their visual and conceptual features.
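To make the analysis logic concrete, the following is a minimal sketch of the representational similarity approach described above: the pairwise dissimilarity structure of object-evoked fMRI patterns is compared against behaviour-based models of visual and conceptual similarity. The array sizes, random data, and variable names are placeholders for illustration, not the authors' pipeline.

```python
# Minimal representational similarity sketch (illustrative data, not the study's):
# correlate the similarity structure of object-evoked fMRI patterns with
# behaviour-based models of visual and conceptual similarity.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_objects, n_voxels = 40, 200                            # assumed sizes

roi_patterns = rng.normal(size=(n_objects, n_voxels))    # one pattern per object (e.g. a perirhinal ROI)
visual_model = rng.normal(size=(n_objects, 10))          # behaviour-based visual feature space
conceptual_model = rng.normal(size=(n_objects, 10))      # behaviour-based conceptual feature space

# Representational dissimilarity matrices (condensed pairwise distances)
neural_rdm = pdist(roi_patterns, metric="correlation")
visual_rdm = pdist(visual_model, metric="correlation")
conceptual_rdm = pdist(conceptual_model, metric="correlation")

# Distinctive coding: the neural RDM tracks one model only;
# integrative coding: it tracks both.
print("visual model fit:    ", spearmanr(neural_rdm, visual_rdm)[0])
print("conceptual model fit:", spearmanr(neural_rdm, conceptual_rdm)[0])
```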


2019
Author(s): Sushrut Thorat

A mediolateral gradation in neural responses to images spanning animals to artificial objects is observed in the ventral temporal cortex (VTC). Which information streams drive this organisation is an ongoing debate. Recently, Proklova et al. (2016) dissociated the visual shape and category ("animacy") dimensions in a set of stimuli using a behavioural measure of visual feature information. fMRI responses revealed a neural cluster (the extra-visual animacy cluster, xVAC) that encoded category information unexplained by visual feature information, suggesting extra-visual contributions to the organisation in the ventral visual stream. We reassess these findings using Convolutional Neural Networks (CNNs) as models for the ventral visual stream. Unlike the behavioural measures used in that study, the visual features developed in the CNN layers can categorise the shape-matched stimuli from Proklova et al. (2016). The category organisations in xVAC and VTC are explained to a large degree by the CNN visual feature differences, casting doubt on the suggestion that visual feature differences cannot account for the animacy organisation. To inform the debate further, we designed a set of stimuli with animal images to dissociate the animacy organisation driven by the CNN visual features from the degree of familiarity and agency (thoughtfulness and feelings). Preliminary results from a new fMRI experiment designed to understand the contribution of these non-visual features are presented.
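As a rough illustration of the kind of analysis described (testing whether features from a pretrained CNN layer can linearly separate animate from inanimate images), here is a hedged sketch; the network, layer choice, preprocessing, and the image paths and labels supplied by the caller are assumptions, not the study's exact setup.

```python
# Sketch: can features from a pretrained CNN's convolutional layers linearly
# separate animate from inanimate images? (Illustrative choices throughout;
# not the study's network, layer, or stimulus set.)
import numpy as np
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

model = models.alexnet(weights="IMAGENET1K_V1").eval()
preprocess = T.Compose([
    T.Resize(256), T.CenterCrop(224), T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def conv_features(path):
    """Flattened activations of the final convolutional stage for one image."""
    x = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        return model.features(x).flatten().numpy()

def animacy_decoding_accuracy(image_paths, labels):
    """Cross-validated accuracy of a linear readout (labels: 1 = animate, 0 = inanimate)."""
    X = np.stack([conv_features(p) for p in image_paths])
    return cross_val_score(LogisticRegression(max_iter=1000), X, labels, cv=5).mean()
```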


2015, pp. bhv226
Author(s): Corinna Pehrs, Jamil Zaki, Lorna H. Schlochtermeier, Arthur M. Jacobs, Lars Kuchinke, ...

Author(s): Darya Frank, Alex Kafkas, Daniela Montaldi

Expectation violation has been shown to engage adaptive memory formation, resulting in better memory for unexpected information. In two experiments we tested whether this mechanism is engaged dynamically in a goal-irrelevant manner during retrieval, and how it affects trial-by-trial recognition. Participants encoded images of objects, and then learned a contingency between a cue and category (man-made or natural) with new objects. Targets and parametrically manipulated similar foils, comprising set events, were used at retrieval. In each retrieval trial, a cue appeared that either matched or mismatched (according to the established contingency) the object that followed, for which participants made an old/new decision. We found that unexpected events at retrieval were associated with increased activation along the ventral visual stream, whereas expected events engaged parietal regions of the core recollection network. For targets and most similar foils, we found an interaction between current and previous expectation status on memory performance, such that expected events following unexpected ones (UprevEcurr) showed a boost in performance. This behavioural effect was associated with activation in the hippocampus, SN/VTA and occipital cortex. A combination of two unexpected events (UprevUcurr) resulted in the poorest memory performance and was associated with increased activation in occipital cortex. Taken together, our findings suggest expectation violation engages an encoding mechanism, supported by bottom-up processing, in a task-independent manner. Therefore, when the goal is to retrieve information, the mnemonic consequences of the shift towards an encoding state are detrimental in real time, but beneficial for subsequent similar events.
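For concreteness, here is a toy sketch of the retrieval-phase trial structure described above, in which each cue either matches or violates the learned cue-category contingency; the cue labels and the proportion of matching trials are placeholders, not the study's parameters.

```python
# Toy sketch of the retrieval design: each trial pairs a predictive cue with an
# object whose category either matches (expected) or violates (unexpected) the
# learned contingency. Cue labels and proportions are placeholders.
import random

CUE_FOR = {"man-made": "cue_A", "natural": "cue_B"}   # learned cue-category contingency

def retrieval_trial(category, p_match=0.75):
    """Return (cue, object category, expectation status) for one trial."""
    if random.random() < p_match:
        return CUE_FOR[category], category, "expected"
    other = "natural" if category == "man-made" else "man-made"
    return CUE_FOR[other], category, "unexpected"

for _ in range(5):
    print(retrieval_trial(random.choice(["man-made", "natural"])))
```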


2021
Author(s): Moritz Wurm, Alfonso Caramazza

The ventral visual stream is conceived as a pathway for object recognition. However, we also recognize the actions an object can be involved in. Here, we show that action recognition relies on a pathway in lateral occipitotemporal cortex, partially overlapping and topographically aligned with object representations that are precursors for action recognition. By contrast, object features that are more relevant for object recognition, such as color and texture, are restricted to medial areas of the ventral stream. We argue that the ventral stream bifurcates into lateral and medial pathways for action and object recognition, respectively. This account explains a number of observed phenomena, such as the duplication of object domains and the specific representational profiles in lateral and medial areas.


2016
Author(s): David A. Ross, Patrick Sadil, D. Merika Wilson, Rosemary A. Cowell

The hippocampus is considered pivotal to recall, allowing retrieval of information not available in the immediate environment. In contrast, neocortex is thought to signal familiarity, and to contribute to recall only when called upon by the hippocampus. However, this view is not compatible with representational accounts of memory, which reject the mapping of cognitive processes onto brain regions. According to representational accounts, the hippocampus is not engaged by recall per se; rather, it is engaged whenever hippocampal representations are required. To test whether the hippocampus is engaged by recall when hippocampal representations are not required, we used functional imaging and a non-associative recall task, with images (objects, scenes) studied in isolation and image patches used as cues. As predicted by a representational account, hippocampal activation increased during recall of scenes – which are known to be processed by hippocampus – but not during recall of objects. Object recall instead engaged neocortical regions known to be involved in object processing. Further supporting the representational account, effective connectivity analyses revealed that recall was associated with increased information flow out of lateral occipital cortex (object recall) and parahippocampal cortex (scene recall), suggesting that recall-related activation spread from neocortex to hippocampus, not the reverse.


2017
Author(s): Radoslaw M. Cichy, Nikolaus Kriegeskorte, Kamila M. Jozwik, Jasper J.F. van den Bosch, Ian Charest

Vision involves complex neuronal dynamics that link the sensory stream to behaviour. To capture the richness and complexity of the visual world and the behaviour it entails, we used an ecologically valid task with a rich set of real-world object images. We investigated how human brain activity, resolved in space with functional MRI and in time with magnetoencephalography, links the sensory stream to behavioural responses. We found that behaviour-related brain activity emerged rapidly in the ventral visual pathway within 200 ms of stimulus onset. The link between stimuli, brain activity, and behaviour could not be accounted for by either category membership or visual features (as provided by an artificial deep neural network model). Our results identify behaviourally relevant brain activity during object vision, and suggest that object representations guiding behaviour are complex and cannot be explained by visual features or semantic categories alone. Our findings support the view that visual representations in the ventral visual stream need to be understood in terms of their relevance to behaviour, and highlight the importance of complex behavioural assessment for human brain mapping.
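One way to probe the claim that the brain-behaviour link is not reducible to DNN-provided visual features is a partial (residualised) correlation between representational dissimilarity matrices. The sketch below uses random placeholder RDMs and a simple linear residualisation as an illustration, not the authors' analysis; the stimulus count and variable names are assumptions.

```python
# Sketch: does a brain-behaviour RDM correlation survive after removing the
# part explained by DNN feature similarity? Random placeholder RDMs only.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(2)
n_pairs = 92 * 91 // 2                      # condensed RDM length for an assumed 92 stimuli

brain_rdm = rng.normal(size=n_pairs)
behaviour_rdm = rng.normal(size=n_pairs)
dnn_rdm = rng.normal(size=n_pairs)

def residualise(y, x):
    """Remove the component of y linearly predicted by x."""
    slope, intercept = np.polyfit(x, y, 1)
    return y - (slope * x + intercept)

raw = spearmanr(brain_rdm, behaviour_rdm)[0]
partial = spearmanr(residualise(brain_rdm, dnn_rdm),
                    residualise(behaviour_rdm, dnn_rdm))[0]
print(f"raw rho = {raw:.3f}; partial rho after removing DNN features = {partial:.3f}")
```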


2018
Author(s): Diana C. Dima, Krish D. Singh

Humans can rapidly extract information from faces even in challenging viewing conditions, yet the neural representations supporting this ability are still not well understood. Here, we manipulated the presentation duration of backward-masked facial expressions and used magnetoencephalography (MEG) to investigate the computations underpinning rapid face processing. Multivariate analyses revealed two stages in face perception, with the ventral visual stream encoding facial features prior to facial configuration. When presentation time was reduced, the emergence of sustained featural and configural representations was delayed. Importantly, these representations explained behaviour during an expression recognition task. Together, these results describe the adaptable system linking visual features, brain and behaviour during face perception.
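The multivariate analyses mentioned above are typically implemented as time-resolved decoding of sensor patterns. The following is a minimal sketch of that general approach; the data are random placeholders with assumed dimensions and condition labels, not the study's recordings.

```python
# Sketch of time-resolved multivariate decoding of the kind used to track when
# stimulus information emerges in MEG sensor patterns. Data are random
# placeholders with assumed dimensions, not the study's recordings.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

rng = np.random.default_rng(1)
n_trials, n_sensors, n_times = 200, 100, 120            # assumed dimensions
meg = rng.normal(size=(n_trials, n_sensors, n_times))   # trials x sensors x time points
labels = rng.integers(0, 2, size=n_trials)               # e.g. two expression conditions

# Decode the condition separately at each time point; above-chance accuracy marks
# when the relevant information becomes available in the sensor patterns.
clf = make_pipeline(StandardScaler(), LinearSVC())
accuracy = np.array([cross_val_score(clf, meg[:, :, t], labels, cv=5).mean()
                     for t in range(n_times)])
print("peak decoding accuracy:", accuracy.max())
```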


2021
Author(s): Talia Konkle, George A Alvarez

Anterior regions of the ventral visual stream carry substantial information about object categories, prompting theories that category-level forces are critical for shaping visual representation. The strong correspondence between category-supervised deep neural networks and ventral stream representation supports this view, but does not provide a viable learning model, as these networks rely upon millions of labeled examples. Here we present a fully self-supervised model that instead learns to represent individual images, where views of the same image are embedded nearby in a low-dimensional feature space, apart from other recently encountered views. We find that category information implicitly emerges in the feature space and, critically, that these models achieve parity with category-supervised models in predicting the hierarchical structure of brain responses across the human ventral visual stream. These results provide computational support for learning instance-level representation as a viable goal of the ventral stream, offering an alternative to the category-based framework that has been dominant in visual cognitive neuroscience.
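The instance-level objective described above is commonly realised as a contrastive loss in which two views of the same image are pulled together in the embedding space and pushed away from other images. Below is a minimal sketch of such a loss; the encoder, batch size, and temperature are illustrative assumptions, not the authors' implementation.

```python
# Sketch of an instance-level contrastive objective: two augmented views of the
# same image should be embedded nearby and far from views of other images.
# Hyperparameters and the random "embeddings" are illustrative placeholders.
import torch
import torch.nn.functional as F

def instance_contrastive_loss(z1, z2, temperature=0.1):
    """z1, z2: (batch, dim) embeddings of two views of the same batch of images."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.T / temperature            # pairwise view similarities
    targets = torch.arange(z1.size(0))          # positives lie on the diagonal
    # symmetric cross-entropy: each view must identify its own counterpart
    return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.T, targets))

# Toy usage with random embeddings standing in for an encoder's output
z1, z2 = torch.randn(8, 128), torch.randn(8, 128)
print(instance_contrastive_loss(z1, z2).item())
```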

