Attention selectively reshapes the geometry of distributed semantic representation

2016 ◽  
Author(s):  
Samuel A. Nastase ◽  
Andrew C. Connolly ◽  
Nikolaas N. Oosterhof ◽  
Yaroslav O. Halchenko ◽  
J. Swaroop Guntupalli ◽  
...  

Abstract
Humans prioritize different semantic qualities of a complex stimulus depending on their behavioral goals. These semantic features are encoded in distributed neural populations, yet it is unclear how attention might operate across these distributed representations. To address this, we presented participants with naturalistic video clips of animals behaving in their natural environments while the participants attended to either behavior or taxonomy. We used models of representational geometry to investigate how attentional allocation affects the distributed neural representation of animal behavior and taxonomy. Attending to animal behavior transiently increased the discriminability of distributed population codes for observed actions in anterior intraparietal, pericentral, and ventral temporal cortices. Attending to animal taxonomy while viewing the same stimuli increased the discriminability of distributed animal category representations in ventral temporal cortex. For both tasks, attention selectively enhanced the discriminability of response patterns along behaviorally relevant dimensions. These findings suggest that behavioral goals alter how the brain extracts semantic features from the visual world. Attention effectively disentangles population responses for downstream read-out by sculpting representational geometry in late-stage perceptual areas.
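The representational-geometry approach described above is commonly implemented by building representational dissimilarity matrices (RDMs) from condition-by-voxel response patterns and rank-correlating them with model RDMs. The following is a minimal sketch of that logic, not the authors' pipeline: all data are synthetic, and the condition counts, labels, and variable names are illustrative assumptions.

```python
# Minimal sketch (synthetic data, not the study's pipeline): compare the
# representational geometry of a region under two attention tasks against a
# model RDM coding a behaviorally relevant distinction.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_conditions, n_voxels = 20, 300          # e.g., 20 animal clips, 300 voxels

# Hypothetical condition-by-voxel response patterns under each attention task
patterns_behavior = rng.standard_normal((n_conditions, n_voxels))
patterns_taxonomy = rng.standard_normal((n_conditions, n_voxels))

# Representational dissimilarity (1 - Pearson r between condition patterns)
rdm_behavior = pdist(patterns_behavior, metric="correlation")
rdm_taxonomy = pdist(patterns_taxonomy, metric="correlation")

# Binary model RDM coding an assumed behavior-category distinction
labels = rng.integers(0, 4, n_conditions)          # e.g., 4 behavior categories
model_rdm = pdist(labels[:, None], metric="hamming")  # 1 if categories differ

# Rank-correlate each task's geometry with the behavior model; a higher value
# under the behavior task would reflect the attentional enhancement of
# behaviorally relevant dimensions described in the abstract.
for name, rdm in [("behavior task", rdm_behavior), ("taxonomy task", rdm_taxonomy)]:
    rho, _ = spearmanr(rdm, model_rdm)
    print(f"{name}: Spearman rho with behavior-model RDM = {rho:.3f}")
```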

eLife ◽  
2021 ◽  
Vol 10 ◽  
Author(s):  
Timothy T Rogers ◽  
Christopher R Cox ◽  
Qihong Lu ◽  
Akihiro Shimotake ◽  
Takayuki Kikuchi ◽  
...  

How does the human brain encode semantic information about objects? This paper reconciles two seemingly contradictory views. The first proposes that local neural populations independently encode semantic features; the second, that semantic representations arise as a dynamic distributed code that changes radically with stimulus processing. Combining simulations with a well-known neural network model of semantic memory, multivariate pattern classification, and human electrocorticography, we find that both views are partially correct: information about the animacy of a depicted stimulus is distributed across ventral temporal cortex in a dynamic code possessing feature-like elements posteriorly but with elements that change rapidly and nonlinearly in anterior regions. This pattern is consistent with the view that anterior temporal lobes serve as a deep cross-modal ‘hub’ in an interactive semantic network, and more generally suggests that tertiary association cortices may adopt dynamic distributed codes difficult to detect with common brain imaging methods.
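The multivariate pattern classification described above is typically run in a time-resolved fashion: a linear decoder is trained and cross-validated at each time point to track when a stimulus property (here, animacy) becomes linearly readable from the recorded population. Below is a hedged sketch of that generic procedure on simulated data; the trial counts, channel counts, and injected signal are assumptions, and nothing here reproduces the paper's actual ECoG pipeline.

```python
# Time-resolved decoding sketch (simulated data): decode a binary animacy
# label from multichannel recordings at each time point, with cross-validation.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_trials, n_channels, n_times = 200, 64, 50
X = rng.standard_normal((n_trials, n_channels, n_times))  # trials x channels x time
y = rng.integers(0, 2, n_trials)                          # animate vs. inanimate

# Inject a weak signal in a late window so the decoder has something to find
X[y == 1, :8, 30:] += 0.3

# Fit an independent linear classifier at each time point
accuracy = np.empty(n_times)
for t in range(n_times):
    clf = LogisticRegression(max_iter=1000)
    accuracy[t] = cross_val_score(clf, X[:, :, t], y, cv=5).mean()

print("peak decoding accuracy:", accuracy.max().round(3),
      "at time index", int(accuracy.argmax()))
```

A nonlinearly changing code of the kind the paper attributes to anterior regions would show above-chance decoding within time points like this, while failing to generalize when a decoder trained at one time point is tested at another.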


2020 ◽  
Author(s):  
D. Proklova ◽  
M.A. Goodale

Abstract
Animate and inanimate objects elicit distinct response patterns in the human ventral temporal cortex (VTC), but the exact features driving this distinction are still poorly understood. One prominent feature that distinguishes typical animals from inanimate objects, and that could potentially explain the animate-inanimate distinction in the VTC, is the presence of a face. In the current fMRI study, we investigated this possibility by creating a stimulus set that included animals with faces, faceless animals, and inanimate objects, carefully matched in order to minimize other visual differences. We used both searchlight-based and ROI-based representational similarity analysis (RSA) to test whether the presence of a face explains the animate-inanimate distinction in the VTC. The searchlight analysis revealed that when animals with faces were removed from the analysis, the animate-inanimate distinction almost disappeared. The ROI-based RSA revealed a similar pattern of results, but also showed that, even in the absence of faces, information about agency (a combination of an animal's ability to move and to think) is present in parts of the VTC that are sensitive to animacy. Together, these analyses showed that animals with faces do elicit a stronger animate/inanimate response in the VTC, but that this effect is driven not by faces per se, or by the visual features of faces, but by other factors that correlate with face presence, such as the capacity for self-movement and thought. In short, the VTC appears to treat the face as a proxy for agency, a ubiquitous feature of familiar animals.

Significance Statement
Many studies have shown that images of animals are processed differently from inanimate objects in the human brain, particularly in the ventral temporal cortex (VTC). However, which features drive this distinction remains unclear. One important feature that distinguishes many animals from inanimate objects is a face. Here, we used fMRI to test whether the animate/inanimate distinction is driven by the presence of faces. We found that the presence of faces did indeed boost activity related to animacy in the VTC. A more detailed analysis, however, revealed that it was the association between faces and other attributes, such as the capacity for self-movement and thinking, not the faces per se, that was driving the activity we observed.
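The key RSA manipulation above, recomputing the animate/inanimate separation after excluding face-containing conditions, can be illustrated with a few lines of code. This is a toy sketch on random data with assumed condition counts and group labels, not the study's pipeline; within-group dissimilarity here is computed only among animate conditions for brevity.

```python
# Illustrative ROI-based RSA logic (assumed data): does the animate/inanimate
# separation in a region's RDM survive once animals with faces are excluded?
import numpy as np
from scipy.spatial.distance import pdist, squareform

rng = np.random.default_rng(2)
# Hypothetical condition order: 6 animals with faces, 6 faceless animals, 6 objects
groups = np.array(["face"] * 6 + ["faceless"] * 6 + ["object"] * 6)
patterns = rng.standard_normal((18, 250))        # condition x voxel patterns
rdm = squareform(pdist(patterns, metric="correlation"))

def animacy_separation(keep):
    """Mean animate-vs-inanimate dissimilarity minus mean within-animate
    dissimilarity, restricted to the kept conditions."""
    idx = np.flatnonzero(keep)
    animate = np.isin(groups[idx], ["face", "faceless"])
    sub = rdm[np.ix_(idx, idx)]
    between = sub[np.ix_(animate, ~animate)].mean()
    iu = np.triu_indices(animate.sum(), k=1)
    within = sub[np.ix_(animate, animate)][iu].mean()
    return between - within

print("all animals:  ", animacy_separation(np.ones(18, bool)).round(3))
print("faceless only:", animacy_separation(groups != "face").round(3))
```

On real data, a separation score that collapses in the "faceless only" condition would mirror the searchlight result reported above.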


2006 ◽  
Vol 6 (1) ◽  
Author(s):  
Andreas Löw ◽  
Brigitte Rockstroh ◽  
Thomas Elbert ◽  
Yaron Silberman ◽  
Shlomo Bentin

2020 ◽  
Author(s):  
Brett B. Bankson ◽  
Matthew J. Boring ◽  
R. Mark Richardson ◽  
Avniel Singh Ghuman

Abstract
An enduring neuroscientific debate concerns the extent to which neural representation is restricted to networks of patches specialized for particular domains of perceptual input (Kanwisher et al., 1997; Livingstone et al., 2019) or distributed beyond these patches to broad areas of cortex (Haxby et al., 2001; Op de Beeck, 2008). A critical test case for this debate is the localization of the neural representation of the identity of individual images (Spiridon & Kanwisher, 2002), such as individual-level face or written-word recognition. To address this debate, intracranial recordings from 489 electrodes throughout ventral temporal cortex across 17 human subjects were used to assess the spatiotemporal dynamics of individual word and face processing within and outside cortical patches strongly selective for these categories of visual information. Individual faces and words were first represented primarily in strongly selective patches, and only approximately 170 milliseconds later in both strongly and weakly selective areas. Strongly and weakly selective areas contributed non-redundant information to the representation of individual images. These results can reconcile previous findings endorsing disparate poles of the domain-specificity debate by highlighting the temporally segregated contributions of different functionally defined cortical areas to individual-level representations. Taken together, this work supports a dynamic model of neural representation characterized by successive domain-specific and distributed processing stages.

Significance Statement
The visual processing system performs dynamic computations to differentiate visually similar forms, such as identifying individual words and faces. Previous models have localized these computations either to (1) circumscribed, specialized portions of the brain or (2) more distributed aspects of the brain. The current work combines machine-learning analyses with human intracranial recordings to determine the neurodynamics of individual face and word processing inside and outside of brain regions selective for these visual categories. The results suggest that individuation involves computations that occur first primarily in highly selective parts of the visual processing system and only later recruit both highly and non-highly selective regions. These results mediate between extant models of neural specialization by suggesting a dynamic domain-specificity model of visual processing.
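The comparison at the heart of this abstract, decoding individual image identity separately from strongly and weakly selective electrode sets in early versus late time windows, can be sketched generically. Everything below is simulated under assumed electrode counts, window boundaries, and signal strengths; it illustrates the analysis logic, not the authors' actual method.

```python
# Minimal sketch (simulated data): decode individual image identity from
# "strongly selective" vs. "weakly selective" electrodes in an early and a
# late time window, mirroring the temporal dissociation described above.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import LinearSVC

rng = np.random.default_rng(3)
n_trials, n_items = 300, 15                 # 20 repetitions of 15 faces/words
y = np.tile(np.arange(n_items), n_trials // n_items)

def simulate(n_elec, early_snr, late_snr):
    """Trials x electrodes x 2 windows, with an identity signal per window."""
    X = rng.standard_normal((n_trials, n_elec, 2))
    proto = rng.standard_normal((n_items, n_elec))   # per-item response pattern
    X[:, :, 0] += early_snr * proto[y]               # early window
    X[:, :, 1] += late_snr * proto[y]                # late window
    return X

strong = simulate(n_elec=30, early_snr=0.5, late_snr=0.5)  # selective patches
weak = simulate(n_elec=30, early_snr=0.0, late_snr=0.5)    # outside patches

for name, X in [("strongly selective", strong), ("weakly selective", weak)]:
    for w, label in [(0, "early window"), (1, "late window (~170 ms later)")]:
        acc = cross_val_score(LinearSVC(), X[:, :, w], y, cv=5).mean()
        print(f"{name}, {label}: accuracy = {acc:.3f}")
```

Under this toy construction, identity is decodable early only from the selective set and later from both, the qualitative pattern the recordings revealed.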


2019 ◽  
Author(s):  
Timothy T. Rogers ◽  
Christopher Cox ◽  
Qihong Lu ◽  
Akihiro Shimotake ◽  
Takayuki Kikuchi ◽  
...  

Abstract
How does the human brain encode semantic information about objects? This paper reconciles two seemingly contradictory views. The first proposes that local neural populations independently encode semantic features; the second, that semantic representations arise as a dynamic distributed code that changes radically with stimulus processing. Combining simulations with a well-known neural network model of semantic memory, multivariate pattern classification, and human electrocorticography, we find that both views are partially correct: semantic information is distributed across ventral temporal cortex in a dynamic code that possesses stable feature-like elements in posterior regions but elements that change rapidly and nonlinearly in anterior regions. This pattern is consistent with the view that anterior temporal lobes serve as a deep cross-modal "hub" in an interactive semantic network, and more generally suggests that tertiary association cortices may adopt dynamic distributed codes difficult to detect with common brain imaging methods.

