Evidence for a deep, distributed and dynamic semantic code in human ventral anterior temporal cortex

2019 ◽  
Author(s):  
Timothy T. Rogers ◽  
Christopher Cox ◽  
Qihong Lu ◽  
Akihiro Shimotake ◽  
Takayuki Kikuchi ◽  
...  

Abstract: How does the human brain encode semantic information about objects? This paper reconciles two seemingly contradictory views. The first proposes that local neural populations independently encode semantic features; the second, that semantic representations arise as a dynamic distributed code that changes radically with stimulus processing. Combining simulations with a well-known neural network model of semantic memory, multivariate pattern classification, and human electrocorticography, we find that both views are partially correct: semantic information is distributed across ventral temporal cortex in a dynamic code that possesses stable feature-like elements in posterior regions but with elements that change rapidly and nonlinearly in anterior regions. This pattern is consistent with the view that anterior temporal lobes serve as a deep cross-modal “hub” in an interactive semantic network, and more generally suggests that tertiary association cortices may adopt dynamic distributed codes difficult to detect with common brain imaging methods.

eLife ◽  
2021 ◽  
Vol 10 ◽  
Author(s):  
Timothy T Rogers ◽  
Christopher R Cox ◽  
Qihong Lu ◽  
Akihiro Shimotake ◽  
Takayuki Kikuchi ◽  
...  

How does the human brain encode semantic information about objects? This paper reconciles two seemingly contradictory views. The first proposes that local neural populations independently encode semantic features; the second, that semantic representations arise as a dynamic distributed code that changes radically with stimulus processing. Combining simulations with a well-known neural network model of semantic memory, multivariate pattern classification, and human electrocorticography, we find that both views are partially correct: information about the animacy of a depicted stimulus is distributed across ventral temporal cortex in a dynamic code possessing feature-like elements posteriorly but with elements that change rapidly and nonlinearly in anterior regions. This pattern is consistent with the view that anterior temporal lobes serve as a deep cross-modal ‘hub’ in an interactive semantic network, and more generally suggests that tertiary association cortices may adopt dynamic distributed codes difficult to detect with common brain imaging methods.
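The time-resolved decoding approach described in this abstract can be sketched with synthetic data. Everything below is an illustrative stand-in, not the study's actual pipeline: the trial counts, electrode counts, signal strength, and the nearest-centroid classifier are assumptions made for the sketch, which simply shows how animacy decodability can be estimated separately at each time bin of a trials × electrodes × time array.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for electrode recordings: trials x electrodes x time bins.
# Animate trials carry a weak signal on a subset of electrodes that emerges
# only after time bin 3; all sizes and magnitudes here are illustrative.
n_trials, n_elec, n_time = 200, 16, 10
X = rng.normal(size=(n_trials, n_elec, n_time))
y = rng.integers(0, 2, size=n_trials)   # 0 = inanimate, 1 = animate
X[y == 1, :4, 3:] += 0.8                # animacy signal on 4 electrodes, late bins

def nearest_centroid_acc(Xt, y, n_folds=5):
    """Cross-validated nearest-centroid decoding accuracy for one time bin."""
    idx = rng.permutation(len(y))
    folds = np.array_split(idx, n_folds)
    correct = 0
    for test in folds:
        train = np.setdiff1d(idx, test)
        c0 = Xt[train][y[train] == 0].mean(axis=0)   # inanimate centroid
        c1 = Xt[train][y[train] == 1].mean(axis=0)   # animate centroid
        d0 = np.linalg.norm(Xt[test] - c0, axis=1)
        d1 = np.linalg.norm(Xt[test] - c1, axis=1)
        correct += np.sum((d1 < d0) == (y[test] == 1))
    return correct / len(y)

# Decode animacy independently at each time bin; accuracy should hover near
# chance early and rise once the (synthetic) signal appears.
acc = [nearest_centroid_acc(X[:, :, t], y) for t in range(n_time)]
```

In this toy setup the accuracy time course stays near chance before bin 3 and climbs well above it afterward, mirroring the logic of asking when and where category information becomes linearly decodable.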


2013 ◽  
Vol 25 (11) ◽  
pp. 1777-1793 ◽  
Author(s):  
Rosemary A. Cowell ◽  
Garrison W. Cottrell

We trained a neurocomputational model on six categories of photographic images that were used in a previous fMRI study of object and face processing. Multivariate pattern analyses of the activations elicited in the object-encoding layer of the model yielded results consistent with two previous, contradictory fMRI studies. Findings from one of the studies [Haxby, J. V., Gobbini, M. I., Furey, M. L., Ishai, A., Schouten, J. L., & Pietrini, P. Distributed and overlapping representations of faces and objects in ventral temporal cortex. Science, 293, 2425–2430, 2001] were interpreted as evidence for the object-form topography model. Findings from the other study [Spiridon, M., & Kanwisher, N. How distributed is visual category information in human occipito-temporal cortex? An fMRI study. Neuron, 35, 1157–1165, 2002] were interpreted as evidence for neural processing mechanisms in the fusiform face area that are specialized for faces. Because the model contains no special processing mechanism or specialized architecture for faces and yet it can reproduce the fMRI findings used to support the claim that there are specialized face-processing neurons, we argue that these fMRI results do not actually support that claim. Results from our neurocomputational model therefore constitute a cautionary tale for the interpretation of fMRI data.


2016 ◽  
Author(s):  
Samuel A. Nastase ◽  
Andrew C. Connolly ◽  
Nikolaas N. Oosterhof ◽  
Yaroslav O. Halchenko ◽  
J. Swaroop Guntupalli ◽  
...  

Abstract: Humans prioritize different semantic qualities of a complex stimulus depending on their behavioral goals. These semantic features are encoded in distributed neural populations, yet it is unclear how attention might operate across these distributed representations. To address this, we presented participants with naturalistic video clips of animals behaving in their natural environments while the participants attended to either behavior or taxonomy. We used models of representational geometry to investigate how attentional allocation affects the distributed neural representation of animal behavior and taxonomy. Attending to animal behavior transiently increased the discriminability of distributed population codes for observed actions in anterior intraparietal, pericentral, and ventral temporal cortices. Attending to animal taxonomy while viewing the same stimuli increased the discriminability of distributed animal category representations in ventral temporal cortex. For both tasks, attention selectively enhanced the discriminability of response patterns along behaviorally relevant dimensions. These findings suggest that behavioral goals alter how the brain extracts semantic features from the visual world. Attention effectively disentangles population responses for downstream read-out by sculpting representational geometry in late-stage perceptual areas.
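The representational-geometry logic in this abstract can be sketched with a small simulation. This is a hedged illustration, not the study's analysis: the stimulus counts, feature dimensions, and the idea of modeling attention as a gain on the category-related response component are all assumptions made for the sketch. It builds correlation-distance representational dissimilarity matrices (RDMs) and shows how amplifying the task-relevant component increases category discriminability in the RDM.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: 8 stimuli in two categories, response patterns over
# 50 features. "Attention" is modeled as a gain on the category signal.
n_per_cat, n_feat = 4, 50
cat_sig = rng.normal(size=(2, n_feat))   # one signal template per category

def simulate(gain):
    """Simulate response patterns: gain * category template + noise."""
    pats = []
    for c in range(2):
        for _ in range(n_per_cat):
            pats.append(gain * cat_sig[c] + rng.normal(size=n_feat))
    return np.array(pats)

def rdm(patterns):
    """Representational dissimilarity matrix: 1 - Pearson correlation."""
    return 1.0 - np.corrcoef(patterns)

def category_separation(d):
    """Mean between-category minus mean within-category dissimilarity."""
    labels = np.repeat([0, 1], n_per_cat)
    between = d[labels[:, None] != labels[None, :]].mean()
    same = (labels[:, None] == labels[None, :]) & ~np.eye(len(labels), dtype=bool)
    within = d[same].mean()
    return between - within

# Higher gain on the category component pulls same-category patterns together
# and pushes categories apart in the representational space.
sep_unattended = category_separation(rdm(simulate(gain=0.5)))
sep_attended = category_separation(rdm(simulate(gain=2.0)))
```

Comparing the two separation scores captures the abstract's claim in miniature: the stimuli are identical across conditions, and only the attentional gain changes the representational geometry.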


2019 ◽  
Vol 19 (10) ◽  
pp. 4c
Author(s):  
Kalanit Grill-Spector ◽  
Marisa Nordt ◽  
Vaidehi Natu ◽  
Jesse Gomez ◽  
Brianna Jeska ◽  
...  

2015 ◽  
Vol 15 (12) ◽  
pp. 753
Author(s):  
Kalanit Grill-Spector ◽  
Kevin Weiner ◽  
Nikolaus Kriegeskorte ◽  
Kendrick Kay
