ventral temporal cortex
Recently Published Documents

TOTAL DOCUMENTS: 139 (FIVE YEARS: 65)
H-INDEX: 22 (FIVE YEARS: 3)

NeuroImage ◽  
2022 ◽  
pp. 118900
Author(s):  
Arielle S. Keller ◽  
Akshay Jagadeesh ◽  
Lior Bugatus ◽  
Leanne M. Williams ◽  
Kalanit Grill-Spector

2021 ◽  
Author(s):  
Arielle S Keller ◽  
Akshay V Jagadeesh ◽  
Lior Bugatus ◽  
Leanne M Williams ◽  
Kalanit Grill-Spector

How does attention enhance neural representations of goal-relevant stimuli while suppressing representations of ignored stimuli across regions of the brain? While prior studies have shown that attention enhances visual responses, we lack a cohesive understanding of how selective attention modulates visual representations across the brain. Here, we used functional magnetic resonance imaging (fMRI) while participants performed a selective attention task on superimposed stimuli from multiple categories and used a data-driven approach to test how attention affects both decodability of category information and residual correlations (after regressing out stimulus-driven variance) with category-selective regions of ventral temporal cortex (VTC). Our data reveal three main findings. First, when two objects are simultaneously viewed, the category of the attended object can be decoded more readily than the category of the ignored object, with the greatest attentional enhancements observed in occipital and temporal lobes. Second, after accounting for the response to the stimulus, the correlation in the residual brain activity between a cortical region and a category-selective region of VTC was elevated when that region's preferred category was attended vs. ignored, and more so in the right occipital, parietal, and frontal cortices. Third, we found that the stronger the residual correlations between a given region of cortex and VTC, the better visual category information could be decoded from that region. These findings suggest that heightened residual correlations by selective attention may reflect the sharing of information between sensory regions and higher-order cortical regions to provide attentional enhancement of goal-relevant information.
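The residual-correlation analysis described here (correlating two regions' activity after regressing out stimulus-driven variance) can be sketched in a few lines. This is a minimal illustration under assumptions, not the authors' pipeline: ordinary least squares residualization against a stimulus design matrix, then a Pearson correlation of the residuals.

```python
import numpy as np

def residual_correlation(ts_a, ts_b, stimulus_design):
    """Correlate two regions' time series after regressing out
    stimulus-driven variance (ordinary least squares residuals)."""
    # Design matrix: stimulus regressors plus an intercept column
    X = np.column_stack([stimulus_design, np.ones(len(stimulus_design))])

    def residualize(y):
        # OLS fit, then keep what the stimulus model cannot explain
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        return y - X @ beta

    ra, rb = residualize(ts_a), residualize(ts_b)
    return np.corrcoef(ra, rb)[0, 1]
```

Under this sketch, two regions sharing trial-by-trial fluctuations beyond the stimulus response show a high residual correlation even when their stimulus-evoked responses differ.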


2021 ◽  
Author(s):  
Heather L. Kosakowski ◽  
Michael A. Cohen ◽  
Lyneé Herrara ◽  
Isabel Nichoson ◽  
Nancy Kanwisher ◽  
...  

Faces are a rich source of social information. How does the infant brain develop the ability to recognize faces and identify potential social partners? We collected functional magnetic resonance imaging (fMRI) data from 49 awake human infants (aged 2.5-9.7 months) while they watched movies of faces, bodies, objects, and scenes. Face-selective responses were observed not only in ventral temporal cortex (VTC) but also in the superior temporal sulcus (STS) and medial prefrontal cortex (MPFC). Face responses were also observed in the amygdala and thalamus, though they were not fully selective. We find no evidence that face-selective responses develop in visual perception regions (VTC) prior to higher-order social perception (STS) or social evaluation (MPFC) regions. We suggest that face-selective responses may develop in parallel across multiple cortical regions. Infants' brains could thus simultaneously process faces both as a privileged category of visual images and as potential social partners.


Author(s):  
Edward H. Silson ◽  
Iris I. A. Groen ◽  
Chris I. Baker

Human visual cortex is organised broadly according to two major principles: retinotopy (the spatial mapping of the retina in cortex) and category-selectivity (preferential responses to specific categories of stimuli). Historically, these principles were considered anatomically separate, with retinotopy restricted to the occipital cortex and category-selectivity emerging in the lateral-occipital and ventral-temporal cortex. However, recent studies show that category-selective regions exhibit systematic retinotopic biases, for example exhibiting stronger activation for stimuli presented in the contra- compared to the ipsilateral visual field. It is unclear, however, whether responses within category-selective regions are more strongly driven by retinotopic location or by category preference, and if there are systematic differences between category-selective regions in the relative strengths of these preferences. Here, we directly compare contralateral and category preferences by measuring fMRI responses to scene and face stimuli presented in the left or right visual field and computing two bias indices: a contralateral bias (response to the contralateral minus ipsilateral visual field) and a face/scene bias (preferred response to scenes compared to faces, or vice versa). We compare these biases within and between scene- and face-selective regions and across the lateral and ventral surfaces of the visual cortex more broadly. We find an interaction between surface and bias: lateral surface regions show a stronger contralateral than face/scene bias, whilst ventral surface regions show the opposite. These effects are robust across and within subjects, and appear to reflect large-scale, smoothly varying gradients. Together, these findings support distinct functional roles for the lateral and ventral visual cortex in terms of the relative importance of the spatial location of stimuli during visual information processing.
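The two bias indices are simple contrasts of mean responses. A minimal sketch of how they could be computed for one region; the response dictionary, its keys, and the averaging scheme are illustrative assumptions, not the authors' code:

```python
def region_biases(resp):
    """Compute the two bias indices described in the abstract for one
    region, from mean responses keyed by (category, visual field)."""
    contra = resp[('scene', 'contra')] + resp[('face', 'contra')]
    ipsi = resp[('scene', 'ipsi')] + resp[('face', 'ipsi')]
    scenes = resp[('scene', 'contra')] + resp[('scene', 'ipsi')]
    faces = resp[('face', 'contra')] + resp[('face', 'ipsi')]
    # Contralateral bias: contralateral minus ipsilateral field,
    # averaged over categories
    contralateral_bias = (contra - ipsi) / 2.0
    # Face/scene bias: preferred minus non-preferred category
    # ("or vice versa"), averaged over fields
    category_bias = abs(scenes - faces) / 2.0
    return contralateral_bias, category_bias
```

A lateral-surface region would then yield a larger first value than second; a ventral-surface region the reverse.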


eLife ◽  
2021 ◽  
Vol 10 ◽  
Author(s):  
Timothy T Rogers ◽  
Christopher R Cox ◽  
Qihong Lu ◽  
Akihiro Shimotake ◽  
Takayuki Kikuchi ◽  
...  

How does the human brain encode semantic information about objects? This paper reconciles two seemingly contradictory views. The first proposes that local neural populations independently encode semantic features; the second, that semantic representations arise as a dynamic distributed code that changes radically with stimulus processing. Combining simulations with a well-known neural network model of semantic memory, multivariate pattern classification, and human electrocorticography, we find that both views are partially correct: information about the animacy of a depicted stimulus is distributed across ventral temporal cortex in a dynamic code possessing feature-like elements posteriorly but with elements that change rapidly and nonlinearly in anterior regions. This pattern is consistent with the view that anterior temporal lobes serve as a deep cross-modal ‘hub’ in an interactive semantic network, and more generally suggests that tertiary association cortices may adopt dynamic distributed codes difficult to detect with common brain imaging methods.
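Whether a neural code is stable and feature-like or rapidly changing is often assessed with cross-temporal generalization: train a classifier at one time point and test it at every other. A stable code generalizes far from the diagonal; a dynamic code decodes only near it. A minimal sketch using a nearest-class-mean classifier (an assumption for illustration; the paper's actual classifiers may differ):

```python
import numpy as np

def temporal_generalization(X, y, n_folds=5):
    """Cross-temporal decoding accuracy matrix.
    X: (trials, channels, timepoints); y: binary labels (0/1)."""
    n_trials, _, n_time = X.shape
    acc = np.zeros((n_time, n_time))
    folds = np.arange(n_trials) % n_folds
    for f in range(n_folds):
        tr, te = folds != f, folds == f
        for t_train in range(n_time):
            # Class means ("prototypes") at the training time point
            m0 = X[tr & (y == 0), :, t_train].mean(axis=0)
            m1 = X[tr & (y == 1), :, t_train].mean(axis=0)
            for t_test in range(n_time):
                Z = X[te, :, t_test]
                # Assign each test trial to the nearer class mean
                pred = (np.linalg.norm(Z - m1, axis=1)
                        < np.linalg.norm(Z - m0, axis=1)).astype(int)
                acc[t_train, t_test] += (pred == y[te]).mean()
    return acc / n_folds
```

In the pattern the abstract describes, posterior feature-like elements would produce broad off-diagonal generalization, while rapidly changing anterior codes would confine above-chance accuracy to the diagonal.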


2021 ◽  
Author(s):  
Taicheng Huang ◽  
Yiying Song ◽  
Jia Liu

Our mind represents the various objects of the physical world in an abstract, complex, high-dimensional object space, with a finite number of orthogonal axes encoding critical object features. Previous fMRI studies have shown that the middle fusiform sulcus in the ventral temporal cortex separates the real-world small-size map from the large-size map. Here we used deep convolutional neural networks (DCNNs) to ask whether objects' real-world size constitutes an axis of object space, based on three criteria (sensitivity, independence, and necessity) that are impractical to examine together with traditional approaches. A principal component analysis on features extracted by the DCNNs showed that objects' real-world size was encoded by an independent component, and the removal of this component significantly impaired the DCNNs' performance in recognizing objects. By manipulating stimuli, we found that the shape and texture of objects, rather than retinal size, co-occurrence, or task demands, accounted for the representation of real-world size in the DCNNs. A follow-up fMRI experiment on humans further demonstrated that shape, but not texture, is used to infer the real-world size of objects in humans. In short, with both computational modeling and empirical human experiments, our study provided the first evidence supporting objects' real-world size as an axis of object space, and devised a novel paradigm for future exploration of the structure of object space.
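The sensitivity-and-necessity logic described here — find a principal component tied to real-world size, then remove it and test the consequences — can be sketched as follows. This is an illustrative reconstruction, not the authors' code; PCA via SVD, the correlation-based component selection, and all names are assumptions:

```python
import numpy as np

def size_axis_analysis(features, log_sizes):
    """PCA on a (stimuli x features) matrix; find the component most
    correlated with real-world size, then ablate it from the features."""
    mean = features.mean(axis=0)
    Xc = features - mean
    # PCA via SVD: rows of Vt are principal axes, U*S are component scores
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    scores = U * S
    # Sensitivity: which component tracks real-world size?
    corrs = np.array([abs(np.corrcoef(scores[:, k], log_sizes)[0, 1])
                      for k in range(scores.shape[1])])
    size_pc = int(np.argmax(corrs))
    # Necessity test: reconstruct features with the size component zeroed
    scores_ablated = scores.copy()
    scores_ablated[:, size_pc] = 0.0
    ablated = scores_ablated @ Vt + mean
    return size_pc, corrs[size_pc], ablated
```

Feeding `ablated` back into a recognition readout and measuring the performance drop would correspond to the necessity criterion in the abstract.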


2021 ◽  
Vol 21 (9) ◽  
pp. 2881
Author(s):  
Brett Bankson ◽  
Michael Ward ◽  
Edward Silson ◽  
Chris Baker ◽  
R. Mark Richardson ◽  
...  

2021 ◽  
Author(s):  
Yiyuan Zhang ◽  
Ke Zhou ◽  
Pinglei Bao ◽  
Jia Liu

To achieve the computational goal of rapidly recognizing miscellaneous objects in the environment despite large variations in their appearance, our mind represents objects in a high-dimensional object space that provides separable category information and enables the extraction of the different kinds of information necessary for various levels of visual processing. To implement this abstract and complex object space, the ventral temporal cortex (VTC) develops different object-selective regions with a certain topological organization as the physical substrate. However, the principle that governs the topological organization of object selectivities in the VTC remains unclear. Here, equipped with the wiring-cost minimization principle constrained by the wiring length of neurons in the human temporal lobe, we constructed a hybrid self-organizing map (SOM) model as an artificial VTC (VTC-SOM) to explain how this abstract and complex object space is faithfully implemented in the brain. In two in silico experiments with empirical brain imaging and single-unit data, our VTC-SOM predicted the topological structure of fine-scale functional regions (face-, object-, body-, and place-selective regions) and the boundary (i.e., the middle fusiform sulcus) in large-scale abstract functional maps (animate vs. inanimate, real-world large-size vs. small-size, central vs. peripheral), with no significant loss in functionality (e.g., categorical selectivity, a hierarchy of view-invariant representations). These findings illustrate that the simple principle utilized in our model, rather than multiple hypotheses such as temporal associations, conceptual knowledge, and computational demands together, is sufficient to determine the topological organization of object selectivities in the VTC. In this way, the high-dimensional object space is faithfully implemented on the two-dimensional cortical surface of the brain.
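A bare-bones self-organizing map, the core mechanism behind the hybrid model described here, fits in a few lines: high-dimensional inputs are mapped onto a 2-D grid whose neighborhood updates preserve topology, much as object space is laid out on the cortical sheet. The VTC-SOM adds a wiring-length constraint not modeled in this sketch, and all parameters below are illustrative assumptions:

```python
import numpy as np

def train_som(data, grid=(10, 10), epochs=20, lr0=0.5, sigma0=3.0, seed=0):
    """Minimal 2-D self-organizing map trained on (samples x dims) data."""
    rng = np.random.default_rng(seed)
    h, w = grid
    weights = rng.standard_normal((h, w, data.shape[1]))
    yy, xx = np.mgrid[0:h, 0:w]
    for epoch in range(epochs):
        # Linearly decaying learning rate and neighborhood radius
        lr = lr0 * (1 - epoch / epochs)
        sigma = sigma0 * (1 - epoch / epochs) + 0.5
        for i in rng.permutation(data.shape[0]):
            v = data[i]
            # Best-matching unit: grid node with the closest weight vector
            d = np.linalg.norm(weights - v, axis=2)
            bi, bj = np.unravel_index(np.argmin(d), d.shape)
            # Gaussian neighborhood pulls nearby nodes toward the input,
            # which is what yields topologically organized maps
            g = np.exp(-((yy - bi) ** 2 + (xx - bj) ** 2) / (2 * sigma ** 2))
            weights += lr * g[..., None] * (v - weights)
    return weights
```

After training, nearby grid nodes respond to similar inputs, so categorical clusters in the input give rise to contiguous "selective regions" on the map.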


2021 ◽  
Author(s):  
Jin Li ◽  
Evelina Fedorenko ◽  
Zeynep M. Saygin

The visual word form area (VWFA) is an experience-dependent brain region in the left ventral temporal cortex of literate adults that responds selectively to visual words. Why does it emerge in this stereotyped location? Past research has shown that the VWFA is preferentially connected to the left-lateralized frontotemporal language network. However, it remains unclear whether the presence of a typical language network and its connections with ventral temporal cortex (VTC) are critical for the VWFA's emergence, and whether alternative functional architectures may support reading ability. We explored these questions in an individual (EG) born without the left superior temporal lobe but exhibiting normal reading ability. Using fMRI, we recorded brain activation to visual words, objects, faces, and scrambled words in EG and neurotypical controls. We did not observe word selectivity either in EG's right homotope of the VWFA (rVWFA)—the most expected location given that EG's language network is right-lateralized—or in her spared left VWFA (lVWFA), in the presence of typical face selectivity in both the right and left fusiform face area (rFFA, lFFA). Interestingly, multivariate pattern analyses revealed voxels in EG's rVWFA and lVWFA that showed 1) higher within- than between- category correlations for words (e.g., Words-Words>Words-Faces), and 2) higher within-category correlations for words than other categories (e.g., Words-Words>Faces-Faces). These results suggest that a typical left-hemisphere language network may be necessary for the emergence of focal word selectivity within ventral temporal cortex, and that orthographic processing may depend on a distributed neural code, which appears capable of supporting reading ability.
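The multivariate pattern analysis described here compares within-category to between-category pattern correlations across independent data splits (e.g., Words-Words > Words-Faces). A minimal sketch, with the split-half scheme and all names assumed for illustration rather than taken from the study:

```python
import numpy as np

def within_between_correlations(patterns_run1, patterns_run2, labels):
    """Split-half pattern analysis: correlate each run-1 pattern with each
    run-2 pattern (conditions x voxels), then average the correlations
    within vs. between categories."""
    n = len(labels)
    # corrcoef stacks rows; the [:n, n:] block holds run1-vs-run2 values
    r = np.corrcoef(patterns_run1, patterns_run2)[:n, n:]
    labels = np.asarray(labels)
    same = labels[:, None] == labels[None, :]
    return r[same].mean(), r[~same].mean()
```

A distributed orthographic code of the kind the abstract infers would show a reliably higher within- than between-category mean for words even without focal word-selective voxels.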


2021 ◽  
Author(s):  
MATTHEW D. LIEBERMAN

Although subjective construal (i.e. our personal understanding of situations and the people and objects within them) has been an enduring topic in social psychology, its underlying mechanisms have never been fully explored. This review presents a model of subjective construals as a kind of seeing (i.e. pre-reflective processes associated with effortless meaning making). Three distinct forms of ‘seeing’ (visual, semantic, and psychological) are discussed to highlight the breadth of these construals. The CEEing Model characterizes these distinct domains of pre-reflective construals as all being Coherent Effortless Experiences. Neural evidence is then reviewed suggesting that a variety of processes that possess the core CEEing characteristics across visual, semantic, and psychological domains can be localized to lateral posterior parietal cortex, lateral posterior temporal cortex, and ventral temporal cortex in an area dubbed gestalt cortex. The link between subjective construals and gestalt cortex is further strengthened by evidence showing that when people have similar subjective construals (i.e. they see things similarly) they show greater neural synchrony (i.e. correlated neural fluctuations over time) with each other in gestalt cortex. The fact that the act of CEEing tends to inhibit alternative construals is discussed as one of multiple reasons for why we fail to appreciate the idiosyncratic nature of our pre-reflective construals, leading to naïve realism and other conflict-inducing outcomes.
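Neural synchrony as defined here (correlated neural fluctuations over time across people) reduces to the mean pairwise correlation of subjects' regional time courses. A minimal sketch of that computation, with all names illustrative:

```python
import numpy as np

def intersubject_synchrony(ts_by_subject):
    """Mean pairwise correlation of one region's time course across
    subjects: higher values mean the subjects 'see' it more similarly."""
    ts = np.asarray(ts_by_subject)      # (subjects, timepoints)
    r = np.corrcoef(ts)                 # subject-by-subject correlations
    iu = np.triu_indices(len(ts), k=1)  # each unique pair once
    return r[iu].mean()
```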

