Dynamic Domain Specificity In Human Ventral Temporal Cortex

2020 ◽  
Author(s):  
Brett B. Bankson ◽  
Matthew J. Boring ◽  
R. Mark Richardson ◽  
Avniel Singh Ghuman

ABSTRACT An enduring neuroscientific debate concerns the extent to which neural representation is restricted to networks of patches specialized for particular domains of perceptual input (Kanwisher et al., 1997; Livingstone et al., 2019) or is also distributed outside of these patches to broad areas of cortex (Haxby et al., 2001; Op de Beeck, 2008). A critical level for this debate is the localization of the neural representation of the identity of individual images (Spiridon & Kanwisher, 2002), such as individual-level face or written word recognition. To address this debate, intracranial recordings from 489 electrodes throughout ventral temporal cortex across 17 human subjects were used to assess the spatiotemporal dynamics of individual word and face processing within and outside cortical patches strongly selective for these categories of visual information. Individual faces and words were first represented primarily in strongly selective patches and then, approximately 170 milliseconds later, in both strongly and weakly selective areas. Strongly and weakly selective areas contributed non-redundant information to the representation of individual images. These results can reconcile previous findings endorsing disparate poles of the domain specificity debate by highlighting the temporally segregated contributions of different functionally defined cortical areas to individual-level representations. Taken together, this work supports a dynamic model of neural representation characterized by successive domain-specific and distributed processing stages.

SIGNIFICANCE STATEMENT The visual processing system performs dynamic computations to differentiate visually similar forms, such as identifying individual words and faces. Previous models have localized these computations to 1) circumscribed, specialized portions of the brain, or 2) more distributed aspects of the brain. The current work combines machine learning analyses with human intracranial recordings to determine the neurodynamics of individual face and word processing in and outside of brain regions selective for these visual categories. The results suggest that individuation involves computations that occur first primarily in highly selective parts of the visual processing system and only later recruit both highly and non-highly selective regions. These results mediate between extant models of neural specialization by suggesting a dynamic domain specificity model of visual processing.
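As a rough illustration of the kind of time-resolved multivariate decoding described above (not the authors' actual pipeline), the Python sketch below decodes individual image identity from simulated electrode data in sliding time windows, separately for hypothetical "strongly selective" and "weakly selective" electrode groups; all data shapes, labels, and groupings are assumptions.

```python
# Minimal sketch: time-resolved decoding of image identity from simulated
# intracranial signals, run separately for two electrode groups.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_trials, n_electrodes, n_times = 200, 48, 150               # hypothetical dimensions
X = rng.standard_normal((n_trials, n_electrodes, n_times))   # trials x electrodes x time
y = rng.integers(0, 4, size=n_trials)                         # individual image labels

def sliding_window_decoding(X, y, win=10, step=5):
    """Cross-validated identity decoding in successive time windows."""
    clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
    scores = []
    for start in range(0, X.shape[2] - win + 1, step):
        feats = X[:, :, start:start + win].mean(axis=2)       # average the window per electrode
        scores.append(cross_val_score(clf, feats, y, cv=5).mean())
    return np.array(scores)

acc_strong = sliding_window_decoding(X[:, :24, :], y)         # "strongly selective" electrodes
acc_weak = sliding_window_decoding(X[:, 24:, :], y)           # "weakly selective" electrodes
print(acc_strong.round(2), acc_weak.round(2))
```

Comparing when each accuracy curve first rises above chance is the basic logic behind asking which areas carry identity information earliest.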

2021 ◽  
Author(s):  
Yiyuan Zhang ◽  
Ke Zhou ◽  
Pinglei Bao ◽  
Jia Liu

To achieve the computational goal of rapidly recognizing miscellaneous objects in the environment despite large variations in their appearance, our mind represents objects in a high-dimensional object space that provides separable category information and enables the extraction of the different kinds of information needed at various levels of visual processing. To implement this abstract and complex object space, the ventral temporal cortex (VTC) develops different object-selective regions with a certain topological organization as the physical substrate. However, the principle that governs the topological organization of object selectivities in the VTC remains unclear. Here, equipped with the wiring cost minimization principle constrained by the wiring length of neurons in the human temporal lobe, we constructed a hybrid self-organizing map (SOM) model as an artificial VTC (VTC-SOM) to explain how this abstract and complex object space is faithfully implemented in the brain. In two in silico experiments with empirical brain imaging and single-unit data, our VTC-SOM predicted the topological structure of fine-scale functional regions (face-, object-, body-, and place-selective regions) and the boundary (i.e., the middle fusiform sulcus) in large-scale abstract functional maps (animate vs. inanimate, real-world large-size vs. small-size, central vs. peripheral), with no significant loss in functionality (e.g., categorical selectivity, a hierarchy of view-invariant representations). These findings illustrate that the simple principle used in our model, rather than multiple hypotheses such as temporal associations, conceptual knowledge, and computational demands taken together, is apparently sufficient to determine the topological organization of object selectivities in the VTC. In this way, the high-dimensional object space is faithfully implemented on the two-dimensional cortical surface of the brain.
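For readers unfamiliar with self-organizing maps, the sketch below trains a minimal SOM on simulated object-feature vectors to show how high-dimensional selectivities can settle onto a 2-D sheet. It only approximates a wiring-length constraint through the shrinking Gaussian neighborhood; it is not the authors' VTC-SOM, and every parameter is an assumption.

```python
# Minimal SOM sketch: high-dimensional inputs are mapped onto a 2-D grid so
# that nearby units acquire similar selectivities (short "wiring").
import numpy as np

rng = np.random.default_rng(0)
grid_h, grid_w, n_features = 20, 20, 64
weights = rng.standard_normal((grid_h, grid_w, n_features)) * 0.1
X = rng.standard_normal((1000, n_features))          # stand-in object-feature vectors

ys, xs = np.mgrid[0:grid_h, 0:grid_w]
for t, x in enumerate(X):
    lr = 0.5 * np.exp(-t / len(X))                   # decaying learning rate
    sigma = 5.0 * np.exp(-t / len(X))                # shrinking neighborhood radius
    dist = np.linalg.norm(weights - x, axis=2)       # match each unit to the input
    bi, bj = np.unravel_index(np.argmin(dist), dist.shape)   # best-matching unit
    # Gaussian neighborhood on the sheet: units near the winner move together,
    # which clusters similar selectivities spatially.
    g = np.exp(-((ys - bi) ** 2 + (xs - bj) ** 2) / (2 * sigma ** 2))
    weights += lr * g[..., None] * (x - weights)
```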


2020 ◽  
Vol 20 (11) ◽  
pp. 115
Author(s):  
Brett Bankson ◽  
Matthew Boring ◽  
R. Mark Richardson ◽  
Avniel Singh Ghuman

2013 ◽  
Vol 31 (2) ◽  
pp. 197-209 ◽  
Author(s):  
BEVIL R. CONWAY

Abstract Explanations for color phenomena are often sought in the retina, lateral geniculate nucleus, and V1, yet it is becoming increasingly clear that a complete account will take us further along the visual-processing pathway. Working out which areas are involved is not trivial. Responses to S-cone activation are often assumed to indicate that an area or neuron is involved in color perception. However, work tracing S-cone signals into extrastriate cortex has challenged this assumption: S-cone responses have been found in brain regions, such as the middle temporal (MT) motion area, that are not thought to play a major role in color perception. Here, we review the processing of S-cone signals across cortex and present original data on S-cone responses measured with fMRI in alert macaque, focusing on one area in which S-cone signals seem likely to contribute to color (V4/posterior inferior temporal cortex) and on one area in which S-cone signals are unlikely to play a role in color (MT). We advance the hypothesis that S-cone signals in color-computing areas are required to achieve a balanced neural representation of perceptual color space, whereas those in non-color areas provide a cue to illumination (not luminance) and confer sensitivity to the chromatic contrast generated by natural daylight (shadows, illuminated by ambient sky, surrounded by direct sunlight). This sensitivity would facilitate the extraction of shape-from-shadow signals to benefit global scene analysis and motion perception.


2018 ◽  
Author(s):  
Ceren Battal ◽  
Mohamed Rezk ◽  
Stefania Mattioni ◽  
Jyothirmayi Vadlamudi ◽  
Olivier Collignon

ABSTRACT The ability to compute the location and direction of sounds is a crucial perceptual skill for efficiently interacting with dynamic environments. How the human brain implements spatial hearing is, however, poorly understood. In our study, we used fMRI to characterize the brain activity of male and female humans listening to sounds moving left, right, up, and down, as well as to static sounds. Whole-brain univariate results contrasting moving and static sounds varying in their location revealed a robust functional preference for auditory motion in bilateral human planum temporale (hPT). Using independently localized hPT, we show that this region contains information about auditory motion directions and, to a lesser extent, sound source locations. Moreover, hPT showed an axis-of-motion organization reminiscent of the functional organization of the middle temporal cortex (hMT+/V5) for vision. Importantly, whereas motion direction and location rely on partially shared pattern geometries in hPT, as demonstrated by successful cross-condition decoding, the responses elicited by static and moving sounds were nevertheless significantly distinct. Altogether, our results demonstrate that hPT codes for auditory motion and location, but that the underlying neural computation linked to motion processing is more reliable and partially distinct from the one supporting sound source location.

SIGNIFICANCE STATEMENT In comparison to what we know about visual motion, little is known about how the brain implements spatial hearing. Our study reveals that motion directions and sound source locations can be reliably decoded in the human planum temporale (hPT) and that they rely on partially shared pattern geometries. Our study therefore sheds important new light on how the location and direction of sounds are computed in the human auditory cortex, by showing that these two computations rely on partially shared neural codes. Furthermore, our results show that the neural representation of moving sounds in hPT follows a "preferred axis of motion" organization, reminiscent of the coding mechanisms typically observed in the occipital hMT+/V5 region for computing visual motion.
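The cross-condition decoding logic mentioned above can be illustrated with a minimal, simulated sketch: a classifier trained on one condition (moving sounds) is tested on the other (static sounds), so above-chance transfer implies partially shared pattern geometry. Voxel patterns, labels, and the condition offset are assumptions, not the study's data.

```python
# Minimal cross-condition decoding sketch on simulated "hPT" voxel patterns.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
n_voxels = 120
directions = {"left": 0, "right": 1, "up": 2, "down": 3}
templates = {lab: rng.standard_normal(n_voxels) for lab in directions.values()}

def simulate_patterns(n_per_class, offset):
    """Fake patterns sharing class templates across conditions, plus a condition offset."""
    X, y = [], []
    for lab, tmpl in templates.items():
        X.append(tmpl + offset + 0.8 * rng.standard_normal((n_per_class, n_voxels)))
        y += [lab] * n_per_class
    return np.vstack(X), np.array(y)

X_motion, y_motion = simulate_patterns(40, offset=0.0)    # moving-sound trials
X_static, y_static = simulate_patterns(40, offset=0.5)    # static-sound trials

clf = SVC(kernel="linear").fit(X_motion, y_motion)        # train on motion directions
print("cross-condition accuracy:", clf.score(X_static, y_static))  # test on static locations
```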


2021 ◽  
Author(s):  
Arielle S Keller ◽  
Akshay V Jagadeesh ◽  
Lior Bugatus ◽  
Leanne M Williams ◽  
Kalanit Grill-Spector

How does attention enhance neural representations of goal-relevant stimuli while suppressing representations of ignored stimuli across regions of the brain? While prior studies have shown that attention enhances visual responses, we lack a cohesive understanding of how selective attention modulates visual representations across the brain. Here, we used functional magnetic resonance imaging (fMRI) while participants performed a selective attention task on superimposed stimuli from multiple categories and used a data-driven approach to test how attention affects both decodability of category information and residual correlations (after regressing out stimulus-driven variance) with category-selective regions of ventral temporal cortex (VTC). Our data reveal three main findings. First, when two objects are simultaneously viewed, the category of the attended object can be decoded more readily than the category of the ignored object, with the greatest attentional enhancements observed in occipital and temporal lobes. Second, after accounting for the response to the stimulus, the correlation in the residual brain activity between a cortical region and a category-selective region of VTC was elevated when that region's preferred category was attended vs. ignored, and more so in the right occipital, parietal, and frontal cortices. Third, we found that the stronger the residual correlations between a given region of cortex and VTC, the better visual category information could be decoded from that region. These findings suggest that heightened residual correlations by selective attention may reflect the sharing of information between sensory regions and higher-order cortical regions to provide attentional enhancement of goal-relevant information.
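The "residual correlation" analysis described above can be sketched as follows: stimulus-driven variance is regressed out of two regions' time series and the leftover signals are correlated. The design matrix and region signals below are simulated assumptions, not the study's fMRI data.

```python
# Minimal sketch: correlate regional time series after removing stimulus-driven variance.
import numpy as np

rng = np.random.default_rng(2)
n_tr, n_conditions = 300, 4
design = rng.integers(0, 2, size=(n_tr, n_conditions)).astype(float)   # stimulus regressors

shared = rng.standard_normal(n_tr)       # shared, non-stimulus signal (e.g., attention-related)
roi_a = design @ rng.standard_normal(n_conditions) + shared + rng.standard_normal(n_tr)
roi_vtc = design @ rng.standard_normal(n_conditions) + shared + rng.standard_normal(n_tr)

def residualize(y, X):
    """Remove the least-squares fit of X from y."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ beta

res_a = residualize(roi_a, design)
res_vtc = residualize(roi_vtc, design)
print("residual correlation:", np.corrcoef(res_a, res_vtc)[0, 1])
```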


2020 ◽  
Vol 31 (1) ◽  
pp. 603-619 ◽  
Author(s):  
Mona Rosenke ◽  
Rick van Hoof ◽  
Job van den Hurk ◽  
Kalanit Grill-Spector ◽  
Rainer Goebel

Abstract Human visual cortex contains many retinotopic and category-specific regions. These brain regions have been the focus of a large body of functional magnetic resonance imaging research, significantly expanding our understanding of visual processing. As studying these regions requires accurate localization of their cortical location, researchers perform functional localizer scans to identify these regions in each individual. However, it is not always possible to conduct these localizer scans. Here, we developed and validated a functional region of interest (ROI) atlas of early visual and category-selective regions in human ventral and lateral occipito-temporal cortex. Results show that for the majority of functionally defined ROIs, cortex-based alignment results in lower between-subject variability compared to nonlinear volumetric alignment. Furthermore, we demonstrate that 1) the atlas accurately predicts the location of an independent dataset of ventral temporal cortex ROIs and of other atlases of place selectivity, motion selectivity, and retinotopy, and 2) the majority of voxels within our atlas respond mostly to the labeled category in a leave-one-subject-out cross-validation, demonstrating the utility of this atlas. The functional atlas is publicly available (download.brainvoyager.com/data/visfAtlas.zip) and can help identify the location of these regions in healthy subjects as well as in populations (e.g., blind people, infants) in which functional localizers cannot be run.
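A leave-one-subject-out validation of a probabilistic ROI atlas, of the general kind described above, can be sketched as follows. The simulated vertex masks and the 50% threshold are assumptions; this is not the visfAtlas pipeline itself.

```python
# Minimal sketch: build a probabilistic label map from N-1 subjects and score
# its overlap (Dice) with the held-out subject's individually defined ROI.
import numpy as np

rng = np.random.default_rng(3)
n_subjects, n_vertices = 17, 5000
true_roi = np.zeros(n_vertices, bool)
true_roi[1000:1200] = True                                   # nominal ROI location
masks = np.array([np.roll(true_roi, rng.integers(-30, 30))   # jittered per-subject ROIs
                  for _ in range(n_subjects)])

def dice(a, b):
    return 2 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

scores = []
for s in range(n_subjects):
    prob = masks[np.arange(n_subjects) != s].mean(axis=0)    # probabilistic atlas from N-1 subjects
    atlas_roi = prob >= 0.5                                  # threshold into a group label
    scores.append(dice(atlas_roi, masks[s]))                 # overlap with the held-out subject
print("mean Dice:", round(float(np.mean(scores)), 2))
```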


2009 ◽  
Vol 21 (7) ◽  
pp. 1447-1460 ◽  
Author(s):  
Julie A. Brefczynski-Lewis ◽  
Ritobrato Datta ◽  
James W. Lewis ◽  
Edgar A. DeYoe

Previously, we and others have shown that attention can enhance visual processing in a spatially specific manner that is retinotopically mapped in the occipital cortex. However, it is difficult to appreciate the functional significance of the spatial pattern of cortical activation just by examining the brain maps. In this study, we visualize the neural representation of the “spotlight” of attention using a back-projection of attention-related brain activation onto a diagram of the visual field. In the two main experiments, we examine the topography of attentional activation in the occipital and parietal cortices. In retinotopic areas, attentional enhancement is strongest at the locations of the attended target, but also spreads to nearby locations and even weakly to restricted locations in the opposite visual field. The dispersion of attentional effects around an attended site increases with the eccentricity of the target in a manner that roughly corresponds to a constant area of spread within the cortex. When averaged across multiple observers, these patterns appear consistent with a gradient model of spatial attention. However, individual observers exhibit complex variations that are unique but reproducible. Overall, these results suggest that the topography of visual attention for each individual is composed of a common theme plus a personal variation that may reflect their own unique “attentional style.”
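The back-projection idea can be illustrated with a small sketch in which each voxel contributes its attention-related response at its retinotopically preferred visual-field coordinates, yielding a visual-field image of the "spotlight." Voxel parameters and responses below are simulated assumptions, not the authors' data.

```python
# Minimal sketch: accumulate voxel responses on a visual-field grid using each
# voxel's preferred eccentricity and polar angle from retinotopic mapping.
import numpy as np

rng = np.random.default_rng(4)
n_voxels = 2000
ecc = rng.uniform(0.5, 10.0, n_voxels)                  # preferred eccentricity (deg)
ang = rng.uniform(0, 2 * np.pi, n_voxels)               # preferred polar angle (rad)
target_ecc, target_ang = 5.0, np.pi / 4                 # attended location

# Attention-related response falls off with distance from the attended target.
d = np.hypot(ecc * np.cos(ang) - target_ecc * np.cos(target_ang),
             ecc * np.sin(ang) - target_ecc * np.sin(target_ang))
resp = np.exp(-d ** 2 / 8) + 0.1 * rng.standard_normal(n_voxels)

# Back-project onto a 64 x 64 visual-field grid and average per cell.
grid = np.zeros((64, 64))
counts = np.zeros((64, 64))
xi = np.clip(((ecc * np.cos(ang) + 10) / 20 * 63).astype(int), 0, 63)
yi = np.clip(((ecc * np.sin(ang) + 10) / 20 * 63).astype(int), 0, 63)
np.add.at(grid, (yi, xi), resp)
np.add.at(counts, (yi, xi), 1)
spotlight = np.divide(grid, counts, out=np.zeros_like(grid), where=counts > 0)
```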


NeuroImage ◽  
2022 ◽  
pp. 118900
Author(s):  
Arielle S. Keller ◽  
Akshay Jagadeesh ◽  
Lior Bugatus ◽  
Leanne M. Williams ◽  
Kalanit Grill-Spector

2018 ◽  
Author(s):  
Tijl Grootswagers ◽  
Radoslaw M. Cichy ◽  
Thomas A. Carlson

Abstract Multivariate decoding methods applied to neuroimaging data have become the standard in cognitive neuroscience for unravelling statistical dependencies between brain activation patterns and experimental conditions. The current challenge is to demonstrate that information decoded as such by the experimenter is in fact used by the brain itself to guide behaviour. Here we demonstrate a promising approach to do so in the context of neural activation during object perception and categorisation behaviour. We first localised decodable information about visual objects in the human brain using a spatially-unbiased multivariate decoding analysis. We then related brain activation patterns to behaviour using a machine-learning based extension of signal detection theory. We show that while there is decodable information about visual category throughout the visual brain, only a subset of those representations predicted categorisation behaviour, located mainly in anterior ventral temporal cortex. Our results have important implications for the interpretation of neuroimaging studies, highlight the importance of relating decoding results to behaviour, and suggest a suitable methodology towards this aim.
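One simple way to relate decoded information to behaviour, loosely in the spirit of the signal-detection-style linkage mentioned above, is to correlate per-trial classifier decision values with a behavioural measure. The sketch below uses simulated data and a hypothetical link to reaction times; it is not the authors' method.

```python
# Minimal sketch: cross-validated decision values (distance to the category
# boundary) correlated with simulated per-trial reaction times.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_predict
from scipy.stats import spearmanr

rng = np.random.default_rng(5)
n_trials, n_features = 240, 100
y = rng.integers(0, 2, n_trials)                                 # two object categories
X = rng.standard_normal((n_trials, n_features)) + y[:, None] * 0.5

dec = cross_val_predict(LinearSVC(dual=False, max_iter=5000), X, y,
                        cv=5, method="decision_function")        # per-trial decision values
evidence = np.where(y == 1, dec, -dec)                           # signed toward the true class
rt = 600 - 40 * evidence + 30 * rng.standard_normal(n_trials)    # faster when evidence is strong

rho, p = spearmanr(evidence, rt)
print(f"evidence-RT correlation: rho={rho:.2f}, p={p:.3f}")
```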


2020 ◽  
Vol 1 (1) ◽  
Author(s):  
Ali Ghazizadeh ◽  
Mohammad Amin Fakharian ◽  
Arash Amini ◽  
Whitney Griggs ◽  
David A Leopold ◽  
...  

Abstract Novel and valuable objects are motivationally attractive to animals, including primates. However, little is known about how novelty and value processing are organized across the brain. We used fMRI in macaques to map brain responses to visual fractal patterns varying in either novelty or value dimensions and compared the results with the structure of functionally connected brain networks determined at rest. The results show that different brain networks possess unique combinations of novelty and value coding. One network identified in the ventral temporal cortex preferentially encoded object novelty, whereas another in the parietal cortex encoded the learned value. A third network, broadly composed of temporal and prefrontal areas (TP network), along with functionally connected portions of the striatum, amygdala, and claustrum, encoded both dimensions with similar activation dynamics. Our results support the emergence of a common currency signal in the TP network that may underlie the common attitudes toward novel and valuable objects.
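The logic of summarising novelty and value coding within functionally connected networks can be sketched with a simple regression on simulated data; region labels, network assignments, and coding gains below are all assumptions, not the macaque data.

```python
# Minimal sketch: estimate each region's novelty and value coefficients, then
# summarise the two coefficients within hypothetical networks.
import numpy as np

rng = np.random.default_rng(6)
n_regions, n_trials = 30, 400
novelty = rng.integers(0, 2, n_trials).astype(float)        # novel vs. familiar fractal
value = rng.integers(0, 2, n_trials).astype(float)          # high- vs. low-value fractal
network = np.repeat(["VTC", "parietal", "TP"], 10)           # hypothetical network labels

# Each network gets a characteristic mixture of novelty and value coding.
gain = {"VTC": (1.0, 0.1), "parietal": (0.1, 1.0), "TP": (0.7, 0.7)}
X = np.column_stack([np.ones(n_trials), novelty, value])
Y = np.stack([gain[n][0] * novelty + gain[n][1] * value + rng.standard_normal(n_trials)
              for n in network], axis=1)                     # trials x regions

betas = np.linalg.lstsq(X, Y, rcond=None)[0]                 # [intercept, novelty, value] x regions
for n in ("VTC", "parietal", "TP"):
    b = betas[1:, network == n].mean(axis=1)
    print(f"{n}: novelty beta={b[0]:.2f}, value beta={b[1]:.2f}")
```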

