Decoding Multivoxel Representations of Affective Scenes in Retinotopic Visual Cortex

2020 ◽  
Author(s):  
Ke Bo ◽  
Siyang Yin ◽  
Yuelu Liu ◽  
Zhenhong Hu ◽  
Sreenivasan Meyyappan ◽  
...  

Abstract The perception of opportunities and threats in complex scenes represents one of the main functions of the human visual system. In the laboratory, its neurophysiological basis is often studied by having observers view pictures varying in affective content. This body of work has consistently shown that viewing emotionally engaging, compared to neutral, pictures (1) heightens blood flow in limbic structures and frontoparietal cortex, as well as in anterior ventral and dorsal visual cortex, and (2) prompts an increase in the late positive event-related potential (LPP), a scalp-recorded and time-sensitive index of engagement within the network of aforementioned neural structures. The role of retinotopic visual cortex in this process has, however, been contentious, with competing theoretical notions predicting the presence versus absence of emotion-specific signals in retinotopic visual areas. The present study used multimodal neuroimaging and machine learning to address this question by examining the large-scale neural representations of affective pictures. Recording EEG and fMRI simultaneously while observers viewed pleasant, unpleasant, and neutral affective pictures, and applying multivariate pattern analysis to single-trial BOLD activities in retinotopic visual cortex, we identified three robust findings: First, unpleasant-versus-neutral decoding accuracy, as well as pleasant-versus-neutral decoding accuracy, were well above chance level in all retinotopic visual areas, including primary visual cortex. Second, the decoding accuracy in ventral visual cortex, but not in early visual cortex or dorsal visual cortex, was significantly correlated with LPP amplitude. Third, effective connectivity from amygdala to ventral visual cortex predicted unpleasant-versus-neutral decoding accuracy, and effective connectivity from ventral frontal cortex to ventral visual cortex predicted pleasant-versus-neutral decoding accuracy. These results suggest that affective pictures evoked valence-specific multivoxel neural representations in retinotopic visual cortex and that these multivoxel representations were influenced by reentry signals from limbic and frontal brain regions.
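The core decoding analysis can be sketched in a few lines: a cross-validated linear classifier applied to single-trial multivoxel BOLD patterns from a retinotopic region of interest. The arrays below are random placeholders, and the SVM-with-stratified-folds recipe is one common MVPA choice, not necessarily the authors' exact pipeline.

```python
# Minimal MVPA sketch: decode unpleasant vs. neutral from single-trial
# BOLD patterns in a retinotopic ROI. All data are random placeholders;
# the actual study's pipeline may differ.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score, StratifiedKFold

rng = np.random.default_rng(0)
n_trials, n_voxels = 120, 500                               # trials x voxels in the ROI
bold_patterns = rng.standard_normal((n_trials, n_voxels))   # single-trial activity estimates
valence_labels = rng.integers(0, 2, n_trials)               # 0 = neutral, 1 = unpleasant

# z-score each voxel, then fit a linear SVM with stratified 10-fold CV
clf = make_pipeline(StandardScaler(), LinearSVC(dual=False))
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
acc = cross_val_score(clf, bold_patterns, valence_labels, cv=cv, scoring="accuracy")
print(f"decoding accuracy: {acc.mean():.3f} (chance = 0.5)")
```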

2021 ◽  
Author(s):  
Ke Bo ◽  
Siyang Yin ◽  
Yuelu Liu ◽  
Zhenhong Hu ◽  
Sreenivasan Meyyappan ◽  
...  

Abstract The perception of opportunities and threats in complex visual scenes represents one of the main functions of the human visual system. The underlying neurophysiology is often studied by having observers view pictures varying in affective content. It has been shown that viewing emotionally engaging, compared with neutral, pictures (1) heightens blood flow in limbic, frontoparietal, and anterior visual structures and (2) enhances the late positive event-related potential (LPP). The role of retinotopic visual cortex in this process has, however, been contentious, with competing theories predicting the presence versus absence of emotion-specific signals in retinotopic visual areas. Recording simultaneous electroencephalography–functional magnetic resonance imaging while observers viewed pleasant, unpleasant, and neutral affective pictures, and applying multivariate pattern analysis, we found that (1) unpleasant versus neutral and pleasant versus neutral decoding accuracy were well above chance level in retinotopic visual areas, (2) decoding accuracy in ventral visual cortex (VVC), but not in early or dorsal visual cortex, was correlated with LPP, and (3) effective connectivity from amygdala to VVC predicted unpleasant versus neutral decoding accuracy, whereas effective connectivity from ventral frontal cortex to VVC predicted pleasant versus neutral decoding accuracy. These results suggest that affective scenes evoke valence-specific neural representations in retinotopic visual cortex and that these representations are influenced by reentry signals from anterior brain regions.
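The brain–behaviour link in finding (2) reduces to an across-subject correlation between ROI decoding accuracy and LPP amplitude. A minimal sketch with placeholder per-subject values (the study's actual statistics may differ):

```python
# Sketch of the LPP link: correlate per-subject VVC decoding accuracy
# with per-subject LPP amplitude. Values are synthetic placeholders;
# only the analysis shape (across-subject correlation) follows the text.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(1)
n_subjects = 20
decoding_acc = 0.5 + 0.1 * rng.random(n_subjects)   # per-subject VVC accuracy
lpp_amplitude = rng.normal(5.0, 1.5, n_subjects)    # per-subject LPP (microvolts)

r, p = pearsonr(decoding_acc, lpp_amplitude)
print(f"r = {r:.2f}, p = {p:.3f}")
```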


2015 ◽  
Vol 32 ◽  
Author(s):  
M. J. Arcaro ◽  
S. Kastner

Abstract Areas V3 and V4 are commonly thought of as individual entities in the primate visual system, based on definition criteria such as their representation of visual space, connectivity, functional response properties, and relative anatomical location in cortex. Yet, large-scale functional and anatomical organization patterns not only emphasize distinctions within each area, but also links across visual cortex. Specifically, the visuotopic organization of V3 and V4 appears to be part of a larger, supra-areal organization, clustering these areas with early visual areas V1 and V2. In addition, connectivity patterns across visual cortex appear to vary within these areas as a function of their supra-areal eccentricity organization. This complicates the traditional view of these regions as individual functional “areas.” Here, we will review the criteria for defining areas V3 and V4 and will discuss functional and anatomical studies in humans and monkeys that emphasize the integration of individual visual areas into broad, supra-areal clusters that work in concert for a common computational goal. Specifically, we propose that the visuotopic organization of V3 and V4, which provides the criteria for differentiating these areas, also unifies these areas into the supra-areal organization of early visual cortex. We propose that V3 and V4 play a critical role in this supra-areal organization by filtering information about the visual environment along parallel pathways across higher-order cortex.


2019 ◽  
Author(s):  
Sonia Poltoratski ◽  
Frank Tong

Abstract The detection and segmentation of meaningful figures from their background is a core function of vision. While work in non-human primates has implicated early visual mechanisms in this figure-ground modulation, neuroimaging in humans has instead largely ascribed the processing of figures and objects to higher stages of the visual hierarchy. Here, we used high-field fMRI at 7 Tesla to measure BOLD responses to task-irrelevant orientation-defined figures in human early visual cortex, and employed a novel population receptive field (pRF) mapping-based approach to resolve the spatial profiles of two constituent mechanisms of figure-ground modulation: a local boundary response, and a further enhancement spanning the full extent of the figure region that is driven by global differences in features. Reconstructing the distinct spatial profiles of these effects reveals that figure enhancement modulates responses in human early visual cortex in a manner consistent with a mechanism of automatic, contextually-driven feedback from higher visual areas.
Significance Statement: A core function of the visual system is to parse complex 2D input into meaningful figures. We do so constantly and seamlessly, both by processing information about visible edges and by analyzing large-scale differences between figures and background. While influential neurophysiology work has characterized an intriguing mechanism that enhances V1 responses to perceptual figures, we have a poor understanding of how the early visual system contributes to figure-ground processing in humans. Here, we use advanced computational analysis methods and high-field human fMRI data to resolve the distinct spatial profiles of local edge and global figure enhancement in the early visual system (V1 and LGN); the latter is distinct and consistent with a mechanism of automatic, stimulus-driven feedback from higher-level visual areas.
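The pRF-based reconstruction can be illustrated as a back-projection: each voxel is modelled as a 2D Gaussian in visual space, and its figure-minus-background modulation is projected back into the visual field through that Gaussian. A sketch with placeholder pRF parameters (not the authors' published code):

```python
# Sketch of a pRF-based back-projection: reconstruct the spatial profile
# of figure enhancement by summing each voxel's 2D Gaussian pRF weighted
# by its (figure - background) BOLD modulation. pRF parameters and
# modulations are random placeholders, not measured data.
import numpy as np

rng = np.random.default_rng(2)
n_voxels = 300
x0 = rng.uniform(-8, 8, n_voxels)           # pRF centres (deg)
y0 = rng.uniform(-8, 8, n_voxels)
sigma = rng.uniform(0.5, 3.0, n_voxels)     # pRF sizes (deg)
modulation = rng.standard_normal(n_voxels)  # figure-minus-ground response

# Visual-field grid, then accumulate each voxel's weighted Gaussian
xx, yy = np.meshgrid(np.linspace(-8, 8, 161), np.linspace(-8, 8, 161))
profile = np.zeros_like(xx)
for i in range(n_voxels):
    g = np.exp(-((xx - x0[i]) ** 2 + (yy - y0[i]) ** 2) / (2 * sigma[i] ** 2))
    profile += modulation[i] * g
print(profile.shape)  # reconstructed modulation map over the visual field
```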


2015 ◽  
Vol 113 (9) ◽  
pp. 3159-3171 ◽  
Author(s):  
Caroline D. B. Luft ◽  
Alan Meeson ◽  
Andrew E. Welchman ◽  
Zoe Kourtzi

Learning the structure of the environment is critical for interpreting the current scene and predicting upcoming events. However, the brain mechanisms that support our ability to translate knowledge about scene statistics to sensory predictions remain largely unknown. Here we provide evidence that learning of temporal regularities shapes representations in early visual cortex that relate to our ability to predict sensory events. We tested the participants' ability to predict the orientation of a test stimulus after exposure to sequences of leftward- or rightward-oriented gratings. Using fMRI decoding, we identified brain patterns related to the observers' visual predictions rather than stimulus-driven activity. Decoding of predicted orientations following structured sequences was enhanced after training, while decoding of cued orientations following exposure to random sequences did not change. These predictive representations appear to be driven by the same large-scale neural populations that encode actual stimulus orientation and to be specific to the learned sequence structure. Thus our findings provide evidence that learning temporal structures supports our ability to predict future events by reactivating selective sensory representations as early as in primary visual cortex.
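The key move, decoding what observers expected rather than what was shown, amounts to cross-classification: train an orientation decoder on stimulus-driven patterns, then test it on prediction trials. A hedged sketch with placeholder data:

```python
# Cross-decoding sketch: train an orientation classifier on stimulus-driven
# V1 patterns, then test it on prediction trials where no grating is shown.
# Arrays are random placeholders; above-chance transfer would indicate that
# predictions reactivate stimulus-like representations.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(3)
X_stim = rng.standard_normal((100, 400))   # stimulus-driven trials x voxels
y_stim = rng.integers(0, 2, 100)           # 0 = leftward, 1 = rightward
X_pred = rng.standard_normal((40, 400))    # prediction trials
y_pred = rng.integers(0, 2, 40)            # orientation the sequence implied

clf = make_pipeline(StandardScaler(), LinearSVC(dual=False))
clf.fit(X_stim, y_stim)                    # train on presented orientations
print(f"cross-decoding accuracy: {clf.score(X_pred, y_pred):.3f}")
```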


2020 ◽  
Author(s):  
Munendo Fujimichi ◽  
Hiroki Yamamoto ◽  
Jun Saiki

Are visual representations in the human early visual cortex necessary for visual working memory (VWM)? Previous studies suggest that VWM is underpinned by distributed representations across several brain regions, including the early visual cortex. Notably, in these studies, participants had to memorize images under consistent visual conditions. However, in our daily lives, we must retain the essential visual properties of objects despite changes in illumination or viewpoint. The role of brain regions—particularly the early visual cortices—in these situations remains unclear. The present study investigated whether the early visual cortex was essential for achieving stable VWM. Focusing on VWM for object surface properties, we conducted fMRI experiments while male and female participants performed a delayed roughness discrimination task in which sample and probe spheres were presented under varying illumination. By applying multi-voxel pattern analysis to brain activity in regions of interest, we found that the ventral visual cortex and intraparietal sulcus were involved in roughness VWM under changing illumination conditions. In contrast, VWM was not supported as robustly by the early visual cortex. These findings show that visual representations in the early visual cortex alone are insufficient for the robust roughness VWM representation required during changes in illumination.
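The generalization logic can be sketched as cross-condition decoding: train a roughness decoder on delay-period patterns under one illumination and test it under the other, where successful transfer indicates illumination-robust VWM. Placeholder data, not the authors' pipeline:

```python
# Cross-illumination generalization sketch for roughness VWM: a decoder
# trained under illumination A is tested under illumination B. Synthetic
# placeholder data throughout.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
X_illum_a = rng.standard_normal((80, 300))  # delay-period trials x voxels
y_a = rng.integers(0, 2, 80)                # 0 = smooth, 1 = rough
X_illum_b = rng.standard_normal((80, 300))
y_b = rng.integers(0, 2, 80)

decoder = LogisticRegression(max_iter=1000).fit(X_illum_a, y_a)
print(f"cross-illumination accuracy: {decoder.score(X_illum_b, y_b):.3f}")
```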


2020 ◽  
Vol 10 (1) ◽  
Author(s):  
Elia Benhamou ◽  
Charles R. Marshall ◽  
Lucy L. Russell ◽  
Chris J. D. Hardy ◽  
Rebecca L. Bond ◽  
...  

Abstract The selective destruction of large-scale brain networks by pathogenic protein spread is a ubiquitous theme in neurodegenerative disease. Characterising the circuit architecture of these diseases could illuminate both their pathophysiology and the computational architecture of the cognitive processes they target. However, this is challenging using standard neuroimaging techniques. Here we addressed this issue using a novel technique—spectral dynamic causal modelling—that estimates the effective connectivity between brain regions from resting-state fMRI data. We studied patients with semantic dementia (SD)—the paradigmatic disorder of the brain system mediating world knowledge—relative to healthy older individuals. We assessed how the effective connectivity of the semantic appraisal network targeted by this disease was modulated by pathogenic protein deposition and by two key phenotypic factors, semantic impairment and behavioural disinhibition. The presence of pathogenic protein in SD weakened the normal inhibitory self-coupling of network hubs in both antero-mesial temporal lobes, with development of an abnormal excitatory fronto-temporal projection in the left cerebral hemisphere. Semantic impairment and social disinhibition were linked to a similar but more extensive profile of abnormally attenuated inhibitory self-coupling within temporal lobe regions and excitatory projections between temporal and inferior frontal regions. Our findings demonstrate that population-level dynamic causal modelling can disclose a core pathophysiological feature of proteinopathic network architecture—attenuation of inhibitory connectivity—and the key elements of distributed neuronal processing that underwrite semantic memory.
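For context on "inhibitory self-coupling": DCM models regional neuronal dynamics with a linear state equation in which the diagonal of the effective-connectivity matrix is constrained to be negative (in SPM's common parameterisation, scaled around -0.5 Hz), so attenuated self-coupling means diagonal entries moving toward zero, i.e. regional disinhibition. A minimal statement of the model (notation assumed here, not quoted from the paper):

```latex
% Linear state equation behind (spectral) DCM: x(t) are regional states,
% A is the effective-connectivity matrix, v(t) endogenous fluctuations.
% Self-connections a_ii are constrained to be negative (inhibitory).
\frac{dx}{dt} = A\,x + v, \qquad a_{ii} = -\tfrac{1}{2}\exp(\theta_{ii})\ \mathrm{Hz}
```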


2020 ◽  
Author(s):  
Jakub Kopal ◽  
Jaroslav Hlinka ◽  
Elodie Despouy ◽  
Luc Valton ◽  
Marie Denuelle ◽  
...  

Recognition memory is the ability to recognize previously encountered events, objects, or people. It is characterized by its robustness and rapidness. Even this relatively simple ability requires the coordinated activity of a surprisingly large number of brain regions. These spatially distributed, but functionally linked regions are interconnected into large-scale networks. Understanding memory requires an examination of the involvement of these networks and the interactions between different regions while memory processes unfold. However, little is known about the dynamical organization of large-scale networks during the early phases of recognition memory. We recorded intracranial EEG, which affords high temporal and spatial resolution, while epileptic subjects performed a visual recognition memory task. We analyzed dynamic functional and effective connectivity as well as network properties. Various networks were identified, each with its specific characteristics regarding information flow (feedforward or feedback), dynamics, topology, and stability. The first network mainly involved the right visual ventral stream and bilateral frontal regions. It was characterized by early predominant feedforward activity, modular topology, and high stability. It was followed by the involvement of a second network, mainly in the left hemisphere, but notably also involving the right hippocampus, characterized by later feedback activity, integrated topology, and lower stability. The transition between networks was associated with a change in network topology. Overall, these results confirm that several large-scale brain networks, each with specific properties and temporal manifestation, are involved during recognition memory. Ultimately, understanding how the brain dynamically faces rapid changes in cognitive demand is vital to our comprehension of the neural basis of cognition.
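Dynamic functional connectivity of the kind described here is commonly computed with sliding windows: correlate channel signals within short segments to obtain a time-resolved connectivity matrix. A minimal sketch on synthetic signals (window and step sizes are illustrative, not the authors' settings):

```python
# Sliding-window sketch of dynamic functional connectivity on iEEG:
# correlate channels within short windows to get one connectivity matrix
# per window. Signals are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(5)
n_channels, n_samples, fs = 16, 5000, 500      # 10 s of data at 500 Hz
signals = rng.standard_normal((n_channels, n_samples))

win, step = fs, fs // 4                        # 1 s windows, 250 ms step
dyn_fc = []
for start in range(0, n_samples - win + 1, step):
    seg = signals[:, start:start + win]
    dyn_fc.append(np.corrcoef(seg))            # channels x channels per window
dyn_fc = np.stack(dyn_fc)                      # windows x channels x channels
print(dyn_fc.shape)
```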


2019 ◽  
Author(s):  
Kamila M. Jozwik ◽  
Michael Lee ◽  
Tiago Marques ◽  
Martin Schrimpf ◽  
Pouya Bashivan

Image features computed by specific convolutional artificial neural networks (ANNs) can be used to make state-of-the-art predictions of primate ventral stream responses to visual stimuli. However, in addition to selecting the specific ANN and layer that is used, the modeler makes other choices in preprocessing the stimulus image and generating brain predictions from ANN features. The effect of these choices on brain predictivity is currently underexplored. Here, we directly evaluated many of these choices by performing a grid search over network architectures, layers, image preprocessing strategies, feature pooling mechanisms, and the use of dimensionality reduction. Our goal was to identify model configurations that produce responses to visual stimuli that are most similar to the human neural representations, as measured by human fMRI and MEG responses. In total, we evaluated more than 140,338 model configurations. We found that specific configurations of CORnet-S best predicted fMRI responses in early visual cortex, and CORnet-R and SqueezeNet models best predicted fMRI responses in inferior temporal cortex. We found specific configurations of VGG-16 and CORnet-S models that best predicted the MEG responses. We also observed that downsizing input images to ~50-75% of the input tensor size led to better-performing models compared to no downsizing (the default choice in most brain models for vision). Taken together, we present evidence that brain predictivity is sensitive not only to which ANN architecture and layer are used, but also to choices in image preprocessing and feature postprocessing, and these choices should be further explored.
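The grid search itself is straightforward to sketch: enumerate combinations of architecture, layer, input scaling, pooling, and dimensionality reduction, score each configuration against brain data, and keep the best. Option names below are illustrative, and the scoring function is a stub standing in for a regression-plus-cross-validation predictivity measure:

```python
# Sketch of the configuration grid search over model choices. The scoring
# function is a stub; option names are illustrative, not the full grid.
from itertools import product

grid = {
    "architecture": ["CORnet-S", "CORnet-R", "VGG-16", "SqueezeNet"],
    "layer": ["early", "middle", "late"],
    "input_scale": [0.5, 0.75, 1.0],     # fraction of input tensor size
    "pooling": ["none", "avg", "max"],
    "pca": [None, 100, 1000],            # dimensionality reduction
}

def brain_predictivity(config):
    """Stub: fit features -> brain responses, return a held-out score."""
    return 0.0  # replace with a real cross-validated regression score

best = max(
    (dict(zip(grid, values)) for values in product(*grid.values())),
    key=brain_predictivity,
)
print(best)
```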


2020 ◽  
Vol 123 (2) ◽  
pp. 773-785 ◽  
Author(s):  
Sara Aghajari ◽  
Louis N. Vinke ◽  
Sam Ling

Neurons within early visual cortex are selective for basic image statistics, including spatial frequency. However, these neurons are thought to act as band-pass filters, with the window of spatial frequency sensitivity varying across the visual field and across visual areas. Although a handful of previous functional (f)MRI studies have examined human spatial frequency sensitivity using conventional designs and analysis methods, these measurements are time consuming and fail to capture the precision of spatial frequency tuning (bandwidth). In this study, we introduce a model-driven approach to fMRI analyses that allows for fast and efficient estimation of population spatial frequency tuning (pSFT) for individual voxels. Blood oxygen level-dependent (BOLD) responses within early visual cortex were acquired while subjects viewed a series of full-field stimuli that swept through a large range of spatial frequency content. Each stimulus was generated by band-pass filtering white noise with a central frequency that changed periodically between a minimum of 0.5 cycles/degree (cpd) and a maximum of 12 cpd. To estimate the underlying frequency tuning of each voxel, we assumed a log-Gaussian pSFT and optimized the parameters of this function by comparing our model output against the measured BOLD time series. Consistent with previous studies, our results show that an increase in eccentricity within each visual area is accompanied by a drop in the peak spatial frequency of the pSFT. Moreover, we found that pSFT bandwidth depends on eccentricity and is correlated with the pSFT peak; populations with lower peaks possess broader bandwidths in logarithmic scale, whereas in linear scale this relationship is reversed.
NEW & NOTEWORTHY Spatial frequency selectivity is a hallmark property of early visuocortical neurons, and mapping these sensitivities gives us crucial insight into the hierarchical organization of information within visual areas. Due to technical obstacles, we lack a comprehensive picture of the properties of this sensitivity in humans. Here, we introduce a new method, coined population spatial frequency tuning mapping, which circumvents the limitations of the conventional neuroimaging methods, yielding a fuller visuocortical map of spatial frequency sensitivity.
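The assumed tuning model is easy to state concretely: a Gaussian over log spatial frequency, whose amplitude, peak, and bandwidth are fit per voxel. A sketch that fits the curve to a simulated response (the real analysis compares a predicted BOLD time series against the measured one, which this simplifies):

```python
# Sketch of the pSFT model fit: a log-Gaussian spatial-frequency tuning
# curve per voxel, with peak and bandwidth recovered by least squares.
# The "response" here is simulated from a known curve plus noise.
import numpy as np
from scipy.optimize import curve_fit

def log_gaussian(f, amp, peak, sigma):
    """Response to spatial frequency f (cpd), Gaussian in log2 space."""
    return amp * np.exp(-(np.log2(f) - np.log2(peak)) ** 2 / (2 * sigma ** 2))

freqs = np.geomspace(0.5, 12, 40)   # swept 0.5-12 cpd, as in the stimulus
rng = np.random.default_rng(6)
response = log_gaussian(freqs, 1.0, 3.0, 1.2) + 0.05 * rng.standard_normal(freqs.size)

(amp, peak, sigma), _ = curve_fit(
    log_gaussian, freqs, response, p0=[1.0, 2.0, 1.0],
    bounds=([0, 0.5, 0.1], [10, 12, 5]),
)
print(f"peak = {peak:.2f} cpd, log-bandwidth sigma = {sigma:.2f} octaves")
```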


2018 ◽  
Vol 30 (9) ◽  
pp. 1281-1297 ◽  
Author(s):  
Alexa Tompary ◽  
Naseem Al-Aidroos ◽  
Nicholas B. Turk-Browne

Top–down attention prioritizes the processing of goal-relevant information throughout visual cortex based on where that information is found in space and what it looks like. Whereas attentional goals often have both spatial and featural components, most research on the neural basis of attention has examined these components separately. Here we investigated how these attentional components are integrated by examining the attentional modulation of functional connectivity between visual areas with different selectivity. Specifically, we used fMRI to measure temporal correlations between spatially selective regions of early visual cortex and category-selective regions in ventral temporal cortex while participants performed a task that benefitted from both spatial and categorical attention. We found that categorical attention modulated the connectivity of category-selective areas, but only with retinotopic areas that coded for the spatially attended location. Similarly, spatial attention modulated the connectivity of retinotopic areas only with the areas coding for the attended category. This pattern of results suggests that attentional modulation of connectivity is driven both by spatial selection and featural biases. Combined with exploratory analyses of frontoparietal areas that track these changes in connectivity among visual areas, this study begins to shed light on how different components of attention are integrated in support of more complex behavioral goals.
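The connectivity analysis can be sketched as condition-wise correlation: correlate time series from a spatially selective early-visual ROI and a category-selective ventral ROI separately per attention condition, then compare coefficients. ROI names and data below are placeholders:

```python
# Sketch of attentional modulation of connectivity: correlate ROI time
# series separately for each attention condition. Time series are
# synthetic placeholders; ROI labels are illustrative examples.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(7)

def roi_timeseries(n_trs=200):
    """Placeholder for an extracted, preprocessed ROI time series."""
    return rng.standard_normal(n_trs)

for condition in ("attend-category-at-location", "attend-elsewhere"):
    early_visual = roi_timeseries()   # e.g. retinotopic subregion coding one location
    category_roi = roi_timeseries()   # e.g. a category-selective ventral temporal ROI
    r, _ = pearsonr(early_visual, category_roi)
    print(f"{condition}: r = {r:.2f}")
```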

