Resolving the spatial profile of figure enhancement in human V1 through population receptive field modeling

2019 · Author(s): Sonia Poltoratski, Frank Tong

Abstract: The detection and segmentation of meaningful figures from their background is a core function of vision. While work in non-human primates has implicated early visual mechanisms in this figure-ground modulation, neuroimaging in humans has instead largely ascribed the processing of figures and objects to higher stages of the visual hierarchy. Here, we used high-field fMRI at 7 Tesla to measure BOLD responses to task-irrelevant orientation-defined figures in human early visual cortex, and employed a novel population receptive field (pRF) mapping-based approach to resolve the spatial profiles of two constituent mechanisms of figure-ground modulation: a local boundary response, and a further enhancement spanning the full extent of the figure region that is driven by global differences in features. Reconstructing the distinct spatial profiles of these effects reveals that figure enhancement modulates responses in human early visual cortex in a manner consistent with a mechanism of automatic, contextually driven feedback from higher visual areas.

Significance Statement: A core function of the visual system is to parse complex 2D input into meaningful figures. We do so constantly and seamlessly, both by processing information about visible edges and by analyzing large-scale differences between figures and background. While influential neurophysiology work has characterized an intriguing mechanism that enhances V1 responses to perceptual figures, we have a poor understanding of how the early visual system contributes to figure-ground processing in humans. Here, we use advanced computational analysis methods and high-field human fMRI data to resolve the distinct spatial profiles of local edge and global figure enhancement in the early visual system (V1 and LGN); the latter is distinct and consistent with a mechanism of automatic, stimulus-driven feedback from higher-level visual areas.
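As a rough illustration of the pRF modelling approach referred to above (not the authors' actual pipeline), the sketch below predicts a voxel's BOLD time course from an isotropic 2D Gaussian pRF applied to binary stimulus apertures; the grid size, stimulus masks, and HRF are placeholders introduced only for this example.

```python
import numpy as np

def gaussian_prf(x0, y0, sigma, grid):
    """Isotropic 2D Gaussian pRF evaluated on an (ny, nx, 2) grid of visual-field coordinates (deg)."""
    gx, gy = grid[..., 0], grid[..., 1]
    return np.exp(-((gx - x0) ** 2 + (gy - y0) ** 2) / (2.0 * sigma ** 2))

def predict_bold(prf, apertures, hrf):
    """Predicted time course: overlap of the pRF with each binary stimulus frame, convolved with an HRF."""
    overlap = np.tensordot(apertures, prf, axes=([1, 2], [0, 1]))  # one value per time point
    return np.convolve(overlap, hrf)[: len(overlap)]

# Toy example on a 20 x 20 deg field; stimulus masks and HRF are crude placeholders.
rng = np.random.default_rng(0)
xs = np.linspace(-10, 10, 41)
grid = np.stack(np.meshgrid(xs, xs), axis=-1)                 # (41, 41, 2) coordinate grid
apertures = (rng.random((120, 41, 41)) > 0.9).astype(float)   # hypothetical binary apertures per TR
hrf = np.arange(20) * np.exp(-np.arange(20) / 2.0)            # gamma-like impulse response (assumption)
pred = predict_bold(gaussian_prf(2.0, -1.5, 1.2, grid), apertures, hrf)
# Fitting would then adjust (x0, y0, sigma) to minimize the residual between pred and the measured BOLD.
```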

2015 · Vol 32 · Author(s): M. J. Arcaro, S. Kastner

Abstract: Areas V3 and V4 are commonly thought of as individual entities in the primate visual system, based on definition criteria such as their representation of visual space, connectivity, functional response properties, and relative anatomical location in cortex. Yet, large-scale functional and anatomical organization patterns emphasize not only distinctions within each area, but also links across visual cortex. Specifically, the visuotopic organization of V3 and V4 appears to be part of a larger, supra-areal organization, clustering these areas with early visual areas V1 and V2. In addition, connectivity patterns across visual cortex appear to vary within these areas as a function of their supra-areal eccentricity organization. This complicates the traditional view of these regions as individual functional "areas." Here, we review the criteria for defining areas V3 and V4 and discuss functional and anatomical studies in humans and monkeys that emphasize the integration of individual visual areas into broad, supra-areal clusters that work in concert toward a common computational goal. Specifically, we propose that the visuotopic organization of V3 and V4, which provides the criteria for differentiating these areas, also unifies them within the supra-areal organization of early visual cortex. We propose that V3 and V4 play a critical role in this supra-areal organization by filtering information about the visual environment along parallel pathways across higher-order cortex.


2020 · Author(s): Ke Bo, Siyang Yin, Yuelu Liu, Zhenhong Hu, Sreenivasan Meyyapan, et al.

Abstract: The perception of opportunities and threats in complex scenes represents one of the main functions of the human visual system. In the laboratory, its neurophysiological basis is often studied by having observers view pictures varying in affective content. This body of work has consistently shown that viewing emotionally engaging, compared to neutral, pictures (1) heightens blood flow in limbic structures and frontoparietal cortex, as well as in anterior ventral and dorsal visual cortex, and (2) prompts an increase in the late positive event-related potential (LPP), a scalp-recorded and time-sensitive index of engagement within the network of the aforementioned neural structures. The role of retinotopic visual cortex in this process has, however, been contentious, with competing theoretical notions predicting the presence versus absence of emotion-specific signals in retinotopic visual areas. The present study used multimodal neuroimaging and machine learning to address this question by examining the large-scale neural representations of affective pictures. Recording EEG and fMRI simultaneously while observers viewed pleasant, unpleasant, and neutral affective pictures, and applying multivariate pattern analysis to single-trial BOLD activity in retinotopic visual cortex, we identified three robust findings. First, both unpleasant-versus-neutral and pleasant-versus-neutral decoding accuracies were well above chance level in all retinotopic visual areas, including primary visual cortex. Second, decoding accuracy in ventral visual cortex, but not in early or dorsal visual cortex, was significantly correlated with LPP amplitude. Third, effective connectivity from the amygdala to ventral visual cortex predicted unpleasant-versus-neutral decoding accuracy, and effective connectivity from ventral frontal cortex to ventral visual cortex predicted pleasant-versus-neutral decoding accuracy. These results suggest that affective pictures evoked valence-specific multivoxel neural representations in retinotopic visual cortex and that these representations were influenced by reentrant signals from limbic and frontal brain regions.
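As an illustration of the kind of multivariate pattern analysis summarized above (not the authors' actual pipeline), the sketch below runs a cross-validated linear classifier on hypothetical single-trial voxel patterns from one retinotopic region of interest; the data shapes, labels, and scikit-learn pipeline are assumptions made for the example.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

# Hypothetical single-trial data: n_trials x n_voxels BOLD patterns from one retinotopic ROI,
# with labels 0 = neutral, 1 = unpleasant (pleasant-versus-neutral would be decoded the same way).
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 500))
y = rng.integers(0, 2, size=200)

clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
accuracy = cross_val_score(clf, X, y, cv=10).mean()  # chance is 0.5 for balanced two-class labels
print(f"cross-validated decoding accuracy: {accuracy:.2f}")
```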


2015 · Vol 113 (9) · pp. 3159-3171 · Author(s): Caroline D. B. Luft, Alan Meeson, Andrew E. Welchman, Zoe Kourtzi

Learning the structure of the environment is critical for interpreting the current scene and predicting upcoming events. However, the brain mechanisms that support our ability to translate knowledge about scene statistics to sensory predictions remain largely unknown. Here we provide evidence that learning of temporal regularities shapes representations in early visual cortex that relate to our ability to predict sensory events. We tested the participants' ability to predict the orientation of a test stimulus after exposure to sequences of leftward- or rightward-oriented gratings. Using fMRI decoding, we identified brain patterns related to the observers' visual predictions rather than stimulus-driven activity. Decoding of predicted orientations following structured sequences was enhanced after training, while decoding of cued orientations following exposure to random sequences did not change. These predictive representations appear to be driven by the same large-scale neural populations that encode actual stimulus orientation and to be specific to the learned sequence structure. Thus our findings provide evidence that learning temporal structures supports our ability to predict future events by reactivating selective sensory representations as early as in primary visual cortex.
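One way to isolate prediction-related patterns, loosely following the logic above, is to train an orientation classifier on stimulus-evoked activity and then test it on trials where the orientation is merely expected. The sketch below illustrates that cross-decoding scheme with synthetic data; the array shapes, labels, and classifier choice are assumptions, not the study's implementation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)
# Hypothetical V1 patterns: train on trials where a leftward (0) or rightward (1) grating was shown...
X_stimulus, y_stimulus = rng.standard_normal((160, 400)), rng.integers(0, 2, size=160)
# ...and test on trials where the orientation was only predicted from the preceding sequence.
X_predicted, y_predicted = rng.standard_normal((80, 400)), rng.integers(0, 2, size=80)

clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
clf.fit(X_stimulus, y_stimulus)
print("cross-decoding accuracy on prediction trials:", clf.score(X_predicted, y_predicted))
```

Above-chance cross-decoding under this scheme would indicate that the predicted orientation is carried by the same pattern of responses that encodes the physically presented orientation.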


2019 · Author(s): Kevin A. Murgas, Ashley M. Wilson, Valerie Michael, Lindsey L. Glickfeld

Abstract: Neurons in the visual system integrate over a wide range of spatial scales. This diversity is thought to enable both local and global computations. To understand how spatial information is encoded across the mouse visual system, we use two-photon imaging to measure receptive fields in primary visual cortex (V1) and three downstream higher visual areas (HVAs): LM (lateromedial), AL (anterolateral), and PM (posteromedial). We find significantly larger receptive field sizes and less surround suppression in PM than in V1 or the other HVAs. Unlike other visual features studied in this system, the specialization of spatial integration in PM cannot be explained by specific projections from V1 to the HVAs. Instead, our data suggest that distinct connectivity within PM may support the area's unique ability to encode global features of the visual scene, whereas V1, LM, and AL may be more specialized for processing local features.
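Surround suppression of the kind compared across areas above is commonly summarized by a suppression index computed from a size-tuning curve (the response at the preferred stimulus size relative to the response at the largest size tested). The short sketch below shows that standard computation; the response values are hypothetical.

```python
import numpy as np

def suppression_index(responses_by_size):
    """SI = (R_peak - R_largest) / R_peak; 0 = no suppression, 1 = complete suppression."""
    responses = np.asarray(responses_by_size, dtype=float)
    r_peak, r_largest = responses.max(), responses[-1]
    return (r_peak - r_largest) / r_peak

# Hypothetical mean responses to gratings of increasing diameter for one neuron
print(suppression_index([0.2, 0.8, 1.0, 0.7, 0.5]))  # -> 0.5
```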


2019 · Author(s): Ingo Marquardt, Peter De Weerd, Marian Schneider, Omer Faruk Gulban, Dimo Ivanov, et al.

Abstract: Human visual surface perception has neural correlates in early visual cortex, but the extent to which feedback contributes to this activity is not well known. Feedback projections preferentially enter superficial and deep anatomical layers while avoiding the middle layer, which provides a hypothesis for the cortical depth distribution of fMRI activity related to feedback in early visual cortex. Here, we presented human participants with uniform surfaces on a dark, textured background. The grey surface in the left hemifield was perceived as either static or moving based on a manipulation in the right hemifield. Physically, the surface in the left visual hemifield was identical across conditions, so any difference in percept was likely related to feedback. Using ultra-high field fMRI, we report the first evidence for a depth distribution of activation in line with feedback during the (illusory) perception of surface motion. Our results fit with a signal re-entering V1 at superficial depths, followed by a feedforward sweep of the re-entered information through V2 and V3, as suggested by activity centred in the middle-depth levels of the latter areas. This positive modulation of the BOLD signal due to illusory surface motion occurred on top of a strong negative BOLD response in the cortical representation of the surface stimuli, which depended on the presence of texture in the background. Hence, the magnitude and sign of the BOLD response to the surface strongly depended on background properties, and were additionally modulated by the presence or absence of illusory motion perception in a manner compatible with feedback. In summary, the present study demonstrates the potential of depth-resolved fMRI for tackling mechanistic questions about perception that so far were only within reach of invasive animal experimentation.


eLife · 2015 · Vol 4 · Author(s): Michael J Arcaro, Christopher J Honey, Ryan EB Mruczek, Sabine Kastner, Uri Hasson

The human visual system can be divided into more than two dozen distinct areas, each of which contains a topographic map of the visual field. A fundamental question in vision neuroscience is how the visual system integrates information from the environment across different areas. Using neuroimaging, we investigated the spatial pattern of correlated BOLD signal across eight visual areas in data collected during rest and during naturalistic movie viewing. The correlation pattern between areas reflected the underlying receptive field organization, with higher correlations between cortical sites containing overlapping representations of visual space. In addition, the correlation pattern reflected the widespread eccentricity organization of visual cortex, in which the highest correlations were observed for cortical sites with iso-eccentricity representations, including regions with non-overlapping representations of visual space. This eccentricity-based correlation pattern appears to be part of an intrinsic functional architecture that supports the integration of information across functionally specialized visual areas.
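At its core, the between-area correlation analysis described here amounts to correlating mean BOLD time series across retinotopically defined regions of interest. The snippet below computes such a correlation matrix for a few hypothetical eccentricity bins in two areas; all ROI names, bin counts, and time series are illustrative stand-ins rather than the study's actual data.

```python
import numpy as np

# Hypothetical time series: ROI name -> (n_timepoints,) mean BOLD signal,
# e.g. three eccentricity bins each in V1 and V3 (labels are placeholders).
rng = np.random.default_rng(2)
rois = {f"{area}_ecc{b}": rng.standard_normal(300) for area in ("V1", "V3") for b in (1, 2, 3)}

names = list(rois)
ts = np.stack([rois[n] for n in names])   # (n_rois, n_timepoints)
corr = np.corrcoef(ts)                    # pairwise Pearson correlations between ROIs
for i, n1 in enumerate(names):
    for j, n2 in enumerate(names):
        if j > i:
            print(f"{n1} vs {n2}: r = {corr[i, j]:+.2f}")
```

An eccentricity-based architecture of the kind reported above would show up in this matrix as elevated correlations between iso-eccentricity bins even when they belong to different areas.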


2020 · Vol 123 (2) · pp. 773-785 · Author(s): Sara Aghajari, Louis N. Vinke, Sam Ling

Neurons within early visual cortex are selective for basic image statistics, including spatial frequency. However, these neurons are thought to act as band-pass filters, with the window of spatial frequency sensitivity varying across the visual field and across visual areas. Although a handful of previous functional (f)MRI studies have examined human spatial frequency sensitivity using conventional designs and analysis methods, these measurements are time-consuming and fail to capture the precision of spatial frequency tuning (bandwidth). In this study, we introduce a model-driven approach to fMRI analysis that allows for fast and efficient estimation of population spatial frequency tuning (pSFT) for individual voxels. Blood oxygen level-dependent (BOLD) responses within early visual cortex were acquired while subjects viewed a series of full-field stimuli that swept through a large range of spatial frequency content. Each stimulus was generated by band-pass filtering white noise with a central frequency that changed periodically between a minimum of 0.5 cycles/degree (cpd) and a maximum of 12 cpd. To estimate the underlying frequency tuning of each voxel, we assumed a log-Gaussian pSFT and optimized the parameters of this function by comparing the model output against the measured BOLD time series. Consistent with previous studies, our results show that an increase in eccentricity within each visual area is accompanied by a drop in the peak spatial frequency of the pSFT. Moreover, we found that pSFT bandwidth depends on eccentricity and is correlated with the pSFT peak: populations with lower peaks possess broader bandwidths on a logarithmic scale, whereas on a linear scale this relationship is reversed.

NEW & NOTEWORTHY: Spatial frequency selectivity is a hallmark property of early visuocortical neurons, and mapping these sensitivities gives us crucial insight into the hierarchical organization of information within visual areas. Due to technical obstacles, we lack a comprehensive picture of the properties of this sensitivity in humans. Here, we introduce a new method, termed population spatial frequency tuning mapping, which circumvents the limitations of conventional neuroimaging methods, yielding a fuller visuocortical map of spatial frequency sensitivity.
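A minimal sketch of the pSFT estimation logic described above, assuming a log-Gaussian tuning curve fit per voxel by least squares: the stimulus sweep is simplified (no HRF convolution, no periodic design), all data are synthetic, and the optimizer settings are illustrative rather than the authors' implementation.

```python
import numpy as np
from scipy.optimize import curve_fit

def log_gaussian_tuning(sf, peak, bandwidth, amplitude):
    """Response to spatial frequency sf (cpd) under a log-Gaussian pSFT."""
    return amplitude * np.exp(-(np.log(sf) - np.log(peak)) ** 2 / (2.0 * bandwidth ** 2))

# Simplified stimulus sweep: central spatial frequency varying between 0.5 and 12 cpd over the scan.
sf_sweep = np.geomspace(0.5, 12, 120)

# Hypothetical measured BOLD time series for one voxel (real data would be preprocessed fMRI).
rng = np.random.default_rng(3)
bold = log_gaussian_tuning(sf_sweep, peak=3.0, bandwidth=0.8, amplitude=1.5)
bold += 0.2 * rng.standard_normal(sf_sweep.size)

params, _ = curve_fit(log_gaussian_tuning, sf_sweep, bold, p0=[2.0, 1.0, 1.0])
peak_sf, bw, amp = params
print(f"estimated peak SF = {peak_sf:.2f} cpd, log-bandwidth = {bw:.2f}")
```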


2020 · Vol 124 (1) · pp. 245-258 · Author(s): Miaomiao Jin, Lindsey L. Glickfeld

Rapid adaptation dynamically alters sensory signals to account for recent experience. To understand how adaptation affects sensory processing and perception, we must determine how it impacts the diverse set of cortical and subcortical areas along the hierarchy of the mouse visual system. We find that rapid adaptation strongly impacts neurons in primary visual cortex, the higher visual areas, and the colliculus, consistent with its profound effects on behavior.


2011 · Vol 106 (4) · pp. 1734-1746 · Author(s): Javier O. Garcia, Emily D. Grossman, Ramesh Srinivasan

Single pulses of transcranial magnetic stimulation (TMS) result in distal and long-lasting oscillations, a finding that directly challenges the virtual lesion hypothesis. Previous research supporting this finding has primarily come from stimulation of the motor cortex. We used single-pulse TMS with simultaneous EEG to target seven brain regions, six of which belong to the visual system (left and right primary visual area V1, motion-sensitive human middle temporal cortex, and a ventral temporal region), as determined with functional MRI-guided neuronavigation, plus a vertex "control" site, to measure the network effects of the TMS pulse. We found that the TMS-evoked potential (TMS-EP) over visual cortex consists mostly of site-dependent theta- and alpha-band oscillations. These site-dependent oscillations extended beyond the stimulation site to functionally connected cortical regions and corresponded to time windows where the EEG responses maximally diverged (40, 200, and 385 ms). Correlations revealed two site-independent oscillations ∼350 ms after the TMS pulse: a theta-band oscillation carried by the frontal cortex, and an alpha-band oscillation over parietal and frontal cortical regions. A manipulation of stimulation intensity at one stimulation site (right hemisphere V1-V3) revealed sensitivity to stimulation intensity at different regions of cortex, evidence of intensity tuning in regions distal to the site of stimulation. Together these results suggest that a TMS pulse applied to the visual cortex has a complex effect on brain function, engaging multiple brain networks functionally connected to the visual system with both invariant and site-specific spatiotemporal dynamics. With this characterization of TMS, we propose an alternative to the virtual lesion hypothesis: rather than simulating lesions, TMS generates natural brain signals and engages functional networks.


2018 · Vol 30 (9) · pp. 1281-1297 · Author(s): Alexa Tompary, Naseem Al-Aidroos, Nicholas B. Turk-Browne

Top–down attention prioritizes the processing of goal-relevant information throughout visual cortex based on where that information is found in space and what it looks like. Whereas attentional goals often have both spatial and featural components, most research on the neural basis of attention has examined these components separately. Here we investigated how these attentional components are integrated by examining the attentional modulation of functional connectivity between visual areas with different selectivity. Specifically, we used fMRI to measure temporal correlations between spatially selective regions of early visual cortex and category-selective regions in ventral temporal cortex while participants performed a task that benefitted from both spatial and categorical attention. We found that categorical attention modulated the connectivity of category-selective areas, but only with retinotopic areas that coded for the spatially attended location. Similarly, spatial attention modulated the connectivity of retinotopic areas only with the areas coding for the attended category. This pattern of results suggests that attentional modulation of connectivity is driven by both spatial selection and featural biases. Combined with exploratory analyses of frontoparietal areas that track these changes in connectivity among visual areas, this study begins to shed light on how different components of attention are integrated in support of more complex behavioral goals.
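Attention-dependent connectivity of the kind described above is often quantified by correlating ROI time series (typically residuals after task structure is regressed out) separately for each attention condition and comparing the Fisher-transformed correlations. The sketch below illustrates that comparison with hypothetical data; the condition labels, ROI names, and time series lengths are assumptions for the example.

```python
import numpy as np

def fisher_z(r):
    """Fisher z-transform, so correlations can be compared across conditions."""
    return np.arctanh(r)

rng = np.random.default_rng(4)
# Hypothetical residual time series for one retinotopic ROI and one category-selective ROI,
# split by attention condition (placeholder labels).
conditions = {"attend_location_A": 250, "attend_location_B": 250}
for cond, n_tr in conditions.items():
    retinotopic = rng.standard_normal(n_tr)
    category_selective = rng.standard_normal(n_tr)
    r = np.corrcoef(retinotopic, category_selective)[0, 1]
    print(f"{cond}: r = {r:+.2f}, z = {fisher_z(r):+.2f}")
```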

