Unique spatial integration in mouse primary visual cortex and higher visual areas

2019 ◽  
Author(s):  
Kevin A. Murgas ◽  
Ashley M. Wilson ◽  
Valerie Michael ◽  
Lindsey L. Glickfeld

Abstract: Neurons in the visual system integrate over a wide range of spatial scales. This diversity is thought to enable both local and global computations. To understand how spatial information is encoded across the mouse visual system, we use two-photon imaging to measure receptive fields in primary visual cortex (V1) and three downstream higher visual areas (HVAs): LM (lateromedial), AL (anterolateral) and PM (posteromedial). We find significantly larger receptive field sizes and less surround suppression in PM than in V1 or the other HVAs. Unlike other visual features studied in this system, specialization of spatial integration in PM cannot be explained by specific projections from V1 to the HVAs. Instead, our data suggest that distinct connectivity within PM may support the area's unique ability to encode global features of the visual scene, whereas V1, LM and AL may be more specialized for processing local features.
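The surround-suppression comparison above can be illustrated with a simple suppression index computed from a size-tuning curve. The sketch below is a minimal illustration with a hypothetical `suppression_index` helper and made-up response values; it is not necessarily the authors' exact analysis.

```python
import numpy as np

def suppression_index(responses):
    """Surround suppression index from a size-tuning curve.

    responses: mean responses (e.g. dF/F) to gratings of increasing size.
    Returns (R_peak - R_largest) / R_peak, so 0 means no suppression and
    values near 1 mean strong surround suppression.
    """
    r = np.asarray(responses, dtype=float)
    r_peak = r.max()
    return (r_peak - r[-1]) / r_peak if r_peak > 0 else 0.0

# Hypothetical size-tuning curves (responses ordered by stimulus size):
v1_like = [0.1, 0.6, 1.0, 0.7, 0.5]   # suppressed by large stimuli
pm_like = [0.1, 0.4, 0.8, 0.9, 1.0]   # keeps summing over larger stimuli

print(suppression_index(v1_like))      # 0.5  -> substantial suppression
print(suppression_index(pm_like))      # 0.0  -> no suppression
```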

2020 ◽  
Vol 124 (1) ◽  
pp. 245-258 ◽  
Author(s):  
Miaomiao Jin ◽  
Lindsey L. Glickfeld

Rapid adaptation dynamically alters sensory signals to account for recent experience. To understand how adaptation affects sensory processing and perception, we must determine how it impacts the diverse set of cortical and subcortical areas along the hierarchy of the mouse visual system. We find that rapid adaptation strongly impacts neurons in primary visual cortex, the higher visual areas, and the colliculus, consistent with its profound effects on behavior.


2020 ◽  
Vol 40 (9) ◽  
pp. 1862-1873 ◽  
Author(s):  
Kevin A. Murgas ◽  
Ashley M. Wilson ◽  
Valerie Michael ◽  
Lindsey L. Glickfeld

1998 ◽  
Vol 78 (2) ◽  
pp. 467-485 ◽  
Author(s):  
CHARLES D. GILBERT

Gilbert, Charles D. Adult Cortical Dynamics. Physiol. Rev. 78: 467–485, 1998. — There are many influences on our perception of local features. What we see is not strictly a reflection of the physical characteristics of a scene but instead is highly dependent on the processes by which our brain attempts to interpret the scene. As a result, our percepts are shaped by the context within which local features are presented, by our previous visual experiences, operating over a wide range of time scales, and by our expectation of what is before us. The substrate for these influences is likely to be found in the lateral interactions operating within individual areas of the cerebral cortex and in the feedback from higher to lower order cortical areas. Even at early stages in the visual pathway, cells are far more flexible in their functional properties than previously thought. It had long been assumed that cells in primary visual cortex had fixed properties, passing along the product of a stereotyped operation to the next stage in the visual pathway. Any plasticity dependent on visual experience was thought to be restricted to a period early in the life of the animal, the critical period. Furthermore, the assembly of contours and surfaces into unified percepts was assumed to take place at high levels in the visual pathway, whereas the receptive fields of cells in primary visual cortex represented very small windows on the visual scene. These concepts of spatial integration and plasticity have been radically modified in the past few years. The emerging view is that even at the earliest stages in the cortical processing of visual information, cells are highly mutable in their functional properties and are capable of integrating information over a much larger part of visual space than originally believed.


2019 ◽  
Author(s):  
Guido Maiello ◽  
Manuela Chessa ◽  
Peter J. Bex ◽  
Fabio Solari

Abstract: The human visual system is foveated: we can see fine spatial details in central vision, whereas resolution is poor in our peripheral visual field, and this loss of resolution follows an approximately logarithmic decrease. Additionally, our brain organizes visual input in polar coordinates. Therefore, the image projection occurring between retina and primary visual cortex can be mathematically described by the log-polar transform. Here, we test and model how this space-variant visual processing affects how we process binocular disparity, a key component of human depth perception. We observe that the fovea preferentially processes disparities at fine spatial scales, whereas the visual periphery is tuned for coarse spatial scales, in line with the naturally occurring distributions of depths and disparities in the real world. We further show that the visual system integrates disparity information across the visual field in a near-optimal fashion. We develop a foveated, log-polar model that mimics the processing of depth information in primary visual cortex and that can process disparity directly in the cortical domain representation. This model takes real images as input and recreates the observed topography of disparity sensitivity in humans. Our findings support the notion that our foveated, binocular visual system has been moulded by the statistics of our visual environment.

Author summary: We investigate how humans perceive depth from binocular disparity at different spatial scales and across different regions of the visual field. We show that small changes in disparity-defined depth are detected best in central vision, whereas peripheral vision best captures the coarser structure of the environment. We also demonstrate that depth information extracted from different regions of the visual field is combined into a unified depth percept. We then construct an image-computable model of disparity processing that takes into account how our brain organizes the visual input at our retinae. The model operates directly in cortical image space, and neatly accounts for human depth perception across the visual field.
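The retino-cortical mapping the authors exploit is the log-polar transform, which maps a retinal point (x, y) to cortical coordinates (log r, theta) with r the eccentricity and theta the polar angle. A minimal numpy sketch of such a transform follows; the output resolution and sampling choices are illustrative assumptions, not parameters from the paper.

```python
import numpy as np

def log_polar_transform(image, out_shape=(128, 256)):
    """Resample a square grayscale image into log-polar (cortical-like) coordinates.

    Rows index log-eccentricity (fovea at the top, periphery at the bottom),
    columns index polar angle. Sampling density is therefore highest near the
    centre of the input image, mimicking foveated vision.
    """
    h, w = image.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    n_rho, n_theta = out_shape
    r_max = min(cy, cx)

    # log-spaced eccentricities and uniformly spaced polar angles
    rho = np.exp(np.linspace(0.0, np.log(r_max), n_rho))
    theta = np.linspace(-np.pi, np.pi, n_theta, endpoint=False)

    # back-project each (rho, theta) sample into Cartesian pixel coordinates
    yy = np.clip(np.round(cy + rho[:, None] * np.sin(theta)), 0, h - 1).astype(int)
    xx = np.clip(np.round(cx + rho[:, None] * np.cos(theta)), 0, w - 1).astype(int)
    return image[yy, xx]

# Example on a synthetic 256x256 image
img = np.random.rand(256, 256)
cortical = log_polar_transform(img)
print(cortical.shape)  # (128, 256)
```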


2019 ◽  
Author(s):  
E. Mika Diamanti ◽  
Charu Bai Reddy ◽  
Sylvia Schröder ◽  
Tomaso Muzzu ◽  
Kenneth D. Harris ◽  
...  

During navigation, the visual responses of neurons in primary visual cortex (V1) are modulated by the animal’s spatial position. Here we show that this spatial modulation is similarly present across multiple higher visual areas but largely absent in the main thalamic pathway into V1. Similar to hippocampus, spatial modulation in visual cortex strengthens with experience and requires engagement in active behavior. Active navigation in a familiar environment, therefore, determines spatial modulation of visual signals starting in the cortex.


2021 ◽  
Author(s):  
Yulia Revina ◽  
Lucy S Petro ◽  
Cristina B Denk-Florea ◽  
Isa S Rao ◽  
Lars Muckli

The majority of synaptic inputs to the primary visual cortex (V1) are non-feedforward, originating instead from local circuits and anatomical feedback connections. Animal electrophysiology experiments show that feedback signals originating from higher visual areas with larger receptive fields modulate the surround receptive fields of V1 neurons. Theories of cortical processing propose various roles for feedback and feedforward processing, but systematically investigating their independent contributions to cortical processing is challenging because feedback and feedforward processes coexist even in single neurons. Capitalising on the larger receptive fields of higher visual areas compared to V1, we used an occlusion paradigm that isolates top-down influences from feedforward processing. We used functional magnetic resonance imaging (fMRI) and multi-voxel pattern analysis methods in humans viewing natural scene images. We parametrically measured how the availability of contextual information determines the presence of detectable feedback information in non-stimulated V1, and how feedback information interacts with feedforward processing. We show that increasing the visibility of the contextual surround increases scene-specific feedback information, and that this contextual feedback enhances feedforward information. Our findings are in line with theories that cortical feedback signals transmit internal models of predicted inputs.
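The decoding logic described above can be sketched as a cross-validated classifier applied to voxel patterns from the occluded (non-stimulated) portion of V1: above-chance decoding of the surrounding scene from that region is taken as evidence of feedback information. The scikit-learn example below uses simulated data and hypothetical variable names; it illustrates the general MVPA approach, not the authors' pipeline.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)

# Simulated data: voxel patterns from the occluded V1 subregion, labelled by
# which of two natural scenes surrounded the occluder on each trial.
n_trials, n_voxels = 80, 200
labels = np.repeat([0, 1], n_trials // 2)
patterns = rng.normal(size=(n_trials, n_voxels))
patterns[labels == 1, :20] += 0.5   # weak, scene-specific "feedback" signal

# Cross-validated decoding accuracy in the non-stimulated region.
clf = make_pipeline(StandardScaler(), LinearSVC())
scores = cross_val_score(clf, patterns, labels, cv=5)
print("decoding accuracy: %.2f +/- %.2f" % (scores.mean(), scores.std()))
```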


eLife ◽  
2021 ◽  
Vol 10 ◽  
Author(s):  
E Mika Diamanti ◽  
Charu Bai Reddy ◽  
Sylvia Schröder ◽  
Tomaso Muzzu ◽  
Kenneth D Harris ◽  
...  

During navigation, the visual responses of neurons in mouse primary visual cortex (V1) are modulated by the animal’s spatial position. Here we show that this spatial modulation is similarly present across multiple higher visual areas but negligible in the main thalamic pathway into V1. Similar to hippocampus, spatial modulation in visual cortex strengthens with experience and with active behavior. Active navigation in a familiar environment, therefore, enhances the spatial modulation of visual signals starting in the cortex.
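One simple way to express the degree of spatial modulation reported in these two studies is a depth-of-modulation index over position bins. The helper below is purely illustrative, with hypothetical response values; it is not the measure used in the paper.

```python
import numpy as np

def spatial_modulation_index(responses_by_position):
    """Depth-of-modulation index for responses to the same stimulus at different positions.

    responses_by_position: mean response in each position bin along the environment.
    Returns a value in [0, 1]: 0 means the response is identical at every position
    (no spatial modulation); values near 1 mean the response is confined to few positions.
    """
    r = np.asarray(responses_by_position, dtype=float)
    return (r.max() - r.min()) / (r.max() + r.min() + 1e-12)

cortical_neuron = [0.2, 0.3, 0.9, 0.3, 0.2]   # strong position preference
thalamic_neuron = [0.50, 0.55, 0.50, 0.45, 0.50]  # nearly position-invariant

print(spatial_modulation_index(cortical_neuron))  # ~0.64
print(spatial_modulation_index(thalamic_neuron))  # ~0.10
```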


2020 ◽  
Author(s):  
Rune N. Rasmussen ◽  
Akihiro Matsumoto ◽  
Simon Arvin ◽  
Keisuke Yonehara

Abstract: Locomotion creates various patterns of optic flow on the retina, which provide the observer with information about their movement relative to the environment. However, it is unclear how these optic flow patterns are encoded by the cortex. Here we use two-photon calcium imaging in awake mice to systematically map monocular and binocular responses to horizontal motion in four areas of the visual cortex. We find that neurons selective to translational or rotational optic flow are abundant in higher visual areas, whereas neurons suppressed by binocular motion are more common in the primary visual cortex. Disruption of retinal direction selectivity in Frmd7 mutant mice reduces the number of translation-selective neurons in the primary visual cortex, and translation- and rotation-selective neurons as well as binocular direction-selective neurons in the rostrolateral and anterior visual cortex, blurring the functional distinction between primary and higher visual areas. Thus, optic flow representations in specific areas of the visual cortex rely on binocular integration of motion information from the retina.
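A simplified way to read the translation/rotation distinction: during a yaw rotation both eyes see horizontal motion in the same external direction, whereas during forward translation the two laterally facing eyes see motion in opposite external directions. The sketch below classifies a neuron from its responses to these two binocular combinations; the index, threshold, and response values are hypothetical assumptions, not the authors' criteria.

```python
def flow_preference(resp_same, resp_opposite, threshold=0.3):
    """Classify a neuron's optic-flow preference from two binocular responses.

    resp_same:     response to horizontal motion in the SAME external direction
                   in both eyes (approximates rotational optic flow, e.g. yaw).
    resp_opposite: response to motion in OPPOSITE directions in the two eyes
                   (approximates translational optic flow during locomotion).
    A simple selectivity index in [-1, 1] decides the label.
    """
    idx = (resp_opposite - resp_same) / (resp_opposite + resp_same + 1e-9)
    if idx > threshold:
        return "translation-selective", idx
    if idx < -threshold:
        return "rotation-selective", idx
    return "unselective", idx

print(flow_preference(resp_same=0.2, resp_opposite=0.9))   # translation-selective
print(flow_preference(resp_same=0.8, resp_opposite=0.2))   # rotation-selective
```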


2008 ◽  
Vol 20 (7) ◽  
pp. 1847-1872 ◽  
Author(s):  
Mark C. W. van Rossum ◽  
Matthijs A. A. van der Meer ◽  
Dengke Xiao ◽  
Mike W. Oram

Neurons in the visual cortex receive a large amount of input from recurrent connections, yet the functional role of these connections remains unclear. Here we explore networks with strong recurrence in a computational model and show that short-term depression of the synapses in the recurrent loops implements an adaptive filter. This allows the visual system to respond reliably to deteriorated stimuli yet quickly to high-quality stimuli. For low-contrast stimuli, the model predicts long response latencies, whereas latencies are short for high-contrast stimuli. This is consistent with physiological data showing that in higher visual areas, latencies can increase by more than 100 ms at low contrast compared to high contrast. Moreover, when presented with briefly flashed stimuli, the model predicts stereotypical responses that outlast the stimulus, again consistent with physiological findings. The adaptive properties of the model suggest that the abundant recurrent connections found in visual cortex serve to adapt the network's time constant in accordance with the stimulus and to normalize neuronal signals such that processing is as fast as possible while maintaining reliability.
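A minimal sketch of the mechanism, assuming a Tsodyks-Markram-style depressing recurrent synapse onto a single rate unit (all parameters are illustrative, not taken from the paper): weak input leaves the loop undepressed so the response builds up slowly, while strong input depresses the synapse and the response settles quickly, qualitatively reproducing the contrast-dependent latency.

```python
import numpy as np

def recurrent_depressing_unit(contrast, T=1.0, dt=1e-3,
                              tau=0.01, tau_rec=0.3, U=0.5, w_rec=1.9):
    """Single rate unit driven through a strong, depressing recurrent synapse.

    r is the firing rate and x the fraction of available synaptic resources.
    Weak input leaves the loop undepressed, so the response builds up slowly;
    strong input drives the rate up quickly and depresses the synapse, so the
    response reaches its steady level much sooner.
    """
    n = int(T / dt)
    r, x = 0.0, 1.0
    rates = np.zeros(n)
    for t in range(n):
        drive = contrast + w_rec * U * x * r          # feedforward + recurrent input
        r += dt / tau * (-r + max(drive, 0.0))        # rate dynamics
        x += dt * ((1.0 - x) / tau_rec - U * x * r)   # resource recovery / depletion
        rates[t] = r
    return rates

dt = 1e-3
for contrast in (0.05, 1.0):                          # low vs high "contrast"
    rates = recurrent_depressing_unit(contrast, dt=dt)
    latency_ms = 1e3 * dt * np.argmax(rates > 0.5 * rates[-1])
    print(f"contrast {contrast}: ~{latency_ms:.0f} ms to reach half of the steady response")
```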

