Scaling of sensory information in large neural populations shows signatures of information-limiting correlations

2021 ◽  
Vol 12 (1) ◽  
Author(s):  
MohammadMehdi Kafashan ◽  
Anna W. Jaffe ◽  
Selmaan N. Chettih ◽  
Ramon Nogueira ◽  
Iñigo Arandia-Romero ◽  
...  

How is information distributed across large neuronal populations within a given brain area? One possibility is that information is distributed roughly evenly across neurons, so that total information scales linearly with the number of recorded neurons. Alternatively, the neural code might be highly redundant, meaning that total information saturates. Here we investigated how information about the direction of a moving visual stimulus is distributed across hundreds of simultaneously recorded neurons in mouse primary visual cortex (V1). We found that information scales sublinearly due to the presence of correlated noise in these populations. Using recent theoretical advances, we compartmentalized noise correlations into information-limiting and nonlimiting components, and then extrapolated to predict how information grows in even larger neural populations. We predict that tens of thousands of neurons are required to encode 95% of the information about visual stimulus direction, a number much smaller than the number of neurons in V1. Overall, these findings suggest that the brain uses a widely distributed, but nonetheless redundant, code that supports recovering most information from smaller subpopulations.
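The saturation described above can be illustrated with a minimal sketch. In the standard linear Fisher-information framework, information-limiting ("differential") correlations add a rank-one term proportional to the tuning-curve derivative to the noise covariance, which caps the decodable information at 1/ε no matter how many neurons are added. The code below is an assumed toy model (random tuning derivatives, unit private variance, an assumed ε), not the authors' analysis pipeline:

```python
import numpy as np

# Toy model of information-limiting correlations (assumed, for illustration).
# Noise covariance: Sigma = I + eps * f' f'^T, where f' holds tuning-curve
# derivatives. Linear Fisher information I(N) = f'^T Sigma^{-1} f' then
# saturates at 1/eps, while with independent noise (eps = 0) it grows ~linearly.

def fisher_information(n_neurons, eps, seed=0):
    rng = np.random.default_rng(seed)
    fprime = rng.normal(0.0, 1.0, size=n_neurons)      # tuning derivatives
    sigma = np.eye(n_neurons) + eps * np.outer(fprime, fprime)
    return fprime @ np.linalg.solve(sigma, fprime)     # f'^T Sigma^{-1} f'

for n in (10, 100, 1000):
    print(n,
          round(fisher_information(n, eps=0.0), 1),    # independent noise
          round(fisher_information(n, eps=0.01), 1))   # information-limiting
```

With ε = 0.01, information never exceeds 100 regardless of population size, which is the qualitative behavior behind the sublinear scaling reported in the abstract.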


eLife ◽  
2020 ◽  
Vol 9 ◽  
Author(s):  
Loreen Hertäg ◽  
Henning Sprekeler

Sensory systems constantly compare external sensory information with internally generated predictions. While neural hallmarks of prediction errors have been found throughout the brain, the circuit-level mechanisms that underlie their computation are still largely unknown. Here, we show that a well-orchestrated interplay of three interneuron types shapes the development and refinement of negative prediction-error neurons in a computational model of mouse primary visual cortex. By balancing excitation and inhibition in multiple pathways, experience-dependent inhibitory plasticity can generate different variants of prediction-error circuits, which can be distinguished by simulated optogenetic experiments. The experience-dependence of the model circuit is consistent with that of negative prediction-error circuits in layer 2/3 of mouse primary visual cortex. Our model makes a range of testable predictions that may shed light on the circuitry underlying the neural computation of prediction errors.
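The core computation attributed to a negative prediction-error (nPE) neuron can be sketched in a few lines. This is a hypothetical rate-model caricature, not the authors' three-interneuron circuit: excitation carries the internal prediction, matched inhibition carries the sensory input, and rectification leaves the cell silent unless the input falls short of the prediction. The weights `w_exc` and `w_inh` stand in for the excitation-inhibition balance that, in the model above, is learned by inhibitory plasticity:

```python
# Hypothetical sketch of a negative prediction-error (nPE) unit:
# rectified difference between predicted and actual sensory drive.
def npe_rate(prediction, sensory, w_exc=1.0, w_inh=1.0):
    return max(0.0, w_exc * prediction - w_inh * sensory)

print(npe_rate(1.0, 1.0))  # input matches prediction: the unit stays silent
print(npe_rate(1.0, 0.2))  # input below prediction: the unit responds
```

When the pathways are balanced (w_exc = w_inh), fully predicted input evokes no response, which is the signature such circuits are probed for in optogenetic experiments.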


2015 ◽  
Vol 113 (9) ◽  
pp. 3159-3171 ◽  
Author(s):  
Caroline D. B. Luft ◽  
Alan Meeson ◽  
Andrew E. Welchman ◽  
Zoe Kourtzi

Learning the structure of the environment is critical for interpreting the current scene and predicting upcoming events. However, the brain mechanisms that support our ability to translate knowledge about scene statistics to sensory predictions remain largely unknown. Here we provide evidence that learning of temporal regularities shapes representations in early visual cortex that relate to our ability to predict sensory events. We tested the participants' ability to predict the orientation of a test stimulus after exposure to sequences of leftward- or rightward-oriented gratings. Using fMRI decoding, we identified brain patterns related to the observers' visual predictions rather than stimulus-driven activity. Decoding of predicted orientations following structured sequences was enhanced after training, while decoding of cued orientations following exposure to random sequences did not change. These predictive representations appear to be driven by the same large-scale neural populations that encode actual stimulus orientation and to be specific to the learned sequence structure. Thus our findings provide evidence that learning temporal structures supports our ability to predict future events by reactivating selective sensory representations as early as in primary visual cortex.


2017 ◽  
Vol 372 (1715) ◽  
pp. 20160504 ◽  
Author(s):  
Megumi Kaneko ◽  
Michael P. Stryker

Mechanisms thought of as homeostatic must exist to maintain neuronal activity in the brain within the dynamic range in which neurons can signal. Several distinct mechanisms have been demonstrated experimentally. Three mechanisms that act to restore levels of activity in the primary visual cortex of mice after occlusion and restoration of vision in one eye, which give rise to the phenomenon of ocular dominance plasticity, are discussed. The existence of different mechanisms raises the issue of how these mechanisms operate together to converge on the same set points of activity. This article is part of the themed issue ‘Integrating Hebbian and homeostatic plasticity’.


2018 ◽  
Author(s):  
Andreea Lazar ◽  
Chris Lewis ◽  
Pascal Fries ◽  
Wolf Singer ◽  
Danko Nikolić

Sensory exposure alters the response properties of individual neurons in primary sensory cortices. However, it remains unclear how these changes affect stimulus encoding by populations of sensory cells. Here, recording from populations of neurons in cat primary visual cortex, we demonstrate that visual exposure enhances stimulus encoding and discrimination. We find that repeated presentation of brief, high-contrast shapes results in a stereotyped, biphasic population response consisting of a short-latency transient, followed by a late and extended period of reverberatory activity. Visual exposure selectively improves the stimulus specificity of the reverberatory activity, by increasing the magnitude and decreasing the trial-to-trial variability of the neuronal response. Critically, this improved stimulus encoding is distributed across the population and depends on precise temporal coordination. Our findings provide evidence for the existence of an exposure-driven optimization process that enhances the encoding power of neuronal populations in early visual cortex, thus potentially benefiting simple readouts at higher stages of visual processing.


2019 ◽  
Vol 121 (6) ◽  
pp. 2202-2214 ◽  
Author(s):  
John P. McClure ◽  
Pierre-Olivier Polack

Multimodal sensory integration facilitates the generation of a unified and coherent perception of the environment. It is now well established that unimodal sensory perceptions, such as vision, are improved in multisensory contexts. Whereas multimodal integration is primarily performed by dedicated multisensory brain regions such as the association cortices or the superior colliculus, recent studies have shown that multisensory interactions also occur in primary sensory cortices. In particular, sounds were shown to modulate the responses of neurons located in layers 2/3 (L2/3) of the mouse primary visual cortex (V1). Yet, the net effect of sound modulation at the V1 population level remained unclear. In the present study, we performed two-photon calcium imaging in awake mice to compare the representation of the orientation and the direction of drifting gratings by V1 L2/3 neurons in unimodal (visual only) or multimodal (audiovisual) conditions. We found that sound modulation depended on the tuning properties (orientation and direction selectivity) and response amplitudes of V1 L2/3 neurons. Sounds potentiated the responses of neurons that were highly tuned to the cue’s orientation and direction but weakly active in the unimodal context, following the principle of inverse effectiveness of multimodal integration. Moreover, sound suppressed the responses of neurons untuned for the orientation and/or the direction of the visual cue. Altogether, sound modulation improved the representation of the orientation and direction of the visual stimulus in V1 L2/3. Namely, visual stimuli presented with auditory stimuli recruited a neuronal population better tuned to the visual stimulus orientation and direction than when presented alone.
NEW & NOTEWORTHY The primary visual cortex (V1) receives direct inputs from the primary auditory cortex. Yet, the impact of sounds on visual processing in V1 remains controversial. We show that the modulation by pure tones of V1 visual responses depends on the orientation selectivity, direction selectivity, and response amplitudes of V1 neurons. Hence, audiovisual stimuli recruit a population of V1 neurons better tuned to the orientation and direction of the visual stimulus than unimodal visual stimuli.


2005 ◽  
Vol 94 (2) ◽  
pp. 1336-1345 ◽  
Author(s):  
Bartlett D. Moore ◽  
Henry J. Alitto ◽  
W. Martin Usrey

The activity of neurons in primary visual cortex is influenced by the orientation, contrast, and temporal frequency of a visual stimulus. This raises the question of how these stimulus properties interact to shape neuronal responses. While past studies have shown that the bandwidth of orientation tuning is invariant to stimulus contrast, the influence of temporal frequency on orientation-tuning bandwidth is unknown. Here, we investigate the influence of temporal frequency on orientation tuning and direction selectivity in area 17 of ferret visual cortex. For both simple cells and complex cells, measures of orientation-tuning bandwidth (half-width at half-maximum response) are ∼20–25° across a wide range of temporal frequencies. Thus cortical neurons display temporal-frequency invariant orientation tuning. In contrast, direction selectivity is typically reduced, and occasionally reverses, at nonpreferred temporal frequencies. These results show that the mechanisms contributing to the generation of orientation tuning and direction selectivity are differentially affected by the temporal frequency of a visual stimulus and support the notion that stability of orientation tuning is an important aspect of visual processing.
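The bandwidth measure used above, half-width at half-maximum (HWHM), is easy to make concrete. The sketch below samples an assumed von Mises-style orientation-tuning curve (the concentration `kappa` is an illustrative choice, not a fitted value from the study) and reads off the orientation at which the response falls to half its peak:

```python
import numpy as np

# Illustrative half-width at half-maximum (HWHM) of an orientation-tuning
# curve. The von Mises form and kappa are assumptions for this sketch.

def hwhm_deg(kappa):
    theta = np.linspace(-90.0, 90.0, 18001)                       # orientation (deg)
    resp = np.exp(kappa * (np.cos(np.deg2rad(2.0 * theta)) - 1.0))  # peak = 1 at 0 deg
    above_half = theta[resp >= 0.5]                               # region above half-max
    return above_half.max()                                       # half-width (deg)

print(hwhm_deg(2.0))  # a kappa of ~2 gives a bandwidth in the 20-25 degree range
```

Sharper tuning (larger `kappa`) yields a smaller HWHM; the point of the study is that this number stays roughly constant across temporal frequencies, while direction selectivity does not.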


2016 ◽  
Vol 23 (5) ◽  
pp. 529-541 ◽  
Author(s):  
Sara Ajina ◽  
Holly Bridge

Damage to the primary visual cortex removes the major input from the eyes to the brain, causing significant visual loss as patients are unable to perceive the side of the world contralateral to the damage. Some patients, however, retain the ability to detect visual information within this blind region; this is known as blindsight. By studying the visual pathways that underlie this residual vision in patients, we can uncover additional aspects of the human visual system that likely contribute to normal visual function but cannot be revealed under physiological conditions. In this review, we discuss the residual abilities and neural activity that have been described in blindsight and the implications of these findings for understanding the intact system.


2015 ◽  
Vol 28 (3-4) ◽  
pp. 331-349 ◽  
Author(s):  
Frederico A. C. Azevedo ◽  
Michael Ortiz-Rios ◽  
...  

A biologically relevant event is normally the source of multiple, typically correlated, sensory inputs. To optimize perception of the outer world, our brain combines the independent sensory measurements into a coherent estimate. However, if sensory information is not readily available for every pertinent sense, the brain tries to acquire additional information via covert/overt orienting behaviors or uses internal knowledge to modulate sensory sensitivity based on prior expectations. Cross-modal functional modulation of low-level auditory areas due to visual input has often been described; however, less is known about auditory modulation of primary visual cortex. Here, based on recent evidence, we propose that an unexpected auditory signal could trigger a reflexive overt orienting response towards its source and concomitantly increase the sensitivity of primary visual cortex at the locations where the object is expected to enter the visual field. To this end, we propose that three major functionally specific pathways are employed in parallel. A stream orchestrated by the superior colliculus is responsible for the overt orienting behavior, while direct and indirect (via higher-level areas) projections from A1 to V1 respectively enhance spatiotemporal sensitivity and facilitate object detectability.

