Limitations to Estimating Mutual Information in Large Neural Populations

Entropy ◽  
2020 ◽  
Vol 22 (4) ◽  
pp. 490
Author(s):  
Jan Mölter ◽  
Geoffrey J. Goodhill

Information theory provides a powerful framework to analyse the representation of sensory stimuli in neural population activity. However, estimating the quantities involved, such as entropy and mutual information, from finite samples is notoriously hard, and any direct estimate is known to be heavily biased. This is especially true when considering large neural populations. We study a simple model of sensory processing and show through a combinatorial argument that, with high probability, for large neural populations any finite number of samples of neural activity in response to a set of stimuli is mutually distinct. As a consequence, the mutual information, when estimated directly from empirical histograms, will be equal to the stimulus entropy. Importantly, this is the case irrespective of the precise relation between stimulus and neural activity, and it corresponds to a maximal bias. This argument is general and applies to any application of information theory where the state space is large and one relies on empirical histograms. Overall, this work highlights the need for alternative approaches to information-theoretic analysis when dealing with large neural populations.
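A minimal sketch of the failure mode described above, assuming a toy population whose responses carry no stimulus information at all: once every sampled response pattern is unique, the plug-in (histogram) estimate of the mutual information equals the empirical stimulus entropy. All names and parameters here are illustrative.

```python
import numpy as np
from collections import Counter

def plugin_mutual_information(stimuli, responses):
    """Direct (plug-in) MI estimate from empirical histograms, in bits."""
    n = len(stimuli)
    ps = Counter(stimuli)                      # stimulus marginal counts
    pr = Counter(responses)                    # response marginal counts
    pj = Counter(zip(stimuli, responses))      # joint counts
    return sum((c / n) * np.log2((c / n) / ((ps[s] / n) * (pr[r] / n)))
               for (s, r), c in pj.items())

rng = np.random.default_rng(0)
n_neurons, n_samples = 100, 1000
stimuli = rng.integers(0, 4, size=n_samples)   # 4 equiprobable stimuli
# Responses are pure noise: the true mutual information is exactly 0.
spikes = rng.random((n_samples, n_neurons)) < 0.5
responses = [np.packbits(row).tobytes() for row in spikes]

# With 100 neurons there are 2^100 possible patterns, so all 1000 samples
# are distinct with overwhelming probability, and the estimate saturates
# at the stimulus entropy (~2 bits) instead of the true value 0.
print(len(set(responses)), "distinct patterns out of", n_samples)
print("plug-in MI estimate:", plugin_mutual_information(stimuli, responses))
print("stimulus entropy:", -sum((c / n_samples) * np.log2(c / n_samples)
                                for c in Counter(stimuli).values()))
```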

2018 ◽  
Vol 30 (4) ◽  
pp. 885-944 ◽  
Author(s):  
Wentao Huang ◽  
Kechen Zhang

While Shannon's mutual information has widespread applications in many disciplines, for practical applications it is often difficult to calculate its value accurately for high-dimensional variables because of the curse of dimensionality. This article focuses on effective approximation methods for evaluating mutual information in the context of neural population coding. For large but finite neural populations, we derive several information-theoretic asymptotic bounds and approximation formulas that remain valid in high-dimensional spaces. We prove that optimizing the population density distribution based on these approximation formulas is a convex optimization problem that allows efficient numerical solutions. Numerical simulation results confirmed that our asymptotic formulas were highly accurate for approximating mutual information for large neural populations. In special cases, the approximation formulas are exactly equal to the true mutual information. We also discuss techniques of variable transformation and dimensionality reduction to facilitate computation of the approximations.
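The abstract does not reproduce the bounds themselves; as a point of reference, asymptotic approximations of this kind are typically built on the Fisher information matrix. A representative form for a $d$-dimensional stimulus $X$ with entropy $H(X)$, given here as the generic shape of such results rather than the authors' exact expression, is

$$ I(X; R) \;\approx\; H(X) + \frac{1}{2}\,\mathbb{E}_{X}\!\left[\,\ln\frac{\det G(X)}{(2\pi e)^{d}}\,\right], $$

where $G(x)$ is the Fisher information matrix of the population response; approximations of this type become accurate when the population is large enough that the posterior over the stimulus is approximately Gaussian.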


2015 ◽  
Vol 5 (1) ◽  
Author(s):  
Hideaki Shimazaki ◽  
Kolia Sadeghi ◽  
Tomoe Ishikawa ◽  
Yuji Ikegaya ◽  
Taro Toyoizumi

Activity patterns of neural populations are constrained by underlying biological mechanisms. These patterns are characterized not only by individual activity rates and pairwise correlations but also by statistical dependencies among groups of more than two neurons, known as higher-order interactions (HOIs). While HOIs are ubiquitous in neural activity, their primary characteristics remain unknown. Here, we report that simultaneous silence (SS) of neurons concisely summarizes neural HOIs. Spontaneously active neurons in cultured hippocampal slices exhibit SS more frequently than predicted by their individual activity rates and pairwise correlations. The SS explains the structured HOIs seen in the data, namely, alternating signs at successive interaction orders. Inhibitory neurons are necessary to maintain significant SS. The structured HOIs predicted by SS were observed in a simple neural population model characterized by spiking nonlinearity and correlated input. These results suggest that SS is a ubiquitous feature of HOIs that constrain neural activity patterns and can influence information processing.
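A minimal sketch of the comparison at the heart of this abstract, with synthetic data standing in for the recordings: simultaneous silence is counted directly and compared against the prediction from individual rates under independence. A full treatment would also condition on pairwise correlations, e.g. via a pairwise maximum-entropy model.

```python
import numpy as np

rng = np.random.default_rng(1)
n_neurons, n_bins = 20, 50_000
base_rates = rng.uniform(0.02, 0.1, n_neurons)  # per-bin spike probabilities

# Shared excitability fluctuation (mean 1) correlates the whole population;
# a toy stand-in for the recorded data analysed in the paper.
drive = 2.0 * rng.random(n_bins)
spikes = rng.random((n_bins, n_neurons)) < base_rates * drive[:, None]

p_ss_observed = np.mean(~spikes.any(axis=1))
# Prediction from individual rates alone, assuming independence:
p_ss_independent = np.prod(1.0 - spikes.mean(axis=0))

print(f"observed P(all silent):      {p_ss_observed:.3f}")
print(f"independent-rate prediction: {p_ss_independent:.3f}")
# The excess of observed over predicted simultaneous silence is the kind of
# higher-order structure the paper's SS measure summarizes.
```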


2018 ◽  
Author(s):  
Adrianna R. Loback ◽  
Michael J. Berry

When correlations within a neural population are strong enough, neural activity in early visual areas is organized into a discrete set of clusters. Here, we show that a simple, biologically plausible circuit can learn, and then read out in real time, the identity of experimentally measured clusters of retinal ganglion cell population activity. After learning, individual readout neurons develop cluster tuning, meaning that they respond strongly to any neural activity pattern in one cluster and weakly to all other inputs. Different readout neurons specialize for different clusters, and all input clusters can be learned, as long as the number of readout units is mildly larger than the number of input clusters. We argue that this operation can be repeated as signals flow up the cortical hierarchy.
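A minimal sketch of this kind of readout, using generic online competitive (winner-take-all Hebbian) learning rather than the authors' specific circuit; the synthetic clusters and all parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
n_input, n_clusters, n_readout = 50, 4, 6    # readout mildly exceeds clusters

# Synthetic "population activity": binary patterns near one of 4 prototypes.
prototypes = rng.random((n_clusters, n_input)) < 0.3
def sample_pattern():
    k = rng.integers(n_clusters)
    flips = rng.random(n_input) < 0.05       # small within-cluster noise
    return np.logical_xor(prototypes[k], flips).astype(float)

W = rng.random((n_readout, n_input))         # readout weights
W /= np.linalg.norm(W, axis=1, keepdims=True)
eta = 0.05
for _ in range(5000):
    x = sample_pattern()
    winner = np.argmax(W @ x)                # winner-take-all competition
    W[winner] += eta * (x - W[winner])       # move the winner toward the input

# After learning, each cluster should excite one specialized readout unit,
# while surplus units remain untuned.
for k in range(n_clusters):
    x = prototypes[k].astype(float)
    print(f"cluster {k}: preferred readout unit {np.argmax(W @ x)}")
```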


2019 ◽  
Vol 31 (6) ◽  
pp. 1015-1047 ◽  
Author(s):  
John A. Berkowitz ◽  
Tatyana O. Sharpee

Quantifying mutual information between inputs and outputs of a large neural circuit is an important open problem in both machine learning and neuroscience. However, evaluation of the mutual information is known to be generally intractable for large systems due to the exponential growth in the number of terms that need to be evaluated. Here we show how information contained in the responses of large neural populations can be effectively computed provided the input-output functions of individual neurons can be measured and approximated by a logistic function applied to a potentially nonlinear function of the stimulus. Neural responses in this model can remain sensitive to multiple stimulus components. We show that the mutual information in this model can be effectively approximated as a sum of lower-dimensional conditional mutual information terms. The approximations become exact in the limit of large neural populations and for certain conditions on the distribution of receptive fields across the neural population. We empirically find that these approximations continue to work well even when the conditions on the receptive field distributions are not fulfilled. The computing cost for the proposed methods grows linearly in the dimension of the input and compares favorably with other approximations.
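The natural starting point for such a decomposition is the exact chain rule for mutual information; writing $R_{<i} = (R_1, \dots, R_{i-1})$,

$$ I(S; R_1, \dots, R_N) \;=\; \sum_{i=1}^{N} I\!\left(S; R_i \mid R_{<i}\right), $$

which is an identity rather than an approximation. Each term still conditions on a growing set of responses; the paper's contribution, not reproduced here, is to replace these terms with tractable lower-dimensional conditional mutual information terms that become exact in the large-population limit.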


2012 ◽  
Vol 24 (7) ◽  
pp. 1740-1780 ◽  
Author(s):  
Stuart Yarrow ◽  
Edward Challis ◽  
Peggy Seriès

The precision of the neural code is commonly investigated using two families of statistical measures: Shannon mutual information and derived quantities when investigating very small populations of neurons, and Fisher information when studying large populations. These statistical tools are no longer the preserve of theorists and are being applied by experimental research groups in the analysis of empirical data. Although the relationship between information-theoretic and Fisher-based measures in the limit of infinite populations is relatively well understood, how these measures compare in finite-size populations has not yet been systematically explored. We aim to close this gap. We are particularly interested in understanding which stimuli are best encoded by a given neuron within a population and how this depends on the chosen measure. We use a novel Monte Carlo approach to compute a stimulus-specific decomposition of the mutual information (the SSI) for populations of up to 256 neurons and show that Fisher information can be used to accurately estimate both mutual information and SSI for populations on the order of 100 neurons, even in the presence of biologically realistic variability, noise correlations, and experimentally relevant integration times. According to both measures, the stimuli that are best encoded are those falling at the flanks of the neuron's tuning curve. In populations of fewer than around 50 neurons, however, Fisher information can be misleading.
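A minimal sketch of the Fisher-information side of such a comparison, for a standard toy model of independent Poisson neurons with von Mises tuning curves; the parameter values are illustrative, and the paper's Monte Carlo SSI computation is not reproduced here.

```python
import numpy as np

# Population of independent Poisson neurons tuned to a circular stimulus.
n_neurons = 100
centers = np.linspace(-np.pi, np.pi, n_neurons, endpoint=False)
r_max, kappa, T = 30.0, 2.0, 0.1         # peak rate (Hz), width, window (s)

def rates(s):
    return r_max * np.exp(kappa * (np.cos(s - centers) - 1.0))

def fisher_information(s):
    # For independent Poisson counts n_i ~ Poisson(T * f_i(s)):
    #   J(s) = T * sum_i f_i'(s)^2 / f_i(s)
    f = rates(s)
    f_prime = -kappa * np.sin(s - centers) * f
    return T * np.sum(f_prime ** 2 / f)

# Fisher-based large-population approximation of the mutual information
# (the Brunel-Nadal relation, cf. the 1998 entry below) for a uniform
# stimulus on the circle:
stimuli = np.linspace(-np.pi, np.pi, 512, endpoint=False)
J = np.array([fisher_information(s) for s in stimuli])
H_S = np.log2(2 * np.pi)                 # stimulus (differential) entropy
I_fisher = H_S + 0.5 * np.mean(np.log2(J / (2 * np.pi * np.e)))
print(f"Fisher-based MI approximation: {I_fisher:.2f} bits")
```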


2007 ◽  
Vol 19 (5) ◽  
pp. 1295-1312 ◽  
Author(s):  
Santiago Jaramillo ◽  
Barak A. Pearlmutter

Neuronal activity in response to a fixed stimulus has been shown to change as a function of attentional state, implying that the neural code also changes with attention. We propose an information-theoretic account of such modulation: that the nervous system adapts to optimally encode sensory stimuli while taking into account the changing relevance of different features. We show using computer simulation that such modulation emerges in a coding system informed about the uneven relevance of the input features. We present a simple feedforward model that learns a covert attention mechanism, given input patterns and coding fidelity requirements. After optimization, the system gains the ability to reorganize its computational resources (and coding strategy) depending on the incoming attentional signal, without the need for multiplicative interactions or explicit gating mechanisms between units. The modulation of activity for different attentional states matches that observed in a variety of selective attention experiments. This model predicts that the shape of the attentional modulation function can be strongly stimulus dependent. The general principle presented here accounts for attentional modulation of neural activity without relying on special-purpose architectural mechanisms dedicated to attention. This principle applies to different attentional goals, and its implications are relevant for all modalities in which attentional phenomena are observed.


2019 ◽  
Vol 116 (30) ◽  
pp. 15210-15215 ◽  
Author(s):  
Emily R. Oby ◽  
Matthew D. Golub ◽  
Jay A. Hennig ◽  
Alan D. Degenhart ◽  
Elizabeth C. Tyler-Kabara ◽  
...  

Learning has been associated with changes in the brain at every level of organization. However, it remains difficult to establish a causal link between specific changes in the brain and new behavioral abilities. We establish that new neural activity patterns emerge with learning. We demonstrate that these new neural activity patterns cause the new behavior. Thus, the formation of new patterns of neural population activity can underlie the learning of new skills.


1998 ◽  
Vol 10 (7) ◽  
pp. 1731-1757 ◽  
Author(s):  
Nicolas Brunel ◽  
Jean-Pierre Nadal

In the context of parameter estimation and model selection, it is only quite recently that a direct link between the Fisher information and information-theoretic quantities has been exhibited. We give an interpretation of this link within the standard framework of information theory. We show that in the context of population coding, the mutual information between the activity of a large array of neurons and a stimulus to which the neurons are tuned is naturally related to the Fisher information. In the light of this result, we consider the optimization of the tuning curve parameters in the case of neurons responding to a stimulus represented by an angular variable.
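For a scalar stimulus $s$ with prior $p(s)$ and population Fisher information $J(s)$, the relation referred to here is commonly written (our rendering of the standard large-population result) as

$$ I(S; R) \;\simeq\; H(S) - \int p(s)\,\frac{1}{2}\log_2\!\frac{2\pi e}{J(s)}\,\mathrm{d}s, $$

i.e. the mutual information approaches the stimulus entropy minus the entropy of a Gaussian error with the Cramér-Rao variance $1/J(s)$, averaged over stimuli.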


Author(s):  
Martina Valente ◽  
Giuseppe Pica ◽  
Caroline A. Runyan ◽  
Ari S. Morcos ◽  
Christopher D. Harvey ◽  
...  

The spatiotemporal structure of activity in populations of neurons is critical for accurate perception and behavior. Experimental and theoretical studies have focused on "noise" correlations (trial-to-trial covariations in neural activity for a given stimulus) as a key feature of population activity structure. Much work has shown that these correlations limit the stimulus information encoded by a population of neurons, leading to the widely held prediction that correlations are detrimental for perceptual discrimination behaviors. However, this prediction relies on an untested assumption: that the neural mechanisms that read out sensory information to inform behavior depend only on a population's total stimulus information, independently of how correlations constrain this information across neurons or time. Here we make the critical advance of simultaneously studying how correlations affect both the encoding and the readout of sensory information. We analyzed calcium imaging data from mouse posterior parietal cortex during two perceptual discrimination tasks. Correlations limited the ability to encode stimulus information, but (seemingly paradoxically) correlations were higher when mice made correct choices than when they made errors. On a single-trial basis, a mouse's behavioral choice depended not only on the stimulus information in the activity of the population as a whole, but unexpectedly also on the consistency of information across neurons and time. Because correlations increased information consistency, sensory information was more efficiently converted into a behavioral choice in the presence of correlations. Given this enhanced-by-consistency readout, we estimated that correlations produced a behavioral benefit that compensated for or outweighed their detrimental information-limiting effects. These results call for a re-evaluation of the role of correlated neural activity, and suggest that correlations in association cortex can benefit task performance even if they decrease sensory information.
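A minimal sketch of the basic quantity at issue, noise correlations computed from trials x neurons activity at a fixed stimulus; the data here are synthetic, and the paper's information-consistency analysis is beyond this sketch.

```python
import numpy as np

rng = np.random.default_rng(3)
n_trials, n_neurons = 200, 8

# Synthetic single-stimulus responses: fixed tuned mean plus shared noise.
mean_response = rng.uniform(2, 10, n_neurons)
shared_noise = rng.normal(0, 1, (n_trials, 1))        # common fluctuation
private_noise = rng.normal(0, 1, (n_trials, n_neurons))
activity = mean_response + 1.5 * shared_noise + private_noise

# Noise correlations: trial-to-trial covariation at a fixed stimulus.
noise_corr = np.corrcoef(activity, rowvar=False)
off_diag = noise_corr[~np.eye(n_neurons, dtype=bool)]
print(f"mean pairwise noise correlation: {off_diag.mean():.3f}")
# Expected value 1.5^2 / (1.5^2 + 1) ~= 0.69 from the shared-noise variance.
```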


2019 ◽  
Author(s):  
Christina T. Echagarruga ◽  
Kyle Gheres ◽  
Patrick J. Drew

Changes in cortical neural activity are coupled to changes in local arterial diameter and blood flow. However, the neuronal types and the signaling mechanisms that control the basal diameter of cerebral arteries, or their evoked dilations, are not well understood. Using chronic two-photon microscopy, electrophysiology, chemogenetics, and pharmacology in awake, head-fixed mice, we dissected the cellular mechanisms controlling the basal diameter and evoked dilation of cortical arteries. We found that modulating overall neural activity up or down caused corresponding increases or decreases in basal arterial diameter. Surprisingly, modulating pyramidal neuron activity had minimal effects on basal or evoked arterial dilation. Instead, the neurally mediated component of arterial dilation was largely regulated through nitric oxide released by neuronal nitric oxide synthase (nNOS)-expressing neurons, whose activity was not reflected in electrophysiological measures of population activity. Our results show that cortical hemodynamic signals are not controlled by the average activity of the neural population, but rather by the activity of a small ‘oligarchy’ of neurons.

