Approximations of Shannon Mutual Information for Discrete Variables with Applications to Neural Population Coding

Entropy ◽  
2019 ◽  
Vol 21 (3) ◽  
pp. 243
Author(s):  
Wentao Huang ◽  
Kechen Zhang

Although Shannon mutual information has been widely used, its effective calculation is often difficult for many practical problems, including those in neural population coding. Asymptotic formulas based on Fisher information sometimes provide accurate approximations to the mutual information but this approach is restricted to continuous variables because the calculation of Fisher information requires derivatives with respect to the encoded variables. In this paper, we consider information-theoretic bounds and approximations of the mutual information based on Kullback-Leibler divergence and Rényi divergence. We propose several information metrics to approximate Shannon mutual information in the context of neural population coding. While our asymptotic formulas all work for discrete variables, one of them has consistent performance and high accuracy regardless of whether the encoded variables are discrete or continuous. We performed numerical simulations and confirmed that our approximation formulas were highly accurate for approximating the mutual information between the stimuli and the responses of a large neural population. These approximation formulas may potentially bring convenience to the applications of information theory to many practical and theoretical problems.
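As a point of reference (not part of the article), the quantity these formulas approximate has a closed form that can be evaluated directly whenever both variables are discrete and the joint distribution is small enough to enumerate. Below is a minimal NumPy sketch of that exact computation; the example joint table is made up for illustration.

```python
import numpy as np

def mutual_information_bits(p_xy):
    """Exact Shannon mutual information (in bits) from a joint
    probability table p_xy[i, j] = P(X = i, Y = j)."""
    p_xy = np.asarray(p_xy, dtype=float)
    p_xy = p_xy / p_xy.sum()                    # normalize defensively
    p_x = p_xy.sum(axis=1, keepdims=True)       # marginal P(X)
    p_y = p_xy.sum(axis=0, keepdims=True)       # marginal P(Y)
    mask = p_xy > 0                             # skip zero-probability cells
    ratio = p_xy[mask] / (p_x @ p_y)[mask]
    return float(np.sum(p_xy[mask] * np.log2(ratio)))

# Hypothetical 2x2 joint distribution of a stimulus and a binarized response.
joint = np.array([[0.4, 0.1],
                  [0.1, 0.4]])
print(mutual_information_bits(joint))           # about 0.278 bits
```

Direct enumeration of this kind scales exponentially with the number of neurons, which is precisely why asymptotic approximations like those proposed in the article are useful for large populations.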


Author(s):  
Wentao Huang ◽  
Kechen Zhang

Information theory is widely used in various disciplines, and effective calculation of Shannon mutual information is typically not an easy task for many practical applications, including problems of neural population coding in computational and theoretical neuroscience. Asymptotic formulas based on Fisher information may provide accurate approximations to mutual information but this approach is restricted to continuous variables because the calculation requires derivatives with respect to the encoded variables. In this paper, we consider information-theoretic bounds and approximations based on Kullback-Leibler divergence and Rényi divergence. We propose several information metrics to approximate Shannon mutual information in the context of neural population coding, and these asymptotic formulas hold true for discrete variables as there is no requirement for differentiability. In particular, one of our approximation formulas has consistent performance and good accuracy regardless of whether the encoded variables are discrete or continuous. We performed numerical simulations and confirmed that our approximation formulas were highly accurate for approximating mutual information between the discrete variables or stimuli and the responses of a large neural population. These approximation formulas may potentially bring convenience to the applications of information theory to many practical and theoretical problems.


2018 ◽  
Vol 30 (4) ◽  
pp. 885-944 ◽  
Author(s):  
Wentao Huang ◽  
Kechen Zhang

While Shannon's mutual information has widespread applications in many disciplines, for practical applications it is often difficult to calculate its value accurately for high-dimensional variables because of the curse of dimensionality. This article focuses on effective approximation methods for evaluating mutual information in the context of neural population coding. For large but finite neural populations, we derive several information-theoretic asymptotic bounds and approximation formulas that remain valid in high-dimensional spaces. We prove that optimizing the population density distribution based on these approximation formulas is a convex optimization problem that allows efficient numerical solutions. Numerical simulation results confirmed that our asymptotic formulas were highly accurate for approximating mutual information for large neural populations. In special cases, the approximation formulas are exactly equal to the true mutual information. We also discuss techniques of variable transformation and dimensionality reduction to facilitate computation of the approximations.
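The abstract's claim that optimizing the population density is convex can be made concrete with a generic stand-in objective: an average of the logarithm of a quantity that is linear in the population weights is concave, so maximizing it over the probability simplex is a convex program. The CVXPY sketch below uses made-up Fisher-information values and hypothetical problem sizes; it illustrates the structure of such an optimization, not the article's actual approximation formulas.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n_stimuli, n_types = 20, 8                            # hypothetical sizes
p_x = np.full(n_stimuli, 1.0 / n_stimuli)             # stimulus prior
J = rng.uniform(0.1, 5.0, size=(n_stimuli, n_types))  # made-up per-type Fisher information

w = cp.Variable(n_types, nonneg=True)                 # population density over neuron types
pooled = J @ w                                        # pooled Fisher information at each stimulus
objective = cp.Maximize(cp.sum(cp.multiply(p_x, cp.log(pooled))))  # concave stand-in objective
problem = cp.Problem(objective, [cp.sum(w) == 1])
problem.solve()
print(np.round(w.value, 3))                           # allocation concentrates on informative types
```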


2016 ◽  
Vol 28 (2) ◽  
pp. 305-326 ◽  
Author(s):  
Xue-Xin Wei ◽  
Alan A. Stocker

Fisher information is generally believed to represent a lower bound on mutual information (Brunel & Nadal, 1998), a result that is frequently used in the assessment of neural coding efficiency. However, we demonstrate that the relation between these two quantities is more nuanced than previously thought. For example, we find that in the small noise regime, Fisher information actually provides an upper bound on mutual information. Generally, our results show that it is more appropriate to consider Fisher information as an approximation rather than a bound on mutual information. We analytically derive the correspondence between the two quantities and the conditions under which the approximation is good. Our results have implications for neural coding theories and the link between neural population coding and psychophysically measurable behavior. Specifically, they allow us to formulate the efficient coding problem of maximizing mutual information between a stimulus variable and the response of a neural population in terms of Fisher information. We derive a signature of efficient coding expressed as the correspondence between the population Fisher information and the distribution of the stimulus variable. The signature is more general than previously proposed solutions that rely on specific assumptions about the neural tuning characteristics. We demonstrate that it can explain measured tuning characteristics of cortical neural populations that do not agree with previous models of efficient coding.
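For readers who want the correspondence in symbols, it is usually quoted in roughly the following form (a standard statement under large-population, smooth-tuning assumptions for a one-dimensional stimulus, not necessarily the authors' exact notation), together with the efficient-coding condition obtained by maximizing the approximation under a resource constraint:

```latex
% Fisher-information approximation to mutual information, with stimulus \theta,
% population response r, Fisher information J(\theta), and stimulus prior p(\theta):
I(\theta; r) \;\approx\; H(\theta) + \frac{1}{2}\,
  \mathbb{E}_{\theta}\!\left[\log \frac{J(\theta)}{2\pi e}\right]
% Maximizing this approximation under a resource constraint of the form
% \int \sqrt{J(\theta)}\, d\theta \le C yields the efficient-coding signature
\sqrt{J(\theta)} \;\propto\; p(\theta)
```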


2005 ◽  
Vol 17 (4) ◽  
pp. 839-858 ◽  
Author(s):  
Shun-ichi Amari ◽  
Hiroyuki Nakahara

Fisher information has been used to analyze the accuracy of neural population coding. This works well when the Fisher information does not degenerate, but when two stimuli are presented to a population of neurons, a singular structure emerges by their mutual interactions. In this case, the Fisher information matrix degenerates, and the regularity condition ensuring the Cramér-Rao paradigm of statistics is violated. An animal shows pathological behavior in such a situation. We present a novel method of statistical analysis to understand information in population coding in which algebraic singularity plays a major role. The method elucidates the nature of the pathological case by calculating the Fisher information. We then suggest that synchronous firing can resolve singularity and show a method of analyzing the binding problem in terms of the Fisher information. Our method integrates a variety of disciplines in population coding, such as nonregular statistics, Bayesian statistics, singularity in algebraic geometry, and synchronous firing, under the theme of Fisher information.
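A minimal illustration of the kind of degeneracy described here (a toy construction, not the authors' example): if the population response depends on two stimulus parameters only through their sum, the Fisher information matrix has rank one and the Cramér-Rao bound for the individual parameters breaks down.

```latex
% If the log-likelihood depends on (\theta_1, \theta_2) only through u = \theta_1 + \theta_2,
% and J_u denotes the Fisher information with respect to u, then
J(\theta_1, \theta_2) \;=\; J_u \begin{pmatrix} 1 & 1 \\ 1 & 1 \end{pmatrix},
\qquad \det J(\theta_1, \theta_2) = 0
% The matrix is rank one, so the regularity conditions behind the Cramér-Rao
% bound for the individual parameters are violated.
```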


2011 ◽  
Vol 121-126 ◽  
pp. 4203-4207 ◽  
Author(s):  
Lin Huo ◽  
Chuan Lv ◽  
Si Miao Fei ◽  
Dong Zhou

Because most mutual-information methods are limited to correlation analysis between discrete variables and tend to favor characteristic variables that take many values, in this paper we propose a new mutual-information-based approach to measure the correlation between discrete and continuous variables. We then take the fire control system of an aircraft as an example, calculate the correlation between fault types and monitored data indexes, and finally identify the fault symptom classes.
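One simple way to estimate mutual information between a discrete variable and a continuous one is to discretize the continuous variable into bins and apply the exact discrete formula to the resulting joint counts. The sketch below is a generic histogram-based estimator on made-up data, not the approach of this paper.

```python
import numpy as np

def mi_discrete_continuous(labels, values, n_bins=16):
    """Histogram-based estimate (in bits) of the mutual information between
    a discrete variable `labels` and a continuous variable `values`."""
    labels = np.asarray(labels)
    values = np.asarray(values, dtype=float)
    edges = np.quantile(values, np.linspace(0, 1, n_bins + 1))     # equal-mass bin edges
    binned = np.clip(np.digitize(values, edges[1:-1]), 0, n_bins - 1)
    classes = np.unique(labels)
    joint = np.zeros((classes.size, n_bins))
    for i, c in enumerate(classes):
        joint[i] = np.bincount(binned[labels == c], minlength=n_bins)
    joint /= joint.sum()
    p_c = joint.sum(axis=1, keepdims=True)
    p_b = joint.sum(axis=0, keepdims=True)
    mask = joint > 0
    return float(np.sum(joint[mask] * np.log2(joint[mask] / (p_c @ p_b)[mask])))

# Made-up example: a binary "fault type" label that shifts the mean of a monitored index.
rng = np.random.default_rng(1)
fault = rng.integers(0, 2, size=5000)
index = rng.normal(loc=fault * 1.5, scale=1.0)
print(mi_discrete_continuous(fault, index))      # clearly above 0 bits
```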


1998 ◽  
Vol 10 (7) ◽  
pp. 1731-1757 ◽  
Author(s):  
Nicolas Brunel ◽  
Jean-Pierre Nadal

In the context of parameter estimation and model selection, it is only quite recently that a direct link between the Fisher information and information-theoretic quantities has been exhibited. We give an interpretation of this link within the standard framework of information theory. We show that in the context of population coding, the mutual information between the activity of a large array of neurons and a stimulus to which the neurons are tuned is naturally related to the Fisher information. In the light of this result, we consider the optimization of the tuning curve parameters in the case of neurons responding to a stimulus represented by an angular variable.
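A toy numerical check of this relation (a one-dimensional Gaussian example of my own construction, not taken from the article): for a Gaussian stimulus prior with standard deviation s and additive Gaussian response noise with standard deviation sigma, the Fisher information is J = 1/sigma^2, the exact mutual information is (1/2)·log(1 + s^2/sigma^2), and the Fisher-information approximation H(theta) + (1/2)·E[log(J/(2*pi*e))] reduces to (1/2)·log(s^2/sigma^2). The two agree closely once the noise is small relative to the prior spread.

```python
import numpy as np

s, sigma = 2.0, 0.1                                # prior std and noise std (made-up values)
exact  = 0.5 * np.log(1.0 + (s / sigma) ** 2)      # exact Gaussian-channel MI, in nats
approx = 0.5 * np.log((s / sigma) ** 2)            # Fisher-information approximation
print(exact, approx)                               # ~2.997 vs ~2.996 nats
```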


Author(s):  
Leonard P. Pomrehn ◽  
Panos Y. Papalambros

The use of discrete variables in optimal design models offers the opportunity to deal rigorously with an expanded variety of design situations, as opposed to using only continuous variables. However, complexity and solution difficulty increase dramatically, and model formulation becomes very important. A particular problem arising from the design of a gear train employing four spur gear pairs is introduced and formulated in several different ways. An interesting aspect of the problem is its exhibition of three different types of discreteness. The problem could serve as a test for a variety of optimization or artificial intelligence techniques. The best-known solution is included in this article, while its derivation is given in a sequel article.

