The Formation of Topographic Maps That Maximize the Average Mutual Information of the Output Responses to Noiseless Input Signals

1997 ◽  
Vol 9 (3) ◽  
pp. 595-606 ◽  
Author(s):  
Marc M. Van Hulle

This article introduces an extremely simple and local learning rule for topographic map formation. The rule, called the maximum entropy learning rule (MER), maximizes the unconditional entropy of the map's output for any type of input distribution. The aim of this article is to show that MER is a viable strategy for building topographic maps that maximize the average mutual information of the output responses to noiseless input signals when only input noise and noise-added input signals are available.
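MER itself is not reproduced here, but the entropy objective it maximizes is easy to illustrate numerically: a quantizer whose regions are equiprobable attains the maximum output entropy log2(n) regardless of the input distribution. A minimal sketch (the `output_entropy` helper and the 8-unit map are illustrative assumptions, not the paper's algorithm):

```python
import numpy as np

def output_entropy(samples, boundaries):
    """Entropy (bits) of the discrete output induced by quantization boundaries."""
    counts = np.histogram(samples, bins=boundaries)[0]
    p = counts / counts.sum()
    p = p[p > 0]                                 # drop empty bins before log
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(1)
x = rng.exponential(scale=1.0, size=50_000)      # deliberately skewed input
n = 8                                            # number of map units

# Uniformly spaced boundaries vs. equiprobable (quantile) boundaries.
uniform_b = np.linspace(x.min(), x.max(), n + 1)
quantile_b = np.quantile(x, np.linspace(0, 1, n + 1))

H_uniform = output_entropy(x, uniform_b)         # well below the maximum
H_quantile = output_entropy(x, quantile_b)       # approaches log2(8) = 3 bits
```

The quantile boundaries equalize the occupancy of every map unit, which is exactly the condition under which the output entropy reaches its maximum for this skewed input.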

2000 ◽  
Author(s):  
Paul B. Deignan ◽  
Peter H. Meckl ◽  
Matthew A. Franchek ◽  
Salim A. Jaliwala ◽  
George G. Zhu

Abstract: A methodology for the intelligent, model-independent selection of an appropriate set of input signals for the system identification of an unknown process is demonstrated. In modeling this process, it is shown that the terms of a simple nonlinear polynomial model may also be determined through the analysis of the average mutual information between inputs and the output. Average mutual information can be thought of as a nonlinear correlation coefficient and can be calculated from input/output data alone. The methodology described here is especially applicable to the development of virtual sensors.
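Since average mutual information can be computed from input/output data alone, a simple histogram-based estimator suffices for screening candidate inputs. A sketch (the estimator and the test signals are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

def average_mutual_information(x, y, bins=16):
    """Histogram estimate of the average mutual information I(X;Y) in bits."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()                    # joint probabilities
    px = pxy.sum(axis=1, keepdims=True)          # marginal of x
    py = pxy.sum(axis=0, keepdims=True)          # marginal of y
    nz = pxy > 0                                 # avoid log(0) on empty cells
    return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 10_000)
y = x**2 + 0.05 * rng.normal(size=x.size)        # nonlinear dependence on x
z = rng.uniform(-1, 1, 10_000)                   # independent of x

mi_xy = average_mutual_information(x, y)         # large: y is a function of x
mi_xz = average_mutual_information(x, z)         # near zero (estimator bias only)
```

The "nonlinear correlation coefficient" point is visible here: the Pearson correlation of `x` and `y` is near zero because `y = x**2` is symmetric, yet `mi_xy` is large.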


1997 ◽  
Vol 9 (8) ◽  
pp. 1661-1665 ◽  
Author(s):  
Ralph Linsker

This note presents a local learning rule that enables a network to maximize the mutual information between input and output vectors. The network's output units may be nonlinear, and the distribution of input vectors is arbitrary. The local algorithm also serves to compute the inverse C⁻¹ of an arbitrary square connection weight matrix.
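Linsker's rule itself is not reproduced here. As an illustration that a matrix inverse can be obtained from multiplications and additions alone, the classical Newton-Schulz iteration (an assumed stand-in for this sketch, not the note's algorithm) converges to C⁻¹ from a suitably scaled start:

```python
import numpy as np

def newton_schulz_inverse(C, iters=60):
    """Iterate X <- X (2I - C X); converges quadratically to the inverse of C
    when ||I - C X0|| < 1, which the scaling of X0 below guarantees."""
    n = C.shape[0]
    X = C.T / (np.linalg.norm(C, 1) * np.linalg.norm(C, np.inf))  # safe start
    I = np.eye(n)
    for _ in range(iters):
        X = X @ (2 * I - C @ X)
    return X

C = np.array([[4.0, 1.0], [2.0, 3.0]])
Cinv = newton_schulz_inverse(C)
# C @ Cinv recovers the identity to machine precision.
```

Convergence naturally requires C to be nonsingular; the start `C.T / (||C||_1 * ||C||_inf)` is a standard sufficient initialization.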


1989 ◽  
Vol 1 (3) ◽  
pp. 402-411 ◽  
Author(s):  
Ralph Linsker

A learning rule that performs gradient ascent in the average mutual information between input and an output signal is derived for a system having feedforward and lateral interactions. Several processes emerge as components of this learning rule: Hebb-like modification, and cooperation and competition among processing nodes. Topographic map formation is demonstrated using the learning rule. An analytic expression relating the average mutual information to the response properties of nodes and their geometric arrangement is derived in certain cases. This yields a relation between the local map magnification factor and the probability distribution in the input space. The results provide new links between unsupervised learning and information-theoretic optimization in a system whose properties are biologically motivated.
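A well-known special case connects this result to Hebbian learning: for a single linear unit, Oja's rule (used here as an illustrative stand-in, not the paper's full feedforward-and-lateral rule) performs Hebb-like updates that converge to the principal component of the input, the direction of maximal output variance:

```python
import numpy as np

rng = np.random.default_rng(3)
# Correlated 2-D inputs: most variance lies along one principal axis.
C = np.array([[3.0, 1.0], [1.0, 2.0]])
X = rng.multivariate_normal([0.0, 0.0], C, size=20_000)

w = rng.normal(size=2)
eta = 0.01
for x in X:
    y = w @ x                        # the unit's output
    w += eta * y * (x - y * w)       # Hebbian term y*x plus a normalizing decay

# The learned weights align with the leading eigenvector of the covariance.
v = np.linalg.eigh(C)[1][:, -1]
alignment = abs(w @ v) / np.linalg.norm(w)
```

The decay term `-eta * y**2 * w` is what keeps the weights bounded, playing the competitive/normalizing role that the abstract attributes to interactions among nodes.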


Author(s):  
Nguyen N. Tran ◽  
Ha X. Nguyen

A capacity analysis for generally correlated wireless multi-hop multi-input multi-output (MIMO) channels is presented in this paper. The channel at each hop is spatially correlated, the source symbols are mutually correlated, and the additive Gaussian noises are colored. First, by invoking the Karush-Kuhn-Tucker conditions for the optimality of convex programming, we derive the source symbol covariance that maximizes the mutual information between the channel input and the channel output when full channel knowledge is available at the transmitter. Second, we formulate the average mutual information maximization problem when only the channel statistics are available at the transmitter. Since this problem is analytically intractable, the numerical interior-point method is employed to obtain the optimal solution. Furthermore, to reduce the computational complexity, an asymptotic closed-form solution is derived by maximizing an upper bound of the objective function. Simulation results show that the average mutual information obtained by the asymptotic design is very close to that obtained by the optimal design, at a fraction of the computational cost.
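With full channel knowledge, the KKT conditions for this class of problems reduce, in the simplest case of white noise and uncorrelated symbols, to the familiar water-filling allocation over channel eigenmodes. A sketch under those simplifying assumptions (not the paper's correlated multi-hop setting):

```python
import numpy as np

def waterfill(gains, power):
    """Water-filling from the KKT conditions: allocate p_i = mu - 1/g_i over
    the active eigenmodes, with the water level mu set by the power budget."""
    gains = np.asarray(gains, dtype=float)
    order = np.argsort(gains)[::-1]              # strongest modes first
    for k in range(gains.size, 0, -1):
        active = order[:k]
        mu = (power + np.sum(1.0 / gains[active])) / k   # candidate water level
        p = mu - 1.0 / gains[active]
        if np.all(p > 0):                        # KKT: active powers positive
            alloc = np.zeros_like(gains)
            alloc[active] = p
            return alloc
    return np.zeros_like(gains)

gains = np.array([4.0, 1.0, 0.25])               # eigenmode SNR gains
p = waterfill(gains, power=1.0)                  # -> [0.875, 0.125, 0.0]
capacity = np.sum(np.log2(1.0 + gains * p))      # bits per channel use
```

The weakest mode receives no power: its 1/g_i floor lies above the water level, exactly the complementary-slackness outcome of the KKT conditions.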


2019 ◽  
Author(s):  
Michael E. Rule ◽  
Adrianna R. Loback ◽  
Dhruva V. Raman ◽  
Laura Driscoll ◽  
Christopher D. Harvey ◽  
...  

Abstract: Over days and weeks, neural activity representing an animal’s position and movement in sensorimotor cortex has been found to continually reconfigure or ‘drift’ during repeated trials of learned tasks, with no obvious change in behavior. This challenges classical theories, which assume stable engrams underlie stable behavior. However, it is not known whether this drift occurs systematically, allowing downstream circuits to extract consistent information. We show that drift is systematically constrained far above chance, facilitating a linear weighted readout of behavioral variables. However, a significant component of drift continually degrades a fixed readout, implying that drift is not confined to a null coding space. We calculate the amount of plasticity required to compensate for drift independently of any learning rule, and find that this is within physiologically achievable bounds. We demonstrate that a simple, biologically plausible local learning rule can achieve these bounds, accurately decoding behavior over many days.
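The contrast between a frozen readout and one that tracks drift can be illustrated with a toy population code (the dimensions, drift rate, and ridge readout are hypothetical choices for this sketch, not the paper's analysis):

```python
import numpy as np

rng = np.random.default_rng(2)
T, N, days = 400, 50, 10                  # timepoints per day, neurons, days
x = np.sin(np.linspace(0, 8 * np.pi, T))  # behavioral variable to decode

W = rng.normal(size=N)                    # each neuron's tuning to x

def ridge_readout(R, target, lam=1e-3):
    """Linear readout weights minimizing ||R w - target||^2 + lam ||w||^2."""
    return np.linalg.solve(R.T @ R + lam * np.eye(R.shape[1]), R.T @ target)

errs_fixed, errs_tracking = [], []
w_fixed = None
for d in range(days):
    R = np.outer(x, W) + 0.1 * rng.normal(size=(T, N))  # day-d activity
    if w_fixed is None:
        w_fixed = ridge_readout(R, x)     # readout frozen after day 0
    w_track = ridge_readout(R, x)         # readout re-fit every day
    errs_fixed.append(np.mean((R @ w_fixed - x) ** 2))
    errs_tracking.append(np.mean((R @ w_track - x) ** 2))
    W = W + 0.2 * rng.normal(size=N)      # representational drift between days
```

The fixed readout accumulates error as the tuning drifts, while the continually re-fit readout stays near the noise floor, mirroring the abstract's point that modest ongoing plasticity suffices to compensate for drift.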

