Spatiotemporal Coding in the Cortex: Information Flow-Based Learning in Spiking Neural Networks

1999 ◽  
Vol 11 (4) ◽  
pp. 919-934 ◽  
Author(s):  
Gustavo Deco ◽  
Bernd Schürmann

We introduce a learning paradigm for networks of integrate-and-fire spiking neurons that is based on an information-theoretic criterion. This criterion can be viewed as a first principle that accounts for the experimentally observed fact that cortical neurons fire synchronously for some stimuli and not for others. The principle postulates a nonparametric reconstruction method as the optimization criterion for learning the functional connectivity that justifies and explains synchronous firing for the binding of features as a mechanism for spatiotemporal coding. In information-theoretic terms, this amounts to maximizing the ability to discriminate between different sensory inputs in minimal time.

2011 ◽  
Vol 106 (1) ◽  
pp. 361-373 ◽  
Author(s):  
Srdjan Ostojic

Interspike interval (ISI) distributions of cortical neurons exhibit a range of different shapes. Wide ISI distributions are believed to stem from a balance of excitatory and inhibitory inputs that leads to a strongly fluctuating total drive. An important question is whether the full range of experimentally observed ISI distributions can be reproduced by modulating this balance. To address this issue, we investigate the shape of the ISI distributions of spiking neuron models receiving fluctuating inputs. Using analytical tools to describe the ISI distribution of a leaky integrate-and-fire (LIF) neuron, we identify three key features: 1) the ISI distribution displays an exponential decay at long ISIs independently of the strength of the fluctuating input; 2) as the amplitude of the input fluctuations is increased, the ISI distribution evolves progressively between three types, a narrow distribution (suprathreshold input), an exponential with an effective refractory period (subthreshold but suprareset input), and a bursting exponential (subreset input); 3) the shape of the ISI distribution is approximately independent of the mean ISI and determined only by the coefficient of variation. Numerical simulations show that these features are not specific to the LIF model but are also present in the ISI distributions of the exponential integrate-and-fire model and a Hodgkin-Huxley-like model. Moreover, we observe that for a fixed mean and coefficient of variation of ISIs, the full ISI distributions of the three models are nearly identical. We conclude that the ISI distributions of spiking neurons in the presence of fluctuating inputs are well described by gamma distributions.
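The regimes described above can be reproduced in a few lines of simulation. A minimal sketch, with all parameter values invented for illustration (not taken from the paper): an Euler-Maruyama simulation of an LIF neuron whose ISI coefficient of variation (CV) is small for suprathreshold mean drive and large for subthreshold, fluctuation-driven drive.

```python
# Minimal LIF sketch (illustrative parameters, not the paper's):
# dV = (mu - V)/tau dt + sigma dW, with spike-and-reset at threshold.
import math
import random

def lif_isis(mu, sigma, n_spikes=1000, dt=0.1, tau=20.0,
             v_reset=0.0, v_thresh=1.0, seed=1):
    """Collect interspike intervals from an Euler-Maruyama LIF simulation."""
    rng = random.Random(seed)
    v, t, last = v_reset, 0.0, 0.0
    isis = []
    while len(isis) < n_spikes:
        v += (mu - v) / tau * dt + sigma * math.sqrt(dt) * rng.gauss(0, 1)
        t += dt
        if v >= v_thresh:
            isis.append(t - last)
            last, v = t, v_reset
    return isis

def cv(xs):
    """Coefficient of variation: std / mean."""
    m = sum(xs) / len(xs)
    return math.sqrt(sum((x - m) ** 2 for x in xs) / len(xs)) / m

# Suprathreshold mean drive: narrow ISI distribution, low CV.
cv_supra = cv(lif_isis(mu=1.5, sigma=0.05))
# Subthreshold mean drive, strong fluctuations: wide distribution, high CV.
cv_sub = cv(lif_isis(mu=0.8, sigma=0.5))
print(cv_supra, cv_sub)
```

Sweeping `sigma` between the two settings traces the progression the abstract describes, from a narrow suprathreshold distribution toward a wide fluctuation-driven one.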


2006 ◽  
Vol 18 (1) ◽  
pp. 45-59 ◽  
Author(s):  
Naoki Masuda

Firing rates and synchronous firing are often simultaneously relevant signals, and they independently or cooperatively represent external sensory inputs, cognitive events, and environmental situations such as body position. However, how rates and synchrony comodulate and which aspects of inputs are effectively encoded, particularly in the presence of dynamical inputs, are unanswered questions. We examine theoretically how mixed information in dynamic mean input and noise input is represented by dynamic population firing rates and synchrony. In a subthreshold regime, amplitudes of spatially uncorrelated noise are encoded up to a fairly high input frequency, but this requires both rate and synchrony output channels. In a suprathreshold regime, means and common noise amplitudes can be simultaneously and separately encoded by rates and synchrony, respectively, but the input frequency for which this is possible has a lower limit.
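A toy version of the common-noise channel can be simulated directly. In this sketch (all parameters invented, much simpler than the paper's analysis), two LIF neurons share a fraction c of their input noise variance; raising c leaves each neuron's single-cell drive statistics unchanged but raises output synchrony, measured here as the Pearson correlation of 5-ms binned spike counts.

```python
# Toy sketch (invented parameters): shared input noise produces output
# synchrony between two leaky integrate-and-fire neurons.
import math
import random

def simulate_pair(c, steps=100_000, dt=0.1, tau=20.0, mu=0.9,
                  sigma=0.5, v_th=1.0, seed=7):
    """Two LIF neurons; a fraction c of the input noise variance is common."""
    rng = random.Random(seed)
    v, spikes = [0.0, 0.0], [[], []]
    for step in range(steps):
        common = rng.gauss(0, 1)
        for i in range(2):
            xi = math.sqrt(c) * common + math.sqrt(1 - c) * rng.gauss(0, 1)
            v[i] += (mu - v[i]) / tau * dt + sigma * math.sqrt(dt) * xi
            if v[i] >= v_th:
                spikes[i].append(step * dt)
                v[i] = 0.0
    return spikes, steps * dt

def count_corr(spikes, t_max, bin_ms=5.0):
    """Pearson correlation of binned spike counts of the two trains."""
    nbins = int(t_max / bin_ms)
    counts = [[0] * nbins for _ in range(2)]
    for i in (0, 1):
        for t in spikes[i]:
            counts[i][min(int(t / bin_ms), nbins - 1)] += 1
    m = [sum(cs) / nbins for cs in counts]
    cov = sum((a - m[0]) * (b - m[1]) for a, b in zip(*counts)) / nbins
    sd = [math.sqrt(sum((x - mi) ** 2 for x in cs) / nbins)
          for cs, mi in zip(counts, m)]
    return cov / (sd[0] * sd[1])

r_indep = count_corr(*simulate_pair(c=0.0))   # no common noise
r_shared = count_corr(*simulate_pair(c=0.9))  # mostly common noise
print(r_indep, r_shared)
```

Modulating `mu` in the same setup changes the population rate, which is how the two output channels (rate and synchrony) can carry mean and common-noise amplitude separately.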


2004 ◽  
Vol 92 (2) ◽  
pp. 959-976 ◽  
Author(s):  
Renaud Jolivet ◽  
Timothy J. Lewis ◽  
Wulfram Gerstner

We demonstrate that single-variable integrate-and-fire models can quantitatively capture the dynamics of a physiologically detailed model for fast-spiking cortical neurons. Through a systematic set of approximations, we reduce the conductance-based model to two variants of integrate-and-fire models. In the first variant (nonlinear integrate-and-fire model), parameters depend on the instantaneous membrane potential, whereas in the second variant, they depend on the time elapsed since the last spike [Spike Response Model (SRM)]. The direct reduction links features of the simple models to biophysical features of the full conductance-based model. To quantitatively test the predictive power of the SRM and of the nonlinear integrate-and-fire model, we compare spike trains in the simple models to those in the full conductance-based model when the models are subjected to identical randomly fluctuating input. For random current input, the simple models reproduce 70–80 percent of the spikes in the full model (with temporal precision of ±2 ms) over a wide range of firing frequencies. For random conductance injection, up to 73 percent of spikes are coincident. We also present a technique for numerically optimizing parameters in the SRM and the nonlinear integrate-and-fire model based on spike trains in the full conductance-based model. This technique can be used to tune simple models to reproduce spike trains of real neurons.
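A spike-coincidence score of this kind can be sketched as a simple greedy matcher. The spike times below are invented for illustration; only the ±2-ms window is taken from the abstract.

```python
# Sketch of a coincidence score: a reference (full-model) spike counts as
# a hit if the candidate (reduced-model) train fires within +/-window ms.
def coincidence_fraction(reference, candidate, window=2.0):
    """Fraction of reference spikes matched by a candidate spike within
    +/-window ms; each candidate spike is used at most once."""
    hits, j = 0, 0
    cand = sorted(candidate)
    for t in sorted(reference):
        while j < len(cand) and cand[j] < t - window:
            j += 1                       # skip candidates too early to match
        if j < len(cand) and abs(cand[j] - t) <= window:
            hits += 1
            j += 1                       # consume the matched candidate spike
    return hits / len(reference) if reference else 0.0

ref = [10.0, 25.0, 40.0, 62.0, 80.0]     # invented full-model spike times (ms)
cand = [10.5, 26.5, 45.0, 61.0, 79.9]    # reduced model misses the 40-ms spike
print(coincidence_fraction(ref, cand))   # 4 of 5 within +/-2 ms -> 0.8
```

In practice such a raw fraction is usually corrected for chance coincidences at the candidate's firing rate; the greedy matcher above is only the counting step.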


2000 ◽  
Vol 55 (3-4) ◽  
pp. 405-411 ◽  
Author(s):  
Hans H. Diebner

The concept of the time-dependent instantaneously occupied phase-space volume is applied to multi-component systems. It allows for the investigation of entropy flows between the components of the system and the evaluation of partial entropies assigned to the subsystems. We give numerical examples by means of molecular dynamics simulations of a 100-particle gas. Using a symplectic, exactly reversible algorithm, a consistent and reliable evaluation of energy and entropy exchanges as well as the intake of work is achieved. The entropy flow, which is related to an information flow, is linked to an observational situation. This yields a further indication of the necessity of an intrinsic observer for a better understanding of the physical world. In addition, it indicates the Gödelian structure of cognition in a most serious way, because only “first-principle” assumptions are made. Thereby, the paradoxical situation created by Jaynes’ concept of an “anthropomorphic entropy” can be resolved by putting the anthropomorphic content of thermodynamics on an ontological basis. This is a straightforward extension of Szilard’s and Brillouin’s information-theoretic treatment of cognition.
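A symplectic, exactly reversible integrator of the kind used for such molecular-dynamics runs can be illustrated with the standard velocity-Verlet step on a one-dimensional stand-in force (a harmonic oscillator; all constants invented): integrating forward, negating the velocity, and integrating the same number of steps returns the initial state up to floating-point roundoff.

```python
# Velocity-Verlet step: symplectic and time-reversible by construction.
def verlet_step(x, v, force, dt, m=1.0):
    a = force(x) / m
    x_new = x + v * dt + 0.5 * a * dt * dt
    v_new = v + 0.5 * (a + force(x_new) / m) * dt
    return x_new, v_new

# Harmonic oscillator F = -k*x as a stand-in for inter-particle forces.
k = 1.0
force = lambda x: -k * x

x, v = 1.0, 0.0
for _ in range(1000):
    x, v = verlet_step(x, v, force, dt=0.01)

# Reverse the run: negate the velocity and take the same number of steps.
xr, vr = x, -v
for _ in range(1000):
    xr, vr = verlet_step(xr, vr, force, dt=0.01)
print(abs(xr - 1.0), abs(vr))  # back at the start, up to roundoff
```

This exact reversibility is what makes the bookkeeping of energy and entropy exchanges in such simulations trustworthy: any drift observed is attributable to the physics of the model, not to a dissipative integrator.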


2003 ◽  
Vol 89 (6) ◽  
pp. 3279-3293 ◽  
Author(s):  
Xiao-Jing Wang ◽  
Yinghui Liu ◽  
Maria V. Sanchez-Vives ◽  
David A. McCormick

Limiting redundancy in real-world sensory inputs is of obvious benefit for efficient neural coding, but little is known about how this may be accomplished by biophysical neural mechanisms. One possible cellular mechanism is adaptation to relatively constant inputs. Recent investigations in primary visual (V1) cortical neurons have demonstrated that adaptation to prolonged changes in stimulus contrast is mediated in part through intrinsic ionic currents, a Ca2+-activated K+ current (IKCa) and especially a Na+-activated K+ current (IKNa). The present study was designed to test the hypothesis that the activation of adaptation ionic currents may provide a cellular mechanism for temporal decorrelation in V1. A conductance-based neuron model that included an IKCa and an IKNa was simulated. We show that the model neuron reproduces the adaptive behavior of V1 neurons in response to high-contrast inputs. When the stimulus is stochastic with 1/f^2- or 1/f-type temporal correlations, these autocorrelations are greatly reduced in the output spike train of the model neuron. The IKCa is effective at reducing positive temporal correlations on an approximately 100-ms time scale, while the slower adaptation mediated by IKNa is effective in reducing temporal correlations over the range of 1–20 s. Intracellular injection of stochastic currents into layer 2/3 and 4 (pyramidal and stellate) neurons in ferret primary visual cortical slices revealed neuronal responses that exhibited temporal decorrelation similar to that of the model. Enhancing the slow afterhyperpolarization strengthened the decorrelation effect. These results demonstrate that the intrinsic membrane properties of neocortical neurons provide a mechanism for decorrelation of sensory inputs.
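The decorrelating role of slow adaptation can be caricatured at the rate level (this is a sketch with invented constants, not the conductance-based model): an adaptation variable tracks a running average of the drive and is subtracted from it, acting as a high-pass filter that strips the long-range correlations of a 1/f^2-type (random-walk) input.

```python
# Rate-level caricature of adaptation as temporal decorrelation (invented
# constants): output = drive minus a slowly tracking adaptation variable.
import random

def autocorr(xs, lag):
    """Sample autocorrelation at a given lag."""
    n = len(xs)
    m = sum(xs) / n
    var = sum((x - m) ** 2 for x in xs) / n
    cov = sum((xs[i] - m) * (xs[i + lag] - m)
              for i in range(n - lag)) / (n - lag)
    return cov / var

rng = random.Random(0)
x, walk = 0.0, []
for _ in range(20000):
    x += rng.gauss(0, 1)       # 1/f^2-type input: a random walk
    walk.append(x)

tau_a = 50.0                    # adaptation time constant (illustrative)
a, out = 0.0, []
for xi in walk:
    out.append(xi - a)          # output rate = drive minus adaptation
    a += (xi - a) / tau_a       # adaptation slowly tracks the drive

ac_walk = autocorr(walk, 500)   # input stays correlated over long lags
ac_out = autocorr(out, 500)     # output correlations are stripped away
print(ac_walk, ac_out)
```

Two adaptation variables with time constants differing by two orders of magnitude, as with IKCa and IKNa above, would extend this whitening across the corresponding two ranges of lags.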


2009 ◽  
Vol 21 (11) ◽  
pp. 3106-3129 ◽  
Author(s):  
Massimiliano Giulioni ◽  
Mario Pannunzi ◽  
Davide Badoni ◽  
Vittorio Dante ◽  
Paolo Del Giudice

We describe the implementation and illustrate the learning performance of an analog VLSI network of 32 integrate-and-fire neurons with spike-frequency adaptation and 2016 Hebbian bistable spike-driven stochastic synapses, endowed with a self-regulating plasticity mechanism, which avoids unnecessary synaptic changes. The synaptic matrix can be flexibly configured and provides both recurrent and external connectivity with address-event representation compliant devices. We demonstrate a marked improvement in the efficiency of the network in classifying correlated patterns, owing to the self-regulating mechanism.
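A toy abstraction of such a bistable, spike-driven, self-regulating synapse (all constants invented, far simpler than the chip's circuit): an internal variable drifts toward the nearer of two stable states, presynaptic spikes kick it up or down depending on the postsynaptic potential, and a stop-learning band freezes updates when a postsynaptic rate signal indicates the pattern is already classified.

```python
# Toy bistable spike-driven synapse with a stop-learning window
# (all constants invented for illustration).
import random

def update(x, v_post, rate_post, rng,
           jump=0.12, drift=0.02, p_transmit=0.5,
           stop_low=0.1, stop_high=0.9, v_theta=0.5):
    # Self-regulation: skip the stochastic update when the postsynaptic
    # rate signal is saturated either way (no change is necessary).
    if stop_low < rate_post < stop_high and rng.random() < p_transmit:
        x += jump if v_post > v_theta else -jump
    # Bistability: drift toward the nearer of the two stable states.
    x += drift if x > 0.5 else -drift
    return min(1.0, max(0.0, x))

rng = random.Random(3)
x = 0.2                       # start near the depressed state
for _ in range(200):          # repeated potentiating pre/post pairings
    x = update(x, v_post=0.8, rate_post=0.5, rng=rng)
print(x)                      # driven into the potentiated state

x2 = 0.2
for _ in range(200):          # same pairings, but the rate signal saturates
    x2 = update(x2, v_post=0.8, rate_post=0.95, rng=rng)
print(x2)                     # stop-learning: drift holds the depressed state
```

The stochastic `p_transmit` gate stands in for the slow, random subset of synapses that change on any one presentation, which is what keeps a bounded bistable synaptic matrix from overwriting old memories.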


1997 ◽  
Vol 9 (2) ◽  
pp. 279-304 ◽  
Author(s):  
Wolfgang Maass

We show that networks of relatively realistic mathematical models for biological neurons can in principle simulate arbitrary feedforward sigmoidal neural nets in a way that has previously not been considered. This new approach is based on temporal coding by single spikes (or, alternatively, by the timing of synchronous firing in pools of neurons) rather than on the traditional interpretation of analog variables in terms of firing rates. The resulting new simulation is substantially faster and hence more consistent with experimental results about the maximal speed of information processing in cortical neural systems. As a consequence, we can show that networks of noisy spiking neurons are “universal approximators” in the sense that they can approximate, with regard to temporal coding, any given continuous function of several variables. This result holds for a fairly large class of schemes for coding analog variables by firing times of spiking neurons. This new proposal for the possible organization of computations in networks of spiking neurons has some interesting consequences for the type of learning rules that would be needed to explain the self-organization of such networks. Finally, the fast and noise-robust implementation of sigmoidal neural nets by temporal coding points to possible new ways of implementing feedforward and recurrent sigmoidal neural nets with pulse stream VLSI.
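The core coding idea can be sketched in a few lines (illustrative only, not Maass's full construction): an analog value in [0, 1] is coded by how early a neuron fires relative to a reference time, and a weighted sum of analog values maps onto a weighted sum of firing-time advances, which is the step that lets spike timing emulate the linear part of a sigmoidal unit. `T_REF` and `SPAN` are invented constants.

```python
# Temporal coding sketch: analog value <-> firing-time advance.
T_REF = 10.0   # reference time in ms (invented)
SPAN = 5.0     # width of the coding window in ms (invented)

def encode(x):
    """Larger analog value -> earlier spike."""
    return T_REF - SPAN * x

def decode(t):
    """Firing-time advance back to an analog value."""
    return (T_REF - t) / SPAN

def timed_weighted_sum(xs, ws):
    """A weighted sum computed in the timing domain: each input's
    firing-time advance, scaled by its weight, advances the output spike."""
    t_out = T_REF - sum(w * (T_REF - encode(x)) for x, w in zip(xs, ws))
    return decode(t_out)

xs = [0.2, 0.5, 0.9]
ws = [0.5, 0.3, 0.2]
print(timed_weighted_sum(xs, ws))  # equals 0.5*0.2 + 0.3*0.5 + 0.2*0.9
```

In the spiking implementation the weighted advance is realized by synaptic weights and delays acting on postsynaptic potentials, and the threshold crossing of the output neuron supplies the nonlinearity.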


1997 ◽  
Vol 9 (5) ◽  
pp. 971-983 ◽  
Author(s):  
Todd W. Troyer ◽  
Kenneth D. Miller

To understand the interspike interval (ISI) variability displayed by visual cortical neurons (Softky & Koch, 1993), it is critical to examine the dynamics of their neuronal integration, as well as the variability in their synaptic input current. Most previous models have focused on the latter factor. We match a simple integrate-and-fire model to the experimentally measured integrative properties of cortical regular spiking cells (McCormick, Connors, Lighthall, & Prince, 1985). After setting RC parameters, the postspike voltage reset is set to match experimental measurements of neuronal gain (obtained from in vitro plots of firing frequency versus injected current). Examination of the resulting model leads to an intuitive picture of neuronal integration that unifies the seemingly contradictory √N and random walk pictures that have previously been proposed. When ISIs are dominated by postspike recovery, √N arguments hold and spiking is regular; after the “memory” of the last spike becomes negligible, spike threshold crossing is caused by input variance around a steady state and spiking is Poisson. In integrate-and-fire neurons matched to cortical cell physiology, steady-state behavior is predominant, and ISIs are highly variable at all physiological firing rates and for a wide range of inhibitory and excitatory inputs.
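The fitting step described above, fixing the reset from a measured gain, can be sketched for the textbook LIF f-I curve (all numbers invented): with R and tau fixed, f(I) = 1 / (tau * ln((I*R - V_reset) / (I*R - V_th))), so the post-spike reset is the free parameter one tunes, here by bisection, until the model's slope matches a target gain.

```python
# Sketch (invented units): tune the LIF post-spike reset to match a
# target f-I gain, as one would match an in vitro frequency-current plot.
import math

TAU, R, V_TH = 10.0, 1.0, 1.0   # ms, arbitrary units

def firing_rate(I, v_reset):
    """Steady-state LIF firing rate for constant current I."""
    drive = I * R
    if drive <= V_TH:
        return 0.0
    return 1.0 / (TAU * math.log((drive - v_reset) / (drive - V_TH)))

def gain(v_reset, I=2.0, dI=0.01):
    """Slope of the f-I curve at current I (central difference)."""
    return (firing_rate(I + dI, v_reset)
            - firing_rate(I - dI, v_reset)) / (2 * dI)

def fit_reset(target_gain, lo=-5.0, hi=0.99):
    """Bisection on v_reset; the gain increases monotonically with it."""
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if gain(mid) < target_gain:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

target = 0.05                   # target gain, spikes/ms per unit current
v_r = fit_reset(target)
print(v_r, gain(v_r))
```

A strongly hyperpolarized reset makes postspike recovery dominate (regular, √N-like firing); a reset near threshold leaves the neuron hovering at steady state, the fluctuation-driven regime the abstract argues is predominant.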

