Recurrent Information Optimization with Local, Metaplastic Synaptic Dynamics

2017 ◽  
Vol 29 (9) ◽  
pp. 2528-2552 ◽  
Author(s):  
Sensen Liu ◽  
ShiNung Ching

We consider the problem of optimizing information-theoretic quantities in recurrent networks via synaptic learning. In contrast to feedforward networks, the recurrence presents a key challenge insofar as an optimal learning rule must aggregate the joint distribution of the whole network. This challenge, in particular, makes a local policy (i.e., one that depends only on pairwise interactions) difficult to construct. Here, we report a local metaplastic learning rule that performs approximate optimization by estimating whole-network statistics through the use of several slow, nested dynamical variables. These dynamics provide the rule with both anti-Hebbian and Hebbian components, thus allowing decorrelating and correlating learning regimes to arise whenever either is favorable for optimality. We demonstrate the performance of the synthesized rule in comparison to classical BCM dynamics and use the networks to perform history-dependent tasks that highlight the advantages of recurrence. Finally, we show the consistency of the resultant learned networks with notions of criticality, including balanced ratios of excitation and inhibition.
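
The abstract does not reproduce the rule itself; the sketch below only illustrates, in a minimal form, how a slow, nested per-synapse variable can switch a purely local update between Hebbian and anti-Hebbian regimes. The function name, the BCM-like functional form, and the parameter values are assumptions for illustration, not the authors' equations.

```python
import numpy as np

def metaplastic_update(W, x, theta, eta=1e-3, tau_theta=100.0):
    """One step of a local rule with a slow metaplastic variable (illustrative sketch only).

    W     : recurrent weight matrix, shape (N, N)
    x     : current firing-rate vector, shape (N,)
    theta : slow per-synapse estimate of recent co-activity, shape (N, N)
    """
    coact = np.outer(x, x)                        # pairwise (purely local) co-activity
    dW = eta * (coact - theta)                    # Hebbian above the slow estimate, anti-Hebbian below it
    theta = theta + (coact - theta) / tau_theta   # slow, nested dynamics tracking network statistics
    np.fill_diagonal(dW, 0.0)                     # no self-connections
    return W + dW, theta
```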

1992 ◽  
Vol 4 (5) ◽  
pp. 691-702 ◽  
Author(s):  
Ralph Linsker

A network that develops to maximize the mutual information between its output and the signal portion of its input (which is admixed with noise) is useful for extracting salient input features, and may provide a model for aspects of biological neural network function. I describe a local synaptic learning rule that performs stochastic gradient ascent in this information-theoretic quantity, for the case in which the input-output mapping is linear and the input signal and noise are multivariate Gaussian. Feedforward connection strengths are modified by a Hebbian rule during a "learning" phase in which examples of input signal plus noise are presented to the network, and by an anti-Hebbian rule during an "unlearning" phase in which examples of noise alone are presented. Each recurrent lateral connection has two values of connection strength, one for each phase; these values are updated by an anti-Hebbian rule.
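
As a rough illustration of the two-phase scheme described above (ignoring the lateral connections, and with all names, dimensions, and parameters assumed for the example rather than taken from the paper), a stochastic "learning"/"unlearning" loop might look like this:

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_out = 8, 2
C = 0.1 * rng.standard_normal((n_out, n_in))     # feedforward connection strengths

def step(C, x, phase, eta=1e-3):
    """One presentation in a Linsker-style two-phase scheme (sketch).

    phase = 'learning'  : x is signal plus noise, Hebbian update
    phase = 'unlearning': x is noise alone,       anti-Hebbian update
    """
    y = C @ x                                     # linear input-output mapping
    sign = 1.0 if phase == 'learning' else -1.0
    return C + sign * eta * np.outer(y, x)        # (anti-)Hebbian outer-product rule

# usage: alternate the two phases over presentations
for _ in range(1000):
    signal = rng.standard_normal(n_in)
    noise = 0.5 * rng.standard_normal(n_in)
    C = step(C, signal + noise, 'learning')
    C = step(C, 0.5 * rng.standard_normal(n_in), 'unlearning')
```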


eLife ◽  
2017 ◽  
Vol 6 ◽  
Author(s):  
Zuzanna Brzosko ◽  
Sara Zannone ◽  
Wolfram Schultz ◽  
Claudia Clopath ◽  
Ole Paulsen

Spike timing-dependent plasticity (STDP) is under neuromodulatory control, which is correlated with distinct behavioral states. Previously, we reported that dopamine, a reward signal, broadens the time window for synaptic potentiation and modulates the outcome of hippocampal STDP even when applied after the plasticity induction protocol (Brzosko et al., 2015). Here, we demonstrate that sequential neuromodulation of STDP by acetylcholine and dopamine offers an efficacious model of reward-based navigation. Specifically, our experimental data in mouse hippocampal slices show that acetylcholine biases STDP toward synaptic depression, whilst subsequent application of dopamine converts this depression into potentiation. Incorporating this bidirectional neuromodulation-enabled correlational synaptic learning rule into a computational model yields effective navigation toward changing reward locations, as in natural foraging behavior. Thus, temporally sequenced neuromodulation of STDP enables associations to be made between actions and outcomes and also provides a possible mechanism for aligning the time scales of cellular and behavioral learning.
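
A minimal sketch of how such sequential neuromodulation could be written down, assuming a standard exponential STDP window and treating the acetylcholine and dopamine effects as simple sign changes on the weight update; this is an illustrative toy under those assumptions, not the fitted model used in the paper:

```python
import numpy as np

def stdp_weight_change(dt, ach=False, dopamine_later=False,
                       a_plus=0.01, a_minus=0.012, tau=20.0):
    """Illustrative sketch of sequentially neuromodulated STDP.

    dt             : t_post - t_pre in ms
    ach            : acetylcholine present during induction -> bias toward depression
    dopamine_later : dopamine applied after induction -> depression converted to potentiation
    """
    if dt >= 0:
        dw = a_plus * np.exp(-dt / tau)      # pre-before-post: potentiation
    else:
        dw = -a_minus * np.exp(dt / tau)     # post-before-pre: depression
    if ach:
        dw = -abs(dw)                        # ACh biases the outcome toward depression
    if dopamine_later:
        dw = abs(dw)                         # subsequent dopamine converts depression into potentiation
    return dw
```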


1991 ◽  
Vol 3 (3) ◽  
pp. 312-320 ◽  
Author(s):  
Graeme Mitchison

I describe a local synaptic learning rule that can be used to remove the effects of certain types of systematic temporal variation in the inputs to a unit. According to this rule, changes in synaptic weight result from a conjunction of short-term temporal changes in the inputs and the output. Formally, this is like the differential rule proposed by Klopf (1986) and Kosko (1986), except for a change of sign, which gives it an anti-Hebbian character. By itself this rule is insufficient. A weight conservation condition is needed to prevent the weights from collapsing to zero, and some further constraint, implemented here by a biasing term, is needed to select particular sets of weights from the subspace of those that give minimal variation. As an example, I show that this rule will generate center-surround receptive fields that remove temporally varying linear gradients from the inputs.
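
A minimal sketch of the rule as described above, assuming a linear unit; the mean-subtraction step and the zero-mean bias_vec are illustrative placeholders for the weight conservation condition and the biasing term, not the paper's exact formulation:

```python
import numpy as np

def differential_anti_hebb(w, x_prev, x, bias_vec, eta=1e-3, eps=1e-4):
    """Anti-Hebbian differential update with weight conservation (illustrative sketch).

    w         : weight vector of the unit
    x_prev, x : input vectors at successive time steps
    bias_vec  : zero-mean vector standing in for the biasing term
    """
    dx = x - x_prev              # short-term temporal change in the inputs
    dy = float(w @ dx)           # corresponding change in the (linear) output
    dw = -eta * dy * dx          # anti-Hebbian conjunction of input and output changes
    dw -= dw.mean()              # weight conservation: the summed weight stays fixed
    dw += eps * bias_vec         # zero-mean bias choosing among minimal-variation solutions
    return w + dw

# usage: the bias itself sums to zero, so it respects the conservation condition
rng = np.random.default_rng(0)
n = 9
w = 0.1 * rng.standard_normal(n)
bias_vec = np.cos(np.linspace(0, 2 * np.pi, n, endpoint=False))   # arbitrary zero-mean template
x_prev, x = rng.standard_normal(n), rng.standard_normal(n)
w = differential_anti_hebb(w, x_prev, x, bias_vec)
```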


1992 ◽  
Vol 67 (1) ◽  
pp. 67-72 ◽  
Author(s):  
L. B. Emelyanov-Yaroslavsky ◽  
V. I. Potapov

2005 ◽  
Vol 94 (4) ◽  
pp. 2275-2283 ◽  
Author(s):  
Dean V. Buonomano

Neural dynamics within recurrent cortical networks are an important component of neural processing. However, the learning rules that allow networks composed of hundreds or thousands of recurrently connected neurons to develop stable dynamical states are poorly understood. Here I use a neural network model to examine the emergence of stable dynamical states within recurrent networks. I describe a learning rule that can both account for the development of stable dynamics and guide networks toward states that have been observed experimentally, specifically, states that instantiate a sparse code for time. Across trials, each neuron fires during a specific time window; by connecting the neurons to a hypothetical set of output units, it is possible to generate arbitrary spatiotemporal output patterns. Intertrial jitter of the spike time of a given neuron increases as a direct function of the delay at which it fires. These results establish a learning rule by which cortical networks can potentially process temporal information in a self-organizing manner, in the absence of specialized timing mechanisms.
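
The learning rule itself is not given in the abstract, but the sparse-code-for-time picture it describes can be illustrated with a toy simulation in which each neuron fires in its own time window, intertrial jitter scales with delay, and a hypothetical set of output weights reads out activity at a chosen time; all numbers below are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n_neurons, n_trials = 50, 200
preferred_t = np.linspace(10.0, 400.0, n_neurons)   # each neuron fires in its own time window (ms)
jitter_coef = 0.05                                   # intertrial jitter grows with the firing delay

# simulated spike times: the jitter standard deviation is proportional to the delay
spike_t = preferred_t + jitter_coef * preferred_t * rng.standard_normal((n_trials, n_neurons))

# a hypothetical linear readout: weighting neurons whose windows fall near 300 ms
# yields output activity localized around 300 ms
bins = np.arange(0.0, 460.0, 10.0)
w_out = np.exp(-0.5 * ((preferred_t - 300.0) / 25.0) ** 2)
raster = np.array([np.histogram(spike_t[:, i], bins=bins)[0] for i in range(n_neurons)])
output = w_out @ raster                              # output rate in each 10 ms bin
```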


2015 ◽  
Vol 62 ◽  
pp. 83-90 ◽
Author(s):  
Yoshiya Yamaguchi ◽  
Takeshi Aihara ◽  
Yutaka Sakai
