A synaptic learning rule for exploiting nonlinear dendritic computation

Neuron ◽  
2021 ◽  
Author(s):  
Brendan A. Bicknell ◽  
Michael Häusser


eLife ◽  
2017 ◽  
Vol 6 ◽  
Author(s):  
Zuzanna Brzosko ◽  
Sara Zannone ◽  
Wolfram Schultz ◽  
Claudia Clopath ◽  
Ole Paulsen

Spike timing-dependent plasticity (STDP) is under neuromodulatory control, which is correlated with distinct behavioral states. Previously, we reported that dopamine, a reward signal, broadens the time window for synaptic potentiation and modulates the outcome of hippocampal STDP even when applied after the plasticity induction protocol (Brzosko et al., 2015). Here, we demonstrate that sequential neuromodulation of STDP by acetylcholine and dopamine offers an efficacious model of reward-based navigation. Specifically, our experimental data in mouse hippocampal slices show that acetylcholine biases STDP toward synaptic depression, whilst subsequent application of dopamine converts this depression into potentiation. Incorporating this bidirectional neuromodulation-enabled correlational synaptic learning rule into a computational model yields effective navigation toward changing reward locations, as in natural foraging behavior. Thus, temporally sequenced neuromodulation of STDP enables associations to be made between actions and outcomes and also provides a possible mechanism for aligning the time scales of cellular and behavioral learning.
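A minimal sketch of the rule this abstract describes, with illustrative parameters rather than the fitted values from Brzosko et al.: a spike pair leaves an eligibility trace via an exponential STDP window, acetylcholine present during induction biases the stored change toward depression, and dopamine arriving afterward converts that depression into potentiation. The function names and constants are assumptions for illustration.

```python
import numpy as np

A_PLUS, A_MINUS = 0.01, 0.012   # illustrative potentiation / depression amplitudes
TAU = 20.0                       # illustrative STDP time constant (ms)

def stdp_kernel(dt):
    """Classical exponential STDP window; dt = t_post - t_pre (ms)."""
    if dt > 0:
        return A_PLUS * np.exp(-dt / TAU)    # pre -> post: potentiate
    return -A_MINUS * np.exp(dt / TAU)       # post -> pre: depress

def induced_change(dt, ach=False):
    """ACh present during induction biases the outcome toward depression."""
    dw = stdp_kernel(dt)
    return -abs(dw) if ach else dw

def apply_dopamine(trace):
    """DA applied after induction converts stored depression into potentiation."""
    return abs(trace)

trace = induced_change(dt=10.0, ach=True)   # ACh: depression despite pre -> post pairing
w_change = apply_dopamine(trace)            # later DA: converted to potentiation
print(trace, w_change)
```

The retroactive conversion is what lets a sparse, delayed reward signal reshape plasticity that was induced earlier, bridging cellular and behavioral time scales.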


1991 ◽  
Vol 3 (3) ◽  
pp. 312-320 ◽  
Author(s):  
Graeme Mitchison

I describe a local synaptic learning rule that can be used to remove the effects of certain types of systematic temporal variation in the inputs to a unit. According to this rule, changes in synaptic weight result from a conjunction of short-term temporal changes in the inputs and the output. Formally, Δwᵢ ∝ −(dxᵢ/dt)(dy/dt). This is like the differential rule proposed by Klopf (1986) and Kosko (1986), except for a change of sign, which gives it an anti-Hebbian character. By itself this rule is insufficient. A weight conservation condition is needed to prevent the weights from collapsing to zero, together with some further constraint (implemented here by a biasing term) to select particular sets of weights from the subspace of those which give minimal variation. As an example, I show that this rule will generate center-surround receptive fields that remove temporally varying linear gradients from the inputs.
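A minimal sketch of the anti-Hebbian differential rule under stated assumptions: weight changes pair temporal changes in the inputs with the change in the output, norm renormalization stands in for the weight conservation condition (one possible choice), and a fixed direction serves as an illustrative biasing term. All constants and names here are assumptions, not values from the paper.

```python
import numpy as np

ETA, BIAS = 0.01, 1e-3            # illustrative learning rate and bias strength
rng = np.random.default_rng(0)
w = rng.normal(size=5)
bias_dir = np.ones(5) / np.sqrt(5)  # illustrative biasing direction

def update(w, x_prev, x):
    y_prev, y = w @ x_prev, w @ x
    dx, dy = x - x_prev, y - y_prev
    w = w + ETA * (-dx * dy) + BIAS * bias_dir  # anti-Hebbian term plus bias
    return w / np.linalg.norm(w)                # conserve total weight (one choice)

x_prev = rng.normal(size=5)
for _ in range(1000):
    x = x_prev + 0.1 * rng.normal(size=5)       # systematically drifting input
    w = update(w, x_prev, x)
    x_prev = x
```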


2015 ◽  
Vol 62 ◽  
pp. 83-90
Author(s):  
Yoshiya Yamaguchi ◽  
Takeshi Aihara ◽  
Yutaka Sakai

1992 ◽  
Vol 4 (5) ◽  
pp. 691-702 ◽  
Author(s):  
Ralph Linsker

A network that develops to maximize the mutual information between its output and the signal portion of its input (which is admixed with noise) is useful for extracting salient input features, and may provide a model for aspects of biological neural network function. I describe a local synaptic learning rule that performs stochastic gradient ascent in this information-theoretic quantity, for the case in which the input-output mapping is linear and the input signal and noise are multivariate Gaussian. Feedforward connection strengths are modified by a Hebbian rule during a "learning" phase in which examples of input signal plus noise are presented to the network, and by an anti-Hebbian rule during an "unlearning" phase in which examples of noise alone are presented. Each recurrent lateral connection has two values of connection strength, one for each phase; these values are updated by an anti-Hebbian rule.
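A minimal sketch of the two-phase feedforward scheme, with illustrative dimensions and rates; it omits the recurrent lateral connections with phase-specific strengths that the full scheme requires. Weights are updated Hebbianly on signal-plus-noise examples (learning) and anti-Hebbianly on noise-alone examples (unlearning), approximating gradient ascent on the signal/noise mutual information in the linear-Gaussian case.

```python
import numpy as np

rng = np.random.default_rng(1)
n_in, n_out = 8, 3                         # illustrative network sizes
W = 0.1 * rng.normal(size=(n_out, n_in))
ETA = 1e-3                                 # illustrative learning rate

def hebbian_step(W, x, sign):
    y = W @ x                              # linear input-output mapping
    return W + sign * ETA * np.outer(y, x)

for _ in range(5000):
    s = rng.normal(size=n_in)              # signal example
    n = 0.3 * rng.normal(size=n_in)        # noise example
    W = hebbian_step(W, s + n, +1.0)       # learning phase: Hebbian on signal+noise
    W = hebbian_step(W, 0.3 * rng.normal(size=n_in), -1.0)  # unlearning: anti-Hebbian on noise
    W /= max(1.0, np.linalg.norm(W))       # keep weights bounded (illustrative)
```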


2011 ◽  
Vol 21 (05) ◽  
pp. 415-425 ◽  
Author(s):  
Fang Han ◽  
Marian Wiercigroch ◽  
Jian-An Fang ◽  
Zhijie Wang

Excitement and synchronization of electrically and chemically coupled Newman-Watts (NW) small-world neuronal networks with short-term synaptic plasticity described by a modified Oja learning rule are investigated. For each type of neuronal network, the variation properties of the synaptic weights are examined first. Then the effects of the learning rate, the coupling strength and the shortcut-adding probability on excitement and synchronization of the neuronal network are studied. It is shown that the synaptic learning suppresses over-excitement, and that it helps synchronization in the electrically coupled network but impairs it in the chemically coupled one. Both the introduction of shortcuts and an increase in the coupling strength improve synchronization; they also help to increase the excitement of the chemically coupled network, but have little effect on the excitement of the electrically coupled one.
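For reference, a minimal sketch of the classical Oja update underlying the rule named above (the paper's modified short-term version adds dynamics not reproduced here; the learning rate is an illustrative assumption). The decay term −y²w keeps the weight vector normalized, which is the mechanism that curbs runaway excitation.

```python
import numpy as np

ETA = 0.005                        # illustrative learning rate
rng = np.random.default_rng(2)
w = rng.normal(size=4)

def oja_step(w, x):
    y = w @ x                      # postsynaptic activity
    return w + ETA * y * (x - y * w)   # Hebbian growth + normalizing decay

for _ in range(2000):
    w = oja_step(w, rng.normal(size=4))
print(np.linalg.norm(w))           # weight norm converges toward 1
```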


2017 ◽  
Vol 29 (9) ◽  
pp. 2528-2552 ◽  
Author(s):  
Sensen Liu ◽  
ShiNung Ching

We consider the problem of optimizing information-theoretic quantities in recurrent networks via synaptic learning. In contrast to feedforward networks, the recurrence presents a key challenge insofar as an optimal learning rule must aggregate the joint distribution of the whole network. This challenge, in particular, makes a local policy (i.e., one that depends only on pairwise interactions) difficult. Here, we report a local metaplastic learning rule that performs approximate optimization by estimating whole-network statistics through the use of several slow, nested dynamical variables. These dynamics provide the rule with both anti-Hebbian and Hebbian components, thus allowing for decorrelating and correlating learning regimes that can occur when either is favorable for optimality. We demonstrate the performance of the synthesized rule in comparison to classical BCM dynamics and use the networks to perform history-dependent tasks that highlight the advantages of recurrence. Finally, we show the consistency of the resultant learned networks with notions of criticality, including balanced ratios of excitation and inhibition.
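A minimal sketch of the classical BCM baseline the paper compares against, with illustrative constants. The sliding threshold θ, a slow average of the squared postsynaptic activity, switches the rule between depressing (y < θ) and potentiating (y > θ) regimes; the paper's metaplastic rule nests several such slow variables to estimate whole-network statistics.

```python
import numpy as np

ETA, TAU_THETA = 1e-3, 100.0       # illustrative rate and threshold time constant
rng = np.random.default_rng(3)
w = rng.uniform(0.0, 1.0, size=6)
theta = 1.0

for _ in range(5000):
    x = rng.uniform(0.0, 1.0, size=6)
    y = w @ x
    w += ETA * x * y * (y - theta)         # BCM: sign of change set by threshold
    theta += (y ** 2 - theta) / TAU_THETA  # slow sliding threshold
    w = np.clip(w, 0.0, 5.0)               # keep weights bounded (illustrative)
```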

