Optimality Model of Unsupervised Spike-Timing-Dependent Plasticity: Synaptic Memory and Weight Distribution

2007 ◽  
Vol 19 (3) ◽  
pp. 639-671 ◽  
Author(s):  
Taro Toyoizumi ◽  
Jean-Pascal Pfister ◽  
Kazuyuki Aihara ◽  
Wulfram Gerstner

We studied the hypothesis that synaptic dynamics is controlled by three basic principles: (1) synapses adapt their weights so that neurons can effectively transmit information, (2) homeostatic processes stabilize the mean firing rate of the postsynaptic neuron, and (3) weak synapses adapt more slowly than strong ones, while maintenance of strong synapses is costly. Our results show that a synaptic update rule derived from these principles shares features with spike-timing-dependent plasticity, is sensitive to correlations in the input, and is useful for synaptic memory. Moreover, input selectivity (sharply tuned receptive fields) of postsynaptic neurons develops only if stimuli with strong features are presented. Sharply tuned neurons can coexist with unselective ones, and the distribution of synaptic weights can be unimodal or bimodal. The formulation of synaptic dynamics through an optimality criterion provides a simple graphical argument for the stability of synapses, necessary for synaptic memory.
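As a concrete point of comparison, the canonical pair-based STDP window that such derived rules are said to resemble can be sketched as follows. This is a minimal illustrative sketch of the textbook exponential window, not the paper's derived rule; the amplitudes (a_plus, a_minus) and time constants (tau_plus, tau_minus) are assumed example values.

```python
import math

def stdp_window(dt_ms, a_plus=0.01, a_minus=0.012,
                tau_plus=20.0, tau_minus=20.0):
    """Weight change for a single pre/post spike pair.

    dt_ms is the postsynaptic minus presynaptic spike time in
    milliseconds: positive dt (pre before post) potentiates,
    negative dt (post before pre) depresses, with exponentially
    decaying magnitude as the spikes move apart in time.
    """
    if dt_ms > 0:
        return a_plus * math.exp(-dt_ms / tau_plus)   # potentiation branch
    elif dt_ms < 0:
        return -a_minus * math.exp(dt_ms / tau_minus)  # depression branch
    return 0.0
```

For example, a pre-before-post pairing at +10 ms yields a small positive weight change, while the reversed ordering at −10 ms yields a slightly larger negative one, reproducing the characteristic asymmetric STDP curve.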

2006 ◽  
Vol 18 (10) ◽  
pp. 2414-2464 ◽  
Author(s):  
Peter A. Appleby ◽  
Terry Elliott

In earlier work we presented a stochastic model of spike-timing-dependent plasticity (STDP) in which STDP emerges only at the level of temporal or spatial synaptic ensembles. We derived the two-spike interaction function from this model and showed that it exhibits an STDP-like form. Here, we extend this work by examining the general n-spike interaction functions that may be derived from the model. A comparison between the two-spike interaction function and the higher-order interaction functions reveals profound differences. In particular, we show that the two-spike interaction function cannot support stable, competitive synaptic plasticity, such as that seen during neuronal development, without including modifications designed specifically to stabilize its behavior. In contrast, we show that all the higher-order interaction functions exhibit a fixed-point structure consistent with the presence of competitive synaptic dynamics. This difference originates in our proposed “switch” mechanism for synaptic plasticity, which couples synaptic depression and synaptic potentiation processes together. While three or more spikes are required to probe this coupling, two spikes can never do so. We conclude that this coupling is critical to the presence of competitive dynamics and that multispike interactions are therefore vital to understanding synaptic competition.


2009 ◽  
Vol 21 (12) ◽  
pp. 3363-3407 ◽  
Author(s):  
Terry Elliott ◽  
Konstantinos Lagogiannis

A stochastic model of spike-timing-dependent plasticity proposes that single synapses express fixed-amplitude jumps in strength, the amplitudes being independent of the spike time difference. However, the probability that a jump in strength occurs does depend on spike timing. Although the model has a number of desirable features, the stochasticity of response of a synapse introduces potentially large fluctuations into changes in synaptic strength. These can destabilize the segregated patterns of afferent connectivity characteristic of neuronal development. Previously we have taken these jumps to be small relative to overall synaptic strengths to control fluctuations, but doing so increases developmental timescales unacceptably. Here, we explore three alternative ways of taming fluctuations. First, a calculation of the variance for the change in synaptic strength shows that the mean change eventually dominates fluctuations, but on timescales that are too long. Second, it is possible that fluctuations in strength may cancel between synapses, but we show that correlations between synapses emasculate the law of large numbers. Finally, by separating plasticity induction and expression, we introduce a temporal window during which induction signals are low-pass-filtered before expression. In this way, fluctuations in strength are tamed, stabilizing segregated states of afferent connectivity.
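The third mechanism described above, low-pass filtering induction signals before expression, can be illustrated with a minimal sketch. A first-order exponential filter stands in for the temporal window; the time constant tau and step size dt are assumed example values, not parameters from the paper.

```python
def low_pass_express(induction_events, tau=100.0, dt=1.0):
    """Filter discrete plasticity induction signals before expression.

    induction_events: sequence of per-step induction signals
    (+1 potentiation event, -1 depression event, 0 no event).
    Returns the filtered trace that would drive expressed weight
    changes: isolated noisy events are strongly attenuated, while
    sustained same-sign induction accumulates.
    """
    alpha = dt / tau          # filter gain per time step
    trace = 0.0
    filtered = []
    for event in induction_events:
        trace += alpha * (event - trace)  # exponential smoothing step
        filtered.append(trace)
    return filtered
```

Under this sketch, a single stochastic potentiation event moves the expressed signal by only alpha (here 0.01), whereas a sustained run of potentiation events drives it steadily upward, which is the fluctuation-taming effect the abstract describes.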


2008 ◽  
Vol 20 (9) ◽  
pp. 2253-2307 ◽  
Author(s):  
Terry Elliott

In a recently proposed, stochastic model of spike-timing-dependent plasticity, we derived general expressions for the expected change in synaptic strength, ΔSn, induced by a typical sequence of precisely n spikes. We found that the rules ΔSn, n ≥ 3, exhibit regions of parameter space in which stable, competitive interactions between afferents are present, leading to the activity-dependent segregation of afferents on their targets. The rules ΔSn, however, allow an indefinite period of time to elapse for the occurrence of precisely n spikes, while most measurements of changes in synaptic strength are conducted over definite periods of time during which a potentially unknown number of spikes may occur. Here, therefore, we derive an expression, ΔS(t), for the expected change in synaptic strength of a synapse experiencing an average sequence of spikes of typical length occurring during a fixed period of time, t. We find that the resulting synaptic plasticity rule ΔS(t) exhibits a number of remarkable properties. It is an entirely self-stabilizing learning rule in all regions of parameter space. Further, its parameter space is carved up into three distinct, contiguous regions in which the exhibited synaptic interactions undergo different transitions as the time t is increased. In one region, the synaptic dynamics change from noncompetitive to competitive to entirely depressing. In a second region, the dynamics change from noncompetitive to competitive without the second transition to entirely depressing dynamics. In a third region, the dynamics are always noncompetitive. The locations of these regions are not fixed in parameter space but may be modified by changing the mean presynaptic firing rates. Thus, neurons may be moved among these three different regions and so exhibit different sets of synaptic dynamics depending on their mean firing rates.


Laser Physics ◽  
2021 ◽  
Vol 32 (1) ◽  
pp. 016201 ◽  
Author(s):  
Tao Tian ◽  
Zhengmao Wu ◽  
Xiaodong Lin ◽  
Xi Tang ◽  
Ziye Gao ◽  
...  

Abstract Based on the well-known Fabry–Pérot approach, after taking into account the variation of the bias current of the vertical-cavity semiconductor optical amplifier (VCSOA) according to the present synapse weight, we implement optical spike-timing-dependent plasticity (STDP) with a weight-dependent learning window in a VCSOA with double optical spike injections, and numerically investigate the corresponding weight-dependent STDP characteristics. The simulation results show that the bias current of the VCSOA has a significant effect on the optical STDP curve. After introducing an adaptive variation of the bias current according to the present synapse weight, optical weight-dependent STDP based on a VCSOA can be realized. Moreover, weight training based on the optical weight-dependent STDP can be effectively controlled by adjusting typical external or intrinsic parameters, and excessive adjustment of synaptic weights is avoided. This balances stability and competition among synapses and paves the way for future large-scale, energy-efficient optical spiking neural networks based on the weight-dependent STDP learning mechanism.


2009 ◽  
Vol 101 (6) ◽  
pp. 2775-2788 ◽  
Author(s):  
Guy Billings ◽  
Mark C. W. van Rossum

Memory systems should be plastic to allow for learning; however, they should also retain earlier memories. Here we explore how synaptic weights and memories are retained in models of single neurons and networks equipped with spike-timing-dependent plasticity. We show that for single neuron models, the precise learning rule has a strong effect on the memory retention time. In particular, a soft-bound, weight-dependent learning rule has a very short retention time as compared with a learning rule that is independent of the synaptic weights. Next, we explore how the retention time is reflected in receptive field stability in networks. As in the single neuron case, the weight-dependent learning rule yields less stable receptive fields than a weight-independent rule. However, receptive fields stabilize in the presence of sufficient lateral inhibition, demonstrating that plasticity in networks can be regulated by inhibition and suggesting a novel role for inhibition in neural circuits.
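The contrast between the two rule classes compared above can be made concrete with a minimal sketch: a weight-independent ("additive") update against a soft-bound, weight-dependent update in which potentiation scales with the remaining headroom to the upper bound and depression scales with the current weight. The bound w_max and step sizes are assumed example values, not parameters from the paper.

```python
def additive_update(w, dw, w_max=1.0):
    """Weight-independent step, hard-clipped to [0, w_max]."""
    return min(max(w + dw, 0.0), w_max)

def soft_bound_update(w, dw, w_max=1.0):
    """Soft-bound, weight-dependent step.

    Potentiation (dw > 0) scales with (w_max - w) and depression
    (dw < 0) scales with w, so updates shrink near the bounds and
    weights drift toward the interior of the allowed range.
    """
    if dw > 0:
        return w + dw * (w_max - w)
    return w + dw * w
```

Under the soft-bound rule, a strong synapse near w_max is barely potentiated further but is strongly depressed, so weights relax toward an interior equilibrium and stored patterns decay faster, consistent with the shorter retention time reported for the weight-dependent rule.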

