Hebbian Imprinting and Retrieval in Oscillatory Neural Networks

2002 ◽  
Vol 14 (10) ◽  
pp. 2371-2396 ◽  
Author(s):  
Silvia Scarpetta ◽  
L. Zhaoping ◽  
John Hertz

We introduce a model of generalized Hebbian learning and retrieval in oscillatory neural networks modeling cortical areas such as hippocampus and olfactory cortex. Recent experiments have shown that synaptic plasticity depends on spike timing, especially at synapses from excitatory pyramidal cells in the hippocampus and in sensory and cerebellar cortex. Here we study how such plasticity can be used to form memories and input representations when the neural dynamics are oscillatory, as is common in the brain (particularly in the hippocampus and olfactory cortex). Learning is assumed to occur in a phase of neural plasticity, in which the network is clamped to external teaching signals. By suitable manipulation of the nonlinearity of the neurons or the oscillation frequencies during learning, the model can be made, in a retrieval phase, either to categorize new inputs or to map them, in a continuous fashion, onto the space spanned by the imprinted patterns. We identify the first of these possibilities with the function of olfactory cortex and the second with the observed response characteristics of place cells in hippocampus. We investigate both kinds of networks analytically and by computer simulations, and we link the models with experimental findings, exploring, in particular, how the spike timing dependence of the synaptic plasticity constrains the computational function of the network and vice versa.
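The imprinting idea described above can be illustrated with a minimal sketch (not the authors' model): each stored pattern assigns every neuron an oscillation phase, encoded as a complex number, and a Hopfield-style outer-product rule imprints the patterns into the weights. All sizes and the phase-jitter level are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
N, P = 100, 3  # neurons, stored patterns (illustrative sizes)

# Each pattern assigns every neuron a phase within the oscillation cycle,
# encoded as a unit-amplitude complex number xi = exp(i * phase).
patterns = np.exp(1j * rng.uniform(0, 2 * np.pi, size=(P, N)))

# Hebbian imprinting: complex outer-product rule, a Hopfield-style
# prescription generalized to oscillatory (phase-coded) activity.
J = sum(np.outer(p, p.conj()) for p in patterns) / N
np.fill_diagonal(J, 0)

def overlap(state, pattern):
    """Normalized overlap between a network state and a stored pattern."""
    return abs(np.vdot(pattern, state)) / len(pattern)

# Probe with a phase-jittered copy of pattern 0; one linearized update
# step pulls the state toward the imprinted pattern.
probe = patterns[0] * np.exp(1j * rng.normal(0.0, 0.5, N))
retrieved = J @ probe
retrieved /= np.abs(retrieved)  # keep unit amplitudes, retain phases

print(overlap(probe, patterns[0]), overlap(retrieved, patterns[0]))
```

The retrieved state's overlap with the imprinted pattern exceeds the probe's, which is the sense in which the outer-product rule stores phase patterns as attractors of the oscillatory dynamics.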

2015 ◽  
Vol 9 ◽  
Author(s):  
Runchun M. Wang ◽  
Tara J. Hamilton ◽  
Jonathan C. Tapson ◽  
André van Schaik

2010 ◽  
Vol 22 (8) ◽  
pp. 2059-2085 ◽  
Author(s):  
Daniel Bush ◽  
Andrew Philippides ◽  
Phil Husbands ◽  
Michael O'Shea

Rate-coded Hebbian learning, as characterized by the BCM formulation, is an established computational model of synaptic plasticity. Recently it has been demonstrated that changes in the strength of synapses in vivo can also depend explicitly on the relative timing of pre- and postsynaptic firing. Computational modeling of this spike-timing-dependent plasticity (STDP) has demonstrated that it can provide inherent stability or competition based on local synaptic variables. However, it has also been demonstrated that these properties rely on synaptic weights being either depressed or unchanged by an increase in mean stochastic firing rates, which directly contradicts empirical data. Several analytical studies have addressed this apparent dichotomy and identified conditions under which distinct and disparate STDP rules can be reconciled with rate-coded Hebbian learning. The aim of this research is to verify, unify, and expand on these previous findings by manipulating each element of a standard computational STDP model in turn. This allows us to identify the conditions under which this plasticity rule can replicate experimental data obtained using both rate and temporal stimulation protocols in a spiking recurrent neural network. Our results describe how the relative scale of mean synaptic weights and their dependence on stochastic pre- or postsynaptic firing rates can be manipulated by adjusting the exact profile of the asymmetric learning window and temporal restrictions on spike pair interactions respectively. These findings imply that previously disparate models of rate-coded autoassociative learning and temporally coded heteroassociative learning, mediated by symmetric and asymmetric connections respectively, can be implemented in a single network using a single plasticity rule. However, we also demonstrate that forms of STDP that can be reconciled with rate-coded Hebbian learning do not generate inherent synaptic competition, and thus some additional mechanism is required to guarantee long-term input-output selectivity.
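The rate-dependence property at issue can be seen in a minimal sketch of a standard pair-based STDP window (parameters illustrative, not from the paper): when the depression branch integrates to more area than the potentiation branch, the mean weight drift under uncorrelated Poisson firing is depressive at every rate, which is the behavior the abstract notes conflicts with rate-coded (BCM-like) potentiation at high rates.

```python
import numpy as np

rng = np.random.default_rng(1)

# Standard asymmetric exponential STDP window (illustrative parameters,
# chosen depression-dominated: A_minus * tau_minus > A_plus * tau_plus).
A_plus, tau_plus = 0.005, 20e-3    # potentiation branch (pre before post)
A_minus, tau_minus = 0.006, 20e-3  # depression branch (post before pre)

def stdp_window(dt):
    """Weight change for a pre/post spike pair, dt = t_post - t_pre."""
    return np.where(dt >= 0,
                    A_plus * np.exp(-dt / tau_plus),
                    -A_minus * np.exp(dt / tau_minus))

def mean_drift(rate, T=50.0):
    """Empirical mean weight change per second for uncorrelated Poisson
    pre- and postsynaptic trains at the same rate, all-to-all pairing."""
    pre = np.cumsum(rng.exponential(1 / rate, int(rate * T * 1.5)))
    post = np.cumsum(rng.exponential(1 / rate, int(rate * T * 1.5)))
    pre, post = pre[pre < T], post[post < T]
    dt = post[None, :] - pre[:, None]  # all pre/post pair intervals
    return stdp_window(dt).sum() / T

# The window's integral is negative, so raising the rates makes the mean
# drift more depressive, never potentiating.
for r in (5.0, 20.0, 50.0):
    print(r, mean_drift(r))
```

Reconciling this rule with rate-coded learning then amounts to reshaping the window profile or restricting which spike pairs interact, as the study above does systematically.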


10.1038/78829 ◽  
2000 ◽  
Vol 3 (9) ◽  
pp. 919-926 ◽  
Author(s):  
Sen Song ◽  
Kenneth D. Miller ◽  
L. F. Abbott

2013 ◽  
Author(s):  
B. E. P. Mizusaki ◽  
E. J. Agnes ◽  
L. G. Brunnet ◽  
R. Erichsen Jr.

2002 ◽  
Vol 10 (3-4) ◽  
pp. 243-263 ◽  
Author(s):  
Ezequiel Di Paolo

Plastic spiking neural networks are synthesized for phototactic robots using evolutionary techniques. Synaptic plasticity asymmetrically depends on the precise relative timing between presynaptic and postsynaptic spikes at the millisecond range and on longer-term activity-dependent regulatory scaling. Comparative studies have been carried out for different kinds of plastic neural networks with low and high levels of neural noise. In all cases, the evolved controllers are highly robust against internal synaptic decay and other perturbations. The importance of the precise timing of spikes is demonstrated by randomizing the spike trains. In the low neural noise scenario, weight values undergo rhythmic changes at the mesoscale due to bursting, but during periods of high activity they are finely regulated at the microscale by synchronous or entrained firing. Spike train randomization results in loss of performance in this case. In contrast, in the high neural noise scenario, robots are robust to loss of information in the timing of the spike trains, demonstrating the counterintuitive result that plasticity that depends on precise spike timing can function even in its absence, provided the behavioral strategies make use of robust longer-term invariants of sensorimotor interaction. A comparison with a rate-based model of synaptic plasticity shows that under similarly noisy conditions, asymmetric spike-timing-dependent plasticity achieves better performance by means of efficient reduction in weight variance over time. Performance also degrades when noise levels are reduced, showing that random firing has a functional value.


2020 ◽  
Author(s):  
Katharina Anna Wilmes ◽  
Claudia Clopath

With Hebbian learning ('what fires together wires together'), well-known problems arise. On the one hand, plasticity can lead to unstable network dynamics, manifesting as runaway activity or silence. On the other hand, plasticity can erase or overwrite stored memories. Unstable dynamics can partly be addressed with homeostatic plasticity mechanisms. Unfortunately, the time constants of homeostatic mechanisms required in network models are much shorter than what has been measured experimentally. Here, we propose that homeostatic time constants can be slow if plasticity is gated. We investigate how the gating of plasticity influences the stability of network activity and stored memories. We use plastic balanced spiking neural networks consisting of excitatory neurons with a somatic and a dendritic compartment (which resemble cortical pyramidal cells in their firing properties), and inhibitory neurons targeting those compartments. We compare how different factors such as excitability, learning rate, and inhibition can lift the requirements for the critical time constant of homeostatic plasticity. We specifically investigate how gating of dendritic versus somatic plasticity allows for different amounts of weight changes in networks with the same critical homeostatic time constant. We suggest that the striking compartmentalisation of pyramidal cells and their inhibitory inputs enable large synaptic changes at the dendrite while maintaining network stability. We additionally show that spatially restricted plasticity in a subpopulation of the network improves stability. Finally, we compare how the different gates affect the stability of memories in the network.
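The core stability argument can be sketched in a minimal rate-based toy model (an assumption for illustration, not the authors' spiking network): a single plastic weight grows Hebbianly while a slow homeostatic term pulls the firing rate back toward a target. With the plasticity gate always open, Hebbian growth outruns the slow homeostat and activity runs away; gating plasticity most of the time keeps activity bounded with the same slow homeostatic time constant. All parameters are illustrative.

```python
import numpy as np

def simulate(gate_fraction, tau_h=100.0, eta=0.05, steps=5000, dt=0.01):
    """One plastic weight w driving a linear rate r = w * x, with Hebbian
    growth (when gated) and slow homeostatic scaling toward r_target."""
    rng = np.random.default_rng(2)
    w, r_target = 1.0, 1.0
    trace = []
    for _ in range(steps):
        x = rng.uniform(0.5, 1.5)            # presynaptic drive
        r = w * x                            # postsynaptic rate
        if rng.random() < gate_fraction:     # plasticity gate open?
            w += dt * eta * r * x            # Hebbian: grows with correlation
        w += dt * (r_target - r) * x / tau_h # slow homeostatic scaling
        w = max(w, 0.0)
        trace.append(r)
    return np.array(trace)

# Same slow homeostatic time constant in both runs; only the gate differs.
ungated = simulate(gate_fraction=1.0)   # gate always open: runaway growth
gated = simulate(gate_fraction=0.05)    # gate rarely open: stable rate
print(ungated[-100:].mean(), gated[-100:].mean())
```

The gate reduces the effective Hebbian rate below the homeostatic rate without requiring an implausibly fast homeostatic time constant, which is the regime the study analyzes in full spiking networks with compartment-specific gating.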


2012 ◽  
Vol 2012 ◽  
pp. 1-16 ◽  
Author(s):  
X. Zhang ◽  
G. Foderaro ◽  
C. Henriquez ◽  
A. M. J. VanDongen ◽  
S. Ferrari

This paper presents a deterministic and adaptive spike model derived from radial basis functions and a leaky integrate-and-fire sampler developed for training spiking neural networks without direct weight manipulation. Several algorithms have been proposed for training spiking neural networks through biologically plausible learning mechanisms, such as spike-timing-dependent synaptic plasticity and Hebbian plasticity. These algorithms typically rely on the ability to update the synaptic strengths, or weights, directly, through a weight update rule in which the weight increment can be decided and implemented based on the training equations. However, in several potential applications of adaptive spiking neural networks, including neuroprosthetic devices and CMOS/memristor nanoscale neuromorphic chips, the weights cannot be manipulated directly and, instead, tend to change over time by virtue of the pre- and postsynaptic neural activity. This paper presents an indirect learning method that induces changes in the synaptic weights by modulating spike-timing-dependent plasticity by means of controlled input spike trains. In place of the weights, the algorithm manipulates the input spike trains used to stimulate the input neurons by determining a sequence of spike timings that minimize a desired objective function and, indirectly, induce the desired synaptic plasticity in the network.
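The indirect principle can be illustrated with a toy sketch (an assumption for illustration, not the paper's algorithm): the weight cannot be written directly, but scheduling a controlled input spike just before an observed postsynaptic spike potentiates the synapse via STDP, and just after it depresses the synapse, so the weight can be steered toward a target. Parameters and the fixed postsynaptic spike time are illustrative.

```python
import numpy as np

# Pair-based STDP parameters (illustrative).
A_plus, A_minus, tau = 0.01, 0.012, 20e-3

def stdp(dt):
    """Weight change induced by one pre/post pair, dt = t_post - t_pre."""
    return A_plus * np.exp(-dt / tau) if dt >= 0 else -A_minus * np.exp(dt / tau)

def indirect_step(w, w_target, t_post, lead=5e-3):
    """Schedule the next controlled input spike before t_post to potentiate,
    or after it to depress, depending on the sign of the weight error."""
    t_pre = t_post - lead if w_target > w else t_post + lead
    return w + stdp(t_post - t_pre), t_pre

# Steer an unreachable weight toward a target purely through spike timing.
w, w_target = 0.2, 0.5
for _ in range(100):
    t_post = 0.1  # observed postsynaptic spike time (fixed here for clarity)
    w, _ = indirect_step(w, w_target, t_post)
print(w)
```

The actual method optimizes whole input spike trains against an objective function rather than this one-pair greedy rule, but the mechanism exploited is the same: controlled input timing induces the desired plasticity.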

