What Can a Neuron Learn with Spike-Timing-Dependent Plasticity?

2005 ◽  
Vol 17 (11) ◽  
pp. 2337-2382 ◽  
Author(s):  
Robert Legenstein ◽  
Christian Naeger ◽  
Wolfgang Maass

Spiking neurons are very flexible computational modules: with different values of their adjustable synaptic parameters, they can implement an enormous variety of transformations F from input spike trains to output spike trains. In this letter, we examine to what extent a spiking neuron with biologically realistic models for dynamic synapses can be taught via spike-timing-dependent plasticity (STDP) to implement a given transformation F. We consider a supervised learning paradigm in which, during training, the output of the neuron is clamped to the target signal (teacher forcing). The well-known perceptron convergence theorem asserts the convergence of a simple supervised learning algorithm for drastically simplified neuron models (McCulloch-Pitts neurons). We show that, in contrast to the perceptron convergence theorem, no theoretical guarantee can be given for the convergence of STDP with teacher forcing that holds for arbitrary input spike patterns. On the other hand, we prove that average-case versions of the perceptron convergence theorem hold for STDP in the case of uncorrelated and correlated Poisson input spike trains and simple models for spiking neurons. For a wide class of cross-correlation functions of the input spike trains, the resulting necessary and sufficient condition can be formulated in terms of linear separability, analogous to the well-known condition of learnability by perceptrons; here, however, the linear separability criterion has to be applied to the columns of the correlation matrix of the Poisson input. We demonstrate through extensive computer simulations that the theoretically predicted convergence of STDP with teacher forcing also holds for more realistic models for neurons and dynamic synapses, and for more general input distributions.
In addition, we show through computer simulations that these positive learning results hold not only for the common interpretation of STDP, where STDP changes the weights of synapses, but also for a more realistic interpretation suggested by experimental data where STDP modulates the initial release probability of dynamic synapses.
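The clamped-output training scheme described above can be sketched with a standard pair-based exponential STDP window. The window parameters (`a_plus`, `a_minus`, `tau`) and the hard weight bounds are illustrative assumptions, not values from the letter:

```python
import math

def stdp_dw(dt, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Pair-based STDP window. dt = t_post - t_pre in ms:
    pre-before-post (dt > 0) potentiates, post-before-pre depresses."""
    if dt > 0:
        return a_plus * math.exp(-dt / tau)
    if dt < 0:
        return -a_minus * math.exp(dt / tau)
    return 0.0

def train_clamped(pre_trains, target_train, w):
    """Teacher forcing: the postsynaptic spike train is clamped to the
    target signal, and each synapse accumulates the STDP window over
    all pre/post spike pairs, with hard bounds on the weights."""
    for i, pre in enumerate(pre_trains):
        for t_pre in pre:
            for t_post in target_train:
                w[i] += stdp_dw(t_post - t_pre)
        w[i] = min(max(w[i], 0.0), 1.0)
    return w
```

A synapse whose input spikes tend to precede the clamped target spikes is strengthened, and one whose spikes follow them is weakened; this asymmetry is the mechanism the average-case convergence results build on.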

2009 ◽  
Vol 21 (2) ◽  
pp. 340-352 ◽  
Author(s):  
Robert Urbanczik ◽  
Walter Senn

We introduce a new supervised learning rule for the tempotron task: the binary classification of input spike trains by an integrate-and-fire neuron that encodes its decision by firing or not firing. The rule is based on the gradient of a cost function, achieves enhanced performance, and does not rely on a specific reset mechanism in the integrate-and-fire neuron.
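For context, here is a minimal tempotron-style classifier: the original error-driven heuristic for the task, not the gradient rule this paper introduces. The kernel time constants, threshold, and learning rate are illustrative assumptions:

```python
import math

def psp(t, tau_m=15.0, tau_s=3.75):
    # Double-exponential postsynaptic potential kernel (t in ms).
    return (math.exp(-t / tau_m) - math.exp(-t / tau_s)) if t >= 0 else 0.0

def voltage(w, spikes, t):
    # Membrane potential: weighted sum of PSPs over all input spikes.
    return sum(w[i] * psp(t - s) for i, train in enumerate(spikes) for s in train)

def classify(w, spikes, times, theta=1.0):
    # The neuron reports class "1" iff its voltage ever crosses threshold.
    return max(voltage(w, spikes, t) for t in times) >= theta

def tempotron_step(w, spikes, label, times, theta=1.0, lr=0.1):
    """Error-driven update at the time of maximal voltage; the gradient
    rule of the paper replaces this heuristic with the gradient of a
    smooth cost function."""
    t_max = max(times, key=lambda t: voltage(w, spikes, t))
    fired = voltage(w, spikes, t_max) >= theta
    if fired != label:
        sign = 1.0 if label else -1.0
        for i, train in enumerate(spikes):
            w[i] += sign * lr * sum(psp(t_max - s) for s in train)
    return w
```

Repeated application of `tempotron_step` on a pattern labeled "fire" raises the weights until the peak voltage crosses threshold, after which updates stop.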


2012 ◽  
Vol 2012 ◽  
pp. 1-16 ◽  
Author(s):  
X. Zhang ◽  
G. Foderaro ◽  
C. Henriquez ◽  
A. M. J. VanDongen ◽  
S. Ferrari

This paper presents a deterministic and adaptive spike model derived from radial basis functions and a leaky integrate-and-fire sampler developed for training spiking neural networks without direct weight manipulation. Several algorithms have been proposed for training spiking neural networks through biologically plausible learning mechanisms, such as spike-timing-dependent synaptic plasticity and Hebbian plasticity. These algorithms typically rely on the ability to update the synaptic strengths, or weights, directly, through an update rule whose weight increments are computed from the training equations. However, in several potential applications of adaptive spiking neural networks, including neuroprosthetic devices and CMOS/memristor nanoscale neuromorphic chips, the weights cannot be manipulated directly and, instead, tend to change over time by virtue of the pre- and postsynaptic neural activity. This paper presents an indirect learning method that induces changes in the synaptic weights by modulating spike-timing-dependent plasticity by means of controlled input spike trains. In place of the weights, the algorithm manipulates the input spike trains used to stimulate the input neurons by determining a sequence of spike timings that minimizes a desired objective function and, indirectly, induces the desired synaptic plasticity in the network.
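A toy version of the indirect scheme, assuming an exponential pair-based STDP window with illustrative constants: the controller never writes the weight, it only chooses the timing of a controlled input spike relative to an induced postsynaptic spike.

```python
import math

def stdp_dw(dt, a_plus=0.01, a_minus=0.012, tau=20.0):
    # dt = t_post - t_pre (ms): causal pairs potentiate, acausal depress.
    if dt > 0:
        return a_plus * math.exp(-dt / tau)
    if dt < 0:
        return -a_minus * math.exp(dt / tau)
    return 0.0

def induce_weight(w, target_w, trials=200, post_time=50.0, dt_mag=5.0):
    """On each trial the controller places the input spike dt_mag ms
    before the induced postsynaptic spike (to potentiate) or after it
    (to depress); STDP then moves w toward target_w."""
    for _ in range(trials):
        if abs(target_w - w) < 1e-3:
            break
        t_pre = post_time - dt_mag if target_w > w else post_time + dt_mag
        w = min(max(w + stdp_dw(post_time - t_pre), 0.0), 1.0)
    return w
```

The weight converges to a small band around the target whose width is set by the STDP step size, mirroring how the paper's spike-timing optimization steers plasticity without touching the weights.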


2011 ◽  
Vol 2011 ◽  
pp. 1-12 ◽  
Author(s):  
Karim El-Laithy ◽  
Martin Bogdan

An integration of the Hebbian-based and reinforcement learning (RL) rules is presented for dynamic synapses. The proposed framework permits the Hebbian rule to update the hidden synaptic model parameters regulating the synaptic response, rather than the synaptic weights. This is performed using both the value and the sign of the temporal difference in the reward signal after each trial. Applying this framework, a spiking network with spike-timing-dependent synapses is tested on learning the exclusive-OR computation on a temporally coded basis. Reward values are calculated from the distance between the output spike train of the network and a reference target train. Results show that the network is able to capture the required dynamics and that the proposed framework indeed yields an integrated version of Hebbian and RL learning. The proposed framework is tractable and computationally inexpensive. It is applicable to a wide class of synaptic models and is not restricted to the neural representation used here. This generality, along with the reported results, supports adopting the introduced approach to benefit from biologically plausible synaptic models in a wide range of signal-processing applications.
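The reward-gated update of hidden synaptic parameters can be caricatured as follows. The spike-train distance, learning rate, and parameter bounds are assumptions for illustration; the framework itself is model-agnostic:

```python
def spike_distance(out, target, miss_penalty=10.0):
    """Crude spike-train distance: nearest-spike gaps plus a penalty
    per missing or extra spike. The trial reward is its negative."""
    d = abs(len(out) - len(target)) * miss_penalty
    for s in out:
        if target:
            d += min(abs(s - t) for t in target)
    return d

def td_update(params, eligibility, reward, prev_reward, lr=0.05):
    """Hebbian eligibility gated by both the value and the sign of the
    temporal difference in reward, applied to hidden synaptic model
    parameters (e.g. release probabilities), not to the weights."""
    td = reward - prev_reward
    return [min(max(p + lr * td * e, 0.0), 1.0)
            for p, e in zip(params, eligibility)]
```

When the reward improves between trials (positive temporal difference), synapses with nonzero Hebbian eligibility have their hidden parameters reinforced; when it worsens, the same traces drive them down.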


2015 ◽  
Vol 25 (07) ◽  
pp. 1540005 ◽  
Author(s):  
Ilya Prokin ◽  
Ivan Tyukin ◽  
Victor Kazantsev

The work investigates the influence of spike-timing dependent plasticity (STDP) mechanisms on the dynamics of two synaptically coupled neurons driven by additive external noise. In this setting, the noise signal models synaptic inputs that the pair receives from other neurons in a larger network. We show that in the absence of STDP feedback, the pair of neurons exhibits oscillations and intermittent synchronization. When the synapse connecting the neurons is supplied with a phase-selective feedback mechanism simulating STDP, the induced spike dynamics of the coupled system resemble a phase-locked mode, with the time lags between spikes oscillating about a specific value. This value, as we show by extensive numerical simulations, can be set arbitrarily within a broad interval by tuning the parameters of the STDP feedback.
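A deliberately abstract caricature of this behavior (the paper works with full spiking dynamics; the discrete update, gain, and noise level here are assumptions): the phase-selective feedback nudges the inter-spike lag toward a set-point determined by its parameters, about which the noisy lag then oscillates.

```python
import random

def simulate_lag(target_lag, steps=500, gain=0.2, noise=0.5, seed=1):
    """Discrete caricature: each step, the STDP-like feedback pulls the
    lag toward target_lag with the given gain, while additive noise
    (the input from the larger network) perturbs it."""
    rng = random.Random(seed)
    lag = 0.0
    lags = []
    for _ in range(steps):
        lag += gain * (target_lag - lag) + rng.gauss(0.0, noise)
        lags.append(lag)
    return lags
```

Changing `target_lag` (the role played by the STDP feedback parameters in the paper) shifts the value the lag fluctuates around.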


1993 ◽  
Vol 5 (6) ◽  
pp. 869-884 ◽  
Author(s):  
David S. Touretzky ◽  
A. David Redish ◽  
Hank S. Wan

O'Keefe (1991) has proposed that spatial information in rats might be represented as phasors: phase and amplitude of a sine wave encoding angle and distance to a landmark. We describe computer simulations showing that operations on phasors can be efficiently realized by arrays of spiking neurons that recode the temporal dimension of the sine wave spatially. Some cells in motor and parietal cortex exhibit response properties compatible with this proposal.
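The phasor operations themselves reduce to complex arithmetic; the temporal-to-spatial recoding by the spiking arrays is what the paper contributes and is omitted in this sketch (function names are ours):

```python
import cmath
import math

def phasor(distance, angle):
    """Encode a displacement as a phasor: amplitude = distance,
    phase = angle to the landmark in radians."""
    return distance * cmath.exp(1j * angle)

def decode(p):
    """Recover (distance, angle) from a phasor."""
    return abs(p), math.atan2(p.imag, p.real)
```

Adding the rat-to-landmark and landmark-to-goal phasors yields the rat-to-goal displacement, which is the kind of operation the spiking arrays realize.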


2008 ◽  
Vol 20 (4) ◽  
pp. 974-993 ◽  
Author(s):  
Arunava Banerjee ◽  
Peggy Seriès ◽  
Alexandre Pouget

Several recent models have proposed the use of precise timing of spikes for cortical computation. Such models rely on growing experimental evidence that neurons in the thalamus as well as many primary sensory cortical areas respond to stimuli with remarkable temporal precision. Models of computation based on spike timing, where the output of the network is a function not only of the input but also of an independently initializable internal state of the network, must, however, satisfy a critical constraint: the dynamics of the network should not be sensitive to initial conditions. We have previously developed an abstract dynamical system for networks of spiking neurons that has allowed us to identify the criterion for the stationary dynamics of a network to be sensitive to initial conditions. Guided by this criterion, we analyzed the dynamics of several recurrent cortical architectures, including one from the orientation selectivity literature. Based on the results, we conclude that under conditions of sustained, Poisson-like, weakly correlated, low to moderate levels of internal activity as found in the cortex, it is unlikely that recurrent cortical networks can robustly generate precise spike trajectories, that is, spatiotemporal patterns of spikes precise to the millisecond timescale.


Sensors ◽  
2020 ◽  
Vol 20 (2) ◽  
pp. 500 ◽  
Author(s):  
Sergey A. Lobov ◽  
Andrey V. Chernyshov ◽  
Nadia P. Krilova ◽  
Maxim O. Shamshin ◽  
Victor B. Kazantsev

One of the modern trends in the design of human–machine interfaces (HMI) is to involve so-called spiking neural networks (SNNs) in signal processing. SNNs can be trained by simple and efficient biologically inspired algorithms. In particular, we have shown that sensory neurons in the input layer of an SNN can simultaneously encode the input signal based both on the spiking frequency rate and on varying the latency in generating spikes. In the case of such mixed temporal-rate coding, the SNN must implement learning that works properly for both types of coding. Based on this, we investigate how a single neuron can be trained with pure rate and temporal patterns, and then build a universal SNN that is trained using mixed coding. In particular, we study Hebbian and competitive learning in SNNs in the context of temporal and rate coding problems. We show that Hebbian learning through pair-based and triplet-based spike timing-dependent plasticity (STDP) rules is feasible for temporal coding, but not for rate coding. Synaptic competition inducing depression of poorly used synapses is required to ensure neural selectivity in rate coding. This kind of competition can be implemented by a so-called forgetting function that depends on neuron activity. We show that coherent use of triplet-based STDP and synaptic competition with the forgetting function is sufficient for rate coding. Next, we propose an SNN capable of classifying electromyographic (EMG) patterns using an unsupervised learning procedure. Neuron competition achieved via lateral inhibition ensures the "winner takes all" principle among classifier neurons. The SNN also provides a gradual output response dependent on muscular contraction strength. Furthermore, we modify the SNN to implement a supervised learning method based on stimulation of the target classifier neuron synchronously with the network input. 
In a problem of discriminating three EMG patterns, the SNN with supervised learning shows a median accuracy of 99.5%, close to the result demonstrated by a multilayer perceptron trained by error backpropagation.
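A rate-level caricature of the competition mechanism described above (the paper uses triplet-based STDP on spikes; the Hebbian gain and forgetting constant here are illustrative): Hebbian growth scaled by pre- and postsynaptic rates, minus an activity-dependent forgetting term that depresses poorly used synapses.

```python
def competitive_update(w, pre_rates, post_rate, a=0.01, f0=0.004):
    """One update: Hebbian term a * pre_rate * post_rate, minus a
    forgetting term whose strength grows with postsynaptic activity,
    so weakly driven synapses decay while strongly driven ones grow."""
    decay = f0 * post_rate
    return [min(max(wi + a * r * post_rate - decay * wi, 0.0), 1.0)
            for wi, r in zip(w, pre_rates)]
```

Iterating this update separates the weights by input rate, which is the neural selectivity that plain pair-based STDP fails to provide under rate coding.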

