Supervised learning in spiking neural networks with synaptic delay-weight plasticity

2020 ◽  
Vol 409 ◽  
pp. 103-118
Author(s):  
Malu Zhang ◽  
Jibin Wu ◽  
Ammar Belatreche ◽  
Zihan Pan ◽  
Xiurui Xie ◽  
...  
2021 ◽  
Vol 15 (8) ◽  
pp. 854-865
Author(s):  
Yawen Lan ◽  
Qiang Li

Throughout the central nervous system (CNS), information is communicated between neurons mainly via action potentials (spikes). Although spike-timing-based neural codes have significant computational advantages over rate-based encoding schemes, the exact spike-timing-based learning mechanism in the brain remains an open question. To close this gap, many weight-based supervised learning algorithms have been proposed for spiking neural networks. However, considering synaptic weight plasticity alone is insufficient: biological evidence suggests that synaptic delay plasticity also plays an important role in the learning process of biological neural networks. Recently, many learning algorithms have been proposed that consider both synaptic weight plasticity and synaptic delay plasticity. The goal of this paper is to give an overview of the existing synaptic delay-based learning algorithms for spiking neural networks. We describe the typical learning algorithms and report their experimental results. Finally, we discuss the properties and limitations of each algorithm and compare them.
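To illustrate what joint delay-weight plasticity means in practice, here is a minimal, hedged sketch (not any specific algorithm surveyed by the paper): a single spiking neuron's output spike time is nudged toward a target time by adjusting both synaptic weights, which scale each postsynaptic potential, and synaptic delays, which shift it in time. The kernel shape, time constants, learning rates, and update signs are all assumptions for illustration.

```python
import numpy as np

TAU = 10.0  # membrane time constant (ms); assumed value

def psp(t):
    """Alpha-shaped postsynaptic potential kernel (zero for t <= 0)."""
    return np.where(t > 0, (t / TAU) * np.exp(1 - t / TAU), 0.0)

def dpsp(t):
    """Time derivative of the PSP kernel."""
    return np.where(t > 0, (1.0 / TAU) * np.exp(1 - t / TAU) * (1 - t / TAU), 0.0)

def delay_weight_step(weights, delays, in_spikes, t_actual, t_target,
                      lr_w=0.01, lr_d=0.1):
    """One heuristic update of weights AND delays from the spike-time error.

    If the neuron fired too late (t_actual > t_target), weights are
    strengthened; delays follow a gradient-style step through the PSP
    derivative so inputs arrive at more useful times.
    """
    err = t_actual - t_target
    s = t_actual - (in_spikes + delays)   # kernel arguments at the firing time
    new_w = weights + lr_w * err * psp(s)            # weight plasticity
    new_d = delays - lr_d * err * weights * dpsp(s)  # delay plasticity (heuristic sign)
    return new_w, new_d
```

The point of the sketch is the extra degree of freedom: a delay update can move a presynaptic contribution in time, which a weight update alone cannot do.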


2014 ◽  
Vol 144 ◽  
pp. 526-536 ◽  
Author(s):  
Jinling Wang ◽  
Ammar Belatreche ◽  
Liam Maguire ◽  
Thomas Martin McGinnity

2021 ◽  
Vol 2021 ◽  
pp. 1-16
Author(s):  
Xianghong Lin ◽  
Mengwei Zhang ◽  
Xiangwen Wang

As a new brain-inspired computational model of artificial neural networks, spiking neural networks transmit and process information via precisely timed spike trains. Constructing efficient learning methods is a significant research direction for spiking neural networks. In this paper, we present a supervised learning algorithm for multilayer feedforward spiking neural networks in which all neurons, in all layers, can fire multiple spikes. The feedforward network consists of spiking neurons governed by a biologically plausible long-term memory spike response model, in which the effect of earlier spikes on refractoriness is not neglected, so that adaptation effects are incorporated. The gradient descent method is employed to derive the synaptic weight update rule for learning spike trains. The proposed algorithm is tested and verified on spatiotemporal pattern learning problems, including a set of spike train learning tasks and nonlinear pattern classification problems on four UCI datasets. Simulation results indicate that the proposed algorithm improves learning accuracy in comparison with other supervised learning algorithms.
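The distinguishing feature of the long-term memory spike response model mentioned above is that refractoriness sums over all earlier output spikes, not only the most recent one. The sketch below illustrates that idea for a single neuron; the kernel shapes, time constants, threshold, and simulation grid are assumptions, not the paper's parameters.

```python
import numpy as np

TAU_M, TAU_R, THETA = 10.0, 20.0, 1.0  # time constants (ms) and threshold; assumed

def eps(s):
    """Alpha-shaped PSP kernel (zero for s <= 0)."""
    return np.where(s > 0, (s / TAU_M) * np.exp(1 - s / TAU_M), 0.0)

def eta(s):
    """Refractory kernel: a negative after-potential decaying with TAU_R."""
    return np.where(s > 0, -THETA * np.exp(-s / TAU_R), 0.0)

def simulate(weights, in_spikes, T=100.0, dt=0.1):
    """Return the output spike times of one SRM neuron over [0, T).

    The membrane potential sums weighted PSPs from every input spike and
    refractory contributions from ALL previous output spikes, so repeated
    firing accumulates adaptation (the 'long-term memory' effect).
    """
    out = []
    for t in np.arange(0.0, T, dt):
        u = sum(w * eps(t - ti) for w, ti in zip(weights, in_spikes))
        u += sum(eta(t - tf) for tf in out)  # every past output spike counts
        if u >= THETA:
            out.append(float(t))
    return out
```

Because `eta` never fully vanishes within the simulated window, each additional spike makes the next one harder to elicit, which is the adaptation effect the abstract refers to.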


2013 ◽  
Vol 25 (2) ◽  
pp. 473-509 ◽  
Author(s):  
Ioana Sporea ◽  
André Grüning

We introduce a supervised learning algorithm for multilayer spiking neural networks. The algorithm overcomes a limitation of existing learning algorithms: it can be applied to neurons that fire multiple spikes in networks with hidden layers. It can also, in principle, be used with any linearizable neuron model and allows different coding schemes for spike train patterns. The algorithm is applied successfully to classic linearly nonseparable benchmarks such as the XOR problem and the Iris data set, as well as to more complex classification and mapping problems. The algorithm has been successfully tested in the presence of noise, requires smaller networks than reservoir computing, and converges faster than existing algorithms for similar tasks such as SpikeProp.
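For readers unfamiliar with how a Boolean benchmark like XOR becomes a spike-timing task, here is a small sketch of the encoding convention common in SpikeProp-family experiments. The specific spike times below are illustrative assumptions, not the parameters used in the paper.

```python
# Spike-time encoding of XOR in the SpikeProp style (times in ms, assumed):
# logical 1 -> early input spike, logical 0 -> late input spike;
# the target output spike is early when XOR is true, late when it is false.
EARLY_IN, LATE_IN = 0.0, 6.0
EARLY_OUT, LATE_OUT = 10.0, 16.0

def encode_xor(a, b):
    """Map one Boolean XOR case to (input spike times, target output time)."""
    ins = (EARLY_IN if a else LATE_IN, EARLY_IN if b else LATE_IN)
    target = EARLY_OUT if (a != b) else LATE_OUT
    return ins, target

# The full four-pattern training set for the XOR benchmark.
dataset = [encode_xor(a, b) for a in (0, 1) for b in (0, 1)]
```

Under this encoding the task is linearly nonseparable in spike times just as XOR is in Boolean values, which is why it serves as a standard test for learning in networks with hidden layers.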

