A Supervised Multi-spike Learning Algorithm for Recurrent Spiking Neural Networks

Author(s): Xianghong Lin, Guoyong Shi
2021, pp. 1-13

Author(s): Qiugang Zhan, Guisong Liu, Xiurui Xie, Guolin Sun, Huajin Tang
2021, Vol 2021, pp. 1-16
Author(s): Xianghong Lin, Mengwei Zhang, Xiangwen Wang

As a new brain-inspired computational model of artificial neural networks, spiking neural networks transmit and process information via precisely timed spike trains. Constructing efficient learning methods is a significant research field in spiking neural networks. In this paper, we present a supervised learning algorithm for multilayer feedforward spiking neural networks in which neurons in all layers can fire multiple spikes. The feedforward network consists of spiking neurons governed by the biologically plausible long-term memory spike response model, in which the effect of earlier spikes on refractoriness is retained in order to incorporate adaptation effects. The gradient descent method is employed to derive the synaptic weight update rule for learning spike trains. The proposed algorithm is tested and verified on spatiotemporal pattern learning problems, including a set of spike train learning tasks and nonlinear pattern classification problems on four UCI datasets. Simulation results indicate that the proposed algorithm achieves higher learning accuracy than other supervised learning algorithms.
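As a rough illustration of the mechanism this abstract describes, the following is a minimal Python sketch of multi-spike learning for a single spike response model (SRM) neuron. The kernels, constants, and the van Rossum-style smoothed error driving a gradient-style correlation update are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

DT, T = 0.1, 100.0      # time step and simulation window (ms), assumed values
TAU, TAU_R = 5.0, 20.0  # PSP and refractory time constants (ms), assumed
THETA = 1.0             # firing threshold, assumed

def psp_kernel(s):
    """Spike response kernel eps(s) = (s/tau) * exp(1 - s/tau) for s > 0."""
    return np.where(s > 0, (s / TAU) * np.exp(1 - s / TAU), 0.0)

def run_srm(weights, input_spikes):
    """Simulate the neuron; earlier output spikes feed back as refractoriness."""
    out = []
    for t in np.arange(0.0, T, DT):
        u = sum(w * psp_kernel(t - np.asarray(sp)).sum()
                for w, sp in zip(weights, input_spikes))
        u -= sum(np.exp(-(t - tf) / TAU_R) for tf in out)  # adaptation term
        if u >= THETA:
            out.append(t)
    return out

def smooth(spikes, times):
    """Kernel-smooth a spike train (van Rossum-style signal)."""
    out = np.zeros_like(times)
    for tf in spikes:
        out += psp_kernel(times - tf)
    return out

def train_step(weights, input_spikes, desired, lr=0.01):
    """One update: correlate the signed error with each input's PSP trace."""
    times = np.arange(0.0, T, DT)
    err = smooth(run_srm(weights, input_spikes), times) - smooth(desired, times)
    for i, sp in enumerate(input_spikes):
        weights[i] -= lr * np.sum(err * smooth(sp, times)) * DT
    return weights
```

The refractory sum in run_srm is where the effect of the neuron's own earlier spikes enters, which is the adaptation effect the abstract emphasizes.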


2013, Vol 25 (2), pp. 473-509
Author(s): Ioana Sporea, André Grüning

We introduce a supervised learning algorithm for multilayer spiking neural networks. The algorithm overcomes a limitation of existing learning algorithms: it can be applied to neurons firing multiple spikes in artificial neural networks with hidden layers. It can also, in principle, be used with any linearizable neuron model and allows different coding schemes of spike train patterns. The algorithm is applied successfully to classic linearly nonseparable benchmarks such as the XOR problem and the Iris data set, as well as to more complex classification and mapping problems. The algorithm has been successfully tested in the presence of noise, requires smaller networks than reservoir computing, and results in faster convergence than existing algorithms for similar tasks such as SpikeProp.
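A hedged sketch of the core idea, under the assumption that spike trains are linearized as kernel-smoothed signals sampled on a time grid, so the output error can be pushed through a hidden layer by the ordinary chain rule; the shapes and names below are illustrative, not the paper's derivation.

```python
import numpy as np

def backprop_step(w_hid, w_out, x, h, err, lr=0.01, dt=0.1):
    """One gradient step for both layers of a two-layer spiking network.

    x   : (n_in, T)  kernel-smoothed input spike trains
    h   : (n_hid, T) kernel-smoothed hidden-layer spike trains
    err : (n_out, T) smoothed actual-minus-desired output spike trains
    """
    grad_out = err @ h.T * dt     # correlate output error with hidden traces
    err_hid = w_out.T @ err       # push the error back to the hidden layer
    grad_hid = err_hid @ x.T * dt # correlate hidden error with input traces
    return w_hid - lr * grad_hid, w_out - lr * grad_out
```

For a benchmark such as XOR, each input pattern would be encoded as spike trains, smoothed into x, and presented one at a time.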


Author(s): Filip Ponulak

Analysis of the ReSuMe Learning Process for Spiking Neural Networks

In this paper we analyze the learning process of the ReSuMe method for spiking neural networks (Ponulak, 2005; Ponulak, 2006b). We investigate how the particular parameters of the learning algorithm affect the course of learning, and we consider how to speed up the adaptation process while maintaining the stability of the optimal solution. This is an important issue in many real-life tasks in which neural networks are applied and fast learning convergence is highly desirable.
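For orientation, here is a minimal sketch of the general shape of the ReSuMe update for a single synapse: potentiation at desired output spike times and depression at actual output spike times, each composed of a non-Hebbian constant plus an exponential learning window over recent presynaptic spikes. The constants below are illustrative, not values from the paper.

```python
import numpy as np

A = 0.05       # non-Hebbian term, assumed value
A_W = 0.1      # learning-window amplitude, assumed value
TAU_W = 5.0    # learning-window time constant (ms), assumed value

def learning_window(s):
    """W(s) = A_W * exp(-s / TAU_W) for s >= 0, else 0."""
    return A_W * np.exp(-s / TAU_W) if s >= 0 else 0.0

def resume_delta_w(pre_spikes, desired_spikes, actual_spikes):
    """Total weight change over one presentation of the pattern."""
    dw = 0.0
    for td in desired_spikes:   # potentiation at desired spike times
        dw += A + sum(learning_window(td - tp) for tp in pre_spikes)
    for ta in actual_spikes:    # depression at actual output spike times
        dw -= A + sum(learning_window(ta - tp) for tp in pre_spikes)
    return dw
```

Parameters such as the window amplitude and time constant are exactly the kind whose effect on convergence speed and stability the paper analyzes.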

