Method for Training a Spiking Neuron to Associate Input-Output Spike Trains

Author(s):  
Ammar Mohemmed ◽  
Stefan Schliebs ◽  
Satoshi Matsuda ◽  
Nikola Kasabov
2006 ◽  
Vol 1291 ◽  
pp. 225-228
Author(s):  
Hiroyuki Torikai ◽  
Toshimichi Saito

2004 ◽  
Vol 16 (10) ◽  
pp. 2125-2195 ◽  
Author(s):  
B. Scott Jackson

Many different types of integrate-and-fire models have been designed in order to explain how it is possible for a cortical neuron to integrate over many independent inputs while still producing highly variable spike trains. Within this context, the variability of spike trains has been almost exclusively measured using the coefficient of variation of interspike intervals. However, another important statistical property that has been found in cortical spike trains and is closely associated with their high firing variability is long-range dependence. We investigate the conditions, if any, under which such models produce output spike trains with both interspike-interval variability and long-range dependence similar to those that have previously been measured from actual cortical neurons. We first show analytically that a large class of high-variability integrate-and-fire models is incapable of producing such outputs based on the fact that their output spike trains are always mathematically equivalent to renewal processes. This class of models subsumes a majority of previously published models, including those that use excitation-inhibition balance, correlated inputs, partial reset, or nonlinear leakage to produce outputs with high variability. Next, we study integrate-and-fire models that have (non-Poissonian) renewal point process inputs instead of the Poisson point process inputs used in the preceding class of models. The confluence of our analytical and simulation results implies that the renewal-input model is capable of producing high variability and long-range dependence comparable to that seen in spike trains recorded from cortical neurons, but only if the interspike intervals of the inputs have infinite variance, a physiologically unrealistic condition. Finally, we suggest a new integrate-and-fire model that does not suffer any of the previously mentioned shortcomings. 
By analyzing simulation results for this model, we show that it is capable of producing output spike trains with interspike-interval variability and long-range dependence that match empirical data from cortical spike trains. This model is similar to the other models in this study, except that its inputs are fractional-Gaussian-noise-driven Poisson processes rather than renewal point processes. In addition to this model's success in producing realistic output spike trains, its inputs have long-range dependence similar to that found in most subcortical neurons in sensory pathways, including the inputs to cortex. Analysis of output spike trains from simulations of this model also shows that a tight balance between the amounts of excitation and inhibition at the inputs to cortical neurons is not necessary for high interspike-interval variability at their outputs. Furthermore, in our analysis of this model, we show that the superposition of many fractional-Gaussian-noise-driven Poisson processes does not approximate a Poisson process, which challenges the common assumption that the total effect of a large number of inputs on a neuron is well represented by a Poisson process.
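The renewal-process argument in this abstract can be made concrete with a small simulation. The sketch below is not from the paper; all parameter values are illustrative assumptions. It drives a leaky integrate-and-fire neuron with balanced excitatory and inhibitory Poisson inputs and measures the coefficient of variation (CV) of its output interspike intervals. Because the membrane potential is fully reset after each spike, the output is memoryless between spikes, i.e. a renewal process, which is exactly why this model class can show high CV but not long-range dependence.

```python
import math
import random

def lif_poisson(rate_exc=800.0, rate_inh=800.0, w=0.15, tau=0.02,
                theta=1.0, dt=1e-4, t_max=50.0, seed=1):
    """Leaky integrate-and-fire neuron driven by independent, balanced
    excitatory and inhibitory Poisson inputs (assumed parameter values).
    Returns the list of output spike times in seconds."""
    rng = random.Random(seed)
    v, spikes = 0.0, []
    p_exc = rate_exc * dt          # probability of an excitatory event per step
    p_inh = rate_inh * dt          # probability of an inhibitory event per step
    decay = math.exp(-dt / tau)    # membrane leak per time step
    for i in range(int(t_max / dt)):
        v *= decay
        if rng.random() < p_exc:
            v += w
        if rng.random() < p_inh:
            v -= w
        if v >= theta:
            spikes.append(i * dt)
            v = 0.0  # total reset: no memory carries over, hence a renewal process
    return spikes

def cv_of_isis(spikes):
    """Coefficient of variation (std/mean) of the interspike intervals."""
    isis = [b - a for a, b in zip(spikes, spikes[1:])]
    mean = sum(isis) / len(isis)
    var = sum((x - mean) ** 2 for x in isis) / len(isis)
    return math.sqrt(var) / mean
```

With balanced excitation and inhibition, the CV tends toward values near 1 (high variability), yet successive intervals are independent by construction, so no long-range dependence can appear.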


2008 ◽  
Vol 21 (2-3) ◽  
pp. 140-149 ◽  
Author(s):  
Hiroyuki Torikai ◽  
Atsuo Funew ◽  
Toshimichi Saito

2009 ◽  
Vol 65 ◽  
pp. S61
Author(s):  
Yasuhiro Nishigaki ◽  
Jun-nosuke Teramae ◽  
Tomoki Fukai

2013 ◽  
Vol 107 ◽  
pp. 3-10 ◽  
Author(s):  
Ammar Mohemmed ◽  
Stefan Schliebs ◽  
Satoshi Matsuda ◽  
Nikola Kasabov

2012 ◽  
Vol 22 (04) ◽  
pp. 1250012 ◽  
Author(s):  
Ammar Mohemmed ◽  
Stefan Schliebs ◽  
Satoshi Matsuda ◽  
Nikola Kasabov

Spiking Neural Networks (SNN) have been shown to be suitable tools for the processing of spatio-temporal information. However, due to their inherent complexity, the formulation of efficient supervised learning algorithms for SNN is difficult and remains an open research problem. This article presents SPAN: a spiking neuron that is able to learn associations of arbitrary spike trains in a supervised fashion, allowing the processing of spatio-temporal information encoded in the precise timing of spikes. The idea of the proposed algorithm is to transform spike trains during the learning phase into analog signals so that common mathematical operations can be performed on them. Using this conversion, it is possible to apply the well-known Widrow–Hoff rule directly to the transformed spike trains in order to adjust the synaptic weights and to achieve a desired input/output spike behavior of the neuron. In the presented experimental analysis, the proposed learning algorithm is evaluated regarding its learning capabilities, its memory capacity, its robustness to noisy stimuli and its classification performance. Differences and similarities of SPAN with respect to two related algorithms, ReSuMe and Chronotron, are discussed.
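The core idea described in this abstract, convolving spike trains into analog signals and then applying the Widrow–Hoff rule, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the alpha-shaped kernel, time step, and learning rate are all assumptions, and spike trains are represented as plain lists of spike times in seconds.

```python
import math

def alpha_kernel(t, tau=0.005):
    """Alpha-shaped kernel (an assumed choice) used to turn a spike at
    time 0 into a smooth analog bump; zero for t < 0."""
    return (t / tau) * math.exp(1.0 - t / tau) if t >= 0 else 0.0

def convolve_spikes(spike_times, t_grid, tau=0.005):
    """Analog signal: the sum of kernels centred on each spike time."""
    return [sum(alpha_kernel(t - s, tau) for s in spike_times) for t in t_grid]

def span_update(weights, input_trains, desired_out, actual_out,
                t_max=0.2, dt=0.001, lr=0.01, tau=0.005):
    """One Widrow–Hoff step on kernel-convolved spike trains:
    dw_i = lr * integral of x_i(t) * (y_desired(t) - y_actual(t)) dt.
    Returns the updated weight list."""
    t_grid = [k * dt for k in range(int(t_max / dt))]
    yd = convolve_spikes(desired_out, t_grid, tau)
    ya = convolve_spikes(actual_out, t_grid, tau)
    err = [d - a for d, a in zip(yd, ya)]   # analog error signal over time
    new_weights = []
    for w, train in zip(weights, input_trains):
        x = convolve_spikes(train, t_grid, tau)
        dw = lr * sum(xi * ei for xi, ei in zip(x, err)) * dt
        new_weights.append(w + dw)
    return new_weights
```

For example, if the desired output contains a spike that the neuron failed to produce, the error signal is positive around that time, so the update increases the weights of inputs whose kernels overlap it, nudging the neuron toward firing at the desired time; a spurious actual spike has the opposite effect.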

