Self-Evolutionary Neuron Model for Fast-Response Spiking Neural Networks

Author(s): Anguo Zhang, Ying Han, Jing Hu, Yuzhen Niu, Yueming Gao, ...

We propose two simple and effective spiking neuron models to improve the response time of the conventional spiking neural network. The proposed neuron models adaptively tune the presynaptic input current depending on the input received from their presynapses and on subsequent neuron firing events. We analyze and derive the homeostatic convergence of the firing activity for the proposed models. We experimentally verify and compare the models on the MNIST handwritten-digit and FashionMNIST classification tasks, and show that the proposed neuron models significantly increase the response speed to the input signal.
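The abstract does not give the update equations, so the sketch below is only an illustration of the general idea: a leaky integrate-and-fire neuron whose input current is scaled by an adaptive gain that grows while inputs accumulate without producing a spike and relaxes after each firing event, which shortens the time to the first output spike. The gain rule, parameter names, and values are assumptions, not the authors' model.

```python
import numpy as np

# Minimal illustrative sketch (not the authors' equations): a LIF neuron whose
# input current is scaled by an adaptive gain g. The gain grows while input
# arrives without producing a spike and relaxes back after each firing event.
def simulate(inp, dt=1.0, tau_m=20.0, v_th=1.0, g0=1.0, eta=0.05, tau_g=50.0):
    v, g = 0.0, g0
    spikes = []
    for t, x in enumerate(inp):
        v += dt * (-v / tau_m + g * x)   # leaky integration of the scaled input
        if v >= v_th:                    # firing event
            spikes.append(t * dt)
            v = 0.0
            g = g0                       # relax the gain after a spike
        elif x > 0:
            g += eta * x                 # potentiate the gain while input accumulates
        g += dt * (g0 - g) / tau_g       # slow decay toward the baseline gain
    return np.array(spikes)

# Example: constant input; the adaptive gain pushes the first spike earlier
# than a fixed gain of g0 would.
print(simulate(np.full(200, 0.05)))
```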

2015, Vol 14 (04), pp. 1550034
Author(s): Alexander Vidybida

We consider a class of spiking neuron models defined by a set of conditions typical of basic threshold-type models, such as the leaky integrate-and-fire or the binding neuron model, as well as of some artificial neurons. The neuron is fed with a point renewal process. A relation between three probability density functions (PDFs) is derived: (i) the PDF of input interspike intervals (ISIs), (ii) the PDF of output ISIs of a neuron with a feedback line, and (iii) the PDF of the same neuron without feedback. This makes it possible to calculate any one of the three PDFs provided the remaining two are given. A similar relation between the corresponding means and variances is derived. The relations are checked exactly for the binding neuron model stimulated with a Poisson stream.
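The derived relation itself is in the paper; purely as a numerical illustration of the setting, the sketch below estimates output ISI statistics for a leaky integrate-and-fire neuron (one member of the class considered) driven by a Poisson input stream, with and without a delayed feedback line that re-injects each output spike as an extra input impulse. Parameter values and the feedback delay are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Monte Carlo sketch (not the paper's analytic relation): output ISIs of a LIF
# neuron driven by a Poisson impulse stream, optionally with a delayed
# feedback line that feeds each output spike back as one extra input impulse.
def output_isis(rate=1.0, w=0.35, tau=10.0, v_th=1.0, feedback_delay=None,
                n_steps=500_000, dt=0.01):
    v, last_spike, pending_fb = 0.0, None, None
    isis = []
    for k in range(n_steps):
        t = k * dt
        if rng.random() < rate * dt:          # Poisson input impulse
            v += w
        if pending_fb is not None and t >= pending_fb:
            v += w                            # delayed feedback impulse
            pending_fb = None
        v *= np.exp(-dt / tau)                # leak
        if v >= v_th:                         # output spike
            if last_spike is not None:
                isis.append(t - last_spike)
            last_spike, v = t, 0.0
            if feedback_delay is not None:
                pending_fb = t + feedback_delay
    return np.array(isis)

# Compare the two output ISI distributions through their means and variances.
for d in (None, 2.0):
    isi = output_isis(feedback_delay=d)
    print(d, isi.mean(), isi.var())
```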


Author(s): Wulfram Gerstner, Werner M. Kistler

2003, Vol 2 (3), pp. 158-164
Author(s): T. Morie, T. Matsuura, M. Nagata, A. Iwata

2020, Vol 32 (7), pp. 1408-1429
Author(s): Jakub Fil, Dominique Chu

The multispike tempotron (MST) is a powerful single spiking neuron model that can solve complex supervised classification tasks. It is also internally complex, computationally expensive to evaluate, and unsuitable for neuromorphic hardware. Here we aim to understand whether it is possible to simplify the MST model while retaining its ability to learn and process information. To this end, we introduce a family of generalized neuron models (GNMs) that are a special case of the spike response model and much simpler and cheaper to simulate than the MST. We find that over a wide range of parameters, the GNM can learn at least as well as the MST does. We identify the temporal autocorrelation of the membrane potential as the most important ingredient of the GNM that enables it to classify multiple spatiotemporal patterns. We also interpret the GNM as a chemical system, thus conceptually bridging computation by neural networks with molecular information processing. We conclude the letter by proposing alternative training approaches for the GNM, including error trace learning and error backpropagation.
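The GNM's exact kernels are not given in the abstract. The sketch below is a generic spike response model neuron, of which the GNM family is described as a special case, with exponential postsynaptic-potential and reset kernels chosen purely for illustration; exponential kernels are one simple way to give the membrane potential the temporal memory (autocorrelation) that the abstract identifies as the key ingredient.

```python
import numpy as np

# Generic spike-response-model sketch: the membrane potential is the input
# spike train filtered by an exponential PSP kernel plus an exponential reset
# kernel triggered by the neuron's own output spikes. Kernels and parameter
# values are illustrative assumptions, not the paper's GNM definition.
def srm_neuron(input_spikes, dt=0.1, tau_m=10.0, tau_reset=10.0,
               w=0.6, v_th=1.0, t_max=100.0):
    times = np.arange(0.0, t_max, dt)
    v = np.zeros_like(times)
    out = []
    for i, t in enumerate(times):
        psp = sum(w * np.exp(-(t - s) / tau_m) for s in input_spikes if s <= t)
        reset = sum(-v_th * np.exp(-(t - s) / tau_reset) for s in out if s <= t)
        v[i] = psp + reset
        if v[i] >= v_th:          # output spike when the summed kernels reach threshold
            out.append(t)
    return np.array(out), times, v

out_spikes, _, _ = srm_neuron(input_spikes=[5.0, 7.0, 9.0, 40.0, 42.0])
print(out_spikes)
```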


2015, Vol 5 (2), pp. 109-119
Author(s): Sou Nobukawa, Haruhiko Nishimura, Teruya Yamanishi, Jian-Qin Liu

Several hybrid neuron models, which combine continuous spike-generation mechanisms with a discontinuous resetting process after spiking, have been proposed as a simple transition scheme for the membrane potential between spiking and hyperpolarization. As one of these hybrid spiking neuron models, the Izhikevich neuron model can reproduce the major spike patterns observed in the cerebral cortex by tuning only a few parameters, and it can also exhibit chaotic states under specific conditions. However, few studies have examined these chaotic states over a large range of parameters, owing to the difficulty of dealing with the state-dependent jump of the resetting process in this model. In this study, we examine the dependence of the system behavior on the resetting parameters using the Lyapunov exponent with saltation matrix and Poincaré section methods, and we classify the routes to chaos.
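For reference, the standard Izhikevich model (Izhikevich, 2003) consists of two continuous equations plus a discontinuous reset; the resetting parameters c (voltage reset) and d (recovery jump) are the quantities whose influence the study examines. The sketch below simulates this standard model; the parameter values shown are common textbook choices, not those used in the paper.

```python
import numpy as np

# Standard Izhikevich model: v' = 0.04 v^2 + 5 v + 140 - u + I,  u' = a(bv - u),
# with the state-dependent reset v <- c, u <- u + d whenever v reaches 30 mV.
def izhikevich(I=10.0, a=0.02, b=0.2, c=-65.0, d=8.0, dt=0.1, t_max=500.0):
    v, u = -65.0, b * -65.0
    spikes = []
    for k in range(int(t_max / dt)):
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
        u += dt * a * (b * v - u)
        if v >= 30.0:                 # spike: apply the discontinuous reset
            spikes.append(k * dt)
            v, u = c, u + d
    return np.array(spikes)

# Sweeping the resetting parameters c and d is how one probes their effect on
# the firing pattern (and, in the paper, on the routes to chaos).
print(izhikevich(c=-65.0, d=8.0)[:5])
print(izhikevich(c=-55.0, d=4.0)[:5])
```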


2018, Vol 30 (3), pp. 670-707
Author(s): Dorian Florescu, Daniel Coca

Inferring mathematical models of sensory processing systems directly from input-output observations, while making the fewest assumptions about the model equations and the types of measurements available, is still a major issue in computational neuroscience. This letter introduces two new approaches for identifying sensory circuit models consisting of linear and nonlinear filters in series with spiking neuron models, based only on the sampled analog input to the filter and the recorded spike train output of the spiking neuron. For an ideal integrate-and-fire neuron model, the first algorithm can identify the spiking neuron parameters as well as the structure and parameters of an arbitrary nonlinear filter connected to it. The second algorithm can identify the parameters of the more general leaky integrate-and-fire spiking neuron model, as well as the parameters of an arbitrary linear filter connected to it. Numerical studies involving simulated and real experimental recordings are used to demonstrate the applicability and evaluate the performance of the proposed algorithms.
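The identification algorithms themselves are the paper's contribution and are not reproduced here; the sketch below only illustrates the kind of forward model assumed, a filter in series with an ideal integrate-and-fire neuron, which between consecutive spike times satisfies the threshold relation ∫(y(s) + b) ds = κδ on the filtered input y. The exponential filter and the parameters b, κ, δ are illustrative assumptions.

```python
import numpy as np

# Forward-model sketch only: an ideal (non-leaky) integrate-and-fire neuron
# encoding the output of a linear filter into a spike train. Between spikes
# t_k and t_{k+1}:  integral of (y(s) + b) ds over [t_k, t_{k+1}] = kappa * delta.
def ideal_iaf_encode(y, dt, b=1.0, kappa=1.0, delta=0.05):
    v, spikes = 0.0, []
    for k, yk in enumerate(y):
        v += dt * (yk + b) / kappa    # ideal integration of the biased input
        if v >= delta:                # threshold crossing -> spike, reset
            spikes.append(k * dt)
            v = 0.0
    return np.array(spikes)

dt = 1e-4
t = np.arange(0.0, 1.0, dt)
u = np.sin(2 * np.pi * 5 * t)             # sampled analog input
h = np.exp(-t / 0.02) * dt                # assumed linear filter (exponential kernel)
y = np.convolve(u, h)[: len(t)]           # filter output feeding the neuron
print(ideal_iaf_encode(y, dt)[:5])        # recorded spike train (first few times)
```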

