HARDWARE IMPLEMENTATION DESIGN OF A SPIKING NEURON

2021 ◽  
Vol 1 (132) ◽  
pp. 116-123
Author(s):  
Alexey Gnilenko

The hardware implementation of an artificial neuron is a key problem in the design of neuromorphic chips, which are promising new architectural solutions for massively parallel computing. In this paper an analog neuron circuit design is presented to be used as a building element of spiking neural networks. The design of the neuron is performed at the transistor level based on the Leaky Integrate-and-Fire (LIF) neuron model. The neuron is simulated using an EDA tool to verify the design. Signal waveforms at key nodes of the neuron are obtained and the neuron's functionality is demonstrated.
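The Leaky Integrate-and-Fire dynamics that such a transistor-level circuit realizes can be sketched in a few lines of simulation code. The parameter values below (membrane time constant, threshold, resistance) are illustrative assumptions, not values from the paper:

```python
import numpy as np

def simulate_lif(i_in, dt=1e-4, tau_m=20e-3, r_m=1e7,
                 v_rest=0.0, v_th=0.02, v_reset=0.0):
    """Forward-Euler simulation of a Leaky Integrate-and-Fire neuron.

    dv/dt = (-(v - v_rest) + r_m * i_in) / tau_m; a spike is emitted
    and v is reset whenever v crosses v_th.  All parameter values are
    assumptions for illustration only.
    """
    v = v_rest
    trace, spikes = [], []
    for t, i in enumerate(i_in):
        v += dt * (-(v - v_rest) + r_m * i) / tau_m
        if v >= v_th:          # threshold crossing: spike and reset
            spikes.append(t)
            v = v_reset
        trace.append(v)
    return np.array(trace), spikes

# A constant 3 nA input drives the steady-state voltage (30 mV) above
# threshold (20 mV), so the neuron fires repeatedly.
trace, spikes = simulate_lif(np.full(2000, 3e-9))
```

The analog circuit in the paper implements the same leak/integrate/fire-and-reset behavior with transistors rather than arithmetic.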

2019 ◽  
Vol 16 (9) ◽  
pp. 3897-3905
Author(s):  
Pankaj Kumar Kandpal ◽  
Ashish Mehta

In the present article, the two-dimensional "Spiking Neuron Model" is compared with the four-dimensional "Integrate-and-Fire Neuron Model" (IFN) using the error-correction back-propagation learning algorithm (error-correction learning). A comparative study has been done on the basis of several parameters, such as number of iterations, execution time, and misclassification rate. The authors choose the five-bit parity problem and the Iris classification problem for the present study. Simulation results show that both models are capable of performing the classification task, but the single spiking neuron model, with its two-dimensional dynamics, is less complex than the integrate-and-fire neuron and produces better results. The classification performance of the single integrate-and-fire neuron model is not very poor, but due to its complex four-dimensional architecture its misclassification rate is higher than that of the single spiking neuron model; in this sense the integrate-and-fire neuron model is less capable than the spiking neuron model at solving classification problems.


2009 ◽  
Vol 21 (2) ◽  
pp. 353-359 ◽  
Author(s):  
Hans E. Plesser ◽  
Markus Diesmann

Lovelace and Cios (2008) recently proposed a very simple spiking neuron (VSSN) model for simulations of large neuronal networks as an efficient replacement for the integrate-and-fire neuron model. We argue that the VSSN model falls behind key advances in neuronal network modeling over the past 20 years, in particular, techniques that permit simulators to compute the state of the neuron without repeated summation over the history of input spikes and to integrate the subthreshold dynamics exactly. State-of-the-art solvers for networks of integrate-and-fire model neurons are substantially more efficient than the VSSN simulator and allow routine simulations of networks of some 10^5 neurons and 10^9 connections on moderate computer clusters.
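The "exact integration" technique the authors refer to exploits the linearity of the subthreshold dynamics: between events, the membrane state can be advanced by an exact exponential propagator in O(1) per step, with no summation over the spike history. A minimal sketch for a LIF neuron driven by a piecewise-constant current (parameter values are illustrative assumptions):

```python
import math

def exact_lif_step(v, i_syn, dt=1e-4, tau=20e-3, r=1e7, v_rest=0.0):
    """Exact subthreshold update of a LIF membrane over one time step.

    For constant input within the step, the ODE
        tau * dv/dt = -(v - v_rest) + r * i_syn
    has the closed-form solution
        v(t + dt) = v_inf + (v(t) - v_inf) * exp(-dt / tau),
    so each step is exact regardless of dt.  Parameter values are
    assumptions for illustration only.
    """
    p = math.exp(-dt / tau)        # exact propagator over one step
    v_inf = v_rest + r * i_syn     # steady-state voltage for this input
    return v_inf + (v - v_inf) * p
```

Incoming spikes then only increment the synaptic state at their arrival times; the past never needs to be revisited, which is the efficiency advantage over history-summing schemes.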


2013 ◽  
Vol 23 (10) ◽  
pp. 1350171 ◽  
Author(s):  
LieJune Shiau ◽  
Carlo R. Laing

Although variability is a ubiquitous characteristic of the nervous system, under appropriate conditions neurons can generate precisely timed action potentials. Thus considerable attention has been given to the study of a neuron's output in relation to its stimulus. In this study, we consider an increasingly popular spiking neuron model, the adaptive exponential integrate-and-fire neuron. For analytical tractability, we consider its piecewise-linear variant in order to understand the responses of such neurons to periodic stimuli. There exist regions in parameter space in which the neuron is mode locked to the periodic stimulus, and instabilities of the mode locked states lead to an Arnol'd tongue structure in parameter space. We analyze mode locked solutions and examine the bifurcations that define the boundaries of the tongue structures. The theoretical analysis is in excellent agreement with numerical simulations, and this study can be used to further understand the functional features related to responses of such a model neuron to biologically realistic inputs.
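Mode locking of the kind analyzed here can be explored numerically by driving an adaptive integrate-and-fire neuron with a periodic stimulus and counting spikes per stimulus cycle; a rational ratio p/q that persists over a region of parameter space indicates p:q locking (an Arnol'd tongue). The sketch below uses a linear (piecewise-linear subthreshold) adaptive IF neuron; all parameter values are illustrative assumptions, not taken from the paper:

```python
import math

def spikes_per_cycle(amp=0.5, period=50.0, i_dc=1.2, steps=200000,
                     dt=0.01, a=0.1, b=0.2, tau_w=30.0,
                     v_th=1.0, v_reset=0.0):
    """Adaptive integrate-and-fire neuron under sinusoidal forcing.

    Subthreshold dynamics (linear, hence analytically tractable):
        dv/dt = -v - w + i_dc + amp * sin(2*pi*t/period)
        dw/dt = (a*v - w) / tau_w
    On a spike (v >= v_th): v -> v_reset, w -> w + b.
    Returns the mean number of spikes per stimulus period.
    """
    v, w, n_spikes = 0.0, 0.0, 0
    for n in range(steps):
        t = n * dt
        i_t = i_dc + amp * math.sin(2 * math.pi * t / period)
        v += dt * (-v - w + i_t)
        w += dt * (a * v - w) / tau_w
        if v >= v_th:
            v, w = v_reset, w + b
            n_spikes += 1
    return n_spikes / (steps * dt / period)
```

Sweeping `amp` and `period` and plotting where `spikes_per_cycle` stays pinned at a rational value would trace out the tongue boundaries studied analytically in the paper.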


F1000Research ◽  
2021 ◽  
Vol 10 ◽  
pp. 539
Author(s):  
Romain D. Cazé

Multiple studies have shown how dendrites enable some neurons to perform linearly non-separable computations. These works focus on cells with an extended dendritic arbor, where voltage can vary independently, turning dendritic branches into local non-linear subunits. However, these studies leave a large fraction of the nervous system unexplored. Many neurons, e.g. granule cells, have modest dendritic trees and are electrically compact, making it impossible to decompose them into multiple independent subunits. Here, we upgraded the integrate-and-fire neuron to account for saturating dendrites. This artificial neuron has a unique membrane voltage and can be seen as a single layer. We present a class of linearly non-separable computations and show how our neuron can perform them. We thus demonstrate that even a single-layer neuron with dendrites has more computational capacity than one without. Because any neuron has one or more layers, and all dendrites do saturate, we show that any neuron with dendrites can implement linearly non-separable computations.
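The core idea can be illustrated with a toy single-layer neuron: each dendrite sums its inputs and saturates, and the soma thresholds the sum of dendritic outputs. With saturation, such a unit computes (x0 OR x1) AND (x2 OR x3), which no single linear threshold unit can compute. The weights, saturation level, and threshold below are illustrative assumptions, not the paper's model:

```python
def saturating_neuron(x, clusters, theta=2):
    """Single-layer neuron with saturating dendrites (sketch).

    Each dendrite sums the binary inputs assigned to it and saturates
    at 1; the soma fires if the summed dendritic outputs reach theta.
    """
    dendritic = [min(sum(x[i] for i in c), 1) for c in clusters]
    return int(sum(dendritic) >= theta)

# Two dendrites: one sees inputs x0, x1; the other sees x2, x3.
clusters = [(0, 1), (2, 3)]

# This computes (x0 or x1) and (x2 or x3).  It is linearly
# non-separable: the required inequalities w0+w1 < t, w2+w3 < t,
# w0+w2 >= t, w1+w3 >= t are contradictory (both pairs sum to the
# same total), so no weight vector and threshold can realize it.
assert saturating_neuron((1, 1, 0, 0), clusters) == 0   # one dendrite saturates
assert saturating_neuron((1, 0, 1, 0), clusters) == 1   # inputs spread across both
```

Saturation makes the unit prefer inputs scattered across dendrites over the same number of inputs clustered on one branch, which is what breaks linear separability.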


2007 ◽  
Vol 19 (8) ◽  
pp. 2124-2148 ◽  
Author(s):  
Jianfu Ma ◽  
Jianhong Wu

We consider the effect of the effective timing of a delayed feedback on the excitatory neuron in a recurrent inhibitory loop, when biological realities of firing and absolute refractory period are incorporated into a phenomenological spiking linear or quadratic integrate-and-fire neuron model. We show that such models are capable of generating a large number of asymptotically stable periodic solutions with predictable patterns of oscillations. We observe that the number of fixed points of the so-called phase resetting map coincides with the number of distinct periods of all stable periodic solutions rather than the number of stable patterns. We demonstrate how configurational information corresponding to these distinct periods can be explored to calculate and predict the number of stable patterns.
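The basic setup (a spiking neuron whose own spikes return as delayed inhibition) can be sketched as follows. This toy version omits the absolute refractory period the paper emphasizes and uses an instantaneous inhibitory kick; all parameter values are illustrative assumptions:

```python
def lif_with_delayed_feedback(i_dc=1.5, delay=2.0, g_inh=0.8,
                              dt=0.001, t_max=200.0, v_th=1.0):
    """Leaky IF neuron in a recurrent inhibitory loop (sketch).

    Each spike schedules, after a fixed delay, an instantaneous
    inhibitory kick v -> v - g_inh.  Subthreshold: dv/dt = -v + i_dc.
    Returns the spike times.
    """
    v, t = 0.0, 0.0
    pending = []              # arrival times of delayed feedback kicks
    spike_times = []
    while t < t_max:
        if pending and pending[0] <= t:
            pending.pop(0)
            v -= g_inh        # delayed self-inhibition arrives
        v += dt * (-v + i_dc)
        if v >= v_th:
            spike_times.append(t)
            pending.append(t + delay)
            v = 0.0
        t += dt
    return spike_times
```

Varying the delay relative to the intrinsic interspike interval selects among different periodic firing patterns, which is the multistability the paper characterizes through the phase resetting map.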


