spiking neuron model
Recently Published Documents

TOTAL DOCUMENTS: 128 (five years: 30)
H-INDEX: 14 (five years: 1)

2021 ◽  
Vol 15 ◽  
Author(s):  
Emiliano Trimarco ◽  
Pierandrea Mirino ◽  
Daniele Caligiore

Empirical evidence suggests that children with autism spectrum disorder (ASD) show abnormal behavior during delay eyeblink conditioning: a higher conditioned response learning rate and an earlier peak latency of the conditioned response signal. The neuronal mechanisms underlying this autistic behavioral phenotype are still unclear. Here, we use a physiologically constrained spiking neuron model of the cerebellar-cortical system to investigate which features are critical to explaining atypical learning in ASD. Significantly, computer simulations run with the model suggest that the higher conditioned response learning rate mainly depends on the reduced number of Purkinje cells, whereas the earlier peak latency mainly depends on the hyper-connectivity of the cerebellum with sensory and motor cortex. Notably, the model has been validated by reproducing behavioral data collected in studies with real children. Overall, this article is a starting point for understanding the link between behavior and its neurobiological basis in ASD learning. At the end of the paper, we discuss how this knowledge could be critical for devising new treatments.


2021 ◽  
Author(s):  
Giuseppe de Alteriis ◽  
Enrico Cataldo ◽  
Alberto Mazzoni ◽  
Calogero Maria Oddo

The Izhikevich artificial spiking neuron model is among the most widely used models in neuromorphic engineering and computational neuroscience, owing to the affordable computational effort required to discretize it and to its biological plausibility. It has also been adopted for applications with limited computational resources in embedded systems. It is therefore important to strike a compromise between error and computational expense when solving the model's equations numerically. Here we investigate the effects of discretization and identify the solver that realizes the best compromise between accuracy and computational cost for a given budget of Floating Point Operations per Second (FLOPS). We considered three fixed-step Ordinary Differential Equation (ODE) solvers frequently used in computational neuroscience: the Euler method, the Runge-Kutta 2 method, and the Runge-Kutta 4 method. To quantify the error produced by each solver, we used the Victor-Purpura spike train distance from an ideal solution of the ODE. Counterintuitively, we found that simple methods such as Euler and Runge-Kutta 2 can outperform more complex ones (i.e., Runge-Kutta 4) in the numerical solution of the Izhikevich model when the same FLOPS are allocated in the comparison. Moreover, we quantified the neuron rest time (with input under threshold, resulting in no output spikes) necessary for the numerical solution to converge to the ideal solution and thereby cancel the error accumulated during the spike train; in this analysis we found that the required rest time is independent of the firing rate and the spike train duration. Our results generalize straightforwardly to other spiking neuron models and provide a systematic analysis of fixed-step neural ODE solvers, towards a trade-off between discretization accuracy and computational cost.
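The fixed-step integration being compared can be illustrated with the cheapest of the three solvers. Below is a minimal forward-Euler discretization of the Izhikevich equations with the standard regular-spiking parameters (a, b, c, d) from Izhikevich's original formulation; the step size `dt` and input current `I` are illustrative choices, not the paper's benchmark settings.

```python
import numpy as np

def izhikevich_euler(I, dt=0.5, a=0.02, b=0.2, c=-65.0, d=8.0, T=200.0):
    """Forward-Euler integration of the Izhikevich model:
    dv/dt = 0.04 v^2 + 5 v + 140 - u + I
    du/dt = a (b v - u), with reset v -> c, u -> u + d when v >= 30 mV."""
    steps = int(T / dt)
    v, u = c, b * c
    vs = np.empty(steps)
    spike_times = []
    for k in range(steps):
        if v >= 30.0:                 # spike: clamp the trace, then reset
            vs[k] = 30.0
            v, u = c, u + d
            spike_times.append(k * dt)
        else:
            vs[k] = v
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
        u += dt * a * (b * v - u)
    return vs, spike_times

vs, spike_times = izhikevich_euler(I=10.0)
print(f"{len(spike_times)} spikes in 200 ms")
```

A Runge-Kutta 2 step costs roughly twice the floating point operations of an Euler step, so under a fixed FLOPS budget Euler can afford a step size about half as large; that is the root of the trade-off the authors analyze.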


2021 ◽  
pp. 187-194
Author(s):  
Anton Korsakov ◽  
Aleksandr Bakhshiev ◽  
Lyubov Astapova ◽  
Lev Stankevich

2021 ◽  
Vol 2094 (3) ◽  
pp. 032032
Author(s):  
L A Astapova ◽  
A M Korsakov ◽  
A V Bakhshiev ◽  
E A Eremenko ◽  
E Yu Smirnova

Abstract One direction of development within the neuromorphic approach is the construction of anatomically faithful models of brain networks that account for the structural complexity of neurons and the adaptation of connections between them, together with learning algorithms for such models. In this work, we use a previously presented compartmental spiking model of a neuron, which describes the structure (dendritic tree, soma, synapses) and behaviour (temporal and spatial signal summation, generation of action potentials, stimulation and suppression of electrical activity) of a biological neuron. An algorithm for the structural organization of neuron models into a spiking neural network is proposed for recognizing an arbitrary impulse pattern by introducing inhibitory synapses between trained neuron models. The dynamically adapting neuron models are trained with a previously proposed algorithm that automatically selects parameters such as soma size, dendrite length, and the number of synapses on each dendrite, so as to induce a temporal response at the output that depends on the input pattern, which is encoded using a time window and temporal delays in a vector of single spikes, each arriving at a separate dendrite of the neuron. The developed algorithms are evaluated on the Iris dataset classification problem with four training examples per class. As a result of classification, separate disjoint clusters are formed, which demonstrates the applicability of the proposed spiking neural network with a dynamically changing element structure to pattern recognition and classification.
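The time-window encoding described above can be sketched with a generic latency code, in which larger feature values fire earlier within the window. This is an illustration of the idea, not the authors' exact scheme, and the window length is an arbitrary choice.

```python
def encode_latency(features, window=10.0):
    """Map a feature vector (values normalized to [0, 1]) to one spike time
    per dendrite within a time window: larger values fire earlier.
    Hypothetical scheme illustrating the time-window/latency idea."""
    return [round(window * (1.0 - x), 3) for x in features]

print(encode_latency([0.0, 0.5, 1.0]))  # → [10.0, 5.0, 0.0]
```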


2021 ◽  
Vol 15 ◽  
Author(s):  
Xiaoyan Fang ◽  
Shukai Duan ◽  
Lidan Wang

The Hodgkin-Huxley (HH) spiking neuron model reproduces the dynamic characteristics of the neuron by mimicking the action potential, ionic channels, and spiking behaviors. The memristor is a nonlinear device with variable resistance. In this paper, the memristor is introduced into the HH spiking model, and the memristive Hodgkin-Huxley spiking neuron model (MHH) is presented. We experimentally compare the HH and MHH spiking models by applying different stimuli. First, an individual current pulse is injected into each model, and the resulting action potentials, current densities, and conductances are compared. Second, a reverse single-pulse stimulus and a series of pulse stimuli are applied to the two models, and the effects of current density and action time on the production of the action potential are analyzed. Finally, a sinusoidal current stimulus is applied to the two models, and various spiking behaviors are realized by adjusting its frequency. We experimentally demonstrate that the MHH spiking model generates more action potentials than the HH spiking model and takes a shorter time to change the memductance. The reverse stimulus cannot activate the action potential in either model. The MHH spiking model produces smoother waveforms and returns to the resting potential faster. The larger the external stimulus, the faster the action potential is generated and the more noticeable the change in conductance. Meanwhile, the MHH spiking model reproduces the various spiking patterns of neurons.
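For reference, the classical (non-memristive) HH baseline that the MHH model extends can be simulated in a few lines. The sketch below uses the standard squid-axon parameters and plain Euler integration; it reproduces ordinary HH dynamics only — the memristive conductance of the MHH model is not modeled here.

```python
import numpy as np

# Standard HH rate functions (squid axon); voltages in mV, time in ms.
def a_n(V): return 0.01 * (V + 55) / (1 - np.exp(-(V + 55) / 10))
def b_n(V): return 0.125 * np.exp(-(V + 65) / 80)
def a_m(V): return 0.1 * (V + 40) / (1 - np.exp(-(V + 40) / 10))
def b_m(V): return 4.0 * np.exp(-(V + 65) / 18)
def a_h(V): return 0.07 * np.exp(-(V + 65) / 20)
def b_h(V): return 1.0 / (1 + np.exp(-(V + 35) / 10))

def hh(I=10.0, dt=0.01, T=50.0):
    """Euler integration of the HH membrane equation with Na, K, leak."""
    gNa, gK, gL = 120.0, 36.0, 0.3       # max conductances, mS/cm^2
    ENa, EK, EL = 50.0, -77.0, -54.4     # reversal potentials, mV
    C = 1.0                              # membrane capacitance, uF/cm^2
    V = -65.0
    n, m, h = 0.317, 0.053, 0.596        # gating variables at rest
    Vs = []
    for _ in range(int(T / dt)):
        INa = gNa * m**3 * h * (V - ENa)
        IK = gK * n**4 * (V - EK)
        IL = gL * (V - EL)
        V += dt * (I - INa - IK - IL) / C
        n += dt * (a_n(V) * (1 - n) - b_n(V) * n)
        m += dt * (a_m(V) * (1 - m) - b_m(V) * m)
        h += dt * (a_h(V) * (1 - h) - b_h(V) * h)
        Vs.append(V)
    return Vs

Vs = hh()
print(max(Vs) > 0)  # action potentials overshoot 0 mV at I = 10 uA/cm^2
```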


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Gianluca Susi ◽  
Pilar Garcés ◽  
Emanuele Paracone ◽  
Alessandro Cristini ◽  
Mario Salerno ◽  
...  

Abstract Neural modelling tools are increasingly employed to describe, explain, and predict the human brain's behavior. Among them, spiking neural networks (SNNs) make it possible to simulate neural activity at the level of single neurons, but their use is often limited by the processing capabilities and memory they require. Emerging applications where a low energy burden is required (e.g. implanted neuroprostheses) motivate the exploration of new strategies able to capture the relevant principles of neuronal dynamics in reduced, efficient models. The recent Leaky Integrate-and-Fire with Latency (LIFL) spiking neuron model combines realistic neuronal features with efficiency, a combination of characteristics that may prove appealing for SNN-based brain modelling. In this paper we introduce FNS, the first LIFL-based SNN framework, which combines spiking/synaptic modelling with an event-driven approach, allowing us to define heterogeneous neuron groups and multi-scale connectivity with delayed connections and plastic synapses. FNS allows multi-thread, precise simulations, integrating a novel parallelization strategy and a mechanism of periodic dumping. We evaluate the performance of FNS in terms of simulation time and memory use, and compare it with that of neuronal models having a similar neurocomputational profile implemented in NEST, showing that FNS performs better on both counts. FNS can be advantageously used to explore the interaction within and between populations of spiking neurons, even over long time-scales and with a limited hardware configuration.
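The core of an event-driven approach, as opposed to clock-driven stepping, is that neuron state is updated only when a spike event arrives, with membrane decay applied lazily. The toy loop below illustrates that idea with an exponentially decaying potential and a priority queue; it is a sketch of the general technique, not FNS's actual data structures, and the constant delay and parameter names are illustrative.

```python
import heapq
import math

def simulate(external_spikes, weights, delay=1.0, v_th=1.0, tau=20.0):
    """Event-driven simulation sketch. external_spikes: list of (time, neuron)
    pairs; weights[i][j]: synaptic weight from neuron i to neuron j (0 = no
    connection). Each membrane potential is brought up to date only when an
    event actually reaches that neuron (lazy exponential decay)."""
    n = len(weights)
    v = [0.0] * n        # membrane potential at each neuron's last update
    last = [0.0] * n     # time of each neuron's last update
    queue = list(external_spikes)
    heapq.heapify(queue)
    emitted = []
    while queue:
        t, src = heapq.heappop(queue)        # next spike in time order
        emitted.append((t, src))
        for dst in range(n):
            w = weights[src][dst]
            if w == 0.0:
                continue
            t_arr = t + delay                # arrival after axonal delay
            v[dst] = v[dst] * math.exp(-(t_arr - last[dst]) / tau) + w
            last[dst] = t_arr
            if v[dst] >= v_th:               # threshold crossed: fire, reset
                v[dst] = 0.0
                heapq.heappush(queue, (t_arr, dst))
    return emitted

# A suprathreshold synapse from neuron 0 to neuron 1:
print(simulate([(0.0, 0)], [[0.0, 1.5], [0.0, 0.0]]))  # → [(0.0, 0), (1.0, 1)]
```

Between events, nothing is computed at all, which is why event-driven simulation pays off when network activity is sparse.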


2021 ◽  
Vol 15 ◽  
Author(s):  
Udaya B. Rongala ◽  
Jonas M. D. Enander ◽  
Matthias Kohler ◽  
Gerald E. Loeb ◽  
Henrik Jörntell

Recurrent circuitry components are distributed widely within the brain, including both excitatory and inhibitory synaptic connections. Recurrent neuronal networks have potential stability problems, perhaps a predisposition to epilepsy. More generally, instability risks making internal representations of information unreliable. To assess the inherent stability properties of such recurrent networks, we tested a linear summation, non-spiking neuron model with and without a “dynamic leak”, corresponding to the low-pass filtering of synaptic input current by the RC circuit of the biological membrane. We first show that the output of this neuron model, in either of its two forms, follows its input at a higher fidelity than a wide range of spiking neuron models across a range of input frequencies. Then we constructed fully connected recurrent networks with equal numbers of excitatory and inhibitory neurons and randomly distributed weights across all synapses. When the networks were driven by pseudorandom sensory inputs with varying frequency, the recurrent network activity tended to induce high frequency self-amplifying components, sometimes evident as distinct transients, which were not present in the input data. The addition of a dynamic leak based on known membrane properties consistently removed such spurious high frequency noise across all networks. Furthermore, we found that the neuron model with dynamic leak imparts a network stability that seamlessly scales with the size of the network, conduction delays, the input density of the sensory signal and a wide range of synaptic weight distributions. Our findings suggest that neuronal dynamic leak serves the beneficial function of protecting recurrent neuronal circuitry from the self-induction of spurious high frequency signals, thereby permitting the brain to utilize this architectural circuitry component regardless of network size or recurrency.
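The "dynamic leak" described here is, in signal-processing terms, a first-order RC low-pass filter on the summed synaptic drive. Below is a minimal sketch of a linear-summation neuron with and without that leak; the time constant and step size are illustrative, not the paper's values.

```python
import numpy as np

def linear_neuron(inputs, weights, tau=0.01, dt=0.001, dynamic_leak=True):
    """Non-spiking linear-summation neuron. With dynamic_leak, the weighted
    input sum passes through a first-order RC low-pass (discrete leaky
    integrator), mimicking membrane filtering of synaptic current."""
    drive = inputs @ weights          # (T, n_in) @ (n_in,) -> (T,)
    if not dynamic_leak:
        return drive
    out = np.zeros_like(drive)
    alpha = dt / tau                  # filter coefficient, 0 < alpha < 1
    for t in range(1, len(drive)):
        out[t] = out[t - 1] + alpha * (drive[t] - out[t - 1])
    return out

# White-noise input: the leak strongly attenuates sample-to-sample jumps,
# which is the kind of high-frequency component the paper reports removing.
rng = np.random.default_rng(0)
x = rng.normal(size=(1000, 4))
w = rng.normal(size=4)
raw = linear_neuron(x, w, dynamic_leak=False)
smooth = linear_neuron(x, w)
print(np.var(np.diff(smooth)) < np.var(np.diff(raw)))  # → True
```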


2021 ◽  
Vol 15 ◽  
Author(s):  
Beck Strohmer ◽  
Rasmus Karnøe Stagsted ◽  
Poramate Manoonpong ◽  
Leon Bonde Larsen

Researchers working with neural networks have historically focused either on non-spiking neurons, which are tractable to run on computers, or on more biologically plausible spiking neurons, which typically require special hardware. However, in nature, homogeneous networks of neurons do not exist; instead, spiking and non-spiking neurons cooperate, each bringing a different set of advantages. A well-researched biological example of such a mixed network is a sensorimotor pathway, responsible for mapping sensory inputs to behavioral changes. This type of pathway is also well researched in robotics, where it is applied to achieve closed-loop operation of legged robots by adapting the amplitude, frequency, and phase of the motor output. In this paper we investigate how spiking and non-spiking neurons can be combined to create a sensorimotor pathway capable of shaping network output based on analog input. We propose sub-threshold operation of an existing spiking neuron model to create a non-spiking neuron able to interpret analog information and communicate with spiking neurons. The validity of this methodology is confirmed through simulation of a closed-loop amplitude-regulating network inspired by the internal feedback loops insects use for posture control. Additionally, we show that non-spiking neurons can effectively manipulate post-synaptic spiking neurons in an event-based architecture. The ability to work with mixed networks provides an opportunity for researchers to investigate new network architectures for adaptive controllers, potentially improving locomotion strategies of legged robots.
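Sub-threshold operation can be illustrated with a generic leaky integrate-and-fire neuron, used here as a stand-in for whichever spiking model the authors employ: if the analog drive is scaled so that its steady state stays below the firing threshold, the neuron never resets and its membrane potential tracks the input as a graded signal. All parameters below are illustrative.

```python
def lif_step(v, i_in, dt=0.001, tau=0.02, v_rest=0.0, v_th=1.0):
    """One Euler step of a leaky integrate-and-fire neuron.
    Returns (new_v, spiked). With i_in < v_th the steady state v* = i_in
    stays sub-threshold, so the neuron behaves as a non-spiking unit whose
    membrane potential is an analog readout of the input."""
    v = v + (dt / tau) * (v_rest - v + i_in)
    if v >= v_th:
        return v_rest, True    # fire and reset (not reached sub-threshold)
    return v, False

v, fired = 0.0, False
for _ in range(5000):
    v, spiked = lif_step(v, i_in=0.5)   # constant analog drive at half threshold
    fired = fired or spiked
print(fired, round(v, 3))  # → False 0.5
```

The same neuron, driven above threshold, spikes normally, which is what lets one model serve as both the spiking and non-spiking element of the mixed network.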

