Identification of Linear and Nonlinear Sensory Processing Circuits from Spiking Neuron Data

2018 ◽  
Vol 30 (3) ◽  
pp. 670-707 ◽  
Author(s):  
Dorian Florescu ◽  
Daniel Coca

Inferring mathematical models of sensory processing systems directly from input-output observations, while making the fewest assumptions about the model equations and the types of measurements available, is still a major issue in computational neuroscience. This letter introduces two new approaches for identifying sensory circuit models consisting of linear and nonlinear filters in series with spiking neuron models, based only on the sampled analog input to the filter and the recorded spike train output of the spiking neuron. For an ideal integrate-and-fire neuron model, the first algorithm can identify the spiking neuron parameters as well as the structure and parameters of an arbitrary nonlinear filter connected to it. The second algorithm can identify the parameters of the more general leaky integrate-and-fire spiking neuron model, as well as the parameters of an arbitrary linear filter connected to it. Numerical studies involving simulated and real experimental recordings are used to demonstrate the applicability and evaluate the performance of the proposed algorithms.
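The letter identifies filter-plus-spiking-neuron cascades from a sampled analog input and the recorded spike train alone. A minimal sketch of the forward model being identified, a leaky integrate-and-fire neuron mapping an analog drive to spike times, may help fix ideas; the parameter values below are illustrative and this is the simulation direction only, not the authors' identification algorithms.

```python
import numpy as np

def lif_spike_train(u, dt=1e-3, tau=0.02, R=1.0, threshold=1.0, v_reset=0.0):
    """Simulate a leaky integrate-and-fire neuron driven by a sampled
    analog input u. Illustrative forward model only; parameter values
    are assumptions, not taken from the letter."""
    v = v_reset
    spike_times = []
    for k, u_k in enumerate(u):
        # Forward-Euler step of tau * dv/dt = -v + R * u
        v += dt / tau * (-v + R * u_k)
        if v >= threshold:
            spike_times.append(k * dt)
            v = v_reset  # reset the membrane after each spike
    return spike_times

# A constant suprathreshold drive yields a perfectly regular spike train.
spikes = lif_spike_train(np.full(1000, 2.0))
```

The identification problem tackled in the letter is the inverse of this map: recovering the filter and the neuron parameters given only `u` and `spikes`.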

2019 ◽  
Vol 16 (9) ◽  
pp. 3897-3905
Author(s):  
Pankaj Kumar Kandpal ◽  
Ashish Mehta

In the present article, a two-dimensional spiking neuron model is compared with the four-dimensional integrate-and-fire neuron model (IFN) using the error-correction back-propagation learning algorithm (error-correction learning). A comparative study has been carried out on the basis of several parameters, such as number of iterations, execution time, and misclassification rate. The authors chose the five-bit parity problem and the Iris classification problem for the present study. Simulation results show that both models are capable of performing the classification tasks, but the single spiking neuron model, being only two-dimensional, is less complex than the integrate-and-fire neuron and produces better results. The classification performance of the single integrate-and-fire neuron model is not poor in itself, but owing to its more complex four-dimensional architecture its misclassification rate is higher than that of the single spiking neuron model; in this sense the integrate-and-fire neuron model is less capable than the spiking neuron model of solving classification problems.
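The error-correction learning named in the abstract is, at its core, the delta rule: the weight vector is nudged in proportion to the difference between target and output. A minimal sketch on a toy task follows; the threshold unit, learning rate, and the AND task are assumptions for illustration (AND is linearly separable, unlike the five-bit parity benchmark used in the article).

```python
import numpy as np

def delta_rule_train(X, y, lr=0.1, epochs=100):
    """Plain error-correction (delta rule) training of a single threshold
    unit. Hyperparameters are illustrative, not taken from the article."""
    w = np.zeros(X.shape[1] + 1)                 # weights plus a bias term
    Xb = np.hstack([X, np.ones((len(X), 1))])    # append constant bias input
    for _ in range(epochs):
        for x, target in zip(Xb, y):
            out = 1.0 if w @ x >= 0 else 0.0
            w += lr * (target - out) * x         # error-correction update
    return w, Xb

# Learn logical AND with a single unit.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
y = np.array([0, 0, 0, 1], float)
w, Xb = delta_rule_train(X, y)
preds = (Xb @ w >= 0).astype(float)
```

The spiking models compared in the article embed this same error signal in richer (two- and four-dimensional) neuron dynamics, which is what lets a single neuron handle tasks a plain threshold unit cannot.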


2009 ◽  
Vol 21 (2) ◽  
pp. 353-359 ◽  
Author(s):  
Hans E. Plesser ◽  
Markus Diesmann

Lovelace and Cios ( 2008 ) recently proposed a very simple spiking neuron (VSSN) model for simulations of large neuronal networks as an efficient replacement for the integrate-and-fire neuron model. We argue that the VSSN model falls behind key advances in neuronal network modeling over the past 20 years, in particular, techniques that permit simulators to compute the state of the neuron without repeated summation over the history of input spikes and to integrate the subthreshold dynamics exactly. State-of-the-art solvers for networks of integrate-and-fire model neurons are substantially more efficient than the VSSN simulator and allow routine simulations of networks of some 10^5 neurons and 10^9 connections on moderate computer clusters.
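The exact subthreshold integration the authors refer to exploits the fact that the leaky integrate-and-fire equation is linear, so the state can be advanced one full time step with a precomputed exponential propagator, with no summation over the history of input spikes. A minimal sketch, with illustrative parameter values not taken from the letter:

```python
import numpy as np

# Exact subthreshold update for a leaky integrate-and-fire neuron with
# constant input current I over one step of length h:
#   tau dV/dt = -V + R*I
#   => V(t+h) = V(t) * exp(-h/tau) + R*I * (1 - exp(-h/tau))
# The propagator P = exp(-h/tau) is computed once, so each step costs a
# single multiply-add and is exact regardless of h (illustrative values).
tau, R, h = 0.01, 1.0, 1e-4
P = np.exp(-h / tau)

def step_exact(v, I):
    return v * P + R * I * (1.0 - P)

v = 0.0
for _ in range(10000):   # advance 1 s of simulated time
    v = step_exact(v, 1.5)
# v has converged to the steady state R*I = 1.5
```

This per-step cost is independent of how many input spikes have arrived, which is the efficiency advantage the letter contrasts with the VSSN simulator.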


2013 ◽  
Vol 23 (10) ◽  
pp. 1350171 ◽  
Author(s):  
LIEJUNE SHIAU ◽  
CARLO R. LAING

Although variability is a ubiquitous characteristic of the nervous system, under appropriate conditions neurons can generate precisely timed action potentials. Thus considerable attention has been given to the study of a neuron's output in relation to its stimulus. In this study, we consider an increasingly popular spiking neuron model, the adaptive exponential integrate-and-fire neuron. For analytical tractability, we consider its piecewise-linear variant in order to understand the responses of such neurons to periodic stimuli. There exist regions in parameter space in which the neuron is mode locked to the periodic stimulus, and instabilities of the mode locked states lead to an Arnol'd tongue structure in parameter space. We analyze mode locked solutions and examine the bifurcations that define the boundaries of the tongue structures. The theoretical analysis is in excellent agreement with numerical simulations, and this study can be used to further understand the functional features related to responses of such a model neuron to biologically realistic inputs.
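Mode locking of the kind analyzed here can be seen numerically even in the plain (non-adaptive) leaky integrate-and-fire neuron: when a periodic stimulus is close to the neuron's intrinsic rate, the spike count settles near an integer ratio of the forcing frequency. The following sketch uses an ordinary LIF neuron and illustrative parameters, not the paper's piecewise-linear adaptive exponential model.

```python
import numpy as np

# LIF neuron under sinusoidal forcing near its natural firing rate.
# All values are illustrative assumptions, not taken from the paper.
dt, tau, v_th, v_reset = 1e-5, 0.01, 1.0, 0.0
I0, A, f = 1.6, 0.5, 100.0        # DC drive, forcing amplitude, forcing (Hz)

v, spikes = 0.0, []
for k in range(100000):            # simulate 1 s
    t = k * dt
    I = I0 + A * np.sin(2 * np.pi * f * t)
    v += dt / tau * (-v + I)       # Euler step of tau dv/dt = -v + I(t)
    if v >= v_th:
        spikes.append(t)
        v = v_reset

rate = len(spikes)                 # spikes per second over the run
ratio = rate / f                   # near 1.0 when 1:1 mode locked
```

Sweeping `f` and `A` and recording where `ratio` stays pinned at rational values would trace out the Arnol'd tongue structure the paper derives analytically.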


2008 ◽  
Vol 20 (1) ◽  
pp. 65-90 ◽  
Author(s):  
Jeffrey J. Lovelace ◽  
Krzysztof J. Cios

This letter introduces a biologically inspired very simple spiking neuron model. The model retains only crucial aspects of biological neurons: a network of time-delayed weighted connections to other neurons, a threshold-based generation of action potentials, action potential frequency proportional to stimulus intensity, and interneuron communication that occurs with time-varying potentials that last longer than the associated action potentials. The key difference between this model and existing spiking neuron models is its great simplicity: it is basically a collection of linear and discontinuous functions with no differential equations to solve. The model's ability to operate in a complex network was tested by using it as a basis of a network implementing a hypothetical echolocation system. The system consists of an emitter and two receivers. The outputs of the receivers are connected to a network of spiking neurons (using the proposed model) to form a detection grid that acts as a map of object locations in space. The network uses differences in the arrival times of the signals to determine the azimuthal angle of the source and time of flight to calculate the distance. The activation patterns observed indicate that for a network of spiking neurons, which uses only time delays to determine source locations, the spatial discrimination varies with the number and relative spacing of objects. These results are similar to those observed in animals that use echolocation.
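The geometry the detection grid encodes can be sketched directly: the mean echo delay gives the distance via time of flight, and the arrival-time difference between the two receivers gives the azimuth under a far-field approximation. The function below is an illustrative closed-form version of that computation, not the letter's spiking-network implementation; the receiver spacing and sign convention are assumptions.

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air

def locate_echo(t_left, t_right, t_emit=0.0, receiver_spacing=0.1):
    """Estimate object distance and azimuth from echo arrival times at two
    receivers. Far-field approximation; positive azimuth is toward the
    later-arriving (right) receiver's opposite side by this convention."""
    t_mean = 0.5 * (t_left + t_right) - t_emit
    distance = SPEED_OF_SOUND * t_mean / 2.0     # out-and-back time of flight
    itd = t_right - t_left                       # inter-receiver time difference
    # Far-field geometry: path difference = spacing * sin(azimuth)
    s = SPEED_OF_SOUND * itd / receiver_spacing
    azimuth = math.asin(max(-1.0, min(1.0, s)))
    return distance, math.degrees(azimuth)
```

In the letter this computation is done implicitly: coincidence of delayed spikes on the detection grid selects the cell whose built-in delays match the measured arrival times.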


2020 ◽  
Author(s):  
Udaya B. Rongala ◽  
Jonas M.D. Enander ◽  
Matthias Kohler ◽  
Gerald E. Loeb ◽  
Henrik Jörntell

Recurrent circuitry components are distributed widely within the brain, including both excitatory and inhibitory synaptic connections. Recurrent neuronal networks have potential stability problems, perhaps a predisposition to epilepsy. More generally, instability risks making internal representations of information unreliable. To assess the inherent stability properties of such recurrent networks, we tested a linear summation, non-spiking neuron model with and without a ‘dynamic leak’, corresponding to the low-pass filtering of synaptic input current by the RC circuit of the biological membrane. We first show that the output of this neuron model, in either of its two forms, follows its input at a higher fidelity than a wide range of spiking neuron models across a range of input frequencies. Then we constructed fully connected recurrent networks with equal numbers of excitatory and inhibitory neurons and randomly distributed weights across all synapses. When the networks were driven by pseudorandom sensory inputs with varying frequency, the recurrent network activity tended to induce high frequency self-amplifying components, sometimes evident as distinct transients, which were not present in the input data. The addition of a dynamic leak based on known membrane properties consistently removed such spurious high frequency noise across all networks. Furthermore, we found that the neuron model with dynamic leak imparts a network stability that seamlessly scales with the size of the network, conduction delays, the input density of the sensory signal and a wide range of synaptic weight distributions.
Our findings suggest that neuronal dynamic leak serves the beneficial function of protecting recurrent neuronal circuitry from the self-induction of spurious high frequency signals, thereby permitting the brain to utilize this architectural circuitry component regardless of network size or recurrency.

Author Summary
It is known that neurons of the brain are extensively interconnected, which can result in many recurrent loops within its neuronal network. Such loops are prone to instability. Here we wanted to explore the potential noise and instability that could result in recurrently connected neuronal networks across a range of conditions. To facilitate such simulations, we developed a non-spiking neuron model that captures the main characteristics of conductance-based neuron models of Hodgkin-Huxley type, but is more computationally efficient. We found that a so-called dynamic leak, which is a natural consequence of the way the membrane of the neuron is constructed and how the neuron integrates synaptic inputs, provided protection against spurious, high frequency noise that tended to arise in our recurrent networks of varying size. We propose that this linear summation model provides a stable and useful tool for exploring the computational behavior of recurrent neural networks.
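The ‘dynamic leak’ described in the abstract acts as a first-order RC low-pass filter on the summed synaptic drive, which is why it suppresses high-frequency components while leaving slow signals largely intact. A minimal sketch of a linear summation unit with and without such a leak, with illustrative parameter values not taken from the paper:

```python
import numpy as np

def linear_summation_neuron(inputs, weights, tau_leak=None, dt=1e-3):
    """Non-spiking linear summation unit. If tau_leak is given, the summed
    drive passes through a first-order low-pass filter (the 'dynamic
    leak'); parameter values are illustrative assumptions."""
    drive = inputs @ weights          # weighted sum at every time step
    if tau_leak is None:
        return drive                  # static model: output follows input
    out = np.empty_like(drive)
    y = 0.0
    alpha = dt / tau_leak
    for k, d in enumerate(drive):
        y += alpha * (d - y)          # RC low-pass: tau * dy/dt = d - y
        out[k] = y
    return out

# A 200 Hz input component passes the static unit unchanged but is
# strongly attenuated by the leaky unit (cutoff ~ 1/(2*pi*tau) ~ 8 Hz).
t = np.arange(0.0, 1.0, 1e-3)
x = np.sin(2 * np.pi * 200 * t)[:, None]
w = np.array([1.0])
static = linear_summation_neuron(x, w)
leaky = linear_summation_neuron(x, w, tau_leak=0.02)
```

In the paper's recurrent networks this same attenuation is what removes the self-amplifying high-frequency components before they can circulate around the loops.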


2016 ◽  
Vol 9 (1) ◽  
pp. 117-134 ◽  
Author(s):  
Peter Duggins ◽  
Terrence C. Stewart ◽  
Xuan Choo ◽  
Chris Eliasmith
