Minimal Spiking Neuron for Solving Multilabel Classification Tasks

2020, Vol 32 (7), pp. 1408-1429
Author(s): Jakub Fil, Dominique Chu

The multispike tempotron (MST) is a powerful single spiking neuron model that can solve complex supervised classification tasks. It is also internally complex, computationally expensive to evaluate, and unsuitable for neuromorphic hardware. Here we aim to understand whether it is possible to simplify the MST model while retaining its ability to learn and process information. To this end, we introduce a family of generalized neuron models (GNMs) that are a special case of the spike response model and much simpler and cheaper to simulate than the MST. We find that over a wide range of parameters, the GNM can learn at least as well as the MST does. We identify the temporal autocorrelation of the membrane potential as the most important ingredient of the GNM that enables it to classify multiple spatiotemporal patterns. We also interpret the GNM as a chemical system, thus conceptually bridging computation by neural networks with molecular information processing. We conclude the letter by proposing alternative training approaches for the GNM, including error trace learning and error backpropagation.
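As a rough, illustrative sketch (not the authors' implementation), a neuron of this family can be written as a leaky weighted sum with a firing threshold; the decay factor is what gives the membrane potential the temporal autocorrelation the letter identifies as essential. All names and parameter values below are assumptions:

```python
import numpy as np

def simulate_gnm(spike_inputs, weights, decay=0.9, threshold=1.0):
    """Minimal sketch of a generalized neuron model (GNM).

    spike_inputs: (T, N) binary array of presynaptic spikes per time step.
    weights: (N,) synaptic weights.
    decay: per-step leak; values near 1 give the membrane potential
           strong temporal autocorrelation, values near 0 almost none.
    """
    output_spike_times = []
    v = 0.0  # membrane potential
    for t in range(spike_inputs.shape[0]):
        v = decay * v + spike_inputs[t] @ weights  # leaky integration
        if v >= threshold:
            output_spike_times.append(t)
            v = 0.0  # reset after an output spike
    return output_spike_times
```

Sweeping `decay` toward zero removes the autocorrelation and, per the letter's findings, should degrade the model's ability to separate multiple spatiotemporal patterns.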

2020
Author(s): Udaya B. Rongala, Jonas M. D. Enander, Matthias Kohler, Gerald E. Loeb, Henrik Jörntell

Recurrent circuitry components are distributed widely within the brain, including both excitatory and inhibitory synaptic connections. Recurrent neuronal networks have potential stability problems, perhaps a predisposition to epilepsy. More generally, instability risks making internal representations of information unreliable. To assess the inherent stability properties of such recurrent networks, we tested a linear summation, non-spiking neuron model with and without a 'dynamic leak', corresponding to the low-pass filtering of synaptic input current by the RC circuit of the biological membrane. We first show that the output of this neuron model, in either of its two forms, follows its input at a higher fidelity than a wide range of spiking neuron models across a range of input frequencies. Then we constructed fully connected recurrent networks with equal numbers of excitatory and inhibitory neurons and randomly distributed weights across all synapses. When the networks were driven by pseudorandom sensory inputs with varying frequency, the recurrent network activity tended to induce high frequency self-amplifying components, sometimes evident as distinct transients, which were not present in the input data. The addition of a dynamic leak based on known membrane properties consistently removed such spurious high frequency noise across all networks. Furthermore, we found that the neuron model with dynamic leak imparts a network stability that seamlessly scales with the size of the network, conduction delays, the input density of the sensory signal and a wide range of synaptic weight distributions. Our findings suggest that neuronal dynamic leak serves the beneficial function of protecting recurrent neuronal circuitry from the self-induction of spurious high frequency signals, thereby permitting the brain to utilize this architectural circuitry component regardless of network size or recurrency.

Author Summary
It is known that neurons of the brain are extensively interconnected, which can result in many recurrent loops within its neuronal network. Such loops are prone to instability. Here we wanted to explore the potential noise and instability that could result in recurrently connected neuronal networks across a range of conditions. To facilitate such simulations, we developed a non-spiking neuron model that captures the main characteristics of conductance-based neuron models of Hodgkin-Huxley type, but is more computationally efficient. We found that a so-called dynamic leak, which is a natural consequence of the way the membrane of the neuron is constructed and how the neuron integrates synaptic inputs, provided protection against spurious, high frequency noise that tended to arise in our recurrent networks of varying size. We propose that this linear summation model provides a stable and useful tool for exploring the computational behavior of recurrent neural networks.
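The 'dynamic leak' corresponds to first-order low-pass filtering of the summed synaptic current by the membrane's RC circuit. A minimal sketch under that reading, with hypothetical parameter names and values (not the authors' code):

```python
import numpy as np

def linear_summation_neuron(inputs, weights, dt=1e-3, tau=10e-3, dynamic_leak=True):
    """Non-spiking linear summation neuron, with optional dynamic leak.

    inputs: (T, N) array of presynaptic activity sampled at step dt.
    tau: assumed membrane (RC) time constant of the low-pass filter.
    """
    drive = inputs @ weights          # instantaneous weighted summation
    if not dynamic_leak:
        return drive                  # plain linear summation model
    out = np.zeros_like(drive)
    a = 0.0
    alpha = dt / tau                  # forward-Euler step of da/dt = (drive - a)/tau
    for t in range(len(drive)):
        a += alpha * (drive[t] - a)   # RC low-pass filtering of the drive
        out[t] = a
    return out
```

In a recurrent network built from such units, the filtered variant attenuates components above roughly 1/(2*pi*tau), which is consistent with the reported suppression of self-induced high-frequency noise.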


2018, Vol 30 (3), pp. 670-707
Author(s): Dorian Florescu, Daniel Coca

Inferring mathematical models of sensory processing systems directly from input-output observations, while making the fewest assumptions about the model equations and the types of measurements available, is still a major issue in computational neuroscience. This letter introduces two new approaches for identifying sensory circuit models consisting of linear and nonlinear filters in series with spiking neuron models, based only on the sampled analog input to the filter and the recorded spike train output of the spiking neuron. For an ideal integrate-and-fire neuron model, the first algorithm can identify the spiking neuron parameters as well as the structure and parameters of an arbitrary nonlinear filter connected to it. The second algorithm can identify the parameters of the more general leaky integrate-and-fire spiking neuron model, as well as the parameters of an arbitrary linear filter connected to it. Numerical studies involving simulated and real experimental recordings are used to demonstrate the applicability and evaluate the performance of the proposed algorithms.
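To make the setting concrete, here is a sketch of the forward model underlying the second identification problem: an analog signal passes through a linear filter and drives a leaky integrate-and-fire (LIF) neuron, and only the output spike times are observed. Parameter names follow common LIF conventions and are assumptions, not the letter's notation:

```python
def lif_spike_times(u, dt=1e-4, tau=20e-3, threshold=1.0, bias=0.5):
    """Leaky integrate-and-fire neuron driven by a filtered input u.

    u: sampled analog output of the (unknown) linear filter.
    Returns the spike times that an identification algorithm would
    use, together with the filter input, to recover tau, threshold,
    and bias.
    """
    v, spikes = 0.0, []
    for k, uk in enumerate(u):
        v += dt * (-v / tau + uk + bias)  # leaky integration
        if v >= threshold:
            spikes.append(k * dt)
            v = 0.0  # reset to rest after each spike
    return spikes
```

The identification task is the inverse of this simulation: given many (input, spike train) pairs, recover both the filter and the neuron parameters.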


2008, Vol 20 (1), pp. 65-90
Author(s): Jeffrey J. Lovelace, Krzysztof J. Cios

This letter introduces a very simple, biologically inspired spiking neuron model. The model retains only crucial aspects of biological neurons: a network of time-delayed weighted connections to other neurons, threshold-based generation of action potentials, an action potential frequency proportional to stimulus intensity, and interneuron communication through time-varying potentials that last longer than the associated action potentials. The key difference between this model and existing spiking neuron models is its great simplicity: it is essentially a collection of linear and discontinuous functions, with no differential equations to solve. The model's ability to operate in a complex network was tested by using it as the basis of a network implementing a hypothetical echolocation system. The system consists of an emitter and two receivers. The outputs of the receivers are connected to a network of spiking neurons (using the proposed model) to form a detection grid that acts as a map of object locations in space. The network uses differences in the arrival times of the signals to determine the azimuthal angle of the source, and time of flight to calculate the distance. The activation patterns observed indicate that for a network of spiking neurons that uses only time delays to determine source locations, the spatial discrimination varies with the number and relative spacing of objects. These results are similar to those observed in animals that use echolocation.
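The geometry that the detection grid encodes can be stated directly. A sketch under the usual far-field assumption, with illustrative constants (receiver spacing, speed of sound) not taken from the letter:

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air, approximate

def azimuth_from_arrival_difference(dt_arrival, receiver_spacing):
    """Azimuthal angle (radians) from the difference in arrival times
    at the two receivers: sin(theta) = c * dt / d (far-field)."""
    s = SPEED_OF_SOUND * dt_arrival / receiver_spacing
    return math.asin(max(-1.0, min(1.0, s)))  # clamp against noise

def distance_from_time_of_flight(tof):
    """Object distance from the round-trip emitter-to-object time of flight."""
    return SPEED_OF_SOUND * tof / 2.0
```

In the letter's network this arithmetic is not computed explicitly; it emerges from coincidence detection across time-delayed connections, with each grid neuron firing when a particular delay combination aligns.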


2022
Author(s): Anguo Zhang, Ying Han, Jing Hu, Yuzhen Niu, Yueming Gao, ...

We propose two simple and effective spiking neuron models that improve the response time of the conventional spiking neural network. The proposed neuron models adaptively tune the presynaptic input current depending on the input received from their presynaptic connections and on subsequent neuron firing events. We analyze and derive the homeostatic convergence of the firing activity of the proposed models. We experimentally verify and compare the models on the MNIST handwritten digit and Fashion-MNIST classification tasks, and show that the proposed neuron models significantly increase the response speed to the input signal.
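The abstract does not give the update rule, but an adaptive, homeostatic scaling of the presynaptic input current can be sketched as a gain nudged by recent firing events; every name and constant below is an assumption:

```python
def adaptive_input_step(i_syn, fired, gain, target_rate=0.05, eta=0.01):
    """One homeostatic update in the spirit of the proposed models.

    i_syn: raw presynaptic input current at this time step.
    fired: True if the neuron spiked on the previous step.
    The gain rises while the neuron is quiet and falls when it fires
    more often than target_rate, so activity converges toward the
    target instead of running away.
    """
    gain += eta * (target_rate - float(fired))  # push firing rate toward target
    return gain * i_syn, gain
```

A faster response to new input follows because a quiescent neuron accumulates gain, so its first spikes arrive after fewer input steps than with a fixed input current.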


Author(s): Wulfram Gerstner, Werner M. Kistler
