A Non-spiking Neuron Model With Dynamic Leak to Avoid Instability in Recurrent Networks

2021 ◽  
Vol 15 ◽  
Author(s):  
Udaya B. Rongala ◽  
Jonas M. D. Enander ◽  
Matthias Kohler ◽  
Gerald E. Loeb ◽  
Henrik Jörntell

Recurrent circuitry components are distributed widely within the brain, including both excitatory and inhibitory synaptic connections. Recurrent neuronal networks have potential stability problems, perhaps a predisposition to epilepsy. More generally, instability risks making internal representations of information unreliable. To assess the inherent stability properties of such recurrent networks, we tested a linear summation, non-spiking neuron model with and without a “dynamic leak”, corresponding to the low-pass filtering of synaptic input current by the RC circuit of the biological membrane. We first show that the output of this neuron model, in either of its two forms, follows its input at a higher fidelity than a wide range of spiking neuron models across a range of input frequencies. Then we constructed fully connected recurrent networks with equal numbers of excitatory and inhibitory neurons and randomly distributed weights across all synapses. When the networks were driven by pseudorandom sensory inputs with varying frequency, the recurrent network activity tended to induce high frequency self-amplifying components, sometimes evident as distinct transients, which were not present in the input data. The addition of a dynamic leak based on known membrane properties consistently removed such spurious high frequency noise across all networks. Furthermore, we found that the neuron model with dynamic leak imparts a network stability that seamlessly scales with the size of the network, conduction delays, the input density of the sensory signal and a wide range of synaptic weight distributions. Our findings suggest that neuronal dynamic leak serves the beneficial function of protecting recurrent neuronal circuitry from the self-induction of spurious high frequency signals, thereby permitting the brain to utilize this architectural circuitry component regardless of network size or recurrency.
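The mechanism described in this abstract, linear summation of weighted synaptic input with an optional first-order low-pass "dynamic leak", can be illustrated with a short sketch. The Python snippet below is a minimal illustration under assumed parameters (the function names, `tau_leak`, and the toy input are not from the paper), not the authors' published implementation:

```python
import numpy as np

def step_linear_summation(weights, inputs):
    """Non-spiking output as a plain weighted sum of presynaptic activities."""
    return float(np.dot(weights, inputs))

def step_with_dynamic_leak(weights, inputs, a_prev, dt=0.001, tau_leak=0.01):
    """Same weighted sum, low-pass filtered by a first-order 'dynamic leak':
    a forward-Euler step of da/dt = (sum_i w_i * x_i - a) / tau_leak,
    mimicking RC filtering of synaptic current by the membrane."""
    drive = np.dot(weights, inputs)
    return a_prev + dt * (drive - a_prev) / tau_leak

# Toy comparison on a noisy sinusoidal input: the leaky version suppresses
# fast fluctuations while tracking the slower signal component.
rng = np.random.default_rng(0)
w = rng.uniform(-1.0, 1.0, size=8)
t = np.arange(0.0, 1.0, 0.001)
x = np.sin(2 * np.pi * 3 * t)[:, None] + 0.3 * rng.standard_normal((t.size, 8))

a = 0.0
plain, leaky = [], []
for x_t in x:
    plain.append(step_linear_summation(w, x_t))
    a = step_with_dynamic_leak(w, x_t, a)
    leaky.append(a)

print(f"output std without leak: {np.std(plain):.3f}, with leak: {np.std(leaky):.3f}")
```

In this sketch `tau_leak` plays the role of the membrane RC time constant: the leaky update attenuates high-frequency components of the summed input, which is the filtering property the abstract attributes to the dynamic leak.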

2020 ◽  
Author(s):  
Udaya B. Rongala ◽  
Jonas M.D. Enander ◽  
Matthias Kohler ◽  
Gerald E. Loeb ◽  
Henrik Jörntell

Author Summary
It is known that neurons of the brain are extensively interconnected, which can result in many recurrent loops within its neuronal network. Such loops are prone to instability. Here we wanted to explore the potential noise and instability that could result in recurrently connected neuronal networks across a range of conditions. To facilitate such simulations, we developed a non-spiking neuron model that captures the main characteristics of conductance-based neuron models of Hodgkin-Huxley type, but is more computationally efficient. We found that a so-called dynamic leak, which is a natural consequence of the way the membrane of the neuron is constructed and how the neuron integrates synaptic inputs, provided protection against spurious, high frequency noise that tended to arise in our recurrent networks of varying size. We propose that this linear summation model provides a stable and useful tool for exploring the computational behavior of recurrent neural networks.
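To make the recurrent-network experiment concrete, the following sketch (an illustration under assumed network size, time constants, weight scaling, and spectral diagnostic, not the authors' code) builds a fully connected network with equal numbers of excitatory and inhibitory units, drives it with pseudorandom input, and compares the high-frequency content of the activity with and without the dynamic-leak step:

```python
import numpy as np

rng = np.random.default_rng(1)
n_exc, n_inh = 20, 20
n = n_exc + n_inh
dt, tau, T = 0.001, 0.01, 2.0
steps = int(T / dt)

# Random recurrent weights; columns from inhibitory units are made negative.
W = rng.uniform(0.0, 0.5, size=(n, n)) / n
W[:, n_exc:] *= -1.0
np.fill_diagonal(W, 0.0)

# Pseudorandom, slowly varying sensory drive to every unit (a random walk).
drive = np.cumsum(rng.standard_normal((steps, n)), axis=0) * 0.01

def simulate(use_leak: bool) -> np.ndarray:
    a = np.zeros(n)
    trace = np.empty((steps, n))
    for k in range(steps):
        total_input = W @ a + drive[k]               # recurrent + external input
        if use_leak:
            a = a + dt * (total_input - a) / tau     # dynamic leak (RC low-pass)
        else:
            a = total_input                          # plain linear summation
        trace[k] = a
    return trace

for use_leak in (False, True):
    tr = simulate(use_leak)
    spectrum = np.abs(np.fft.rfft(tr[:, 0] - tr[:, 0].mean()))
    hf = spectrum[len(spectrum) // 2:].sum() / spectrum.sum()
    print(f"leak={use_leak}: high-frequency share of power = {hf:.3f}")
```

The spectral share printed at the end is only a crude diagnostic; the paper's analyses of stability and frequency content across network sizes, conduction delays, and weight distributions are more elaborate.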


2016 ◽  
Vol 9 (1) ◽  
pp. 117-134 ◽  
Author(s):  
Peter Duggins ◽  
Terrence C. Stewart ◽  
Xuan Choo ◽  
Chris Eliasmith

2020 ◽  
Vol 32 (7) ◽  
pp. 1408-1429
Author(s):  
Jakub Fil ◽  
Dominique Chu

The multispike tempotron (MST) is a powerful, single spiking neuron model that can solve complex supervised classification tasks. It is also internally complex, computationally expensive to evaluate, and unsuitable for neuromorphic hardware. Here we aim to understand whether it is possible to simplify the MST model while retaining its ability to learn and process information. To this end, we introduce a family of generalized neuron models (GNMs) that are a special case of the spike response model and much simpler and cheaper to simulate than the MST. We find that over a wide range of parameters, the GNM can learn at least as well as the MST does. We identify the temporal autocorrelation of the membrane potential as the most important ingredient of the GNM that enables it to classify multiple spatiotemporal patterns. We also interpret the GNM as a chemical system, thus conceptually bridging computation by neural networks with molecular information processing. We conclude the letter by proposing alternative training approaches for the GNM, including error trace learning and error backpropagation.
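As a rough illustration of the kind of simplification discussed here, the sketch below (hypothetical names and parameters; it implements a generic spike-response-model-style neuron, not the GNM defined in the letter) builds a membrane potential as a sum of exponentially decaying kernels over weighted input spikes, so that successive time steps are temporally autocorrelated, and thresholds it to produce output spikes:

```python
import numpy as np

def srm_like_response(spike_times_per_input, weights, t_grid,
                      tau_m=0.02, threshold=1.0):
    """Membrane potential as a weighted sum of exponentially decaying
    kernels triggered by input spikes (a generic spike-response-model
    form); output spikes are emitted at upward threshold crossings."""
    v = np.zeros_like(t_grid)
    for w, spikes in zip(weights, spike_times_per_input):
        for t_s in spikes:
            mask = t_grid >= t_s
            v[mask] += w * np.exp(-(t_grid[mask] - t_s) / tau_m)
    crossed = (v[1:] >= threshold) & (v[:-1] < threshold)
    return v, t_grid[1:][crossed]

# Toy usage: three input channels with a few spikes each.
t_grid = np.arange(0.0, 0.3, 0.001)
inputs = [np.array([0.02, 0.05]), np.array([0.06]), np.array([0.10, 0.11])]
weights = np.array([0.6, 0.4, 0.8])
v, out = srm_like_response(inputs, weights, t_grid)
print(f"{len(out)} output spike(s) at t = {np.round(out, 3)}")
```

A real MST or GNM implementation would also include post-spike dynamics and the learning rule; this sketch only shows the autocorrelated membrane trace and the threshold readout.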

