Reduced Single Neuron Models

Author(s):  
Fabrizio Gabbiani ◽  
Steven James Cox
2010 ◽  
pp. 399-422 ◽  
Author(s):  
Frances Skinner ◽  
Fernanda Saraga

2019 ◽  
Author(s):  
Johannes Leugering ◽  
Pascal Nieters ◽  
Gordon Pipa

Abstract
Many behavioural tasks require an animal to integrate information on a slow timescale that can exceed hundreds of milliseconds. How this is realized by neurons with membrane time constants on the order of tens of milliseconds or less remains an open question. We show how the interaction of two kinds of events within the dendritic tree, excitatory postsynaptic potentials and locally generated dendritic plateau potentials, allows a single neuron to detect specific sequences of spiking input on such slow timescales. Our conceptual model reveals how the morphology of a neuron's dendritic tree determines its computational function, which can range from a simple logic gate to the gradual integration of evidence to the detection of complex spatio-temporal spike sequences on long timescales. As an example, we illustrate in a simulated navigation task how this mechanism can even allow individual neurons to reliably detect specific movement trajectories with high tolerance for timing variability. We relate our results to recent findings in neurobiology and discuss implications for both experimental and theoretical neuroscience.
Author Summary
The recognition of patterns that span multiple timescales is a critical function of the brain. This poses a conceptual challenge for all neuron models that rely on the passive integration of synaptic inputs and are therefore limited to the rigid millisecond timescale of post-synaptic currents. However, detailed biological measurements have recently revealed that single neurons actively generate localized plateau potentials within the dendritic tree that can last hundreds of milliseconds. Here, we investigate single-neuron computation in a model that adheres to these findings but is intentionally simple. Our analysis reveals how plateaus act as memory traces, and how their interaction, as defined by the dendritic morphology of a neuron, gives rise to complex non-linear computation.
We demonstrate how this mechanism enables individual neurons to solve difficult, behaviourally relevant tasks that are commonly studied at the network level, such as the detection of variable input sequences or the integration of evidence on long timescales. We also characterize computation in our model using rate-based analysis tools, demonstrate why our proposed mechanism of dendritic computation cannot be detected under such analysis, and suggest an alternative based on plateau timings. The interaction of plateau events in dendritic trees is, we argue, an elementary principle of neural computation, one that calls for a fundamental change of perspective on the computational function of neurons.
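The chaining of plateau potentials described above can be sketched in a few lines. The following is our own minimal simplification, not the authors' full model: each dendritic segment holds a plateau for `plateau_ms` after being triggered, and a synapse can only trigger its segment's plateau while the upstream segment's plateau is still active, so the neuron responds only to the ordered sequence (with loose timing).

```python
# Minimal conceptual sketch (a simplification for illustration, not the
# authors' full model): a chain of dendritic segments, each holding a
# plateau potential for `plateau_ms` after it is triggered. A segment can
# only be triggered while the upstream segment's plateau is still active.

PLATEAU_MS = 100.0  # plateau duration, far longer than a single EPSP


def detects_sequence(spike_times, plateau_ms=PLATEAU_MS):
    """spike_times: dict mapping segment index -> list of spike times (ms).

    Returns True if the distal-to-proximal plateau chain completes,
    i.e. the neuron 'fires' for this input pattern.
    """
    n_segments = 1 + max(spike_times)
    plateau_until = [float("-inf")] * n_segments
    # Process all synaptic events in temporal order.
    events = sorted((t, seg) for seg, ts in spike_times.items() for t in ts)
    for t, seg in events:
        upstream_active = seg == 0 or plateau_until[seg - 1] >= t
        if upstream_active:
            plateau_until[seg] = t + plateau_ms  # trigger a local plateau
            if seg == n_segments - 1:
                return True  # the chain reached the soma-adjacent segment
    return False


# Correct order A (seg 0) -> B (seg 1) -> C (seg 2), loose timing: detected.
print(detects_sequence({0: [10.0], 1: [80.0], 2: [150.0]}))   # True
# The same three spikes in reversed order: no chain forms.
print(detects_sequence({0: [150.0], 1: [80.0], 2: [10.0]}))   # False
```

Because each plateau outlasts an EPSP by an order of magnitude, the detector tolerates large timing jitter between the spikes while still rejecting out-of-order input.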


Author(s):  
Peiji Liang ◽  
Si Wu ◽  
Fanji Gu

2013 ◽  
Vol 18 (3) ◽  
pp. 325-345 ◽  
Author(s):  
Aija Anisimova ◽  
Maruta Avotina ◽  
Inese Bula

In this paper we consider a discrete dynamical system x_{n+1} = βx_n − g(x_n), n = 0, 1, …, arising as a discrete-time network of a single neuron, where 0 < β ≤ 1 is an internal decay rate and g is a signal function. A great deal of work has been done for the case where the signal function is a sigmoid. However, a signal function of McCulloch-Pitts type, described by a piecewise constant function, is also useful in the modelling of neural networks. We investigate a more complicated step signal function (one that resembles the sigmoid function) and prove some results about the periodicity of solutions of the considered difference equation. These results illustrate the complexity of the neuron's behaviour.
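The periodic behaviour of such a difference equation is easy to see numerically. The sketch below uses the simplest McCulloch-Pitts nonlinearity, g(x) = 1 for x ≥ 0 and −1 otherwise (an assumption for illustration; the paper studies a more complicated step function), and iterates x_{n+1} = βx_n − g(x_n):

```python
# Illustrative sketch: iterate x_{n+1} = beta * x_n - g(x_n) with a simple
# McCulloch-Pitts step nonlinearity (the paper's signal function is a more
# complicated step function; this is the simplest instance).

def g(x, theta=0.0):
    """Piecewise-constant (McCulloch-Pitts) signal function."""
    return 1.0 if x >= theta else -1.0


def trajectory(x0, beta=1.0, steps=10):
    """Return the orbit x_0, x_1, ..., x_steps of the difference equation."""
    xs = [x0]
    for _ in range(steps):
        xs.append(beta * xs[-1] - g(xs[-1]))
    return xs


print(trajectory(0.5, beta=1.0, steps=6))
# With beta = 1 this orbit alternates 0.5, -0.5, 0.5, ... : a period-two
# solution of the kind whose existence the paper establishes analytically.
```

Varying β and the threshold already produces orbits of different periods, which is the phenomenon the paper's periodicity results make precise.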


1996 ◽  
Vol 07 (06) ◽  
pp. 671-687 ◽  
Author(s):  
Aapo Hyvärinen ◽  
Erkki Oja

Recently, several neural algorithms have been introduced for Independent Component Analysis. Here we approach the problem from the point of view of a single neuron. First, simple Hebbian-like learning rules are introduced for estimating one of the independent components from sphered data. Some of the learning rules can be used to estimate an independent component which has a negative kurtosis, and the others estimate a component of positive kurtosis. Next, a two-unit system is introduced to estimate an independent component of any kurtosis. The results are then generalized to estimate independent components from non-sphered (raw) mixtures. To separate several independent components, a system of several neurons with linear negative feedback is used. The convergence of the learning rules is rigorously proven without any unnecessary hypotheses on the distributions of the independent components.
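A single-neuron rule of this family can be sketched as follows. This is a hedged illustration, not the paper's exact updates: we whiten (sphere) a two-channel mixture and apply a batch-averaged Hebbian-like rule Δw ∝ E[z·g(wᵀz)] with g(u) = u³ and renormalization, which drives w toward the direction of maximal kurtosis, here a super-Gaussian (Laplacian) source:

```python
# Sketch of a Hebbian-like rule for estimating one independent component
# from sphered data (an illustration in the spirit of such rules, not the
# paper's exact learning rules or convergence conditions).
import numpy as np

rng = np.random.default_rng(0)

# Two independent sources: one super-Gaussian (Laplacian, positive kurtosis)
# and one sub-Gaussian (uniform, negative kurtosis); mix them linearly.
n = 20000
s = np.vstack([rng.laplace(size=n), rng.uniform(-1.0, 1.0, size=n)])
A = np.array([[1.0, 0.6], [0.4, 1.0]])  # arbitrary mixing matrix (assumed)
x = A @ s
x -= x.mean(axis=1, keepdims=True)

# Sphering (whitening) via the eigendecomposition of the covariance.
d, E = np.linalg.eigh(np.cov(x))
z = (E @ np.diag(d ** -0.5) @ E.T) @ x

# Batch-averaged Hebbian-like update: w <- w + mu * E[z * (w^T z)^3],
# renormalized to the unit sphere. With g(u) = u^3 this ascends the fourth
# moment E[y^4] and converges to the positive-kurtosis component.
w = rng.normal(size=2)
w /= np.linalg.norm(w)
for _ in range(300):
    y = w @ z                            # neuron output for all samples
    w += 0.05 * (z @ y ** 3) / n         # averaged Hebbian-like step
    w /= np.linalg.norm(w)               # keep w on the unit sphere

# Up to sign and scale, y should recover the Laplacian source.
corr = np.corrcoef(w @ z, s[0])[0, 1]
print("|correlation| with super-Gaussian source:", round(abs(corr), 3))
```

Estimating the negative-kurtosis (uniform) component instead amounts to descending rather than ascending the fourth moment, which mirrors the paper's split into rules for negative- and positive-kurtosis components.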


Author(s):  
Fabrizio Gabbiani ◽  
Steven J. Cox

2016 ◽  
pp. 9-31
Author(s):  
Priscilla E. Greenwood ◽  
Lawrence M. Ward
