Event-based pattern detection in active dendrites

2019 ◽  
Author(s):  
Johannes Leugering ◽  
Pascal Nieters ◽  
Gordon Pipa

Abstract
Many behavioural tasks require an animal to integrate information on a slow timescale that can exceed hundreds of milliseconds. How this is realized by neurons with membrane time constants on the order of tens of milliseconds or less remains an open question. We show how the interaction of two kinds of events within the dendritic tree, excitatory postsynaptic potentials and locally generated dendritic plateau potentials, can allow a single neuron to detect specific sequences of spiking input on such slow timescales. Our conceptual model reveals how the morphology of a neuron's dendritic tree determines its computational function, which can range from a simple logic gate to the gradual integration of evidence to the detection of complex spatio-temporal spike sequences on long timescales. As an example, we illustrate in a simulated navigation task how this mechanism can even allow individual neurons to reliably detect specific movement trajectories with high tolerance for timing variability. We relate our results to findings in neurobiology and discuss implications for both experimental and theoretical neuroscience.

Author Summary
The recognition of patterns that span multiple timescales is a critical function of the brain. This is a conceptual challenge for all neuron models that rely on the passive integration of synaptic inputs and are therefore limited to the rigid millisecond timescale of post-synaptic currents. However, detailed biological measurements have recently revealed that single neurons actively generate localized plateau potentials within the dendritic tree that can last hundreds of milliseconds. Here, we investigate single-neuron computation in a model that adheres to these findings but is intentionally simple. Our analysis reveals how plateaus act as memory traces, and how their interaction, as defined by the dendritic morphology of a neuron, gives rise to complex non-linear computation.
We demonstrate how this mechanism enables individual neurons to solve difficult, behaviourally relevant tasks that are commonly studied at the network level, such as the detection of variable input sequences or the integration of evidence on long timescales. We also characterize computation in our model using rate-based analysis tools, demonstrate why our proposed mechanism of dendritic computation cannot be detected under this analysis, and suggest an alternative based on plateau timings. The interaction of plateau events in dendritic trees is, we argue, an elementary principle of neural computation, one that implies the need for a fundamental change of perspective on the computational function of neurons.
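The gating idea described above can be made concrete with a deliberately minimal sketch: a chain of dendritic segments in which each segment fires a long plateau only if it receives its spike while the upstream plateau is still active. The plateau duration, input labels, and gating rule below are illustrative assumptions for exposition, not the authors' model.

```python
# Minimal sketch: sequence detection via cascading dendritic plateaus.
# A plateau lasting PLATEAU_MS acts as a memory trace that gates the
# next stage; only correctly ordered, sufficiently fast input completes
# the chain. All parameters are assumptions, not fitted values.

PLATEAU_MS = 300.0  # assumed plateau duration (hundreds of ms)

def detect_sequence(spikes, labels):
    """spikes: list of (time_ms, label) events; labels: the required
    order, e.g. ['A', 'B', 'C']. Returns the time the final stage
    fires, or None if the sequence is out of order or too slow."""
    plateau_until = 0.0  # end time of the currently active plateau
    stage = 0            # index of the next label we are waiting for
    for t, lab in sorted(spikes):
        if lab != labels[stage]:
            continue  # wrong input for this stage; ignore it
        if stage == 0 or t <= plateau_until:
            plateau_until = t + PLATEAU_MS  # open a new plateau
            stage += 1
            if stage == len(labels):
                return t  # whole sequence detected
    return None
```

Note the built-in timing tolerance: B may follow A anywhere within the 300 ms plateau window, so the detector accepts variable inter-spike intervals while still rejecting reversed or overly slow sequences.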

2010 ◽  
pp. 399-422 ◽  
Author(s):  
Frances Skinner ◽  
Fernanda Saraga

2011 ◽  
Vol 23 (10) ◽  
pp. 2626-2682
Author(s):  
James Ting-Ho Lo

A biologically plausible low-order model (LOM) of biological neural networks is proposed. LOM is a recurrent hierarchical network of models of dendritic nodes and trees; spiking and nonspiking neurons; unsupervised, supervised covariance and accumulative learning mechanisms; feedback connections; and a scheme for maximal generalization. These component models are motivated and necessitated by the requirements that LOM learn and retrieve easily without differentiation, optimization, or iteration, and that it cluster, detect, and recognize multiple and hierarchical corrupted, distorted, and occluded temporal and spatial patterns.

Four models of dendritic nodes are given, each described by a hyperbolic polynomial that acts like an exclusive-OR logic gate when the node's inputs are two binary digits. A model dendritic encoder, a network of model dendritic nodes, encodes its inputs such that the resultant codes have an orthogonality property. Such codes are stored in synapses by unsupervised covariance learning, supervised covariance learning, or unsupervised accumulative learning, depending on the type of postsynaptic neuron. A masking matrix for a dendritic tree, whose upper part comprises model dendritic encoders, enables maximal generalization on corrupted, distorted, and occluded data; it is a mathematical organization and idealization of dendritic trees with overlapped and nested input vectors. A model nonspiking neuron transmits inhibitory graded signals to modulate its neighboring model spiking neurons. Model spiking neurons evaluate the subjective probability distribution (SPD) of the labels of the inputs to model dendritic encoders and generate spike trains with such SPDs as firing rates. Feedback connections from the same or higher layers with different numbers of unit-delay devices reflect different signal traveling times, enabling LOM to fully utilize temporally and spatially associated information.
The biological plausibility of the component models is discussed, and numerical examples demonstrate how LOM operates in retrieving, generalizing, and unsupervised and supervised learning.
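The XOR-like behaviour of a model dendritic node is easy to illustrate with one candidate hyperbolic (bilinear) polynomial; the specific polynomial below is an assumption chosen for exposition, not necessarily one of the four node models given in the paper.

```python
def dendritic_node(x, y):
    """A hyperbolic (bilinear) polynomial in two inputs. On binary
    digits it reduces to exclusive-OR; the choice x + y - 2xy is one
    illustrative example of such a polynomial."""
    return x + y - 2 * x * y

# On binary inputs the node behaves exactly as an XOR gate:
truth_table = {(a, b): dendritic_node(a, b) for a in (0, 1) for b in (0, 1)}
```

Because the polynomial is smooth, the same node also responds to graded (non-binary) inputs, which is what lets networks of such nodes encode richer patterns than a purely logical gate would.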


Author(s):  
Peiji Liang ◽  
Si Wu ◽  
Fanji Gu

1999 ◽  
Vol 81 (5) ◽  
pp. 1999-2016 ◽  
Author(s):  
Edward L. Bartlett ◽  
Philip H. Smith

Anatomic, intrinsic, and synaptic properties of dorsal and ventral division neurons in rat medial geniculate body.
Presently little is known about which basic synaptic and cellular mechanisms are employed by thalamocortical neurons in the two main divisions of the auditory thalamus to elicit their distinct responses to sound. Using intracellular recording and labeling methods, we characterized anatomic features, membrane properties, and synaptic inputs of thalamocortical neurons in the dorsal (MGD) and ventral (MGV) divisions in brain slices of rat medial geniculate body. Quantitative analysis of dendritic morphology demonstrated that tufted neurons in both divisions had shorter dendrites, smaller dendritic tree areas, more profuse branching, and a greater dendritic polarization compared with stellate neurons, which were only found in MGD. Tufted neuron dendritic polarization was not as strong or consistent as earlier Golgi studies suggested. MGV and MGD cells had similar intrinsic properties except for an increased prevalence of a depolarizing sag potential in MGV neurons. The sag was the only intrinsic property correlated with cell morphology, seen only in tufted neurons in either division. Many MGV and MGD neurons received excitatory and inhibitory inferior colliculus (IC) inputs (designated IN/EX or EX/IN depending on excitation/inhibition sequence). However, a significant number received only excitatory inputs (EX/O) and a few only inhibitory (IN/O). Both MGV and MGD cells displayed similar proportions of response combinations, but suprathreshold EX/O responses were observed only in tufted neurons. Excitatory and inhibitory postsynaptic potentials (EPSPs and IPSPs) had multiple distinguishable amplitude levels, implying convergence. Excitatory inputs activated α-amino-3-hydroxy-5-methyl-4-isoxazolepropionic acid (AMPA) and N-methyl-d-aspartate (NMDA) receptors, the relative contributions of which were variable.
For IN/EX cells with suprathreshold inputs, first-spike timing was independent of membrane potential unlike that of EX/O cells. Stimulation of corticothalamic (CT) and thalamic reticular nucleus (TRN) axons evoked a GABAA IPSP, EPSP, GABAB IPSP sequence in most neurons with both morphologies in both divisions. TRN IPSPs and CT EPSPs were graded in amplitude, again suggesting convergence. CT inputs activated AMPA and NMDA receptors. The NMDA component of both IC and CT inputs had an unusual voltage dependence with a detectable dl-2-amino-5-phosphonovaleric acid-sensitive component even below −70 mV. First-spike latencies of CT evoked action potentials were sensitive to membrane potential regardless of whether the TRN IPSP was present. Overall, our in vitro data indicate that reported regional differences in the in vivo responses of MGV and MGD cells to auditory stimuli are not well correlated with major differences in intrinsic membrane features or synaptic responses between cell types.


2013 ◽  
Vol 18 (3) ◽  
pp. 325-345 ◽  
Author(s):  
Aija Anisimova ◽  
Maruta Avotina ◽  
Inese Bula

In this paper we consider a discrete dynamical system x_{n+1} = βx_n − g(x_n), n = 0, 1, ..., arising as a discrete-time network of a single neuron, where 0 < β ≤ 1 is an internal decay rate and g is a signal function. A great deal of work has been done for the case where the signal function is a sigmoid. However, a signal function of McCulloch-Pitts nonlinearity, described by a piecewise constant function, is also useful in the modelling of neural networks. We investigate a more complicated step signal function (one that approximates the sigmoid) and prove some results about the periodicity of solutions of the considered difference equation. These results illustrate the complexity of the neuron's behaviour.
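The map x_{n+1} = βx_n − g(x_n) can be iterated directly to exhibit the periodic orbits in question. The sketch below uses the simplest McCulloch-Pitts step, g(x) = ±1 at threshold 0, as an assumed stand-in for the paper's more elaborate multi-step signal function.

```python
# Iterating the single-neuron map x_{n+1} = beta * x_n - g(x_n) with a
# McCulloch-Pitts step nonlinearity (threshold at 0; an assumption, the
# paper studies a more complicated step function).

def g(x):
    """Piecewise constant (McCulloch-Pitts) signal function."""
    return 1.0 if x >= 0 else -1.0

def orbit(x0, beta, n):
    """Return the first n+1 iterates of the map starting from x0."""
    xs = [x0]
    for _ in range(n):
        xs.append(beta * xs[-1] - g(xs[-1]))
    return xs
```

For example, with β = 1 and x0 = 0.5 the iterates alternate 0.5, −0.5, 0.5, ..., a period-2 solution of the kind whose existence the paper analyses for more general step functions.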


1996 ◽  
Vol 07 (06) ◽  
pp. 671-687 ◽  
Author(s):  
Aapo Hyvärinen ◽  
Erkki Oja

Recently, several neural algorithms have been introduced for Independent Component Analysis. Here we approach the problem from the point of view of a single neuron. First, simple Hebbian-like learning rules are introduced for estimating one of the independent components from sphered data. Some of the learning rules can be used to estimate an independent component which has a negative kurtosis, and the others estimate a component of positive kurtosis. Next, a two-unit system is introduced to estimate an independent component of any kurtosis. The results are then generalized to estimate independent components from non-sphered (raw) mixtures. To separate several independent components, a system of several neurons with linear negative feedback is used. The convergence of the learning rules is rigorously proven without any unnecessary hypotheses on the distributions of the independent components.
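A one-unit Hebbian-like rule of the kind described can be sketched as normalized gradient ascent on the fourth moment of the projection, for sphered data. The learning rate, iteration count, and exact update form below are illustrative assumptions in the spirit of the paper, not its verbatim rules; the sign of the update selects components of positive versus negative kurtosis.

```python
import numpy as np

# Sketch: one-unit Hebbian-like ICA rule on sphered (whitened) data.
# Update dw ∝ x * (w^T x)^3 followed by renormalization of w; with
# sign=+1 this ascends kurtosis (super-Gaussian components), with
# sign=-1 it descends (sub-Gaussian components).

def estimate_component(X, sign=+1, lr=0.1, iters=200, seed=0):
    """X: (n_samples, n_dims) sphered data. Returns a unit-norm
    weight vector estimating one independent direction."""
    rng = np.random.default_rng(seed)
    w = rng.standard_normal(X.shape[1])
    w /= np.linalg.norm(w)
    for _ in range(iters):
        y = X @ w                                  # neuron output
        w = w + sign * lr * (X.T @ (y ** 3)) / len(X)  # Hebbian-like step
        w /= np.linalg.norm(w)                     # keep w on unit sphere
    return w
```

On whitened mixtures of super-Gaussian (e.g. Laplacian) sources, the projection X @ w converges toward one of the sources up to sign, which is the one-unit behaviour the paper proves rigorously for its rules.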


2017 ◽  
Author(s):  
Dezhe Z. Jin ◽  
Ting Zhao ◽  
David L. Hunt ◽  
Rachel P. Tillage ◽  
Ching-Lung Hsu ◽  
...  

Abstract
Neurons perform computations by integrating inputs from thousands of synapses – mostly in the dendritic tree – to drive action potential firing in the axon. One fruitful approach to understanding this process is to record from neurons using patch-clamp electrodes, fill the recorded neuron with a substance that allows subsequent staining, reconstruct the three-dimensional architecture of the dendrites, and use the resulting functional and structural data to develop computer models of dendritic integration. Accurately producing quantitative reconstructions of dendrites is typically a tedious process taking many hours of manual inspection and measurement. Here we present ShuTu, a new software package that facilitates accurate and efficient reconstruction of dendrites imaged using bright-field microscopy. The program operates in two steps: (1) automated identification of dendritic processes, and (2) manual correction of errors in the automated reconstruction. This approach allows neurons with complex dendritic morphologies to be reconstructed rapidly and efficiently, thus facilitating the use of computer models to study dendritic structure-function relationships and the computations performed by single neurons.

Significance Statement
We developed a software package – ShuTu – that integrates automated reconstruction of stained neurons with manual error correction. This package facilitates rapid reconstruction of the three-dimensional geometry of neuronal dendritic trees, often needed for computational simulations of the functional properties of these structures.


Author(s):  
Fabrizio Gabbiani ◽  
Steven James Cox

2021 ◽  
pp. 1-18
Author(s):  
Ilenna Simone Jones ◽  
Konrad Paul Kording

Abstract
Physiological experiments have highlighted how the dendrites of biological neurons can nonlinearly process distributed synaptic inputs. However, it is unclear how aspects of a dendritic tree, such as its branched morphology or its repetition of presynaptic inputs, determine neural computation beyond this apparent nonlinearity. Here we use a simple model where the dendrite is implemented as a sequence of thresholded linear units. We manipulate the architecture of this model to investigate the impacts of binary branching constraints and repetition of synaptic inputs on neural computation. We find that models with such manipulations can perform well on machine learning tasks, such as Fashion MNIST or Extended MNIST. We find that model performance on these tasks is limited by binary tree branching and dendritic asymmetry and is improved by the repetition of synaptic inputs to different dendritic branches. These computational experiments further neuroscience theory on how different dendritic properties might determine neural computation of clearly defined tasks.
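The architecture described, a binary tree of thresholded linear units standing in for a dendrite, can be sketched in a few lines. The weights, the ReLU threshold, and the even split of inputs across leaves below are illustrative assumptions, not the paper's trained parameters.

```python
import numpy as np

# Sketch: a dendrite modeled as a binary tree of thresholded linear
# units. Leaves receive disjoint slices of the input; each internal
# node applies a ReLU to a weighted sum of its two children, and the
# root plays the role of the soma. Input repetition would instead
# route overlapping slices to multiple leaves.

def relu(v):
    return np.maximum(v, 0.0)

def dendrite_output(x, leaf_w, node_w):
    """x: input vector; leaf_w: one weight vector per leaf;
    node_w: node_w[level][i] is the 2-vector of weights for the i-th
    binary-branching node at that level of the tree."""
    acts = [relu(w @ xi) for w, xi in zip(leaf_w, np.split(x, len(leaf_w)))]
    level = 0
    while len(acts) > 1:  # merge pairs until one somatic value remains
        acts = [relu(node_w[level][i] @ np.array(pair))
                for i, pair in enumerate(zip(acts[::2], acts[1::2]))]
        level += 1
    return acts[0]
```

Constraining every merge to exactly two children is what the paper calls the binary branching constraint; relaxing it, or repeating inputs across leaves, changes which functions the tree can express.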


Author(s):  
Fabrizio Gabbiani ◽  
Steven J. Cox
