model neuron
Recently Published Documents


TOTAL DOCUMENTS: 147 (FIVE YEARS: 15)
H-INDEX: 36 (FIVE YEARS: 2)

2021 ◽  
Vol 118 (34) ◽  
pp. e2023381118
Author(s):  
Carl van Vreeswijk ◽  
Farzada Farkhooi

Dendrites play an essential role in the integration of highly fluctuating in vivo input to neurons across all nervous systems. Yet, they are often studied under conditions in which inputs to dendrites are sparse. The dynamic properties of active dendrites facing in vivo–like fluctuating input thus remain elusive. In this paper, we uncover the dynamics of a canonical model of a dendritic compartment with active calcium channels receiving in vivo–like fluctuating input. In a single-compartment model of the active dendrite with fast calcium activation, we show that noise induces a nonmonotonic relationship between the mean input and the membrane potential output. In contrast, noise can induce bistability in the input–output relation when the calcium channels activate slowly. Both phenomena are absent in the noiseless condition. Furthermore, we show that the timescales of the emerging stochastic bistable dynamics extend far beyond those of the deterministic system because of stochastic switching between the solutions. A numerical simulation of a multicompartment model neuron shows that the bistability uncovered in our analysis persists in the presence of in vivo–like synaptic input. Our results reveal that realistic synaptic input contributes to sustained dendritic nonlinearities and that synaptic noise is a significant component of dendritic input integration.
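The kind of model described here can be illustrated with a minimal sketch: a single compartment with a leak current, a voltage-gated calcium current, and fluctuating input of mean mu and noise amplitude sigma, integrated with the Euler–Maruyama scheme. All parameter values and the specific gating function below are illustrative assumptions, not the authors' published equations.

```python
import numpy as np

# Illustrative single-compartment dendrite with a voltage-gated calcium current,
# driven by noisy input (mean mu, amplitude sigma). Parameters are assumptions.
def simulate_dendrite(mu, sigma, T=2.0, dt=1e-4,
                      C=1.0, g_L=0.1, E_L=-70.0,
                      g_Ca=0.3, E_Ca=120.0,
                      v_half=-35.0, k=6.0, tau_m=5e-3, seed=0):
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    v = np.empty(n)
    v[0] = E_L
    m = 0.0                                   # calcium activation gate
    for i in range(1, n):
        m_inf = 1.0 / (1.0 + np.exp(-(v[i-1] - v_half) / k))
        m += dt * (m_inf - m) / tau_m         # slow activation can yield bistability
        I_Ca = g_Ca * m * (E_Ca - v[i-1])
        I_L = g_L * (E_L - v[i-1])
        noise = sigma * np.sqrt(dt) * rng.standard_normal()
        v[i] = v[i-1] + dt * (I_L + I_Ca + mu) / C + noise / C
    return v

# Mean membrane potential as a function of mean input at a fixed noise level:
for mu in (0.0, 2.0, 4.0, 6.0):
    print(mu, simulate_dendrite(mu, sigma=3.0).mean())
```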


2021 ◽  
Author(s):  
Mara C.P. Rue ◽  
Leandro M Alonso ◽  
Eve Marder

Neural circuits must both function reliably and flexibly adapt to changes in their environment. We studied how both biological neurons and computational models respond to high potassium concentrations. Pyloric neurons of the crab stomatogastric ganglion (STG) initially become quiescent, then recover spiking activity in high potassium saline. The neurons retain this adaptation and recover more rapidly in subsequent high potassium applications, even after hours in control saline. We constructed a novel activity-dependent computational model that qualitatively captures these results. In this model, regulation of conductances is gated on and off depending on how far the neuron is from its target activity. This allows the model neuron to retain a trace of past perturbations even after it returns to its target activity in control conditions. Thus, perturbation, followed by recovery of normal activity, can hide cryptic changes in neuronal properties that are only revealed by subsequent perturbations.
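The gating idea described in the abstract can be sketched as follows: conductance regulation switches on when activity deviates from its target by more than a threshold and switches off once activity returns, so the conductances keep a trace of past perturbations. The thresholds, rates, and activity trace below are illustrative assumptions, not the published model.

```python
import numpy as np

# Illustrative gated, activity-dependent conductance regulation (not the published
# model): regulation runs only while the activity error exceeds a threshold, so
# the conductance retains a trace of past perturbations after recovery.
def regulate(activity_trace, g0=1.0, target=1.0, threshold=0.2, rate=0.05):
    g = g0
    gate_open = False
    history = []
    for a in activity_trace:
        error = target - a
        if abs(error) > threshold:
            gate_open = True          # perturbation detected: gate regulation on
        elif abs(error) < 0.5 * threshold:
            gate_open = False         # back near target: gate regulation off
        if gate_open:
            g += rate * error         # push conductance toward restoring activity
        history.append(g)
    return np.array(history)

# A transient drop in activity (e.g., high-potassium saline) followed by recovery:
activity = np.concatenate([np.ones(50), 0.2 * np.ones(100), np.ones(100)])
print(regulate(activity)[-1])  # conductance ends changed: a hidden, "cryptic" trace
```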


Author(s):  
Maksims Zigunovs

The main impact of Alzheimer's disease on the brain is memory loss. At the level of individual neurons, the disease disrupts signal impulses and disconnects neurons, which causes neuron death and memory loss. The main aim of this research is to determine the average loss of signal and to develop memory-loss prediction models for an artificial neuron network. The Izhikevich neuron model is often used to model the electrical signals of spiking neurons. The rhythm and spikes of the model neuron's signal are used as characteristics for assessing whether the system is stable at a given moment and over time. In addition, the electrical signal parameters are used in a similar way to how they are used in a biological brain. In this study, the initial conditions of the neural network are assumed to be randomly selected within a specified range of the working neuron's average sigma and I parameters.
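For reference, the Izhikevich model mentioned here is the standard two-variable spiking model (Izhikevich 2003): dv/dt = 0.04v² + 5v + 140 − u + I, du/dt = a(bv − u), with the reset v ← c, u ← u + d whenever v reaches 30 mV. The sketch below simulates it with the common regular-spiking parameter values; these values and the input current are not those of the study.

```python
import numpy as np

# Standard Izhikevich model (Izhikevich 2003), simulated with forward Euler.
# Parameters are the common regular-spiking values, not those of this study.
def izhikevich(I, T=1000.0, dt=0.25, a=0.02, b=0.2, c=-65.0, d=8.0):
    n = int(T / dt)
    v, u = -65.0, b * -65.0
    spikes, trace = [], np.empty(n)
    for i in range(n):
        v += dt * (0.04 * v**2 + 5.0 * v + 140.0 - u + I)
        u += dt * a * (b * v - u)
        if v >= 30.0:                      # spike: record time and reset
            spikes.append(i * dt)
            v, u = c, u + d
        trace[i] = v
    return trace, spikes

trace, spikes = izhikevich(I=10.0)
print(f"{len(spikes)} spikes in 1 s of simulated time")
```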


2021 ◽  
Vol 17 (5) ◽  
pp. e1009015
Author(s):  
Toviah Moldwin ◽  
Menachem Kalmenson ◽  
Idan Segev

Synaptic clustering on neuronal dendrites has been hypothesized to play an important role in implementing pattern recognition. Neighboring synapses on a dendritic branch can interact in a synergistic, cooperative manner via nonlinear voltage-dependent mechanisms, such as NMDA receptors. Inspired by the NMDA receptor, the single-branch clusteron learning algorithm takes advantage of location-dependent multiplicative nonlinearities to solve classification tasks by randomly shuffling the locations of “under-performing” synapses on a model dendrite during learning (“structural plasticity”), eventually resulting in synapses with correlated activity being placed next to each other on the dendrite. We propose an alternative model, the gradient clusteron, or G-clusteron, which uses an analytically-derived gradient descent rule where synapses are "attracted to" or "repelled from" each other in an input- and location-dependent manner. We demonstrate the classification ability of this algorithm by testing it on the MNIST handwritten digit dataset and show that, when using a softmax activation function, the accuracy of the G-clusteron on the all-versus-all MNIST task (~85%) approaches that of logistic regression (~93%). In addition to the location update rule, we also derive a learning rule for the synaptic weights of the G-clusteron (“functional plasticity”) and show that a G-clusteron that utilizes the weight update rule can achieve ~89% accuracy on the MNIST task. We also show that a G-clusteron with both the weight and location update rules can learn to solve the XOR problem from arbitrary initial conditions.
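The flavor of a clusteron-style unit with gradient descent on synaptic locations can be sketched as follows. The Gaussian proximity kernel, the squared-error loss, and the numerical gradient are assumptions made for illustration; this is not the analytically derived G-clusteron rule from the paper.

```python
import numpy as np

# Sketch of a clusteron-style unit: output is a location-dependent multiplicative
# sum of inputs, and synapse locations are moved by gradient descent on a
# squared-error loss (illustration only, not the published G-clusteron derivation).
rng = np.random.default_rng(1)
n_syn = 20
loc = rng.uniform(0.0, 1.0, n_syn)   # synapse locations on the dendrite
w = np.ones(n_syn)                   # synaptic weights (kept fixed here)
sigma = 0.1                          # width of the proximity kernel
eta = 0.01                           # learning rate for locations

def output(x, loc):
    d = loc[:, None] - loc[None, :]
    K = np.exp(-d**2 / (2 * sigma**2))   # pairwise proximity between synapses
    return (w * x) @ K @ (w * x)         # multiplicative, location-dependent sum

def location_gradient(x, loc, target):
    # numerical gradient of 0.5 * (output - target)^2 w.r.t. locations, for clarity
    err = output(x, loc) - target
    grad = np.zeros_like(loc)
    eps = 1e-5
    for i in range(len(loc)):
        loc_p = loc.copy()
        loc_p[i] += eps
        grad[i] = err * (output(x, loc_p) - output(x, loc)) / eps
    return grad

# One learning step on a random binary input pattern with target output 5.0:
x = rng.integers(0, 2, n_syn).astype(float)
loc -= eta * location_gradient(x, loc, target=5.0)
```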


Author(s):  
Mark B. Zimering ◽  
Vedad Delic ◽  
Bruce A. Citron

Traumatic brain injury and adult type 2 diabetes mellitus are each associated with the late occurrence of accelerated cognitive decline and Parkinson’s disease through unknown mechanisms. Previously, we reported increased circulating agonist autoantibodies targeting the 5-hydroxytryptamine 2A receptor in plasma from subsets of Parkinson’s disease, dementia, and diabetic patients suffering from microvascular complications. Here, we use a model neuron, the mouse neuroblastoma (N2A) cell line, to test messenger RNA expression changes following brief exposure to plasma from traumatic brain injury and/or type 2 diabetes mellitus patients harboring agonist 5-hydroxytryptamine 2A receptor autoantibodies. We now report involvement of the mitochondrial dysfunction pathway and Parkinson’s disease pathways in autoantibody-induced gene expression changes occurring in neuroblastoma cells. Functional gene categories significantly upregulated included cell death, cytoskeleton–microtubule function, actin polymerization or depolymerization, regulation of cell oxidative stress, mitochondrial function, immune function, protein metabolism, and vesicle function. Gene categories significantly downregulated included microtubule function, cell adhesion, neurotransmitter release, dopamine metabolism, synaptic plasticity, maintenance of neuronal differentiation, mitochondrial function, and cell signaling. Taken together, these results suggest that agonist 5-hydroxytryptamine receptor autoantibodies (which increase in Parkinson’s disease and other forms of neurodegeneration) mediate a coordinated program of gene expression changes in a model neuron that predisposes to neuro-apoptosis and is linked to human neurodegenerative disease pathways.


2020 ◽  
Author(s):  
Toviah Moldwin ◽  
Menachem Kalmenson ◽  
Idan Segev

Synaptic clustering on neuronal dendrites has been hypothesized to play an important role in implementing pattern recognition. Neighboring synapses on a dendritic branch can interact in a synergistic, cooperative manner via the nonlinear voltage-dependence of NMDA receptors. Inspired by the NMDA receptor, the single-branch clusteron learning algorithm (Mel 1991) takes advantage of location-dependent multiplicative nonlinearities to solve classification tasks by randomly shuffling the locations of “under-performing” synapses on a model dendrite during learning (“structural plasticity”), eventually resulting in synapses with correlated activity being placed next to each other on the dendrite. We propose an alternative model, the gradient clusteron, or G-clusteron, which uses an analytically-derived gradient descent rule where synapses are “attracted to” or “repelled from” each other in an input- and location-dependent manner. We demonstrate the classification ability of this algorithm by testing it on the MNIST handwritten digit dataset and show that, when using a softmax activation function, the accuracy of the G-clusteron on the All-vs-All MNIST task (85.9%) approaches that of logistic regression (92.6%). In addition to the synaptic location update plasticity rule, we also derive a learning rule for the synaptic weights of the G-clusteron (“functional plasticity”) and show that the G-clusteron with both plasticity rules can achieve 89.5% accuracy on the MNIST task and can learn to solve the XOR problem from arbitrary initial conditions.


2020 ◽  
Vol 88 (11) ◽  
pp. 918-923
Author(s):  
George H. Rutherford ◽  
Zach D. Mobille ◽  
Jordan Brandt-Trainer ◽  
Rosangela Follmann ◽  
Epaminondas Rosa

2020 ◽  
Author(s):  
Lars Keuninckx ◽  
Axel Cleeremans

We show how anomalous time reversal of stimuli and their associated responses can exist in very small connectionist models. The networks that make up these models are built from a dynamical toy model neuron and adhere to a minimal set of biologically plausible properties. The appearance of a “ghost” response, temporally and spatially located between responses caused by actual stimuli, as in the Phi phenomenon, is demonstrated in a similar small network, where it is caused by priming and long-distance feedforward paths. We then demonstrate that the Color Phi phenomenon can be present in an Echo State Network, a dynamical recurrent neural network, notably without explicitly training for the presence of the effect. Our results suggest that similar illusions are likely to be obtainable in any system with minimal neuron-like properties, whether artificial or biological, and are thus merely artifacts of the inherent dynamical and nonlinear behavior of such systems.
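For readers unfamiliar with Echo State Networks, a minimal sketch is given below: a fixed random recurrent reservoir drives a linear readout trained by ridge regression. The reservoir size, spectral radius, and toy delay task are illustrative assumptions and are not those used in the study.

```python
import numpy as np

# Minimal Echo State Network: fixed random reservoir, tanh units, ridge readout.
# Sizes, spectral radius, and the toy task are illustrative assumptions.
rng = np.random.default_rng(0)
n_res, n_in = 200, 1
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.standard_normal((n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # scale spectral radius below 1

def run_reservoir(u):
    x = np.zeros(n_res)
    states = []
    for u_t in u:
        x = np.tanh(W @ x + W_in @ np.atleast_1d(u_t))
        states.append(x.copy())
    return np.array(states)

# Toy task: reproduce the input delayed by 5 steps.
u = rng.uniform(-1, 1, 1000)
X = run_reservoir(u)[100:]                        # discard washout period
y = np.roll(u, 5)[100:]
W_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(n_res), X.T @ y)  # ridge readout
print("train MSE:", np.mean((X @ W_out - y) ** 2))
```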


2019 ◽  
Author(s):  
G. Marsat

The identity of sensory stimuli is encoded in the spatio-temporal patterns of responses of the neural population. For stimuli to be discriminated reliably, differences in population responses must be accurately decoded by downstream networks. Several methods for comparing response patterns and their differences have been used by neurophysiologists to characterize the accuracy of the sensory responses studied. Among the most widely used analyses are methods based on Euclidean distances or on spike metric distances such as the one proposed by van Rossum. Methods based on artificial neural networks and machine learning (such as self-organizing maps) have also gained popularity for recognizing and/or classifying specific input patterns. In this brief report, we first compare these three strategies using datasets from three different sensory systems. We show that the input-weighting procedure inherent to artificial neural networks allows the extraction of the information most relevant to the discrimination task, and thus the method performs particularly well. To combine the ease of use and rapidity of methods such as spike metric distances with the advantage of weighting the inputs, we propose a measure based on geometric distances where each dimension is weighted proportionally to how informative it is. In each dimension, the overlap between the distributions of responses to the two stimuli is quantified using the Kullback-Leibler divergence. We show that this Kullback-Leibler-weighted spike train distance (KLW distance) performs as well as or better than the artificial neural network we tested and outperforms the more traditional spike distance metrics. We applied information-theoretic analysis to leaky integrate-and-fire model neuron responses and compared their encoding accuracy with the discrimination accuracy quantified through these distance metrics, showing a high degree of correlation between the results of the two approaches for quantifying coding performance. We argue that our proposed measure provides the flexibility and ease of use sought by neurophysiologists while extracting the relevant information more effectively than traditional methods.
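The flavor of such a KL-weighted distance can be sketched as follows: spike trains are first turned into binned or filtered response vectors, each time bin is weighted by the (symmetrized) KL divergence between the two stimulus-conditioned response distributions in that bin, and a weighted Euclidean distance is computed. The binning, the Gaussian approximation of the per-bin distributions, and the synthetic data are assumptions for illustration, not the paper's exact procedure.

```python
import numpy as np

# Illustrative KL-weighted spike-train distance (not the paper's exact procedure).
def gaussian_kl(mu1, var1, mu2, var2, eps=1e-9):
    var1, var2 = var1 + eps, var2 + eps
    return 0.5 * (np.log(var2 / var1) + (var1 + (mu1 - mu2) ** 2) / var2 - 1.0)

def klw_distance(resp_a, resp_b):
    """resp_a, resp_b: arrays of shape (n_trials, n_bins) of filtered responses."""
    mu_a, var_a = resp_a.mean(0), resp_a.var(0)
    mu_b, var_b = resp_b.mean(0), resp_b.var(0)
    # symmetrized KL per bin: how well each bin separates the two stimuli
    w = gaussian_kl(mu_a, var_a, mu_b, var_b) + gaussian_kl(mu_b, var_b, mu_a, var_a)
    diff = mu_a - mu_b
    return np.sqrt(np.sum(w * diff ** 2))

# Synthetic example: stimulus information is carried only by the last 20 bins.
rng = np.random.default_rng(0)
a = rng.normal(0.0, 1.0, (50, 100))
b = rng.normal(0.0, 1.0, (50, 100))
b[:, 80:] += 1.0
print(klw_distance(a, b))
```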

