Logarithmic distributions prove that intrinsic learning is Hebbian

F1000Research ◽  
2017 ◽  
Vol 6 ◽  
pp. 1222 ◽  
Author(s):  
Gabriele Scheler

In this paper, we present data for the lognormal distributions of spike rates, synaptic weights and intrinsic excitability (gain) for neurons in various brain areas, such as auditory and visual cortex, hippocampus, cerebellum, striatum, and midbrain nuclei. We find a remarkable consistency of heavy-tailed, specifically lognormal, distributions for rates, weights and gains in all brain areas examined. The difference between strongly recurrent and feed-forward connectivity (cortex vs. striatum and cerebellum), neurotransmitter (GABA in striatum, glutamate in cortex) or the level of activation (low in cortex, high in Purkinje cells and midbrain nuclei) turns out to be irrelevant for this feature. Logarithmic-scale distribution of weights and gains appears to be a general, functional property in all cases analyzed. We then created a generic neural model to investigate adaptive learning rules that create and maintain lognormal distributions. We conclusively demonstrate that not only weights but also intrinsic gains need to have strong Hebbian learning in order to produce and maintain the experimentally attested distributions. This provides a solution to the long-standing question about the type of plasticity exhibited by intrinsic excitability.
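The generic mechanism behind such results, multiplicative Hebbian-style updates combined with homeostatic normalization, can be sketched in a few lines. This is a minimal illustration of why multiplicative updates yield lognormal distributions, not the authors' model; the noise amplitude, step count, and normalization scheme are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
w = np.ones(n)  # start with identical weights

# Multiplicative (Hebbian-style) updates: each step scales a weight by a
# small activity-dependent factor.  The log-weights then perform a random
# walk, so by the central limit theorem w approaches a lognormal shape.
for _ in range(500):
    w *= 1.0 + 0.05 * rng.standard_normal(n)
    w *= n / w.sum()  # homeostatic normalization keeps total weight fixed

def skewness(x):
    return float(((x - x.mean()) ** 3).mean() / x.std() ** 3)

# Lognormal w means log(w) is roughly Gaussian (skew near 0),
# while w itself is strongly right-skewed (heavy-tailed).
print(skewness(np.log(w)), skewness(w))
```

An additive rule in the same loop would instead leave the weights approximately Gaussian, which is the qualitative distinction the paper's model exploits.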


2007 ◽  
Vol 19 (4) ◽  
pp. 885-909 ◽  
Author(s):  
Jochen Triesch

We propose a model of intrinsic plasticity for a continuous activation model neuron based on information theory. We then show how intrinsic and synaptic plasticity mechanisms interact and allow the neuron to discover heavy-tailed directions in the input. We also demonstrate that intrinsic plasticity may be an alternative explanation for the sliding threshold postulated in the BCM theory of synaptic plasticity. We present a theoretical analysis of the interaction of intrinsic plasticity with different Hebbian learning rules for the case of clustered inputs. Finally, we perform experiments on the “bars” problem, a popular nonlinear independent component analysis problem.
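One published form of this information-theoretic intrinsic plasticity rule (Triesch, 2005) adapts the gain a and bias b of a sigmoid neuron so that its output distribution approaches an exponential with target mean mu. The sketch below uses that rule; the learning rate and the Gaussian input statistics are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
mu = 0.1          # target mean firing rate (exponential output distribution)
eta = 0.01        # learning rate (illustrative)
a, b = 1.0, 0.0   # gain and bias of the sigmoid transfer function

for _ in range(50_000):
    x = rng.standard_normal()                # synaptic drive, assumed Gaussian
    y = 1.0 / (1.0 + np.exp(-(a * x + b)))   # neuron output in (0, 1)
    # Gradient rule minimizing the KL divergence between the output
    # distribution and an exponential with mean mu (after Triesch, 2005):
    db = eta * (1.0 - (2.0 + 1.0 / mu) * y + y * y / mu)
    da = eta * (1.0 / a + x - (2.0 + 1.0 / mu) * x * y + x * y * y / mu)
    a += da
    b += db

# After adaptation the mean output sits near the target mu and the
# output distribution is heavy-tailed (approximately exponential).
rates = 1.0 / (1.0 + np.exp(-(a * rng.standard_normal(10_000) + b)))
print(a, b, rates.mean())
```

The 1/a term keeps the gain positive, which is what lets this rule act as a sliding-threshold-like mechanism when combined with Hebbian synaptic plasticity.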


1995 ◽  
Vol 7 (3) ◽  
pp. 507-517 ◽  
Author(s):  
Marco Idiart ◽  
Barry Berk ◽  
L. F. Abbott

Model neural networks can perform dimensional reductions of input data sets using correlation-based learning rules to adjust their weights. Simple Hebbian learning rules lead to an optimal reduction at the single unit level but result in highly redundant network representations. More complex rules designed to reduce or remove this redundancy can develop optimal principal component representations, but they are not very compelling from a biological perspective. Neurons in biological networks have restricted receptive fields limiting their access to the input data space. We find that, within this restricted receptive field architecture, simple correlation-based learning rules can produce surprisingly efficient reduced representations. When noise is present, the size of the receptive fields can be optimally tuned to maximize the accuracy of reconstructions of input data from a reduced representation.
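A single-unit version of such a correlation-based reduction can be sketched with Oja's rule, a simple Hebbian rule with multiplicative decay that converges to the leading principal component. The restricted receptive field is modeled here by masking part of the input; dimensions, learning rate, and data statistics are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)
d = 20
# Input data with one dominant principal direction u.
u = np.zeros(d)
u[:5] = 1.0 / np.sqrt(5)
X = rng.standard_normal((5000, d)) + 3.0 * rng.standard_normal((5000, 1)) * u

# Restricted receptive field: the unit only sees the first 10 inputs.
mask = np.zeros(d)
mask[:10] = 1.0

w = rng.standard_normal(d) * 0.01 * mask
eta = 0.005
for x in X:
    x = x * mask                 # inputs outside the receptive field are invisible
    y = w @ x                    # linear unit response
    w += eta * y * (x - y * w)   # Oja's rule: Hebbian term plus decay

# The weight vector converges (up to sign) to the leading principal
# component that lies within the receptive field.
w_unit = w / np.linalg.norm(w)
print(abs(w_unit @ u))
```

Because the dominant direction here lies entirely inside the mask, the restricted unit still recovers it; the paper's point is that a population of such masked units tiles the input space with surprisingly little redundancy.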


2000 ◽  
Vol 23 (4) ◽  
pp. 550-551 ◽  
Author(s):  
Mikhail N. Zhadin

The absence of a clear influence of an animal's behavioral responses on Hebbian associative learning in the cerebral cortex calls for some changes to the Hebbian learning rules. The participation of the brain's monoaminergic systems in Hebbian associative learning is considered.


2007 ◽  
Vol 19 (6) ◽  
pp. 1468-1502 ◽  
Author(s):  
Răzvan V. Florian

The persistent modification of synaptic efficacy as a function of the relative timing of pre- and postsynaptic spikes is a phenomenon known as spike-timing-dependent plasticity (STDP). Here we show that the modulation of STDP by a global reward signal leads to reinforcement learning. We first derive analytically learning rules involving reward-modulated spike-timing-dependent synaptic and intrinsic plasticity, by applying a reinforcement learning algorithm to the stochastic spike response model of spiking neurons. These rules have several features common to plasticity mechanisms experimentally found in the brain. We then demonstrate in simulations of networks of integrate-and-fire neurons the efficacy of two simple learning rules involving modulated STDP. One rule is a direct extension of the standard STDP model (modulated STDP), and the other involves an eligibility trace stored at each synapse that keeps a decaying memory of recent pre- and postsynaptic spike pairs (modulated STDP with eligibility trace). This latter rule permits learning even if the reward signal is delayed. The proposed rules are able to solve the XOR problem with both rate-coded and temporally coded input and to learn a target output firing-rate pattern. These learning rules are biologically plausible, may be used for training generic artificial spiking neural networks regardless of the neural model used, and suggest experimental investigation of whether reward-modulated STDP exists in animals.
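The second rule, STDP written into a slowly decaying eligibility trace that a later scalar reward converts into a weight change, has roughly the following shape for a single synapse. The time constants, STDP amplitudes, spike statistics, and reward schedule below are illustrative assumptions, not the paper's simulation parameters:

```python
import numpy as np

rng = np.random.default_rng(3)
dt = 1.0            # time step, ms
tau_stdp = 20.0     # STDP window time constant, ms
tau_elig = 500.0    # eligibility decay, ms (lets reward arrive late)
A_plus, A_minus = 0.01, 0.012
eta = 0.5

w = 0.5
pre_trace = post_trace = elig = 0.0

for t in range(5000):
    pre = rng.random() < 0.02    # ~20 Hz Poisson-like presynaptic spikes
    post = rng.random() < 0.02   # postsynaptic spikes (independent, for brevity)

    # Exponentially decaying spike traces implement the STDP window.
    pre_trace *= np.exp(-dt / tau_stdp)
    post_trace *= np.exp(-dt / tau_stdp)
    if pre:
        pre_trace += 1.0
    if post:
        post_trace += 1.0

    # The STDP update is written into the eligibility trace, not the weight.
    stdp = 0.0
    if post:
        stdp += A_plus * pre_trace    # pre-before-post: potentiation
    if pre:
        stdp -= A_minus * post_trace  # post-before-pre: depression
    elig = elig * np.exp(-dt / tau_elig) + stdp

    # A delayed scalar reward converts stored eligibility into weight change.
    reward = 1.0 if (t % 1000 == 999) else 0.0
    w += eta * reward * elig
    w = min(max(w, 0.0), 1.0)     # keep the weight in a bounded range

print(w)
```

With reward absent the weight never moves, and only spike pairings that occurred within roughly tau_elig of a reward influence it, which is what makes delayed-reward learning possible.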


2012 ◽  
Vol 24 (5) ◽  
pp. 1271-1296 ◽  
Author(s):  
Michael Teichmann ◽  
Jan Wiltschut ◽  
Fred Hamker

The human visual system has the remarkable ability to recognize objects largely invariant of their position, rotation, and scale. A good interpretation of neurobiological findings involves a computational model that simulates the signal processing of the visual cortex, where this invariance is likely achieved step by step from early to late areas of visual perception. While several algorithms have been proposed for learning feature detectors, only a few studies address biologically plausible learning of such invariance. In this study, a set of Hebbian learning rules based on calcium dynamics and homeostatic regulation of single neurons is proposed. Their performance is verified within a simple model of the primary visual cortex that learns so-called complex cells from a sequence of static images. As a result, the learned complex-cell responses are largely invariant to phase and position.


2000 ◽  
Vol 278 (5) ◽  
pp. R1267-R1274 ◽  
Author(s):  
Colleen M. Novak ◽  
Laura Smale ◽  
Antonio A. Nunez

Most mammals show daily rhythms in sleep and wakefulness controlled by the primary circadian pacemaker, the suprachiasmatic nucleus (SCN). Regardless of whether a species is diurnal or nocturnal, neural activity in the SCN and expression of the immediate-early gene product Fos increases during the light phase of the cycle. This study investigated daily patterns of Fos expression in brain areas outside the SCN in the diurnal rodent Arvicanthis niloticus. We specifically focused on regions related to sleep and arousal in animals kept on a 12:12-h light-dark cycle and killed at 1 and 5 h after both lights-on and lights-off. The ventrolateral preoptic area (VLPO), which contained cells immunopositive for galanin, showed a rhythm in Fos expression with a peak at zeitgeber time (ZT) 17 (with lights-on at ZT 0). Fos expression in the paraventricular thalamic nucleus (PVT) increased during the morning (ZT 1) but not the evening activity peak of these animals. No rhythm in Fos expression was found in the centromedial thalamic nucleus (CMT), but Fos expression in the CMT and PVT was positively correlated. A rhythm in Fos expression in the ventral tuberomammillary nucleus (VTM) was 180° out of phase with the rhythm in the VLPO. Furthermore, Fos production in histamine-immunoreactive neurons of the VTM increased at the light-dark transitions, when A. niloticus shows peaks of activity. The difference in the timing of the sleep-wake cycle in diurnal and nocturnal mammals may be due to changes in the daily pattern of activity in brain regions important in sleep and wakefulness, such as the VLPO and the VTM.


2001 ◽  
Vol 5 (1) ◽  
pp. 1-31 ◽  
Author(s):  
George W. Evans ◽  
Seppo Honkapohja ◽  
Ramon Marimon

Inflation and the monetary financing of deficits are analyzed in a model in which the deficit is constrained to be less than a given fraction of a measure of aggregate market activity. Depending on parameter values, the model can have multiple steady states. Under adaptive learning with heterogeneous learning rules, there is convergence to a subset of these steady states. In some cases, a high-inflation constrained steady state will emerge. However, with a sufficiently tight fiscal constraint, the low-inflation steady state is globally stable. We provide experimental evidence in support of our theoretical results.

