Efficiency of local learning rules in threshold-linear associative networks

2020 ◽  
Author(s):  
Francesca Schönsberg ◽  
Yasser Roudi ◽  
Alessandro Treves

We show that associative networks of threshold linear units endowed with Hebbian learning can operate closer to the Gardner optimal storage capacity than their binary counterparts and even surpass this bound. This is largely achieved through a sparsification of the retrieved patterns, which we analyze for theoretical and empirical distributions of activity. As reaching the optimal capacity via non-local learning rules like back-propagation requires slow and neurally implausible training procedures, our results indicate that one-shot self-organized Hebbian learning can be just as efficient.
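For readers who want to see the ingredients concretely, here is a minimal sketch of one-shot Hebbian (covariance) storage of sparse patterns retrieved by threshold-linear units; the network size, coding sparsity, gain, threshold, and the activity normalization standing in for global inhibition are illustrative assumptions, not the parameters analyzed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
N, P, a = 500, 50, 0.1          # units, stored patterns, coding sparsity (assumed values)
g, theta = 1.0, 0.3             # gain and threshold of the transfer function (assumed values)

# Sparse binary patterns with mean activity a
xi = (rng.random((P, N)) < a).astype(float)

# One-shot Hebbian covariance rule: J_ij ∝ sum_mu (xi_i - a)(xi_j - a), no self-coupling
J = (xi - a).T @ (xi - a) / (a * (1 - a) * N)
np.fill_diagonal(J, 0.0)

def retrieve(v, steps=20):
    """Iterate threshold-linear dynamics v_i = g*[h_i - theta]_+, with a simple
    activity normalization standing in for global inhibition (an assumption here)."""
    for _ in range(steps):
        h = J @ v
        v = g * np.clip(h - theta, 0.0, None)
        if v.sum() > 0:
            v *= (a * N) / v.sum()        # keep total activity at the coding level
    return v

# Cue with a degraded version of pattern 0 and check the overlap after retrieval
cue = xi[0] * (rng.random(N) < 0.7)
out = retrieve(cue)
print(f"correlation with the stored pattern: {np.corrcoef(out, xi[0])[0, 1]:.2f}")
```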

1992 ◽  
Vol 4 (5) ◽  
pp. 703-711 ◽  
Author(s):  
Günther Palm

A simple relation between the storage capacity A for autoassociation and the storage capacity H for heteroassociation with a local learning rule is demonstrated: H = 2A. Both values are bounded by the corresponding local learning bounds, A ≤ L_A and H ≤ L_H, and the relation L_H = 2L_A is evaluated numerically.


1996 ◽  
Vol 9 (7) ◽  
pp. 1213-1222 ◽  
Author(s):  
Jeong Dong-Gyu ◽  
Lee Soo-Young

1994 ◽  
Vol 05 (02) ◽  
pp. 123-129 ◽  
Author(s):  
D.A. STARIOLO ◽  
C. TSALLIS

We study the storage properties associated with generalized Hebbian learning rules which present four free parameters that allow for asymmetry. We also introduce two extra parameters in the post-synaptic potentials in order to further improve the critical capacity. Using signal-to-noise analysis, as well as computer simulations on an analog network, we discuss the performance of the rules for arbitrarily biased patterns and find that the critical storage capacity α_c becomes maximal for a particular symmetric rule (α_c diverges in the sparse coding limit). Departures from symmetry decrease α_c but can increase the robustness of the model.
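One way to picture a generalized Hebbian rule with free parameters that allow for asymmetry is the standard four-coefficient local rule sketched below; the parameterization (coefficients A, B, C, D of the product, presynaptic, postsynaptic, and constant terms) is an assumption chosen for illustration and not necessarily the exact rule studied by the authors.

```python
import numpy as np

rng = np.random.default_rng(1)
N, P, a = 200, 20, 0.3                  # units, patterns, pattern bias (assumed values)
m = 2 * a - 1                           # mean of the biased +/-1 patterns

# Biased +/-1 patterns with P(xi = +1) = a
xi = np.where(rng.random((P, N)) < a, 1.0, -1.0)

def hebbian_four_param(xi, A, B, C, D):
    """General local rule with four free parameters (one illustrative parameterization):
         J_ij = (1/N) * sum_mu (A*xi_i*xi_j + B*xi_i + C*xi_j + D)
    Choosing B != C makes the coupling matrix asymmetric."""
    n = xi.shape[1]
    s = xi.sum(axis=0)                              # sum over patterns, per unit
    J = (A * (xi.T @ xi)
         + B * np.outer(s, np.ones(n))
         + C * np.outer(np.ones(n), s)
         + D * xi.shape[0]) / n
    np.fill_diagonal(J, 0.0)
    return J

# A symmetric (covariance-like) choice vs. an asymmetric departure from it
J_sym = hebbian_four_param(xi, A=1.0, B=-m, C=-m, D=m * m)
J_asym = hebbian_four_param(xi, A=1.0, B=-m, C=0.0, D=0.0)
print("symmetric:", np.allclose(J_sym, J_sym.T), "| asymmetric:", np.allclose(J_asym, J_asym.T))
```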


2021 ◽  
Vol 126 (1) ◽  
Author(s):  
Francesca Schönsberg ◽  
Yasser Roudi ◽  
Alessandro Treves

2004 ◽  
Vol 7 (1) ◽  
pp. 35-36 ◽  
Author(s):  
BRIAN MACWHINNEY

Truscott and Sharwood Smith (henceforth T&SS) attempt to show how second language acquisition can occur without any learning. In their APT model, change depends only on the tuning of innate principles through the normal course of processing of L2. There are some features of their model that I find attractive. Specifically, their acceptance of the concepts of competition and activation strength brings them in line with standard processing accounts like the Competition Model (Bates and MacWhinney, 1982; MacWhinney, 1987, in press). At the same time, their reliance on parameters as the core constructs guiding learning leaves this model squarely within the framework of Chomsky's theory of Principles and Parameters (P&P). As such, it stipulates that the specific functional categories of Universal Grammar serve as the fundamental guide to both first and second language acquisition. Like other accounts in the P&P framework, this model attempts to view second language acquisition as involving no real learning beyond the deductive process of parameter-setting based on the detection of certain triggers. The specific innovation of the APT model is that changes in activation strength during processing function as the trigger to the setting of parameters. Unlike other P&P models, APT does not set parameters in an absolute fashion, allowing their activation weight to change by the processing of new input over time. The use of the concept of activation in APT is far more restricted than its use in connectionist models that allow for Hebbian learning, self-organizing features maps, or back-propagation.


F1000Research ◽  
2017 ◽  
Vol 6 ◽  
pp. 1222 ◽  
Author(s):  
Gabriele Scheler

In this paper, we present data on the lognormal distributions of spike rates, synaptic weights, and intrinsic excitability (gain) for neurons in various brain areas, such as auditory or visual cortex, hippocampus, cerebellum, striatum, and midbrain nuclei. We find a remarkable consistency of heavy-tailed, specifically lognormal, distributions for rates, weights, and gains in all brain areas examined. Differences in connectivity (strongly recurrent cortex vs. feed-forward striatum and cerebellum), neurotransmitter (GABA in striatum vs. glutamate in cortex), and level of activation (low in cortex, high in Purkinje cells and midbrain nuclei) turn out to be irrelevant for this feature. A logarithmic-scale distribution of weights and gains appears to be a general, functional property in all cases analyzed. We then created a generic neural model to investigate adaptive learning rules that create and maintain lognormal distributions. We conclusively demonstrate that not only the weights but also the intrinsic gains need to undergo strong Hebbian learning in order to produce and maintain the experimentally attested distributions. This provides a solution to the long-standing question about the type of plasticity exhibited by intrinsic excitability.
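As a caricature of the central claim, the sketch below applies multiplicative (log-scale) Hebbian updates to both weights and intrinsic gains; because the updates are additive in log-space, they accumulate into an approximately lognormal distribution. The drive, learning rates, and exact update form are assumptions made for illustration, not the paper's generic model.

```python
import numpy as np

rng = np.random.default_rng(2)
N, T, eta = 1000, 500, 0.05        # units, update steps, learning rate (assumed values)

w = np.ones(N)                      # synaptic weights, identical at the start
g = np.ones(N)                      # intrinsic gains, identical at the start

for _ in range(T):
    pre = rng.random(N)             # presynaptic rates (arbitrary drive)
    post = rng.random(N)            # postsynaptic rates (kept independent in this sketch)
    # Multiplicative ("log-scale") Hebbian plasticity: the *relative* change of each
    # weight and gain is proportional to the mean-subtracted pre/post correlation,
    # i.e. the update is additive in log-space and accumulates into a lognormal shape.
    hebb = pre * post
    w *= np.exp(eta * (hebb - hebb.mean()))
    g *= np.exp(eta * (post - post.mean()))

print(f"log w: mean {np.log(w).mean():.2f}, std {np.log(w).std():.2f}")
print(f"log g: mean {np.log(g).mean():.2f}, std {np.log(g).std():.2f}")
# Both log-transformed distributions are close to Gaussian, i.e. w and g are ~lognormal.
```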


Sensors ◽  
2020 ◽  
Vol 20 (2) ◽  
pp. 500 ◽  
Author(s):  
Sergey A. Lobov ◽  
Andrey V. Chernyshov ◽  
Nadia P. Krilova ◽  
Maxim O. Shamshin ◽  
Victor B. Kazantsev

One of the modern trends in the design of human–machine interfaces (HMI) is to involve so-called spiking neural networks (SNNs) in signal processing. SNNs can be trained by simple and efficient biologically inspired algorithms. In particular, we have shown that sensory neurons in the input layer of an SNN can simultaneously encode the input signal both in the spiking rate and in the latency of spike generation. In the case of such mixed temporal-rate coding, the SNN should implement learning that works properly for both types of coding. Based on this, we investigate how a single neuron can be trained with pure rate and temporal patterns, and then build a universal SNN that is trained using mixed coding. In particular, we study Hebbian and competitive learning in SNNs in the context of temporal and rate coding problems. We show that Hebbian learning through pair-based and triplet-based spike-timing-dependent plasticity (STDP) rules is feasible for temporal coding, but not for rate coding. Synaptic competition inducing depression of poorly used synapses is required to ensure neural selectivity in rate coding. This kind of competition can be implemented by a so-called forgetting function that depends on neuron activity. We show that coherent use of triplet-based STDP and synaptic competition with the forgetting function is sufficient for rate coding. Next, we propose an SNN capable of classifying electromyographic (EMG) patterns using an unsupervised learning procedure. Neuron competition achieved via lateral inhibition ensures the "winner takes all" principle among classifier neurons. The SNN also provides a graded output response dependent on muscular contraction strength. Furthermore, we modify the SNN to implement a supervised learning method based on stimulation of the target classifier neuron synchronously with the network input. In a problem of discriminating three EMG patterns, the SNN with supervised learning shows a median accuracy of 99.5%, close to the result of a multi-layer perceptron trained by error back-propagation.
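The sketch below illustrates the pair-based STDP component in the temporal-coding case discussed above: synapses that consistently fire a few milliseconds before the postsynaptic spike are potentiated, while those firing after it are depressed. All time constants, amplitudes, and the repeating spike pattern are illustrative assumptions, not the parameters used in the study.

```python
import numpy as np

n_syn, T = 10, 5000                      # synapses and simulation steps of 1 ms (assumed)
tau_plus = tau_minus = 20.0              # STDP trace time constants, ms (assumed)
A_plus, A_minus = 0.01, 0.012            # potentiation / depression amplitudes (assumed)

w = np.full(n_syn, 0.5)                  # synaptic weights, clipped to [0, 1]
x_pre = np.zeros(n_syn)                  # presynaptic traces (one per synapse)
y_post = 0.0                             # postsynaptic trace

# Temporal pattern, repeated every 50 ms: synapses 0-4 fire 5 ms before the
# postsynaptic spike (causal), synapses 5-9 fire 5 ms after it (anti-causal).
for t in range(T):
    phase = t % 50
    pre = np.zeros(n_syn, dtype=bool)
    if phase == 0:
        pre[:5] = True                   # causal group fires
    post = (phase == 5)                  # postsynaptic spike (externally driven here)
    if phase == 10:
        pre[5:] = True                   # anti-causal group fires

    # exponential decay and increment of the STDP traces
    x_pre *= np.exp(-1.0 / tau_plus)
    y_post *= np.exp(-1.0 / tau_minus)
    x_pre[pre] += 1.0
    if post:
        y_post += 1.0

    # pair-based STDP: pre-before-post potentiates, post-before-pre depresses
    if post:
        w += A_plus * x_pre
    w[pre] -= A_minus * y_post
    w = np.clip(w, 0.0, 1.0)

print("causal group:", np.round(w[:5], 2), "| anti-causal group:", np.round(w[5:], 2))
```

For rate coding, the abstract argues that such pairwise STDP alone is not enough and must be complemented by an activity-dependent forgetting term that depresses poorly used synapses; that competition mechanism is omitted from this sketch.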


1995 ◽  
Vol 7 (3) ◽  
pp. 507-517 ◽  
Author(s):  
Marco Idiart ◽  
Barry Berk ◽  
L. F. Abbott

Model neural networks can perform dimensional reductions of input data sets using correlation-based learning rules to adjust their weights. Simple Hebbian learning rules lead to an optimal reduction at the single unit level but result in highly redundant network representations. More complex rules designed to reduce or remove this redundancy can develop optimal principal component representations, but they are not very compelling from a biological perspective. Neurons in biological networks have restricted receptive fields limiting their access to the input data space. We find that, within this restricted receptive field architecture, simple correlation-based learning rules can produce surprisingly efficient reduced representations. When noise is present, the size of the receptive fields can be optimally tuned to maximize the accuracy of reconstructions of input data from a reduced representation.
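A minimal sketch of a correlation-based (Oja-type) Hebbian rule operating within restricted receptive fields, in the spirit of the architecture described above; the receptive-field layout, input statistics, and learning parameters are assumptions chosen only for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)
n_in, n_units, rf_size = 64, 8, 16      # input dim, units, receptive-field width (assumed)
eta, T = 0.01, 3000                      # learning rate and training steps (assumed)

# Each unit's receptive field is a contiguous window of the input (illustrative choice)
starts = np.linspace(0, n_in - rf_size, n_units).astype(int)
W = rng.normal(scale=0.1, size=(n_units, rf_size))

for _ in range(T):
    x = rng.normal(size=n_in)
    x = np.convolve(x, np.ones(5) / 5, mode="same")   # spatially correlated input (assumed)
    for u in range(n_units):
        patch = x[starts[u]:starts[u] + rf_size]
        y = W[u] @ patch
        # Oja-style correlation rule: Hebbian term with implicit weight normalization;
        # within its restricted receptive field each unit converges toward the
        # leading principal component of the local input statistics.
        W[u] += eta * y * (patch - y * W[u])

print("weight norms:", np.round(np.linalg.norm(W, axis=1), 2))
```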

