OPTIMAL HEBBIAN LEARNING RULES AND THE ROLE OF ASYMMETRY

1994 ◽  
Vol 05 (02) ◽  
pp. 123-129 ◽  
Author(s):  
D.A. STARIOLO ◽  
C. TSALLIS

We study the storage properties associated with generalized Hebbian learning rules which contain four free parameters that allow for asymmetry. We also introduce two extra parameters in the post-synaptic potentials in order to further improve the critical capacity. Using signal-to-noise analysis, as well as computer simulations on an analog network, we discuss the performance of the rules for arbitrarily biased patterns and find that the critical storage capacity αc becomes maximal for a particular symmetric rule (αc diverges in the sparse coding limit). Departures from symmetry decrease αc but can increase the robustness of the model.
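A generalized Hebbian rule of this kind can be illustrated with a short sketch. The four-parameter form below (coefficients a, b, c, d) and the one-step stability check are a hypothetical parameterization for unbiased patterns, not the exact rule of the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
N, P = 200, 10                         # neurons, stored patterns
xi = rng.choice([-1, 1], size=(P, N))  # unbiased binary patterns

# Hypothetical 4-parameter generalized Hebbian rule:
#   J_ij = (1/N) * sum_mu (a*xi_i*xi_j + b*xi_i + c*xi_j + d)
# a weights the usual Hebb term; b != c makes the rule asymmetric.
a, b, c, d = 1.0, 0.0, 0.0, 0.0        # symmetric Hebb for this sketch
J = (a * xi.T @ xi
     + b * xi.sum(0)[:, None]
     + c * xi.sum(0)[None, :]
     + d * P) / N
np.fill_diagonal(J, 0.0)

# One parallel update from a stored pattern: a fixed point signals retrieval.
s = xi[0].copy()
s_new = np.sign(J @ s)
overlap = (s_new * xi[0]).mean()       # close to 1 when the pattern is stable
```

With P well below the critical capacity, the overlap stays near 1; raising P toward αc·N makes retrieval fail, which is the quantity the signal-to-noise analysis tracks.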

1996 ◽  
Vol 07 (05) ◽  
pp. 655-664
Author(s):  
D. BOLLÉ ◽  
G. JONGEN ◽  
G.M. SHIM

Using a signal-to-noise analysis, the effects of nonlinear modulation of the Hebbian learning rule in the multi-class proximity problem are investigated. Both random classification and classification provided by a Gaussian and a binary teacher are treated. Analytic expressions are derived for the learning and generalization rates around an old and a new prototype. For the proximity problem with binary inputs but Q′-state outputs, it is shown that the optimal modulation is a combination of a hyperbolic tangent and a linear function. As an illustration, numerical results are presented for the two-class and the Q′=3 multi-class problem.
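A modulated Hebbian update with the reported tanh-plus-linear shape might look like the following sketch; the mixing coefficients alpha, beta, gamma and the learning rate eta are illustrative assumptions, not values from the paper:

```python
import numpy as np

# Hypothetical modulation: a combination of a hyperbolic tangent and a
# linear term, as the abstract reports for the optimal case.
def modulation(x, alpha=1.0, beta=2.0, gamma=0.5):
    return alpha * np.tanh(beta * x) + gamma * x

# Modulated Hebbian update for one example (input s, teacher output t):
# the weight change is the input scaled by a nonlinear function of the output.
def hebb_update(w, s, t, eta=0.1):
    return w + eta * modulation(t) * s

w = np.zeros(5)
s = np.array([1.0, -1.0, 1.0, 1.0, -1.0])
w = hebb_update(w, s, t=0.8)
```

The pure Hebb rule is recovered with alpha = 0, gamma = 1; the tanh component saturates the contribution of strongly driven outputs.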


2010 ◽  
Vol 22 (7) ◽  
pp. 1812-1836 ◽  
Author(s):  
Laurent U. Perrinet

Neurons in the input layer of primary visual cortex in primates develop edge-like receptive fields. One approach to understanding the emergence of this response is to state that neural activity has to efficiently represent sensory data with respect to the statistics of natural scenes. Furthermore, it is believed that such an efficient coding is achieved using a competition across neurons so as to generate a sparse representation, that is, where a relatively small number of neurons are simultaneously active. Indeed, different models of sparse coding, coupled with Hebbian learning and homeostasis, have been proposed that successfully match the observed emergent response. However, the specific role of homeostasis in learning such sparse representations is still largely unknown. By quantitatively assessing the efficiency of the neural representation during learning, we derive a cooperative homeostasis mechanism that optimally tunes the competition between neurons within the sparse coding algorithm. We apply this homeostasis while learning small patches taken from natural images and compare its efficiency with state-of-the-art algorithms. Results show that while different sparse coding algorithms give similar coding results, the homeostasis provides an optimal balance for the representation of natural images within the population of neurons. Competition in sparse coding is optimized when it is fair. By contributing to optimizing statistical competition across neurons, homeostasis is crucial in providing a more efficient solution to the emergence of independent components.
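The idea of homeostatic gains that equalize how often each neuron wins the sparse-coding competition can be sketched as follows; the random dictionary, the uniform selection target, and the gain-update rate eta_h are illustrative assumptions, not the paper's exact algorithm:

```python
import numpy as np

rng = np.random.default_rng(1)
K, D, T = 8, 16, 2000            # atoms, input dimension, samples
Phi = rng.normal(size=(K, D))    # random dictionary (stand-in for learned atoms)
Phi /= np.linalg.norm(Phi, axis=1, keepdims=True)

gain = np.ones(K)                # homeostatic gains, adapted online
counts = np.zeros(K)
eta_h = 0.01                     # homeostasis rate (illustrative)

for t in range(T):
    x = rng.normal(size=D)
    # competition: pick the atom with the largest gain-modulated correlation
    c = gain * (Phi @ x)
    k = np.argmax(np.abs(c))
    counts[k] += 1
    # lower the gain of over-selected atoms, raise under-selected ones,
    # steering selection frequencies toward the uniform target 1/K
    gain += eta_h * (1.0 / K - counts / (t + 1))
    gain = np.clip(gain, 0.1, 10.0)
```

After adaptation, the selection frequencies counts/T are roughly uniform, which is the "fair competition" condition the abstract describes.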


2006 ◽  
Vol 971 ◽  
Author(s):  
Yasuhiro Munekata ◽  
Kota Washio ◽  
Takanori Suda ◽  
Naoyuki Hashimoto ◽  
Somei Ohnuki ◽  
...  

One of the important materials issues for Ti-Cr-V based hydrogen storage alloys is to improve the cyclic degradation of storage capacity, which has been attributed to internal stress. We focused on the sub-micron structure of this material, which can accumulate during cyclic use. Specimens of a 24Ti-36Cr-40V alloy were prepared by FZ melting. Powdered samples were fabricated by mechanical grinding under an Ar atmosphere. Vacuum annealing was carried out to reduce residual stress and lattice defects. PCT properties were tested at 293 K under 4.5 MPa, and XRD and TEM analyses were carried out on selected samples. In the first cycle, annealing increased the storage capacity, but by the second cycle the improvement had disappeared. Comparing microstructures with and without annealing, complex dislocation structures were observed after cyclic hydrogenation. Notably, dislocation-free structures were sometimes observed in fine grains smaller than 0.1 micron, which suggests the possibility of a fine-grained structure without defect accumulation.


2020 ◽  
Author(s):  
Francesca Schönsberg ◽  
Yasser Roudi ◽  
Alessandro Treves

We show that associative networks of threshold linear units endowed with Hebbian learning can operate closer to the Gardner optimal storage capacity than their binary counterparts and even surpass this bound. This is largely achieved through a sparsification of the retrieved patterns, which we analyze for theoretical and empirical distributions of activity. As reaching the optimal capacity via non-local learning rules like back-propagation requires slow and neurally implausible training procedures, our results indicate that one-shot self-organized Hebbian learning can be just as efficient.
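A one-step signal-to-noise check of Hebbian storage with sparse, graded (threshold-linear) patterns can be sketched as below; the covariance-type rule and the exponential rate distribution are illustrative choices, not the paper's exact setup:

```python
import numpy as np

rng = np.random.default_rng(2)
N, P, f = 400, 20, 0.2                  # units, patterns, coding level
# sparse graded patterns: a fraction f of units fire, at exponential rates
eta = rng.exponential(1.0, size=(P, N)) * (rng.random((P, N)) < f)

# one-shot covariance (Hebbian) rule, local in pre- and post-synaptic activity
a = eta.mean()
J = (eta - a).T @ (eta - a) / (N * a * (1 - a))
np.fill_diagonal(J, 0.0)

# signal-to-noise check: input currents when pattern 0 is presented as a cue
h = J @ eta[0]
active = eta[0] > 0
margin = h[active].mean() - h[~active].mean()   # positive => retrievable
```

A positive margin means a threshold between the two groups of units can reinstate the stored pattern in one step; as P grows toward capacity, crosstalk from the other patterns erodes this margin.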


2021 ◽  
Author(s):  
Petr Kaspar ◽  
Ivana Kolmasova ◽  
Ondrej Santolik ◽  
Martin Popek ◽  
Pavel Spurny ◽  
...  

Sprites and halos are transient luminous events occurring above thunderclouds. They can be observed simultaneously or they can also appear individually. Circumstances leading to initiation of these events are still not completely understood. In order to clarify the role of lightning channels of causative lightning return strokes and the corresponding thundercloud charge structure, we have developed a new model of electric field amplitudes at halo/sprite altitudes. It consists of electrostatic and inductive components of the electromagnetic field generated by the lightning channel in free space at a height of 15 km. Above this altitude we solve Maxwell’s equations self-consistently including the nonlinear effects of heating and ionization/attachment of the electrons. At the same time, we investigate the role of a development of the thundercloud charge structure and related induced charges above the thundercloud. We show how these charges lead to the different distributions of the electric field at the initiation heights of the halos and sprites. We adjust free parameters of the model using observations of halos and sprites at the Nydek TLE observatory and using measurements of luminosity curves of the corresponding return strokes measured by an array of fast photometers. The latter measurements are also used to set the boundary conditions of the model.
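The electrostatic component at halo/sprite altitudes scales with the charge moment change of the causative return stroke. A free-space point-dipole estimate (neglecting image charges, relaxation, and atmospheric conductivity, so only a rough order-of-magnitude sketch, not the paper's model) is:

```python
import numpy as np

EPS0 = 8.854e-12          # vacuum permittivity (F/m)

def quasi_static_E(charge_moment_Ckm, altitude_km):
    """On-axis electrostatic dipole field above a lightning discharge.

    Free-space point-dipole estimate: the charge moment change Q*h
    (in C*km) is treated as the dipole moment. Illustrative only.
    """
    M = charge_moment_Ckm * 1e3                 # C*km -> C*m
    r = altitude_km * 1e3                       # km -> m
    return 2 * M / (4 * np.pi * EPS0 * r**3)    # V/m

# a ~500 C*km stroke evaluated at a typical halo altitude (~80 km)
E = quasi_static_E(500.0, 80.0)
```

Even this crude estimate gives fields of order tens of V/m at 80 km, which is why large charge moment changes are the standard proxy for sprite/halo initiation.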


F1000Research ◽  
2017 ◽  
Vol 6 ◽  
pp. 1222 ◽  
Author(s):  
Gabriele Scheler

In this paper, we present data for the lognormal distributions of spike rates, synaptic weights and intrinsic excitability (gain) for neurons in various brain areas, such as auditory or visual cortex, hippocampus, cerebellum, striatum, midbrain nuclei. We find a remarkable consistency of heavy-tailed, specifically lognormal, distributions for rates, weights and gains in all brain areas examined. The difference between strongly recurrent and feed-forward connectivity (cortex vs. striatum and cerebellum), neurotransmitter (GABA (striatum) or glutamate (cortex)) or the level of activation (low in cortex, high in Purkinje cells and midbrain nuclei) turns out to be irrelevant for this feature. Logarithmic scale distribution of weights and gains appears to be a general, functional property in all cases analyzed. We then created a generic neural model to investigate adaptive learning rules that create and maintain lognormal distributions. We conclusively demonstrate that not only weights, but also intrinsic gains, need to have strong Hebbian learning in order to produce and maintain the experimentally attested distributions. This provides a solution to the long-standing question about the type of plasticity exhibited by intrinsic excitability.
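Why Hebbian, activity-proportional (multiplicative) plasticity yields lognormal distributions can be sketched in a few lines: updates that multiply a weight by random factors accumulate additively in log space, so the log-weights tend to a normal distribution. The step size 0.05 and the update count are illustrative, not fitted to the paper's data:

```python
import numpy as np

rng = np.random.default_rng(3)
W = np.full(5000, 0.5)               # population of synaptic weights
for _ in range(200):
    # multiplicative Hebbian-style update: each step scales the weight by a
    # random activity-dependent factor (stand-in for pre*post activity)
    W *= np.exp(rng.normal(0.0, 0.05, size=W.shape))

logW = np.log(W)
# multiplicative updates add up in log space, so logW is approximately
# normal, i.e. W is approximately lognormal (heavy right tail)
```

The same argument applies to intrinsic gains: if gain changes are also proportional to the current gain, the gain distribution converges to lognormal as well, consistent with the distributions reported above.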

