LOCAL LEARNING RULES AND SPARSE CODING IN NEURAL NETWORKS

1990 ◽  
pp. 145-150 ◽  
Author(s):  
Günther Palm
2018 ◽  
Vol 30 (1) ◽  
pp. 84-124 ◽  
Author(s):  
Cengiz Pehlevan ◽  
Anirvan M. Sengupta ◽  
Dmitri B. Chklovskii

Modeling self-organization of neural networks for unsupervised learning using Hebbian and anti-Hebbian plasticity has a long history in neuroscience. Yet derivations of single-layer networks with such local learning rules from principled optimization objectives became possible only recently, with the introduction of similarity matching objectives. What explains the success of similarity matching objectives in deriving neural networks with local learning rules? Here, using dimensionality reduction as an example, we introduce several variable substitutions that illuminate the success of similarity matching. We show that the full network objective may be optimized separately for each synapse using local learning rules in both the offline and online settings. We formalize the long-standing intuition of the rivalry between Hebbian and anti-Hebbian rules by formulating a min-max optimization problem. We introduce a novel dimensionality reduction objective using fractional matrix exponents. To illustrate the generality of our approach, we apply it to a novel formulation of dimensionality reduction combined with whitening. We confirm numerically that the networks with learning rules derived from principled objectives perform better than those with heuristic learning rules.
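Below is a minimal sketch of the kind of online Hebbian/anti-Hebbian similarity-matching network described in this abstract. It is an illustration under assumed learning rates and toy data, not the authors' exact algorithm: feedforward weights W follow a Hebbian rule, lateral weights M follow an anti-Hebbian rule, and each synaptic update uses only locally available quantities.

```python
# Minimal sketch of an online Hebbian/anti-Hebbian similarity-matching
# network for dimensionality reduction. Learning rates, the Hebbian/
# anti-Hebbian ratio tau, and the toy data are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_out, n_steps = 10, 2, 5000
eta, tau = 1e-3, 0.5  # assumed step size and learning-rate ratio

# Toy inputs with a dominant two-dimensional subspace.
C = np.diag([5.0, 3.0] + [0.1] * (n_in - 2))
X = rng.multivariate_normal(np.zeros(n_in), C, size=n_steps)

W = rng.normal(scale=0.1, size=(n_out, n_in))   # feedforward weights
M = np.eye(n_out)                               # lateral weights

for x in X:
    y = np.linalg.solve(M, W @ x)               # fixed point of dy/dt = W x - M y
    W += eta * (np.outer(y, x) - W)             # Hebbian: uses only x_j, y_i, W_ij
    M += (eta / tau) * (np.outer(y, y) - M)     # anti-Hebbian: uses only y_i, y_j, M_ij

# The learned filters F = M^{-1} W should approximately span the
# top principal subspace of the input covariance.
F = np.linalg.solve(M, W)
print(np.round(F, 2))
```

The min-max structure mentioned in the abstract is visible here: the Hebbian update on W and the anti-Hebbian update on M pull the output similarities in opposite directions, and their balance sets the learned subspace.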


2019 ◽  
Author(s):  
Eric McVoy Dodds ◽  
Jesse Alexander Livezey ◽  
Michael Robert DeWeese

Retinal ganglion cell outputs are less correlated across space than are natural scenes, and it has been suggested that this decorrelation is performed in the retina in order to improve efficiency and to benefit processing later in the visual system. However, sparse coding, a successful computational model of primary visual cortex, is achievable under some conditions with highly correlated inputs: most sparse coding algorithms learn the well-known sparse features of natural images and can output sparse, high-fidelity codes with or without a preceding decorrelation stage of processing. We propose that sparse coding with biologically plausible local learning rules does require decorrelated inputs, providing a possible explanation for why whitening may be necessary early in the visual system.
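The decorrelation step this abstract refers to is typically implemented as whitening of image patches before sparse coding. The sketch below shows one standard way to do this (ZCA whitening) on stand-in data; the patch dimensions and the regularizer epsilon are assumptions, and real experiments would use natural-image patches.

```python
# Minimal sketch of ZCA whitening (decorrelation) as a preprocessing stage
# before sparse coding. Random data stand in for natural-image patches.
import numpy as np

rng = np.random.default_rng(1)
patches = rng.normal(size=(10000, 64))            # stand-in for 8x8 image patches
patches -= patches.mean(axis=0)

cov = patches.T @ patches / len(patches)
eigvals, eigvecs = np.linalg.eigh(cov)
eps = 1e-5                                        # assumed regularizer for small eigenvalues

# ZCA whitening: rotate into the eigenbasis, rescale, rotate back.
W_zca = eigvecs @ np.diag(1.0 / np.sqrt(eigvals + eps)) @ eigvecs.T
white = patches @ W_zca

# After whitening the covariance is approximately the identity,
# i.e. responses are decorrelated across space.
print(np.allclose(white.T @ white / len(white), np.eye(64), atol=1e-2))
```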


1995 ◽  
Vol 7 (3) ◽  
pp. 507-517 ◽  
Author(s):  
Marco Idiart ◽  
Barry Berk ◽  
L. F. Abbott

Model neural networks can perform dimensional reductions of input data sets using correlation-based learning rules to adjust their weights. Simple Hebbian learning rules lead to an optimal reduction at the single unit level but result in highly redundant network representations. More complex rules designed to reduce or remove this redundancy can develop optimal principal component representations, but they are not very compelling from a biological perspective. Neurons in biological networks have restricted receptive fields limiting their access to the input data space. We find that, within this restricted receptive field architecture, simple correlation-based learning rules can produce surprisingly efficient reduced representations. When noise is present, the size of the receptive fields can be optimally tuned to maximize the accuracy of reconstructions of input data from a reduced representation.
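A simple way to picture the restricted-receptive-field setting described here is a bank of units, each seeing only a window of the input and adapting with a correlation-based rule. The sketch below uses Oja's Hebbian rule on spatially correlated toy input; the window layout, sizes, and learning rate are assumptions, not the paper's model.

```python
# Minimal sketch of correlation-based (Oja) learning with restricted
# receptive fields: each unit sees only a local window of the input.
import numpy as np

rng = np.random.default_rng(2)
n_in, field, n_units, eta = 20, 5, 4, 1e-2        # assumed sizes and learning rate
starts = [0, 5, 10, 15]                           # each unit's receptive-field window

w = rng.normal(scale=0.1, size=(n_units, field))
for _ in range(20000):
    x = rng.normal(size=n_in)
    x = np.convolve(x, np.ones(3) / 3, mode="same")   # spatially correlated input
    for i, s in enumerate(starts):
        xi = x[s:s + field]                       # restricted receptive field
        y = w[i] @ xi
        w[i] += eta * y * (xi - y * w[i])         # Oja's rule: Hebbian term + normalization

# Each unit converges toward the leading principal component of its own field,
# giving a reduced representation built from purely local computations.
print(np.round(w, 2))
```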


2019 ◽  
Vol 33 (28) ◽  
pp. 1950343 ◽  
Author(s):  
Zhilian Yan ◽  
Youmei Zhou ◽  
Xia Huang ◽  
Jianping Zhou

This paper addresses the issue of finite-time boundedness for time-delay neural networks with external disturbances via weight learning. Using a group of inequalities combined with Lyapunov theory, weight learning rules are devised to ensure that the neural networks are finite-time bounded for the fixed connection weight matrix case and the fixed delayed connection weight matrix case, respectively. Sufficient conditions for the existence of the desired learning rules are presented in the form of linear matrix inequalities, which can be verified readily with MATLAB. It is shown that the proposed learning rules also guarantee the finite-time stability of the time-delay neural networks. Finally, a numerical example is employed to show the applicability of the devised weight learning rules.
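The paper checks its linear matrix inequality (LMI) conditions in MATLAB; the sketch below only illustrates how such a feasibility check looks numerically, using a toy Lyapunov-type LMI in Python with cvxpy rather than the paper's finite-time-boundedness conditions. The system matrix A and the cost are assumptions for the example.

```python
# Minimal sketch of an LMI feasibility check (toy Lyapunov inequality,
# not the paper's finite-time-boundedness conditions).
import numpy as np
import cvxpy as cp

A = np.array([[-2.0, 1.0],
              [0.5, -3.0]])                      # assumed stable system matrix
n = A.shape[0]

P = cp.Variable((n, n), symmetric=True)
constraints = [
    P >> np.eye(n),                              # P positive definite
    A.T @ P + P @ A << -np.eye(n),               # Lyapunov LMI
]

prob = cp.Problem(cp.Minimize(cp.trace(P)), constraints)
prob.solve()
print(prob.status, np.round(P.value, 3))
```

If the solver reports an optimal (feasible) status, the LMI conditions hold and a certificate matrix P is returned; the paper's conditions are checked in the same spirit, with more structured matrix variables accounting for the delays and disturbances.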

