Informational characteristics of neural networks capable of associative learning based on Hebbian plasticity

1993 ◽  
Vol 4 (4) ◽  
pp. 495-536 ◽  
Author(s):  
A A Frolov ◽  
I P Murav'ev

2021 ◽  
Vol 443 ◽  
pp. 222-234 ◽
Author(s):  
Jia Liu ◽  
Wenhua Zhang ◽  
Fang Liu ◽  
Liang Xiao

2022 ◽  
Author(s):  
Alberto Lazari ◽  
Piergiorgio Salvan ◽  
Michiel Cottaar ◽  
Daniel Papp ◽  
Matthew FS Rushworth ◽  
...  

Synaptic plasticity is required for learning and follows Hebb's Rule, the computational principle underpinning associative learning. In recent years, a complementary type of brain plasticity has been identified in myelinated axons, which make up the majority of the brain's white matter. Like synaptic plasticity, myelin plasticity is required for learning, but it is unclear whether it is Hebbian or whether it follows different rules. Here, we provide evidence that white matter plasticity operates following Hebb's Rule in humans. Across two experiments, we find that co-stimulating cortical areas to induce Hebbian plasticity leads to relative increases in cortical excitability and associated increases in a myelin marker within the stimulated fiber bundle. We conclude that Hebbian plasticity extends beyond synaptic changes, and can be observed in human white matter fibers.
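Hebb's Rule, as invoked here, is the principle that connections strengthen when pre- and postsynaptic activity coincide. A minimal rate-based sketch in Python (the variable names and the toy pairing task are ours, not the paper's):

import numpy as np

def hebbian_update(w, x, y, lr=0.1):
    # Hebb's Rule: each weight grows in proportion to the product of
    # presynaptic activity x and postsynaptic activity y.
    return w + lr * np.outer(y, x)

# Toy associative pairing: repeatedly co-activate a stimulus x with a response y.
x = np.array([1.0, 0.0, 1.0, 0.0])   # presynaptic pattern
y = np.array([0.0, 1.0, 1.0])        # postsynaptic pattern
w = np.zeros((3, 4))                 # synaptic weights, initially silent
for _ in range(50):
    w = hebbian_update(w, x, y)

print(w @ x)   # x alone now evokes a response proportional to y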


2004 ◽  
Vol 17 (10) ◽  
pp. 1495 ◽
Author(s):  
Misha Tsodyks ◽  
Yael Adini ◽  
Dov Sagi

2021 ◽  
pp. 1-29 ◽
Author(s):  
Shanshan Qin ◽  
Nayantara Mudur ◽  
Cengiz Pehlevan

We propose a novel biologically plausible solution to the credit assignment problem motivated by observations in the ventral visual pathway and trained deep neural networks. In both, representations of objects in the same category become progressively more similar, while objects belonging to different categories become less similar. We use this observation to motivate a layer-specific learning goal in a deep network: each layer aims to learn a representational similarity matrix that interpolates between previous and later layers. We formulate this idea using a contrastive similarity matching objective function and derive from it deep neural networks with feedforward, lateral, and feedback connections and neurons that exhibit biologically plausible Hebbian and anti-Hebbian plasticity. Contrastive similarity matching can be interpreted as an energy-based learning algorithm, but with significant differences from others in how a contrastive function is constructed.
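The layer-local objective can be made concrete with a toy loss: each layer's representational similarity (Gram) matrix is pulled toward an interpolation of its neighbours' similarity matrices. The sketch below is our simplification; the paper's full contrastive objective and the resulting Hebbian/anti-Hebbian circuitry are not reproduced here:

import numpy as np

def sim(Z):
    # Representational similarity (Gram) matrix of a batch of activations.
    return Z @ Z.T

def similarity_matching_loss(Z_prev, Z_hidden, Z_next, alpha=0.5):
    # Layer-local goal: the hidden layer's similarity matrix should
    # interpolate between those of the previous and next layers.
    target = alpha * sim(Z_prev) + (1.0 - alpha) * sim(Z_next)
    diff = sim(Z_hidden) - target
    return float(np.sum(diff ** 2))

rng = np.random.default_rng(1)
Z_prev, Z_hidden, Z_next = rng.random((8, 10)), rng.random((8, 6)), rng.random((8, 4))
print(similarity_matching_loss(Z_prev, Z_hidden, Z_next))   # batch of 8 samples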


2017 ◽  
Vol 372 (1715) ◽  
pp. 20160155 ◽  
Author(s):  
Ada X. Yee ◽  
Yu-Tien Hsu ◽  
Lu Chen

Hebbian and homeostatic plasticity are two major forms of plasticity in the nervous system: Hebbian plasticity provides a synaptic basis for associative learning, whereas homeostatic plasticity serves to stabilize network activity. While achieving seemingly very different goals, these two types of plasticity interact functionally through overlapping elements in their respective mechanisms. Here, we review studies conducted in the mammalian central nervous system, summarize known circuit and molecular mechanisms of homeostatic plasticity, and compare these mechanisms with those that mediate Hebbian plasticity. We end with a discussion of ‘local’ homeostatic plasticity and its potential role as a form of metaplasticity that modulates a neuron's future capacity for Hebbian plasticity. This article is part of the themed issue ‘Integrating Hebbian and homeostatic plasticity’.
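A schematic illustration of the interaction the review describes: a pure Hebbian term is unstable on its own, while a slow multiplicative scaling term (one standard model of homeostatic synaptic scaling) holds the firing rate near a set point. The rates and constants below are arbitrary choices for the sketch, not values from the review:

import numpy as np

rng = np.random.default_rng(2)
w = 0.1 * rng.random(20)       # 20 synapses onto one neuron
target_rate = 1.0

for _ in range(5000):
    x = rng.random(20)                     # presynaptic rates
    y = w @ x                              # postsynaptic rate
    w += 0.001 * y * x                     # Hebbian term: runaway growth on its own
    w *= 1.0 + 0.1 * (target_rate - y)     # homeostatic synaptic scaling:
                                           # multiplicatively rescales all synapses
                                           # to hold y near the target rate

print(w @ np.full(20, 0.5))   # expected rate under mean input: close to target_rate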


2007 ◽  
Vol 362 (1479) ◽  
pp. 449-454 ◽  
Author(s):  
Stefano Ghirlanda ◽  
Magnus Enquist

We show that a simple network model of associative learning can reproduce three findings that arise from particular training and testing procedures in generalization experiments: (i) the effect of ‘errorless learning’, (ii) the effect of extinction testing on peak shift, and (iii) the central tendency effect. These findings provide a true test of the network model, which was developed to account for other phenomena, and highlight the potential of neural networks for studying phenomena that depend on sequences of experiences with many stimuli. Our results suggest that at least some such phenomena, e.g. stimulus range effects, may derive from basic mechanisms of associative memory rather than from more complex memory processes.
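For readers unfamiliar with peak shift, a toy reconstruction (ours, not the authors' exact model): stimuli are Gaussian activity patterns along a sensory dimension, a rewarded stimulus S+ and an unrewarded S- are trained with a delta rule, and the post-training response peak lies beyond S+, displaced away from S-:

import numpy as np

def stimulus(pos, n=100, width=5.0):
    # Gaussian activation pattern over n input units, centred at pos.
    units = np.arange(n)
    return np.exp(-0.5 * ((units - pos) / width) ** 2)

s_plus, s_minus = stimulus(50), stimulus(45)   # S+ at 50, S- at 45
w = np.zeros(100)
for _ in range(500):                           # delta-rule training
    w += 0.05 * (1.0 - w @ s_plus) * s_plus    # reinforce S+
    w += 0.05 * (0.0 - w @ s_minus) * s_minus  # extinguish S-

responses = [w @ stimulus(p) for p in range(100)]
print(int(np.argmax(responses)))   # > 50: the response peak shifts away from S-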


2018 ◽  
Vol 30 (1) ◽  
pp. 84-124 ◽  
Author(s):  
Cengiz Pehlevan ◽  
Anirvan M. Sengupta ◽  
Dmitri B. Chklovskii

Modeling self-organization of neural networks for unsupervised learning using Hebbian and anti-Hebbian plasticity has a long history in neuroscience. Yet derivations of single-layer networks with such local learning rules from principled optimization objectives became possible only recently, with the introduction of similarity matching objectives. What explains the success of similarity matching objectives in deriving neural networks with local learning rules? Here, using dimensionality reduction as an example, we introduce several variable substitutions that illuminate the success of similarity matching. We show that the full network objective may be optimized separately for each synapse using local learning rules in both the offline and online settings. We formalize the long-standing intuition of the rivalry between Hebbian and anti-Hebbian rules by formulating a min-max optimization problem. We introduce a novel dimensionality reduction objective using fractional matrix exponents. To illustrate the generality of our approach, we apply it to a novel formulation of dimensionality reduction combined with whitening. We confirm numerically that the networks with learning rules derived from principled objectives perform better than those with heuristic learning rules.
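The generic shape of the networks these objectives yield can be sketched in a few lines: feedforward weights W learn Hebbianly toward the output-input correlation, lateral weights M learn anti-Hebbianly toward the output-output correlation, and the output is the fixed point of the recurrent dynamics. This is a schematic of the general similarity-matching family, not any one derivation from the paper:

import numpy as np

rng = np.random.default_rng(3)
d, k = 10, 3                            # input and output dimensionality
scale = np.array([3.0, 2.0, 1.5] + [0.1] * 7)
W = 0.1 * rng.standard_normal((k, d))   # feedforward weights (Hebbian)
M = np.eye(k)                           # lateral weights (anti-Hebbian)
lr = 0.01

for _ in range(5000):
    x = scale * rng.standard_normal(d)  # anisotropic input: 3 dominant directions
    y = np.linalg.solve(M, W @ x)       # steady state of the dynamics y' = Wx - My
    W += lr * (np.outer(y, x) - W)      # Hebbian: W tracks <y x^T>
    M += lr * (np.outer(y, y) - M)      # anti-Hebbian: M tracks <y y^T>

# After training, the network projects inputs onto (approximately) the
# top-k principal subspace of the input covariance.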


2022 ◽  
Vol 5 (1) ◽
Author(s):  
Takuya Isomura ◽  
Hideaki Shimazaki ◽  
Karl J. Friston

This work considers a class of canonical neural networks comprising rate coding models, wherein neural activity and plasticity minimise a common cost function—and plasticity is modulated with a certain delay. We show that such neural networks implicitly perform active inference and learning to minimise the risk associated with future outcomes. Mathematical analyses demonstrate that this biological optimisation can be cast as maximisation of model evidence, or equivalently minimisation of variational free energy, under the well-known form of a partially observed Markov decision process model. This equivalence indicates that the delayed modulation of Hebbian plasticity—accompanied with adaptation of firing thresholds—is a sufficient neuronal substrate to attain Bayes optimal inference and control. We corroborated this proposition using numerical analyses of maze tasks. This theory offers a universal characterisation of canonical neural networks in terms of Bayesian belief updating and provides insight into the neuronal mechanisms underlying planning and adaptive behavioural control.
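To unpack "minimisation of variational free energy" in the simplest case, here is a toy categorical model with likelihood matrix A and prior D (standard active-inference notation; connecting this to the paper's canonical neural networks and delayed plasticity is beyond this sketch):

import numpy as np

def softmax(v):
    e = np.exp(v - v.max())
    return e / e.sum()

# Generative model: P(o|s) = A[o, s], P(s) = D
A = np.array([[0.9, 0.2],
              [0.1, 0.8]])    # 2 observations x 2 hidden states
D = np.array([0.5, 0.5])

def free_energy(q, o):
    # F = E_q[ln q - ln P(o, s)]: an upper bound on -ln P(o),
    # tight when q is the exact posterior over hidden states.
    ln_joint = np.log(A[o]) + np.log(D)
    return float(np.sum(q * (np.log(q + 1e-16) - ln_joint)))

o = 0                                    # observe outcome 0
q = softmax(np.log(D) + np.log(A[o]))    # exact posterior minimises F
print(free_energy(q, o), -np.log(A[o] @ D))   # equal at the minimum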

