Implementing Hebbian learning in a rank-based neural network

Author(s):  
Manuel Samuelides ◽  
Simon Thorpe ◽  
Emmanuel Veneau


1995 ◽
Vol 7 (6) ◽  
pp. 1191-1205 ◽  
Author(s):  
Colin Fyfe

A review is given of a new artificial neural network architecture in which the weights converge to the principal component subspace. The weights are trained by simple Hebbian learning alone, yet require no clipping, normalization, or weight decay. The net self-organizes using negative feedback of activation from a set of "interneurons" to the input neurons. By allowing this negative feedback from the interneurons to act on other interneurons, we can introduce the asymmetry necessary to cause convergence to the actual principal components. Simulations and analysis confirm such convergence.
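
A minimal sketch of such a negative-feedback network, assuming the standard formulation (interneuron activations y = Wx, residual input e = x - Wᵀy, simple Hebbian update on the residual; function and parameter names here are illustrative):

```python
import numpy as np

def negative_feedback_subspace(X, n_components, eta=0.001, epochs=50, seed=0):
    """Sketch of a negative-feedback Hebbian network.

    Interneuron activations are fed back subtractively to the inputs;
    simple Hebbian learning on the residual drives the weights toward
    the principal component subspace without clipping, normalization,
    or weight decay.
    """
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=0.1, size=(n_components, X.shape[1]))
    for _ in range(epochs):
        for x in X:
            y = W @ x                    # interneuron activations
            e = x - W.T @ y              # input after negative feedback
            W += eta * np.outer(y, e)    # simple Hebbian update
    return W
```

Convergence to the actual principal components, rather than just the subspace, would additionally require the asymmetric interneuron-to-interneuron feedback the abstract describes (e.g., each interneuron receiving feedback only from earlier ones).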


2020 ◽  
Vol 12 (2) ◽  
pp. 1-20
Author(s):  
Sourav Das ◽  
Anup Kumar Kolya

In this work, the authors extract distinct baseline features from a popular open-source music corpus and explore new recognition techniques by applying unsupervised Hebbian learning to a single-layer neural network trained on the same dataset. They present detailed empirical findings showing how such an algorithm can train a single-layer feedforward network to learn music features as patterns. The unsupervised training algorithm enables the proposed neural network to achieve an accuracy of 90.36% for successful music feature detection. For comparative analysis, they set their results against several previous benchmark works on similar tasks. They further discuss the limitations of the work and provide a thorough error analysis. They hope to uncover new information about this particular classification technique and its performance, and to identify future directions that could improve computational music feature recognition.
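
The abstract does not specify the exact update rule; a hypothetical sketch of unsupervised Hebbian training of a single-layer network on extracted audio feature vectors might look as follows (the subtractive Oja-style decay term is an assumption added here for weight stability, not necessarily the rule used in the paper):

```python
import numpy as np

def hebbian_feature_training(features, n_units, eta=0.01, epochs=20, seed=0):
    """Hypothetical single-layer unsupervised Hebbian trainer.

    `features` is an (n_samples, n_features) array of audio descriptors
    (e.g., tempo or spectral statistics).  The Oja-style decay term is
    an assumption added to keep weights bounded; the paper's exact rule
    may differ.
    """
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=0.1, size=(n_units, features.shape[1]))
    for _ in range(epochs):
        for x in features:
            y = np.tanh(W @ x)                                # unit activations
            W += eta * (np.outer(y, x) - (y ** 2)[:, None] * W)
    return W
```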


1990 ◽  
Vol 64 (2) ◽  
pp. 171-176 ◽  
Author(s):  
A. Carlson

2010 ◽  
Vol 22 (8) ◽  
pp. 2059-2085 ◽  
Author(s):  
Daniel Bush ◽  
Andrew Philippides ◽  
Phil Husbands ◽  
Michael O'Shea

Rate-coded Hebbian learning, as characterized by the BCM formulation, is an established computational model of synaptic plasticity. Recent experiments have demonstrated that changes in the strength of synapses in vivo can also depend explicitly on the relative timing of pre- and postsynaptic firing. Computational modeling of this spike-timing-dependent plasticity (STDP) has shown that it can provide inherent stability or competition based on local synaptic variables. However, these properties have been shown to rely on synaptic weights being either depressed or unchanged by an increase in mean stochastic firing rates, which directly contradicts empirical data. Several analytical studies have addressed this apparent dichotomy and identified conditions under which distinct and disparate STDP rules can be reconciled with rate-coded Hebbian learning. The aim of this research is to verify, unify, and expand on these previous findings by manipulating each element of a standard computational STDP model in turn. This allows us to identify the conditions under which this plasticity rule can replicate experimental data obtained using both rate and temporal stimulation protocols in a spiking recurrent neural network. Our results describe how the relative scale of mean synaptic weights, and their dependence on stochastic pre- or postsynaptic firing rates, can be manipulated by adjusting the exact profile of the asymmetric learning window and the temporal restrictions on spike pair interactions, respectively. These findings imply that previously disparate models of rate-coded autoassociative learning and temporally coded heteroassociative learning, mediated by symmetric and asymmetric connections respectively, can be implemented in a single network using a single plasticity rule. However, we also demonstrate that forms of STDP that can be reconciled with rate-coded Hebbian learning do not generate inherent synaptic competition, and thus some additional mechanism is required to guarantee long-term input-output selectivity.
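
As a concrete reference point, here is a sketch of the standard additive pair-based STDP window; the manipulations the abstract describes correspond to reshaping this profile and to restricting which spike pairs may interact. Parameter values are common illustrative defaults, not those of the paper:

```python
import numpy as np

def stdp_weight_change(dt, a_plus=0.005, a_minus=0.00525,
                       tau_plus=20.0, tau_minus=20.0):
    """Additive pair-based STDP window.

    dt = t_post - t_pre in ms: pre-before-post pairs (dt > 0) potentiate,
    post-before-pre pairs (dt < 0) depress, each decaying exponentially
    with its own time constant.  Adjusting (a_plus, a_minus, tau_plus,
    tau_minus) reshapes the asymmetric learning window; a separate
    modeling choice (all-to-all vs. nearest-neighbour pairing) imposes
    temporal restrictions on spike pair interactions.
    """
    dt = np.asarray(dt, dtype=float)
    return np.where(dt >= 0,
                    a_plus * np.exp(-dt / tau_plus),
                    -a_minus * np.exp(dt / tau_minus))
```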


2009 ◽  
Vol 1 (2) ◽  
pp. 160-176 ◽  
Author(s):  
Max Garagnani ◽  
Thomas Wennekers ◽  
Friedemann Pulvermüller

2013 ◽  
Vol 2013 ◽  
pp. 1-17 ◽  
Author(s):  
Elisa Magosso ◽  
Filippo Cona ◽  
Mauro Ursino

Exposure to synchronous but spatially disparate auditory and visual stimuli produces a perceptual shift of sound location towards the visual stimulus (the ventriloquism effect). After adaptation to a ventriloquism situation, an enduring sound shift is observed in the absence of the visual stimulus (the ventriloquism aftereffect). Experimental studies report conflicting results on how the aftereffect generalizes across sound frequencies, ranging from an aftereffect confined to the frequency used during adaptation to one generalizing across several octaves. Here, we present an extension of a model of visual-auditory interaction we previously developed. The new model is able to simulate the ventriloquism effect and, via Hebbian learning rules, the ventriloquism aftereffect, and can be used to investigate aftereffect generalization across frequencies. The model includes auditory neurons coding for both the spatial and spectral features of the auditory stimuli, mimicking properties of biological auditory neurons. The model suggests that different extents of aftereffect generalization across frequencies can be obtained by changing the intensity of the auditory stimulus, which induces different amounts of activation in the auditory layer. The model provides a coherent theoretical framework to explain the apparently contradictory results found in the literature. Model mechanisms and hypotheses are discussed in relation to neurophysiological and psychophysical data.
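
A toy illustration (not the authors' full model; all names are hypothetical) of how Hebbian co-activity between visual and spectro-spatial auditory units could produce such an aftereffect:

```python
import numpy as np

def ventriloquism_adaptation_step(W, aud_act, vis_act, eta=0.01):
    """Toy Hebbian adaptation step, purely illustrative.

    W[i, j] connects auditory unit j (tuned jointly to space and
    frequency) to spatial unit i driven by the visual stimulus during
    adaptation.  Co-activation strengthens these connections, so later
    auditory-only stimulation is pulled toward the adapted visual
    location; the number of auditory units recruited (set by stimulus
    intensity) determines how far the shift generalizes across frequency.
    """
    W += eta * np.outer(vis_act, aud_act)
    return W
```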


2021 ◽  
Author(s):  
Nikolaos Chrysanthidis ◽  
Florian Fiebig ◽  
Anders Lansner ◽  
Pawel Herman

Episodic memory is the recollection of past personal experiences associated with particular times and places. This kind of memory is commonly subject to loss of contextual information or "semantization", which gradually decouples the encoded memory items from their associated contexts while transforming them into semantic or gist-like representations. Novel extensions to the classical Remember/Know behavioral paradigm attribute the loss of episodicity to multiple exposures of an item in different contexts. Despite recent advancements explaining semantization at a behavioral level, the underlying neural mechanisms remain poorly understood. In this study, we suggest and evaluate a novel hypothesis proposing that Bayesian-Hebbian synaptic plasticity mechanisms might cause semantization of episodic memory. We implement a cortical spiking neural network model with a Bayesian-Hebbian learning rule called Bayesian Confidence Propagation Neural Network (BCPNN), which captures the semantization phenomenon and offers a mechanistic explanation for it. Encoding items across multiple contexts leads to item-context decoupling akin to semantization. We compare BCPNN plasticity with the more commonly used spike-timing dependent plasticity (STDP) learning rule in the same episodic memory task. Unlike BCPNN, STDP does not explain the decontextualization process. We also examine how selective plasticity modulation of isolated salient events may enhance preferential retention and resistance to semantization. Our model reproduces important features of episodicity on behavioral timescales under various biological constraints whilst also offering a novel neural and synaptic explanation for semantization, thereby casting new light on the interplay between episodic and semantic memory processes.
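
A minimal sketch of the core of the BCPNN rule, simplified from the published spike-based formulation (which uses cascaded synaptic traces): probability estimates of pre-, post-, and co-activation are low-pass filtered, and the weights and biases are their log-odds and log-probabilities.

```python
import numpy as np

def bcpnn_step(p_pre, p_post, p_co, x_pre, x_post,
               tau_p=1000.0, dt=1.0, eps=1e-4):
    """Simplified rate-based BCPNN update (a sketch, not the full rule).

    x_pre (n,) and x_post (m,) are current pre-/postsynaptic activities
    in [0, 1].  Exponentially filtered estimates of activation and
    co-activation probabilities yield log-odds weights and a
    log-probability bias (intrinsic excitability) term.
    """
    k = dt / tau_p
    p_pre += k * (x_pre - p_pre)                   # P(pre active)
    p_post += k * (x_post - p_post)                # P(post active)
    p_co += k * (np.outer(x_post, x_pre) - p_co)   # P(co-active)
    W = np.log((p_co + eps ** 2) / (np.outer(p_post, p_pre) + eps ** 2))
    b = np.log(p_post + eps)
    return p_pre, p_post, p_co, W, b
```

Because the co-activation estimate p_co decays toward current statistics, repeatedly encoding the same item in different contexts dilutes any one item-context association, which is the item-context decoupling the abstract likens to semantization.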


2017 ◽  
Author(s):  
Ulises Pereira ◽  
Nicolas Brunel

The attractor neural network scenario is a popular framework for memory storage in association cortex, but there is still a large gap between models based on this scenario and experimental data. We study a recurrent network model in which both the learning rules and the distribution of stored patterns are inferred from distributions of visual responses to novel and familiar images in inferior temporal cortex (ITC). Unlike classical attractor neural network models, our model exhibits graded activity in retrieval states, with distributions of firing rates that are close to lognormal. The inferred learning rules are close to maximizing the number of stored patterns within a family of unsupervised Hebbian learning rules, suggesting that learning rules in ITC are optimized to store a large number of attractor states. Finally, we show that there exist two types of retrieval states: one in which firing rates are constant in time, and another in which firing rates fluctuate chaotically.
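
The family of unsupervised Hebbian rules studied here is separable: each stored pattern contributes the product of a presynaptic function f and a postsynaptic function g to the recurrent weights. A sketch with placeholder f and g (the paper infers their actual shapes from ITC responses):

```python
import numpy as np

def build_connectivity(patterns, f, g, c=0.1, seed=0):
    """Sparse recurrent matrix from a separable Hebbian rule (sketch).

    `patterns` is (n_patterns, n_neurons); entry J[i, j] accumulates
    g(rate of postsynaptic i) * f(rate of presynaptic j) over stored
    patterns, on random dilute connectivity of density c.  The f and g
    passed in are placeholders for the inferred functions.
    """
    n = patterns.shape[1]
    rng = np.random.default_rng(seed)
    mask = rng.random((n, n)) < c       # random dilute connectivity
    J = np.zeros((n, n))
    for xi in patterns:
        J += np.outer(g(xi), f(xi))     # post x pre contribution
    return (J * mask) / (c * n)
```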


2018 ◽  
Author(s):  
Tiffany Hwu ◽  
Jeffrey L. Krichmar

The ability to behave differently according to the situation is essential for survival in a dynamic environment. This requires past experiences to be encoded and retrieved alongside the contextual schemas in which they occurred. The complementary learning systems theory suggests that these schemas are acquired through gradual learning via the neocortex and rapid learning via the hippocampus. However, it has also been shown that new information matching a preexisting schema can bypass the gradual learning process and be acquired rapidly, suggesting that the separation of memories into schemas is useful for flexible learning. While there are theories of the role of schemas in memory consolidation, we lack a full understanding of the mechanisms underlying this function. For this reason, we created a biologically plausible neural network model of schema consolidation that incorporates several brain areas and their interactions. The model uses a rate-coded multilayer neural network with contrastive Hebbian learning to learn context-specific tasks. Our model suggests that the medial prefrontal cortex supports context-dependent behaviors by learning representations of schemas. Additionally, sparse random connections in the model from the ventral hippocampus to the hidden layers of the network gate neuronal activity depending on their involvement within the current schema, thus separating the representations of new and prior schemas. Contrastive Hebbian learning may function similarly to oscillations in the hippocampus, alternating between clamping and unclamping the output layer of the network to drive learning. Lastly, the model shows the vital role of neuromodulation, as a neuromodulatory area detects how certain it is that new information is consistent with prior schemas and modulates the speed of memory encoding accordingly. Along with the insights that this model brings to the neurobiology of memory, it also provides a basis for creating context-dependent memories while preventing catastrophic forgetting in artificial neural networks.
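
A sketch of a single contrastive Hebbian learning step under the usual two-phase scheme (a free phase with outputs unclamped and a clamped phase with outputs driven to their targets); variable names are illustrative:

```python
import numpy as np

def chl_update(W, pre_free, post_free, pre_clamped, post_clamped, eta=0.05):
    """One contrastive Hebbian learning step for a weight matrix (sketch).

    The same Hebbian product is measured in a free phase (the network
    settles on its own) and a clamped phase (outputs held at targets);
    weights move toward the clamped-phase statistics.  In the model
    described above, this alternation of clamping and unclamping is
    proposed to play the role of hippocampal oscillations.
    """
    W += eta * (np.outer(post_clamped, pre_clamped)
                - np.outer(post_free, pre_free))
    return W
```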

