Recruitment and Consolidation of Cell Assemblies for Words by Way of Hebbian Learning and Competition in a Multi-Layer Neural Network

2009 ◽  
Vol 1 (2) ◽  
pp. 160-176 ◽  
Author(s):  
Max Garagnani ◽  
Thomas Wennekers ◽  
Friedemann Pulvermüller
1995 ◽  
Vol 7 (6) ◽  
pp. 1191-1205 ◽  
Author(s):  
Colin Fyfe

A review is given of a new artificial neural network architecture in which the weights converge to the principal component subspace. The weights are trained by simple Hebbian learning alone, yet require no clipping, normalization, or weight decay. The net self-organizes using negative feedback of activation from a set of "interneurons" to the input neurons. By allowing this negative feedback from the interneurons to act on other interneurons, we can introduce the asymmetry necessary to cause convergence to the actual principal components. Simulations and analysis confirm such convergence.
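
As a rough illustration of the architecture described in this abstract, the sketch below implements the basic negative-feedback network: interneuron activations are fed back subtractively to the inputs, and plain Hebbian learning on the residual drives the weights toward the principal subspace. The learning rate, layer sizes, and training loop are illustrative assumptions, and the asymmetric interneuron-to-interneuron feedback needed to recover the individual principal components is omitted.

```python
import numpy as np

def negative_feedback_pca(X, n_components, lr=0.001, epochs=50, seed=0):
    """Sketch of a negative-feedback Hebbian network.

    Interneuron activations y = W x are fed back subtractively to the
    inputs, e = x - W.T y, and weights are updated by plain Hebbian
    learning on the residual, dW = lr * outer(y, e). No clipping,
    normalization, or weight decay is applied; the feedback itself keeps
    the weights bounded and drives W toward the principal subspace of
    the data (not the individual principal components).
    """
    rng = np.random.default_rng(seed)
    n_samples, n_inputs = X.shape
    W = rng.normal(scale=0.1, size=(n_components, n_inputs))
    for _ in range(epochs):
        for x in X[rng.permutation(n_samples)]:
            y = W @ x                  # interneuron activations
            e = x - W.T @ y            # negative feedback to the inputs
            W += lr * np.outer(y, e)   # simple Hebbian update on the residual
    return W
```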


1991 ◽  
Vol 3 (4) ◽  
pp. 510-525 ◽  
Author(s):  
D. Horn ◽  
D. Sagi ◽  
M. Usher

We investigate binding within the framework of a model of excitatory and inhibitory cell assemblies that form an oscillating neural network. Our model is composed of two such networks that are connected through their inhibitory neurons. The excitatory cell assemblies represent memory patterns. The latter have different meanings in the two networks, representing two different attributes of an object, such as shape and color. The networks segment an input that contains mixtures of such pairs into staggered oscillations of the relevant activities. Moreover, the phases of the oscillating activities representing the two attributes in each pair lock with each other to demonstrate binding. The system works very well for two inputs, but displays faulty correlations when the number of objects is larger than two. In other words, the network conjoins attributes of different objects, thus showing the phenomenon of “illusory conjunctions,” as in human vision.
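
A toy sketch of the coupling scheme follows: two excitatory-inhibitory rate networks connected only through their inhibitory populations. Classical Wilson-Cowan equations stand in here for the cell-assembly dynamics of the paper, and the parameter values and coupling strength `g_cross` are illustrative assumptions rather than the published model.

```python
import numpy as np

def coupled_oscillatory_networks(T=400.0, dt=0.05, g_cross=2.0, P=(1.25, 1.25)):
    """Two excitatory-inhibitory networks coupled only through their
    inhibitory populations, loosely mirroring the architecture in the
    abstract. Wilson-Cowan rate equations stand in for the cell-assembly
    dynamics; parameters are illustrative and untuned.
    """
    def S(x, a, th):                     # sigmoid response function
        return 1.0 / (1.0 + np.exp(-a * (x - th)))

    E = np.array([0.10, 0.12])
    I = np.array([0.05, 0.05])
    steps = int(T / dt)
    trace = np.zeros((steps, 2))
    for t in range(steps):
        I_other = I[::-1]                # inhibition arriving from the other network
        dE = -E + (1 - E) * S(16 * E - 12 * I + np.asarray(P), 1.3, 4.0)
        dI = -I + (1 - I) * S(15 * E - 3 * I - g_cross * I_other, 2.0, 3.7)
        E, I = E + dt * dE, I + dt * dI
        trace[t] = E
    return trace                         # excitatory activity of both networks
```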


Author(s):  
Manuel Samuelides ◽  
Simon Thorpe ◽  
Emmanuel Veneau

2020 ◽  
Vol 12 (2) ◽  
pp. 1-20
Author(s):  
Sourav Das ◽  
Anup Kumar Kolya

In this work, the authors extract a set of baseline features from a popular open-source music corpus and explore new recognition techniques by applying unsupervised Hebbian learning to a single-layer neural network trained on the same dataset. They present detailed empirical findings showing how such an algorithm can help a single-layer feedforward network learn music features as patterns. The unsupervised training algorithm enables the proposed network to achieve an accuracy of 90.36% for music feature detection. For comparative analysis, they set their results against several previous benchmark works on similar tasks. They further discuss the limitations of the work and provide a thorough error analysis. They hope to gather new insight into this particular classification technique and its performance, and to identify future directions that could improve computational music feature recognition.
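
As a minimal sketch of the kind of unsupervised Hebbian training described above, the following single-layer network learns feature detectors from rows of a feature matrix using Oja's normalized Hebbian rule. The paper's exact rule, layer size, and preprocessing are not given in the abstract, so every name and parameter here is an illustrative assumption.

```python
import numpy as np

def hebbian_feature_layer(X, n_units, lr=0.01, epochs=20, seed=0):
    """Unsupervised Hebbian training of a single-layer feedforward
    network on feature vectors (e.g., rows of baseline audio features).
    Oja's normalized Hebbian rule is used here as one concrete choice.
    """
    rng = np.random.default_rng(seed)
    n_samples, n_features = X.shape
    W = rng.normal(scale=0.1, size=(n_units, n_features))
    for _ in range(epochs):
        for x in X[rng.permutation(n_samples)]:
            y = W @ x                                         # linear unit activations
            W += lr * (np.outer(y, x) - (y ** 2)[:, None] * W)  # Oja's rule
    return W

# The learned rows of W act as feature detectors whose responses W @ x
# could then feed a simple classifier for the detection task.
```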


1990 ◽  
Vol 64 (2) ◽  
pp. 171-176 ◽  
Author(s):  
A. Carlson

2010 ◽  
Vol 22 (8) ◽  
pp. 2059-2085 ◽  
Author(s):  
Daniel Bush ◽  
Andrew Philippides ◽  
Phil Husbands ◽  
Michael O'Shea

Rate-coded Hebbian learning, as characterized by the BCM formulation, is an established computational model of synaptic plasticity. Recently it has been demonstrated that changes in the strength of synapses in vivo can also depend explicitly on the relative timing of pre- and postsynaptic firing. Computational modeling of this spike-timing-dependent plasticity (STDP) has demonstrated that it can provide inherent stability or competition based on local synaptic variables. However, it has also been demonstrated that these properties rely on synaptic weights being either depressed or unchanged by an increase in mean stochastic firing rates, which directly contradicts empirical data. Several analytical studies have addressed this apparent dichotomy and identified conditions under which distinct and disparate STDP rules can be reconciled with rate-coded Hebbian learning. The aim of this research is to verify, unify, and expand on these previous findings by manipulating each element of a standard computational STDP model in turn. This allows us to identify the conditions under which this plasticity rule can replicate experimental data obtained using both rate and temporal stimulation protocols in a spiking recurrent neural network. Our results describe how the relative scale of mean synaptic weights and their dependence on stochastic pre- or postsynaptic firing rates can be manipulated by adjusting the exact profile of the asymmetric learning window and temporal restrictions on spike pair interactions respectively. These findings imply that previously disparate models of rate-coded autoassociative learning and temporally coded heteroassociative learning, mediated by symmetric and asymmetric connections respectively, can be implemented in a single network using a single plasticity rule. However, we also demonstrate that forms of STDP that can be reconciled with rate-coded Hebbian learning do not generate inherent synaptic competition, and thus some additional mechanism is required to guarantee long-term input-output selectivity.
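
For concreteness, below is a minimal pair-based STDP sketch with the asymmetric exponential learning window discussed in the abstract. The amplitudes, time constants, and all-to-all pairing scheme are illustrative choices; it is precisely these elements (the window profile and the temporal restrictions on spike pair interactions) that the study manipulates.

```python
import numpy as np

def stdp_weight_change(pre_spikes, post_spikes, A_plus=0.005, A_minus=0.00525,
                       tau_plus=20.0, tau_minus=20.0):
    """All-to-all pair-based STDP with an asymmetric exponential window.

    Spike times are in ms; amplitudes and time constants are illustrative
    values, not those of the paper. Positive dt (post after pre) adds
    potentiation, negative dt adds depression.
    """
    dw = 0.0
    for t_pre in pre_spikes:
        for t_post in post_spikes:
            dt = t_post - t_pre
            if dt > 0:
                dw += A_plus * np.exp(-dt / tau_plus)    # LTP branch
            elif dt < 0:
                dw -= A_minus * np.exp(dt / tau_minus)   # LTD branch
    return dw
```

Restricting the sum to nearest-neighbor spike pairs, or reshaping the window, changes how the resulting weight drift depends on mean firing rates, which is the kind of manipulation the abstract describes.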


2005 ◽  
Vol 65-66 ◽  
pp. 647-652 ◽  
Author(s):  
Andreas Knoblauch

2013 ◽  
Vol 2013 ◽  
pp. 1-17 ◽  
Author(s):  
Elisa Magosso ◽  
Filippo Cona ◽  
Mauro Ursino

Exposure to synchronous but spatially disparate auditory and visual stimuli produces a perceptual shift of sound location towards the visual stimulus (the ventriloquism effect). After adaptation to a ventriloquism situation, an enduring sound shift is observed in the absence of the visual stimulus (the ventriloquism aftereffect). Experimental studies report opposing results on how the aftereffect generalizes across sound frequencies, ranging from an aftereffect confined to the frequency used during adaptation to one generalizing across several octaves. Here, we present an extension of a model of visual-auditory interaction we previously developed. The new model is able to simulate the ventriloquism effect and, via Hebbian learning rules, the ventriloquism aftereffect, and it can be used to investigate aftereffect generalization across frequencies. The model includes auditory neurons coding both for the spatial and spectral features of the auditory stimuli, mimicking properties of biological auditory neurons. The model suggests that different extents of aftereffect generalization across frequencies can be obtained by changing the intensity of the auditory stimulus, which induces different amounts of activation in the auditory layer. The model provides a coherent theoretical framework to explain the apparently contradictory results found in the literature. Model mechanisms and hypotheses are discussed in relation to neurophysiological and psychophysical data.
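
A minimal sketch of the kind of Hebbian cross-modal learning invoked above is given below: repeated pairing of spatially disparate auditory and visual population activities potentiates cross-modal synapses, which could later bias the auditory spatial estimate even when the visual stimulus is absent. The layer sizes, Gaussian tuning, and learning rule are illustrative assumptions, not the published model equations.

```python
import numpy as np

def hebbian_crossmodal_update(a_act, v_act, W, lr=0.001):
    """One Hebbian update of cross-modal synapses: W[i, j] connects
    visual unit j to auditory unit i, and co-active pairs are potentiated.
    """
    return W + lr * np.outer(a_act, v_act)

def gaussian_population(center, n_units=100, sigma=5.0):
    """Spatially tuned population activity peaked at `center` (unit index)."""
    x = np.arange(n_units)
    return np.exp(-((x - center) ** 2) / (2 * sigma ** 2))

# Repeated pairing of, e.g., a sound at position 40 with a flash at 50
# strengthens connections that later bias the auditory estimate toward 50,
# mimicking the aftereffect.
W = np.zeros((100, 100))
for _ in range(200):
    W = hebbian_crossmodal_update(gaussian_population(40), gaussian_population(50), W)
```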


2021 ◽  
Author(s):  
Nikolaos Chrysanthidis ◽  
Florian Fiebig ◽  
Anders Lansner ◽  
Pawel Herman

Episodic memory is the recollection of past personal experiences associated with particular times and places. This kind of memory is commonly subject to loss of contextual information or "semantization", which gradually decouples the encoded memory items from their associated contexts while transforming them into semantic or gist-like representations. Novel extensions to the classical Remember/Know behavioral paradigm attribute the loss of episodicity to multiple exposures of an item in different contexts. Despite recent advancements explaining semantization at a behavioral level, the underlying neural mechanisms remain poorly understood. In this study, we suggest and evaluate a novel hypothesis proposing that Bayesian-Hebbian synaptic plasticity mechanisms might cause semantization of episodic memory. We implement a cortical spiking neural network model with a Bayesian-Hebbian learning rule called Bayesian Confidence Propagation Neural Network (BCPNN), which captures the semantization phenomenon and offers a mechanistic explanation for it. Encoding items across multiple contexts leads to item-context decoupling akin to semantization. We compare BCPNN plasticity with the more commonly used spike-timing dependent plasticity (STDP) learning rule in the same episodic memory task. Unlike BCPNN, STDP does not explain the decontextualization process. We also examine how selective plasticity modulation of isolated salient events may enhance preferential retention and resistance to semantization. Our model reproduces important features of episodicity on behavioral timescales under various biological constraints whilst also offering a novel neural and synaptic explanation for semantization, thereby casting new light on the interplay between episodic and semantic memory processes.
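
The following is a minimal rate-based sketch of a Bayesian-Hebbian (BCPNN-style) update, in which exponentially decaying traces estimate pre-, post-, and coactivation probabilities and the weight is the log ratio of joint to independent activation. The time constants, epsilon floor, and single-trace simplification are illustrative assumptions rather than the spiking model used in the study.

```python
import numpy as np

class BCPNNSynapse:
    """Rate-based sketch of a Bayesian-Hebbian (BCPNN-style) synapse."""

    def __init__(self, tau=1000.0, eps=0.01):
        self.tau, self.eps = tau, eps
        self.p_i = self.p_j = self.p_ij = eps   # probability trace estimates

    def update(self, pre, post, dt=1.0):
        k = dt / self.tau                        # trace update factor
        self.p_i += k * (pre - self.p_i)         # presynaptic activation trace
        self.p_j += k * (post - self.p_j)        # postsynaptic activation trace
        self.p_ij += k * (pre * post - self.p_ij)  # coactivation trace

    @property
    def weight(self):
        # Log-odds of coactivation versus independence; eps avoids log(0).
        return np.log((self.p_ij + self.eps ** 2) /
                      (self.p_i * self.p_j + self.eps ** 2))

    @property
    def bias(self):
        return np.log(self.p_j + self.eps)
```

Encoding the same item across many different contexts dilutes the coactivation trace for any one item-context pair relative to the item trace, which is the kind of item-context decoupling the abstract links to semantization.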

