Hybrid SOM based cross-modal retrieval exploiting Hebbian learning

2021 ◽ pp. 108014
Author(s): Parminder Kaur ◽ Avleen Kaur Malhi ◽ Husanbir Singh Pannu

2021 ◽ Vol 11 (4) ◽ pp. 462
Author(s): Charles B. Delahunt ◽ Pedro D. Maia ◽ J. Nathan Kutz

Most organisms suffer neuronal damage throughout their lives, which can impair performance of core behaviors. Their neural circuits need to maintain function despite injury, which in particular requires preserving key system outputs. In this work, we explore whether and how certain structural and functional neuronal network motifs act as injury mitigation mechanisms. Specifically, we examine how (i) Hebbian learning, (ii) high levels of noise, and (iii) parallel inhibitory and excitatory connections contribute to the robustness of the olfactory system in the Manduca sexta moth. We simulate injuries on a detailed computational model of the moth olfactory network calibrated to data. The injuries are modeled on focal axonal swellings, a ubiquitous form of axonal pathology observed in traumatic brain injuries and other brain disorders. Axonal swellings effectively compromise spike train propagation along the axon, reducing the effective neural firing rate delivered to downstream neurons. All three of the network motifs examined significantly mitigate the effects of injury on readout neurons, either by reducing injury’s impact on readout neuron responses or by restoring these responses to pre-injury levels. These motifs may thus be partially explained by their value as adaptive mechanisms to minimize the functional effects of neural injury. More generally, robustness to injury is a vital design principle to consider when analyzing neural systems.
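To make the injury model concrete, the sketch below treats a focal axonal swelling as a per-axon attenuation of the effective firing rate delivered downstream, and lets a plain Hebbian rule, capped by a crude homeostatic limit, potentiate the surviving synapses onto a small readout layer. This is a minimal NumPy illustration of the mechanism, not the authors' calibrated moth olfactory model; the layer sizes, firing rates, attenuation factor, and learning rate are all assumptions made for the example.

```python
# Minimal sketch of injury as firing-rate attenuation plus Hebbian compensation.
# Illustrative only; not the calibrated moth olfactory network model.
import numpy as np

rng = np.random.default_rng(0)
n_pre, n_out = 50, 5

W = rng.uniform(0.2, 0.8, size=(n_out, n_pre))    # readout (projection) weights
rates = rng.uniform(5.0, 15.0, size=n_pre)         # healthy presynaptic firing rates (Hz)
target = W @ rates                                  # pre-injury readout response

# Focal axonal swellings: injured axons deliver an attenuated effective rate downstream.
atten = np.ones(n_pre)
atten[rng.choice(n_pre, size=15, replace=False)] = 0.3   # 15 of 50 axons injured
injured = atten * rates

def rel_err(response):
    return np.linalg.norm(response - target) / np.linalg.norm(target)

print("relative readout error after injury:", rel_err(W @ injured))

# Hebbian compensation: potentiate synapses in proportion to joint pre/post activity,
# with a crude homeostatic cap at the pre-injury response magnitude.
eta = 1e-6
for _ in range(20_000):
    post = W @ injured
    if np.linalg.norm(post) >= np.linalg.norm(target):
        break
    W += eta * np.outer(post, injured)               # Hebbian outer-product update

print("relative readout error after Hebbian compensation:", rel_err(W @ injured))
```

Because the Hebbian outer-product update grows the readout response along its existing direction, the magnitude deficit caused by injury is recovered, and the residual error reflects only the small change in response direction.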


1999 ◽ Vol 29 (4) ◽ pp. 553-559
Author(s): G. Pajares ◽ J.M. Cruz ◽ J.A. Lopez-Orozco

1995 ◽ Vol 7 (6) ◽ pp. 1191-1205
Author(s): Colin Fyfe

A review is given of a new artificial neural network architecture in which the weights converge to the principal component subspace. The weights are learned using only simple Hebbian learning, yet require no clipping, normalization, or weight decay. The net self-organizes using negative feedback of activation from a set of "interneurons" to the input neurons. By allowing this negative feedback from the interneurons to act on other interneurons, we can introduce the necessary asymmetry to cause convergence to the actual principal components. Simulations and analysis confirm such convergence.
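The architecture lends itself to a very short sketch. In the version below (a minimal NumPy illustration; the data, dimensions, and learning rate are assumptions made for the example), the interneurons feed their reconstruction back negatively to the input neurons, and the weights follow a plain Hebbian rule on the residual with no clipping, normalization, or weight decay. With symmetric feedback the weights converge to the principal subspace; the asymmetric inter-interneuron feedback described above is what makes them converge to the individual principal components.

```python
# Negative-feedback network with simple Hebbian learning (subspace version).
import numpy as np

rng = np.random.default_rng(1)

# Correlated 5-D data whose two-dimensional principal subspace the net should find.
C = np.array([[5.0, 2.0, 1.0, 0.0, 0.0],
              [2.0, 4.0, 1.0, 0.0, 0.0],
              [1.0, 1.0, 3.0, 0.0, 0.0],
              [0.0, 0.0, 0.0, 1.0, 0.0],
              [0.0, 0.0, 0.0, 0.0, 1.0]])
X = rng.multivariate_normal(np.zeros(5), C, size=20_000)

n_inter, eta = 2, 0.001
W = rng.normal(scale=0.1, size=(n_inter, 5))       # interneuron weights

for x in X:
    y = W @ x                   # feedforward activation of the interneurons
    e = x - W.T @ y             # negative feedback: residual left at the input neurons
    W += eta * np.outer(y, e)   # simple Hebbian update on the residual

# The rows of W should span the leading 2-D principal subspace of the data.
eigvals, eigvecs = np.linalg.eigh(np.cov(X.T))
proj = eigvecs[:, -2:] @ eigvecs[:, -2:].T          # projector onto the top-2 subspace
alignment = np.linalg.norm(proj @ W.T, axis=0) / np.linalg.norm(W.T, axis=0)
print("fraction of each weight vector inside the principal subspace:", alignment)
```

The stability comes from the negative feedback itself: once the interneurons reconstruct the input well, the residual, and hence the Hebbian update, vanishes, so no explicit weight normalization is needed.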


2004 ◽ Vol 37 (3) ◽ pp. 219-249
Author(s): E.I. Papageorgiou ◽ C.D. Stylios ◽ P.P. Groumpos

2004 ◽ Vol 7 (1) ◽ pp. 35-36
Author(s): Brian MacWhinney

Truscott and Sharwood Smith (henceforth T&SS) attempt to show how second language acquisition can occur without any learning. In their APT model, change depends only on the tuning of innate principles through the normal course of processing of the L2. There are some features of their model that I find attractive. Specifically, their acceptance of the concepts of competition and activation strength brings them in line with standard processing accounts like the Competition Model (Bates and MacWhinney, 1982; MacWhinney, 1987, in press). At the same time, their reliance on parameters as the core constructs guiding learning leaves this model squarely within the framework of Chomsky's theory of Principles and Parameters (P&P). As such, it stipulates that the specific functional categories of Universal Grammar serve as the fundamental guide to both first and second language acquisition. Like other accounts in the P&P framework, this model attempts to view second language acquisition as involving no real learning beyond the deductive process of parameter-setting based on the detection of certain triggers. The specific innovation of the APT model is that changes in activation strength during processing function as the trigger for parameter setting. Unlike other P&P models, APT does not set parameters in an absolute fashion, allowing their activation weights to change as new input is processed over time. The use of the concept of activation in APT is far more restricted than its use in connectionist models that allow for Hebbian learning, self-organizing feature maps, or back-propagation.


2000 ◽ Vol 12 (10) ◽ pp. 2331-2353
Author(s): H. Lipson ◽ H. T. Siegelmann

This article introduces a method for clustering irregularly shaped data arrangements using high-order neurons. Complex analytical shapes are modeled by replacing the classic synaptic weight of the neuron by high-order tensors in homogeneous coordinates. In the first- and second-order cases, this neuron corresponds to a classic neuron and to an ellipsoidal-metric neuron. We show how high-order shapes can be formulated to follow the maximum-correlation activation principle and permit simple local Hebbian learning. We also demonstrate decomposition of spatial arrangements of data clusters, including very close and partially overlapping clusters, which are difficult to distinguish using classic neurons. Superior results are obtained for the Iris data.
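As a loose illustration of the second-order (ellipsoidal-metric) case, the sketch below gives each competitive unit a running correlation matrix in homogeneous coordinates, assigns each point to the unit whose ellipsoidal metric it best fits, and updates only the winner with a local outer-product (Hebbian-style) rule. It assumes scikit-learn is available for the Iris data and is not the authors' full tensor formulation; the number of units, learning rate, and initialization are assumptions made for the example.

```python
# Ellipsoidal-metric competitive units with winner-only, outer-product updates.
# Illustrative sketch of the second-order case, not the full tensor formulation.
import numpy as np
from sklearn.datasets import load_iris

rng = np.random.default_rng(2)
X = load_iris().data
Xh = np.hstack([X, np.ones((len(X), 1))])        # homogeneous coordinates
n, d = Xh.shape
k, eta = 3, 0.05

# Each unit keeps a running correlation matrix of the points it wins,
# seeded from one random point so the units start in different regions.
seeds = rng.choice(n, size=k, replace=False)
S = np.stack([np.eye(d) + np.outer(Xh[i], Xh[i]) for i in seeds])

for _ in range(30):                              # a few passes over the data
    for x in Xh[rng.permutation(n)]:
        # Ellipsoidal metric of each unit: small values mean the point is
        # well explained by that unit's second-order shape.
        dists = [x @ np.linalg.pinv(S[j]) @ x for j in range(k)]
        w = int(np.argmin(dists))
        # Local Hebbian-style update: only the winner folds the input's
        # outer product into its own correlation matrix.
        S[w] = (1 - eta) * S[w] + eta * np.outer(x, x)

labels = np.array([np.argmin([x @ np.linalg.pinv(S[j]) @ x for j in range(k)]) for x in Xh])
print("cluster sizes on Iris:", np.bincount(labels, minlength=k))
```

Raising the order of the tensors beyond two would, in the same spirit, let a single unit capture more complex, non-ellipsoidal cluster shapes.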

