Introducing Asymmetry into Interneuron Learning

1995, Vol. 7 (6), pp. 1191–1205
Author(s): Colin Fyfe

A review is given of a new artificial neural network architecture in which the weights converge to the principal component subspace. The weights are learned by simple Hebbian learning alone, yet require no clipping, normalization, or weight decay. The network self-organizes using negative feedback of activation from a set of "interneurons" to the input neurons. By allowing this negative feedback from the interneurons to act on other interneurons as well, we introduce the asymmetry necessary for convergence to the actual principal components, rather than merely to the subspace they span. Simulations and analysis confirm this convergence.
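To make the mechanism concrete, below is a minimal NumPy sketch of a negative-feedback PCA network with asymmetric, lower-triangular lateral feedback among the interneurons. It is an illustration in the spirit of the architecture described, not the paper's exact algorithm: the deflation ordering, learning rate, number of interneurons, and toy data are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: zero-mean samples in R^5 with an anisotropic covariance
# (assumed data; the paper's simulations may differ).
n_inputs, n_interneurons = 5, 3
A = rng.normal(size=(n_inputs, n_inputs))
X = rng.normal(size=(2000, n_inputs)) @ A
X -= X.mean(axis=0)

W = rng.normal(scale=0.1, size=(n_interneurons, n_inputs))
eta = 1e-3  # learning rate (illustrative value)

for epoch in range(50):
    for x in X:
        y = W @ x  # interneuron activations
        # Asymmetric negative feedback: interneuron i's residual at the
        # input layer subtracts the reconstructions of interneurons j <= i.
        # This lower-triangular scheme breaks the symmetry of the plain
        # subspace rule, so each row of W should settle on an individual
        # principal component instead of an arbitrary basis of their span.
        for i in range(n_interneurons):
            e = x - W[: i + 1].T @ y[: i + 1]  # residual after feedback
            W[i] += eta * y[i] * e             # simple Hebbian update

# Check: rows of W should align (up to sign) with the leading eigenvectors
# of the data covariance, giving approximately the identity matrix below.
C = np.cov(X, rowvar=False)
eigvecs = np.linalg.eigh(C)[1][:, ::-1]  # eigenvectors, largest first
print(np.round(np.abs(W @ eigvecs[:, :n_interneurons]), 2))
```

Note the role of the asymmetry: with fully symmetric feedback, where every interneuron sees the complete reconstruction, the same Hebbian rule finds only the principal subspace; the triangular ordering is what pins each weight vector to a distinct principal component.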
