Introducing Asymmetry into Interneuron Learning
A review is given of a new artificial neural network architecture in which the weights converge to the principal component subspace. The weights learn by simple Hebbian learning alone, yet require no clipping, normalization, or weight decay. The network self-organizes using negative feedback of activation from a set of "interneurons" to the input neurons. By allowing this negative feedback from the interneurons to act on other interneurons as well, we can introduce the asymmetry necessary for convergence to the actual principal components. Simulations and analysis confirm such convergence.
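The scheme described above can be sketched in code. The following is a hypothetical NumPy illustration, not the paper's own implementation: each interneuron's activation is fed back negatively to the inputs, and the asymmetry is modeled by letting each interneuron also subtract the reconstructions of the interneurons before it, so the residual seen by later interneurons is deflated. All function names, learning-rate values, and the toy data set are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def train(X, n_components, eta=0.005, epochs=30):
    """Hebbian learning with asymmetric negative feedback (sketch).

    Each interneuron i computes y_i = w_i . e on the residual e, feeds
    y_i * w_i back negatively onto the inputs, and updates w_i by a plain
    Hebbian rule on the post-feedback residual. Because interneuron i's
    feedback also acts on all later interneurons j > i, the symmetry is
    broken and each w_i converges to an individual principal component.
    """
    n_inputs = X.shape[1]
    W = rng.normal(scale=0.1, size=(n_components, n_inputs))
    for _ in range(epochs):
        for x in X:
            e = x.copy()                # residual after negative feedback
            for i in range(n_components):
                y = W[i] @ e            # interneuron activation
                e = e - y * W[i]        # negative feedback; earlier
                                        # interneurons deflate the input
                                        # seen by later ones (asymmetry)
                W[i] += eta * y * e     # simple Hebbian update; no
                                        # clipping or explicit decay
    return W

# Toy data: zero-mean 3-D Gaussian whose principal axes are the
# coordinate axes, with clearly separated variances.
X = rng.normal(size=(2000, 3)) * np.array([3.0, 1.5, 0.5])
W = train(X, n_components=2)
```

After training, `W[0]` should align with the highest-variance axis and `W[1]` with the second, each with norm close to one, even though no explicit normalization step appears in the update.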