Learning in Feed-Forward Artificial Neural Networks I
The view of artificial neural networks as adaptive systems has led to the development of generic, ad hoc procedures known as learning rules. The first of these is the Perceptron Rule (Rosenblatt, 1962), useful for single-layer feed-forward networks and linearly separable problems. Its simplicity and beauty, together with the existence of a convergence theorem, made it a basic point of departure for neural learning algorithms. This algorithm is a particular case of the Widrow-Hoff or delta rule (Widrow & Hoff, 1960), applicable to continuous networks with no hidden layers and an error function that is quadratic in the parameters.
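The Perceptron Rule described above can be sketched in a few lines. The following is a minimal illustration, not taken from the original article: the AND function serves as an example of a linearly separable problem, and the learning rate, initialization, and epoch count are illustrative assumptions.

```python
import numpy as np

# Illustrative linearly separable problem: the AND function.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])  # inputs
t = np.array([0, 0, 0, 1])                      # AND targets

w = np.zeros(2)   # weights (assumed zero initialization)
b = 0.0           # bias
eta = 1.0         # learning rate (illustrative choice)

for _ in range(10):                       # a few passes over the data
    for x, target in zip(X, t):
        y = 1 if w @ x + b > 0 else 0     # threshold-unit output
        # Perceptron update: weights change only on misclassification.
        w += eta * (target - y) * x
        b += eta * (target - y)

predictions = [1 if w @ x + b > 0 else 0 for x in X]
print(predictions)  # converges to [0, 0, 0, 1] on this separable problem
```

The Widrow-Hoff (delta) rule differs only in the update: it replaces the thresholded output `y` with the continuous linear output `w @ x + b`, which makes the update a gradient step on a quadratic error function.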
2015, Vol 760, pp. 771-776
2001, Vol 32 (1), pp. 21-30