Interpolation and rates of convergence for a class of neural networks

2009 ◽  Vol 33 (3) ◽  pp. 1441–1456
Author(s): Feilong Cao, Yongquan Zhang, Ze-Rong He

1999 ◽  Vol 11 (3) ◽  pp. 747–769
Author(s): Terrence L. Fine, Sayandev Mukherjee

We revisit the oft-studied asymptotic (in sample size) behavior of the parameter or weight estimate returned by any member of a large family of neural network training algorithms. By properly accounting for the characteristic property of neural networks that their empirical and generalization errors possess multiple minima, we rigorously establish conditions under which the parameter estimate converges strongly into the set of minima of the generalization error. Convergence of the parameter estimate to a particular value cannot be guaranteed under our assumptions. We then evaluate the asymptotic distribution of the distance between the parameter estimate and its nearest neighbor among the set of minima of the generalization error. Results on this question have appeared numerous times and generally assert asymptotic normality, the conclusion expected from familiar statistical arguments concerned with maximum likelihood estimators. These conclusions are usually reached on the basis of somewhat informal calculations, although we shall see that the situation is somewhat delicate. The preceding results then provide a derivation of learning curves for generalization and empirical errors that leads to bounds on rates of convergence.
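
As a rough sketch of the setting being described (the notation below is an assumption for illustration, not taken from the paper): let $\hat{E}_n(w)$ denote the empirical error of a network with weight vector $w$ on a sample of size $n$, let $E(w)$ denote the generalization error, and let $W^{*} = \arg\min_{w} E(w)$ be its (possibly non-singleton) set of minima. The strong-convergence statement then concerns the distance from the estimate $\hat{w}_n$ to this set,
\[
d(\hat{w}_n, W^{*}) \;=\; \min_{w^{*} \in W^{*}} \lVert \hat{w}_n - w^{*} \rVert \;\longrightarrow\; 0 \quad \text{almost surely as } n \to \infty ,
\]
while the distributional result concerns the limiting law of this same distance under a suitable normalization (classically $\sqrt{n}$, though that scaling is an assumption here rather than a claim made in the abstract).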

