Regularized Neural Networks: Some Convergence Rate Results

1995 ◽  
Vol 7 (6) ◽  
pp. 1225-1244 ◽  
Author(s):  
Valentina Corradi ◽  
Halbert White

In a recent paper, Poggio and Girosi (1990) proposed a class of neural networks obtained from the theory of regularization. Regularized networks are capable of approximating arbitrarily well any continuous function on a compactum. In this paper we consider in detail the learning problem for the one-dimensional case. We show that in the case of output data observed with noise, regularized networks are capable of learning and approximating (on compacta) elements of certain classes of Sobolev spaces, known as reproducing kernel Hilbert spaces (RKHS), at a nonparametric rate that optimally exploits the smoothness properties of the unknown mapping. In particular we show that the total squared error, given by the sum of the squared bias and the variance, will approach zero at a rate of n^(-2m/(2m+1)), where m denotes the order of differentiability of the true unknown function. On the other hand, if the unknown mapping is a continuous function but does not belong to an RKHS, then there still exists a unique regularized solution, but this is no longer guaranteed to converge in mean square to a well-defined limit. Further, even if such a solution converges, the total squared error is bounded away from zero for all n sufficiently large.
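The rate stated in this abstract can be made concrete with a short numerical sketch (our illustration, not from the paper; the function name is ours): it evaluates n^(-2m/(2m+1)) for a few smoothness orders m, showing how smoother targets push the exponent toward the parametric rate n^(-1).

```python
def total_squared_error_rate(n: int, m: int) -> float:
    """Order of the total squared error (squared bias + variance) for a
    regularized network fit to an m-times-differentiable target."""
    return n ** (-2 * m / (2 * m + 1))

for m in (1, 2, 4):
    # m = 1 gives exponent -2/3, m = 2 gives -4/5, m = 4 gives -8/9:
    # larger m means faster convergence, approaching n^(-1).
    print(m, total_squared_error_rate(1000, m))
```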

2013 ◽  
Vol 11 (05) ◽  
pp. 1350020 ◽  
Author(s):  
HONGWEI SUN ◽  
QIANG WU

We study the asymptotic properties of indefinite kernel networks with coefficient regularization and dependent sampling. The framework under investigation differs from classical kernel learning: the kernel function is not required to be positive definite, and the samples are allowed to be weakly dependent, with the dependence measured by a strong mixing condition. By a new kernel decomposition technique introduced in [27], two reproducing kernel Hilbert spaces and their associated kernel integral operators are used to characterize the properties and learnability of the hypothesis function class. Capacity-independent error bounds and learning rates are deduced.
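A minimal sketch of coefficient regularization with an indefinite kernel, under illustrative assumptions (the sigmoid kernel, the regularization parameter, and the synthetic data are our own choices, not the paper's setup): the l2 penalty is placed on the expansion coefficients rather than on an RKHS norm, so positive definiteness of the kernel is never needed for the fit to be well posed.

```python
import numpy as np

def sigmoid_kernel(x, z):
    # Symmetric but generally NOT positive definite: an indefinite kernel.
    return np.tanh(np.multiply.outer(x, z) + 1.0)

def fit_coefficients(x, y, lam=0.1):
    """Coefficient regularization: minimize ||K a - y||^2 + lam * ||a||^2.

    The normal equations (K^T K + lam I) a = K^T y are solvable for any
    kernel matrix K, since K^T K + lam I is positive definite for lam > 0.
    """
    K = sigmoid_kernel(x, x)
    n = len(x)
    return np.linalg.solve(K.T @ K + lam * np.eye(n), K.T @ y)

def predict(x_train, alpha, x_new):
    # Hypothesis f(x) = sum_i alpha_i * K(x, x_i).
    return sigmoid_kernel(x_new, x_train) @ alpha

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0.0, np.pi, 50))
y = np.sin(2 * x) + 0.1 * rng.normal(size=50)
alpha = fit_coefficients(x, y)
y_hat = predict(x, alpha, x)
```

Because the penalty acts on the coefficient vector directly, no spectral condition on K enters the linear algebra; this is what distinguishes the scheme from classical kernel ridge regression.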


2014 ◽  
Vol 9 (4) ◽  
pp. 827-931 ◽  
Author(s):  
Joseph A. Ball ◽  
Dmitry S. Kaliuzhnyi-Verbovetskyi ◽  
Cora Sadosky ◽  
Victor Vinnikov

2009 ◽  
Vol 80 (3) ◽  
pp. 430-453 ◽  
Author(s):  
JOSEF DICK

We give upper bounds on the Walsh coefficients of functions whose derivative of order at least one has bounded variation of fractional order. Further, we consider the Walsh coefficients of functions in periodic and nonperiodic reproducing kernel Hilbert spaces. A lower bound showing that our results are best possible is also given.
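Walsh coefficients can be computed numerically with a fast Walsh–Hadamard transform. The sketch below is our illustration, not from the paper: the test function, the dyadic midpoint sampling, and the 1/n normalization are assumptions made for the example.

```python
import numpy as np

def walsh_hadamard(a):
    """Fast Walsh-Hadamard transform (natural ordering), normalized by 1/n
    so that entry 0 is the sample mean of the input."""
    a = np.asarray(a, dtype=float).copy()
    n = len(a)
    h = 1
    while h < n:
        for i in range(0, n, 2 * h):
            for j in range(i, i + h):
                # Butterfly: sum and difference of paired entries.
                a[j], a[j + h] = a[j] + a[j + h], a[j] - a[j + h]
        h *= 2
    return a / n

k = 8
n = 2 ** k                      # number of dyadic sample points
x = (np.arange(n) + 0.5) / n    # midpoints of dyadic intervals in [0, 1)
f = x * (1 - x)                 # a smooth test function
coeffs = walsh_hadamard(f)
```

For a smooth function such as this one, the coefficients attached to nonconstant Walsh functions are strictly smaller in absolute value than the mean term, consistent with decay bounds of the kind the paper studies.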


2017 ◽  
Vol 87 (2) ◽  
pp. 225-244 ◽  
Author(s):  
Rani Kumari ◽  
Jaydeb Sarkar ◽  
Srijan Sarkar ◽  
Dan Timotin

2018 ◽  
Vol 45 (2) ◽  
pp. 869-896 ◽  
Author(s):  
Parag Bobade ◽  
Suprotim Majumdar ◽  
Savio Pereira ◽  
Andrew J. Kurdila ◽  
John B. Ferris
