Optimal learning rates for least squares regularized regression with unbounded sampling

2011, Vol. 27(1), pp. 55-67. Author(s): Cheng Wang, Ding-Xuan Zhou
2020, Vol. 19(8), pp. 4213-4225. Author(s): Lucian Coroianu, Danilo Costarelli, Sorin G. Gal, Gianluca Vinti, ...

2017, Vol. 15(06), pp. 815-836. Author(s): Yulong Zhao, Jun Fan, Lei Shi

The ranking problem aims at learning real-valued functions to order instances and has attracted great interest in statistical learning theory. In this paper, we consider the regularized least squares ranking algorithm within the framework of reproducing kernel Hilbert spaces. In particular, we focus on the analysis of the generalization error of this ranking algorithm, and we improve the existing learning rates by means of an error decomposition technique from regression and Hoeffding’s decomposition for U-statistics.
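For intuition, here is a minimal sketch (not the authors' implementation) of regularized least squares ranking in a reproducing kernel Hilbert space with a Gaussian kernel, obtained via the representer theorem; the names gaussian_kernel, fit_rls_rank, lam, and sigma are illustrative assumptions.

```python
# Minimal sketch of regularized least squares ranking in an RKHS.
# The pairwise objective
#   (1/(n(n-1))) * sum_{i != j} ((y_i - y_j) - (f(x_i) - f(x_j)))^2 + lam * ||f||_K^2
# is minimized over f = sum_i alpha_i * K(x_i, .) (representer theorem).
import numpy as np

def gaussian_kernel(A, B, sigma=1.0):
    # K[i, j] = exp(-||A_i - B_j||^2 / (2 * sigma^2))
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-d2 / (2 * sigma**2))

def fit_rls_rank(X, y, lam=1e-2, sigma=1.0):
    n = len(y)
    K = gaussian_kernel(X, X, sigma)
    # With r = y - K @ alpha, sum_{i != j} (r_i - r_j)^2 = 2 * r.T @ L @ r,
    # where L = n*I - 11^T is the Laplacian of the complete graph.  Setting the
    # gradient to zero gives (L K + n(n-1)*lam*I) alpha = L y, with the factor
    # 2 absorbed into lam.
    L = n * np.eye(n) - np.ones((n, n))
    alpha = np.linalg.solve(L @ K + n * (n - 1) * lam * np.eye(n), L @ y)
    return alpha

# Usage: rank new instances by their predicted scores.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=50)
alpha = fit_rls_rank(X, y)
X_new = rng.normal(size=(5, 3))
scores = gaussian_kernel(X_new, X, 1.0) @ alpha   # order X_new by these scores
print(scores)
```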


2016, Vol. 14(03), pp. 449-477. Author(s): Andreas Christmann, Ding-Xuan Zhou

Additive models play an important role in semiparametric statistics. This paper gives learning rates for regularized kernel-based methods for additive models. Provided the additive-model assumption is valid, these learning rates compare favorably, in particular in high dimensions, with recent results on optimal learning rates for purely nonparametric regularized kernel-based quantile regression using the Gaussian radial basis function kernel. Additionally, a concrete example shows that a Gaussian function depending only on one variable lies in the reproducing kernel Hilbert space generated by an additive Gaussian kernel, but does not belong to the reproducing kernel Hilbert space generated by the multivariate Gaussian kernel of the same variance.
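To illustrate the distinction the abstract draws, the sketch below builds an additive Gaussian kernel (a sum of one-dimensional Gaussian kernels, one per coordinate) alongside the multivariate Gaussian kernel. A plain kernel ridge fit is used only to show how the kernel is swapped; it is an assumption made for brevity, not the paper's quantile-regression estimator, and all names are illustrative.

```python
# Additive Gaussian kernel K_add(x, x') = sum_j exp(-(x_j - x'_j)^2 / (2 sigma^2))
# versus the multivariate Gaussian kernel exp(-||x - x'||^2 / (2 sigma^2)).
import numpy as np

def gaussian_kernel(A, B, sigma=1.0):
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-d2 / (2 * sigma**2))

def additive_gaussian_kernel(A, B, sigma=1.0):
    # Sum of one-dimensional Gaussian kernels, one per coordinate.
    K = np.zeros((A.shape[0], B.shape[0]))
    for j in range(A.shape[1]):
        K += np.exp(-(A[:, j][:, None] - B[:, j][None, :])**2 / (2 * sigma**2))
    return K

def kernel_ridge_fit(K, y, lam=1e-2):
    # alpha solving (K + n*lam*I) alpha = y
    n = len(y)
    return np.linalg.solve(K + n * lam * np.eye(n), y)

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(80, 5))
y = np.sin(np.pi * X[:, 0]) + X[:, 1]**2          # an additive target
for kern in (additive_gaussian_kernel, gaussian_kernel):
    K = kern(X, X, sigma=0.5)
    alpha = kernel_ridge_fit(K, y)
    print(kern.__name__, np.mean((K @ alpha - y)**2))  # training fit
```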


2010, Vol. 3(9), p. 175. Author(s): B. T. Backus
