LEARNING RATES OF REGULARIZED REGRESSION FOR FUNCTIONAL DATA

Author(s): Yong-Li Xu, Di-Rong Chen

The study of regularized learning algorithms is an important issue, and functional data analysis extends classical methods to infinite-dimensional inputs. We establish learning rates for the least squares regularized regression algorithm in a reproducing kernel Hilbert space for functional data. Using an iteration method, we obtain fast learning rates in this functional setting. Our result is a natural extension of the known rates for the least squares regularized regression algorithm with finite-dimensional input data.
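
The algorithm studied here is the standard least squares regularized regression (kernel ridge regression) scheme in an RKHS. The following is a minimal sketch, assuming a Gaussian kernel and inputs given as discretized curves on a common grid; both choices are illustrative and not taken from the paper.

```python
import numpy as np

def gaussian_kernel(X, Z, sigma=1.0):
    """Gaussian kernel matrix; rows of X and Z are (discretized) functional inputs."""
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def krr_fit(X, y, lam=1e-2, sigma=1.0):
    """Minimize (1/m) sum_i (f(x_i) - y_i)^2 + lam * ||f||_K^2 over the RKHS.
    By the representer theorem, f = sum_i alpha_i K(x_i, .)."""
    m = len(y)
    K = gaussian_kernel(X, X, sigma)
    return np.linalg.solve(K + lam * m * np.eye(m), y)

def krr_predict(X_train, alpha, X_test, sigma=1.0):
    """Evaluate the fitted function at new (discretized) functional inputs."""
    return gaussian_kernel(X_test, X_train, sigma) @ alpha
```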

2014, Vol 644-650, pp. 2286-2289
Author(s): Jin Luo

Ranking data points with respect to a given preference criterion is an example of a preference learning task. In this paper, we investigate the generalization performance of the regularized ranking algorithm associated with the least squares ranking loss in a reproducing kernel Hilbert space, using hold-out estimates for the proposed algorithm. Based on the hold-out method, we obtain a fast learning rate for this algorithm.
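
A minimal sketch of the hold-out idea in this pairwise setting, assuming hypothetical `fit`/`predict` callables for the regularized ranking estimator; the exact estimator, split proportion, and parameter grid are illustrative, not taken from the paper.

```python
import numpy as np

def pairwise_ls_risk(f_vals, y):
    """Least squares ranking risk over ordered pairs (i, j), i != j:
    average of (y_i - y_j - (f(x_i) - f(x_j)))^2."""
    diff_f = f_vals[:, None] - f_vals[None, :]
    diff_y = y[:, None] - y[None, :]
    m = len(y)
    return ((diff_y - diff_f) ** 2).sum() / (m * (m - 1))

def holdout_select(fit, predict, X, y, lambdas, frac=0.5, seed=0):
    """Choose the regularization parameter minimizing the hold-out ranking risk."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(y))
    cut = int(frac * len(y))
    tr, ho = idx[:cut], idx[cut:]
    risks = [pairwise_ls_risk(predict(fit(X[tr], y[tr], lam), X[ho]), y[ho])
             for lam in lambdas]
    return lambdas[int(np.argmin(risks))]
```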


2017, Vol 15 (06), pp. 815-836
Author(s): Yulong Zhao, Jun Fan, Lei Shi

The ranking problem aims at learning real-valued functions to order instances, and has attracted great interest in statistical learning theory. In this paper, we consider the regularized least squares ranking algorithm within the framework of reproducing kernel Hilbert spaces. In particular, we focus on the analysis of the generalization error for this ranking algorithm, and improve the existing learning rates by virtue of an error decomposition technique from regression and Hoeffding's decomposition for U-statistics.
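
For orientation, a standard form of the decomposition referred to here, written for a generic symmetric pair kernel $q_f$ and not copied from the paper: the empirical ranking risk is a second-order U-statistic, and Hoeffding's decomposition splits its deviation from the mean into a sum of i.i.d. terms plus a degenerate remainder.

```latex
% U_m(f) is the empirical ranking risk with pair kernel q_f(z, z'), z = (x, y).
\[
  U_m(f) = \frac{1}{m(m-1)} \sum_{i \neq j} q_f(z_i, z_j), \qquad
  U_m(f) - \mathbb{E}\,q_f
  = \frac{2}{m} \sum_{i=1}^{m} \bigl( g_f(z_i) - \mathbb{E}\,q_f \bigr)
    + \frac{1}{m(m-1)} \sum_{i \neq j} h_f(z_i, z_j),
\]
\[
  \text{where } g_f(z) = \mathbb{E}\bigl[ q_f(z, Z') \bigr], \qquad
  h_f(z, z') = q_f(z, z') - g_f(z) - g_f(z') + \mathbb{E}\,q_f .
\]
```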


2016, Vol 14 (03), pp. 449-477
Author(s): Andreas Christmann, Ding-Xuan Zhou

Additive models play an important role in semiparametric statistics. This paper gives learning rates for regularized kernel-based methods for additive models. Provided the assumption of an additive model is valid, these learning rates compare favorably, in particular in high dimensions, to recent results on optimal learning rates for purely nonparametric regularized kernel-based quantile regression using the Gaussian radial basis function kernel. Additionally, a concrete example is presented to show that a Gaussian function depending only on one variable lies in a reproducing kernel Hilbert space generated by an additive Gaussian kernel, but does not belong to the reproducing kernel Hilbert space generated by the multivariate Gaussian kernel of the same variance.
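
A minimal sketch of the two kernels being contrasted, assuming an equal bandwidth for every coordinate; the paper may weight or parameterize the additive components differently.

```python
import numpy as np

def additive_gaussian_kernel(X, Z, sigma=1.0):
    """Additive Gaussian kernel: k(x, z) = sum_j exp(-(x_j - z_j)^2 / (2 sigma^2)),
    one univariate Gaussian kernel per coordinate."""
    d2 = (X[:, None, :] - Z[None, :, :]) ** 2            # per-coordinate squared differences
    return np.exp(-d2 / (2 * sigma ** 2)).sum(axis=-1)   # sum over coordinates

def multivariate_gaussian_kernel(X, Z, sigma=1.0):
    """Standard multivariate Gaussian kernel of the same variance on the full input vector."""
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2 * sigma ** 2))
```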


2019, Vol 20 (6), pp. 562-591
Author(s): Sonia Barahona, Pablo Centella, Ximo Gual-Arnau, M. Victoria Ibáñez, Amelia Simó

The aim of this article is to model an ordinal response variable in terms of vector-valued functional data contained in a vector-valued reproducing kernel Hilbert space (RKHS). In particular, we focus on the vector-valued RKHS obtained when a geometrical object (body) is characterized by a current, and on the ordinal regression model. A common way to solve this problem in functional data analysis is to express the data in the orthonormal basis given by the decomposition of the covariance operator. However, our data differ from the usual functional data setting in two important respects: they are vector-valued functions, and they are functions in an RKHS with a previously defined norm. We propose to use three different bases: the orthonormal basis given by the kernel that defines the RKHS, a basis obtained from the decomposition of the integral operator defined by the covariance function, and a third basis that combines the previous two. The three approaches are compared and applied to an interesting problem: building a model to predict the fit of children's garment sizes, based on a 3D database of the Spanish child population. Our proposal is compared with alternative methods that use other classifiers (Support Vector Machine and k-NN), and with the proposed classification method applied to different characterizations of the objects (landmarks and multivariate anthropometric measurements instead of currents); in all these cases the alternatives give worse results.
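
As a rough illustration of the second basis (decomposition of the integral operator defined by the covariance function), the sketch below computes a functional-PCA style basis for scalar-valued functions sampled on a common grid; the vector-valued, current-based setting of the paper is more involved and is not reproduced here.

```python
import numpy as np

def covariance_basis(F, n_components=5):
    """F: array (n_samples, n_grid) of functions sampled on a common grid.
    Returns the leading eigenvalues and (discretized) eigenfunctions of the
    empirical covariance operator."""
    Fc = F - F.mean(axis=0)
    C = Fc.T @ Fc / len(F)                      # discretized covariance operator
    vals, vecs = np.linalg.eigh(C)              # eigenvalues in ascending order
    order = np.argsort(vals)[::-1][:n_components]
    return vals[order], vecs[:, order]

def project(F, basis):
    """Coefficients of each (centered) function in the chosen basis,
    to be used as inputs of the ordinal regression model."""
    return (F - F.mean(axis=0)) @ basis
```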


2013, Vol 2013, pp. 1-7
Author(s): Yong-Li Xu, Di-Rong Chen, Han-Xiong Li

The study of multitask learning algorithms is an important issue. This paper proposes a least squares regularized regression algorithm for multitask learning, with the hypothesis space being the union of a sequence of Hilbert spaces. The algorithm consists of two steps: selecting the optimal Hilbert space and searching for the optimal function. We assume that the distributions of different tasks are related by a set of transformations under which any Hilbert space in the hypothesis space is norm invariant. We prove that under this assumption the optimal prediction function of every task lies in the same Hilbert space. Based on this result, a pivotal error decomposition is established, which uses samples of related tasks to bound the excess error of the target task. We obtain an upper bound for the sample error of related tasks and, based on this bound, potentially faster learning rates are obtained compared with single-task learning algorithms.
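
A minimal sketch of the two-step structure, assuming each candidate Hilbert space is indexed by an RBF kernel bandwidth and the "optimal" space is picked by validation error; this is an illustrative reading, not the paper's exact selection rule.

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge

def two_step_fit(X_train, y_train, X_val, y_val, gammas, lam=1e-2):
    """Step 1: select the candidate RKHS (here an RBF kernel indexed by gamma)
    with the smallest validation error.
    Step 2: return the regularized least squares estimator in that space."""
    best_err, best_model = np.inf, None
    for gamma in gammas:
        model = KernelRidge(alpha=lam, kernel="rbf", gamma=gamma)
        model.fit(X_train, y_train)
        err = np.mean((model.predict(X_val) - y_val) ** 2)
        if err < best_err:
            best_err, best_model = err, model
    return best_model
```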


2014, Vol 8, pp. 7289-7300
Author(s): Adji Achmad Rinaldo Fernandes, I Nyoman Budiantara, Bambang Widjanarko Otok, Suhartono

2012, Vol 42 (12), pp. 1251-1262
Author(s): HongZhi TONG, FengHong YANG, DiRong CHEN

Author(s): Mengjuan Pang, Hongwei Sun

We study distributed learning with a partial coefficients regularization scheme in a reproducing kernel Hilbert space (RKHS). The algorithm randomly partitions the sample set [Formula: see text] into [Formula: see text] disjoint sample subsets of equal size. To reduce the complexity of the algorithm, we apply a partial coefficients regularization scheme to each sample subset to produce an output function, and average the individual output functions to obtain the final global estimator. The error bound in the [Formula: see text]-metric is deduced and the asymptotic convergence of this distributed learning with partial coefficients regularization is proved by the integral operator technique. Satisfactory learning rates are then derived under a standard regularity condition on the regression function, which reveals an interesting phenomenon: when [Formula: see text] and [Formula: see text] is small enough, this distributed learning achieves the same convergence rate as the algorithm processing the whole data set on a single machine.
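
A minimal sketch of the partition-and-average structure, assuming a Gaussian kernel and a plain (full-coefficient) regularized least squares fit on each subset; the partial coefficients regularization applied to each local problem in the paper is not reproduced here.

```python
import numpy as np

def gaussian_kernel(X, Z, sigma=1.0):
    """Gaussian kernel matrix between rows of X and Z."""
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def local_fit(X, y, lam, sigma=1.0):
    """Regularized least squares on one subset (a stand-in for the paper's
    partial coefficients scheme)."""
    K = gaussian_kernel(X, X, sigma)
    return np.linalg.solve(K + lam * len(y) * np.eye(len(y)), y)

def distributed_predict(X, y, X_test, n_machines, lam=1e-2, sigma=1.0):
    """Randomly split the sample, fit a local estimator on each subset,
    and average the local predictions to form the global estimator."""
    rng = np.random.default_rng(0)
    parts = np.array_split(rng.permutation(len(y)), n_machines)
    preds = [gaussian_kernel(X_test, X[p], sigma) @ local_fit(X[p], y[p], lam, sigma)
             for p in parts]
    return np.mean(preds, axis=0)
```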


2005, Vol 6 (2), pp. 171-192
Author(s): Qiang Wu, Yiming Ying, Ding-Xuan Zhou
