Learning rates of multi-kernel regularized regression

2010 ◽ Vol 140 (9) ◽ pp. 2562-2568 ◽ Author(s): Hong Chen ◽ Luoqing Li

2013 ◽ Vol 2013 ◽ pp. 1-7 ◽ Author(s): Yong-Li Xu ◽ Di-Rong Chen ◽ Han-Xiong Li

The study of multitask learning algorithms is an important issue. This paper proposes a least-squares regularized regression algorithm for multi-task learning whose hypothesis space is the union of a sequence of Hilbert spaces. The algorithm consists of two steps: selecting the optimal Hilbert space and then searching for the optimal function within it. We assume that the distributions of the different tasks are related by a set of transformations under which every Hilbert space in the hypothesis space is norm invariant. We prove that, under this assumption, the optimal prediction function of every task lies in the same Hilbert space. Based on this result, a pivotal error decomposition is established, which allows samples from related tasks to be used to bound the excess error of the target task. We obtain an upper bound for the sample error of the related tasks and, based on this bound, derive potentially faster learning rates than those of single-task learning algorithms.
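A minimal sketch of the two-step idea described above, in Python with NumPy. The candidate kernels standing in for the sequence of hypothesis Hilbert spaces, the pooled-risk selection criterion, the regularization parameter, and all function names are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

def krr_fit(K, y, lam):
    """Kernel ridge regression: solve (K + lam * n * I) alpha = y."""
    n = K.shape[0]
    return np.linalg.solve(K + lam * n * np.eye(n), y)

def gram(kernel, X, Z=None):
    """Gram matrix K[i, j] = kernel(X[i], Z[j])."""
    Z = X if Z is None else Z
    return np.array([[kernel(x, z) for z in Z] for x in X])

def two_step_multitask_krr(kernels, task_data, lam=1e-2):
    """Hypothetical two-step scheme: (1) pick the candidate RKHS (kernel) whose
    pooled regularized least-squares fit has the smallest empirical risk over all
    related tasks; (2) refit each task separately in that chosen space."""
    X_all = np.vstack([X for X, _ in task_data])
    y_all = np.concatenate([y for _, y in task_data])

    # Step 1: select the Hilbert space (kernel) on the pooled samples.
    best_kernel, best_risk = None, np.inf
    for kernel in kernels:
        K = gram(kernel, X_all)
        alpha = krr_fit(K, y_all, lam)
        risk = np.mean((K @ alpha - y_all) ** 2)
        if risk < best_risk:
            best_kernel, best_risk = kernel, risk

    # Step 2: search for the optimal function of each task in the chosen space.
    predictors = []
    for X, y in task_data:
        K = gram(best_kernel, X)
        predictors.append((X.copy(), krr_fit(K, y, lam)))
    return best_kernel, predictors

def predict(kernel, X_train, alpha, X_new):
    """Evaluate the fitted function f(x) = sum_i alpha_i * kernel(x, x_i)."""
    return gram(kernel, X_new, X_train) @ alpha
```

In this sketch, sharing a single selected kernel across tasks plays the role of the paper's conclusion that the optimal prediction functions of related tasks lie in the same Hilbert space.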


2012 ◽ Vol 42 (12) ◽ pp. 1251-1262 ◽ Author(s): HongZhi TONG ◽ FengHong YANG ◽ DiRong CHEN

2012 ◽ Vol 35 (2) ◽ pp. 174-181 ◽ Author(s): Feilong Cao ◽ Joonwhoan Lee ◽ Yongquan Zhang

Author(s): YONG-LI XU ◽ DI-RONG CHEN

The study of regularized learning algorithms is an important issue, and functional data analysis extends classical methods to inputs that are functions. We establish learning rates for the least-squares regularized regression algorithm in a reproducing kernel Hilbert space for functional data. Using the iteration method, we obtain fast learning rates for functional data. Our result is a natural extension of the corresponding result for the least-squares regularized regression algorithm when the input data are finite-dimensional.
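The following sketch illustrates least-squares regularized (kernel ridge) regression when each input is a sampled curve rather than a finite-dimensional vector. The Gaussian kernel built from an approximate L2 distance, the grid discretization, the parameter values, and the toy data are assumptions made for illustration; the abstract does not prescribe a particular kernel.

```python
import numpy as np

def l2_sq(f, g, grid):
    """Riemann-sum approximation of the squared L2 distance between two curves on a grid."""
    return float(np.sum((f - g) ** 2) * (grid[1] - grid[0]))

def functional_gram(F, G, grid, sigma):
    """Gram matrix of a Gaussian kernel on function space: K(f, g) = exp(-||f - g||_{L2}^2 / (2 sigma^2))."""
    return np.array([[np.exp(-l2_sq(f, g, grid) / (2 * sigma ** 2)) for g in G] for f in F])

def functional_krr(F_train, y_train, grid, lam=1e-2, sigma=1.0):
    """Least-squares regularized regression in the RKHS induced by the functional kernel."""
    K = functional_gram(F_train, F_train, grid, sigma)
    n = K.shape[0]
    alpha = np.linalg.solve(K + lam * n * np.eye(n), y_train)
    return lambda F_new: functional_gram(F_new, F_train, grid, sigma) @ alpha

# Toy usage: random curves on [0, 1]; the response is a noisy functional of each curve.
grid = np.linspace(0.0, 1.0, 50)
rng = np.random.default_rng(0)
F = rng.normal(size=(80, 1)) * np.sin(2 * np.pi * grid)          # 80 sampled curves, shape (80, 50)
y = np.array([l2_sq(f, 0 * f, grid) for f in F]) + 0.01 * rng.normal(size=80)
predict = functional_krr(F, y, grid)
```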


2005 ◽ Vol 6 (2) ◽ pp. 171-192 ◽ Author(s): Qiang Wu ◽ Yiming Ying ◽ Ding-Xuan Zhou

Author(s): Baoqi Su ◽ Hong-Wei Sun

The loss function is a key element of a learning algorithm. Building on the regression learning algorithm with an offset, a coefficient-based regularization network with variance loss is proposed. The variance loss differs from the usual least-squares loss, hinge loss, and pinball loss: it induces a kind of samples-cross empirical risk. Moreover, our coefficient-based regularization relies only on a general kernel, i.e., the kernel is required merely to be continuous, bounded, and to satisfy a mild differentiability condition. These two characteristics bring essential difficulties to the theoretical analysis of this learning scheme. Using the hypothesis space strategy and the error decomposition technique of [L. Shi, Learning theory estimates for coefficient-based regularized regression, Appl. Comput. Harmon. Anal. 34 (2013) 252–265], a capacity-dependent error analysis is carried out, and a satisfactory error bound and learning rates are then derived under a very mild regularity condition on the regression function. We also find an effective way to handle the learning problem with a samples-cross empirical risk.
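Below is a hedged sketch of a coefficient-based regularization scheme with an offset whose empirical risk is a pairwise, samples-cross term equal to the variance of the residuals, which is one plausible reading of the variance loss described above. The exact loss, the l2 coefficient penalty, and the closed-form solution used here are assumptions for illustration only, not the paper's scheme.

```python
import numpy as np

def coefficient_based_variance_regression(K, y, lam):
    """Hypothetical sketch: minimise the samples-cross (pairwise) risk
        (1 / (2 n^2)) * sum_{i,j} ((f(x_i) - y_i) - (f(x_j) - y_j))^2
    over f(x) = sum_k alpha_k K(x, x_k) + b with an l2 penalty lam * ||alpha||^2.
    This pairwise risk equals the empirical variance of the residuals (one plausible
    reading of the 'variance loss'); the offset b then absorbs the residual mean."""
    n = K.shape[0]
    C = np.eye(n) - np.ones((n, n)) / n            # centring matrix: pairwise risk = ||C r||^2 / n
    A = K.T @ C @ K / n + lam * np.eye(n)          # normal equations in the coefficient vector alpha
    alpha = np.linalg.solve(A, K.T @ C @ y / n)
    b = float(np.mean(y - K @ alpha))              # offset sets the mean residual to zero
    return alpha, b, lambda K_new_rows: K_new_rows @ alpha + b
```

Because the offset b drops out of the residual variance, the coefficient vector can be solved for first and the offset fixed afterwards, which is why the samples-cross risk pairs naturally with a regression algorithm that carries an offset.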

