Least Square Regularized Regression for Multitask Learning

2013, Vol. 2013, pp. 1-7
Author(s): Yong-Li Xu, Di-Rong Chen, Han-Xiong Li

The study of multitask learning algorithms is an important issue. This paper proposes a least-square regularized regression algorithm for multitask learning whose hypothesis space is the union of a sequence of Hilbert spaces. The algorithm consists of two steps: selecting the optimal Hilbert space and searching for the optimal function within it. We assume that the distributions of the different tasks are related by a set of transformations under which every Hilbert space in the hypothesis space is norm invariant. Under this assumption, we prove that the optimal prediction functions of all tasks lie in the same Hilbert space. Based on this result, a pivotal error decomposition is established, which uses samples from related tasks to bound the excess error of the target task. We obtain an upper bound for the sample error of the related tasks and, based on this bound, potentially faster learning rates than those of single-task learning algorithms.
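A minimal sketch of the two-step scheme described in the abstract, under illustrative assumptions: the candidate Hilbert spaces are Gaussian RKHSs indexed by bandwidth, each task is fit by standard kernel ridge (least-square regularized) regression, and the shared space is selected by minimizing the pooled regularized empirical risk over the related tasks. All names and parameter values here are hypothetical, not taken from the paper.

```python
import numpy as np

def gaussian_kernel(X, Z, width):
    # Pairwise Gaussian kernel matrix between rows of X and rows of Z.
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * width ** 2))

def kernel_ridge_fit(X, y, width, lam):
    # Least-square regularized regression in the RKHS of the chosen kernel.
    K = gaussian_kernel(X, X, width)
    return np.linalg.solve(K + lam * len(y) * np.eye(len(y)), y)

def regularized_risk(X, y, alpha, width, lam):
    # Regularized empirical risk of the fitted function on one task.
    K = gaussian_kernel(X, X, width)
    resid = y - K @ alpha
    return np.mean(resid ** 2) + lam * alpha @ K @ alpha

def multitask_fit(tasks, candidate_widths, lam=1e-2):
    # Step 1: pick the Hilbert space (kernel width) minimizing the pooled
    # regularized empirical risk over all related tasks.
    # Step 2: refit the optimal function for each task in that space.
    best_width, best_score = None, np.inf
    for w in candidate_widths:
        score = sum(
            regularized_risk(X, y, kernel_ridge_fit(X, y, w, lam), w, lam)
            for X, y in tasks
        )
        if score < best_score:
            best_width, best_score = w, score
    models = [(X, kernel_ridge_fit(X, y, best_width, lam)) for X, y in tasks]
    return best_width, models

# Toy usage: two related regression tasks sharing the same smoothness.
rng = np.random.default_rng(0)
tasks = []
for shift in (0.0, 0.3):
    X = rng.uniform(-1, 1, size=(40, 1))
    y = np.sin(3 * X[:, 0] + shift) + 0.1 * rng.standard_normal(40)
    tasks.append((X, y))
width, models = multitask_fit(tasks, candidate_widths=[0.1, 0.3, 1.0])
```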

Author(s): Yong-Li Xu, Di-Rong Chen

The study of regularized learning algorithms is an important issue, and functional data analysis extends the classical methods. We establish learning rates for the least-square regularized regression algorithm in a reproducing kernel Hilbert space for functional data. Using the iteration method, we obtain a fast learning rate for functional data. Our result is a natural extension of the least-square regularized regression algorithm from the setting where the input data are finite dimensional.
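As a rough illustration of least-square regularized regression on functional covariates, the sketch below assumes the curves are observed on a common grid and uses a Gaussian kernel built on a numerically approximated L2 distance between curves; the kernel choice, grid, and parameters are assumptions for illustration, not the construction analyzed in the paper.

```python
import numpy as np

def l2_dist2(F, G, grid):
    # Squared L2 distance between every pair of curves (rows of F, rows of G),
    # approximated by a Riemann sum on the common observation grid.
    dx = grid[1] - grid[0]
    diff = F[:, None, :] - G[None, :, :]
    return (diff ** 2).sum(-1) * dx

def functional_krr_fit(F, y, grid, width=1.0, lam=1e-2):
    # Kernel ridge (least-square regularized) regression with functional inputs.
    K = np.exp(-l2_dist2(F, F, grid) / (2 * width ** 2))
    return np.linalg.solve(K + lam * len(y) * np.eye(len(y)), y)

def functional_krr_predict(Fnew, F, alpha, grid, width=1.0):
    Knew = np.exp(-l2_dist2(Fnew, F, grid) / (2 * width ** 2))
    return Knew @ alpha

# Toy usage: predict a scalar (the frequency) from noisy sine curves.
rng = np.random.default_rng(1)
grid = np.linspace(0, 1, 50)
freqs = rng.uniform(1, 3, size=30)
F = np.sin(2 * np.pi * freqs[:, None] * grid) + 0.05 * rng.standard_normal((30, 50))
alpha = functional_krr_fit(F, freqs, grid)
print(functional_krr_predict(F[:3], F, alpha, grid))
```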


Author(s): Hongwei Sun, Ping Liu

A new multi-kernel regression learning algorithm is studied in this paper. In our setting, the hypothesis space is generated by two Mercer kernels and therefore has stronger approximation ability than in the single-kernel case. We provide the mathematical foundation for this regularized learning algorithm and obtain satisfactory capacity-dependent error bounds and learning rates by the covering number method.
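One simple way to realize a hypothesis space generated by two Mercer kernels is the sum space, whose reproducing kernel is K1 + K2; the sketch below uses that construction with a Gaussian and a polynomial kernel. The paper's precise multi-kernel scheme may differ, so treat this only as an illustration of why two kernels enlarge the approximation ability.

```python
import numpy as np

def k_gauss(X, Z, width=0.5):
    # Gaussian Mercer kernel.
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * width ** 2))

def k_poly(X, Z, degree=2):
    # Polynomial Mercer kernel.
    return (1.0 + X @ Z.T) ** degree

def two_kernel_fit(X, y, lam=1e-2):
    # Regularized least squares in the sum space; its reproducing kernel
    # is simply the sum of the two Mercer kernels.
    K = k_gauss(X, X) + k_poly(X, X)
    return np.linalg.solve(K + lam * len(y) * np.eye(len(y)), y)

def two_kernel_predict(Xnew, X, alpha):
    return (k_gauss(Xnew, X) + k_poly(Xnew, X)) @ alpha
```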


2005, Vol. 6 (2), pp. 171-192
Author(s): Qiang Wu, Yiming Ying, Ding-Xuan Zhou

Author(s): Baohuai Sheng, Daohong Xiang

The capacity-dependent convergence rate of a kernel regularized semi-supervised Laplacian learning algorithm is bounded using a convex analysis approach. The algorithm is a graph-based regression whose structure shares features of both kernel regularized regression and kernel regularized Laplacian ranking. It is shown that the kernel generating the reproducing kernel hypothesis space contributes to the clustering ability of the algorithm. If the scale parameters in the Gaussian weights are chosen properly, then the learning rate can be controlled by the unlabeled samples, and the algorithm converges as the number of unlabeled samples increases. The results of this paper show that, with a suitable structure, the semi-supervised learning approach can not only improve the learning rate but also complete the learning process by increasing the number of unlabeled samples.
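The sketch below is a generic graph-based, kernel regularized, Laplacian-penalized least-squares scheme with Gaussian weights, in the spirit of the algorithm described above; the ranking-flavored part of the regularizer is not reproduced, and the kernel width, graph width, and regularization parameters are illustrative assumptions.

```python
import numpy as np

def gaussian_matrix(X, Z, width):
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * width ** 2))

def laprls_fit(X_lab, y_lab, X_unlab, k_width=0.5, g_width=0.5,
               gamma_A=1e-2, gamma_I=1e-2):
    # Laplacian-regularized least squares over labeled + unlabeled points.
    X = np.vstack([X_lab, X_unlab])
    l, n = len(X_lab), len(X)
    K = gaussian_matrix(X, X, k_width)            # RKHS kernel
    W = gaussian_matrix(X, X, g_width)            # Gaussian graph weights
    L = np.diag(W.sum(1)) - W                     # graph Laplacian
    J = np.zeros((n, n)); J[:l, :l] = np.eye(l)   # selects labeled points
    Y = np.concatenate([y_lab, np.zeros(n - l)])
    A = J @ K + gamma_A * l * np.eye(n) + gamma_I * l / n ** 2 * L @ K
    alpha = np.linalg.solve(A, Y)
    return X, alpha, k_width

def laprls_predict(Xnew, X, alpha, k_width):
    return gaussian_matrix(Xnew, X, k_width) @ alpha

# Toy usage: two labeled points plus unlabeled points from the same clusters.
rng = np.random.default_rng(3)
X_lab = np.array([[0.0, 0.0], [2.0, 2.0]])
y_lab = np.array([-1.0, 1.0])
X_unlab = np.vstack([rng.normal(0, 0.2, (20, 2)), rng.normal(2, 0.2, (20, 2))])
X, alpha, kw = laprls_fit(X_lab, y_lab, X_unlab)
print(laprls_predict(np.array([[0.1, -0.1], [1.9, 2.1]]), X, alpha, kw))
```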


2006, Vol. 18 (10), pp. 2509-2528
Author(s): Yoshua Bengio, Martin Monperrus, Hugo Larochelle

We claim and present arguments to the effect that a large class of manifold learning algorithms that are essentially local and can be framed as kernel learning algorithms will suffer from the curse of dimensionality, at the dimension of the true underlying manifold. This observation invites an exploration of nonlocal manifold learning algorithms that attempt to discover shared structure in the tangent planes at different positions. A training criterion for such an algorithm is proposed, and experiments estimating a tangent plane prediction function are presented, showing its advantages with respect to local manifold learning algorithms: it is able to generalize very far from training data (on learning handwritten character image rotations), where local nonparametric methods fail.
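To make the nonlocal idea concrete, the sketch below trains a single predictor F(x; theta) that outputs a tangent basis at x, using the criterion that vectors to nearby training points should be well reconstructed by their projection onto the predicted plane. The linear parameterization and the generic optimizer are assumptions for illustration, not the architecture or training procedure used in the paper.

```python
import numpy as np
from scipy.optimize import minimize

D, d, k = 3, 1, 4          # ambient dim, manifold dim, neighbors per point

def predict_basis(theta, x):
    # Linear-in-x predictor of d tangent vectors in R^D (shape d x D).
    W = theta[: d * D * D].reshape(d, D, D)
    b = theta[d * D * D:].reshape(d, D)
    return np.tensordot(W, x, axes=([2], [0])) + b

def criterion(theta, X, neighbors):
    # Relative residual of neighbor differences after projection onto the
    # predicted tangent plane, summed over points and their neighbors.
    loss = 0.0
    for i, nbrs in enumerate(neighbors):
        B = predict_basis(theta, X[i])            # d x D basis
        Q, _ = np.linalg.qr(B.T)                  # orthonormalized basis
        for j in nbrs:
            v = X[j] - X[i]
            r = v - Q @ (Q.T @ v)                 # component off the plane
            loss += (r @ r) / (v @ v + 1e-12)
    return loss

# Toy data: a noisy circle (a 1-d manifold) embedded in R^3.
rng = np.random.default_rng(2)
t = rng.uniform(0, 2 * np.pi, 60)
X = np.c_[np.cos(t), np.sin(t), 0.02 * rng.standard_normal(60)]
dist = ((X[:, None] - X[None, :]) ** 2).sum(-1)
neighbors = [np.argsort(row)[1 : k + 1] for row in dist]

theta0 = 0.01 * rng.standard_normal(d * D * D + d * D)
res = minimize(criterion, theta0, args=(X, neighbors), method="L-BFGS-B")
```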

