REGULARIZED LEAST SQUARE ALGORITHM WITH TWO KERNELS

Author(s):  
Hongwei Sun
Ping Liu

A new multi-kernel regression learning algorithm is studied in this paper. In our setting, the hypothesis space is generated by two Mercer kernels, so it has stronger approximation ability than in the single-kernel case. We provide the mathematical foundation for this regularized learning algorithm and obtain satisfactory capacity-dependent error bounds and learning rates by the covering number method.
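
As a minimal sketch, with notation assumed rather than taken from the paper (samples $z = \{(x_i, y_i)\}_{i=1}^m$ and regularization parameters $\lambda_1, \lambda_2$), a two-kernel regularized least squares scheme of this type can be written as

$$f_z = \arg\min_{f_1 \in \mathcal{H}_{K_1},\, f_2 \in \mathcal{H}_{K_2}} \frac{1}{m} \sum_{i=1}^{m} \big( f_1(x_i) + f_2(x_i) - y_i \big)^2 + \lambda_1 \| f_1 \|_{K_1}^2 + \lambda_2 \| f_2 \|_{K_2}^2,$$

so that the hypothesis space is the sum $\mathcal{H}_{K_1} + \mathcal{H}_{K_2}$, which contains each single-kernel space.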

Author(s):  
Baoqi Su
Hong-Wei Sun

The loss function is the key element of a learning algorithm. Building on the regression learning algorithm with an offset, we propose a coefficient-based regularization network with variance loss. The variance loss differs from the usual least squares loss, hinge loss, and pinball loss in that it induces an empirical risk defined across pairs of samples (a sample-cross empirical risk). Moreover, our coefficient-based regularization relies only on a general kernel; that is, the kernel is required only to be continuous, bounded, and to satisfy a mild differentiability condition. These two characteristics bring essential difficulties to the theoretical analysis of this learning scheme. By the hypothesis space strategy and the error decomposition technique in [L. Shi, Learning theory estimates for coefficient-based regularized regression, Appl. Comput. Harmon. Anal. 34 (2013) 252–265], we complete a capacity-dependent error analysis and derive satisfactory error bounds and learning rates under a very mild regularity condition on the regression function. We also find an effective way to deal with the learning problem involving the sample-cross empirical risk.
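
To make the terminology concrete, one plausible form of such a sample-cross empirical risk with coefficient-based regularization is the following sketch (the paper's exact variance loss may differ); here $f_\alpha(x) = \sum_{k=1}^m \alpha_k K(x, x_k)$:

$$\min_{\alpha \in \mathbb{R}^m} \frac{1}{m(m-1)} \sum_{i \neq j} \Big( (y_i - y_j) - \big( f_\alpha(x_i) - f_\alpha(x_j) \big) \Big)^2 + \lambda \sum_{k=1}^{m} \alpha_k^2.$$

The loss couples pairs of samples, which is why the empirical risk is no longer a simple average over single samples and requires a dedicated analysis.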


Author(s):  
Yuanxin Ma
Hongwei Sun

In this paper, the regression learning algorithm with vector-valued RKHS is studied. We motivate the need for extending the learning theory of scalar-valued functions and analyze the learning performance. In this setting, the output data come from a Hilbert space $Y$, and the associated RKHS consists of functions whose values lie in $Y$. By developing the mathematical aspects of the vector-valued integral operator $L_K$, capacity-independent error bounds and learning rates are derived by means of the integral operator technique.
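
For reference, with the standard notation $Y$ and $L_K$ assumed for the formulas stripped from the abstract, the vector-valued integral operator is commonly defined by

$$L_K f(x) = \int_X K(x, t) f(t)\, d\rho_X(t), \qquad f \in L^2_{\rho_X}(X; Y),$$

where $K(x, t)$ is an operator-valued Mercer kernel acting on $Y$ and $\rho_X$ is the marginal distribution on the input space $X$.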


2013, Vol. 2013, pp. 1-7
Author(s):
Yong-Li Xu
Di-Rong Chen
Han-Xiong Li

The study of multi-task learning algorithms is an important issue. This paper proposes a least squares regularized regression algorithm for multi-task learning whose hypothesis space is the union of a sequence of Hilbert spaces. The algorithm consists of two steps: selecting the optimal Hilbert space and searching for the optimal function. We assume that the distributions of the different tasks are related by a set of transformations under which every Hilbert space in the hypothesis space is norm invariant. We prove that, under this assumption, the optimal prediction function of every task lies in the same Hilbert space. Based on this result, a pivotal error decomposition is established, which uses samples of related tasks to bound the excess error of the target task. We obtain an upper bound for the sample error of related tasks and, based on this bound, derive potentially faster learning rates than those of single-task learning algorithms.
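
A minimal Python sketch of the two-step scheme described above, under the assumption that each candidate Hilbert space is an RKHS given by a kernel and that the space is selected by the smallest regularized empirical risk (the function names and selection criterion are ours, not the paper's):

    import numpy as np

    def kernel_ridge(K, y, lam):
        # Regularized least squares in one RKHS: by the representer theorem
        # f = sum_j alpha_j k(., x_j), with alpha solving a linear system.
        m = len(y)
        return np.linalg.solve(K + lam * m * np.eye(m), y)

    def two_step_multitask(kernels, X, y, lam):
        # Step 1: select the Hilbert space (kernel) with the smallest
        # regularized empirical risk. Step 2: return the optimal function
        # (its coefficient vector) in that space.
        best_risk, best_kernel, best_alpha = np.inf, None, None
        for kernel in kernels:
            K = kernel(X, X)              # Gram matrix of this candidate space
            alpha = kernel_ridge(K, y, lam)
            residual = K @ alpha - y
            risk = np.mean(residual ** 2) + lam * alpha @ K @ alpha
            if risk < best_risk:
                best_risk, best_kernel, best_alpha = risk, kernel, alpha
        return best_kernel, best_alpha

With, say, Gaussian kernels of different widths as the candidate sequence, step 1 reduces to bandwidth selection by regularized empirical risk.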


Author(s):  
Qin Guo
Peixin Ye

We consider the coefficient-based least squares regularized regression learning algorithm for strongly and uniformly mixing samples. We obtain capacity-independent error bounds for the algorithm by means of integral operator techniques. A standard assumption in the theoretical study of learning algorithms for regression is the uniform boundedness of the output sample values. We abandon this boundedness assumption and carry out the error analysis with output sample values satisfying a generalized moment hypothesis.
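
In coefficient-based (as opposed to RKHS-norm) regularization, the penalty is placed directly on the coefficient vector, so the Gram matrix need not be positive semi-definite. A generic sketch under this assumption (not the paper's exact estimator):

    import numpy as np

    def coefficient_regularized_ls(K, y, lam):
        # Minimize (1/m) * ||K @ alpha - y||^2 + lam * ||alpha||^2 over alpha.
        # Setting the gradient to zero yields the normal equations below,
        # which are well posed even for an indefinite K.
        m = len(y)
        return np.linalg.solve(K.T @ K / m + lam * np.eye(m), K.T @ y / m)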


Author(s):  
Baohuai Sheng
Daohong Xiang

The capacity-dependent convergence rate of a kind of kernel regularized semi-supervised Laplacian learning algorithm is bounded with the convex analysis approach. The algorithm is a graph-based regression whose structure shares features of both kernel regularized regression and kernel regularized Laplacian ranking. It is shown that the kernel reproducing the hypothesis space contributes to the clustering ability of the algorithm. If the scale parameters in the Gaussian weights are chosen properly, the learning rate can be controlled by the unlabeled samples, and the algorithm converges as the number of unlabeled samples increases. The results of this paper show that, with a suitably chosen structure, the semi-supervised learning approach can not only improve the learning rate but also complete the learning process by increasing the number of unlabeled samples.
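
A standard form that such graph-based kernel regularized schemes take, with $l$ labeled and $u$ unlabeled samples and Gaussian weights of scale $\sigma$ (the paper's exact functional may differ), is

$$f_z = \arg\min_{f \in \mathcal{H}_K} \frac{1}{l} \sum_{i=1}^{l} \big( f(x_i) - y_i \big)^2 + \lambda \| f \|_K^2 + \frac{\mu}{(l+u)^2} \sum_{i,j=1}^{l+u} w_{ij} \big( f(x_i) - f(x_j) \big)^2, \qquad w_{ij} = \exp\!\Big( -\frac{\| x_i - x_j \|^2}{2\sigma^2} \Big).$$

The third term is the graph Laplacian penalty built from all samples, which is how the unlabeled data enter the learning rate.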


Author(s):  
Cheng Wang
Jia Cai

In this paper, we investigate the coefficient-based regularized least squares regression problem in a data-dependent hypothesis space. The learning algorithm is implemented with samples drawn by unbounded sampling processes, and the error analysis is performed by a stepping-stone technique. A new error decomposition technique is proposed for the error analysis. The regularization parameters in our setting provide much more flexibility and adaptivity. Sharp learning rates are derived by means of $\ell^2$-empirical covering numbers under a moment hypothesis condition.
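
A moment hypothesis of the following type is standard in this line of work on unbounded sampling (the paper's exact condition may differ): there exist constants $M, c > 0$ such that

$$\int_Y |y|^\ell \, d\rho(y \mid x) \le c\, \ell!\, M^\ell \qquad \text{for all } \ell \in \mathbb{N} \text{ and almost every } x \in X,$$

which allows Gaussian-type output noise and replaces the usual assumption that $|y| \le M$ almost surely.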


Author(s):  
Meijian Zhang
Hongwei Sun

In this paper, we study the performance of kernel-based regression learning with non-i.i.d. sampling, where the samples are drawn from different probability distributions sharing the same conditional distribution. A more general marginal distribution assumption is proposed. Under this assumption, the consistency of the regularization kernel network (RKN) and the coefficient-based regularization kernel network (CRKN) is proved, and satisfactory capacity-independent error bounds and learning rates are derived by integral operator techniques.
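
With the usual notation assumed, the two schemes compared here take the standard forms

$$\text{RKN:} \quad f_z = \arg\min_{f \in \mathcal{H}_K} \frac{1}{m} \sum_{i=1}^{m} \big( f(x_i) - y_i \big)^2 + \lambda \| f \|_K^2,$$

$$\text{CRKN:} \quad f_z = \sum_{k=1}^{m} \alpha_k K(\cdot, x_k), \quad \alpha = \arg\min_{\alpha \in \mathbb{R}^m} \frac{1}{m} \sum_{i=1}^{m} \big( f_\alpha(x_i) - y_i \big)^2 + \lambda \sum_{k=1}^{m} \alpha_k^2,$$

the difference being that RKN penalizes the RKHS norm while CRKN penalizes the coefficients directly.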


2010, Vol. 22 (12), pp. 3221-3235
Author(s):
Hongzhi Tong
Di-Rong Chen
Fenghong Yang

The selection of the penalty functional is critical for the performance of a regularized learning algorithm and thus deserves special attention. In this article, we present a least squares regression algorithm based on $\ell^p$-coefficient regularization. Compared with classical regularized least squares regression, the new algorithm differs in its regularization term. Our primary focus is the error analysis of the algorithm, and an explicit learning rate is derived under some ordinary assumptions.
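
Concretely, with notation assumed, the $\ell^p$-coefficient regularized scheme solves

$$\min_{\alpha \in \mathbb{R}^m} \frac{1}{m} \sum_{i=1}^{m} \Big( \sum_{j=1}^{m} \alpha_j K(x_i, x_j) - y_i \Big)^2 + \lambda \sum_{j=1}^{m} |\alpha_j|^p, \qquad 1 \le p \le 2,$$

so $p = 2$ recovers the coefficient form of classical regularized least squares, while smaller $p$ promotes sparser coefficient vectors.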


2011, Vol. 88 (7), pp. 1471-1483
Author(s):
Yongquan Zhang
Feilong Cao
Zongben Xu
