Coefficient-based regularized regression with dependent and unbounded sampling

Author(s):  
Qin Guo ◽  
Peixin Ye

We consider the coefficient-based least squares regularized regression learning algorithm for strongly mixing and uniformly mixing samples. We obtain capacity-independent error bounds for the algorithm by means of integral operator techniques. A standard assumption in the theoretical study of regression learning algorithms is the uniform boundedness of the output sample values. We drop this boundedness assumption and carry out the error analysis for output sample values satisfying a generalized moment hypothesis.
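For orientation, the coefficient-based regularized least squares scheme discussed here typically takes the following form; this is a sketch of the standard formulation, and the paper's exact normalization of the penalty may differ:

\[
f_{\mathbf z} = \sum_{i=1}^{m} \alpha_i^{\mathbf z} K(x_i, \cdot), \qquad
\alpha^{\mathbf z} = \arg\min_{\alpha \in \mathbb{R}^m}
\frac{1}{m} \sum_{j=1}^{m} \Big( \sum_{i=1}^{m} \alpha_i K(x_i, x_j) - y_j \Big)^2
+ \lambda\, m \sum_{i=1}^{m} \alpha_i^2 ,
\]

where $K$ is a Mercer kernel, $\mathbf z = \{(x_i, y_i)\}_{i=1}^{m}$ is the (here dependent and possibly unbounded) sample, and $\lambda > 0$ is the regularization parameter.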

Author(s):  
Yuanxin Ma ◽  
Hongwei Sun

In this paper, the regression learning algorithm with a vector-valued RKHS is studied. We motivate the need for extending the learning theory of scalar-valued functions and analyze the learning performance. In this setting, the output data lie in a Hilbert space, and the associated RKHS consists of functions taking values in that space. By developing the mathematical properties of the vector-valued integral operator, capacity-independent error bounds and learning rates are derived by means of the integral operator technique.
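The vector-valued integral operator referred to here is usually defined as follows; this is a sketch with generic notation, since the abstract's original symbols were lost in extraction. For an operator-valued kernel $K$ on the input space $X$, taking values in the bounded linear operators on the output Hilbert space $Y$,

\[
(L_K f)(x) = \int_X K(x, t)\, f(t)\, d\rho_X(t), \qquad f \in L^2(X, \rho_X; Y),
\]

where $\rho_X$ denotes the marginal distribution of the input data.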


2011 ◽  
Vol 88 (7) ◽  
pp. 1471-1483 ◽  
Author(s):  
Yongquan Zhang ◽  
Feilong Cao ◽  
Zongben Xu

Author(s):  
HONGWEI SUN ◽  
PING LIU

A new multi-kernel regression learning algorithm is studied in this paper. In our setting, the hypothesis space is generated by two Mercer kernels, and thus has stronger approximation ability than in the single-kernel case. We provide the mathematical foundation for this regularized learning algorithm, and we obtain satisfactory capacity-dependent error bounds and learning rates by the covering number method.
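One common way to formalize a two-kernel regularized least squares scheme is sketched below; the hypothesis space and penalty used in the paper may be set up differently:

\[
f_{\mathbf z} = f_1 + f_2, \qquad
(f_1, f_2) = \arg\min_{f_1 \in \mathcal H_{K_1},\, f_2 \in \mathcal H_{K_2}}
\frac{1}{m} \sum_{i=1}^{m} \big( f_1(x_i) + f_2(x_i) - y_i \big)^2
+ \lambda_1 \| f_1 \|_{K_1}^2 + \lambda_2 \| f_2 \|_{K_2}^2 ,
\]

so candidate functions are sums $f_1 + f_2$, with each component penalized in the norm of its own RKHS.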


Author(s):  
Baoqi Su ◽  
Hong-Wei Sun

The loss function is a key element of a learning algorithm. Based on the regression learning algorithm with an offset, the coefficient-based regularization network with variance loss is proposed. The variance loss differs from the usual least square loss, hinge loss and pinball loss in that it induces a kind of cross-sample empirical risk. Moreover, our coefficient-based regularization relies only on a general kernel, i.e. the kernel is only required to be continuous and bounded and to satisfy a mild differentiability condition. These two characteristics bring essential difficulties to the theoretical analysis of this learning scheme. By the hypothesis space strategy and the error decomposition technique in [L. Shi, Learning theory estimates for coefficient-based regularized regression, Appl. Comput. Harmon. Anal. 34 (2013) 252–265], a capacity-dependent error analysis is carried out, and a satisfactory error bound and learning rates are derived under a very mild regularity condition on the regression function. We also find an effective way to deal with learning problems involving a cross-sample empirical risk.
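As a point of reference only, and as an assumption not confirmed by the abstract, one natural reading of a variance loss that couples pairs of samples is the empirical variance of the residuals,

\[
\mathcal E_{\mathbf z}(f) = \frac{1}{m^2} \sum_{i=1}^{m} \sum_{j=1}^{m}
\Big( \big( y_i - f(x_i) \big) - \big( y_j - f(x_j) \big) \Big)^2 ,
\]

which is invariant under adding a constant to $f$ and therefore pairs naturally with an offset term. Whatever its exact form in the paper, a pairwise risk of this kind is what the abstract calls a cross-sample empirical risk.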


Author(s):  
Meijian Zhang ◽  
Hongwei Sun

In this paper, we study the performance of kernel-based regression learning with non-i.i.d. sampling. The non-i.i.d. samples are drawn from different probability distributions that share the same conditional distribution. A more general marginal distribution assumption is proposed. Under this assumption, the consistency of the regularization kernel network (RKN) and of the coefficient regularization kernel network (CRKN) is proved. Satisfactory capacity-independent error bounds and learning rates are derived by integral operator techniques.
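For readers unfamiliar with the two schemes, RKN and CRKN are commonly written as follows; this is a standard sketch, and constants and normalizations may differ in the paper:

\[
\text{RKN:}\quad f_{\mathbf z} = \arg\min_{f \in \mathcal H_K}
\frac{1}{m} \sum_{i=1}^{m} \big( f(x_i) - y_i \big)^2 + \lambda \| f \|_K^2 ,
\]
\[
\text{CRKN:}\quad f_{\mathbf z} = \sum_{i=1}^{m} \alpha_i^{\mathbf z} K(x_i, \cdot), \qquad
\alpha^{\mathbf z} = \arg\min_{\alpha \in \mathbb{R}^m}
\frac{1}{m} \sum_{j=1}^{m} \Big( \sum_{i=1}^{m} \alpha_i K(x_i, x_j) - y_j \Big)^2
+ \lambda\, \Omega(\alpha),
\]

where $\Omega(\alpha)$ is a penalty on the coefficient vector (for example $m \sum_i \alpha_i^2$) rather than the RKHS norm.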


2014 ◽  
Vol 2014 ◽  
pp. 1-8
Author(s):  
Feng-Gong Lang ◽  
Xiao-Ping Xu

We mainly present the error analysis for two new cubic-spline-based methods: one is a lacunary interpolation method and the other is a very simple quasi-interpolation method. The new methods are able to reconstruct a function and its first two derivatives from noisy function data. Explicit error bounds for the methods are stated and proved. Numerical tests and comparisons are performed, and the numerical results verify the efficiency of our methods.
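The authors' lacunary interpolation and quasi-interpolation schemes are not reproduced here. As a generic baseline for the same task, the sketch below uses a SciPy cubic smoothing spline to recover a function and its first two derivatives from noisy samples; all names, parameter choices, and the smoothing-factor heuristic are illustrative assumptions, not the paper's method.

    import numpy as np
    from scipy.interpolate import UnivariateSpline

    # Noisy samples of f(x) = sin(x) on [0, 2*pi]
    rng = np.random.default_rng(0)
    x = np.linspace(0.0, 2.0 * np.pi, 201)
    noise_level = 1e-2
    y = np.sin(x) + noise_level * rng.standard_normal(x.size)

    # Cubic smoothing spline; s balances data fidelity against smoothness
    spline = UnivariateSpline(x, y, k=3, s=x.size * noise_level**2)

    # Reconstruct the function and its first two derivatives
    f0 = spline(x)                # estimate of f
    f1 = spline.derivative(1)(x)  # estimate of f'
    f2 = spline.derivative(2)(x)  # estimate of f''

    # Compare against the exact function and derivatives of sin(x)
    for name, est, exact in [("f", f0, np.sin(x)),
                             ("f'", f1, np.cos(x)),
                             ("f''", f2, -np.sin(x))]:
        print(name, "max error:", np.max(np.abs(est - exact)))

As expected for noisy data, the error grows with the order of the derivative, which is exactly the behavior the paper's explicit error bounds quantify for its own schemes.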


2010 ◽  
Vol 22 (12) ◽  
pp. 3221-3235 ◽  
Author(s):  
Hongzhi Tong ◽  
Di-Rong Chen ◽  
Fenghong Yang

The selection of the penalty functional is critical for the performance of a regularized learning algorithm, and thus it deserves special attention. In this article, we present a least square regression algorithm based on $l^p$-coefficient regularization. Compared with classical regularized least square regression, the new algorithm differs in its regularization term. Our primary focus is on the error analysis of the algorithm. An explicit learning rate is derived under some ordinary assumptions.
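The $l^p$-coefficient regularization scheme discussed here is commonly written as below; this is a sketch, and the normalization of the penalty and the admissible range of $p$ in the article may differ:

\[
f_{\mathbf z} = \sum_{i=1}^{m} \alpha_i^{\mathbf z} K(x_i, \cdot), \qquad
\alpha^{\mathbf z} = \arg\min_{\alpha \in \mathbb{R}^m}
\frac{1}{m} \sum_{j=1}^{m} \Big( \sum_{i=1}^{m} \alpha_i K(x_i, x_j) - y_j \Big)^2
+ \lambda \sum_{i=1}^{m} | \alpha_i |^p ,
\]

typically with $1 \le p \le 2$; the case $p = 2$ recovers the $l^2$-coefficient scheme, while smaller $p$ promotes sparser coefficient vectors.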

