CONVERGENCE ANALYSIS OF COEFFICIENT-BASED REGULARIZATION UNDER MOMENT INCREMENTAL CONDITION

Author(s):
CHENG WANG, JIA CAI

In this paper, we investigate the coefficient-based regularized least squares regression problem in a data-dependent hypothesis space. The learning algorithm is implemented with samples drawn from unbounded sampling processes, and the error analysis is performed by a stepping-stone technique. A new error decomposition technique is proposed for this analysis. The regularization parameters in our setting provide much more flexibility and adaptivity. Sharp learning rates are derived by means of ℓ2-empirical covering numbers under a moment hypothesis condition.
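As a rough illustration of the coefficient-based scheme described above, the sketch below fits f(x) = Σ_i a_i K(x, x_i) by penalizing the coefficient vector directly; the Gaussian kernel, the ℓ2 penalty, and the λn scaling are illustrative choices, not taken from the paper.

```python
import numpy as np

def gaussian_kernel(X, Y, sigma=1.0):
    # Pairwise Gaussian kernel matrix between the rows of X and Y.
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def coefficient_regularized_ls(X, y, lam=1e-2, sigma=1.0):
    """Minimize (1/n) * ||K a - y||^2 + lam * ||a||^2 over the coefficients a,
    where f(x) = sum_i a_i K(x, x_i) lives in the data-dependent span."""
    n = len(y)
    K = gaussian_kernel(X, X, sigma)
    # Normal equations for the l2-coefficient penalty (closed form).
    a = np.linalg.solve(K.T @ K + lam * n * np.eye(n), K.T @ y)
    return a, K

# Toy usage on synthetic data.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(50, 1))
y = np.sin(np.pi * X[:, 0]) + 0.1 * rng.standard_normal(50)
a, K = coefficient_regularized_ls(X, y)
print("training MSE:", np.mean((K @ a - y) ** 2))
```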

2013, Vol. 2013, pp. 1-6
Author(s):
Dao-Hong Xiang

This paper considers the ERM scheme for quantile regression. We conduct an error analysis for this learning algorithm by means of a variance-expectation bound when a noise condition is satisfied by the underlying probability measure. The learning rates are derived by applying concentration techniques involving the ℓ2-empirical covering numbers.
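For concreteness, the following sketch runs ERM for quantile regression with the pinball loss over a linear hypothesis class via subgradient descent; the linear model, step size, and iteration count are illustrative assumptions, not part of the analysis above.

```python
import numpy as np

def pinball_loss(residual, tau):
    # rho_tau(u) = tau * u for u >= 0, (tau - 1) * u for u < 0.
    return np.where(residual >= 0, tau * residual, (tau - 1) * residual)

def erm_quantile_linear(X, y, tau=0.5, lr=0.05, epochs=2000):
    """ERM for the tau-quantile with a linear model f(x) = w.x + b,
    via subgradient descent on the average pinball loss."""
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        r = y - (X @ w + b)
        # Subgradient of the empirical pinball risk w.r.t. the prediction f(x_i).
        g = -(tau - (r < 0).astype(float))
        w -= lr * (X.T @ g) / n
        b -= lr * g.mean()
    return w, b

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2))
y = X @ np.array([1.0, -2.0]) + rng.standard_normal(200)
w, b = erm_quantile_linear(X, y, tau=0.75)
print("estimated coefficients:", w, "offset:", b)
print("empirical pinball risk:", pinball_loss(y - (X @ w + b), 0.75).mean())
```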


2013, Vol. 2013, pp. 1-8
Author(s):
Jia Cai

We investigate a coefficient-based least squares regression problem with indefinite kernels from non-identical unbounded sampling processes. Here, non-identical unbounded sampling means the samples are drawn independently, but not identically, from unbounded sampling processes. The kernel is not required to be symmetric or positive semi-definite, which introduces additional difficulty into the error analysis. By introducing a suitable reproducing kernel Hilbert space (RKHS) and a suitable intermediate integral operator, an elaborate analysis is presented by means of a novel technique for the sample error, leading to satisfactory results.
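The point that the kernel need not be symmetric or positive semi-definite can be seen in a short sketch: with a penalty on the coefficient vector alone, the same normal equations apply to an arbitrary kernel matrix. The particular asymmetric kernel below is hypothetical, chosen only for illustration.

```python
import numpy as np

def indefinite_kernel(X, Y):
    # A deliberately asymmetric, non-PSD kernel (hypothetical, for illustration only).
    return np.tanh(X @ Y.T + 0.5) - 0.3 * (X[:, :1] - Y[:, :1].T)

def coefficient_ls_indefinite(X, y, lam=1e-2):
    """Coefficient-based least squares with a general kernel: since only the
    coefficient vector is penalized, the normal equations below do not require
    K to be symmetric or positive semi-definite."""
    n = len(y)
    K = indefinite_kernel(X, X)
    return np.linalg.solve(K.T @ K + lam * n * np.eye(n), K.T @ y)
```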


Author(s):
HONGWEI SUN, PING LIU

A new multi-kernel regression learning algorithm is studied in this paper. In our setting, the hypothesis space is generated by two Mercer kernels, so it has stronger approximation ability than in the single-kernel case. We provide the mathematical foundation for this regularized learning algorithm and obtain satisfactory capacity-dependent error bounds and learning rates by the covering number method.
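A minimal sketch of a two-kernel hypothesis space, assuming the coefficient-based form f(x) = Σ_i a_i K1(x, x_i) + Σ_i b_i K2(x, x_i) with separate ridge penalties; the two Gaussian bandwidths and the penalty form are illustrative assumptions, not the paper's construction.

```python
import numpy as np

def rbf(X, Y, sigma):
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def two_kernel_regression(X, y, lam1=1e-2, lam2=1e-2, s1=0.5, s2=2.0):
    """Regression in a hypothesis space generated by two Mercer kernels:
    f(x) = sum_i a_i K1(x, x_i) + sum_i b_i K2(x, x_i),
    with separate ridge penalties on the two coefficient vectors."""
    n = len(y)
    K1, K2 = rbf(X, X, s1), rbf(X, X, s2)
    G = np.hstack([K1, K2])                                    # n x 2n design matrix
    P = np.diag(np.r_[lam1 * np.ones(n), lam2 * np.ones(n)])   # block-diagonal penalty
    c = np.linalg.solve(G.T @ G / n + P, G.T @ y / n)
    return c[:n], c[n:]                                        # coefficients a, b
```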


Author(s):
Baoqi Su, Hong-Wei Sun

The loss function is a key element of a learning algorithm. Building on the regression learning algorithm with an offset, a coefficient-based regularization network with variance loss is proposed. The variance loss differs from the usual least squares loss, hinge loss, and pinball loss: it induces a cross-sample empirical risk. Moreover, our coefficient-based regularization relies only on a general kernel, i.e., the kernel is required only to be continuous, bounded, and to satisfy a mild differentiability condition. These two characteristics bring essential difficulties to the theoretical analysis of this learning scheme. By the hypothesis space strategy and the error decomposition technique in [L. Shi, Learning theory estimates for coefficient-based regularized regression, Appl. Comput. Harmon. Anal. 34 (2013) 252–265], a capacity-dependent error analysis is carried out, and satisfactory error bounds and learning rates are derived under a very mild regularity condition on the regression function. We also find an effective way to deal with learning problems involving a cross-sample empirical risk.
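To indicate what a cross-sample empirical risk can look like, the sketch below uses the empirical variance of the offset-corrected residuals as the data-fit term; this particular form is an assumption made here for illustration and may differ from the variance loss defined in the paper.

```python
import numpy as np

def cross_sample_variance_risk(K, a, y, offset):
    """One possible 'cross-sample' empirical risk: the empirical variance of the
    offset-corrected residuals r_i = y_i - (K a)_i - offset. The double sum over
    pairs (i, j) couples different samples, unlike a pointwise loss."""
    r = y - K @ a - offset
    n = len(y)
    # (1 / (2 n^2)) * sum_{i,j} (r_i - r_j)^2 equals the empirical variance of r.
    return ((r[:, None] - r[None, :]) ** 2).sum() / (2.0 * n ** 2)
```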


2010, Vol. 22 (12), pp. 3221-3235
Author(s):
Hongzhi Tong, Di-Rong Chen, Fenghong Yang

The selection of the penalty functional is critical for the performance of a regularized learning algorithm, and thus it deserves special attention. In this article, we present a least squares regression algorithm based on ℓp-coefficient regularization. Compared with classical regularized least squares regression, the new algorithm differs in its regularization term. Our primary focus is on the error analysis of the algorithm, and an explicit learning rate is derived under some ordinary assumptions.
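A sketch of ℓp-coefficient regularization under the assumption 1 < p ≤ 2, so the penalty is differentiable and plain gradient descent suffices; the kernel matrix K, λ, and the optimizer are illustrative choices rather than the paper's algorithm.

```python
import numpy as np

def lp_coefficient_ls(K, y, p=1.5, lam=1e-2, lr=1e-3, epochs=5000):
    """Least squares with an lp penalty on the coefficients:
        minimize (1/n) * ||K a - y||^2 + lam * sum_i |a_i|^p,   1 < p <= 2.
    For p > 1 the penalty is differentiable, so gradient descent is used here."""
    n = len(y)
    a = np.zeros(n)
    for _ in range(epochs):
        grad_fit = 2.0 * K.T @ (K @ a - y) / n
        grad_pen = lam * p * np.sign(a) * np.abs(a) ** (p - 1)
        a -= lr * (grad_fit + grad_pen)
    return a
```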


2015, Vol. 2015, pp. 1-9
Author(s):
Yoonseok Shin

Among the recent data mining techniques available, the boosting approach has attracted a great deal of attention because of its effective learning algorithm and strong generalization performance. However, the boosting approach has yet to be used for regression problems within the construction domain, including cost estimation, although it has been actively utilized in other domains. Therefore, a boosting regression tree (BRT) is applied to cost estimation at the early stage of a construction project to examine the applicability of the boosting approach to a regression problem within the construction domain. To evaluate the BRT model, its performance was compared with that of a neural network (NN) model, which has been proven to perform well in cost estimation. The BRT model showed results similar to those of the NN model using 234 actual cost datasets of a building construction project. In addition, the BRT model can provide additional information, such as an importance plot and the model structure, which can help estimators understand the decision-making process. Consequently, the boosting approach has potential applicability to preliminary cost estimation in building construction projects.
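A minimal BRT sketch using scikit-learn's GradientBoostingRegressor on synthetic data; the features, the 234 synthetic records, and the hyperparameters are placeholders, not those of the study.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the cost data: 234 projects with 5 numeric attributes.
rng = np.random.default_rng(0)
X = rng.uniform(size=(234, 5))
y = 100 + 50 * X[:, 0] + 30 * X[:, 1] ** 2 + rng.normal(scale=5, size=234)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
brt = GradientBoostingRegressor(n_estimators=300, learning_rate=0.05, max_depth=3)
brt.fit(X_tr, y_tr)

print("test R^2:", brt.score(X_te, y_te))
print("feature importances:", brt.feature_importances_)  # basis of an importance plot
```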


2013, Vol. 108, pp. 13-22
Author(s):
Shao-Gao Lv, Tie-Feng Ma, Liu Liu, Yun-Long Feng

2017, Vol. 15 (06), pp. 815-836
Author(s):
Yulong Zhao, Jun Fan, Lei Shi

The ranking problem aims at learning real-valued functions to order instances, a task that has attracted great interest in statistical learning theory. In this paper, we consider the regularized least squares ranking algorithm within the framework of reproducing kernel Hilbert spaces. In particular, we focus on analyzing the generalization error of this ranking algorithm, and we improve the existing learning rates by virtue of an error decomposition technique from regression and Hoeffding's decomposition for U-statistics.
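A sketch of least squares ranking in coefficient form: the pairwise squared loss over all sample pairs reduces, via a centering matrix, to the small linear system below. The symmetric nonsingular kernel matrix, the penalty a'Ka, and this closed form are our illustrative assumptions, not necessarily the paper's formulation.

```python
import numpy as np

def rls_rank(K, y, lam=1e-2):
    """Regularized least squares ranking with f(x) = sum_i a_i K(x, x_i):
        minimize (1/n^2) * sum_{i,j} ((y_i - y_j) - (f(x_i) - f(x_j)))^2 + lam * a' K a.
    The pairwise term equals (2/n) * (y - K a)' C (y - K a) with the centering matrix
    C = I - (1/n) 1 1'; assuming K is symmetric and nonsingular, the first-order
    condition reduces to the linear system solved below."""
    n = len(y)
    C = np.eye(n) - np.ones((n, n)) / n
    return np.linalg.solve((2.0 / n) * C @ K + lam * np.eye(n), (2.0 / n) * C @ y)
```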


Author(s):
JUAN HUANG, HONG CHEN, LUOQING LI

We propose a stochastic gradient descent algorithm for least squares regression with coefficient regularization. An explicit expression for the solution is derived via the sampling operator and an empirical integral operator. Learning rates are given under suitable choices of the step sizes and regularization parameters.
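A sketch of SGD for coefficient-regularized least squares; the random sampling order, the polynomially decaying step size, and the ℓ2 coefficient penalty are assumed here for illustration and need not match the paper's choices.

```python
import numpy as np

def sgd_coefficient_ls(K, y, lam=1e-2, epochs=20, step0=0.5):
    """Stochastic gradient descent for coefficient-regularized least squares:
    objective (1/n) * sum_i ((K a)_i - y_i)^2 + lam * ||a||^2, updated one
    randomly chosen sample at a time with a decaying step size."""
    n = len(y)
    a = np.zeros(n)
    rng = np.random.default_rng(0)
    t = 0
    for _ in range(epochs):
        for i in rng.permutation(n):
            eta = step0 / (t + 1) ** (2.0 / 3.0)   # assumed polynomial decay
            grad = 2.0 * (K[i] @ a - y[i]) * K[i] + 2.0 * lam * a
            a -= eta * grad
            t += 1
    return a
```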

