convex loss function
Recently Published Documents

TOTAL DOCUMENTS: 19 (FIVE YEARS: 1)
H-INDEX: 5 (FIVE YEARS: 0)

2020 ◽  
Vol 357 (11) ◽  
pp. 7069-7091
Author(s):  
Kuaini Wang ◽  
Huimin Pei ◽  
Jinde Cao ◽  
Ping Zhong


Author(s):  
Shuhua Wang ◽  
Zhenlong Chen ◽  
Baohuai Sheng

It is known that robust support vector (SV) regression was proposed to alleviate the performance deterioration caused by outliers; it is essentially an optimization problem associated with a non-convex loss function, so its performance cannot be analyzed with the usual convex-analysis approach. For a robust SV regression algorithm containing two homotopy parameters, a non-convex method is developed using quasiconvex analysis theory, and an error estimate is given. An explicit convergence rate is provided, and the degree to which outliers affect the performance is shown quantitatively.
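The abstract does not spell out the specific robust loss or the two homotopy parameters; as a minimal illustration of why such robust losses break convexity, the sketch below (a generic example, not the paper's formulation) compares the standard ε-insensitive loss of SV regression with a truncated variant whose value is capped, so an outlier's influence is bounded:

```python
import numpy as np

def eps_insensitive(r, eps=0.1):
    # Standard (convex) epsilon-insensitive loss used in SV regression.
    return np.maximum(np.abs(r) - eps, 0.0)

def truncated_eps_insensitive(r, eps=0.1, t=1.0):
    # Truncated (non-convex) variant: the loss is capped at t, so a single
    # large outlier residual can contribute at most t to the objective.
    return np.minimum(eps_insensitive(r, eps), t)

residuals = np.array([0.05, 0.3, 5.0])       # the last residual is an outlier
print(eps_insensitive(residuals))            # outlier contributes 4.9
print(truncated_eps_insensitive(residuals))  # outlier contribution capped at 1.0
```

Capping the loss is exactly what destroys convexity of the objective, which is why the error analysis needs tools beyond standard convex analysis.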



Author(s):  
Keerthiram Murugesan ◽  
Jaime Carbonell

This paper introduces self-paced task selection to multitask learning: instances from more closely related tasks are selected in an easier-to-harder progression, emulating an effective human education strategy applied to multitask machine learning. We develop the mathematical foundation for the approach, based on iteratively selecting the most appropriate task, learning its parameters, and updating the shared knowledge, by optimizing a new bi-convex loss function. The proposed method applies quite generally, including to multitask feature learning and multitask learning with alternating structure optimization. Results show that in each of these formulations, self-paced (easier-to-harder) task selection outperforms the baseline version of the method in all experiments.
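The paper's bi-convex objective is not reproduced in the abstract; the following is a toy, hypothetical rendering of the loop it describes (pick the currently easiest task, fit its parameters, fold them back into the shared knowledge), using ridge regression and invented schedules rather than the authors' formulation:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_task(n=50, d=5, noise=0.1):
    # Toy regression task: y = X @ w_true + noise.
    X = rng.normal(size=(n, d))
    w_true = rng.normal(size=d)
    return X, X @ w_true + noise * rng.normal(size=n)

tasks = [make_task() for _ in range(4)]
d = tasks[0][0].shape[1]
shared = np.zeros(d)     # shared knowledge across tasks
learned = {}             # task index -> fitted weight vector

def loss_under_shared(t):
    # How well the shared knowledge alone already explains task t
    # (used as the "easiness" score for self-paced selection).
    X, y = tasks[t]
    return np.mean((X @ shared - y) ** 2)

lam = 1.0
for _ in range(len(tasks)):
    # Self-paced step: among the remaining tasks, pick the easiest one.
    t = min((i for i in range(len(tasks)) if i not in learned), key=loss_under_shared)
    X, y = tasks[t]
    # Fit task t with a ridge penalty pulling its weights toward the shared vector:
    # minimize ||X w - y||^2 + lam * ||w - shared||^2.
    w = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y + lam * shared)
    learned[t] = w
    # Update the shared knowledge from the tasks learned so far.
    shared = np.mean(list(learned.values()), axis=0)
    print(f"learned task {t}")
```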



2014 ◽  
Vol 30 (2) ◽  
pp. 334-356 ◽  
Author(s):  
Kyungchul Song

This paper considers a decision maker who prefers to make a point decision when the object of interest is interval-identified with regular bounds. When the bounds are just identified along with a known interval length, the local asymptotic minimax decision with respect to a symmetric convex loss function takes an obvious form: an efficient lower-bound estimator plus half of the known interval length. However, when the interval length, or any nontrivial upper bound for it, is not known, the minimax approach suffers from triviality because the maximal risk is associated with infinitely long identified intervals. In this case, the paper proposes a local asymptotic minimax regret approach and shows that the midpoint between semiparametrically efficient bound estimators is optimal.
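As a toy numerical check of the midpoint rule (ignoring the asymptotics and estimation error that the paper actually handles), for a fixed identified interval and squared-error loss the worst-case loss over the interval is minimized at the midpoint:

```python
import numpy as np

a, b = 1.0, 4.0                       # identified interval for the parameter
thetas = np.linspace(a, b, 401)       # possible true values inside the interval
decisions = np.linspace(a, b, 401)    # candidate point decisions

# Worst-case squared-error loss of each point decision over the interval.
worst_case = [np.max((d - thetas) ** 2) for d in decisions]
print(decisions[np.argmin(worst_case)])   # 2.5, i.e. the midpoint (a + b) / 2
```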



2013 ◽  
Vol 2013 ◽  
pp. 1-11 ◽  
Author(s):  
Hongzhi Tong ◽  
Di-Rong Chen ◽  
Fenghong Yang

We consider a family of classification algorithms generated from a regularization kernel scheme associated with -regularizer and convex loss function. Our main purpose is to provide an explicit convergence rate for the excess misclassification error of the produced classifiers. The error decomposition includes approximation error, hypothesis error, and sample error. We apply some novel techniques to estimate the hypothesis error and sample error. Learning rates are eventually derived under some assumptions on the kernel, the input space, the marginal distribution, and the approximation error.
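The abstract names the three components of the error decomposition without stating them; one common way to organize such a decomposition (notation mine, not necessarily the paper's) writes the excess convex risk of the learned classifier $f_z$, relative to the risk minimizer $f_\rho$ and a fixed regularizing function $f_\lambda$, as
\[
\mathcal{E}(f_z)-\mathcal{E}(f_\rho)
=\underbrace{\bigl[\mathcal{E}(f_z)-\mathcal{E}_z(f_z)\bigr]+\bigl[\mathcal{E}_z(f_\lambda)-\mathcal{E}(f_\lambda)\bigr]}_{\text{sample error}}
+\underbrace{\bigl[\mathcal{E}_z(f_z)-\mathcal{E}_z(f_\lambda)\bigr]}_{\text{hypothesis error}}
+\underbrace{\bigl[\mathcal{E}(f_\lambda)-\mathcal{E}(f_\rho)\bigr]}_{\text{approximation error}},
\]
where $\mathcal{E}_z$ is the empirical risk; a comparison theorem then transfers a bound on the excess convex risk to the excess misclassification error.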



2012 ◽  
Vol 28 (4) ◽  
pp. 1699-1714 ◽  
Author(s):  
Lichun Wang ◽  
Yuan You ◽  
Heng Lian




2011 ◽  
Vol 09 (04) ◽  
pp. 395-408 ◽  
Author(s):  
TING HU

We consider a fully online regression algorithm associated with a general convex loss function and Gaussian kernels with changing variances. Error analysis is conducted in a setting with samples drawn from a non-identical sequence of probability measures. When a fixed Gaussian kernel is used, it is known that the learning ability of the induced algorithms is weak. By allowing varying Gaussians, we show that the achieved learning rates can decay polynomially.
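As a rough, hypothetical sketch of such a scheme (the paper treats general convex losses and non-identical sampling; this toy uses least squares, i.i.d. samples, and invented step-size and variance schedules), an online kernel update that shrinks the Gaussian variance over time looks like:

```python
import numpy as np

rng = np.random.default_rng(0)

def gaussian_kernel(x, y, sigma):
    return np.exp(-(x - y) ** 2 / (2 * sigma ** 2))

def target(x):
    return np.sin(2 * np.pi * x)

# The learned function is f_t(x) = sum_i c_i * K_{sigma_i}(x_i, x); each online
# step adds one kernel term centered at the newly observed sample.
centers, coeffs, sigmas = [], [], []

def predict(x):
    return sum(c * gaussian_kernel(xc, x, s) for xc, c, s in zip(centers, coeffs, sigmas))

T = 500
for t in range(1, T + 1):
    x_t = rng.uniform(0.0, 1.0)
    y_t = target(x_t) + 0.1 * rng.normal()
    eta_t = 1.0 / t ** 0.6       # decaying step size (illustrative schedule)
    sigma_t = 1.0 / t ** 0.25    # Gaussian width shrinks as more data arrive
    err = predict(x_t) - y_t     # gradient of the squared loss at the new point
    centers.append(x_t)
    coeffs.append(-eta_t * err)
    sigmas.append(sigma_t)

test_x = np.linspace(0.0, 1.0, 200)
mse = np.mean([(predict(x) - target(x)) ** 2 for x in test_x])
print(f"test MSE after {T} samples: {mse:.4f}")
```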


