L1-Norm Support Vector Regression in Primal Based on Huber Loss Function

Author(s): Anagha Puthiyottil, S. Balasundaram, Yogendra Meena
2021, Vol. 111, pp. 611-615
Author(s): Yuehao Bai, Hung Ho, Guillaume A. Pouliot, Joshua Shea

We provide large-sample distribution theory for support vector regression (SVR) with l1-norm along with error bars for the SVR regression coefficients. Although a classical Wald confidence interval obtains from our theory, its implementation inherently depends on the choice of a tuning parameter that scales the variance estimate and thus the width of the error bars. We address this shortcoming by further proposing an alternative large-sample inference method based on the inversion of a novel test statistic that displays competitive power properties and does not depend on the choice of a tuning parameter.
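The paper itself gives no code; as a rough illustration of the kind of plug-in Wald interval the abstract critiques, the sketch below fits the ε = 0 (least absolute deviation) special case of l1-norm SVR by linear programming and builds Wald error bars from the standard LAD asymptotic variance, (2 f(0))⁻² (X′X/n)⁻¹, where the kernel bandwidth h scales the variance estimate and so plays the role of the tuning parameter the authors describe. The function names (l1_svr_fit, lad_wald_ci) and the Gaussian-kernel density estimate are illustrative assumptions, not the authors' construction.

```python
import numpy as np
from scipy.optimize import linprog
from scipy.stats import norm

def l1_svr_fit(X, y):
    """Fit the eps = 0 special case of l1-norm SVR (least absolute
    deviation regression) as a linear program:
        min sum(u + v)  s.t.  [X, 1] @ beta + u - v = y,  u, v >= 0.
    """
    n, d = X.shape
    A = np.hstack([X, np.ones((n, 1))])          # absorb the intercept
    k = d + 1
    # Variables: beta+ (k), beta- (k), u (n), v (n); linprog's default
    # bounds already constrain all of them to be nonnegative.
    c = np.concatenate([np.zeros(2 * k), np.ones(2 * n)])
    A_eq = np.hstack([A, -A, np.eye(n), -np.eye(n)])
    res = linprog(c, A_eq=A_eq, b_eq=y, method="highs")
    return res.x[:k] - res.x[k:2 * k]            # beta = beta+ - beta-

def lad_wald_ci(X, y, beta, h=None, alpha=0.05):
    """Plug-in Wald interval from the LAD asymptotic variance
    (2 f(0))^{-2} (X'X / n)^{-1}.  The residual density f(0) is estimated
    with a Gaussian kernel whose bandwidth h scales the variance estimate,
    i.e., h is the tuning parameter the interval depends on.
    """
    n = X.shape[0]
    A = np.hstack([X, np.ones((n, 1))])
    r = y - A @ beta
    if h is None:
        h = 1.06 * r.std() * n ** (-0.2)         # rule-of-thumb bandwidth
    f0 = norm.pdf(r / h).mean() / h              # KDE of residuals at 0
    V = np.linalg.inv(A.T @ A / n) / (2 * f0) ** 2
    se = np.sqrt(np.diag(V) / n)
    z = norm.ppf(1 - alpha / 2)
    return beta - z * se, beta + z * se
```

Varying h visibly changes the interval width, which is exactly the tuning-parameter sensitivity the proposed test-inversion method is designed to avoid.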


2019, Vol. 32 (15), pp. 11285-11309
Author(s): S. Balasundaram, Subhash Chandra Prasad

2014, Vol. 2014, pp. 1-8
Author(s): Kuaini Wang, Jingjing Zhang, Yanyan Chen, Ping Zhong

Least squares support vector machine (LS-SVM) is a powerful tool for pattern classification and regression estimation. However, LS-SVM is sensitive to large noise and outliers because it employs the squared loss function. To address this problem, we propose an absolute deviation loss function that reduces the effect of outliers and derive a robust regression model, termed least absolute deviation support vector regression (LAD-SVR). Since the proposed loss function is not differentiable, we approximate it by constructing a smooth function and develop a Newton algorithm to solve the robust model. Numerical experiments on both artificial and benchmark datasets demonstrate the robustness and effectiveness of the proposed method.
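The paper specifies its own smoothing function and Newton solver; as a minimal sketch under one common choice of smooth surrogate, φ(r) = √(r² + ε²) ≈ |r|, the primal objective 0.5‖w‖² + C Σᵢ φ(yᵢ − w·xᵢ − b) can be minimized with plain Newton steps as below. The function names and the particular surrogate are assumptions for illustration, not the authors' exact construction.

```python
import numpy as np

def smooth_abs(r, eps=1e-3):
    """Smooth surrogate for |r| and its first two derivatives.
    phi(r) = sqrt(r^2 + eps^2) is one common choice; the paper's
    exact smoothing function may differ.
    """
    s = np.sqrt(r ** 2 + eps ** 2)
    return s, r / s, eps ** 2 / s ** 3

def lad_svr_newton(X, y, C=1.0, eps=1e-3, tol=1e-8, max_iter=50):
    """Newton's method on the smoothed primal LAD-SVR objective
        0.5 * ||w||^2 + C * sum_i phi(y_i - w.x_i - b).
    """
    n, d = X.shape
    A = np.hstack([X, np.ones((n, 1))])   # append a column for the bias b
    z = np.zeros(d + 1)                   # z = [w; b]
    reg = np.ones(d + 1)
    reg[-1] = 0.0                         # do not regularize the bias
    for _ in range(max_iter):
        r = y - A @ z
        _, d1, d2 = smooth_abs(r, eps)
        grad = reg * z - C * (A.T @ d1)
        H = np.diag(reg) + C * (A.T * d2) @ A   # A' diag(phi'') A
        step = np.linalg.solve(H, grad)
        z -= step
        if np.linalg.norm(step) < tol:
            break
    return z[:-1], z[-1]                  # weights w, bias b
```

Because φ″ > 0 everywhere, the Hessian H is positive definite and the plain Newton step is always well defined; a line search would make the sketch more robust on difficult problems.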

