Inference for Support Vector Regression under ℓ1 Regularization

2021 ◽  
Vol 111 ◽  
pp. 611-615
Author(s):  
Yuehao Bai ◽  
Hung Ho ◽  
Guillaume A. Pouliot ◽  
Joshua Shea

We provide large-sample distribution theory for support vector regression (SVR) under ℓ1-norm regularization, along with error bars for the SVR regression coefficients. Although a classical Wald confidence interval obtains from our theory, its implementation inherently depends on the choice of a tuning parameter that scales the variance estimate and thus the width of the error bars. We address this shortcoming by proposing an alternative large-sample inference method based on the inversion of a novel test statistic that displays competitive power properties and does not depend on the choice of a tuning parameter.
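
As a concrete reference point, the ℓ1-penalized SVR estimator itself can be computed as a linear program. The sketch below implements only the point estimate of the coefficients, not the paper's inference procedures; `lam` and `eps` are illustrative tuning choices, not values taken from the paper.

```python
import numpy as np
from scipy.optimize import linprog

def l1_svr(X, y, lam=1.0, eps=0.1):
    """epsilon-insensitive SVR with an l1 penalty on the coefficients:
        min_{w,b}  lam*||w||_1 + sum_i max(0, |y_i - x_i.w - b| - eps)
    Cast as an LP via w = u - v (u, v >= 0) and slacks xi >= 0."""
    n, p = X.shape
    # Decision vector z = [u (p), v (p), b (1), xi (n)].
    c = np.concatenate([lam * np.ones(2 * p), [0.0], np.ones(n)])
    I = np.eye(n)
    ones = np.ones((n, 1))
    # Encode |y_i - x_i.w - b| <= eps + xi_i as two inequality rows.
    A_ub = np.vstack([
        np.hstack([-X,  X, -ones, -I]),   # y_i - x_i.w - b <= eps + xi_i
        np.hstack([ X, -X,  ones, -I]),   # x_i.w + b - y_i <= eps + xi_i
    ])
    b_ub = np.concatenate([eps - y, eps + y])
    bounds = [(0, None)] * (2 * p) + [(None, None)] + [(0, None)] * n
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    z = res.x
    return z[:p] - z[p:2 * p], z[2 * p]   # coefficients w, intercept b
```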

2014 ◽  
Vol 2014 ◽  
pp. 1-10
Author(s):  
Yufang Liu ◽  
Bin Jiang ◽  
Hui Yi ◽  
Cuimei Bo

While support vector regression (SVR) is widely used both as a function-approximation tool and as a residual generator for nonlinear system fault isolation, a drawback of the method is the freedom in selecting model parameters. Moreover, for samples with disparate distribution complexities, selecting a single set of reasonable parameters may be impossible. To alleviate this problem, we introduce flexible support vector regression (F-SVR), which is especially suited to modelling complicated sample distributions because it is free from manual parameter selection: reasonable F-SVR parameters are generated automatically from the sample distribution. Finally, we apply the method to fault isolation for high-frequency power supplies, where satisfactory results are obtained.
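
For context, the standard use of SVR as a residual generator for fault isolation, which this work builds on, looks roughly as follows. This is a minimal scikit-learn sketch with manually chosen `C` and `epsilon` (precisely the selection burden the paper targets); the synthetic data and threshold rule are assumptions for illustration, and F-SVR's automatic parameter generation is not reproduced here.

```python
import numpy as np
from sklearn.svm import SVR

# Fit a nominal (fault-free) model of the system output.
rng = np.random.default_rng(0)
X_train = rng.uniform(-1, 1, size=(200, 2))             # nominal operating data
y_train = np.sin(X_train[:, 0]) + 0.5 * X_train[:, 1]   # stand-in system output

model = SVR(kernel="rbf", C=10.0, epsilon=0.01).fit(X_train, y_train)

# Residual generation: compare measured output against the SVR prediction.
def residual(x_new, y_measured):
    return y_measured - model.predict(x_new)

# Flag a fault when the residual exceeds a threshold calibrated on nominal data.
threshold = 3.0 * np.std(y_train - model.predict(X_train))
r = residual(X_train[:5], y_train[:5] + 0.8)   # injected offset fault
print(np.abs(r) > threshold)                   # faulty samples flagged True
```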


Author(s):  
Ya-Fen Ye ◽  
Chao Ying ◽  
Yue-Xiang Jiang ◽  
Chun-Na Li ◽  
...

In this study, we focus on the feature selection problem in regression and propose a new version of L1 support vector regression (L1-SVR), known as L1-norm least squares support vector regression (L1-LSSVR). The alternating direction method of multipliers (ADMM), a method from the augmented Lagrangian family, is used to solve L1-LSSVR. The sparse solution of L1-LSSVR realizes feature selection effectively. Furthermore, the ADMM algorithm decomposes L1-LSSVR into a sequence of simpler subproblems, resulting in faster training. The experimental results demonstrate that L1-LSSVR is not only as effective as L1-SVR, LSSVR, and SVR in both feature selection and regression, but also much faster than L1-SVR and SVR.
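
The abstract does not give the full L1-LSSVR objective, but the ADMM mechanism it credits for sparsity and speed is the standard splitting with a soft-thresholding step. Below is a minimal sketch for a generic ℓ1-penalized least-squares problem (lasso-style, without the SVR bias term or the paper's exact loss weighting); `lam`, `rho`, and the iteration count are illustrative choices.

```python
import numpy as np

def soft_threshold(v, kappa):
    """Proximal operator of the l1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - kappa, 0.0)

def admm_l1_ls(X, y, lam=0.1, rho=1.0, n_iter=200):
    """ADMM for  min_w  0.5*||Xw - y||^2 + lam*||w||_1.
    Split w = z, then alternate a ridge-like solve, soft-thresholding,
    and a dual update -- each subproblem is simple, which is the
    source of the training-speed advantage."""
    n, p = X.shape
    # Factor once; every w-update is then just two triangular solves.
    L = np.linalg.cholesky(X.T @ X + rho * np.eye(p))
    Xty = X.T @ y
    z = np.zeros(p)
    u = np.zeros(p)
    for _ in range(n_iter):
        w = np.linalg.solve(L.T, np.linalg.solve(L, Xty + rho * (z - u)))
        z = soft_threshold(w + u, lam / rho)   # induces exact zeros
        u += w - z
    return z   # sparse coefficients: nonzeros are the selected features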


2016 ◽  
Vol 136 (12) ◽  
pp. 898-907 ◽  
Author(s):  
Joao Gari da Silva Fonseca Junior ◽  
Hideaki Ohtake ◽  
Takashi Oozeki ◽  
Kazuhiko Ogimoto

2020 ◽  
Author(s):  
Avinash Wesley ◽  
Bharat Mantha ◽  
Ajay Rajeev ◽  
Aimee Taylor ◽  
Mohit Dholi ◽  
...  
