Inference robust to outliers with ℓ1-norm penalization
This paper considers inference in a linear regression model with outliers, in which the number of outliers can grow with the sample size while their proportion goes to 0. We propose a square-root lasso ℓ1-norm penalized estimator. We derive rates of convergence and establish asymptotic normality. Our estimator has the same asymptotic variance as the OLS estimator in the standard linear model, which enables us to build tests and confidence sets in the usual and simple manner. The proposed procedure is also computationally advantageous: it amounts to solving a convex optimization program. Overall, the suggested approach offers a practical robust alternative to the ordinary least squares estimator.
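To illustrate the idea, here is a minimal sketch of the mean-shift formulation that underlies this class of estimators: each observation gets its own shift parameter, and an ℓ1 penalty on the shifts absorbs the outliers. For simplicity the sketch uses a plain (not square-root) lasso penalty and solves the convex program by block coordinate descent; the function and variable names are illustrative, not from the paper.

```python
import numpy as np

def soft_threshold(r, lam):
    # Elementwise soft-thresholding: the exact minimizer of the l1 proximal step.
    return np.sign(r) * np.maximum(np.abs(r) - lam, 0.0)

def robust_ols_l1(X, y, lam, n_iter=200):
    """Outlier-robust regression via the l1-penalized mean-shift model:
        minimize over (beta, a):  0.5 * ||y - X beta - a||^2 + lam * ||a||_1,
    where a_i is a per-observation shift (nonzero a_i flags observation i as
    an outlier). Solved by block coordinate descent; each block update is exact.
    """
    a = np.zeros_like(y)
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        # beta-step: OLS on the outlier-corrected responses.
        beta, *_ = np.linalg.lstsq(X, y - a, rcond=None)
        # a-step: soft-threshold the residuals.
        a = soft_threshold(y - X @ beta, lam)
    return beta, a

# Demo on synthetic data where a small fraction of observations is contaminated.
rng = np.random.default_rng(0)
n, p = 200, 3
X = rng.normal(size=(n, p))
beta_true = np.array([1.0, -2.0, 0.5])
y = X @ beta_true + 0.1 * rng.normal(size=n)
y[:10] += 8.0  # 5% gross outliers
beta_hat, a_hat = robust_ols_l1(X, y, lam=1.0)
```

Because the objective is jointly convex in (beta, a), the alternating scheme converges to a global minimizer; the nonzero entries of `a_hat` identify the contaminated observations, and `beta_hat` is the OLS fit on the implicitly cleaned data.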