UNIFORM ASYMPTOTICS AND CONFIDENCE REGIONS BASED ON THE ADAPTIVE LASSO WITH PARTIALLY CONSISTENT TUNING

2021, pp. 1–26
Author(s): Nicolai Amann, Ulrike Schneider

We consider the adaptive Lasso estimator with componentwise tuning in the framework of a low-dimensional linear regression model. In our setting, at least one of the components is penalized at the rate of consistent model selection and certain components may not be penalized at all. We perform a detailed study of the consistency properties and the asymptotic distribution, which includes the effects of componentwise tuning, within a so-called moving-parameter framework. These results enable us to explicitly provide a set $\mathcal{M}$ such that every open superset acts as a confidence set with uniform asymptotic coverage equal to 1, whereas removing an arbitrarily small open set along the boundary yields a confidence set with uniform asymptotic coverage equal to 0. The shape of the set $\mathcal{M}$ depends on the regressor matrix as well as on the deviations within the componentwise tuning parameters. Our findings can be viewed as a broad generalization of Pötscher and Schneider (2009, Journal of Statistical Planning and Inference 139, 2775–2790; 2010, Electronic Journal of Statistics 4, 334–360), who considered distributional properties and confidence intervals based on components of the adaptive Lasso estimator for the case of orthogonal regressors.
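To make the componentwise-tuning setup concrete, the following sketch (not the authors' code; the data and tuning values are hypothetical) fits an adaptive Lasso with a separate penalty per coefficient, using the standard rescaling trick that reduces the problem to an ordinary Lasso.

```python
# Minimal sketch of the adaptive Lasso with componentwise tuning:
# minimize ||y - X b||^2 / (2n) + sum_j lam[j] * |b_j|.
# Rescaling column j by 1/lam[j] reduces this to a plain Lasso with alpha = 1.
import numpy as np
from sklearn.linear_model import Lasso

def adaptive_lasso(X, y, lam):
    lam = np.asarray(lam, dtype=float)        # one tuning parameter per component
    fit = Lasso(alpha=1.0, fit_intercept=False).fit(X / lam, y)
    return fit.coef_ / lam                    # map back to the original parametrization

rng = np.random.default_rng(0)
n = 200
X = rng.standard_normal((n, 3))
y = X @ np.array([1.5, 0.0, 0.5]) + rng.standard_normal(n)

# Hypothetical mixed regime as in the paper: component 0 penalized heavily
# (consistent-selection rate), component 1 moderately (conservative rate),
# component 2 essentially not at all.
lam = np.array([0.5, 0.05, 1e-3])
print(adaptive_lasso(X, y, lam))
```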

2021, Vol 0 (0)
Author(s): Alexander Schmidt, Karsten Schweikert

Abstract In this paper, we propose a new approach to modeling structural change in cointegrating regressions using penalized regression techniques. First, we consider a setting with known breakpoint candidates and show that a modified adaptive lasso estimator can consistently estimate structural breaks in the intercept and slope coefficient of a cointegrating regression. Second, we extend our approach to a diverging number of breakpoint candidates and provide simulation evidence that the timing and magnitude of structural breaks are consistently estimated. Third, we use adaptive lasso estimation to design new tests for cointegration in the presence of multiple structural breaks, derive the asymptotic distribution of our test statistics, and show that the proposed tests have power against the null of no cointegration. Finally, we use our new methodology to study the effects of structural breaks on the long-run purchasing power parity (PPP) relationship.
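As a rough illustration of the first step (known breakpoint candidates), the sketch below penalizes intercept- and slope-shift dummies at each candidate date with an adaptive lasso, so that spurious breaks are shrunk to exactly zero. It is illustrative only, not the authors' estimator; the data, candidate grid, and tuning are hypothetical.

```python
# Illustrative sketch: adaptive-lasso selection of structural breaks in a
# cointegrating regression with known breakpoint candidates.
import numpy as np
from sklearn.linear_model import Lasso, LinearRegression

rng = np.random.default_rng(1)
T = 400
x = np.cumsum(rng.standard_normal(T))           # I(1) regressor
y = 1.0 + 0.5 * x + rng.standard_normal(T)
y[T // 2:] += 2.0 + 0.3 * x[T // 2:]            # true break in intercept and slope at T/2

candidates = np.arange(50, T - 50, 25)          # known breakpoint candidates
cols = [np.ones(T), x]
for tb in candidates:
    step = (np.arange(T) >= tb).astype(float)
    cols += [step, step * x]                    # intercept shift and slope shift
Z = np.column_stack(cols)

# First-stage OLS gives adaptive weights: dummies with small first-stage
# estimates receive heavy penalties and are shrunk to zero.
b0 = LinearRegression(fit_intercept=False).fit(Z, y).coef_
w = 1.0 / (np.abs(b0) + 1e-6)
w[:2] = 1e-3                                    # leave baseline coefficients essentially unpenalized
beta = Lasso(alpha=0.1, fit_intercept=False).fit(Z / w, y).coef_ / w

mask = (np.abs(beta[2::2]) > 1e-8) | (np.abs(beta[3::2]) > 1e-8)
print("selected break dates:", candidates[mask])
```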


2014, Vol 1 (2), pp. 1283–1312
Author(s): M. Abbas, A. Ilin, A. Solonen, J. Hakkarainen, E. Oja, et al.

Abstract. In this work, we consider the Bayesian optimization (BO) approach for tuning the parameters of complex chaotic systems. Such problems arise, for instance, in tuning the subgrid-scale parameterizations in weather and climate models. For such problems, the tuning procedure is generally based on a performance metric that measures how well the tuned model fits the data, and evaluating this metric is often computationally expensive. We show that BO, as a tool for finding the extrema of computationally expensive objective functions, is suitable for such tuning tasks. In the experiments, we consider tuning the parameters of two systems: a simplified atmospheric model and a low-dimensional chaotic system. We show that BO is able to tune the parameters of both systems with a small number of objective function evaluations and without requiring any gradient information.
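For intuition, a bare-bones BO loop looks like the sketch below: a Gaussian-process surrogate is refit after each evaluation, and the next point maximizes expected improvement. The one-dimensional objective is a cheap stand-in for an expensive model-fit metric; the kernel, budget, and grid are illustrative and not the paper's experimental setup.

```python
# Minimal Bayesian optimization with a GP surrogate and expected improvement.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def objective(theta):                        # stand-in for the expensive tuning metric
    return np.sin(3.0 * theta) + 0.6 * theta**2

rng = np.random.default_rng(2)
lo, hi = -2.0, 2.0
X = rng.uniform(lo, hi, size=(4, 1))         # small initial design
y = objective(X).ravel()

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), alpha=1e-6, normalize_y=True)
grid = np.linspace(lo, hi, 501).reshape(-1, 1)

for _ in range(20):                          # few evaluations, no gradients needed
    gp.fit(X, y)
    mu, sd = gp.predict(grid, return_std=True)
    best = y.min()
    z = (best - mu) / np.maximum(sd, 1e-12)
    ei = (best - mu) * norm.cdf(z) + sd * norm.pdf(z)   # expected improvement
    x_next = grid[np.argmax(ei)]
    X = np.vstack([X, x_next])
    y = np.append(y, objective(x_next[0]))

print("best theta:", X[np.argmin(y)].item(), "objective:", y.min())
```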


2015, Vol 32 (1), pp. 243–259
Author(s): Anders Bredahl Kock

We show that the adaptive Lasso is oracle efficient in stationary and nonstationary autoregressions. This means that it estimates parameters consistently, selects the correct sparsity pattern, and estimates the coefficients belonging to the relevant variables with the same asymptotic efficiency as if only these had been included in the model from the outset. In particular, this implies that it is able to discriminate between stationary and nonstationary autoregressions, and it thereby constitutes an addition to the set of unit root tests. Next, and importantly in practice, we show that choosing the tuning parameter by the Bayesian Information Criterion (BIC) results in consistent model selection. However, it is also shown that the adaptive Lasso has no power against shrinking alternatives of the form $c/T$ if it is tuned to perform consistent model selection. We show that if the adaptive Lasso is tuned to perform conservative model selection, it has power even against shrinking alternatives of this form, and we compare it to the plain Lasso.
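As a schematic of BIC-based tuning (illustrative assumptions throughout; this is not Kock's code), one can fit the adaptive Lasso along a grid of tuning parameters in a simulated autoregression and keep the value minimizing BIC:

```python
# BIC tuning of the adaptive Lasso in a simulated AR(4) with one active lag.
import numpy as np
from sklearn.linear_model import Lasso, LinearRegression

rng = np.random.default_rng(3)
T, p = 300, 4
u = rng.standard_normal(T + p)
y = np.zeros(T + p)
for t in range(p, T + p):                    # AR(4), only lag 1 is active
    y[t] = 0.6 * y[t - 1] + u[t]
Y = y[p:]
X = np.column_stack([y[p - k:-k] for k in range(1, p + 1)])   # lag matrix

b_ols = LinearRegression(fit_intercept=False).fit(X, Y).coef_
w = 1.0 / np.abs(b_ols)                      # adaptive weights from first-stage OLS

best_bic, best_coef = np.inf, None
for lam in np.geomspace(1e-4, 1.0, 50):
    coef = Lasso(alpha=lam, fit_intercept=False).fit(X / w, Y).coef_ / w
    rss = np.sum((Y - X @ coef) ** 2)
    df = np.count_nonzero(coef)              # number of selected lags
    bic = T * np.log(rss / T) + df * np.log(T)
    if bic < best_bic:
        best_bic, best_coef = bic, coef

print("BIC-selected coefficients:", best_coef)
```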


Author(s):  
Donald R. Jones ◽  
Joaquim R. R. A. Martins

Abstract Introduced in 1993, the DIRECT global optimization algorithm provided a fresh approach to minimizing a black-box function subject to lower and upper bounds on the variables. In contrast to the plethora of nature-inspired heuristics, DIRECT was deterministic and had only one hyperparameter (the desired accuracy). Moreover, the algorithm was simple, easy to implement, and usually performed well on low-dimensional problems (up to six variables). Most importantly, DIRECT balanced local and global search (exploitation vs. exploration) in a unique way: in each iteration, several points were sampled, some for global and some for local search. This approach eliminated the need for “tuning parameters” that set the balance between local and global search. However, the very same features that made DIRECT simple and conceptually attractive also created weaknesses. For example, it has commonly been observed that, while DIRECT is often quick to find the basin of the global optimum, it can be slow to fine-tune the solution to high accuracy. In this paper, we identify several such weaknesses and survey the work of various researchers to extend DIRECT so that it performs better. All of the extensions show substantial improvement over DIRECT on various test functions. An outstanding challenge is to improve performance robustly across problems of different degrees of difficulty, ranging from simple (unimodal, few variables) to very hard (multimodal, sharply peaked, many variables). Opportunities for further improvement may lie in combining the best features of the different extensions.
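For readers who want to experiment, SciPy ships an implementation of the algorithm (scipy.optimize.direct, available since SciPy 1.9). The test function, dimension, and budget below are merely illustrative, not taken from the paper:

```python
# Running DIRECT on a standard multimodal test function via SciPy.
import numpy as np
from scipy.optimize import direct

def styblinski_tang(x):                      # multimodal, global min near x_i = -2.9035
    x = np.asarray(x)
    return 0.5 * np.sum(x**4 - 16.0 * x**2 + 5.0 * x)

bounds = [(-5.0, 5.0)] * 4                   # low-dimensional box, where DIRECT shines
result = direct(styblinski_tang, bounds, maxfun=2000)
print(result.x, result.fun)
```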


2012, Vol 13 (1), pp. 71–85
Author(s): Ulrike Schneider, Martin Wagner

Abstract This paper uses the adaptive Lasso estimator to determine the variables important for economic growth. The adaptive Lasso estimator is a computationally very efficient procedure that simultaneously performs model selection and parameter estimation. The computational cost of this method is negligibly small compared with standard approaches in the growth regressions literature. We apply the method to a regional dataset for the European Union covering the 255 NUTS2 regions in the 27 member states over the period 1995–2005. The results suggest that initial GDP per capita (with an implied convergence speed of about 1.5% per annum), human capital (proxied by the shares of highly and medium-educated persons in the working-age population), structural labor market characteristics (the initial unemployment rate and the initial activity rate of the low educated), as well as being a capital region, are important for economic growth.


Author(s):  
Achim Ahrens ◽  
Christian B. Hansen ◽  
Mark E. Schaffer

In this article, we introduce lassopack, a suite of programs for regularized regression in Stata. lassopack implements the lasso, square-root lasso, elastic net, ridge regression, adaptive lasso, and post-estimation ordinary least squares. The methods are suitable for the high-dimensional setting, where the number of predictors, p, may be large and possibly greater than the number of observations, n. We offer three approaches for selecting the penalization (“tuning”) parameters: information criteria (implemented in lasso2); K-fold cross-validation and h-step-ahead rolling cross-validation for cross-section, panel, and time-series data (cvlasso); and theory-driven (“rigorous” or plugin) penalization for the lasso and square-root lasso for cross-section and panel data (rlasso). We discuss the theoretical framework and practical considerations for each approach. We also present Monte Carlo results comparing the performance of the penalization approaches.
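lassopack itself is a Stata package; purely as a language-neutral illustration of the first two tuning approaches it offers (information criteria and K-fold cross-validation), here is a rough scikit-learn analogue, not lassopack's syntax or defaults:

```python
# Illustrative analogue: lasso tuning by an information criterion vs. K-fold CV.
import numpy as np
from sklearn.linear_model import LassoCV, LassoLarsIC

rng = np.random.default_rng(4)
n, p = 200, 50                               # for p > n, LassoLarsIC needs an explicit noise-variance estimate
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:5] = [3.0, -2.0, 1.5, -1.0, 0.5]       # sparse truth
y = X @ beta + rng.standard_normal(n)

ic_fit = LassoLarsIC(criterion="bic").fit(X, y)   # information-criterion tuning
cv_fit = LassoCV(cv=5).fit(X, y)                  # K-fold cross-validation tuning
print("BIC: lambda =", ic_fit.alpha_, "| nonzero =", np.count_nonzero(ic_fit.coef_))
print("CV:  lambda =", cv_fit.alpha_, "| nonzero =", np.count_nonzero(cv_fit.coef_))
```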


2009, Vol 139 (8), pp. 2775–2790
Author(s): Benedikt M. Pötscher, Ulrike Schneider

2020, Vol 174, pp. 107608
Author(s): Jasin Machkour, Michael Muma, Bastian Alt, Abdelhak M. Zoubir
