Proximal Gradient Descent
Recently Published Documents

TOTAL DOCUMENTS: 11 (FIVE YEARS: 7)
H-INDEX: 2 (FIVE YEARS: 1)

Author(s):  
Robert Bassett ◽  
Julio Deride

We study statistical estimators computed using iterative optimization methods that are not run until completion. Classical results on maximum likelihood estimators (MLEs) assert that a one-step estimator (OSE), in which a single Newton-Raphson iteration is performed from a starting point with certain properties, is asymptotically equivalent to the MLE. We further develop these early-stopping results by deriving properties of one-step estimators defined by a single iteration of scaled proximal methods. Our main results show the asymptotic equivalence of the likelihood-based estimator and various one-step estimators defined by scaled proximal methods. By interpreting OSEs as the last of a sequence of iterates, our results provide insight into scaling numerical tolerance with sample size. Our setting contains scaled proximal gradient descent applied to certain composite models as a special case, making our results applicable to many problems of practical interest. Additionally, our results provide support for the utility of the scaled Moreau envelope as a statistical smoother by interpreting scaled proximal descent as a quasi-Newton method applied to the scaled Moreau envelope.
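For concreteness, a minimal sketch of such a one-step estimator follows: a single scaled proximal gradient iteration from a pilot estimate. The diagonal scaling h, the l1 penalty, and all names are illustrative assumptions rather than the authors' code.

```python
import numpy as np

def soft_threshold(v, t):
    """Elementwise prox of t * ||.||_1 (t may be a vector)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def one_step_estimator(theta0, grad_nll, h, lam):
    """A single scaled proximal gradient iteration from a pilot estimate.

    theta0   : sqrt(n)-consistent starting point (e.g. a cheap pilot fit)
    grad_nll : gradient of the smooth negative log-likelihood (callable)
    h        : positive diagonal of the scaling matrix H (e.g. an estimate
               of the Fisher information), playing the quasi-Newton role
    lam      : weight of an l1 penalty standing in for the nonsmooth term
    """
    v = theta0 - grad_nll(theta0) / h   # scaled gradient step
    return soft_threshold(v, lam / h)   # l1 prox in the metric diag(h)
```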


Author(s):  
Umberto Amato ◽  
Anestis Antoniadis ◽  
Italia De Feis ◽  
Irène Gijbels

Abstract
Nonparametric univariate regression via wavelets is usually implemented under the assumptions of dyadic sample size, equally spaced fixed sample points, and i.i.d. normal errors. In this work, we propose, study, and compare several wavelet-based nonparametric estimation methods designed to recover a one-dimensional regression function from data that do not necessarily satisfy these requirements. These methods use appropriate regularizations, penalizing the decomposition of the unknown regression function on a wavelet basis evaluated at the sampling design. Exploiting the sparsity of wavelet decompositions for signals belonging to homogeneous Besov spaces, we use efficient proximal gradient descent algorithms, available in the recent literature, to compute the estimates with fast computation times. Our wavelet-based procedures, in both the standard and the robust regression case, have favorable theoretical properties, thanks in large part to the separable nature of the (non-convex) regularization they are based on. We establish asymptotic global optimal rates of convergence under weak conditions; such rates are, in general, unattainable by smoothing splines or other linear nonparametric smoothers. Lastly, we present several experiments that examine the empirical performance of our procedures and compare them with other proposals available in the literature. Regression analyses of several real data applications using these procedures demonstrate their effectiveness.
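As a rough illustration of the computational core, the sketch below runs proximal gradient descent (ISTA) for a wavelet-penalized least-squares fit, alternating a gradient step with a soft-thresholding prox. It is a generic sketch, not the authors' implementation, and the convex l1 penalty stands in for the non-convex separable penalties the paper actually studies.

```python
import numpy as np

def ista(y, W, lam, n_iter=200):
    """Proximal gradient descent (ISTA) for
    min_b 0.5 * ||y - W b||^2 + lam * ||b||_1,
    where the columns of W are wavelet basis functions evaluated
    at the (possibly irregular) sampling design."""
    step = 1.0 / np.linalg.norm(W, 2) ** 2          # 1/L, L = Lipschitz const.
    b = np.zeros(W.shape[1])
    for _ in range(n_iter):
        z = b - step * (W.T @ (W @ b - y))          # gradient step (smooth part)
        b = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # prox step
    return b
```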


2020 ◽  
Vol 21 (1) ◽  
Author(s):  
Jan Klosa ◽  
Noah Simon ◽  
Pål Olof Westermark ◽  
Volkmar Liebscher ◽  
Dörte Wittenburg

Abstract
Background: Statistical analyses of biological problems in life sciences often lead to high-dimensional linear models. To solve the corresponding system of equations, penalization approaches are often the methods of choice. They are especially useful in case of multicollinearity, which appears if the number of explanatory variables exceeds the number of observations or for some biological reason. Then, the model goodness of fit is penalized by some suitable function of interest. Prominent examples are the lasso, group lasso, and sparse-group lasso. Here, we offer a fast and numerically cheap implementation of these operators via proximal gradient descent. The grid search for the penalty parameter is realized by warm starts. The step size between consecutive iterations is determined with backtracking line search. Finally, seagull, the R package presented here, produces complete regularization paths.
Results: Publicly available high-dimensional methylation data are used to compare seagull to the established R package SGL. The results of both packages enabled a precise prediction of biological age from DNA methylation status. But even though the results of seagull and SGL were very similar (R² > 0.99), seagull computed the solution in a fraction of the time needed by SGL. Additionally, seagull enables the incorporation of weights for each penalized feature.
Conclusions: The following operators for linear regression models are available in seagull: lasso, group lasso, sparse-group lasso, and Integrative LASSO with Penalty Factors (IPF-lasso). Thus, seagull is a convenient envelope of lasso variants.
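A sketch of the workflow described above, proximal gradient descent with warm starts over the penalty grid and backtracking line search, might look as follows. It is written in Python purely for illustration and is not seagull's R API.

```python
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def lasso_path(y, X, lambdas, n_iter=100):
    """Lasso regularization path via proximal gradient descent with
    warm starts over a decreasing penalty grid and backtracking line search."""
    b = np.zeros(X.shape[1])
    path = []
    for lam in sorted(lambdas, reverse=True):       # warm start: reuse last b
        for _ in range(n_iter):
            resid = X @ b - y
            grad = X.T @ resid                      # gradient of the smooth part
            f_b = 0.5 * resid @ resid
            t = 1.0
            while True:                             # backtracking line search
                b_new = soft_threshold(b - t * grad, t * lam)
                d = b_new - b
                f_new = 0.5 * np.sum((X @ b_new - y) ** 2)
                if f_new <= f_b + grad @ d + d @ d / (2 * t):
                    break                           # sufficient decrease holds
                t *= 0.5
            b = b_new
        path.append((lam, b.copy()))
    return path
```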


Sensors ◽  
2020 ◽  
Vol 20 (11) ◽  
pp. 3206
Author(s):  
Esmeide Leal ◽  
German Sanchez-Torres ◽  
John W. Branch

Point cloud denoising is fundamental for reconstructing high-quality, detailed surfaces, since the 3D scanning process introduces noise and outliers. The challenge for a denoising algorithm is to reduce noise while preserving sharp features. In this paper, we present a new model to reconstruct and smooth point clouds that combines L1-median filtering with sparse L1 regularization, both denoising the normal vectors and updating the positions of the points so that sharp features of the point cloud are preserved. The L1-median filter is more robust to outliers and noise than the mean. The L1 norm promotes sparsity in a solution, and since sharp features are sparse in a point cloud, L1 regularization recovers them, producing clean point-set surfaces with sharp features. We solve the L1 minimization problem using the proximal gradient descent algorithm. Experimental results show that our approach is comparable to state-of-the-art methods: it filters out high levels of noise from 3D models while keeping their geometric features.
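The L1-median mentioned above is the geometric median of a local point neighborhood. A minimal sketch, assuming Weiszfeld-style iterations, shows why it resists outliers: distant points receive small inverse-distance weights.

```python
import numpy as np

def l1_median(points, n_iter=50, eps=1e-8):
    """Geometric (L1) median of a local neighborhood, computed with
    Weiszfeld-style iterations. Unlike the mean, it is robust to outliers."""
    m = points.mean(axis=0)                         # initialize at the mean
    for _ in range(n_iter):
        d = np.linalg.norm(points - m, axis=1)      # distances to current estimate
        w = 1.0 / np.maximum(d, eps)                # inverse-distance weights
        m = (w[:, None] * points).sum(axis=0) / w.sum()
    return m
```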


2020 ◽  
Author(s):  
Jan Klosa ◽  
Noah Simon ◽  
Pål O. Westermark ◽  
Volkmar Liebscher ◽  
Dörte Wittenburg

Summary
Statistical analyses of biological problems in life sciences often lead to high-dimensional linear models. To solve the corresponding system of equations, penalisation approaches are often the methods of choice. They are especially useful in case of multicollinearity, which appears if the number of explanatory variables exceeds the number of observations or for some biological reason. Then, the model goodness of fit is penalised by some suitable function of interest. Prominent examples are the lasso, group lasso and sparse-group lasso. Here, we offer a fast and numerically cheap implementation of these operators via proximal gradient descent. The grid search for the penalty parameter is realised by warm starts. The step size between consecutive iterations is determined with backtracking line search. Finally, the package produces complete regularisation paths.
Availability and implementation
seagull is an R package that is freely available on the Comprehensive R Archive Network (CRAN; https://CRAN.R-project.org/package=seagull; vignette included). The source code is available on https://github.com/jklosa/.
Contact
[email protected]
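Within proximal gradient descent, each of these lasso variants reduces to evaluating its proximal operator. As a generic sketch (not seagull's internal code), the sparse-group lasso prox combines elementwise soft-thresholding with groupwise shrinkage:

```python
import numpy as np

def prox_sparse_group_lasso(v, groups, t, lam, alpha):
    """Prox of t * lam * (alpha * ||b||_1
                          + (1 - alpha) * sum_g sqrt(p_g) * ||b_g||_2):
    elementwise soft-thresholding followed by groupwise shrinkage."""
    b = np.sign(v) * np.maximum(np.abs(v) - t * lam * alpha, 0.0)
    for g in groups:                                # g: index array of one group
        norm_g = np.linalg.norm(b[g])
        thr = t * lam * (1 - alpha) * np.sqrt(len(g))
        b[g] = 0.0 if norm_g <= thr else b[g] * (1 - thr / norm_g)
    return b
```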

