A Semismooth Newton Algorithm for High-Dimensional Nonconvex Sparse Learning

2020 ◽  
Vol 31 (8) ◽  
pp. 2993-3006 ◽  
Author(s):  
Yueyong Shi ◽  
Jian Huang ◽  
Yuling Jiao ◽  
Qinglong Yang

2021 ◽ 
pp. 108432
Author(s):  
Jian Huang ◽  
Yuling Jiao ◽  
Xiliang Lu ◽  
Yueyong Shi ◽  
Qinglong Yang ◽  
...  

Algorithms ◽  
2020 ◽  
Vol 13 (11) ◽  
pp. 296
Author(s):  
Miguel del Alamo ◽  
Housen Li ◽  
Axel Munk ◽  
Frank Werner

Many modern statistically efficient methods come with tremendous computational challenges, often leading to large-scale optimisation problems. In this work, we examine such computational issues for recently developed estimation methods in nonparametric regression, with a specific view on image denoising. We consider in particular certain variational multiscale estimators which are statistically optimal in the minimax sense, yet computationally intensive. Such an estimator is computed as the minimiser of a smoothness functional (e.g., the TV norm) over the class of all estimates for which none of the coefficients with respect to a given multiscale dictionary is statistically significant. The resulting multiscale Nemirovski-Dantzig estimator (MIND) can incorporate any convex smoothness functional and combine it with a suitable dictionary, including wavelets, curvelets and shearlets. Computing MIND in general requires solving a high-dimensional constrained convex optimisation problem, with a specific structure of the constraints induced by the statistical multiscale testing criterion. To solve it explicitly, we discuss three algorithmic approaches: the Chambolle-Pock, ADMM and semismooth Newton algorithms. Algorithmic details and an explicit implementation are presented, and the solutions are compared numerically in a simulation study and on various test images. On this basis we recommend the Chambolle-Pock algorithm in most cases for its fast convergence. We stress that our analysis also transfers to signal recovery and other denoising problems, recovering more general objects whenever it is possible to borrow statistical strength from data patches of similar object structure.
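As a concrete illustration of the recommended approach, the sketch below applies the Chambolle-Pock primal-dual iteration to the classical TV-regularised (ROF) denoising model min_x TV(x) + (lam/2)·||x - f||², a simpler penalised relative of the constrained MIND problem rather than the paper's actual formulation; the step sizes, the weight lam and the iteration count are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch: Chambolle-Pock for the ROF model, NOT the MIND problem.
# The step sizes, `lam` and `n_iter` are illustrative assumptions.
import numpy as np

def grad(u):
    """Forward-difference gradient of a 2D image (Neumann boundary)."""
    gx = np.zeros_like(u)
    gy = np.zeros_like(u)
    gx[:-1, :] = u[1:, :] - u[:-1, :]
    gy[:, :-1] = u[:, 1:] - u[:, :-1]
    return gx, gy

def div(px, py):
    """Discrete divergence, the negative adjoint of `grad`."""
    dx = np.zeros_like(px)
    dy = np.zeros_like(py)
    dx[0, :] = px[0, :]
    dx[1:-1, :] = px[1:-1, :] - px[:-2, :]
    dx[-1, :] = -px[-2, :]
    dy[:, 0] = py[:, 0]
    dy[:, 1:-1] = py[:, 1:-1] - py[:, :-2]
    dy[:, -1] = -py[:, -2]
    return dx + dy

def chambolle_pock_rof(f, lam=10.0, n_iter=200):
    """TV denoising of image f via the Chambolle-Pock primal-dual scheme."""
    L2 = 8.0                              # bound on ||grad||^2 in 2D
    tau = sigma = 0.99 / np.sqrt(L2)      # so that tau * sigma * L2 < 1
    x = f.copy()
    x_bar = f.copy()
    px = np.zeros_like(f)
    py = np.zeros_like(f)
    for _ in range(n_iter):
        # Dual ascent, then projection onto the unit ball (prox of TV*).
        gx, gy = grad(x_bar)
        px += sigma * gx
        py += sigma * gy
        norm = np.maximum(1.0, np.sqrt(px**2 + py**2))
        px /= norm
        py /= norm
        # Primal descent, then prox of the quadratic data term.
        x_old = x
        x = (x + tau * div(px, py) + tau * lam * f) / (1.0 + tau * lam)
        # Over-relaxation with theta = 1.
        x_bar = 2.0 * x - x_old
    return x
```

The step sizes satisfy the usual Chambolle-Pock condition tau·sigma·||K||² < 1, using the standard bound ||grad||² ≤ 8 for the 2D forward-difference operator.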


2018 ◽  
Vol 29 (12) ◽  
pp. 6264-6275 ◽  
Author(s):  
Dejun Chu ◽  
Rui Lu ◽  
Jin Li ◽  
Xintong Yu ◽  
Changshui Zhang ◽  
...  

2020 ◽  
Vol 34 (04) ◽  
pp. 6235-6242
Author(s):  
Lingxiao Wang ◽  
Quanquan Gu

We study the problem of estimating high-dimensional models with underlying sparse structure while preserving the privacy of each training example. We develop a differentially private high-dimensional sparse learning framework based on the idea of knowledge transfer. More specifically, we propose to distill the knowledge from a “teacher” estimator trained on a private dataset by creating a new dataset from auxiliary features, and then to train a differentially private “student” estimator on this new dataset. In addition, we establish a linear convergence rate as well as a utility guarantee for the proposed method. For sparse linear regression and sparse logistic regression, our method achieves improved utility guarantees compared with the best known results (Kifer, Smith and Thakurta 2012; Wang and Gu 2019). We further demonstrate the superiority of our framework through experiments on both synthetic and real-world data.
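To make the teacher-student pipeline concrete, here is a minimal sketch under strong simplifying assumptions: the teacher and student are plain Lasso fits (not the paper's estimators), the auxiliary features are synthetic Gaussians, and the label-noise scale `sigma` is a placeholder rather than the output of a sensitivity analysis, so the snippet carries no formal (epsilon, delta)-differential-privacy guarantee as written.

```python
# Minimal sketch of teacher-student knowledge transfer for private sparse
# regression. The noise scale `sigma` is an uncalibrated placeholder: a real
# DP guarantee would require a sensitivity analysis of the teacher outputs.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)

# Synthetic private data with a sparse ground truth.
n, d, s = 500, 200, 5
beta = np.zeros(d)
beta[:s] = 1.0
X_priv = rng.standard_normal((n, d))
y_priv = X_priv @ beta + 0.1 * rng.standard_normal(n)

# Step 1: train a "teacher" estimator on the private data.
teacher = Lasso(alpha=0.05).fit(X_priv, y_priv)

# Step 2: create a new dataset by labelling auxiliary features with the
# teacher, perturbing the labels (Gaussian-mechanism style).
X_aux = rng.standard_normal((2 * n, d))   # auxiliary features, no labels
sigma = 0.5                               # illustrative noise scale
y_aux = teacher.predict(X_aux) + sigma * rng.standard_normal(2 * n)

# Step 3: train the released "student" on the new dataset only.
student = Lasso(alpha=0.05).fit(X_aux, y_aux)
print("recovered support:", np.flatnonzero(np.abs(student.coef_) > 0.1))
```

The point the snippet mirrors is structural: the released student never touches the private examples directly, only teacher predictions on auxiliary features, which is where the privacy mechanism would be applied.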

