Optimization of compound regularization parameters based on Stein's unbiased risk estimate

Author(s): Feng Xue, Hanjie Pan, Runhui Wu, Xin Liu, Jiaqi Liu

2021
Author(s): Mingli Wang, Xinwei Jiang, Junbin Gao, Tianjiang Wang, Chunlong Hu, ...

2011, Vol 128-129, pp. 500-503
Author(s):  
Tian Jie Cao

In this paper, an adaptive method for shrinking wavelet coefficients is presented. The wavelet coefficients at a scale are divided into two classes by a threshold: coefficients with smaller absolute values are transformed by a proportional relation, while those with larger absolute values at the same scale are transformed by a linear function. The threshold, together with the coefficients of the proportional relation and the linear function, is determined by minimizing Stein's unbiased risk estimate. The paper gives a method for estimating the threshold and these coefficients, and applies the adaptive shrinkage method to image denoising. Examples in the paper show that the presented method outperforms SureShrink in terms of both Stein's unbiased risk estimate and signal-to-noise ratio, while requiring almost the same computing time as SureShrink for image denoising.
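The paper's two-class shrinkage rule is not reproduced here, but the criterion it minimizes is the same SURE that underlies SureShrink. A minimal sketch of SURE-based threshold selection for plain soft thresholding (NumPy; the unit noise variance and the sparse test signal are illustrative assumptions, not the paper's setup):

```python
import numpy as np

def sure_soft_threshold(x, t, sigma=1.0):
    # Stein's unbiased risk estimate for soft thresholding of the
    # coefficients x at threshold t, with noise standard deviation sigma:
    # SURE(t) = n*sigma^2 - 2*sigma^2 * #{|x_i| <= t} + sum(min(|x_i|, t)^2)
    n = x.size
    clipped = np.minimum(np.abs(x), t)
    return n * sigma**2 - 2 * sigma**2 * np.sum(np.abs(x) <= t) + np.sum(clipped**2)

def select_threshold(x, sigma=1.0):
    # SURE is piecewise quadratic between the sorted |x_i|, so it suffices
    # to evaluate it at each |x_i| and take the minimizer.
    candidates = np.sort(np.abs(x))
    risks = [sure_soft_threshold(x, t, sigma) for t in candidates]
    return candidates[int(np.argmin(risks))]

rng = np.random.default_rng(0)
signal = np.zeros(256)
signal[:16] = 5.0                       # sparse "coefficient" vector
noisy = signal + rng.normal(size=256)   # unit-variance Gaussian noise
t = select_threshold(noisy)
denoised = np.sign(noisy) * np.maximum(np.abs(noisy) - t, 0.0)
```

In a full SureShrink-style denoiser this selection would be run per wavelet scale, which is also where the paper's two-class rule replaces the single soft-threshold map.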


2021, pp. 1-30
Author(s):  
Jaume Vives-i-Bastida

This paper derives asymptotic risk (expected loss) results for shrinkage estimators with multidimensional regularization in high-dimensional settings. We introduce a class of multidimensional shrinkage estimators (MuSEs), which includes the elastic net, and show that—as the number of parameters to estimate grows—the empirical loss converges to the oracle-optimal risk. This result holds when the regularization parameters are estimated empirically via cross-validation or Stein’s unbiased risk estimate. To help guide applied researchers in their choice of estimator, we compare the empirical Bayes risk of the lasso, ridge, and elastic net in a spike and normal setting. Of the three estimators, we find that the elastic net performs best when the data are moderately sparse and the lasso performs best when the data are highly sparse. Our analysis suggests that applied researchers who are unsure about the level of sparsity in their data might benefit from using MuSEs such as the elastic net. We exploit these insights to propose a new estimator, the cubic net, and demonstrate through simulations that it outperforms the three other estimators for any sparsity level.

