Elastic-net regularization in learning theory

2009, Vol. 25(2), pp. 201–230
Author(s): Christine De Mol, Ernesto De Vito, Lorenzo Rosasco

2016, Vol. 28(3), pp. 525–562
Author(s): Yunlong Feng, Shao-Gao Lv, Hanyuan Hang, Johan A. K. Suykens

Kernelized elastic net regularization (KENReg) is a kernelization of the well-known elastic net regularization (Zou & Hastie, 2005). The kernel in KENReg is not required to be a Mercer kernel, since it learns from a kernelized dictionary in the coefficient space. Feng, Yang, Zhao, Lv, and Suykens (2014) showed that KENReg has several desirable properties, including stability, sparseness, and generalization. In this letter, we continue our study of KENReg by conducting a refined learning theory analysis. This letter makes three main contributions. First, we present a refined error analysis of the generalization performance of KENReg. The main difficulty in analyzing the generalization error of KENReg lies in characterizing the population version of its empirical target function. We overcome this by introducing a weighted Banach space associated with the elastic net regularization. We are then able to carry out an elaborate learning theory analysis and obtain fast convergence rates under proper complexity and regularity assumptions. Second, we study the sparse recovery problem in KENReg with fixed design and show that the kernelization may improve the sparse recovery ability compared to classical elastic net regularization. Finally, we discuss the interplay among the different properties of KENReg: sparseness, stability, and generalization. We show that the stability of KENReg leads to generalization, and that bounds on its sparseness can be derived from its generalization performance. Moreover, KENReg can be simultaneously stable and sparse, which makes it attractive both theoretically and practically.
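The coefficient-space formulation described above can be sketched numerically: one minimizes a least-squares fit of the kernel dictionary plus an ℓ1 and a squared ℓ2 penalty on the coefficients, so the kernel matrix need not be positive semidefinite. The Gaussian kernel, the ISTA-style proximal-gradient solver, and all parameter values below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def gaussian_kernel(X, Z, sigma=1.0):
    """Kernel matrix K[i, j] = exp(-||x_i - z_j||^2 / (2 sigma^2))."""
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def kernel_elastic_net(K, y, lam1=0.1, lam2=0.1, n_iter=500):
    """Proximal-gradient (ISTA) sketch for the coefficient-space problem
        min_c (1/2n) ||y - K c||^2 + lam1 ||c||_1 + (lam2/2) ||c||^2.
    K is used only as a dictionary evaluated at the training points,
    so it does not have to be a Mercer (positive-definite) kernel."""
    n = len(y)
    # Step size from the Lipschitz constant of the smooth part.
    step = 1.0 / (np.linalg.norm(K, 2) ** 2 / n + lam2)
    c = np.zeros(K.shape[1])
    for _ in range(n_iter):
        grad = K.T @ (K @ c - y) / n + lam2 * c
        z = c - step * grad
        # Soft-thresholding is the proximal operator of the l1 term.
        c = np.sign(z) * np.maximum(np.abs(z) - step * lam1, 0.0)
    return c
```

A fitted model predicts via `gaussian_kernel(X_test, X_train) @ c`; the ℓ1 term zeroes out dictionary atoms, while the ℓ2 term keeps the iteration stable, mirroring the sparseness/stability interplay discussed in the abstract.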


Minerals, 2019, Vol. 9(7), 407
Author(s): Rongzhe Zhang, Tonglin Li, Shuai Zhou, Xinhui Deng

We present a joint 2D inversion approach for magnetotelluric (MT) and gravity data with elastic-net regularization and cross-gradient constraints. We describe the main features of the approach and verify the inversion results against a synthetic model. The results indicate that the best-fit solution under the L2 norm is overly smooth, while the best-fit solution under the L1 norm is too sparse. The elastic-net regularization method, a convex combination of the L2 and L1 norms, enforces both the stability needed to preserve local smoothness and the sparsity needed to preserve sharp boundaries. Cross-gradient constraints lead to models with close structural resemblance and improve the estimates of resistivity and density for the synthetic dataset. We apply the approach to field datasets from a copper mining area in northeast China. Our results show that the method yields more detail, sharper boundaries, and better depth resolution. Relative to the existing solution, the large divergent region beneath the anomalous bodies is eliminated, and fine boundaries of the anomalous bodies emerge in previously smooth regions. This method can provide important technical support for detecting deep concealed deposits.
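The trade-off described above can be illustrated on a toy linear inversion: with a convex combination of the two penalties, an `l1_ratio` of 0 reproduces the overly smooth L2 behavior, 1 the overly sparse L1 behavior, and intermediate values preserve sharp boundaries while staying stable. The forward operator, the proximal-gradient solver, and all parameter values are illustrative assumptions; this is a generic elastic-net sketch, not the authors' MT/gravity code, and it omits the cross-gradient coupling.

```python
import numpy as np

def solve_elastic_net(G, d, lam, l1_ratio, n_iter=2000):
    """Proximal-gradient sketch of the regularized inversion
        min_m 0.5 ||G m - d||^2
              + lam * (l1_ratio ||m||_1 + 0.5 (1 - l1_ratio) ||m||^2),
    i.e. a convex combination of L1 and L2 penalties on the model m."""
    lam1 = lam * l1_ratio          # sparsity-promoting weight
    lam2 = lam * (1 - l1_ratio)    # smoothness/stability weight
    step = 1.0 / (np.linalg.norm(G, 2) ** 2 + lam2)
    m = np.zeros(G.shape[1])
    for _ in range(n_iter):
        grad = G.T @ (G @ m - d) + lam2 * m
        z = m - step * grad
        # Soft-thresholding handles the non-smooth l1 term.
        m = np.sign(z) * np.maximum(np.abs(z) - step * lam1, 0.0)
    return m
```

On an underdetermined problem with a blocky true model, the combined penalty recovers the sharp block while the pure-L2 solution smears it, which is the behavior the synthetic test in the abstract reports.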

