Thresholds for the Recovery of Sparse Solutions via L1 Minimization

Author(s):
David L. Donoho
Jared Tanner

Author(s):
Michael Unser

Abstract
Regularization addresses the ill-posedness of the training problem in machine learning or of the reconstruction of a signal from a limited number of measurements. The method is applicable whenever the problem is formulated as an optimization task. The standard strategy is to augment the original cost functional with an energy term that penalizes solutions with undesirable behavior. The effect of regularization is very well understood when the penalty involves a Hilbertian norm. Another popular configuration is the use of an $\ell_1$-norm (or some variant thereof) that favors sparse solutions. In this paper, we propose a higher-level formulation of regularization within the context of Banach spaces. We present a general representer theorem that characterizes the solutions of a remarkably broad class of optimization problems. We then use our theorem to retrieve a number of known results in the literature, such as the celebrated representer theorem of machine learning for reproducing kernel Hilbert spaces (RKHS), Tikhonov regularization, representer theorems for sparsity-promoting functionals, and the recovery of spikes, as well as a few new ones.
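To make the $\ell_1$-regularized formulation concrete, the sketch below illustrates the generic problem $\min_x \tfrac{1}{2}\|Ax - y\|_2^2 + \lambda\|x\|_1$ solved with iterative soft-thresholding (ISTA). It is only an illustrative sketch of the general sparsity-promoting penalty discussed above, not the method of either paper; the sensing matrix, dimensions, and regularization weight are hypothetical toy choices.

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1: shrinks every entry toward zero.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, y, lam, n_iter=500):
    # Iterative soft-thresholding for min_x 0.5*||Ax - y||^2 + lam*||x||_1.
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)           # gradient of the smooth data-fit term
        x = soft_threshold(x - grad / L, lam / L)
    return x

# Toy example (hypothetical sizes): recover a sparse vector from few measurements.
rng = np.random.default_rng(0)
n, m, k = 100, 40, 5                        # ambient dimension, measurements, sparsity
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x_true
x_hat = ista(A, y, lam=1e-3)
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```

With an $\ell_2^2$ (Tikhonov) penalty in place of the $\ell_1$ term, the same proximal step would become a simple shrinkage by a constant factor and the recovered vector would generally not be sparse; the soft-thresholding step is what sets small coefficients exactly to zero.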


2012
Vol. 92 (12)
pp. 3075-3079
Author(s):
Yang You
Laming Chen
Yuantao Gu
Wei Feng
Hui Dai
