A mixed ℓ1 regularization approach for sparse simultaneous approximation of parameterized PDEs

2019 ◽ Vol 53 (6) ◽ pp. 2025–2045
Author(s): Nick Dexter, Hoang Tran, Clayton Webster

We present and analyze a novel sparse polynomial technique for the simultaneous approximation of parameterized partial differential equations (PDEs) with deterministic and stochastic inputs. Our approach treats the numerical solution as a jointly sparse reconstruction problem through a reformulation of the standard basis pursuit denoising problem, where the set of jointly sparse vectors is infinite. To achieve global reconstruction of sparse solutions to parameterized elliptic PDEs over both physical and parametric domains, we combine the standard measurement scheme developed for compressed sensing in the context of bounded orthonormal systems with a novel mixed-norm-based ℓ1 regularization method that exploits both energy and sparsity. In addition, we prove that, with minimal sample complexity, error estimates comparable to the best s-term and quasi-optimal approximations are achievable, while requiring only a priori bounds on the polynomial truncation error with respect to the energy norm. Finally, we perform extensive numerical experiments on several high-dimensional parameterized elliptic PDE models to demonstrate the superior recovery properties of the proposed approach.
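To make the mixed-norm idea concrete, here is a minimal sketch (not the authors' implementation) of jointly sparse recovery with an ℓ2,1 penalty, solved by proximal gradient iteration with row-wise soft thresholding. The sampling matrix A, coefficient matrix X, and weight lam are illustrative assumptions standing in for the sampled orthonormal system and polynomial coefficients of the paper.

```python
import numpy as np

def l21_prox(X, t):
    """Row-wise soft thresholding: the proximal map of t * sum_j ||X[j, :]||_2."""
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    return np.maximum(1.0 - t / np.maximum(norms, 1e-12), 0.0) * X

def joint_bpdn(A, Y, lam, n_iter=2000):
    """Minimize 0.5 * ||A X - Y||_F^2 + lam * sum_j ||X[j, :]||_2 via ISTA."""
    X = np.zeros((A.shape[1], Y.shape[1]))
    L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the gradient
    for _ in range(n_iter):
        X = l21_prox(X - A.T @ (A @ X - Y) / L, lam / L)
    return X

# Tiny synthetic test: m samples of a coefficient matrix with s nonzero rows.
rng = np.random.default_rng(0)
n, m, k, s = 200, 80, 16, 5                    # basis size, samples, columns, row sparsity
A = rng.standard_normal((m, n)) / np.sqrt(m)   # stand-in for a sampled orthonormal system
X_true = np.zeros((n, k))
X_true[rng.choice(n, s, replace=False), :] = rng.standard_normal((s, k))
X_hat = joint_bpdn(A, A @ X_true, lam=0.01)
print("relative error:", np.linalg.norm(X_hat - X_true) / np.linalg.norm(X_true))
```

The row-wise threshold is what couples the columns: a basis row is kept or discarded for all right-hand sides at once, mirroring the joint-sparsity model described in the abstract.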

2019 ◽ Vol 27 (4) ◽ pp. 575–590
Author(s): Wei Wang, Shuai Lu, Bernd Hofmann, Jin Cheng

Abstract: Measuring the error by an ℓ1-norm, we analyze under sparsity assumptions an ℓ0-regularization approach in which the penalty in the Tikhonov functional is complemented by a general stabilizing convex functional. In this context, ill-posed operator equations Ax = y are regularized, where A is an injective and bounded linear operator mapping between ℓ2 and a Banach space Y. For sparse solutions, error estimates as well as linear and sublinear convergence rates are derived based on a variational inequality approach, where the regularization parameter can be chosen either a priori in an appropriate way or a posteriori by the sequential discrepancy principle. To further illustrate the balance between the ℓ0-term and the complementing convex penalty, the important special case of the squared ℓ2-norm penalty is investigated, showing an explicit dependence between the two terms. Finally, numerical experiments verify and illustrate the sparsity-promoting properties of the corresponding regularized solutions.
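For intuition on the special case mentioned above (squared ℓ2-norm as the complementing convex penalty), the componentwise proximal map of α‖x‖0 + β‖x‖2² has a closed form, a scaled hard threshold, which can be iterated in a proximal gradient loop. The sketch below is a hedged illustration of that structure, not the paper's method: the objective is nonconvex, so only convergence to a stationary point can be expected, and all parameter values are assumptions.

```python
import numpy as np

def l0_l2_prox(z, ta, tb):
    """Componentwise prox of ta*||x||_0 + tb*||x||_2^2: keep z_i / (1 + 2*tb)
    when |z_i| > sqrt(2*ta*(1 + 2*tb)), otherwise set the component to zero."""
    x = z / (1.0 + 2.0 * tb)
    x[np.abs(z) <= np.sqrt(2.0 * ta * (1.0 + 2.0 * tb))] = 0.0
    return x

def l0_tikhonov(A, y, alpha, beta, n_iter=1000):
    """Proximal gradient for 0.5*||A x - y||_2^2 + alpha*||x||_0 + beta*||x||_2^2."""
    x = np.zeros(A.shape[1])
    L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the gradient
    for _ in range(n_iter):
        x = l0_l2_prox(x - A.T @ (A @ x - y) / L, alpha / L, beta / L)
    return x

# Illustrative run with a 3-sparse exact solution and small additive noise.
rng = np.random.default_rng(1)
A = rng.standard_normal((60, 120)) / np.sqrt(60)
x_true = np.zeros(120)
x_true[[3, 40, 77]] = [1.5, -2.0, 1.0]
y = A @ x_true + 0.01 * rng.standard_normal(60)
print("recovered support:", np.nonzero(l0_tikhonov(A, y, alpha=1e-3, beta=1e-2))[0])
```

The threshold sqrt(2α(1 + 2β)) makes the interplay of the two terms explicit: the ℓ0-term sets the basic cutoff, while the convex ℓ2² term both raises that cutoff and shrinks the surviving components.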


1972 ◽ Vol 19 (4) ◽ pp. 283–302
Author(s): David A. Field, William B. Jones

2018 ◽ Vol 26 (1) ◽ pp. 85–94
Author(s): Jens Flemming, Daniel Gerth

Abstract: We show that the convergence rate of ℓ1-regularization for linear ill-posed equations is always O(δ) if the exact solution is sparse and the considered operator is injective and weak*-to-weak continuous. Under the same assumptions, convergence rates for non-sparse solutions are proven. The results are based on the fact that certain source-type conditions used in the literature to prove convergence rates are automatically satisfied.
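A quick way to see the O(δ) rate numerically is a synthetic experiment: fix a sparse exact solution, add noise of norm δ, choose the regularization parameter a priori proportional to δ, and observe the ℓ1 error scale roughly linearly in δ. The sketch below is our illustration under those assumptions, not code from the paper; it uses plain ISTA, and the matrix and parameters are arbitrary.

```python
import numpy as np

def soft(x, t):
    """Componentwise soft thresholding, the proximal map of t*||x||_1."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def l1_tikhonov(A, y, alpha, n_iter=3000):
    """Minimize 0.5*||A x - y||_2^2 + alpha*||x||_1 by ISTA."""
    x = np.zeros(A.shape[1])
    L = np.linalg.norm(A, 2) ** 2
    for _ in range(n_iter):
        x = soft(x - A.T @ (A @ x - y) / L, alpha / L)
    return x

rng = np.random.default_rng(2)
A = rng.standard_normal((100, 200)) / np.sqrt(100)
x_dag = np.zeros(200)
x_dag[[10, 50, 150]] = [1.0, -0.5, 2.0]                     # sparse exact solution
for delta in [1e-1, 1e-2, 1e-3]:
    noise = rng.standard_normal(100)
    y = A @ x_dag + delta * noise / np.linalg.norm(noise)   # noise of norm exactly delta
    err = np.linalg.norm(l1_tikhonov(A, y, alpha=delta) - x_dag, 1)
    print(f"delta = {delta:.0e}   l1 error = {err:.2e}")    # error shrinks roughly like delta
```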


2014 ◽ Vol 64 (2) ◽ pp. 425–455
Author(s): Gonzalo Rubio, François Fraysse, David A. Kopriva, Eusebio Valero
