Non-asymptotic error estimates for the Laplace approximation in Bayesian inverse problems

Author(s): Tapio Helin, Remo Kretschmann

Abstract: In this paper we study properties of the Laplace approximation of the posterior distribution arising in nonlinear Bayesian inverse problems. Our work is motivated by Schillings et al. (Numer Math 145:915–971, 2020, doi:10.1007/s00211-020-01131-1), where it is shown that in such a setting the Laplace approximation error in Hellinger distance converges to zero at the order of the noise level. Here, we prove novel error estimates for a given noise level that also quantify the effect of the nonlinearity of the forward mapping and the dimension of the problem. In particular, we are interested in settings in which a linear forward mapping is perturbed by a small nonlinear mapping. Our results indicate that in this case the Laplace approximation error is of the size of the perturbation. The paper provides insight into Bayesian inference in nonlinear inverse problems where linearization of the forward mapping has suitable approximation properties.
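To make the object of study concrete, here is a minimal Python sketch of the Laplace approximation in the regime the abstract describes: a linear forward map perturbed by a small nonlinear term. The forward map, noise level, prior, and data below are illustrative assumptions, not the paper's setting.

```python
import numpy as np
from scipy.optimize import minimize

# Toy 1-d inverse problem (illustrative): linear forward map plus a small
# nonlinear perturbation, with Gaussian noise and a standard Gaussian prior.
rng = np.random.default_rng(0)
eps = 0.05                                   # size of the nonlinear perturbation
sigma = 0.1                                  # noise level

def forward(x):
    return x + eps * np.sin(x)               # F(x) = x + eps * f(x)

y_obs = forward(np.array([0.7])) + sigma * rng.normal(size=1)

def neg_log_post(x):                         # data misfit + prior term
    r = forward(x) - y_obs
    return 0.5 * (r @ r) / sigma**2 + 0.5 * (x @ x)

# Laplace approximation: the Gaussian centred at the MAP point whose
# covariance is the inverse Hessian of the negative log-posterior there.
x_map = minimize(neg_log_post, x0=np.zeros(1)).x
h = 1e-4                                     # finite-difference second derivative
hess = (neg_log_post(x_map + h) - 2 * neg_log_post(x_map)
        + neg_log_post(x_map - h)) / h**2
print(f"MAP = {x_map[0]:.4f}, Laplace variance = {1.0 / hess:.6f}")
```

For eps = 0 the posterior here is exactly Gaussian and the Laplace approximation is exact; the abstract's result says that for small eps the approximation error is of the size of the perturbation.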

2021, Vol. 427, pp. 110055
Author(s): Aaron Myers, Alexandre H. Thiéry, Kainan Wang, Tan Bui-Thanh

2021, Vol. 47 (2), pp. 1-34
Author(s): Umberto Villa, Noemi Petra, Omar Ghattas

We present an extensible software framework, hIPPYlib, for the solution of large-scale deterministic and Bayesian inverse problems governed by partial differential equations (PDEs) with (possibly) infinite-dimensional parameter fields (which are high-dimensional after discretization). hIPPYlib overcomes the prohibitively expensive nature of Bayesian inversion for this class of problems by implementing state-of-the-art scalable algorithms for PDE-based inverse problems that exploit the structure of the underlying operators, notably the Hessian of the log-posterior. The key property of the algorithms implemented in hIPPYlib is that the solution of the inverse problem is computed at a cost, measured in linearized forward PDE solves, that is independent of the parameter dimension. The mean of the posterior is approximated by the MAP point, which is found by minimizing the negative log-posterior with an inexact matrix-free Newton-CG method. The posterior covariance is approximated by the inverse of the Hessian of the negative log-posterior evaluated at the MAP point. The construction of the posterior covariance is made tractable by invoking a low-rank approximation of the Hessian of the log-likelihood. Scalable tools for sample generation are also discussed. hIPPYlib makes all of these advanced algorithms easily accessible to domain scientists and provides an environment that expedites the development of new algorithms.
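The low-rank posterior-covariance construction described above can be illustrated with dense linear algebra. The following is a generic NumPy/SciPy sketch, not the hIPPYlib API; the prior covariance and the Gauss-Newton Hessian of the log-likelihood below are random stand-ins for the discretized operators, with the misfit Hessian given a low rank to mimic the spectral decay the method exploits.

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)
n, r = 200, 20                        # parameter dimension, misfit-Hessian rank

# Stand-ins: an SPD prior covariance and a rank-r misfit Hessian.
A = rng.normal(size=(n, n))
Gamma_prior = A @ A.T / n + np.eye(n)
J = rng.normal(size=(r, n))           # "Jacobian" of the linearized forward map
H_misfit = J.T @ J                    # Gauss-Newton Hessian of the log-likelihood

# Generalized eigenproblem H_misfit v = lam * Gamma_prior^{-1} v; eigh returns
# eigenvectors that are Gamma_prior^{-1}-orthonormal, as the formula requires.
lam, V = eigh(H_misfit, np.linalg.inv(Gamma_prior))
lam, V = lam[::-1][:r], V[:, ::-1][:, :r]    # keep the r dominant modes

# Laplace posterior covariance via the low-rank update
#   Gamma_post = Gamma_prior - V diag(lam / (1 + lam)) V^T.
Gamma_post = Gamma_prior - (V * (lam / (1.0 + lam))) @ V.T

# Dense reference: Gamma_post = (H_misfit + Gamma_prior^{-1})^{-1}.
ref = np.linalg.inv(H_misfit + np.linalg.inv(Gamma_prior))
print("max abs deviation from dense inverse:", np.abs(Gamma_post - ref).max())
```

In hIPPYlib itself the dominant eigenpairs are obtained matrix-free from Hessian-vector products (e.g. by randomized methods), so the cost scales with the number of retained modes rather than the parameter dimension; the dense factorizations above are for illustration only.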


2017, Vol. 49 (4), pp. 1067-1090
Author(s): Nicolás García Trillos, Dejan Slepčev, James von Brecht

Abstract: We investigate the estimation of the perimeter of a set by a graph cut of a random geometric graph. For Ω ⊆ D = (0, 1)^d with d ≥ 2, we are given n random independent and identically distributed points on D whose membership in Ω is known. We consider the sample as a random geometric graph with connection distance ε > 0. We estimate the perimeter of Ω (relative to D) by the appropriately rescaled graph cut between the vertices in Ω and the vertices in D ∖ Ω. We obtain bias and variance estimates on the error which are optimal in scaling with respect to n and ε. We consider two scaling regimes: the dense regime (when the average degree of the vertices goes to ∞) and the sparse regime (when the average degree goes to 0). In the dense regime, there is a crossover in the nature of the approximation at dimension d = 5: we show that in low dimensions d = 2, 3, 4 one can obtain confidence intervals for the approximation error, while in higher dimensions one can obtain only error estimates for testing the hypothesis that the perimeter is less than a given number.
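A minimal sketch of this estimator follows, under illustrative assumptions not fixed by the abstract: d = 2, Ω the left half of the unit square (so the true relative perimeter is 1), and the indicator of the unit ball as connection kernel, for which the normalizing constant works out to (1/2) ∫_{|z|≤1} |z₁| dz = 2/3.

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(1)
n, eps, d = 50_000, 0.02, 2              # dense regime: average degree ~ n*eps^d >> 1
pts = rng.uniform(size=(n, d))           # n i.i.d. uniform points on D = (0, 1)^2
in_omega = pts[:, 0] < 0.5               # known membership in Omega

# Edges of the eps-geometric graph that cross between Omega and D \ Omega.
tree = cKDTree(pts)
pairs = tree.query_pairs(r=eps, output_type='ndarray')
cut = np.count_nonzero(in_omega[pairs[:, 0]] != in_omega[pairs[:, 1]])

# Rescaled graph cut: Per(Omega; D) ~ cut / (c * n^2 * eps^(d+1)),
# with c = 2/3 for the indicator kernel in d = 2.
c = 2.0 / 3.0
print(f"estimated perimeter = {cut / (c * n**2 * eps**(d + 1)):.3f} (true value 1.0)")
```

The 1/(n² ε^{d+1}) rescaling matches the expected number of crossing pairs, which near a flat boundary is approximately n² ε^{d+1} c Per(Ω; D), so the estimate concentrates around the true perimeter as n grows and ε shrinks.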

