Statistical guarantees for Bayesian uncertainty quantification in nonlinear inverse problems with Gaussian process priors

2021, Vol 49 (6)
Author(s): François Monard, Richard Nickl, Gabriel P. Paternain
Geophysics, 2019, Vol 84 (6), pp. M15-M24
Author(s): Dario Grana, Leandro Passos de Figueiredo, Leonardo Azevedo

Predicting rock properties in the subsurface from geophysical data generally requires the solution of a mathematical inverse problem. Because of the large size of geophysical (seismic) data sets and subsurface models, it is common to reduce the dimension of the problem by applying dimension reduction methods and reparameterizing the model and/or the data. Especially for high-dimensional nonlinear inverse problems, in which the analytical solution is not available in closed form and iterative sampling or optimization methods must be applied to approximate it, model and/or data reduction lowers the computational cost of the inversion. However, part of the information in the data or in the model can be lost by working in the reduced model and/or data space. We focus on uncertainty quantification in the solution of the inverse problem under data and/or model order reduction. We operate in a Bayesian setting for the inversion and uncertainty quantification and validate the proposed approach in the linear case, in which the posterior distribution of the model variables can be written analytically and the uncertainty of the model predictions can be assessed exactly. To quantify how the uncertainty changes in the reduced space, we compare the uncertainty in the solution with and without data and/or model reduction. We then extend the approach to nonlinear inverse problems, in which the solution is computed using an ensemble-based method. Examples of applications to linearized acoustic and nonlinear elastic inversion quantify the impact of applying reduction methods to the model and data vectors on the uncertainty of the inverse problem solutions.
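As a concrete illustration of the linear-case comparison, the following is a minimal sketch (not the authors' code) of the exact Gaussian posterior with and without a PCA-style model-order reduction; the forward operator, the prior and noise covariances, and the truncation level k are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)
n_m, n_d = 100, 60                      # model and data dimensions (assumed)

# Smooth Gaussian prior on the model: exponential covariance (assumption).
x = np.linspace(0.0, 1.0, n_m)
Sigma_m = np.exp(-np.abs(x[:, None] - x[None, :]) / 0.1)
mu_m = np.zeros(n_m)

# Generic linear forward operator and data-noise covariance (assumptions).
G = rng.standard_normal((n_d, n_m)) / np.sqrt(n_m)
Sigma_d = 0.01 * np.eye(n_d)

m_true = rng.multivariate_normal(mu_m, Sigma_m)
d_obs = G @ m_true + rng.multivariate_normal(np.zeros(n_d), Sigma_d)

def gaussian_posterior(G, Sigma_m, mu_m, Sigma_d, d):
    # Exact posterior for d = G m + e, m ~ N(mu_m, Sigma_m), e ~ N(0, Sigma_d).
    S = G @ Sigma_m @ G.T + Sigma_d
    K = Sigma_m @ G.T @ np.linalg.inv(S)            # Kalman-type gain
    mu_post = mu_m + K @ (d - G @ mu_m)
    Sigma_post = Sigma_m - K @ G @ Sigma_m
    return mu_post, Sigma_post

# Posterior in the full model space.
mu_full, Sigma_full = gaussian_posterior(G, Sigma_m, mu_m, Sigma_d, d_obs)

# Posterior in a reduced model space: keep the k leading eigenvectors of the
# prior covariance (PCA-style reparameterization m = mu_m + B z, z ~ N(0, I_k)).
k = 10
evals, evecs = np.linalg.eigh(Sigma_m)
B = evecs[:, -k:] * np.sqrt(evals[-k:])             # n_m x k reduction basis
mu_z, Sigma_z = gaussian_posterior(G @ B, np.eye(k), np.zeros(k), Sigma_d,
                                   d_obs - G @ mu_m)
mu_red = mu_m + B @ mu_z
Sigma_red = B @ Sigma_z @ B.T

# Compare pointwise posterior standard deviations; the reduced-space result
# omits prior variability outside the retained subspace.
print("mean posterior std, full model space   :", np.sqrt(np.diag(Sigma_full)).mean())
print("mean posterior std, reduced model space:", np.sqrt(np.diag(Sigma_red)).mean())

Comparing the two diagonal standard deviations is one simple way to quantify how much of the posterior uncertainty is discarded by working in the reduced space.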


Author(s): Kevin de Vries, Anna Nikishova, Benjamin Czaja, Gábor Závodszky, Alfons G. Hoekstra

Geophysics, 2019, Vol 84 (2), pp. R251-R269
Author(s): Bas Peters, Brendan R. Smithyman, Felix J. Herrmann

Nonlinear inverse problems are often hampered by local minima because of missing low frequencies and far offsets in the data, lack of access to good starting models, noise, and modeling errors. A well-known approach to counter these deficiencies is to include prior information on the unknown model, which regularizes the inverse problem. Although conventional regularization methods have led to enormous progress on ill-posed (geophysical) inverse problems, challenges remain when the prior information consists of multiple pieces. To handle this situation, we have developed an optimization framework that allows us to add multiple pieces of prior information in the form of constraints. The proposed framework is well suited to full-waveform inversion (FWI) because it guarantees that multiple constraints are imposed uniquely at each iteration, irrespective of the order in which they are invoked. To project onto the intersection of multiple sets uniquely, we use Dykstra's algorithm, which does not rely on trade-off parameters. In that sense, our approach differs substantially from approaches such as Tikhonov/penalty regularization and gradient filtering; none of these offer such guarantees, which makes them less suitable for FWI, where unrealistic intermediate results can effectively derail the inversion. By working with intersections of sets, we avoid trade-off parameters and keep objective calculations separate from the projections, which are often much faster to compute than objectives/gradients in 3D. These features allow for easy integration into existing code bases. Working with constraints also allows for heuristics in which we build up the complexity of the model by gradually relaxing the constraints. This strategy helps to avoid convergence to local minima that represent unrealistic models. Using multiple constraints, we obtain better FWI results than with a quadratic penalty method, while all constraint definitions are in physical units and follow directly from the prior knowledge.
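To make the projection step concrete, here is a minimal sketch (not the authors' implementation) of Dykstra's algorithm projecting a model update onto the intersection of two convex sets; the particular sets (box bounds and an l2 ball around a reference model) and all numerical values are illustrative assumptions.

import numpy as np

def project_bounds(m, lo, hi):
    # Projection onto the box {lo <= m <= hi}: element-wise clipping.
    return np.clip(m, lo, hi)

def project_ball(m, center, radius):
    # Projection onto the l2 ball of the given radius around `center`.
    diff = m - center
    norm = np.linalg.norm(diff)
    return m.copy() if norm <= radius else center + radius * diff / norm

def dykstra(m0, lo, hi, center, radius, n_iter=200, tol=1e-10):
    # Dykstra's scheme converges to the Euclidean projection of m0 onto the
    # intersection of the two sets; plain alternating projections would only
    # find some point of the intersection, not the closest one.
    x = m0.copy()
    p = np.zeros_like(x)        # correction term for the box constraint
    q = np.zeros_like(x)        # correction term for the ball constraint
    for _ in range(n_iter):
        y = project_bounds(x + p, lo, hi)
        p = x + p - y
        x_new = project_ball(y + q, center, radius)
        q = y + q - x_new
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Toy usage: project a hypothetical model update onto both constraint sets.
rng = np.random.default_rng(1)
m_update = rng.normal(0.0, 2.0, size=50)    # hypothetical unconstrained update
m_ref = np.zeros(50)                        # hypothetical reference model
m_proj = dykstra(m_update, lo=-1.0, hi=1.0, center=m_ref, radius=3.0)
print("max bound violation:", max(0.0, float(np.max(np.abs(m_proj)) - 1.0)))
print("ball radius excess :", max(0.0, float(np.linalg.norm(m_proj - m_ref) - 3.0)))

Because the projection onto the intersection is unique, the limit point does not depend on the order in which the individual projections are applied, which is the property the abstract appeals to; only the projection operators themselves need to be supplied, with no trade-off parameters.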

