A Unified 2D/3D Large-Scale Software Environment for Nonlinear Inverse Problems

2019 ◽  
Vol 45 (1) ◽  
pp. 1-35 ◽  
Author(s):  
Curt Da Silva ◽  
Felix Herrmann


Author(s):
E. Skakalina

Modern advances in computer technology and the ability to perform computations in parallel make it possible to solve numerical modeling problems of ever larger scale. The growth of multiprocessor and parallel computing makes optimization analysis increasingly important. Optimization analysis rests on the mass solution of inverse problems, in which the defining parameters of the considered class of problems vary over certain ranges. Thus, demand is growing not only for direct problems, where a phenomenon must be modeled from known initial data, but also for inverse problems, where one must determine for which defining parameters a given phenomenon occurs. This formulation requires solving direct problems many times, solving the optimization-analysis problem, and constructing predictive trends. In this paper, sets of multidimensional parametric data are treated as numerical solutions of the optimization problem. Predictive trends are constructed using the group method of data handling (GMDH), a branch of inductive modeling. A methodology for visualizing the computed parametric functions is implemented, and a Data Mining scheme using visualization methods in the Matlab software environment is described.
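The abstract names the group method of data handling (GMDH) as the inductive-modeling engine behind the predictive trends. Below is a minimal, hypothetical Python/NumPy sketch of a single GMDH layer with classic quadratic partial descriptions, using a held-out validation split as the external selection criterion; the paper itself works in Matlab, and all names here are illustrative rather than taken from it.

```python
import numpy as np

def partial_features(xi, xj):
    # Classic GMDH quadratic "partial description":
    # y ~ a0 + a1*xi + a2*xj + a3*xi*xj + a4*xi^2 + a5*xj^2
    return np.column_stack([np.ones_like(xi), xi, xj, xi * xj, xi**2, xj**2])

def gmdh_layer(X_tr, y_tr, X_va, y_va, keep=4):
    """Fit a quadratic model for every pair of inputs, rank candidates by
    validation error (the external criterion), and pass the `keep` best
    candidate outputs on as inputs to the next layer."""
    n = X_tr.shape[1]
    candidates = []
    for i in range(n):
        for j in range(i + 1, n):
            # Least-squares fit of the partial description on training data.
            coef, *_ = np.linalg.lstsq(
                partial_features(X_tr[:, i], X_tr[:, j]), y_tr, rcond=None)
            # External criterion: mean squared error on the validation split.
            err = np.mean(
                (partial_features(X_va[:, i], X_va[:, j]) @ coef - y_va) ** 2)
            candidates.append((err, i, j, coef))
    candidates.sort(key=lambda c: c[0])
    best = candidates[:keep]
    out_tr = np.column_stack(
        [partial_features(X_tr[:, i], X_tr[:, j]) @ c for _, i, j, c in best])
    out_va = np.column_stack(
        [partial_features(X_va[:, i], X_va[:, j]) @ c for _, i, j, c in best])
    return out_tr, out_va, best[0][0]
```

Layers would be stacked by feeding `out_tr`/`out_va` back in as the new inputs, stopping once the best validation error no longer improves, which is the usual GMDH stopping rule.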


2021 ◽  
pp. 104790
Author(s):  
Ettore Biondi ◽  
Guillaume Barnier ◽  
Robert G. Clapp ◽  
Francesco Picetti ◽  
Stuart Farris

2021 ◽  
Vol 47 (2) ◽  
pp. 1-34
Author(s):  
Umberto Villa ◽  
Noemi Petra ◽  
Omar Ghattas

We present an extensible software framework, hIPPYlib, for the solution of large-scale deterministic and Bayesian inverse problems governed by partial differential equations (PDEs) with (possibly) infinite-dimensional parameter fields (which are high-dimensional after discretization). hIPPYlib overcomes the prohibitively expensive nature of Bayesian inversion for this class of problems by implementing state-of-the-art scalable algorithms for PDE-based inverse problems that exploit the structure of the underlying operators, notably the Hessian of the log-posterior. The key property of the algorithms implemented in hIPPYlib is that the solution of the inverse problem is computed at a cost, measured in linearized forward PDE solves, that is independent of the parameter dimension. The mean of the posterior is approximated by the MAP point, which is found by minimizing the negative log-posterior with an inexact matrix-free Newton-CG method. The posterior covariance is approximated by the inverse of the Hessian of the negative log-posterior evaluated at the MAP point. The construction of the posterior covariance is made tractable by invoking a low-rank approximation of the Hessian of the log-likelihood. Scalable tools for sample generation are also discussed. hIPPYlib makes all of these advanced algorithms easily accessible to domain scientists and provides an environment that expedites the development of new algorithms.
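To make the low-rank posterior-covariance construction concrete, here is a dense NumPy toy sketch of the underlying linear algebra, assuming the data-misfit Hessian is available as a matrix-vector product. This is not hIPPYlib's API (which is matrix-free and PDE-based); all names are illustrative.

```python
import numpy as np

def low_rank_laplace_covariance(H_misfit_mv, Gamma_prior, k):
    """Toy low-rank Laplace approximation of the posterior covariance.

    H_misfit_mv : callable returning H_misfit @ v, the (Gauss-Newton)
                  Hessian of the negative log-likelihood at the MAP point
    Gamma_prior : dense prior covariance (small problems only; hIPPYlib
                  keeps all of this matrix-free)
    k           : number of retained eigenpairs

    Uses the identity
        Gamma_post = (H_misfit + Gamma_prior^{-1})^{-1}
                   ~ Gamma_prior - V diag(lam / (1 + lam)) V^T,
    where (lam, V) are the dominant eigenpairs of the prior-preconditioned
    data-misfit Hessian L^T H_misfit L, with Gamma_prior = L L^T.
    """
    n = Gamma_prior.shape[0]
    L = np.linalg.cholesky(Gamma_prior)
    # Apply the Hessian to each column of L (explicit here for readability).
    HL = np.column_stack([H_misfit_mv(L[:, i]) for i in range(n)])
    lam, W = np.linalg.eigh(L.T @ HL)        # eigenpairs of L^T H L
    idx = np.argsort(lam)[::-1][:k]          # keep the k largest
    lam, W = lam[idx], W[:, idx]
    V = L @ W                                # map back to parameter space
    return Gamma_prior - (V * (lam / (1.0 + lam))) @ V.T
```

The low-rank correction is what keeps the cost independent of the parameter dimension: only about k Hessian applications are needed in practice, obtained with randomized, matrix-free eigensolvers rather than the explicit n-column construction used in this toy.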


Geophysics ◽  
2019 ◽  
Vol 84 (2) ◽  
pp. R251-R269 ◽  
Author(s):  
Bas Peters ◽  
Brendan R. Smithyman ◽  
Felix J. Herrmann

Nonlinear inverse problems are often hampered by local minima because of missing low frequencies and far offsets in the data, lack of access to good starting models, noise, and modeling errors. A well-known approach to counter these deficiencies is to include prior information on the unknown model, which regularizes the inverse problem. Although conventional regularization methods have resulted in enormous progress in ill-posed (geophysical) inverse problems, challenges remain when the prior information consists of multiple pieces. To handle this situation, we have developed an optimization framework that allows us to add multiple pieces of prior information in the form of constraints. The proposed framework is more suitable for full-waveform inversion (FWI) because it guarantees that multiple constraints are imposed uniquely at each iteration, irrespective of the order in which they are invoked. To project uniquely onto the intersection of multiple sets, we use Dykstra's algorithm, which does not rely on trade-off parameters. In that sense, our approach differs substantially from approaches such as Tikhonov/penalty regularization and gradient filtering; none of these offer such guarantees, which makes them less suitable for FWI, where unrealistic intermediate results effectively derail the inversion. By working with intersections of sets, we avoid trade-off parameters and keep objective calculations separate from projections, which are often much faster to compute than objectives/gradients in 3D. These features allow for easy integration into existing code bases. Working with constraints also allows for heuristics in which we build up the complexity of the model by gradually relaxing the constraints. This strategy helps avoid convergence to local minima that represent unrealistic models. Using multiple constraints, we obtain better FWI results than with a quadratic penalty method, while all constraint definitions are in physical units and follow directly from the prior knowledge.
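Since the key computational ingredient here is Dykstra's algorithm for projecting onto an intersection of constraint sets, below is a minimal Python sketch of that algorithm. The box and l2-ball constraints in the example are illustrative stand-ins, not the paper's actual constraint sets.

```python
import numpy as np

def dykstra(x0, projections, n_iter=200, tol=1e-8):
    """Project x0 onto the intersection of convex sets via Dykstra's algorithm.

    projections : list of callables, each the Euclidean projection onto one
    set. Unlike plain alternating projections, the correction terms p[i]
    make the iterates converge to the projection of x0 itself, independent
    of the order in which the sets are visited; this is the uniqueness
    property the abstract refers to.
    """
    x = x0.copy()
    p = [np.zeros_like(x0) for _ in projections]  # one correction per set
    for _ in range(n_iter):
        x_prev = x.copy()
        for i, proj in enumerate(projections):
            y = proj(x + p[i])
            p[i] = x + p[i] - y  # update the correction for set i
            x = y
        if np.linalg.norm(x - x_prev) < tol:
            break
    return x

# Illustrative constraints: bounds on a velocity model (physical units, km/s)
# and an l2-ball around a reference model.
proj_box = lambda m: np.clip(m, 1.5, 4.5)
m_ref, radius = np.full(100, 3.0), 5.0

def proj_ball(m):
    d = m - m_ref
    nrm = np.linalg.norm(d)
    return m if nrm <= radius else m_ref + radius * d / nrm

m0 = np.random.default_rng(0).normal(3.0, 1.0, 100)
m_feasible = dykstra(m0, [proj_box, proj_ball])
```

In a constrained-FWI loop this projection would be applied to each model update, keeping the projections separate from the objective/gradient computation as described above.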

