Effects of model discretization on the statistics of model parameters in nonlinear inverse problems

2007 ◽  
Vol 121 (5) ◽  
pp. 3125-3125
Author(s):  
Andrew A. Ganse ◽  
Robert I. Odom

Geophysics ◽  
2011 ◽  
Vol 76 (2) ◽  
pp. E45-E58 ◽  
Author(s):  
Mohammad S. Shahraeeni ◽  
Andrew Curtis

We have developed an extension of the mixture-density neural network as a computationally efficient probabilistic method to solve nonlinear inverse problems. In this method, any postinversion (a posteriori) joint probability density function (PDF) over the model parameters is represented by a weighted sum of multivariate Gaussian PDFs. A mixture-density neural network estimates the weights, mean vector, and covariance matrix of the Gaussians given any measured data set. In one study, we have jointly inverted compressional- and shear-wave velocity for the joint PDF of porosity, clay content, and water saturation in a synthetic, fluid-saturated, dispersed sand-shale system. Results show that if the method is applied appropriately, the joint PDF estimated by the neural network is comparable to the Monte Carlo sampled a posteriori solution of the inverse problem. However, the computational cost of training and using the neural network is much lower than inversion by sampling (more than a factor of 10⁴ in this case and potentially a much larger factor for 3D seismic inversion). To analyze the performance of the method on real exploration geophysical data, we have jointly inverted P-wave impedance and Poisson’s ratio logs for the joint PDF of porosity and clay content. Results show that the posterior model PDF of porosity and clay content is a good estimate of actual porosity and clay-content log values. Although the results may vary from one field to another, this fast, probabilistic method of solving nonlinear inverse problems can be applied to invert well logs and large seismic data sets for petrophysical parameters in any field.
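As a rough illustration of the representation described above, the sketch below evaluates a posterior PDF written as a weighted sum of multivariate Gaussians over (porosity, clay content, water saturation). The weights, means, and covariances are placeholder values standing in for what a trained mixture-density network would predict from measured data; they are not values from the study.

```python
# Minimal sketch: evaluating a posterior PDF represented as a weighted sum of
# multivariate Gaussians, as a mixture-density network would output for one
# measured data point. All numbers below are illustrative placeholders; in
# practice the trained network predicts them from the measured (Vp, Vs) data.
import numpy as np
from scipy.stats import multivariate_normal

def mixture_pdf(m, weights, means, covs):
    """Evaluate sum_k w_k * N(m; mu_k, Sigma_k) at model vector m."""
    return sum(w * multivariate_normal.pdf(m, mean=mu, cov=cov)
               for w, mu, cov in zip(weights, means, covs))

# Two-component example over (porosity, clay content, water saturation).
weights = np.array([0.7, 0.3])                 # mixture weights, sum to 1
means = [np.array([0.20, 0.15, 0.80]),         # component mean vectors
         np.array([0.25, 0.30, 0.60])]
covs = [np.diag([0.02, 0.03, 0.05]) ** 2,      # diagonal covariance matrices
        np.diag([0.03, 0.04, 0.06]) ** 2]

m_query = np.array([0.22, 0.20, 0.75])
print("posterior density at m_query:", mixture_pdf(m_query, weights, means, covs))
```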


Geophysics ◽  
2019 ◽  
Vol 84 (2) ◽  
pp. R251-R269 ◽  
Author(s):  
Bas Peters ◽  
Brendan R. Smithyman ◽  
Felix J. Herrmann

Nonlinear inverse problems are often hampered by local minima because of missing low frequencies and far offsets in the data, lack of access to good starting models, noise, and modeling errors. A well-known approach to counter these deficiencies is to include prior information on the unknown model, which regularizes the inverse problem. Although conventional regularization methods have resulted in enormous progress in ill-posed (geophysical) inverse problems, challenges remain when the prior information consists of multiple pieces. To handle this situation, we have developed an optimization framework that allows us to add multiple pieces of prior information in the form of constraints. The proposed framework is more suitable for full-waveform inversion (FWI) because it offers assurances that multiple constraints are imposed uniquely at each iteration, irrespective of the order in which they are invoked. To project onto the intersection of multiple sets uniquely, we use Dykstra’s algorithm, which does not rely on trade-off parameters. In that sense, our approach differs substantially from approaches such as Tikhonov/penalty regularization and gradient filtering; none of these offer such assurances, which makes them less suitable for FWI, where unrealistic intermediate results can effectively derail the inversion. By working with intersections of sets, we avoid trade-off parameters and keep objective calculations separate from projections, which are often much faster to compute than objectives/gradients in 3D. These features allow for easy integration into existing code bases. Working with constraints also allows for heuristics in which we build up the complexity of the model by gradually relaxing the constraints. This strategy helps to avoid convergence to local minima that represent unrealistic models. Using multiple constraints, we obtain better FWI results than with a quadratic penalty method, while all constraint definitions are in terms of physical units and follow directly from the prior knowledge.
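To make the projection step concrete, here is a minimal sketch of Dykstra’s algorithm applied to two illustrative convex sets: bound (box) constraints on the parameter values and a Euclidean ball around a reference model. The specific sets, values, and iteration count are assumptions for demonstration only, not the constraint definitions used in the paper.

```python
# Minimal sketch of Dykstra's algorithm: project a model onto the intersection
# of several convex constraint sets without any trade-off parameters.
import numpy as np

def project_box(x, lo, hi):
    """Projection onto bound constraints lo <= x <= hi."""
    return np.clip(x, lo, hi)

def project_ball(x, center, radius):
    """Projection onto a Euclidean ball around a reference model."""
    d = x - center
    n = np.linalg.norm(d)
    return x if n <= radius else center + radius * d / n

def dykstra(x0, projections, n_iter=100):
    """Project x0 onto the intersection of the sets defined by `projections`."""
    x = x0.copy()
    corrections = [np.zeros_like(x0) for _ in projections]
    for _ in range(n_iter):
        for k, proj in enumerate(projections):
            y = proj(x + corrections[k])
            corrections[k] = x + corrections[k] - y
            x = y
    return x

# Illustrative example: velocities bounded to [1500, 4500] m/s and required to
# stay within 300 m/s (in the Euclidean sense) of a smooth reference model.
m = np.array([1400.0, 2500.0, 5200.0])
m_ref = np.array([1600.0, 2400.0, 4300.0])
projs = [lambda x: project_box(x, 1500.0, 4500.0),
         lambda x: project_ball(x, m_ref, 300.0)]
print(dykstra(m, projs))
```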


2012 ◽  
Vol 58 (210) ◽  
pp. 795-808 ◽  
Author(s):  
Marijke Habermann ◽  
David Maxwell ◽  
Martin Truffer

Inverse problems are used to estimate model parameters from observations. Many inverse problems are ill-posed because they lack stability: it is not possible to find solutions that are stable with respect to small changes in the input data. Regularization techniques are necessary to stabilize the problem. For nonlinear inverse problems, iterative inverse methods can themselves serve as a regularization method. These methods start with an initial estimate of the model parameters, update the parameters to match the observations in an iterative process that adjusts large-scale spatial features first, and use a stopping criterion to prevent overfitting of the data. This criterion determines the smoothness of the solution and thus the degree of regularization. Here, iterative inverse methods are implemented for the specific problem of reconstructing the basal stickiness of an ice sheet, using the shallow-shelf approximation as a forward model and synthetically derived surface velocities as input data. The incomplete Gauss-Newton (IGN) method is introduced and compared to the commonly used steepest-descent and nonlinear conjugate-gradient methods. Two different stopping criteria, the discrepancy principle and a recent-improvement threshold, are compared. The IGN method is favored because it converges rapidly and incorporates the discrepancy principle, which leads to optimally resolved solutions.
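As a schematic of how a stopping criterion such as the discrepancy principle regularizes an iterative inversion, the sketch below runs steepest descent on a toy linear least-squares problem and stops once the data misfit reaches the assumed noise level. The forward operator, noise level, and safety factor are illustrative assumptions, not the shallow-shelf forward model or the setup of the paper.

```python
# Minimal sketch: steepest-descent inversion stopped by the discrepancy
# principle, i.e. iterate only until the data misfit drops to the assumed
# noise level so the data are not over-fitted.
import numpy as np

rng = np.random.default_rng(0)
G = rng.normal(size=(50, 20))          # toy linear forward operator, d = G m
m_true = rng.normal(size=20)
sigma = 0.1                            # assumed standard deviation of data noise
d_obs = G @ m_true + sigma * rng.normal(size=50)

m = np.zeros(20)                       # initial model estimate
tau = 1.1                              # safety factor in the discrepancy principle
target = tau * sigma * np.sqrt(len(d_obs))

for it in range(10_000):
    residual = G @ m - d_obs
    misfit = np.linalg.norm(residual)
    if misfit <= target:               # discrepancy principle: stop here
        break
    gradient = G.T @ residual
    # Exact line search for the quadratic misfit along the steepest-descent direction.
    step = (gradient @ gradient) / np.linalg.norm(G @ gradient) ** 2
    m -= step * gradient

print(f"stopped after {it} iterations, misfit = {misfit:.3f}, target = {target:.3f}")
```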

