A two-step inversion approach for seismic-reservoir characterization and a comparison with a single-loop Markov-chain Monte Carlo algorithm

Geophysics ◽  
2018 ◽  
Vol 83 (3) ◽  
pp. R227-R244 ◽  
Author(s):  
Mattia Aleardi ◽  
Fabio Ciabarri ◽  
Timur Gukov

We have evaluated a two-step Bayesian algorithm for seismic-reservoir characterization, which, thanks to some simplifying assumptions, is computationally very efficient. The applicability and reliability of this method are assessed by comparison with a more sophisticated and computer-intensive Markov-chain Monte Carlo (MCMC) algorithm, which in a single loop directly estimates petrophysical properties and lithofluid facies from prestack data. The two-step method first combines a linear rock-physics model (RPM) with the analytical solution of a linearized amplitude-versus-angle (AVA) inversion to directly estimate the petrophysical properties, and related uncertainties, from prestack data under the assumptions of a Gaussian prior model and weak elastic contrasts at the reflecting interface. In particular, we use an empirical, linear RPM, properly calibrated for the investigated area, to reparameterize the linear time-continuous P-wave reflectivity equation in terms of petrophysical contrasts instead of elastic constants. In the second step, a downward 1D Markov-chain prior model is used to infer the lithofluid classes from the outcomes of the first step. The single-loop (SL) MCMC algorithm uses convolutional forward modeling based on the exact Zoeppritz equations, and it adopts a nonlinear RPM. Moreover, it assumes a more realistic Gaussian mixture distribution for the petrophysical properties. Both approaches are applied to an onshore 3D seismic data set for the characterization of a gas-bearing, clastic reservoir. Notwithstanding the differences in the forward-model parameterization, in the considered RPM, and in the assumed a priori probability density functions, the two methods yield maximum a posteriori solutions that are consistent with well-log data, although the Gaussian mixture assumption adopted by the SL method slightly improves the description of the multimodal behavior of the petrophysical parameters. However, in the considered reservoir, the main difference between the two approaches remains their very different computational cost, the SL method being much more computationally intensive than the two-step approach.
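
The efficiency of the first step comes from the fact that, with a linear forward operator (linearized reflectivity combined with a linear RPM) and Gaussian prior and noise models, the posterior on the petrophysical contrasts is itself Gaussian and available in closed form, so no sampling is needed. The sketch below illustrates that linear-Gaussian update; the operator, sizes, and covariances are illustrative assumptions, not the authors' calibrated model.

```python
import numpy as np

# Minimal sketch (not the authors' code) of the linear-Gaussian step:
# a linear operator G maps petrophysical contrasts m to AVA data d, and
# with Gaussian prior and noise the posterior mean/covariance are analytic.
# G would combine a linearized reflectivity equation with a linear,
# empirically calibrated RPM; here it is a random stand-in.

rng = np.random.default_rng(0)
n_m, n_d = 30, 60                      # hypothetical model / data sizes
G = rng.normal(size=(n_d, n_m))        # stand-in linear forward operator
mu_prior = np.zeros(n_m)               # Gaussian prior mean
C_prior = np.eye(n_m)                  # Gaussian prior covariance
C_noise = 0.1 * np.eye(n_d)            # Gaussian data-noise covariance
d_obs = G @ rng.normal(size=n_m) + rng.multivariate_normal(np.zeros(n_d), C_noise)

# Closed-form Gaussian posterior:
#   C_post = (G^T C_n^-1 G + C_p^-1)^-1
#   mu_post = C_post (G^T C_n^-1 d + C_p^-1 mu_prior)
C_post = np.linalg.inv(G.T @ np.linalg.solve(C_noise, G) + np.linalg.inv(C_prior))
mu_post = C_post @ (G.T @ np.linalg.solve(C_noise, d_obs)
                    + np.linalg.solve(C_prior, mu_prior))

print("posterior mean of first contrast:", mu_post[0])
print("posterior std  of first contrast:", np.sqrt(C_post[0, 0]))
```

The second step would then combine such pointwise Gaussian outcomes with a downward 1D Markov-chain prior to infer the lithofluid classes.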

SPE Journal ◽  
2019 ◽  
Vol 25 (01) ◽  
pp. 001-036 ◽  
Author(s):  
Xin Li ◽  
Albert C. Reynolds

Summary
Generating an estimate of uncertainty in production forecasts has become nearly standard in the oil industry, but it is often performed with procedures that yield at best a highly approximate uncertainty quantification. Formally, the uncertainty quantification of a production forecast can be achieved by generating a correct characterization of the posterior probability-density function (PDF) of reservoir-model parameters conditional to dynamic data and then sampling this PDF correctly. Although Markov-chain Monte Carlo (MCMC) provides a theoretically rigorous method for sampling any target PDF that is known up to a normalizing constant, in reservoir-engineering applications, researchers have found that it might require extraordinarily long chains containing millions to hundreds of millions of states to obtain a correct characterization of the target PDF. When the target PDF has a single mode or has multiple modes concentrated in a small region, it might be possible to implement a random-walk proposal distribution so that the resulting MCMC algorithm derived from the Metropolis-Hastings acceptance probability can yield a good characterization of the posterior PDF with a computationally feasible chain length. However, for a high-dimensional multimodal PDF with modes separated by large regions of low or zero probability, characterizing the PDF with random-walk MCMC is not computationally feasible. Although methods such as population MCMC exist for characterizing a multimodal PDF, their computational cost generally makes these algorithms far too expensive for field application. In this paper, we design a new proposal distribution using a Gaussian mixture PDF for use in MCMC where the posterior PDF can be multimodal with the modes spread far apart. Simply put, the method generates modes using a gradient-based optimization method and constructs a Gaussian mixture model (GMM) to use as the basic proposal distribution. Tests on three simple problems are presented to establish the validity of the method. The performance of the new MCMC algorithm is compared with that of random-walk MCMC and with that of population MCMC for a target PDF that is multimodal.
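
To make the proposal mechanism concrete, the sketch below runs Metropolis-Hastings with a Gaussian-mixture independence proposal on a toy bimodal target. The mixture weights, means, and covariances stand in for what a gradient-based optimizer would supply; none of the numbers come from the paper.

```python
import numpy as np
from scipy.stats import multivariate_normal

# Hedged sketch of Metropolis-Hastings with a GMM independence proposal.
rng = np.random.default_rng(1)

def log_target(x):
    # hypothetical bimodal target: mixture of two well-separated Gaussians
    return np.logaddexp(multivariate_normal.logpdf(x, mean=[-4.0, -4.0]),
                        multivariate_normal.logpdf(x, mean=[4.0, 4.0]))

# GMM proposal q(x): weights, means, covariances (stand-ins for optimizer output)
weights = np.array([0.5, 0.5])
means = np.array([[-4.0, -4.0], [4.0, 4.0]])
covs = np.array([np.eye(2), np.eye(2)])

def sample_proposal():
    k = rng.choice(len(weights), p=weights)
    return rng.multivariate_normal(means[k], covs[k])

def log_proposal(x):
    comps = [np.log(w) + multivariate_normal.logpdf(x, m, c)
             for w, m, c in zip(weights, means, covs)]
    return np.logaddexp.reduce(comps)

x = sample_proposal()
chain = [x]
for _ in range(5000):
    y = sample_proposal()                              # independence proposal
    log_alpha = (log_target(y) - log_target(x)
                 + log_proposal(x) - log_proposal(y))  # MH acceptance ratio
    if np.log(rng.uniform()) < log_alpha:
        x = y
    chain.append(x)

chain = np.array(chain)
print("fraction of samples near each mode:",
      np.mean(chain[:, 0] < 0), np.mean(chain[:, 0] > 0))
```

Because the proposal already covers both modes, the chain can jump between them freely, which a local random-walk proposal cannot do when the modes are far apart.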


Author(s):  
N. Thompson Hobbs ◽  
Mevin B. Hooten

This chapter explains how to implement Bayesian analyses using the Markov chain Monte Carlo (MCMC) algorithm, a set of methods for Bayesian analysis made popular by the seminal paper of Gelfand and Smith (1990). It begins with an explanation of MCMC with a heuristic, high-level treatment of the algorithm, describing its operation in simple terms with a minimum of formalism. In this first part, the chapter explains the algorithm so that all readers can gain an intuitive understanding of how to find the posterior distribution by sampling from it. Next, the chapter offers a somewhat more formal treatment of how MCMC is implemented mathematically. Finally, this chapter discusses implementation of Bayesian models via two routes—by using software and by writing one's own algorithm.
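
As a concrete example of the "write one's own algorithm" route, the following minimal random-walk Metropolis sampler targets the posterior of a normal mean with a normal prior, a case where the exact answer is available for checking. The data, prior, and tuning values are illustrative assumptions, not taken from the chapter.

```python
import numpy as np

# Minimal random-walk Metropolis sampler: find the posterior by sampling it.
rng = np.random.default_rng(2)
y = rng.normal(loc=3.0, scale=1.0, size=50)   # simulated data, sigma known = 1

def log_posterior(mu):
    log_prior = -0.5 * mu**2 / 10.0           # N(0, 10) prior on mu
    log_like = -0.5 * np.sum((y - mu) ** 2)   # N(mu, 1) likelihood
    return log_prior + log_like

mu, samples = 0.0, []
for _ in range(20000):
    prop = mu + rng.normal(scale=0.3)         # random-walk proposal
    if np.log(rng.uniform()) < log_posterior(prop) - log_posterior(mu):
        mu = prop
    samples.append(mu)

print("posterior mean  (MCMC):", np.mean(samples[5000:]))   # discard burn-in
print("posterior mean (exact):", np.sum(y) / (len(y) + 1/10.0))
```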


Author(s):  
Yasushi Ota ◽  
Yu Jiang

This paper investigates the inverse option problem (IOP) in the extended Black-Scholes model arising in financial markets. We identify the volatility and the drift coefficient from measured data in financial markets using a Bayesian inference approach, which is presented as a solution of the IOP. The posterior probability density function of the parameters is computed from the measured data. The statistics of the unknown parameters are estimated by a Markov chain Monte Carlo (MCMC) algorithm, which explores the posterior state space. The efficient sampling strategy of the MCMC algorithm enables us to solve the inverse problem by the Bayesian inference technique. Our numerical results indicate that the Bayesian inference approach can simultaneously estimate the unknown drift and volatility coefficients from the measured data.
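
The sketch below illustrates only the sampling idea, not the paper's forward model: it runs random-walk Metropolis over a drift and a volatility parameter using simulated log-returns of the underlying as the "measured data". This geometric-Brownian-motion likelihood is a deliberately simplified stand-in for the extended Black-Scholes option-pricing model, and all priors and values are assumptions.

```python
import numpy as np

# Hedged sketch: Bayesian estimation of drift and volatility by MCMC,
# with a simplified GBM log-return likelihood standing in for the paper's
# extended Black-Scholes forward model.
rng = np.random.default_rng(3)
dt = 1.0 / 252.0
true_mu, true_sigma = 0.05, 0.2
returns = rng.normal((true_mu - 0.5 * true_sigma**2) * dt,
                     true_sigma * np.sqrt(dt), size=500)

def log_posterior(theta):
    mu, sigma = theta
    if sigma <= 0:
        return -np.inf                       # enforce positive volatility
    mean = (mu - 0.5 * sigma**2) * dt
    var = sigma**2 * dt
    log_like = np.sum(-0.5 * np.log(2 * np.pi * var)
                      - 0.5 * (returns - mean) ** 2 / var)
    log_prior = -0.5 * (mu**2 / 1.0 + (sigma - 0.3) ** 2 / 0.25)  # weak priors
    return log_like + log_prior

theta = np.array([0.0, 0.3])
samples = []
for _ in range(30000):
    prop = theta + rng.normal(scale=[0.05, 0.01], size=2)   # random-walk step
    if np.log(rng.uniform()) < log_posterior(prop) - log_posterior(theta):
        theta = prop
    samples.append(theta)

samples = np.array(samples[10000:])          # discard burn-in
print("posterior mean drift, volatility:", samples.mean(axis=0))
```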


Geophysics ◽  
2019 ◽  
Vol 84 (6) ◽  
pp. R1003-R1020 ◽  
Author(s):  
Georgia K. Stuart ◽  
Susan E. Minkoff ◽  
Felipe Pereira

Bayesian methods for full-waveform inversion allow quantification of uncertainty in the solution, including determination of interval estimates and posterior distributions of the model unknowns. Markov chain Monte Carlo (MCMC) methods produce posterior distributions subject to fewer assumptions, such as normality, than deterministic Bayesian methods. However, MCMC is a computationally expensive process that requires repeated solution of the wave equation for different velocity samples, and ultimately a large proportion of these samples (often 40%–90%) is rejected. We have evaluated a two-stage MCMC algorithm that uses a coarse-grid filter to quickly reject unacceptable velocity proposals, thereby reducing the computational expense of solving the velocity inversion problem and quantifying uncertainty. Our filter stage uses operator upscaling, which provides near-perfect speedup in parallel with essentially no communication between processes and produces data that are highly correlated with those obtained from the full fine-grid solution. Four numerical experiments demonstrate the efficiency and accuracy of the method. The two-stage MCMC algorithm produces the same results (i.e., posterior distributions and uncertainty information, such as medians and highest posterior density intervals) as Metropolis-Hastings MCMC. Thus, no information needed for uncertainty quantification is compromised when replacing the one-stage MCMC with the more computationally efficient two-stage MCMC. In the four representative experiments, the two-stage method reduces the time spent on rejected models by one-third to one-half, which is important because most of the models tried during the course of the MCMC algorithm are rejected. Furthermore, the two-stage MCMC algorithm reduces the overall time per trial by as much as 40%, while increasing the acceptance rate from 9% to 90%.
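
The two-stage idea can be summarized as delayed acceptance: a cheap coarse-model screen decides whether the expensive fine-model solve is run at all, and a second acceptance step corrects for the approximation so the fine-scale posterior is preserved. The sketch below uses toy likelihoods in place of the fine-grid and upscaled wave-equation solves; everything in it is an illustrative assumption, not the authors' code.

```python
import numpy as np

# Hedged sketch of one two-stage (delayed-acceptance) MCMC step.
rng = np.random.default_rng(4)

def log_like_fine(v):     # stand-in for the expensive fine-grid misfit
    return -0.5 * np.sum((v - 2.5) ** 2) / 0.01

def log_like_coarse(v):   # stand-in for the cheap upscaled (coarse) misfit
    return -0.5 * np.sum((v - 2.5) ** 2) / 0.02

v = np.full(10, 2.0)      # current velocity model (toy, 10 parameters)
lf, lc = log_like_fine(v), log_like_coarse(v)
accepted = 0
for _ in range(2000):
    prop = v + rng.normal(scale=0.05, size=v.shape)
    lc_prop = log_like_coarse(prop)
    # Stage 1: accept/reject with the coarse filter only
    if np.log(rng.uniform()) >= lc_prop - lc:
        continue                           # rejected cheaply, no fine solve
    # Stage 2: correct with the fine model so the fine posterior is preserved
    lf_prop = log_like_fine(prop)
    if np.log(rng.uniform()) < (lf_prop - lf) - (lc_prop - lc):
        v, lf, lc = prop, lf_prop, lc_prop
        accepted += 1

print("second-stage acceptance count:", accepted)
```

Because most rejections happen in stage 1, the expensive fine-model evaluation is reserved for proposals that already look promising, which is exactly where the reported savings come from.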


2004 ◽  
Vol 29 (4) ◽  
pp. 461-488 ◽  
Author(s):  
Sandip Sinharay

There is an increasing use of Markov chain Monte Carlo (MCMC) algorithms for fitting statistical models in psychometrics, especially in situations where the traditional estimation techniques are very difficult to apply. One of the disadvantages of using an MCMC algorithm is that it is not straightforward to determine whether the algorithm has converged, and using the output of an MCMC algorithm that has not converged may lead to incorrect inferences about the problem at hand. Convergence here is not convergence to a point but convergence of the distribution of a sequence of generated values to the target distribution, and hence it is not easy to assess; no diagnostic tool is guaranteed to determine the convergence of an MCMC algorithm in general. This article examines the convergence of MCMC algorithms using a number of convergence diagnostics for two real data examples from psychometrics. Findings from this research have the potential to be useful to researchers using these algorithms. For both examples, the number of iterations that the diagnostics suggest is required to be reasonably confident that the MCMC algorithm has converged may be larger than what many practitioners consider to be safe.
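
As one concrete example of such a diagnostic, the sketch below computes the Gelman-Rubin potential scale reduction factor (R-hat) from several independent chains; values near 1 are consistent with convergence but, as the article stresses, no diagnostic can guarantee it. This is a generic illustration, not the specific diagnostics or data used in the article.

```python
import numpy as np

# Gelman-Rubin potential scale reduction factor for one parameter,
# computed from multiple independently started chains.
def gelman_rubin(chains):
    """chains: array of shape (n_chains, n_iterations) for one parameter."""
    chains = np.asarray(chains, dtype=float)
    m, n = chains.shape
    chain_means = chains.mean(axis=1)
    B = n * chain_means.var(ddof=1)              # between-chain variance
    W = chains.var(axis=1, ddof=1).mean()        # within-chain variance
    var_hat = (n - 1) / n * W + B / n            # pooled variance estimate
    return np.sqrt(var_hat / W)

# Example: four toy chains drawn from the same distribution
rng = np.random.default_rng(5)
chains = rng.normal(size=(4, 1000))
print("R-hat:", gelman_rubin(chains))            # should be close to 1.0
```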


2014 ◽  
Vol 51 (4) ◽  
pp. 1189-1195 ◽  
Author(s):  
Krzysztof Łatuszyński ◽  
Jeffrey S. Rosenthal

This short note investigates convergence of adaptive Markov chain Monte Carlo algorithms, i.e., algorithms that modify the Markov chain update probabilities on the fly. We focus on the containment condition introduced by Roberts and Rosenthal (2007). We show that if the containment condition is not satisfied, then the algorithm will perform very poorly. Specifically, with positive probability, the adaptive algorithm will be asymptotically less efficient than any nonadaptive ergodic MCMC algorithm. We call such algorithms AdapFail and conclude that they should not be used.
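
For readers unfamiliar with on-the-fly adaptation, the sketch below shows a simple adaptive random-walk Metropolis sampler that tunes its proposal scale toward a target acceptance rate with a diminishing adaptation step. It is a generic illustration, not the construction analyzed in the note; whether such a scheme remains valid depends on conditions such as containment, and the note's point is precisely that adaptive samplers violating it (AdapFail) should not be used.

```python
import numpy as np

# Hedged sketch of adaptive random-walk Metropolis with diminishing adaptation.
rng = np.random.default_rng(6)

def log_target(x):
    return -0.5 * x**2                      # standard normal target

x, scale = 0.0, 1.0
target_accept = 0.44                        # common 1D target acceptance rate
samples = []
for i in range(1, 20001):
    prop = x + rng.normal(scale=scale)
    accept = np.log(rng.uniform()) < log_target(prop) - log_target(x)
    if accept:
        x = prop
    # Diminishing adaptation: the update to the proposal scale shrinks as 1/i,
    # so the transition kernel changes less and less as the chain runs.
    scale *= np.exp((float(accept) - target_accept) / i)
    samples.append(x)

print("final proposal scale:", scale)
print("sample mean/std:", np.mean(samples[5000:]), np.std(samples[5000:]))
```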

