Randomized Maximum Likelihood
Recently Published Documents

Total documents: 20 (five years: 7)
H-index: 7 (five years: 1)

Author(s): Yuming Ba, Jana de Wiljes, Dean S. Oliver, Sebastian Reich

Abstract. Minimization of a stochastic cost function is commonly used for approximate sampling in high-dimensional Bayesian inverse problems with Gaussian prior distributions and multimodal posterior distributions. The density of the samples generated by minimization is not the desired target density, unless the observation operator is linear, but the distribution of samples is useful as a proposal density for importance sampling or for Markov chain Monte Carlo methods. In this paper, we focus on applications to sampling from multimodal posterior distributions in high dimensions. We first show that sampling from multimodal distributions is improved by computing all critical points instead of only minimizers of the objective function. For applications to high-dimensional geoscience inverse problems, we demonstrate an efficient approximate weighting that uses a low-rank Gauss-Newton approximation of the determinant of the Jacobian. The method is applied to two toy problems with known posterior distributions and a Darcy flow problem with multiple modes in the posterior.
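As a concrete illustration of the basic RML mechanism described above, the sketch below perturbs both the prior mean and the observed data with Gaussian noise and minimizes the resulting stochastic cost function once per sample. The forward operator g, the dimensions, and the noise levels are illustrative assumptions, not the paper's test problems, and the importance-reweighting step is only indicated in a comment.

```python
# Minimal RML sampling sketch. The forward operator g and all sizes
# are hypothetical stand-ins, not the paper's examples.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

def g(m):
    # Hypothetical nonlinear observation operator (one datum).
    return np.array([m[0] ** 2 + m[1]])

m_prior = np.zeros(2)          # prior mean
C_m = np.eye(2)                # prior covariance (Gaussian prior)
C_d = 0.1 * np.eye(1)          # observation-noise covariance
d_obs = np.array([1.0])        # observed data

C_m_inv = np.linalg.inv(C_m)
C_d_inv = np.linalg.inv(C_d)

def sample_rml():
    # Perturb both the prior mean and the data, then minimize the
    # resulting stochastic cost function.
    m_pert = rng.multivariate_normal(m_prior, C_m)
    d_pert = rng.multivariate_normal(d_obs, C_d)

    def cost(m):
        r_m = m - m_pert
        r_d = g(m) - d_pert
        return 0.5 * r_m @ C_m_inv @ r_m + 0.5 * r_d @ C_d_inv @ r_d

    return minimize(cost, m_pert, method="BFGS").x

samples = np.array([sample_rml() for _ in range(200)])
# For nonlinear g these minimizers only approximate the posterior;
# the paper reweights them (importance sampling), approximating the
# Jacobian determinant with a low-rank Gauss-Newton construction.
print(samples.mean(axis=0), samples.std(axis=0))
```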


2021
Author(s): A. T. Barker, C. S. Lee, F. Forouzanfar, A. Guion, X.-H. Wu

Abstract. We explore the problem of drawing posterior samples from a lognormal permeability field conditioned by noisy measurements at discrete locations. The underlying unconditioned samples are based on a scalable PDE-sampling technique that scales better for large problems than traditional Karhunen-Loève sampling, while still allowing consistent samples to be drawn on a hierarchy of spatial scales. Lognormal random fields produced in this scalable and hierarchical way are then conditioned to measured data by a randomized maximum likelihood approach to draw from a Bayesian posterior distribution. Drawing from the posterior distribution can be shown to be equivalent to solving a PDE-constrained optimization problem, which admits efficient computational solution techniques. Numerical results demonstrate the efficiency of the proposed methods. In particular, we are able to match statistics for a simple flow problem on the fine grid with high accuracy, and at much lower cost on a hierarchy of coarser grids.
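A minimal sketch of the conditioning step follows: it draws an unconditioned Gaussian log-permeability sample and conditions it to noisy point measurements with the RML update, which is an exact posterior draw when the observation operator is linear, as it is for point values of the log field. The dense covariance on a small 1-D grid is an assumption made for brevity; the paper's scalable PDE-based sampler replaces exactly this dense construction.

```python
# Conditioning a lognormal field to noisy point data via RML.
# Dense covariance on a small 1-D grid: an illustrative assumption,
# not the paper's scalable PDE-based sampler.
import numpy as np

rng = np.random.default_rng(1)
n = 100
x = np.linspace(0.0, 1.0, n)
# Squared-exponential covariance for the log-permeability field.
C = np.exp(-((x[:, None] - x[None, :]) ** 2) / (2 * 0.1 ** 2))
L = np.linalg.cholesky(C + 1e-10 * np.eye(n))

obs_idx = np.array([20, 50, 80])          # measurement locations
H = np.zeros((3, n))
H[np.arange(3), obs_idx] = 1.0            # point-observation operator
sigma = 0.05                              # measurement-noise std
d_obs = np.array([0.3, -0.2, 0.5])        # hypothetical log-perm data

def conditional_sample():
    # RML with a linear observation operator: perturb the data, then
    # the minimizer of the stochastic cost has this closed form.
    u = L @ rng.standard_normal(n)                   # unconditioned draw
    d_pert = d_obs + sigma * rng.standard_normal(3)  # perturbed data
    K = C @ H.T @ np.linalg.inv(H @ C @ H.T + sigma**2 * np.eye(3))
    return u + K @ (d_pert - H @ u)

log_k = conditional_sample()
k = np.exp(log_k)    # conditioned lognormal permeability realization
```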


SPE Journal
2021, Vol. 26 (02), pp. 1011-1031
Author(s): Gilson Moura Silva Neto, Ricardo Vasconcellos Soares, Geir Evensen, Alessandra Davolio, Denis José Schiozer

Summary. Time-lapse seismic data assimilation has been drawing the reservoir-engineering community's attention over the past few years. One advantage of including this kind of data when improving reservoir-flow models is that it provides information complementary to the wells' production data. Ensemble-based methods are among the standard tools used to calibrate reservoir models with time-lapse seismic data. One drawback of assimilating time-lapse seismic data is the size of the data sets, especially for large reservoir models. This leads to high-dimensional problems that demand significant computational resources to process and store the matrices when conventional, straightforward methods are used. Another known issue with ensemble-based methods is the limited ensemble size, which causes spurious correlations between the data and the parameters and limits the degrees of freedom. In this work, we propose a data-assimilation scheme using an efficient implementation of the subspace ensemble randomized maximum likelihood (SEnRML) method with local analysis. This method reduces the computational requirements for assimilating large data sets because the number of operations scales linearly with the number of observed data points. Furthermore, by implementing it with local analysis, we reduce the memory requirements at each update step and mitigate the effects of the limited ensemble size. We test two local analysis approaches: one distance-based and one correlation-based. We apply these implementations to two synthetic time-lapse seismic data-assimilation cases: a 2D example and a field-scale application that mimics some real-field challenges. We compare the results with reference solutions and with the known ensemble smoother with multiple data assimilation (ES-MDA) using Kalman-gain distance-based localization. The results show that our method can efficiently assimilate time-lapse seismic data, leading to updated models comparable with those from other straightforward methods. The correlation-based local analysis approach provided results similar to the distance-based approach, with the advantage that the former can be applied to data and parameters that do not have specific spatial positions.
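The sketch below illustrates the local-analysis idea in its simplest distance-based form: each grid cell is updated with an ensemble RML-style step using only the observations within a cutoff radius, so the matrices handled at each step stay small. All sizes, positions, and the hard cutoff (rather than smooth tapering) are assumptions for illustration; the paper's SEnRML implementation differs in detail.

```python
# Ensemble RML-style update with distance-based local analysis:
# each cell uses only nearby observations. Sizes and the cutoff
# localization are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(2)
n_grid, n_ens, n_obs = 50, 30, 10
cell_pos = np.linspace(0.0, 1.0, n_grid)
obs_pos = np.linspace(0.05, 0.95, n_obs)
radius = 0.15                                  # localization radius

M = rng.standard_normal((n_grid, n_ens))       # parameter ensemble
D = rng.standard_normal((n_obs, n_ens))        # simulated data ensemble
d_obs = rng.standard_normal(n_obs)             # hypothetical observations
sigma = 0.5                                    # observation-noise std

D_pert = d_obs[:, None] + sigma * rng.standard_normal((n_obs, n_ens))

A = M - M.mean(axis=1, keepdims=True)          # parameter anomalies
Y = D - D.mean(axis=1, keepdims=True)          # data anomalies

M_new = M.copy()
for i in range(n_grid):
    # Local analysis: restrict to observations near cell i, so the
    # matrices below have only the local number of data points.
    sel = np.abs(obs_pos - cell_pos[i]) < radius
    if not sel.any():
        continue
    Yl = Y[sel]
    S = Yl @ Yl.T / (n_ens - 1) + sigma**2 * np.eye(sel.sum())
    K = (A[i] @ Yl.T / (n_ens - 1)) @ np.linalg.inv(S)   # local gain
    M_new[i] += K @ (D_pert[sel] - D[sel])

# M_new is the locally updated ensemble; iterating such updates with a
# sensitivity estimate yields the ensemble RML family of smoothers.
```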


2019, Vol. 26 (3), pp. 325-338
Author(s): Patrick Nima Raanes, Andreas Størksen Stordal, Geir Evensen

Abstract. Ensemble randomized maximum likelihood (EnRML) is an iterative (stochastic) ensemble smoother, used for large and nonlinear inverse problems, such as history matching and data assimilation. Its current formulation is overly complicated and has issues with computational costs, noise, and covariance localization, even causing some practitioners to omit crucial prior information. This paper resolves these difficulties and streamlines the algorithm without changing its output. These simplifications are achieved through the careful treatment of the linearizations and subspaces. For example, it is shown (a) how ensemble linearizations relate to average sensitivity and (b) that the ensemble does not lose rank during updates. The paper also draws significantly on the theory of the (deterministic) iterative ensemble Kalman smoother (IEnKS). Comparative benchmarks are obtained with the Lorenz 96 model with these two smoothers and the ensemble smoother using multiple data assimilation (ES-MDA).
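To make the roles of the average sensitivity and the Gauss-Newton step concrete, here is a minimal sketch of a simplified EnRML iteration: the sensitivity is estimated by regressing data anomalies on parameter anomalies, and each iterate is pulled toward the prior plus the gain-weighted residual. The toy forward model, sizes, and step length are assumptions; the paper's streamlined formulation treats the linearizations and subspaces more carefully.

```python
# One simplified EnRML (Gauss-Newton) iteration loop. Forward model,
# sizes, and damping are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(3)
n_par, n_ens, n_obs = 4, 20, 3

def forward(m):
    # Hypothetical nonlinear forward model.
    return np.tanh(m[:n_obs]) + 0.1 * m[:n_obs] ** 2

M_prior = rng.standard_normal((n_par, n_ens))
d_obs = np.array([0.2, -0.1, 0.4])
sigma = 0.3
D_pert = d_obs[:, None] + sigma * rng.standard_normal((n_obs, n_ens))

A0 = M_prior - M_prior.mean(axis=1, keepdims=True)
Cm = A0 @ A0.T / (n_ens - 1)            # prior covariance estimate

M = M_prior.copy()
for it in range(5):
    D = np.stack([forward(M[:, j]) for j in range(n_ens)], axis=1)
    A = M - M.mean(axis=1, keepdims=True)
    Y = D - D.mean(axis=1, keepdims=True)
    # Average sensitivity: least-squares fit of data anomalies on
    # parameter anomalies (ensemble linearization).
    G = Y @ np.linalg.pinv(A)
    K = Cm @ G.T @ np.linalg.inv(G @ Cm @ G.T + sigma**2 * np.eye(n_obs))
    # Damped Gauss-Newton step toward prior plus gain-weighted residual.
    gamma = 0.7
    step = K @ (D_pert - D + G @ (M - M_prior)) - (M - M_prior)
    M = M + gamma * step

# M now holds the (approximately) data-conditioned ensemble.
```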



SPE Journal
2018, Vol. 23 (05), pp. 1496-1517
Author(s): Chaohui Chen, Guohua Gao, Ruijian Li, Richard Cao, Tianhong Chen, ...

Summary. Although it is possible to apply traditional optimization algorithms together with the randomized-maximum-likelihood (RML) method to generate multiple conditional realizations, the computational cost is high. This paper presents a novel method to enhance the global-search capability of the distributed-Gauss-Newton (DGN) optimization method and integrates it with the RML method to generate multiple realizations conditioned to production data synchronously. RML generates samples from an approximate posterior by minimizing a large ensemble of perturbed objective functions in which the observed data and prior mean values of uncertain model parameters have been perturbed with Gaussian noise. Rather than performing these minimizations in isolation, using large sets of simulations to evaluate the finite-difference approximations of the gradients used to optimize each perturbed realization, we use a concurrent implementation in which simulation results are shared among minimization tasks whenever they help a specific task converge to its global minimum. To improve the sharing of results, we relax the accuracy of the finite-difference approximations of the gradients, using more widely spaced simulation results. To avoid becoming trapped in local optima, a novel method to enhance the global-search capability of the DGN algorithm is developed and integrated seamlessly with the RML formulation. In this way, we can improve the quality of RML conditional realizations that sample the approximate posterior. The proposed workflow is first validated on a toy problem and then applied to a real-field unconventional asset. Numerical results indicate that the new method is very efficient compared with traditional methods. Hundreds of data-conditioned realizations can be generated in parallel within 20 to 40 iterations. The computational cost (central-processing-unit usage) is reduced significantly compared with the traditional RML approach. The real-field case studies involve a history-matching study to generate history-matched realizations with the proposed method and an uncertainty quantification of production forecasting using those conditioned models. All conditioned models generate production forecasts that are consistent with real production data in both the history-matching period and the blind-test period. Therefore, the new approach can enhance the confidence level of the estimated-ultimate-recovery (EUR) assessment using production forecasts generated from all conditional realizations, resulting in significant business impact.
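The RML ingredient of this workflow, one perturbed objective function per desired realization, can be sketched as follows. The surrogate "simulator", the sizes, and the serial off-the-shelf optimizer are assumptions for illustration; the paper's actual contribution, a distributed Gauss-Newton optimizer that shares simulation results across concurrent minimization tasks, is only indicated in comments.

```python
# Ensemble of perturbed RML objectives, one per conditional
# realization. Surrogate model, sizes, and the serial optimizer are
# illustrative assumptions; DGN's concurrent result-sharing is not
# implemented here.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)
n_par, n_obs, n_real = 3, 2, 8

def simulate(m):
    # Hypothetical response surrogate (stands in for the expensive
    # reservoir-flow simulation).
    return np.array([np.sin(m[0]) + m[1], m[1] * m[2]])

m_prior = np.zeros(n_par)
d_obs = np.array([0.5, 0.2])
sigma_m, sigma_d = 1.0, 0.1

def perturbed_cost(m_pert, d_pert):
    # One RML objective: prior mean and data both perturbed.
    def cost(m):
        return (0.5 * np.sum((m - m_pert) ** 2) / sigma_m**2
                + 0.5 * np.sum((simulate(m) - d_pert) ** 2) / sigma_d**2)
    return cost

realizations = []
for _ in range(n_real):
    m_pert = m_prior + sigma_m * rng.standard_normal(n_par)
    d_pert = d_obs + sigma_d * rng.standard_normal(n_obs)
    # DGN would run these minimizations concurrently and reuse
    # simulation results across tasks to build cheap Gauss-Newton
    # gradient approximations; here each task runs independently
    # with a derivative-free optimizer.
    res = minimize(perturbed_cost(m_pert, d_pert), m_pert,
                   method="Nelder-Mead")
    realizations.append(res.x)

realizations = np.array(realizations)   # data-conditioned ensemble
```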

