An Adaptive Ensemble Smoother with Multiple Data Assimilation for Assisted History Matching

Author(s): Duc H. Le, Alexandre A. Emerick, Albert C. Reynolds
SPE Journal, 2016, Vol. 21 (06), pp. 2195-2207

Summary. Recently, Emerick and Reynolds (2012) introduced the ensemble smoother with multiple data assimilation (ES-MDA) for assisted history matching. With computational examples, they demonstrated that ES-MDA provides both a better data match and a better quantification of uncertainty than is obtained with the ensemble Kalman filter (EnKF). However, similar to EnKF, ES-MDA can experience near ensemble collapse and result in too many extreme values of rock-property fields for complex problems. These negative effects can be avoided by a judicious choice of the ES-MDA inflation factors, but, before this work, the optimal inflation factors could only be determined by trial and error. Here, we provide two automatic procedures for choosing the inflation factor for the next data-assimilation step adaptively as the history match proceeds. Both methods are motivated by knowledge of regularization procedures: the first is intuitive and heuristic; the second is motivated by existing theory on the regularization of least-squares inverse problems. We illustrate that the adaptive ES-MDA algorithms are superior to the original ES-MDA algorithm by history matching three-phase-flow production data for a complicated synthetic problem in which the reservoir-model parameters include the porosity, horizontal and vertical permeability fields, depths of the initial fluid contacts, and the parameters of power-law permeability curves.
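The abstract above can be made concrete with a minimal sketch of one ES-MDA assimilation step, following the standard perturbed-observation formulation: the same data are assimilated several times with the measurement-error covariance inflated by a factor alpha, and the inflation factors must satisfy sum(1/alpha_i) = 1. Array names and the toy test setup below are illustrative, not the authors' code.

```python
import numpy as np

def es_mda_update(M, D, d_obs, C_D, alpha, rng):
    """One ES-MDA assimilation step with inflation factor alpha.

    M     : (Nm, Ne) ensemble of model parameters
    D     : (Nd, Ne) predicted data for each ensemble member
    d_obs : (Nd,)    observed data
    C_D   : (Nd, Nd) measurement-error covariance
    """
    Ne = M.shape[1]
    # Cross- and auto-covariances from mean-removed ensemble anomalies
    dM = M - M.mean(axis=1, keepdims=True)
    dD = D - D.mean(axis=1, keepdims=True)
    C_MD = dM @ dD.T / (Ne - 1)
    C_DD = dD @ dD.T / (Ne - 1)
    # Perturb the observations with inflated noise, covariance alpha * C_D
    E = rng.multivariate_normal(np.zeros(len(d_obs)), alpha * C_D, size=Ne).T
    # Kalman-type gain with inflated measurement-error covariance
    K = C_MD @ np.linalg.inv(C_DD + alpha * C_D)
    return M + K @ (d_obs[:, None] + E - D)
```

For a linear forward model, repeating this update with sum(1/alpha_i) = 1 (e.g. four steps with alpha = 4) is consistent with a single ES update; the adaptive schemes in the paper choose each alpha on the fly instead of fixing the schedule in advance.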


Energies, 2021, Vol. 14 (11), pp. 3137
Author(s): Amine Tadjer, Reider B. Bratvold, Remus G. Hanea

Production forecasting is the basis for decision making in the oil and gas industry, and can be quite challenging, especially in terms of complex geological modeling of the subsurface. To help solve this problem, assisted history matching built on ensemble-based analysis such as the ensemble smoother and ensemble Kalman filter is useful in estimating models that preserve geological realism and have predictive capabilities. These methods tend, however, to be computationally demanding, as they require a large ensemble size for stable convergence. In this paper, we propose a novel method of uncertainty quantification and reservoir model calibration with much-reduced computation time. This approach is based on a sequential combination of nonlinear dimensionality reduction techniques, t-distributed stochastic neighbor embedding (t-SNE) or the Gaussian process latent variable model (GPLVM), with K-means clustering, followed by the ensemble smoother with multiple data assimilation. The cluster analysis with t-SNE and GPLVM is used to reduce the number of initial geostatistical realizations and select a set of optimal reservoir models that have similar production performance to the reference model. We then apply the ensemble smoother with multiple data assimilation to provide reliable assimilation results. Experimental results based on the Brugge field case data verify the efficiency of the proposed approach.
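The selection stage described in this abstract (embed high-dimensional production responses, cluster them, keep one representative realization per cluster) can be sketched with scikit-learn's `TSNE` and `KMeans`. Function and variable names here are illustrative assumptions, not the authors' implementation, and GPLVM is omitted for brevity.

```python
import numpy as np
from sklearn.manifold import TSNE
from sklearn.cluster import KMeans

def select_representatives(responses, n_clusters, seed=0):
    """Embed production responses with t-SNE, cluster the embedding with
    K-means, and return one realization index per cluster: the member
    closest to each cluster centroid.

    responses : (n_realizations, n_timesteps) array of simulated data
    """
    # t-SNE requires perplexity < number of samples
    perplexity = min(30, len(responses) - 1)
    emb = TSNE(n_components=2, perplexity=perplexity,
               random_state=seed).fit_transform(responses)
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit(emb)
    picked = []
    for k in range(n_clusters):
        members = np.where(km.labels_ == k)[0]
        dist = np.linalg.norm(emb[members] - km.cluster_centers_[k], axis=1)
        picked.append(members[np.argmin(dist)])
    return sorted(picked)
```

The reduced set returned here would then seed the ES-MDA runs, which is where the paper reports its computational savings.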



2019
Author(s): Thiago M. D. Silva, Abelardo Barreto, Sinesio Pesco

Ensemble-based methods have been widely used in uncertainty quantification, particularly in reservoir history matching. The search for a more robust method that handles highly nonlinear problems is the focus of this area. The Ensemble Kalman Filter (EnKF) is a popular tool for these problems, but studies have noted uncertainty in the results of the final ensemble, which is highly dependent on the initial ensemble. The Ensemble Smoother (ES) is an alternative, with an easier implementation and low computational cost. However, it presents the same problem as the EnKF. The Ensemble Smoother with Multiple Data Assimilation (ES-MDA) seems to be a good alternative to these ensemble-based methods, since it assimilates the same data multiple times. In this work, we analyze the efficiency of the Ensemble Smoother and the Ensemble Smoother with Multiple Data Assimilation in reservoir history matching of a turbidite model with three layers, considering permeability estimation and data mismatch.
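The ES/ES-MDA contrast in this abstract can be illustrated on a deliberately tiny nonlinear toy problem: ES applies one global update (a single step with alpha = 1), while ES-MDA repeats the same update with inflated observation noise. The forward model g(m) = m**3 and all numbers below are illustrative, not from the paper.

```python
import numpy as np

def smoother(m, d_obs, alphas, sd, rng):
    """Run ES (alphas=[1.0]) or ES-MDA (e.g. alphas=[4.0]*4) on a
    scalar parameter with forward model g(m) = m**3."""
    for a in alphas:
        d = m**3                               # forward runs
        c_md = np.cov(m, d)[0, 1]              # parameter-data cross-covariance
        c_dd = np.var(d, ddof=1)               # predicted-data variance
        k = c_md / (c_dd + a * sd**2)          # Kalman-type gain, inflated noise
        noise = np.sqrt(a) * sd * rng.normal(size=m.size)
        m = m + k * (d_obs + noise - d)        # perturbed-observation update
    return m

rng = np.random.default_rng(2)
m0 = rng.normal(2.0, 1.0, size=200)    # prior ensemble; the truth is m = 3
d_obs, sd = 27.0, 1.0                  # observed g(3), noise std 1
m_es  = smoother(m0.copy(), d_obs, [1.0], sd, rng)
m_mda = smoother(m0.copy(), d_obs, [4.0, 4.0, 4.0, 4.0], sd, rng)
```

Because the model is nonlinear, the single ES step linearizes once around the prior, whereas ES-MDA re-linearizes at each of its smaller steps, which is the behavior the paper compares on the turbidite model.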


2019, Vol. 26 (3), pp. 325-338
Author(s): Patrick Nima Raanes, Andreas Størksen Stordal, Geir Evensen

Abstract. Ensemble randomized maximum likelihood (EnRML) is an iterative (stochastic) ensemble smoother, used for large and nonlinear inverse problems, such as history matching and data assimilation. Its current formulation is overly complicated and has issues with computational costs, noise, and covariance localization, even causing some practitioners to omit crucial prior information. This paper resolves these difficulties and streamlines the algorithm without changing its output. These simplifications are achieved through the careful treatment of the linearizations and subspaces. For example, it is shown (a) how ensemble linearizations relate to average sensitivity and (b) that the ensemble does not lose rank during updates. The paper also draws significantly on the theory of the (deterministic) iterative ensemble Kalman smoother (IEnKS). Comparative benchmarks are obtained with the Lorenz 96 model with these two smoothers and the ensemble smoother using multiple data assimilation (ES-MDA).
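Point (a) of this abstract, that ensemble linearizations relate to average sensitivity, can be checked numerically in a scalar sketch: the least-squares regression coefficient of data anomalies on parameter anomalies approximates the true Jacobian averaged over the ensemble. The model g(m) = m**2, whose sensitivity is g'(m) = 2m, is an illustrative choice, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(3)
m = rng.normal(1.0, 1.0, size=2000)   # parameter ensemble
d = m**2                              # forward responses, g(m) = m**2

# Ensemble linearization: regress data anomalies on parameter anomalies
G_ens = np.cov(m, d)[0, 1] / np.var(m, ddof=1)

# Average of the true sensitivity g'(m) = 2m over the ensemble
G_avg = np.mean(2.0 * m)
```

For a Gaussian ensemble these two quantities agree in expectation (both equal twice the ensemble mean), so `G_ens` and `G_avg` land close together for a large ensemble; this is the sense in which the stochastic EnRML gain uses an averaged, rather than pointwise, sensitivity.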

