Reservoir History Matching
Recently Published Documents

Total documents: 67 (last five years: 13)
H-index: 9 (last five years: 2)

2021 · Ryan Santoso, Xupeng He, Marwa Alsinan, Ruben Figueroa Hernandez, Hyung Kwak, et al.

Abstract: History matching is a critical step in the reservoir management process that synchronizes the simulation model with the production data. The history-matched model can be used to plan optimum field development and to perform optimization and uncertainty quantification. We present a novel history-matching workflow based on a Bayesian framework that accommodates subsurface uncertainties. Our workflow involves three model resolutions within the Bayesian framework: 1) a coarse low-fidelity model to update the prior range, 2) a fine low-fidelity model to represent the high-fidelity model, and 3) a high-fidelity model to reconstruct the real response. The low-fidelity models are constructed from a multivariate polynomial function, while the high-fidelity model is the reservoir simulation model itself. We first develop a coarse low-fidelity model using a two-level Design of Experiment (DoE), which aims to provide a better prior. We then use Latin Hypercube Sampling (LHS) to construct the fine low-fidelity model deployed in the Bayesian runs, where we use the Metropolis-Hastings algorithm. Finally, the posterior is fed into the high-fidelity model to evaluate the matching quality. This work demonstrates the importance of including uncertainties in history matching. The Bayesian framework provides a robust way to quantify uncertainty within reservoir history matching. Under a uniform prior, the convergence of the Bayesian sampling is very sensitive to the parameter ranges: when the solution is far from the mean of the parameter ranges, the sampling introduces bias and deviates from the observed data. Our results show that updating the prior from the coarse low-fidelity model accelerates the Bayesian convergence and improves the matching quality. Bayesian inference requires a very large number of runs to produce an accurate posterior, and running the high-fidelity model that many times is expensive. Our workflow tackles this problem by deploying a fine low-fidelity model to represent the high-fidelity model in the main runs. This fine low-fidelity model is fast to run while honoring the physics and accuracy of the high-fidelity model. We also use ANOVA sensitivity analysis to measure the importance of each parameter; the ranking highlights the significant parameters that contribute most to the matching accuracy. We demonstrate our workflow for a geothermal reservoir with static and operational uncertainties. It produces accurate matches of the thermal recovery factor and produced-enthalpy rate with physically consistent posteriors. In summary, we present a novel workflow that accounts for uncertainty in reservoir history matching through multi-resolution interaction. The proposed method is generic and can be readily applied within existing history-matching workflows in reservoir simulation.
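As a concrete illustration of the surrogate-accelerated Bayesian loop described above, the sketch below fits a quadratic polynomial proxy to a few runs of a stand-in "simulator" and then samples the posterior with Metropolis-Hastings. All function names, parameter ranges, and the synthetic datum are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def high_fidelity(theta):
    # stand-in for one expensive reservoir/geothermal simulation
    return 3.0 * theta[0] ** 2 - 2.0 * theta[0] * theta[1] + theta[1]

# LHS-style training design over the (possibly DoE-updated) prior range
train = rng.uniform(-1.0, 1.0, size=(50, 2))

def features(t):
    # quadratic feature expansion: [1, x1, x2, x1^2, x1*x2, x2^2]
    x1, x2 = t[..., 0], t[..., 1]
    return np.stack([np.ones_like(x1), x1, x2, x1**2, x1 * x2, x2**2], axis=-1)

coeffs, *_ = np.linalg.lstsq(features(train),
                             np.array([high_fidelity(t) for t in train]),
                             rcond=None)
low_fidelity = lambda t: features(t) @ coeffs   # fast polynomial surrogate

d_obs, sigma = high_fidelity(np.array([0.3, -0.4])), 0.05  # synthetic datum

def log_post(theta):
    if np.any(np.abs(theta) > 1.0):              # uniform prior box
        return -np.inf
    return -0.5 * ((low_fidelity(theta) - d_obs) / sigma) ** 2

# Metropolis-Hastings on the surrogate instead of the simulator
theta, chain = np.zeros(2), []
for _ in range(20000):
    prop = theta + 0.1 * rng.standard_normal(2)  # random-walk proposal
    if np.log(rng.random()) < log_post(prop) - log_post(theta):
        theta = prop
    chain.append(theta)
posterior = np.array(chain[5000:])               # discard burn-in
print(posterior.mean(axis=0))
```

In the paper's workflow the proxy would be trained on high-fidelity reservoir runs and the posterior finally re-evaluated with the simulator; here everything is synthetic so the script runs standalone.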


Processes · 2021 · Vol. 9 (11) · p. 1980 · Lihua Shen, Hui Liu, Zhangxin Chen

In this paper, the deterministic ensemble Kalman filter (EnKF) is implemented with a parallel message passing interface (MPI) technique based on our in-house black-oil simulator. The implementation distinguishes two cases: (1) the ensemble size is greater than the number of processors and (2) the ensemble size is smaller than or equal to the number of processors. Numerical experiments are presented for estimating three-phase relative permeabilities represented by power-law models, with both known and unknown endpoints. With known endpoints, good estimates are obtained; with unknown endpoints, good estimates can still be obtained by using more observations and a larger ensemble size. Reported computational times show that the run time is greatly reduced with more CPU cores: the MPI parallel efficiency exceeds 70% for a small ensemble size and 77% for a large ensemble size with up to 640 CPU cores.
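The sketch below (a minimal illustration, not the authors' in-house simulator) shows how both parallelization cases reduce to a round-robin assignment of ensemble members to mpi4py ranks, together with the power-law relative permeability curve whose endpoint and exponent are the parameters estimated by the EnKF; run_member is a hypothetical stand-in for one black-oil forward simulation.

```python
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank, nproc = comm.Get_rank(), comm.Get_size()

N_ens = 96  # ensemble size (illustrative)

def krw_power_law(sw, endpoint=0.6, exponent=2.5, swc=0.2, sor=0.2):
    """Power-law water relative permeability; endpoint and exponent are
    the uncertain parameters the filter estimates."""
    s = np.clip((sw - swc) / (1.0 - swc - sor), 0.0, 1.0)
    return endpoint * s ** exponent

def run_member(m):
    # placeholder for one black-oil forward simulation of ensemble member m
    return krw_power_law(np.linspace(0.2, 0.8, 5), endpoint=0.5 + 0.001 * m)

# Case 1 (N_ens > nproc): each rank advances several members in turn.
# Case 2 (N_ens <= nproc): at most one member per rank; extra ranks idle.
my_members = range(rank, N_ens, nproc)   # round-robin covers both cases
local_states = [run_member(m) for m in my_members]

# Gather forecast states to rank 0, where a deterministic (square-root)
# EnKF analysis would update the ensemble without perturbing observations.
all_states = comm.gather(local_states, root=0)
```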


2019 · Vol. 24 (1) · pp. 217-239 · Kristian Fossum, Trond Mannseth, Andreas S. Stordal

Abstract: Multilevel ensemble-based data assimilation (DA) is considered as an alternative to standard (single-level) ensemble-based DA for reservoir history-matching problems. Restricted computational resources currently limit the ensemble size to about 100 for field-scale cases, resulting in large sampling errors if no countermeasures are taken. With multilevel methods, the computational resources are spread over models of different accuracy and computational cost, enabling a substantially increased total ensemble size; reduced numerical accuracy is thus partially traded for increased statistical accuracy. A novel multilevel DA method, the multilevel hybrid ensemble Kalman filter (MLHEnKF), is proposed. Both the expected and the true efficiency of a previously published multilevel method, the multilevel ensemble Kalman filter (MLEnKF), and of the MLHEnKF are assessed for a toy model and two reservoir models. A multilevel sequence of approximations is introduced for all models: via spatial grid coarsening and simple upscaling for the reservoir models, and via a designed synthetic sequence for the toy model. For all models, the finest discretization level is assumed to correspond to the exact model. The results show that, despite its good theoretical properties, MLEnKF does not perform well on the reservoir history-matching problems considered, and we show that this is probably because the assumptions underlying its theoretical properties are not fulfilled for the multilevel reservoir models considered. The performance of MLHEnKF, which is designed to handle restricted computational resources well, is quite good. Furthermore, the toy model is used to set up a case where the assumptions underlying the theoretical properties of MLEnKF are fulfilled; in that case, MLEnKF performs very well and clearly better than MLHEnKF.
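To make the accuracy-for-ensemble-size trade concrete, the sketch below allocates a fixed compute budget across three grid resolutions using a simple inverse-square-root-of-cost heuristic. The cost model and allocation rule are illustrative assumptions in the spirit of multilevel Monte Carlo, not the MLEnKF/MLHEnKF allocation formulas.

```python
import numpy as np

cell_counts = np.array([10_000, 40_000, 160_000])   # coarse -> fine grids
cost_per_run = cell_counts / cell_counts[-1]        # cost relative to finest

budget = 100.0  # in units of finest-grid runs (~a 100-member single-level DA)

# Single-level DA: the whole budget buys ~100 members at the finest level.
single_level_size = int(budget / cost_per_run[-1])

# Multilevel: weight levels inversely to sqrt(cost), so cheap coarse levels
# carry large ensembles and sampling error drops.
weights = 1.0 / np.sqrt(cost_per_run)
alloc = budget * weights / weights.sum()            # budget share per level
ml_sizes = (alloc / cost_per_run).astype(int)       # members per level

print("single-level ensemble:", single_level_size)        # -> 100
print("multilevel ensembles:", ml_sizes, "total:", ml_sizes.sum())
```

With these illustrative numbers the multilevel split yields on the order of a thousand total members for the same budget, which is the statistical-accuracy gain the abstract describes.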


SPE Journal · 2019 · Vol. 25 (01) · pp. 056-080 · Cong Xiao, Leng Tian, Lufeng Zhang, Guangdong Wang, Ya Deng

Summary: Finding multiple posterior realizations through a reservoir history-matching procedure is significant for uncertainty quantification, risk analysis, and decision making in the course of reservoir closed-loop management. An efficient distributed optimization algorithm, global linear regression with distributed Gauss-Newton (GLR-DGN), has previously been proposed in the literature to iteratively minimize multiple objective functions by performing Gauss-Newton (GN) optimizations concurrently while dynamically sharing information between dispersed regions in a reduced parameter space. However, the number of initial training models must theoretically exceed the number of parameters to guarantee a unique solution of the GLR equation for sensitivity-matrix estimation. This limitation makes large-scale reservoir history-matching problems with a large number of parameters almost intractable. We enrich the previous history-matching framework by integrating our recently proposed smooth local parameterization (SLP) with DGN for the sensitivity-matrix calculation. One specific flow response mainly depends on a few influential, local parameters, generally identifiable from the physical position of the wells (e.g., the parameters in the zone surrounding each well); this is particularly true for large-scale reservoir models. Motivated by this observation, this paper presents a new integration of subdomain linear regression (SLR) with DGN, referred to as SLR-DGN. The SLP allows us to independently represent the globally spatial parameter field within low-order parameter subspaces in each subdomain. On the basis of the SLP procedure, only a few training models are required to compute the local sensitivity of the response functions using subdomain linear regression. SLP is a linear transformation with smoothness and differentiability, which makes it particularly compatible with Newton-like gradient-based optimization algorithms. Furthermore, we introduce an adaptive scheme, weighting smooth local parameterization (WSLP), in which the minimization algorithm adaptively determines the weighting coefficients and, correspondingly, the optimal domain decomposition (DD), to mitigate the negative effects of an inappropriate DD strategy. We support our framework with numerical experiments on a four-variable toy model and a modified version of the SAIGUP (sensitivity analysis of the impact of geological uncertainties on production) model with spatially dependent parameters. Comparisons with the previous GLR-DGN show that our new framework generates comparable and even better results at significantly reduced computational cost. SLP has high scalability because the number of training models depends primarily on the number of local parameters in each subdomain, not on the dimension of the underlying full-order model; activating more subdomains yields fewer local parameter patterns and allows fewer training models. For a large-scale case study in this work, SLR-DGN needs only 100 initial-model simulations to optimize 412 global parameters. Compared with GLR-DGN, where the parameters are defined over the entire domain, the central-processing-unit cost is reduced by several orders of magnitude while retaining reasonable accuracy.
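The sketch below illustrates the core subdomain-regression idea under simplified assumptions: because a well response depends only on its subdomain's parameters, its sensitivity row can be regressed from roughly as many training runs as there are local parameters, then embedded into a global Gauss-Newton step. The forward model, subdomain map, and observation are hypothetical stand-ins, not the SAIGUP setup.

```python
import numpy as np

rng = np.random.default_rng(1)
n_global, n_local = 412, 8        # global parameters vs parameters near one well
local_idx = np.arange(n_local)    # hypothetical subdomain for this well

def forward(theta):
    # stand-in simulator: the well response depends only on nearby parameters
    return np.sin(theta[local_idx]).sum()

# Only ~n_local (+ a few) training models are needed, not ~n_global.
n_train = n_local + 4
base = rng.standard_normal(n_global)
perturb = 0.01 * rng.standard_normal((n_train, n_global))
Y = np.array([forward(base + dp) for dp in perturb])

# Regress response differences on the *local* parameter perturbations only.
X = perturb[:, local_idx]
sens_local, *_ = np.linalg.lstsq(X, Y - forward(base), rcond=None)

# Embed into a global sensitivity row and take one Gauss-Newton-style step.
J = np.zeros(n_global)
J[local_idx] = sens_local
d_obs = 1.5                                  # synthetic observation
residual = d_obs - forward(base)
step = J * residual / (J @ J + 1e-8)         # GN step for a scalar datum
theta_new = base + step
```

In the full method this regression is repeated per subdomain and response, and the smooth parameterization keeps the assembled update differentiable across subdomain boundaries.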


2019 · Thiago M. D. Silva, Abelardo Barreto, Sinesio Pesco

Ensemble-based methods have been widely used in uncertainty quantification and, particularly, in reservoir history matching. The search for a more robust method that handles highly nonlinear problems is a focus of this area. The Ensemble Kalman Filter (EnKF) is a popular tool for these problems, but studies have noted that the final ensemble is highly dependent on the initial ensemble, leaving uncertainty in the results. The Ensemble Smoother (ES) is an alternative, with an easier implementation and lower computational cost; however, it presents the same problem as the EnKF. The Ensemble Smoother with Multiple Data Assimilation (ES-MDA) appears to be a good alternative among these ensemble-based methods, since it assimilates the same data multiple times. In this work, we analyze the efficiency of the ES and the ES-MDA in reservoir history matching of a three-layer turbidite model, considering permeability estimation and data mismatch.
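For reference, the sketch below implements the textbook ES-MDA cycle: the same data are assimilated several times with the measurement-error covariance inflated by a factor alpha_i, where the inverse inflation factors sum to one. The forward model and data are synthetic stand-ins, not the turbidite case.

```python
import numpy as np

rng = np.random.default_rng(2)

def forward(m):
    # stand-in for the reservoir simulator: predicted data from parameters m
    return np.tanh(m)

n_ens, n_m = 100, 5
M = rng.standard_normal((n_m, n_ens))       # prior ensemble (e.g. log-perm)
d_obs = forward(rng.standard_normal(n_m))   # synthetic observed data
C_d = 0.01 * np.eye(n_m)                    # measurement-error covariance
alphas = [4.0, 4.0, 4.0, 4.0]               # sum of 1/alpha_i equals 1

for alpha in alphas:
    D = np.array([forward(M[:, j]) for j in range(n_ens)]).T  # predicted data
    # perturb the observations with inflated noise
    E = rng.multivariate_normal(np.zeros(n_m), alpha * C_d, size=n_ens).T
    D_obs = d_obs[:, None] + E
    # ensemble anomalies and (cross-)covariances
    dM = M - M.mean(axis=1, keepdims=True)
    dD = D - D.mean(axis=1, keepdims=True)
    C_md = dM @ dD.T / (n_ens - 1)
    C_dd = dD @ dD.T / (n_ens - 1)
    K = C_md @ np.linalg.inv(C_dd + alpha * C_d)  # Kalman-like gain
    M = M + K @ (D_obs - D)                       # update the whole ensemble
```

Setting alphas = [1.0] recovers the plain Ensemble Smoother, which is exactly the single-pass scheme the ES-MDA is compared against.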

