Numerical Comparison Between ES-MDA and Gradient-Based Optimization for History Matching of Reduced Reservoir Models

2021 ◽  
Author(s):  
M. A. Borregales Reverón ◽  
H. H. Holm ◽  
O. Møyner ◽  
S. Krogstad ◽  
K.-A. Lie

Abstract The Ensemble Smoother with Multiple Data Assimilation (ES-MDA) method has been popular for petroleum reservoir history matching. However, the increasing inclusion of automatic differentiation in reservoir models opens the possibility of history matching with gradient-based optimization. Here, we discuss, study, and compare ES-MDA and gradient-based optimization for history matching of waterflooding models. We apply both methods to history match reduced GPSNet-type models. To study the methods, we use implementations of ES-MDA and of a gradient-based optimizer in the open-source MATLAB Reservoir Simulation Toolbox (MRST), and compare the methods in terms of history-matching quality and computational efficiency. We show complementary advantages of ES-MDA and gradient-based optimization. ES-MDA is suitable when an exact gradient is not available and provides a satisfactory forecast of future production that often envelops the reference history data. Gradient-based optimization, on the other hand, is efficient if the exact gradient is available, as it then requires only a small number of model evaluations. If the exact gradient is not available, an approximate gradient and ES-MDA are both good alternatives, giving equivalent results in terms of computational cost and prediction quality.
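[Editor's note] The ES-MDA iteration compared in this paper can be written compactly. Below is a minimal NumPy sketch of the standard ES-MDA update; it is illustrative only (the paper's implementation lives in MRST/MATLAB), and the function names and diagonal-covariance handling are assumptions:

```python
import numpy as np

def es_mda(m_prior, forward, d_obs, C_d, alphas, seed=0):
    """Minimal ES-MDA sketch. m_prior: (Nm, Ne) prior ensemble,
    forward: callable mapping one parameter vector to predicted data,
    d_obs: (Nd,) observations, C_d: (Nd, Nd) measurement-error covariance,
    alphas: inflation factors with sum(1/alpha) == 1 (e.g. [4, 4, 4, 4])."""
    rng = np.random.default_rng(seed)
    M = m_prior.copy()
    Ne = M.shape[1]
    for alpha in alphas:
        # Run the ensemble through the forward model.
        D = np.column_stack([forward(M[:, j]) for j in range(Ne)])
        # Perturb observations with inflated measurement noise.
        Dobs = d_obs[:, None] + rng.multivariate_normal(
            np.zeros(d_obs.size), alpha * C_d, Ne).T
        # Kalman-type update built from ensemble cross-covariances.
        dM = M - M.mean(axis=1, keepdims=True)
        dD = D - D.mean(axis=1, keepdims=True)
        K = (dM @ dD.T) @ np.linalg.pinv(dD @ dD.T + (Ne - 1) * alpha * C_d)
        M = M + K @ (Dobs - D)
    return M
```

A gradient-based match, by contrast, would feed the same data misfit into a (quasi-)Newton optimizer and exploit the exact adjoint gradient that automatic differentiation provides, which is why it needs far fewer model evaluations when that gradient exists.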

Geofluids ◽  
2019 ◽  
Vol 2019 ◽  
pp. 1-22 ◽  
Author(s):  
Sungil Kim ◽  
Baehyun Min ◽  
Seoyoon Kwon ◽  
Min-gon Chu

For ensemble-based history matching of a channelized reservoir, loss of geological plausibility is challenging because of pixel-based manipulation of channel shape and connectivity, despite sufficient conditioning to dynamic observations. Regarding the loss as artificial noise, this study designs a serial denoising autoencoder (SDAE) composed of two neural network filters, utilizes this machine learning algorithm to relieve noise effects in the process of the ensemble smoother with multiple data assimilation (ES-MDA), and improves the overall history matching performance. As a training dataset for the SDAE, static reservoir models are realized based on multipoint geostatistics and contaminated with two types of noise: salt-and-pepper noise and Gaussian noise. The SDAE learns how to eliminate the noise and restore the clean reservoir models through encoding and decoding, using the noisy realizations as inputs and the original realizations as outputs. The trained SDAE is embedded in the ES-MDA. The posterior reservoir models updated using the Kalman gain are imported to the SDAE, which then exports the purified prior models for the next assimilation. In this manner, a clear contrast among rock facies parameters is maintained during the multiple data assimilations. A case study of a gas reservoir indicates that ES-MDA coupled with the noise remover outperforms conventional ES-MDA. Improvement in history matching performance resulting from denoising is also observed for ES-MDA algorithms combined with dimension-reduction approaches such as the discrete cosine transform, K-singular value decomposition (K-SVD), and a stacked autoencoder. The results of this study imply that a well-trained SDAE has the potential to be a reliable auxiliary method for enhancing the performance of data assimilation algorithms if the computational cost required for machine learning is affordable.
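[Editor's note] The embedding described above amounts to one extra call per assimilation step. A hedged sketch follows, with `denoise` standing in for the trained SDAE (its architecture and training are outside this sketch, and all names are illustrative):

```python
import numpy as np

def es_mda_denoised(M, forward, d_obs, C_d, alphas, denoise, seed=0):
    """ES-MDA with a denoising step between assimilations (sketch).
    `denoise` stands in for the trained SDAE: a callable mapping an
    ensemble of noisy facies realizations (Nm, Ne) back toward
    geologically plausible ones. Everything else is plain ES-MDA."""
    rng = np.random.default_rng(seed)
    Ne = M.shape[1]
    for alpha in alphas:
        D = np.column_stack([forward(M[:, j]) for j in range(Ne)])
        Dobs = d_obs[:, None] + rng.multivariate_normal(
            np.zeros(d_obs.size), alpha * C_d, Ne).T
        dM = M - M.mean(axis=1, keepdims=True)
        dD = D - D.mean(axis=1, keepdims=True)
        K = (dM @ dD.T) @ np.linalg.pinv(dD @ dD.T + (Ne - 1) * alpha * C_d)
        M = M + K @ (Dobs - D)   # posterior from the Kalman gain
        M = denoise(M)           # purify before the next assimilation
    return M
```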


2021 ◽  
Author(s):  
Ali Al-Turki ◽  
Obai Alnajjar ◽  
Majdi Baddourah ◽  
Babatunde Moriwawon

Abstract Algorithms and workflows have been developed to couple efficient model parameterization with stochastic, global optimization using a Multi-Objective Genetic Algorithm (MOGA) for global history matching, coupled with an advanced workflow for streamline sensitivity-based inversion for fine-tuning. During parameterization, the low-rank subsets of the most influential reservoir parameters are identified and propagated to MOGA to perform the field-level history match. Data misfits between the field historical data and simulation data are calculated with multiple realizations of reservoir models that quantify and capture reservoir uncertainty. Each generation of the optimization algorithm reduces the data misfit relative to the previous iteration. This iterative process continues until a satisfactory field-level history match is reached or there are no further improvements. The fine-tuning process of well-connectivity calibration is then performed with a streamline sensitivity-based inversion algorithm to locally update the model and reduce well-level mismatch. In this study, an application of the proposed algorithms and workflow is demonstrated for model calibration and history matching. The synthetic reservoir model used in this study is discretized into millions of grid cells with hundreds of producer and injector wells. It is designed to generate several decades of production and injection history to evaluate and demonstrate the workflow. In field-level history matching, reservoir rock properties (e.g., permeability, fault transmissibility, etc.) are parameterized to conduct the global match of pressure and production rates. The Grid Connectivity Transform (GCT) was used and assessed to parameterize the reservoir properties. In addition, the convergence rate and history match quality of MOGA were assessed during the field (global) history matching, and the effectiveness of the streamline-based inversion was evaluated by quantifying the additional improvement in history matching quality per well. The developed parameterization and optimization algorithms and workflows revealed the unique features of each algorithm for model calibration and history matching. This integrated workflow has successfully defined and carried uncertainty throughout the history matching process. Following the successful field-level history match, well-level history matching was conducted using streamline sensitivity-based inversion, which further improved the history match quality and conditioned the model to historical production and injection data. In general, the workflow results in enhanced history match quality in a shorter turnaround time, and the geological realism of the model is retained for robust prediction and development planning.
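[Editor's note] As a rough illustration of the MOGA stage, the sketch below implements one generation of a toy Pareto-based genetic algorithm over low-rank parameter vectors. The abstract does not specify the actual operators or misfit definitions, so everything here (selection rule, Gaussian mutation, objective vector) is an assumption:

```python
import numpy as np

def dominates(f, g):
    """True if misfit vector f Pareto-dominates g
    (no worse in all objectives, strictly better in at least one)."""
    return bool(np.all(f <= g) and np.any(f < g))

def moga_generation(pop, misfit, rng, sigma=0.1):
    """One generation of a toy multi-objective GA. pop: (Ne, Np) array of
    low-rank parameter vectors; misfit: callable returning a vector of
    objectives (e.g. pressure misfit, rate misfit) for one model.
    Non-dominated models survive; the rest are refilled by mutation."""
    F = np.array([misfit(p) for p in pop])
    n = len(pop)
    front = [i for i in range(n)
             if not any(dominates(F[j], F[i]) for j in range(n) if j != i)]
    survivors = pop[front]
    idx = rng.integers(len(survivors), size=n - len(survivors))
    children = survivors[idx] + sigma * rng.standard_normal(
        (n - len(survivors), pop.shape[1]))
    return np.vstack([survivors, children])
```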


2009 ◽  
Vol 12 (03) ◽  
pp. 446-454 ◽  
Author(s):  
Frode Georgsen ◽  
Anne R. Syversveen ◽  
Ragnar Hauge ◽  
Jan I. Tollefsrud ◽  
Morten Fismen

Summary The possibility of updating reservoir models with new well information is important for good reservoir management. The process that runs from drilling a new well through updating the static model to history matching the new model is often time-consuming. This paper presents new algorithms that allow rapid updating of object-based facies models by further development of already existing models. An existing facies realization is adjusted to match new well observations by changing objects locally or adding/removing objects if required. Parts of the realization that are not influenced by the new wells are not changed. A local update of a specified region of the reservoir can be performed, leaving the rest of the reservoir unchanged or minimally changed by the new wells. The main focus of this method is the algorithm implemented to fulfill well conditioning. The effect of this algorithm on different object models is presented through several case studies. These studies show how the local update consistently includes new information while leaving the rest of the realization unperturbed, thereby preserving the good history match. Introduction Rapid updating of static and dynamic reservoir models is important for reservoir management. Continual maintenance of history-matched models allows for right-time decisions to optimize reservoir performance. The process of drilling a new well through to updating of the static model and history matching of the new model is often time-consuming. Static reservoir models and history matches are updated only intermittently, and there is typically a 1- to 2-year delay between the drilling of a new well and the generation of a reliable history-matched model that incorporates the new information. This paper presents new algorithms that allow rapid updating of static reservoir models when new wells are drilled. The static-model update is designed to keep as much of the existing history match as possible by locally adjusting the existing static model to the new well data. As the name implies, object models use a set of facies objects to generate a facies realization. Stochastic object-modeling algorithms have been developed to improve the representation of facies architectures in complex heterogeneous reservoirs and, thereby, to obtain more-realistic dynamic behavior of the reservoir models. We consider the main advantages of object models to be the ability to create geologically realistic facies elements (objects) and control the interaction between them, to correlate observations between wells (connectivity) explicitly, and to apply intraobject petrophysical trends.
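[Editor's note] A heavily simplified sketch of the local-update idea: freeze objects outside a user-specified update region, and inside it re-propose objects near each new well pick until the observed facies is honored. The object representation, region shape, and acceptance rule here are invented for illustration; the paper's actual conditioning algorithm is more sophisticated:

```python
import numpy as np

def in_box(p, lo, hi):
    """True if point p lies inside the axis-aligned update region [lo, hi]."""
    return bool(np.all(p >= lo) and np.all(p <= hi))

def local_update(objects, well_picks, lo, hi, rng, n_try=200, r=2.0):
    """Toy local update of an object-based facies model: objects outside
    the update region are frozen (preserving the history match there);
    inside the region, new objects are proposed near each well pick until
    the observed facies is honored. `objects` is a list of dicts with
    'center' and 'facies'; `well_picks` is a list of (point, facies)."""
    frozen = [o for o in objects if not in_box(o['center'], lo, hi)]
    conditioned = []
    for point, facies in well_picks:
        for _ in range(n_try):
            center = point + rng.uniform(-r, r, size=np.shape(point))
            if in_box(center, lo, hi):   # accept only in-region proposals
                conditioned.append({'center': center, 'facies': facies})
                break
    return frozen + conditioned
```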


SPE Journal ◽  
2006 ◽  
Vol 11 (04) ◽  
pp. 464-479 ◽  
Author(s):  
B. Todd Hoffman ◽  
Jef K. Caers ◽  
Xian-Huan Wen ◽  
Sebastien B. Strebelle

Summary This paper presents an innovative methodology to integrate prior geologic information, well-log data, seismic data, and production data into a consistent 3D reservoir model. Furthermore, the method is applied to a real channel reservoir from the African coast. The methodology relies on the probability-perturbation method (PPM). Perturbing probabilities rather than actual petrophysical properties guarantees that the conceptual geologic model is maintained and that history-matching-related artifacts are avoided. Creating reservoir models that match all types of data is likely to yield more prediction power than methods in which some data are not honored. The first part of the paper reviews the details of the PPM, and the next part describes the additional work required to history match real reservoirs using this method. Then, a geological description of the reservoir case study is provided, and the procedure to build 3D reservoir models conditioned only to the static data is covered. Because of the character of the field, the channels are modeled with a multiple-point geostatistical method. The channel locations are perturbed in a manner such that the oil, water, and gas rates from the reservoir more accurately match the rates observed in the field. Two different geologic scenarios are used, and multiple history-matched models are generated for each scenario. The reservoir has been producing for approximately 5 years, but the models are matched only to the first 3 years of production. Afterward, to check predictive power, the matched models are run for the last 1½ years, and the results compare favorably with the field data. Introduction Reservoir models are constructed to better understand reservoir behavior and to better predict reservoir response. Economic decisions are often based on the predictions from reservoir models; therefore, such predictions need to be as accurate as possible. To achieve this goal, the reservoir model should honor all sources of data, including well-log, seismic, geologic information, and dynamic (production rate and pressure) data. Incorporating dynamic data into the reservoir model is generally known as history matching. History matching is difficult because it poses a nonlinear inverse problem: the relationship between the reservoir model parameters and the dynamic data is highly nonlinear, and multiple solutions are available. Therefore, history matching is often done with a trial-and-error method. In real-world applications of history matching, reservoir engineers manually modify an initial model provided by geoscientists until the production data are matched. The initial model is built based on geological and seismic data. While attempts are usually made to honor these other data as much as possible, the history-matched models are often unrealistic from a geological (and geophysical) point of view. For example, permeability is often altered to increase or decrease flow in areas where a mismatch is observed; however, the permeability alterations usually come in the form of box-shaped or pipe-shaped geometries centered around wells or between wells and tend to be devoid of any geological considerations. The primary focus lies in obtaining a history match.
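[Editor's note] The core PPM update is a one-parameter blend between the indicator field of the current realization and the prior facies probability. The sketch below shows that blend; the surrounding multiple-point simulation and flow-misfit loop are only indicated in comments, and those callables are hypothetical names, not the authors' code:

```python
import numpy as np

def perturb_probability(i_current, p_prior, r):
    """Core PPM update (sketch): blend the facies indicator of the current
    realization with the prior facies probability. r = 0 reproduces the
    current model; r = 1 draws an entirely new model from the prior."""
    return (1.0 - r) * i_current + r * p_prior

# A one-dimensional search on r then drives the production misfit down:
#   for r in np.linspace(0.0, 1.0, 11):
#       prob = perturb_probability(i_cur, p_pri, r)
#       model = mps_simulate(soft_prob=prob)   # hypothetical MPS call
#       score = flow_misfit(model)             # hypothetical flow misfit
```

Because only the probability field is perturbed and the geostatistical simulator regenerates the channels, every candidate model stays consistent with the conceptual geologic model, which is the method's key selling point.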


2020 ◽  
Author(s):  
Konrad Wojnar ◽  
Jon Sætrom ◽  
Tore Felix Munck ◽  
Martha Stunell ◽  
Stig Sviland-Østre ◽  
...  

Abstract The aim of the study was to create an ensemble of equiprobable models that could be used to improve the reservoir management of the Vilje field. Qualitative and quantitative workflows were developed to systematically and efficiently screen, analyze, and history match an ensemble of reservoir simulation models against production and 4D seismic data. The goal of developing the workflows is to increase the utilization of data from 4D seismic surveys for reservoir characterization. The qualitative and quantitative workflows are presented, together with their benefits and challenges. The data conditioning produced a set of history-matched reservoir models that can be used in the field-development decision-making process. The proposed workflows allowed for identification of outlying prior and posterior models based on key features where the observed data were not covered by the synthetic 4D seismic realizations. As a result, suggestions for a more robust parameterization of the ensemble were made to improve data coverage. The existing history matching workflow integrated efficiently with the quantitative 4D seismic history matching workflow, allowing the reservoir models to be conditioned to production and 4D data and thus improving their predictability. This paper proposes a systematic and efficient workflow that uses ensemble-based methods to simultaneously screen, analyze, and history match production and 4D seismic data. The proposed workflow improves the usability of 4D seismic data for reservoir characterization and, in turn, for reservoir management and decision-making.
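[Editor's note] The outlier/coverage screening mentioned above can be illustrated with a simple check of whether each observed data point falls inside the ensemble's synthetic-response spread. This is an assumed reading of the workflow's qualitative step, not the authors' implementation:

```python
import numpy as np

def coverage_flags(d_obs, D_ens, tol=0.0):
    """Flag observations that fall outside the spread of the ensemble's
    synthetic responses. d_obs: (Nd,) observed data (production or 4D
    seismic attributes); D_ens: (Nd, Ne) synthetic responses. A True flag
    marks a data point not covered by any realization, suggesting the
    prior parameterization is too narrow."""
    lo = D_ens.min(axis=1) - tol
    hi = D_ens.max(axis=1) + tol
    return (d_obs < lo) | (d_obs > hi)

# If coverage_flags(...).mean() is large, widen the prior parameter
# ranges before rerunning the ensemble history match.
```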


2013 ◽  
Vol 55 ◽  
pp. 84-95 ◽  
Author(s):  
Leila Heidari ◽  
Véronique Gervais ◽  
Mickaële Le Ravalec ◽  
Hans Wackernagel

2019 ◽  
Author(s):  
Thiago M. D. Silva ◽  
Abelardo Barreto ◽  
Sinesio Pesco

Ensemble-based methods have been widely used in uncertainty quantification and, particularly, in reservoir history matching. The search for a more robust method that can handle highly nonlinear problems is the focus of this area. The Ensemble Kalman Filter (EnKF) is a popular tool for these problems, but studies have noted uncertainty in the results of the final ensemble, which depends strongly on the initial ensemble. The Ensemble Smoother (ES) is an alternative, with an easier implementation and low computational cost; however, it presents the same problem as the EnKF. The Ensemble Smoother with Multiple Data Assimilation (ES-MDA) seems to be a good alternative to these ensemble-based methods, since it assimilates the same data multiple times. In this work, we analyze the efficiency of the Ensemble Smoother and the Ensemble Smoother with Multiple Data Assimilation in a reservoir history matching of a turbidite model with three layers, considering permeability estimation and data mismatch.
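[Editor's note] For reference, the relation between ES and ES-MDA is fixed by the inflation factors: ES-MDA assimilates the same data N_a times with inflated measurement noise, subject to the consistency condition that the reciprocals of the inflation factors sum to one, and ES is the special case of a single assimilation. A small sketch:

```python
def check_alphas(alphas, tol=1e-8):
    """ES-MDA inflation factors must satisfy sum(1/alpha) == 1 so that the
    repeated, damped updates are consistent with a single full update.
    ES is the special case of one assimilation with alpha = 1."""
    assert abs(sum(1.0 / a for a in alphas) - 1.0) < tol

check_alphas([1.0])                  # Ensemble Smoother: one assimilation
check_alphas([4.0, 4.0, 4.0, 4.0])   # common ES-MDA choice
check_alphas([28/3, 7.0, 4.0, 2.0])  # a decreasing schedule also works
```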


2005 ◽  
Vol 8 (05) ◽  
pp. 426-436 ◽  
Author(s):  
Hao Cheng ◽  
Arun Kharghoria ◽  
Zhong He ◽  
Akhil Datta-Gupta

Summary We propose a novel approach to history matching finite-difference models that combines the advantages of streamline models with the versatility of finite-difference simulation. Current streamline models are limited in their ability to incorporate complex physical processes and cross-streamline mechanisms in a computationally efficient manner. A unique feature of streamline models is their ability to analytically compute the sensitivity of the production data with respect to reservoir parameters using a single flow simulation. These sensitivities define the relationship between small changes in reservoir parameters and the resulting changes in production response and thus form the basis for many history-matching algorithms. In our approach, we use the streamline-derived sensitivities to facilitate history matching during finite-difference simulation. First, the velocity field from the finite-difference model is used to compute streamline trajectories, time of flight, and parameter sensitivities. The sensitivities are then used in an inversion algorithm to update the reservoir model during finite-difference simulation. The use of a finite-difference model allows us to account for detailed process physics and compressibility effects. Although the streamline-derived sensitivities are only approximate, they do not seem to noticeably impact the quality of the match or the efficiency of the approach. For history matching, we use a generalized travel-time inversion (GTTI) that is shown to be robust because of its quasilinear properties and that converges in only a few iterations. The approach is very fast and avoids many of the subjective judgments and time-consuming trial-and-error steps associated with manual history matching. We demonstrate the power and utility of our approach with a synthetic example and two field examples. The first is from a CO2 pilot area in the Goldsmith San Andres Unit (GSAU), a dolomite formation in west Texas with more than 20 years of waterflood production history. The second is from a Middle Eastern reservoir and involves history matching a multimillion-cell geologic model with 16 injectors and 70 producers. The final model preserved all of the prior geologic constraints while matching 30 years of production history. Introduction Geological models derived from static data alone often fail to reproduce the field production history. Reconciling geologic models to the dynamic response of the reservoir is critical to building reliable reservoir models. Classical history-matching procedures, whereby reservoir parameters are adjusted manually by trial and error, can be tedious and often yield a reservoir description that may not be realistic or consistent with the geologic interpretation. In recent years, several techniques have been developed for integrating production data into reservoir models. Integration of dynamic data typically requires a least-squares-based minimization to match the observed and calculated production response. There are several approaches to such minimization, and these can be classified broadly into three categories: gradient-based methods, sensitivity-based methods, and derivative-free methods. The derivative-free approaches, such as simulated annealing or genetic algorithms, require numerous flow simulations and can be computationally prohibitive for field-scale applications.
Gradient-based methods have been used widely for automatic history matching, although their convergence rates are typically slower than those of sensitivity-based methods such as the Gauss-Newton or LSQR method. An integral part of the sensitivity-based methods is the computation of sensitivity coefficients. These sensitivities are simply partial derivatives that define the change in production response caused by small changes in reservoir parameters. There are several approaches to calculating sensitivity coefficients, and these generally fall into one of three categories: the perturbation method, the direct method, and the adjoint-state method. Conceptually, the perturbation approach is the simplest and requires the fewest changes in an existing code. Sensitivities are estimated simply by perturbing the model parameters one at a time by a small amount and then computing the corresponding production response. This approach requires (N+1) forward simulations, where N is the number of parameters. Obviously, it can be computationally prohibitive for reservoir models with many parameters. In the direct or sensitivity-equation method, the flow and transport equations are differentiated to obtain expressions for the sensitivity coefficients. Because there is one equation for each parameter, this approach requires about the same amount of work as the perturbation method. A variation of this method, called the gradient simulator method, uses the discretized version of the flow equations and takes advantage of the fact that the coefficient matrix remains unchanged for all the parameters and needs to be decomposed only once. Thus, the sensitivity computation for each parameter requires only a matrix/vector multiplication. This method can still be computationally expensive for a large number of parameters. Finally, the adjoint-state method requires derivation and solution of adjoint equations that can be quite cumbersome for multiphase-flow applications. Furthermore, the number of adjoint solutions will generally depend on the amount of production data and, thus, the length of the production history.
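[Editor's note] The perturbation method described above is simple enough to sketch directly. This is the generic one-sided finite-difference scheme (N+1 runs for N parameters), not the authors' streamline-based computation:

```python
import numpy as np

def perturbation_sensitivities(forward, m, eps=1e-4):
    """One-sided finite-difference sensitivities (the 'perturbation
    method'): one base run plus one run per parameter, i.e. N + 1 forward
    simulations for N parameters, which is why it scales poorly.
    forward: callable mapping a parameter vector m to predicted data."""
    d0 = forward(m)
    S = np.empty((len(d0), len(m)))
    for i in range(len(m)):
        mp = m.copy()
        mp[i] += eps
        S[:, i] = (forward(mp) - d0) / eps   # column i holds dd/dm_i
    return S
```

Streamline-derived sensitivities replace this entire loop with quantities extracted from a single flow simulation, which is the efficiency gain the paper exploits.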


2019 ◽  
Vol 24 (1) ◽  
pp. 217-239 ◽  
Author(s):  
Kristian Fossum ◽  
Trond Mannseth ◽  
Andreas S. Stordal

Abstract Multilevel ensemble-based data assimilation (DA) is considered as an alternative to standard (single-level) ensemble-based DA for reservoir history matching problems. Restricted computational resources currently limit the ensemble size to about 100 for field-scale cases, resulting in large sampling errors if no measures are taken to prevent them. With multilevel methods, the computational resources are spread over models with different accuracy and computational cost, enabling a substantially increased total ensemble size; reduced numerical accuracy is thus partially traded for increased statistical accuracy. A novel multilevel DA method, the multilevel hybrid ensemble Kalman filter (MLHEnKF), is proposed. Both the expected and the true efficiency of a previously published multilevel method, the multilevel ensemble Kalman filter (MLEnKF), and of the MLHEnKF are assessed for a toy model and two reservoir models. A multilevel sequence of approximations is introduced for all models: via spatial grid coarsening and simple upscaling for the reservoir models, and via a designed synthetic sequence for the toy model. For all models, the finest discretization level is assumed to correspond to the exact model. The results show that, despite its good theoretical properties, MLEnKF does not perform well for the reservoir history matching problems considered. We also show that this is probably because the assumptions underlying its theoretical properties are not fulfilled for the multilevel reservoir models considered. The performance of MLHEnKF, which is designed to handle restricted computational resources well, is quite good. Furthermore, the toy model is used to set up a case where the assumptions underlying the theoretical properties of MLEnKF are fulfilled. In that case, MLEnKF performs very well and clearly better than MLHEnKF.
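[Editor's note] The multilevel trade-off can be illustrated with a toy budget allocation: given a fixed compute budget and cheaper coarse levels, the total number of realizations grows well beyond a single-level ensemble. The equal-split rule below is a deliberate simplification; actual MLEnKF/MLHEnKF allocations are derived from variance-versus-cost analysis:

```python
def multilevel_sizes(budget, costs):
    """Toy multilevel allocation: split a fixed compute budget equally
    across fidelity levels. costs: per-run cost at each level, in units
    of one fine-model run; returns the ensemble size per level."""
    per_level = budget / len(costs)
    return [max(1, int(per_level // c)) for c in costs]

# With a budget of 100 fine-model runs and coarse models 16x and 4x
# cheaper, the multilevel ensemble totals 699 realizations versus a
# single-level ensemble of 100:
print(multilevel_sizes(100.0, [1 / 16, 1 / 4, 1.0]))  # -> [533, 133, 33]
```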

