Multiple Model Seismic and Production History Matching: A Case Study

SPE Journal ◽  
2006 ◽  
Vol 11 (04) ◽  
pp. 418-430 ◽  
Author(s):  
Karl D. Stephen ◽  
Juan Soldo ◽  
Colin MacBeth ◽  
Mike A. Christie

Summary
Time-lapse (or 4D) seismic is increasingly used as a qualitative description of reservoir behavior for management and decision-making purposes. When combined quantitatively with geological and flow modeling as part of history matching, it yields improved predictions of reservoir production. Here, we apply a multiple-model history-matching method based on the simultaneous comparison of the spatial data offered by seismic and of individual-well production data. Using a petroelastic transform and suitable rescaling, forward-modeled simulations are converted into predictions of seismic impedance attributes and compared to the observed data through a misfit calculation. A similar approach is applied to the dynamic well data. This approach improves on gradient-based methods by avoiding entrapment in local minima. We demonstrate the method by applying it to the UKCS Schiehallion reservoir, updating the operator's model. We consider a number of parameters to be uncertain. The reservoir's net-to-gross ratio is first updated to better match the observed baseline acoustic impedance derived from the RMS amplitudes of the migrated stack. We then history match simultaneously for permeability, fault-transmissibility multipliers, and the petroelastic-transform parameters. Our results show a good match to the observed seismic and well data, with significant improvement over the base case.

Introduction
Reservoir management requires tools such as simulation models to predict asset behavior. History matching is often employed to alter these models so that they compare favorably to observed well rates and pressures. This well information is obtained at discrete locations and thus lacks the areal coverage needed to constrain dynamic reservoir parameters such as permeability and the location and effect of faults. Time-lapse seismic captures the effect of pressure and saturation on seismic impedance attributes, giving 2D maps or 3D volumes of the missing information. Seismic history matching attempts to combine the benefits of both types of information to improve estimates of the reservoir-model parameters. We first present an automated multiple-model history-matching method that includes time-lapse seismic along with production data, based on an integrated workflow (Fig. 1). It improves on the classical approach, wherein the engineer manually adjusts parameters in the simulation model. Our method also improves on gradient-based methods, such as the steepest-descent, Gauss-Newton, and Levenberg-Marquardt algorithms (e.g., Lépine et al. 1999; Dong and Oliver 2003; Gosselin et al. 2003; Mezghani et al. 2004), which are good at finding local likelihood maxima but can fail to find the global maximum. Our method is also faster than stochastic methods such as genetic algorithms and simulated annealing, which often require more simulations and may converge more slowly. Finally, multiple models are generated, enabling posterior uncertainty analysis in a Bayesian framework (as in Stephen and MacBeth 2006a).
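
The misfit the summary refers to is, in general form, a least-squares comparison between observed and forward-modeled data, with the seismic and well terms combined into one objective. A minimal sketch of that idea in Python, assuming independent Gaussian errors; the array names and the simple sum are illustrative, not the paper's actual formulation:

```python
import numpy as np

def misfit(obs, sim, sigma):
    """Least-squares misfit, assuming independent Gaussian errors
    with standard deviation sigma on each datum."""
    return 0.5 * np.sum(((obs - sim) / sigma) ** 2)

def total_misfit(obs_seis, sim_seis, sig_seis, obs_prod, sim_prod, sig_prod):
    """Combined objective: seismic impedance attributes plus well data,
    each term normalized by its own error estimate (illustrative arrays)."""
    return misfit(obs_seis, sim_seis, sig_seis) + misfit(obs_prod, sim_prod, sig_prod)
```

In a Bayesian multiple-model framework, such a misfit maps to a likelihood proportional to exp(-M), which is what enables the posterior uncertainty analysis mentioned above.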

SPE Journal ◽  
2007 ◽  
Vol 12 (04) ◽  
pp. 475-485 ◽  
Author(s):  
Hao Cheng ◽  
Adedayo Stephen Oyerinde ◽  
Akhil Datta-Gupta ◽  
William J. Milliken

Summary
Reconciling high-resolution geologic models to field production history remains by far the most time-consuming aspect of the workflow for both geoscientists and engineers. Recently, streamline-based assisted and automatic history-matching techniques have shown great potential in this regard, and several field applications have demonstrated the feasibility of the approach. However, most of these applications have been limited to two-phase water/oil flow under incompressible or slightly compressible conditions. We propose an approach to history matching three-phase flow using a novel compressible streamline formulation and streamline-derived analytic sensitivities. First, we use a generalized streamline model to account for compressible flow by introducing an "effective density" of total fluids along streamlines. This density term rigorously captures changes in fluid volumes with pressure and is easily traced along streamlines. A density-dependent source term in the saturation equation further accounts for pressure effects during saturation calculations along streamlines. Our approach preserves the 1D nature of the saturation equation and all the associated advantages of the streamline approach, with only minor modifications to existing streamline models. Second, we analytically compute parameter sensitivities that define the relationship between the reservoir properties and the production response, viz. water cut and gas/oil ratio (GOR). These sensitivities are an integral part of history matching, and streamline models permit their efficient computation through a single flow simulation. Finally, for history matching, we use a generalized travel-time inversion that has been shown to be robust because of its quasilinear properties and that converges in only a few iterations. The approach is very fast and avoids much of the subjective judgment and time-consuming trial and error inherent in manual history matching. We demonstrate the power and utility of our approach using both synthetic and field-scale examples. The synthetic case is used to validate our method; it entails the joint integration of water-cut and GOR data from a nine-spot pattern in reconstructing a reference permeability field. The field-scale example is a modified version of the ninth SPE comparative study and consists of 25 producers, 1 injector, and aquifer influx. Starting with a prior geologic model, we integrate the water-cut and GOR history by generalized travel-time inversion. Our approach is very fast and preserves geologic continuity.

Introduction
Integration of production data typically requires the minimization of a predefined data misfit and penalty terms to match the observed and calculated production response (Oliver 1994; Vasco et al. 1999; Datta-Gupta et al. 2001; Reis et al. 2000; Landa et al. 1996; Anterion et al. 1989; Wu et al. 1999; Wang and Kovscek 2000; Sahni and Horne 2005). There are several approaches to such minimization, and these can be broadly classified into three categories: gradient-based methods, sensitivity-based methods, and derivative-free methods (Oliver 1994). Derivative-free approaches such as simulated annealing and genetic algorithms require numerous flow simulations and can be computationally prohibitive for field-scale applications with very large numbers of parameters.
Gradient-based methods have been widely used for automatic history matching, although their rate of convergence is typically slower than that of sensitivity-based methods such as the Gauss-Newton or LSQR method (Vega et al. 2004). An integral part of the sensitivity-based methods is the computation of sensitivity coefficients. There are several approaches to calculating sensitivity coefficients, and these generally fall into one of three categories: perturbation methods, direct methods, and adjoint-state methods. The perturbation approach is the simplest and requires the fewest changes to an existing code: each parameter is perturbed in turn and the response recomputed, so it requires (N+1) forward simulations, where N is the number of parameters. Obviously, this can be computationally prohibitive for reservoir models with many parameters. In the direct, or sensitivity-equation, method, the flow and transport equations are differentiated to obtain expressions for the sensitivity coefficients (Vasco et al. 1999). Because there is one equation for each parameter, this approach can require roughly the same amount of work as the perturbation method. A variation of this method, called the gradient-simulator method, uses the discretized version of the flow equations and takes advantage of the fact that the coefficient matrix remains unchanged for all parameters and needs to be decomposed only once (Anterion et al. 1989). Sensitivity computation for each parameter then requires only a matrix-vector multiplication. This is a significant improvement but can still be computationally demanding for a large number of parameters. Finally, the adjoint-state method requires the derivation and solution of adjoint equations, which can be significantly fewer in number than the sensitivity equations. The adjoint equations are obtained by minimizing the production-data misfit with the flow equations as constraints, and the implementation can be quite complex and cumbersome for multiphase-flow applications (Wu et al. 1999). Furthermore, the number of adjoint solutions generally depends on the amount of production data and can thus be restrictive for field-scale applications.
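
The (N+1)-simulation cost of the perturbation method is easy to see in code. A schematic sketch, where run_simulator is a hypothetical stand-in for a full forward flow simulation returning the production response as an array:

```python
import numpy as np

def perturbation_sensitivities(run_simulator, m, delta=1e-4):
    """Finite-difference sensitivities S[i, j] = d(data_i)/d(param_j).
    Requires one base run plus one run per parameter: N+1 simulations."""
    d0 = run_simulator(m)                      # base response
    S = np.zeros((d0.size, m.size))
    for j in range(m.size):                    # one extra simulation per parameter
        m_pert = m.copy()
        m_pert[j] += delta
        S[:, j] = (run_simulator(m_pert) - d0) / delta
    return S
```

Each column of S costs one full simulation, which is exactly why streamline-derived analytic sensitivities, obtained from a single run, are attractive.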


2021 ◽  
Author(s):  
Xindan Wang ◽  
Yin Zhang ◽  
Abhijit Dandekar ◽  
Yudou Wang

Abstract
Chemical flooding has been widely used to enhance oil recovery after conventional waterflooding. However, modeling chemical flooding accurately remains a challenge, since many of the model parameters cannot be measured accurately in the lab, and some cannot be obtained from laboratory experiments at all. Recently, ensemble-based assisted history-matching techniques have proven efficient and effective in estimating multiple model parameters simultaneously. This study therefore validates the effectiveness of the ensemble-based method in estimating model parameters for chemical-flooding simulation, employing the half-iteration EnKF (HIEnKF) method to conduct the assisted history matching. In this work, five surfactant-polymer (SP) coreflooding experiments were first conducted, and the corresponding core-scale simulation models were built to simulate the coreflooding experiments. The HIEnKF method was then applied to calibrate the core-scale simulation models by assimilating the observed data, including cumulative oil production and pressure drop, from the corresponding coreflooding experiments. The HIEnKF method was successfully applied to simultaneously estimate multiple model parameters, including the porosity and permeability fields, relative permeabilities, polymer viscosity curve, polymer adsorption curve, surfactant interfacial-tension (IFT) curve, and miscibility function curve, for the SP flooding simulation model. There is good agreement between the updated simulation results and the observation data, indicating that the updated model parameters appropriately characterize the properties of the corresponding porous media and of the fluid flow within it. This validates the effectiveness of the ensemble-based assisted history-matching method in chemical enhanced-oil-recovery (EOR) simulation. Based on the validated simulation model, numerical simulation tests were conducted to investigate the influence of injection schemes and operating parameters of SP flooding on ultimate oil recovery. The polymer concentration, surfactant concentration, and slug size of SP flooding were found to have a significant impact on oil recovery, and these parameters need to be optimized to achieve the maximum economic benefit.
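
For reference, the analysis step that this family of ensemble methods builds on can be written in a few lines. A minimal sketch of a standard EnKF parameter update, not the specific HIEnKF variant used in the study; all array names are illustrative:

```python
import numpy as np

def enkf_update(M, D_sim, d_obs, sigma):
    """Standard EnKF analysis step.
    M:     (n_params, n_ens) ensemble of model parameters
    D_sim: (n_data, n_ens) simulated data for each ensemble member
    d_obs: (n_data,) observed data
    sigma: (n_data,) observation-error standard deviations"""
    n_ens = M.shape[1]
    A = M - M.mean(axis=1, keepdims=True)          # parameter anomalies
    Y = D_sim - D_sim.mean(axis=1, keepdims=True)  # data anomalies
    C_dd = Y @ Y.T / (n_ens - 1) + np.diag(sigma**2)
    C_md = A @ Y.T / (n_ens - 1)
    K = C_md @ np.linalg.inv(C_dd)                 # Kalman gain
    # Perturb observations so the updated ensemble keeps the correct spread
    D_obs = d_obs[:, None] + sigma[:, None] * np.random.randn(len(d_obs), n_ens)
    return M + K @ (D_obs - D_sim)
```

In the coreflooding application described above, the rows of D_sim would hold the simulated cumulative oil production and pressure drop, and the rows of M the parameters being calibrated.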


SPE Journal ◽  
2010 ◽  
Vol 15 (02) ◽  
pp. 509-525 ◽  
Author(s):  
Yudou Wang ◽  
Gaoming Li ◽  
Albert C. Reynolds

Summary
With the ensemble Kalman filter (EnKF) or smoother (EnKS), it is easy to adjust a wide variety of model parameters by assimilation of dynamic data. We focus first on the case where realizations and estimates of the depths of the initial fluid contacts, as well as gridblock rock-property fields, are generated by matching production data with the EnKS. Then we add the parameters defining power-law relative permeability curves to the set of parameters estimated by assimilating production data with the EnKS. The efficiency of the EnKF and EnKS arises because data are assimilated sequentially in time, so history matching the data requires only one forward run of the reservoir simulator for each ensemble member. For the EnKS and EnKF to yield reliable characterizations of the uncertainty in model parameters and future performance predictions, the updated reservoir-simulation variables (e.g., saturations and pressures) must be statistically consistent with the realizations of these variables that would be obtained by rerunning the simulator from time zero using the updated model parameters. This statistical consistency can be established only under assumptions of Gaussianity and linearity that do not normally hold. Here, we use iterative EnKS methods that are statistically consistent and show that, for the problems considered here, iteration significantly improves the performance of the EnKS.
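
The power-law relative permeability curves whose defining parameters are estimated here can be written in Corey-type form; a short sketch under that assumption, with the endpoints and exponents playing the role of history-matching parameters:

```python
import numpy as np

def power_law_relperm(Sw, Swc, Sor, krw_max, kro_max, nw, no):
    """Corey-type (power-law) water/oil relative permeability curves.
    Swc: connate-water saturation, Sor: residual-oil saturation,
    nw, no: power-law exponents (the adjustable parameters)."""
    Se = np.clip((Sw - Swc) / (1.0 - Swc - Sor), 0.0, 1.0)  # normalized saturation
    krw = krw_max * Se**nw
    kro = kro_max * (1.0 - Se)**no
    return krw, kro
```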


Geophysics ◽  
2020 ◽  
Vol 85 (1) ◽  
pp. M15-M31 ◽  
Author(s):  
Mingliang Liu ◽  
Dario Grana

We have developed a time-lapse seismic history-matching framework to assimilate production data and time-lapse seismic data for the prediction of static reservoir models. An iterative data-assimilation method, the ensemble smoother with multiple data assimilation (ES-MDA), is adopted to iteratively update an ensemble of reservoir models until their predicted observations match the actual production and seismic measurements, and to quantify the model uncertainty of the posterior reservoir models. To address the computational and numerical challenges of applying ensemble-based optimization methods to large seismic data volumes, we develop a deep representation-learning method, namely the deep convolutional autoencoder. This method reduces the data dimensionality by sparsely and approximately representing the seismic data with a set of hidden features that capture the nonlinear and spatial correlations in the data space. Instead of using the entire seismic dataset, which would require an extremely large number of models, the ensemble of reservoir models is iteratively updated by conditioning the reservoir realizations on the production data and on the low-dimensional hidden features extracted from the seismic measurements. We test our methodology on two synthetic datasets: a simplified 2D reservoir used for method validation and a 3D application with multiple channelized reservoirs. The results indicate that the deep convolutional autoencoder is extremely efficient in sparsely representing the seismic data and that the reservoir models can be accurately updated according to the production data and the reparameterized time-lapse seismic data.
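
ES-MDA repeats an EnKF-like update a fixed number of times, inflating the observation-error covariance by coefficients alpha_i that satisfy sum_i 1/alpha_i = 1. A schematic sketch, where forward_model is a hypothetical stand-in for the flow simulation plus the mapping of predicted seismic to autoencoder features:

```python
import numpy as np

def esmda(forward_model, M, d_obs, sigma, n_assim=4):
    """ES-MDA: iterative ensemble update with inflated observation errors.
    The inflation coefficients alpha must satisfy sum(1/alpha) = 1;
    alpha_i = n_assim for every step is the simplest valid choice."""
    n_ens = M.shape[1]
    for alpha in [n_assim] * n_assim:
        D = np.column_stack([forward_model(M[:, j]) for j in range(n_ens)])
        A = M - M.mean(axis=1, keepdims=True)
        Y = D - D.mean(axis=1, keepdims=True)
        C_dd = Y @ Y.T / (n_ens - 1) + alpha * np.diag(sigma**2)
        K = (A @ Y.T / (n_ens - 1)) @ np.linalg.inv(C_dd)
        D_obs = d_obs[:, None] + np.sqrt(alpha) * sigma[:, None] \
            * np.random.randn(len(d_obs), n_ens)
        M = M + K @ (D_obs - D)
    return M
```

In the workflow described here, the seismic portion of d_obs would be the low-dimensional hidden features extracted by the convolutional autoencoder rather than the raw seismic volumes.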


2005 ◽  
Vol 8 (05) ◽  
pp. 426-436 ◽  
Author(s):  
Hao Cheng ◽  
Arun Kharghoria ◽  
Zhong He ◽  
Akhil Datta-Gupta

Summary
We propose a novel approach to history matching finite-difference models that combines the advantages of streamline models with the versatility of finite-difference simulation. Current streamline models are limited in their ability to incorporate complex physical processes and cross-streamline mechanisms in a computationally efficient manner. A unique feature of streamline models is their ability to analytically compute the sensitivity of the production data with respect to reservoir parameters using a single flow simulation. These sensitivities define the relationship between changes in production response and small changes in reservoir parameters and, thus, form the basis for many history-matching algorithms. In our approach, we use the streamline-derived sensitivities to facilitate history matching during finite-difference simulation. First, the velocity field from the finite-difference model is used to compute streamline trajectories, time of flight, and parameter sensitivities. The sensitivities are then used in an inversion algorithm to update the reservoir model during finite-difference simulation. The use of a finite-difference model allows us to account for detailed process physics and compressibility effects. Although the streamline-derived sensitivities are only approximate, they do not appear to noticeably impact the quality of the match or the efficiency of the approach. For history matching, we use a generalized travel-time inversion (GTTI) that is shown to be robust because of its quasilinear properties and that converges in only a few iterations. The approach is very fast and avoids many of the subjective judgments and time-consuming trial-and-error steps associated with manual history matching. We demonstrate the power and utility of our approach with a synthetic example and two field examples. The first is from a CO2 pilot area in the Goldsmith San Andres Unit (GSAU), a dolomite formation in West Texas with more than 20 years of waterflood production history. The second is from a Middle Eastern reservoir and involves history matching a multimillion-cell geologic model with 16 injectors and 70 producers. The final model preserved all of the prior geologic constraints while matching 30 years of production history.

Introduction
Geological models derived from static data alone often fail to reproduce the field production history. Reconciling geologic models to the dynamic response of the reservoir is critical to building reliable reservoir models. Classical history-matching procedures, whereby reservoir parameters are adjusted manually by trial and error, can be tedious and often yield a reservoir description that may not be realistic or consistent with the geologic interpretation. In recent years, several techniques have been developed for integrating production data into reservoir models. Integration of dynamic data typically requires a least-squares-based minimization to match the observed and calculated production response. There are several approaches to such minimization, and these can be classified broadly into three categories: gradient-based methods, sensitivity-based methods, and derivative-free methods. The derivative-free approaches, such as simulated annealing or genetic algorithms, require numerous flow simulations and can be computationally prohibitive for field-scale applications.
Gradient-based methods have been used widely for automatic history matching, although their convergence rates are typically slower than those of sensitivity-based methods such as the Gauss-Newton or LSQR method. An integral part of the sensitivity-based methods is the computation of sensitivity coefficients. These sensitivities are simply partial derivatives that define the change in production response caused by small changes in reservoir parameters. There are several approaches to calculating sensitivity coefficients, and these generally fall into one of three categories: perturbation methods, direct methods, and adjoint-state methods. Conceptually, the perturbation approach is the simplest and requires the fewest changes in an existing code. Sensitivities are estimated simply by perturbing the model parameters one at a time by a small amount and then computing the corresponding production response. This approach requires (N+1) forward simulations, where N is the number of parameters, and is therefore computationally prohibitive for reservoir models with many parameters. In the direct, or sensitivity-equation, method, the flow and transport equations are differentiated to obtain expressions for the sensitivity coefficients. Because there is one equation for each parameter, this approach can require roughly the same amount of work as the perturbation method. A variation of this method, called the gradient-simulator method, uses the discretized version of the flow equations and takes advantage of the fact that the coefficient matrix remains unchanged for all the parameters and needs to be decomposed only once. Sensitivity computation for each parameter then requires only a matrix-vector multiplication. This method can still be computationally expensive for a large number of parameters. Finally, the adjoint-state method requires the derivation and solution of adjoint equations, which can be quite cumbersome for multiphase-flow applications. Furthermore, the number of adjoint solutions generally depends on the amount of production data and, thus, on the length of the production history.
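
The generalized travel-time idea can be illustrated compactly: instead of matching amplitudes point by point, one seeks the single time shift that best aligns the observed and computed responses at a well. A minimal sketch with illustrative arrays; the published method additionally derives analytic sensitivities of this shift with respect to reservoir parameters:

```python
import numpy as np

def generalized_travel_time(t, wct_obs, wct_sim, max_shift=500.0, n_trial=201):
    """Find the time shift dt that best aligns observed and simulated
    water-cut curves, i.e. minimizes the misfit between wct_obs(t + dt)
    and wct_sim(t) over a grid of trial shifts."""
    shifts = np.linspace(-max_shift, max_shift, n_trial)
    errors = [np.sum((np.interp(t + dt, t, wct_obs) - wct_sim) ** 2)
              for dt in shifts]
    return shifts[int(np.argmin(errors))]   # optimal travel-time shift
```

Matching one shift per well, rather than every amplitude sample, is what gives the inversion its quasilinear behavior and fast convergence.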


2013 ◽  
Vol 748 ◽  
pp. 614-618 ◽  
Author(s):  
Bao Yi Jiang ◽  
Zhi Ping Li ◽  
Cheng Wen Zhang ◽  
Xi Gang Wang

Numerical reservoir models are constructed from limited static and dynamic data, and history matching is the process of changing model parameters to find a set of values for which the reservoir simulation reproduces the observed historical production data. Minimizing the objective function involved in the history-matching procedure requires optimization algorithms. This paper reviews and compares several optimization algorithms used in automatic history matching.
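
As a toy illustration of such a comparison, the following sketch minimizes a least-squares history-matching objective with one derivative-free and two gradient-based algorithms from scipy.optimize; the two-parameter exponential "simulator" is hypothetical, not a case from the paper:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical toy "simulator": production response from two parameters.
t = np.linspace(0.0, 10.0, 50)
def simulate(m):
    return m[0] * np.exp(-m[1] * t)

d_obs = simulate(np.array([2.0, 0.3]))         # synthetic "observed" history

def objective(m):
    return np.sum((simulate(m) - d_obs) ** 2)  # least-squares misfit

# Compare a derivative-free method with two gradient-based methods.
for method in ("Nelder-Mead", "BFGS", "L-BFGS-B"):
    res = minimize(objective, x0=np.array([1.0, 1.0]), method=method)
    print(f"{method:12s} -> params {res.x}, misfit {res.fun:.3e}, evals {res.nfev}")
```

Comparing the recovered parameters, final misfits, and function-evaluation counts across methods mirrors, in miniature, the kind of algorithm comparison the paper describes.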


SPE Journal ◽  
2010 ◽  
Vol 15 (04) ◽  
pp. 1062-1076 ◽  
Author(s):  
A. Seiler ◽  
S.I. Aanonsen ◽  
G. Evensen ◽  
J.C. Rivenæs

Summary
Although large uncertainties are typically associated with reservoir structure, the reservoir geometry is usually fixed at a single interpretation in history-matching workflows, and the focus is on estimating geological properties such as facies locations and porosity and permeability fields. Structural uncertainties can have significant effects on the bulk reservoir volume, well planning, and predictions of future production. In this paper, we consider an integrated reservoir-characterization workflow for structural-uncertainty assessment and continuous updating of the structural reservoir model by assimilation of production data. We address some of the challenges of structural-surface updating with the ensemble Kalman filter (EnKF). An ensemble of reservoir models, expressing explicitly the uncertainty resulting from seismic interpretation and time-to-depth conversion, is created. The top and bottom reservoir-horizon uncertainties are treated as parameters for assisted history matching and are updated by sequential assimilation of production data using the EnKF. To avoid modifications to the grid architecture, and thus to ensure a fixed dimension of the state vector, an elastic-grid approach is proposed: the geometry of a base-case simulation grid is deformed to match the realizations of the top and bottom reservoir horizons. The method is applied to a synthetic example, with promising results. The outcome is an ensemble of history-matched structural models with reduced and quantified uncertainty. The updated ensemble of structures provides a more reliable characterization of the reservoir architecture and a better estimate of the field oil in place.
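
The elastic-grid idea can be sketched as a vertical stretch of each column of the base-case grid so that it spans the realized top and bottom horizons while the number of layers, and hence the length of the EnKF state vector, stays fixed. A schematic reading in Python; the linear stretch is an assumption for illustration:

```python
import numpy as np

def deform_grid_column(z_base, top_new, bot_new):
    """Stretch the z-coordinates of one grid column so that it spans the
    realized top/bottom horizons while keeping the number of layers fixed.
    z_base: (n_layers+1,) base-case layer interfaces, top to bottom."""
    z_top, z_bot = z_base[0], z_base[-1]
    frac = (z_base - z_top) / (z_bot - z_top)    # relative depth in column
    return top_new + frac * (bot_new - top_new)  # linear vertical stretch
```

Because the deformed grid has the same dimensions as the base case, the horizon depths can be appended to the EnKF state vector and updated like any other parameter.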

