Conditioning Stochastic Reservoir Models to Well-Test Data

2000 ◽  
Vol 3 (01) ◽  
pp. 74-79 ◽  
Author(s):  
Nanqun He ◽  
Dean S. Oliver ◽  
Albert C. Reynolds

Summary Generating realizations of reservoir permeability and porosity fields that are conditional to both static and dynamic data is difficult. The constraints imposed by dynamic data are typically nonlinear, and the relationship between the observed data and the petrophysical parameters is given by a flow simulator, which is expensive to run. In addition, the spatial organization of real rock properties is quite complex. Thus, most attempts at conditioning reservoir properties to dynamic data have either approximated the relationship between data and parameters so that complex geologic models could be used, or have used simplified spatial models with actual production data. In this paper, we describe a multistep procedure for efficiently generating realizations of reservoir properties that honor dynamic data from complex stochastic models. First, we generate a realization of the rock properties that is conditioned to the static data, but not to the pressure data. Second, we generate a realization of the production data (i.e., we add random errors to the production data). Third, we find the property field that is as close as possible to the uncalibrated realization while also honoring the realization of the production data. The ensemble of realizations generated by this procedure often provides a good empirical approximation to the a posteriori probability density function for reservoir models and can be used for Monte Carlo inference. We apply the above procedure to the problem of conditioning a three-dimensional stochastic model to data from two well tests. The real-field example contains two facies. Permeabilities within each facies were generated using a "cloud transform" that honored the observed scatter in the crossplot of permeability and porosity. We cut a volume, containing both test wells, from the full-field model, then scaled it up to about 9,000 cells before calibrating to the pressure data.
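The three-step procedure described above (an unconditional realization, a perturbed data realization, and a minimization toward the data) can be sketched for a toy linear forward model, where the third step has a closed form. Everything here is an illustrative assumption: the matrix G stands in for the flow simulator, and the sizes and covariances are arbitrary, not the paper's actual field setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear forward model d = G m; in the paper this role is played by
# a flow simulator, so the third step is a nonlinear minimization.
n_m, n_d = 20, 5
G = rng.normal(size=(n_d, n_m))
C_M = np.eye(n_m)                 # prior covariance of the property field
C_D = 0.01 * np.eye(n_d)          # covariance of the measurement errors
m_prior = np.zeros(n_m)
d_obs = G @ rng.normal(size=n_m) + rng.multivariate_normal(np.zeros(n_d), C_D)

# Step 1: realization conditioned to static data only (here: drawn from the prior).
m_uc = m_prior + rng.multivariate_normal(np.zeros(n_m), C_M)

# Step 2: realization of the production data (add random errors).
d_uc = d_obs + rng.multivariate_normal(np.zeros(n_d), C_D)

# Step 3: the model closest to m_uc that honors d_uc; for a linear G this
# reduces to a single Kalman-style update.
K = C_M @ G.T @ np.linalg.inv(G @ C_M @ G.T + C_D)
m_c = m_uc + K @ (d_uc - G @ m_uc)
```

With a real simulator, step 3 becomes an iterative nonlinear least-squares minimization rather than this one-shot update.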
Although the well-test data were of poor quality, they provided information on the permeabilities within the regions of investigation and on the overall permeability average. Introduction The problem of generating plausible reservoir models that are conditional to dynamic or production-type data has been an active area of research for several years. Existing studies can be classified by the way in which they approach three key aspects of the problem: (1) the complexity of the stochastic geologic or petrophysical model; (2) the method of computing the pressure response from a reservoir model; and (3) the attention paid to the problem of sampling realizations from the a posteriori probability density function. Most researchers have worked with simple models (e.g., characterized by a variogram), used an effective well-test permeability instead of a flow simulator, and largely ignored the problem of sampling. Other, more sophisticated examples include the use of a complex stochastic geologic model (channels) and simulated annealing to sample from the a posteriori probability distribution function (PDF), but with an effective well-test permeability instead of pressure data (and a simulator) for conditioning.1 The works by Oliver2 and by Chu et al.3 provide other examples. In these cases, a flow simulator was used for conditioning, but the geology was relatively simple and realizations were generated using a linearization approximation around the maximum a posteriori model. Landa4 treated the problem of conditioning two-dimensional channels, but chose a simple model that could be described by a few parameters. A large part of our effort has gone into ensuring that the ensemble of realizations we generate is representative of the uncertainty in the reservoir properties. To do this rigorously, we have used the actual pressure data but have had to limit ourselves to Gaussian random fields and to fairly small synthetic models.
We recently applied Markov chain Monte Carlo (MCMC) methods5 to generate an ensemble of realizations because we believe they provide the best framework for ensuring that we obtain a representative set of realizations suitable for making economic decisions. The principal advantage of MCMC is that it provides a method for sampling realizations from complicated probability distributions, such as the distribution of reservoir models conditional to production data. The method consists of proposing a new realization and then deciding whether to accept the proposed realization or to again accept the current one. The "chain" refers to the sequence of accepted realizations, and "Monte Carlo" refers to the stochastic aspect of the proposal and acceptance steps. Unfortunately, it appears to be impractical to use MCMC methods for generating realizations that are conditional to production data. If realizations are proposed from a relatively simple probability density function (e.g., multivariate Gaussian), most realizations are rejected and the method is inefficient. Alternatively, if realizations are proposed from a PDF that is complicated but close to the desired PDF, the Metropolis-Hastings criterion, which involves the ratio of the probability of proposing the new realization to the probability of proposing the current realization, is difficult to evaluate. Oliver et al.6 proposed a methodology for incorporating production data that followed the second approach but ignored the Metropolis-Hastings criterion, instead accepting every realization. We showed that the method is rigorously valid for conditioning Gaussian random fields to linear data (i.e., weighted averages of model variables) and is easily adapted to more complex geostatistical models and types of data. Although the method is then no longer rigorously correct, we have shown that the distribution of realizations is good for simple, but highly nonlinear, problems.
The realizations generated using this methodology still honor all the data—the ensemble of realizations is, however, not a perfect representation of the true distribution even as the number of realizations becomes very large.

SPE Journal ◽  
2006 ◽  
Vol 11 (04) ◽  
pp. 464-479 ◽  
Author(s):  
B. Todd Hoffman ◽  
Jef K. Caers ◽  
Xian-Huan Wen ◽  
Sebastien B. Strebelle

Summary This paper presents an innovative methodology to integrate prior geologic information, well-log data, seismic data, and production data into a consistent 3D reservoir model. Furthermore, the method is applied to a real channel reservoir from the African coast. The methodology relies on the probability-perturbation method (PPM). Perturbing probabilities rather than actual petrophysical properties guarantees that the conceptual geologic model is maintained and that history-matching-related artifacts are avoided. Reservoir models that match all types of data are likely to have more predictive power than models in which some data are not honored. The first part of the paper reviews the details of the PPM, and the next part describes the additional work that is required to history-match real reservoirs using this method. Then, a geological description of the reservoir case study is provided, and the procedure to build 3D reservoir models that are conditioned only to the static data is covered. Because of the character of the field, the channels are modeled with a multiple-point geostatistical method. The channel locations are perturbed in a manner such that the oil, water, and gas rates from the reservoir more accurately match the rates observed in the field. Two different geologic scenarios are used, and multiple history-matched models are generated for each scenario. The reservoir has been producing for approximately 5 years, but the models are matched only to the first 3 years of production. Afterward, to check predictive power, the matched models are run for the last 1½ years, and the results compare favorably with the field data. Introduction Reservoir models are constructed to better understand reservoir behavior and to better predict reservoir response. Economic decisions are often based on the predictions from reservoir models; therefore, such predictions need to be as accurate as possible.
To achieve this goal, the reservoir model should honor all sources of data, including well-log, seismic, geologic, and dynamic (production rate and pressure) data. Incorporating dynamic data into the reservoir model is generally known as history matching. History matching is difficult because it poses a nonlinear inverse problem: the relationship between the reservoir model parameters and the dynamic data is highly nonlinear, and multiple solutions are available. Therefore, history matching is often done with a trial-and-error method. In real-world applications of history matching, reservoir engineers manually modify an initial model provided by geoscientists until the production data are matched. The initial model is built based on geological and seismic data. While attempts are usually made to honor these other data as much as possible, the history-matched models are often unrealistic from a geological (and geophysical) point of view. For example, permeability is often altered to increase or decrease flow in areas where a mismatch is observed; however, the permeability alterations usually come in the form of box-shaped or pipe-shaped geometries centered around wells or between wells and tend to be devoid of any geological considerations. The primary focus lies in obtaining a history match.
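The idea of perturbing probabilities rather than properties can be caricatured in a few lines: in the PPM, the current facies-indicator realization and the prior facies probability are blended with a single parameter r in [0, 1], so r = 0 freezes the current realization and r = 1 reverts to the prior. The sketch below is a minimal per-cell illustration with made-up values; in the actual method, the draw from the perturbed probabilities is done by sequential multiple-point simulation, which preserves channel geometry, not independently per cell.

```python
import numpy as np

def perturb_probability(i_current, p_prior, r):
    """PPM-style perturbed probability: r = 0 reproduces the current facies
    indicator realization i_current; r = 1 reverts to the prior marginal."""
    return (1.0 - r) * i_current + r * p_prior

rng = np.random.default_rng(2)
p_prior = 0.3                                   # prior channel proportion (toy value)
i_cur = (rng.uniform(size=100) < p_prior).astype(float)

p = perturb_probability(i_cur, p_prior, r=0.2)  # small perturbation of the model
# Toy draw of a new indicator realization from the perturbed probabilities;
# the real method performs this draw by multiple-point simulation.
i_new = (rng.uniform(size=100) < p).astype(float)
```

Searching over the single parameter r (rather than over every cell) is what keeps the history-matching loop consistent with the conceptual geologic model.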


2003 ◽  
Author(s):  
Asnul Bahar ◽  
Harun Ates ◽  
Maged H. Al-Deeb ◽  
Salem E. Salem ◽  
Hussein Badaam ◽  
...  

2013 ◽  
Vol 748 ◽  
pp. 614-618
Author(s):  
Bao Yi Jiang ◽  
Zhi Ping Li ◽  
Cheng Wen Zhang ◽  
Xi Gang Wang

Numerical reservoir models are constructed from the limited static and dynamic data available, and history matching is the process of changing model parameters to find a set of values for which the reservoir simulation predictions match the observed historical production data. Minimizing the objective function involved in the history-matching procedure requires optimization algorithms. This paper focuses on the optimization algorithms used in automatic history matching and compares several of them.
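A common form of the objective function in automatic history matching is a regularized least-squares misfit, and a gradient-based method such as Gauss-Newton is one typical optimizer. The sketch below is illustrative only: a toy linear "simulator" G stands in for the reservoir simulator, and all sizes and weights are assumptions.

```python
import numpy as np

def objective(m, G, d_obs, CD_inv, m_pr, CM_inv):
    """Regularized least-squares objective: data misfit plus prior term."""
    r = G @ m - d_obs
    dm = m - m_pr
    return 0.5 * r @ CD_inv @ r + 0.5 * dm @ CM_inv @ dm

def gauss_newton(m0, G, d_obs, CD_inv, m_pr, CM_inv, n_iter=5):
    """Gauss-Newton iteration; for a linear forward model it converges in one step."""
    m = m0.copy()
    H = G.T @ CD_inv @ G + CM_inv          # Gauss-Newton Hessian
    for _ in range(n_iter):
        grad = G.T @ CD_inv @ (G @ m - d_obs) + CM_inv @ (m - m_pr)
        m = m - np.linalg.solve(H, grad)
    return m

rng = np.random.default_rng(0)
G = rng.normal(size=(8, 15))               # toy linear "simulator"
m_true = rng.normal(size=15)
d_obs = G @ m_true + 0.05 * rng.normal(size=8)
CD_inv = np.eye(8) / 0.05**2               # inverse data-error covariance
CM_inv = np.eye(15)                        # inverse prior covariance
m_pr = np.zeros(15)
m_map = gauss_newton(m_pr, G, d_obs, CD_inv, m_pr, CM_inv)
```

With a real simulator, the gradient requires sensitivity coefficients (e.g., from an adjoint), and the relative cost of obtaining them is one of the main axes along which history-matching optimization algorithms differ.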


2009 ◽  
Author(s):  
Gaoming Li ◽  
Mei Han ◽  
Raj Banerjee ◽  
Albert Coburn Reynolds

2002 ◽  
Vol 5 (03) ◽  
pp. 255-265 ◽  
Author(s):  
X.-H. Wen ◽  
T.T. Tran ◽  
R.A. Behrens ◽  
J.J. Gomez-Hernandez

Summary The stochastic inversion of the spatial distribution of lithofacies from multiphase production data is a difficult problem. This is true even for the simplest case, addressed here, of a sand/shale distribution under the assumption that reservoir properties are constant within each lithofacies. Two geostatistically based inverse techniques, sequential self-calibration (SSC) and GeoMorphing (GM), are extended for such purposes and then compared using synthetic reference fields. The extension of both techniques is based on the one-to-one relationship existing between lithofacies and Gaussian deviates in truncated Gaussian simulation. Both techniques attempt to modify the field of Gaussian deviates while maintaining the truncation threshold field through an optimization procedure. Maintaining a fixed threshold field, which has been computed previously on the basis of prior lithofacies proportion data, well data, and other static soft data, guarantees preservation of the initial geostatistical structure. Comparisons of the two techniques using 2D and 3D synthetic data show that the SSC is very efficient in producing sand/shale realizations matching production data and reproducing the large-scale patterns displayed in the reference fields, although it has difficulty in reproducing small-scale features. GM is a simpler algorithm than SSC, but it is computationally more intensive and has difficulty in matching complex production data. Better results could be obtained with a combination of the two techniques in which SSC is used to generate realizations identifying large-scale features; then, these realizations could be used as input to GM for a final update to match small-scale details. Introduction Reliable predictions of future reservoir performance require reservoir models that incorporate all available relevant information.
Geostatistical methods are widely used and well suited to construct reservoir models of porosity and permeability honoring static data, such as core data, well-log data, seismic data, and geological conceptual data. Dynamic production data, such as production rate, pressure, water cut, and gas/oil ratio (GOR), have been largely overlooked for constraining geostatistical models because of the complication and difficulty of integrating them. Traditional geostatistical methods for integrating static data are not well suited for integrating dynamic data because dynamic data are nonlinearly related to reservoir properties through the flow equations. Typically, an inverse technique is needed for such integration, in which the flow equations must be solved many times within a nonlinear optimization procedure. In recent years, a number of inverse techniques have been developed and shown capable of preconstraining geostatistical models before they go to the manual history matching phase. Ref. 1 provides a review of these inverse techniques. Two geostatistically based approaches that have shown great potential for the integration of dynamic data are SSC and GM. The SSC method iteratively perturbs the given reservoir model at each gridblock to match the production data while preserving the geostatistical features and static hard/soft data conditioning.2–6 The perturbation is computed through an optimization procedure after a parameterization of the optimization problem with a reduced number of parameters that requires the computation of sensitivity coefficients. The reduced number of parameters to optimize and a fast calculation of the sensitivity coefficients make the inversion computationally feasible. Multiple realizations of the reservoir model can be produced, from which uncertainty can be assessed. 
Applications of the SSC method to invert permeability distributions from single-phase and multiphase production data have demonstrated its efficiency and robustness.3–6 In this paper, we extend the SSC method to invert lithofacies distributions from production data within the framework of truncated Gaussian simulation. We limit ourselves to sand/shale reservoirs in which permeability is assumed constant within each facies. GM is an evolution and extension of the Gradual Deformation method.7–9 This method generates realizations of reservoir models by an iterative procedure in which, at each iteration, unconditional realizations are linearly and optimally combined into a new realization that reproduces the production data better than any individual member of the combination. Because the linear combination of a few realizations depends only on a few parameters, the optimization procedure is very easy to implement. Our GM algorithm follows the modification of the gradual deformation algorithm by Ying and Gómez-Hernández10 to honor the well data while preserving the permeability variogram. Our modification here is aimed at inverting a lithofacies distribution from production data within the framework of truncated Gaussian simulation. Comparisons of these two methods in generating multiple geostatistical sand/shale reservoir models that honor dynamic production data are made by using both 2D and 3D synthetic data sets. The comparison of the results against the reference models provides a direct assessment of the two methods. A thorough comparison of the two methods is made in terms of reproduction of reservoir spatial patterns, matching of production data, implementation issues, feasibility, CPU time, and generality. We also discuss briefly the possible combination of the strengths of the two methods to achieve better, more efficient integration of production data.
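The core of gradual deformation, which GM builds on, is easy to state: two independent standard-normal realizations combined as cos(theta)*m1 + sin(theta)*m2 remain standard normal for every theta, so matching data reduces to a one-parameter search. A minimal sketch, with a toy single-cell "datum" standing in for production data (all values here are illustrative assumptions):

```python
import numpy as np

def gradual_deformation(m1, m2, theta):
    """cos/sin combination of two independent standard-normal fields; the
    result is standard normal for every theta, so the prior covariance
    model is preserved while only one parameter is tuned."""
    return np.cos(theta) * m1 + np.sin(theta) * m2

rng = np.random.default_rng(1)
m1 = rng.normal(size=2000)
m2 = rng.normal(size=2000)

# One-parameter line search: pick the theta whose combined field best
# matches a (toy) observed value at a single cell.
d_obs = 1.2
thetas = np.linspace(0.0, 2.0 * np.pi, 721)
misfit = lambda t: (gradual_deformation(m1, m2, t)[0] - d_obs) ** 2
theta_best = min(thetas, key=misfit)
m_best = gradual_deformation(m1, m2, theta_best)
```

Iterating this step, replacing m1 with the current best field and drawing a fresh m2, is the essence of the gradual-deformation loop that GM extends to facies fields.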
In the following sections, we first recall the methodology of truncated Gaussian simulation to construct a categorical type of reservoir model; then, the SSC and GM methods are presented under the framework of truncated Gaussian simulation to invert lithofacies distributions. Applications of the two methods to invert sand/shale distributions in 2D and 3D reservoir models are made using synthetic data sets, with emphasis on the comparisons of the strengths and weaknesses of the two methods. The production data considered in this paper are fractional-flow rates (water cut) at production wells and water-saturation spatial distribution at a given time in two-phase-flow (oil/water) reservoirs.
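The truncated Gaussian construction used throughout is simple to sketch: a threshold computed from the prior facies proportion truncates a Gaussian deviate field into facies codes, and modifying the deviates while holding the threshold fixed (as both SSC and GM do) preserves the facies proportions in expectation. A toy version with uncorrelated deviates and two facies; real use employs a spatially correlated field and thresholds that may vary in space:

```python
from statistics import NormalDist
import numpy as np

rng = np.random.default_rng(4)
z = rng.normal(size=(50, 50))         # Gaussian deviates (toy: uncorrelated)
p_shale = 0.4                          # prior shale proportion (assumed value)
t = NormalDist().inv_cdf(p_shale)      # truncation threshold
facies = np.where(z < t, 0, 1)         # 0 = shale, 1 = sand

# Perturbing z while keeping t fixed moves the sand/shale boundaries but
# keeps the target proportion, which is the one-to-one relationship
# between facies and Gaussian deviates that both inversion methods exploit.
```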


2010 ◽  
Vol 13 (03) ◽  
pp. 496-508 ◽  
Author(s):  
Gaoming Li ◽  
Mei Han ◽  
Raj Banerjee ◽  
Albert C. Reynolds
