Updating Stochastic Reservoir Models With New Production Data

2003 ◽  
Author(s):  
D.H. Fenwick ◽  
F. Roggero

2001 ◽  
Vol 7 (S) ◽  
pp. S65-S73 ◽  
Author(s):  
Dean S. Oliver ◽  
Albert C. Reynolds ◽  
Zhuoxin Bi ◽  
Yafes Abacioglu

SPE Journal ◽  
2006 ◽  
Vol 11 (04) ◽  
pp. 464-479 ◽  
Author(s):  
B. Todd Hoffman ◽  
Jef K. Caers ◽  
Xian-Huan Wen ◽  
Sebastien B. Strebelle

Summary This paper presents an innovative methodology to integrate prior geologic information, well-log data, seismic data, and production data into a consistent 3D reservoir model. Furthermore, the method is applied to a real channel reservoir from the African coast. The methodology relies on the probability-perturbation method (PPM). Perturbing probabilities rather than actual petrophysical properties guarantees that the conceptual geologic model is maintained and that any history-matching-related artifacts are avoided. Reservoir models that match all types of data are likely to have more predictive power than models in which some data are not honored. The first part of the paper reviews the details of the PPM, and the next part describes the additional work that is required to history-match real reservoirs using this method. Then, a geological description of the reservoir case study is provided, and the procedure to build 3D reservoir models that are conditioned only to the static data is covered. Because of the character of the field, the channels are modeled with a multiple-point geostatistical method. The channel locations are perturbed in a manner such that the oil, water, and gas rates from the reservoir more accurately match the rates observed in the field. Two different geologic scenarios are used, and multiple history-matched models are generated for each scenario. The reservoir has been producing for approximately 5 years, but the models are matched only to the first 3 years of production. Afterward, to check predictive power, the matched models are run for the last 1½ years, and the results compare favorably with the field data.

Introduction Reservoir models are constructed to better understand reservoir behavior and to better predict reservoir response. Economic decisions are often based on the predictions from reservoir models; therefore, such predictions need to be as accurate as possible.
To achieve this goal, the reservoir model should honor all sources of data, including well-log, seismic, geologic information, and dynamic (production rate and pressure) data. Incorporating dynamic data into the reservoir model is generally known as history matching. History matching is difficult because it poses a nonlinear inverse problem in the sense that the relationship between the reservoir model parameters and the dynamic data is highly nonlinear and multiple solutions are available. Therefore, history matching is often done with a trial-and-error method. In real-world applications of history matching, reservoir engineers manually modify an initial model provided by geoscientists until the production data are matched. The initial model is built based on geological and seismic data. While attempts are usually made to honor these other data as much as possible, often the history-matched models are unrealistic from a geological (and geophysical) point of view. For example, permeability is often altered to increase or decrease flow in areas where a mismatch is observed; however, the permeability alterations usually come in the form of box-shaped or pipe-shaped geometries centered around wells or between wells and tend to be devoid of any geological considerations. The primary focus lies in obtaining a history match.
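The abstract describes the key idea of the PPM (perturbing probabilities rather than petrophysical properties) without giving its form. A minimal sketch of the standard perturbation step, in which the indicator of the current facies realization is blended with the prior facies probability under a single parameter r; the grid size and probability values below are illustrative assumptions:

```python
import numpy as np

def perturb_probability(i_current, p_prior, r):
    """Probability-perturbation step (standard PPM form, not taken verbatim
    from this paper): blend the indicator of the current facies realization
    with the prior facies probability. r = 0 reproduces the current model;
    r = 1 corresponds to an unconstrained draw from the prior."""
    return (1.0 - r) * i_current + r * p_prior

rng = np.random.default_rng(0)
p_prior = np.full(100, 0.3)  # assumed prior channel probability per cell
# indicator of the current channel realization (1 = channel, 0 = background)
i_current = (rng.random(100) < p_prior).astype(float)

# small r: cells currently in a channel keep a high channel probability,
# so the perturbed model stays close to the current history-match candidate
p_new = perturb_probability(i_current, p_prior, r=0.25)
```

Resimulating facies from `p_new` and searching over r against the flow response is what lets the method move channel locations while the multiple-point prior keeps the geologic concept intact.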


2020 ◽  
Vol 496 (1) ◽  
pp. 199-207 ◽  
Author(s):  
Tor Anders Knai ◽  
Guillaume Lescoffit

Abstract Faults are known to affect the way that fluids can flow in clastic oil and gas reservoirs. Fault barriers either stop fluids from passing across or they restrict and direct the fluid flow, creating static or dynamic reservoir compartments. Representing the effect of these barriers in reservoir models is key to establishing optimal plans for reservoir drainage, field development and production. Fault property modelling is challenging, however, as observations of faults in nature show a rapid and unpredictable variation in fault rock content and architecture. Fault representation in reservoir models will necessarily be a simplification, and it is important that the uncertainty ranges are captured in the input parameters. History matching also requires flexibility in order to handle a wide variety of data and observations. The Juxtaposition Table Method is a new technique that efficiently handles all relevant geological and production data in fault property modelling. The method provides a common interface that is easy to relate to for all petroleum technology disciplines, and allows close cooperation between the geologist and reservoir engineer in the process of matching the reservoir model to observed production behaviour. Consequently, the method is well suited to handling fault property modelling in the complete life cycle of oil and gas fields, starting with geological predictions and incorporating knowledge of dynamic reservoir behaviour as production data become available.


2000 ◽  
Vol 3 (01) ◽  
pp. 74-79 ◽  
Author(s):  
Nanqun He ◽  
Dean S. Oliver ◽  
Albert C. Reynolds

Summary Generating realizations of reservoir permeability and porosity fields that are conditional to static and dynamic data is difficult. The constraints imposed by dynamic data are typically nonlinear, and the relationship between the observed data and the petrophysical parameters is given by a flow simulator, which is expensive to run. In addition, the spatial organization of real rock properties is quite complex. Thus, most attempts at conditioning reservoir properties to dynamic data have either approximated the relationship between data and parameters so that complex geologic models could be used, or have used simplified spatial models with actual production data. In this paper, we describe a multistep procedure for efficiently generating realizations of reservoir properties that honor dynamic data from complex stochastic models. First, we generate a realization of the rock properties that is conditioned to static data, but not to the pressure data. Second, we generate a realization of the production data (i.e., add random errors to the production data). Third, we find the property field that is as close as possible to the uncalibrated realization and also honors the realization of the production data. The ensemble of realizations generated by this procedure often provides a good empirical approximation to the a posteriori probability density function for reservoir models and can be used for Monte Carlo inference. We apply the above procedure to the problem of conditioning a three-dimensional stochastic model to data from two well tests. The real-field example contains two facies. Permeabilities within each facies were generated using a "cloud transform" that honored the observed scatter in the crossplot of permeability and porosity. We cut a volume, containing both test wells, from the full-field model, then scaled it up to about 9,000 cells before calibrating to pressure data.
Although the well-test data were of poor quality, the data provided information to modify the permeabilities within the regions of investigation and on the overall permeability average. Introduction The problem of generating plausible reservoir models that are conditional to dynamic or production-type data has been an active area of research for several years. Existing studies can be classified by the way in which they approach three key aspects of the problem: (1) the complexity of the stochastic geologic or petrophysical model; (2) the method of computing the pressure response from a reservoir model; and (3) the attention paid to the problem of sampling realizations from the a posteriori probability density function. Most researchers have worked with simple models (e.g., characterized by a variogram), an effective well-test permeability instead of a flow simulator, and largely ignored the problem of sampling. Other, more sophisticated examples include the use of a complex stochastic geologic model (channels), and simulated annealing to sample from the a posteriori probability distribution function (PDF), but an effective well-test permeability instead of pressure data (and a simulator) for conditioning.1 The works by Oliver2 and by Chu et al.3 provide other examples. In these cases, a flow simulator was used for conditioning but the geology was relatively simple and realizations were generated using a linearization approximation around the maximum a posteriori model. Landa4 treated the problem of conditioning two-dimensional channels, but chose a simple model that could be described by a few parameters. A large part of our effort has gone into ensuring that the ensemble of realizations that we generated would be representative of the uncertainty in the reservoir properties. In order to do this rigorously, we have used the actual pressure data but have had to limit ourselves to Gaussian random fields and to fairly small synthetic models.
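The three-step procedure described in the summary (unconditional realization, perturbed data realization, calibration) can be sketched on a toy linear problem. The flow simulator is replaced here by an assumed linear operator G, and the covariances, noise level, and dimensions are all illustrative:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
n_m, n_d = 20, 5
G = rng.standard_normal((n_d, n_m))   # stand-in for the flow simulator (assumed linear)
C_M_inv = np.eye(n_m)                 # inverse prior (model) covariance, illustrative
C_D_inv = np.eye(n_d) / 0.1**2        # inverse data-error covariance, noise sd = 0.1
m_true = rng.standard_normal(n_m)
d_obs = G @ m_true + 0.1 * rng.standard_normal(n_d)   # synthetic "observed" data

# Step 1: realization conditioned to static data only (here: a prior draw)
m_uc = rng.standard_normal(n_m)
# Step 2: realization of the production data (add random errors to the data)
d_uc = d_obs + 0.1 * rng.standard_normal(n_d)

# Step 3: find the property field closest to m_uc that honors d_uc
def objective(m):
    rm = m - m_uc          # distance from the uncalibrated realization
    rd = G @ m - d_uc      # mismatch with the perturbed production data
    return rm @ C_M_inv @ rm + rd @ C_D_inv @ rd

m_cal = minimize(objective, m_uc).x   # calibrated realization
```

Repeating the three steps with fresh random draws yields the ensemble whose spread approximates the a posteriori uncertainty, as the summary describes.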
We recently applied Markov chain Monte Carlo (MCMC) methods5 to generate an ensemble of realizations because we believe they provide the best framework for ensuring that we obtain a representative set of realizations suitable for making economic decisions. The principal advantage of MCMC is that it provides a method for sampling realizations from complicated probability distributions, such as the distributions of reservoirs conditional to production data. The method consists of a proposal of a new realization, and a decision as to whether to accept the proposed realization or to again accept the current realization. The "chain" refers to the sequence of accepted realizations, and "Monte Carlo" refers to the stochastic aspect of the proposal and acceptance steps. Unfortunately, it appears to be impractical to use MCMC methods for generating realizations that are conditional to production data. If realizations are proposed from a relatively simple probability density function (e.g., multivariate Gaussian), then most realizations are rejected and the method is inefficient. Alternatively, if realizations are proposed from a PDF that is complicated but close to the desired PDF, the Metropolis-Hastings criterion, which involves the ratio of the probability of proposing the proposed realization to the probability of proposing the current realization, is difficult to evaluate. Oliver et al.6 proposed a methodology for incorporating production data that followed the second approach but ignored the Metropolis-Hastings criterion, instead accepting every realization. We showed that the method is rigorously valid for conditioning Gaussian random fields to linear data (i.e., weighted averages of model variables) and is easily adapted to more complex geostatistical models and types of data. Although the method is then not rigorously correct, we have shown that the distribution of realizations is good for simple, but highly nonlinear problems.
The realizations generated using this methodology still honor all the data; the ensemble of realizations is, however, not a perfect representation of the true distribution even as the number of realizations becomes very large.
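The propose/accept cycle described in this abstract can be sketched as follows. The target density, proposal width, and chain length are illustrative assumptions; a symmetric random-walk proposal is used on a 1D toy target, so the two proposal-probability terms in the Metropolis-Hastings ratio cancel:

```python
import numpy as np

def mh_accept(logp_prop, logp_cur, logq_cur_given_prop, logq_prop_given_cur, rng):
    """Metropolis-Hastings decision: accept with probability min(1, ratio),
    where the ratio includes the forward and reverse proposal probabilities."""
    log_alpha = logp_prop - logp_cur + logq_cur_given_prop - logq_prop_given_cur
    return np.log(rng.random()) < min(0.0, log_alpha)

rng = np.random.default_rng(2)
logp = lambda x: -0.5 * x**2          # toy target: standard normal, unnormalized
x, chain = 0.0, []
for _ in range(5000):
    x_prop = x + rng.normal(0.0, 1.0)  # symmetric proposal: q-terms cancel (both 0 here)
    if mh_accept(logp(x_prop), logp(x), 0.0, 0.0, rng):
        x = x_prop                     # accept the proposed realization
    chain.append(x)                    # else "again accept the current realization"
```

In the reservoir setting each `x` would be a full realization and `logp` would require a flow-simulator run, which is why the abstract calls repeated proposal/rejection impractical there.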

