Statistical Methods for Estimating Petroleum Resources

Published By Oxford University Press

9780195331905, 9780197562550

Author(s):  
P.J. Lee

A basin or subsurface study, which is the first step in petroleum resource evaluation, requires the following types of data:
• Reservoir data—pool area, net pay, porosity, water saturation, oil or gas formation volume factor, in-place volume, recoverable oil volume or marketable gas volume, temperature, pressure, density, recovery factors, gas composition, discovery date, and other parameters (refer to Lee et al., 1999, Section 3.1.2).
• Well data—surface and bottom well locations; spud and completion dates; well elevation; history of status; formation drill and true depths; lithology; drill stem tests; core, gas, and fluid analyses; and mechanical logs.
• Geochemical data—types of source rocks, burial history, and maturation history.
• Geophysical data—prospect maps and seismic sections.
Well data are essential when we construct structural contour, isopach, lithofacies, porosity, and other types of maps. Geophysical data assist us when we compile number-of-prospect distributions and they provide information for risk analysis.
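The reservoir-data items above feed directly into volumetric in-place estimates. As a minimal sketch, the record below gathers a few of those items; the field names, units, and example values are illustrative assumptions, not taken from Lee et al. (1999):

```python
from dataclasses import dataclass

@dataclass
class ReservoirRecord:
    """Hypothetical subset of the reservoir data listed above."""
    pool_area_ha: float      # pool area, hectares
    net_pay_m: float         # net pay thickness, metres
    porosity: float          # fraction (0-1)
    water_saturation: float  # fraction (0-1)
    fvf: float               # oil formation volume factor (reservoir vol / surface vol)

    def oil_in_place_m3(self) -> float:
        # Standard volumetric in-place estimate:
        # OOIP = area * net pay * porosity * (1 - Sw) / Bo
        rock_volume = self.pool_area_ha * 10_000 * self.net_pay_m  # ha -> m^2
        return rock_volume * self.porosity * (1.0 - self.water_saturation) / self.fvf

# Illustrative pool: 100 ha, 10 m net pay, 20% porosity, 30% Sw, Bo = 1.2
example = ReservoirRecord(100.0, 10.0, 0.2, 0.3, 1.2)
```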


Author(s):  
P.J. Lee

The procedure and steps of petroleum resource assessment involve a learning process that is characterized by an interactive loop between geological and statistical models and their feedback mechanisms. Geological models represent natural populations and are the basic units for petroleum resource evaluation. Statistical models include the superpopulation, finite population, and discovery process models that may be used for estimating the distributions for pool size and number of pools, and can be estimated from somewhat biased exploration data. Methods for assessing petroleum resources have been developed using different geological perspectives. Each of them can be applied to a specific case. When we consider using a particular method, the following aspects should be examined:
• Types of data required—Some methods can only incorporate certain types of data; others can incorporate all data that are available.
• Assumptions required—We must study what specific assumptions should be made and what role they play in the process of estimation.
• Types of estimates—What types of estimates does the method provide (aggregate estimates vs. pool-size estimates)? Do the types of estimates fulfill our needs for economic analysis?
• Feedback mechanisms—What types of feedback mechanism does the method offer?
PETRIMES is based on a probabilistic framework that uses superpopulation and finite population concepts, discovery process models, and the optional use of lognormal distributions. The reasoning behind the application of discovery process models is that they offer the only known way to incorporate petroleum assessment fundamentals (i.e., realism) into the estimates. PETRIMES requires an exploration time series as basic input and can be applied to both mature and frontier petroleum resource evaluations.
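The distinction between aggregate estimates and pool-size estimates can be illustrated with a small Monte Carlo sketch: under the superpopulation view, a play's aggregate potential is the sum of N pool sizes, where N comes from a number-of-pools distribution and each size from a pool-size distribution. All distributions and parameters below are hypothetical, not PETRIMES output:

```python
import random
import statistics

random.seed(0)

def simulate_play_total():
    """One realization of a play's aggregate potential (illustrative units)."""
    n_pools = random.randint(5, 15)  # number-of-pools distribution (assumed uniform)
    sizes = [random.lognormvariate(2.0, 1.0) for _ in range(n_pools)]  # pool sizes
    return sum(sizes)

# The distribution of totals is the aggregate estimate; the per-draw
# sizes inside simulate_play_total() are the pool-size estimates.
totals = [simulate_play_total() for _ in range(10_000)]
expected_total = statistics.fmean(totals)
```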


Author(s):  
P.J. Lee

The initial step in the evaluation of any petroleum resource is the identification of an appropriate geological population that can be delineated through subsurface study or basin analysis. A geological population represents a natural population and possesses a group of pools and/or prospects sharing common petroleum habitats. A natural population can be a single sedimentation model, structural style, type of trapping mechanism or geometry, tectonic cycle, stratigraphic sequence, or any combination of these criteria. Reasons for adopting these criteria in the definition of a geological model are the following:
• The geological population will be defined clearly and its associated resource can readily be estimated.
• Geologists can adopt known play data for future comparative geological studies.
• Geological variables of a natural population can be described by probability distributions (e.g., the lognormal distribution).
Statistical concepts such as the superpopulation concept can be applied to geological models so that, for specific plays, an estimate of undiscovered pool sizes can be made. Figure 2.1 illustrates various sedimentary environments (tidal flat, lagoon, beach, and patch reef) that can be used as geological models in resource evaluation. Each of these models has its own distinguishing characteristics of source, reservoir, trapping mechanism, burial and thermal history of source beds, and migration pathway. In resource evaluation, to ensure the integrity of statistical analysis, each of these should be treated as a separate, natural population. Therefore, the logical steps in describing a play are (1) identify a single sedimentation model and (2) examine subsequent geological processes. Geological processes such as faulting, erosion, folding, diagenesis, biodegradation, thermal history of source rocks, and migration history might provide a basis for further subdivisions of the model.
In some cases, two or more populations might be considered mistakenly as a single population because of a lack of understanding of the subsurface geology. If the resulting mixed population were to have two or more modes in its distribution, this could have an impact on resource evaluation results.
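The multimodality problem can be made concrete with a small simulation: pooling two lognormal pool-size populations with well-separated means produces a log-size histogram with two modes and a valley between them. All parameters here are illustrative:

```python
import math
import random

random.seed(42)

# Two hypothetical lognormal pool-size populations mistakenly pooled
# into one "play" (mu in natural-log units of pool size).
pop_a = [random.lognormvariate(0.0, 0.5) for _ in range(5000)]  # small-pool population
pop_b = [random.lognormvariate(3.0, 0.5) for _ in range(5000)]  # large-pool population
mixed = pop_a + pop_b

# Histogram of log-sizes: a well-separated mixture shows two modes.
logs = [math.log(x) for x in mixed]
lo, hi = min(logs), max(logs)
nbins = 30
counts = [0] * nbins
for v in logs:
    i = min(int((v - lo) / (hi - lo) * nbins), nbins - 1)
    counts[i] += 1
```

The valley in `counts` between the two peaks is the signature an assessor would look for before deciding whether the play should be subdivided.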


Author(s):  
P.J. Lee

A conceptual play has not yet been proved through exploration and can only be postulated from geological information. An immature play contains several discoveries, but not enough for discovery process models (described in Chapter 3) to be applied. The amount of data available for evaluating a conceptual play can be highly variable. Therefore, the evaluation methods used are related to the amount and types of data available, some of which are listed in Table 5.1. Detailed descriptions of these methods are beyond the scope of this book. However, an overview of these and other methods will be presented in Chapter 7. This chapter deals with the application of numerical methods to conceptual or immature plays. For immature plays, discoveries can be used to validate the estimates obtained. In this chapter, the Beaverhill Lake play and a play from the East Coast of Canada are examined. A play consists of a number of pools and/or prospects that may or may not contain hydrocarbons. Therefore, associated with each prospect is an exploration risk that measures the probability of a prospect being a pool. Estimating exploration risk in petroleum resource evaluation is important. Methods for quantifying exploration risks are described later. Geological factors that determine the accumulation of hydrocarbons include the presence of closure and of reservoir facies, as well as adequate seal, porosity, timing, source, migration, preservation, and recovery. For a specific play, only a few of these factors are recognized as critical to the amount of final accumulation. Consequently, if a prospect located within a sandstone play, for example, were tested, it might prove unsuccessful for any of the following reasons: lack of closure, unfavorable reservoir facies, lack of adequate source or migration path, and/or absence of cap rock. The frequency of occurrence of a geological factor can be measured from marginal probabilities. 
For example, if the marginal probability for the presence-of-closure factor is 0.9, there is a 90% chance that prospects drilled will have adequate closure. For a prospect to be a pool, the simultaneous presence of all the geological factors in the prospect is necessary. This requirement leads us to exploration risk analysis.
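Under the usual independence treatment, the simultaneous-presence requirement makes the exploration risk the product of the marginal probabilities. A minimal sketch follows; the factor names and values (other than the 0.9 closure example from the text) are illustrative assumptions:

```python
# Marginal probabilities for the geological factors of a hypothetical play.
marginal_probabilities = {
    "closure": 0.9,               # presence of closure (example from the text)
    "reservoir_facies": 0.8,      # assumed
    "source_and_migration": 0.7,  # assumed
    "seal": 0.95,                 # assumed
}

def exploration_risk(marginals):
    """Probability that all factors are simultaneously present,
    assuming the factors are independent."""
    p = 1.0
    for prob in marginals.values():
        p *= prob
    return p
```

With these numbers, only about 48% of prospects would be pools even though each individual factor is more likely present than not.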


Author(s):  
P.J. Lee

In Chapter 3 we discussed the concepts, functions, and applications of the two discovery process models LDSCV and NDSCV. In this chapter we will use various simulated populations to validate these two models and examine whether their performance meets our expectations. In addition, lognormal assumptions are applied to Weibull and Pareto populations to assess the impact on petroleum evaluation as a result of incorrect specification of probability distributions. A mixed population of two lognormal populations and a mixed population of lognormal, Weibull, and Pareto populations were generated to test the impact of mixed populations on assessment quality. NDSCV was then applied to all these data sets to validate the performance of the models. Finally, justifications for choosing a lognormal distribution in petroleum assessments are discussed in detail. Known populations were created as follows: A finite population was generated from a random sample of size 300 (N = 300) drawn from the lognormal, Pareto, and Weibull superpopulations. For the lognormal case, a population with μ = 0 and σ2 = 5 was assumed. The truncated and shifted Pareto population with shape factor θ = 0.4, maximum pool size = 4000, and minimum pool size = 1 was created. The Weibull population with λ = 20, θ = 1.0 was generated for the current study. The first mixed population was created by mixing two lognormal populations. Parameters for population I are μ = 0, σ2 = 3, and N1 = 150. For population II, μ = 3.0, σ2 = 3.2, and N2 = 150. The second mixed population was generated by mixing lognormal (N1 = 100), Pareto (N2 = 100), and Weibull (N3 = 100) populations with a total of 300 pools. In addition, a gamma distribution was also used for reference. The lognormal distribution is J-shaped if an arithmetic scale is used for the horizontal axis, but it shows an almost symmetrical pattern when a logarithmic scale is applied.
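The three finite populations described above can be sketched with inverse-CDF sampling. The lognormal and Weibull parameters follow the text; for the truncated, shifted Pareto I assume one common parameterization, F(x) = (m^-θ - x^-θ) / (m^-θ - M^-θ) for m ≤ x ≤ M, which may differ in detail from the author's:

```python
import math
import random

random.seed(123)
N = 300

# Lognormal superpopulation with mu = 0 and sigma^2 = 5 (sigma = sqrt(5)).
lognormal_pop = [random.lognormvariate(0.0, math.sqrt(5.0)) for _ in range(N)]

def truncated_pareto(theta=0.4, m=1.0, M=4000.0):
    """Inverse-CDF draw from an assumed truncated, shifted Pareto form:
    F(x) = (m**-theta - x**-theta) / (m**-theta - M**-theta)."""
    u = random.random()
    return (m**-theta - u * (m**-theta - M**-theta)) ** (-1.0 / theta)

pareto_pop = [truncated_pareto() for _ in range(N)]

# Weibull with scale lambda = 20 and shape theta = 1.0 (i.e., exponential).
weibull_pop = [random.weibullvariate(20.0, 1.0) for _ in range(N)]
```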


Author(s):  
P.J. Lee

Petroleum resource evaluations have been performed by geologists, geophysicists, geochemists, engineers, and statisticians for many decades in an attempt to estimate resource potential in a given region. Because of differences in the geological and statistical methods used for assessment, and the amount and type of data available, resource evaluations often vary. Accounts of various methods have been compiled by Haun (1975), Grenon (1979), Masters (1985), Rice (1986), and Mast et al. (1989). In addition, Lee and Gill (1999) used the Michigan reef play data to evaluate the merits of the log-geometric method of the U.S. Geological Survey (USGS); the PETRIMES method developed by the Geological Survey of Canada (GSC); the Arps and Roberts method; Bickel, Nair, and Wang’s nonparametric finite population method; Kaufman’s anchored method; and the geo-anchored method of Chen and Sinding-Larsen. Information required for petroleum resource evaluation includes all available reservoir data and data derived from the drilling of exploratory and development wells. Other essential geological information comes from regional geological, geophysical, and geochemical studies, as well as from work carried out in analogous basins. Any comprehensive resource evaluation procedure must combine raw data with information acquired from regional analysis and comparative studies. The Hydrocarbon Assessment System Processor (HASP) has been used to blend available exploration data with previously gathered information (Energy, Mines and Resources Canada, 1977; Roy, 1979). HASP expresses combinations of exploration data and expert judgment as probability distributions for specific population attributes (such as pool area, net pay, porosity). Since this procedure was first implemented, demands on evaluation capability have steadily increased as evaluation results were increasingly applied to economic analyses. Traditional methods could no longer meet the new demands.
A probabilistic formulation for HASP became necessary and was established by Lee and Wang (1983b). This formulation led to the development of the Petroleum Exploration and Resource Evaluation System, PETRIMES (Lee, 1993a, c, d; Lee and Tzeng, 1993; Lee and Wang, 1983a, b, 1984, 1985, 1986, 1987, 1990). Since then, new capabilities and features have been added to the evaluation system (Lee, 1997, 1998). A Windows version was also created (Lee et al., 1999).


Author(s):  
P.J. Lee

Resource evaluation procedures have evolved along distinct paths, involving a variety of statistical, geochemical, and geological approaches because of different types of data and various assumptions that have driven their development. Many methods have been developed so far, but only those methods that have been published and have significantly influenced subsequent development of evaluation procedures are discussed here. The purpose of this chapter is to present an overview of the principles of these methods and identify the direction of future research in this area. Methods discussed include the following:
• Geological approach—volumetric yield by analogy, basin classification
• Geochemical approach—petroleum systems, burial and thermal history
• Statistical approach (methods that were not discussed in previous chapters are discussed here)
  • Finite population methods—Arps and Roberts’, Bickel’s, Kaufman’s anchored, and Chen and Sinding-Larsen’s geoanchored
  • Superpopulation methods—USGS log-geometric, Zipf’s law, creaming, and Long’s
  • The regression method
  • The fractal method
Specific data and assumptions can be applied to each of these methods. Some of the assumptions can be validated by the data whereas others cannot. These methods have their own merits and disadvantages. The geological approach has been used for the past several decades and is a qualitative method. This section discusses the volumetric yield method and the basin classification method. Volumetric yield using the analogous basin method was the earliest method of petroleum resource evaluation applied to frontier basins. It requires knowledge of the volume of a basin and its characteristics (e.g., tectonic, sedimentation, thermal generation, migration, and accumulation). Based on comparative studies, geologists are able to apply a hydrocarbon yield factor per unit volume (i.e., barrels of oil/cubic unit of sediment) from one known basin to an unknown basin with similar characteristics.
Thus, for conceptual basins, this provides some information about the richness of an unknown basin. The advantages are the following:
1. It is suitable for the evaluation of conceptual basins.
2. It is easy to understand.
3. It combines geochemical data and/or experience from mature basins.
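The arithmetic of volumetric yield by analogy is simply a yield factor times a sediment volume. The sketch below uses purely illustrative numbers (neither value comes from the text):

```python
# Yield factor carried over from a mature analogous basin (assumed value),
# applied to the estimated sediment volume of a frontier basin (assumed value).
yield_factor_bbl_per_km3 = 25_000.0      # barrels of oil per km^3 of sediment
frontier_sediment_volume_km3 = 40_000.0  # sediment volume of the unknown basin

# Potential of the frontier basin under the analogy.
potential_bbl = yield_factor_bbl_per_km3 * frontier_sediment_volume_km3
```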


Author(s):  
P.J. Lee

A key objective in petroleum resource evaluation is to estimate oil and gas pool size (or field size) or oil and gas joint probability distributions for a particular population or play. The pool-size distribution, together with the number-of-pools distribution in a play can then be used to predict quantities such as the total remaining potential, the individual pool sizes, and the sizes of the largest undiscovered pools. These resource estimates provide the fundamental information upon which petroleum economic analyses and the planning of exploration strategies can be based. The estimation of these types of pool-size distributions is a difficult task, however, because of the inherent sampling bias associated with exploration data. In many plays, larger pools tend to be discovered during the earlier phases of exploration. In addition, a combination of attributes, such as reservoir depth and distance to transportation center, often influences the order of discovery. Thus exploration data cannot be considered a random sample from the population. As stated by Drew et al. (1988), the form and specific parameters of the parent field-size distribution cannot be inferred with any confidence from the observed distribution. The biased nature of discovery data resulting from selective exploration decision making must be taken into account when making predictions about undiscovered oil and gas resources in a play. If this problem can be overcome, then the estimation of population mean, variance, and correlation among variables can be achieved. The objective of this chapter is to explain the characterization of the discovery process by statistical formulation. To account for sampling bias, Kaufman et al. (1975) and Barouch and Kaufman (1977) used the successive sampling process of the superpopulation probabilistic model (discovery process model) to estimate the mean and variance of a given play. 
Here we shall discuss how to use superpopulation probabilistic models to estimate pool-size distribution. The models to be discussed include the lognormal (LDSCV), nonparametric (NDSCV), lognormal/nonparametric–Poisson (BDSCV), and the bivariate lognormal, multivariate (MDSCV) discovery process methods. Their background, applications, and limitations will be illustrated by using play data sets from the Western Canada Sedimentary Basin as well as simulated populations.
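The sampling bias these models formalize can be illustrated by simulating discovery as successive sampling without replacement, with the probability of drawing a pool proportional to its size. The population parameters below are illustrative, not from the Western Canada data:

```python
import random

random.seed(7)

# Hypothetical parent population of 200 pool sizes.
population = [random.lognormvariate(0.0, 1.5) for _ in range(200)]

def successive_sample(pools, n):
    """Draw n pools without replacement, P(draw) proportional to pool size."""
    remaining = list(pools)
    discovered = []
    for _ in range(n):
        r = random.uniform(0.0, sum(remaining))
        acc = 0.0
        for i, size in enumerate(remaining):
            acc += size
            if r <= acc:
                discovered.append(remaining.pop(i))
                break
        else:  # guard against floating-point shortfall at the last element
            discovered.append(remaining.pop())
    return discovered

first_discoveries = successive_sample(population, 20)
```

Because large pools are preferentially drawn first, the mean of the early discoveries overstates the population mean, which is exactly why the observed discovery record cannot be treated as a random sample.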

