Constructing Reservoir Flow Simulator Proxies Using Genetic Programming for History Matching and Production Forecast Uncertainty Analysis

2008 ◽  
Vol 2008 ◽  
pp. 1-13 ◽  
Author(s):  
Tina Yu ◽  
Dave Wilkinson ◽  
Alexandre Castellini

Reservoir modeling is a critical step in the planning and development of oil fields. Before a reservoir model can be accepted for forecasting future production, the model has to be updated with historical production data. This process is called history matching. History matching requires computer flow simulation, which is very time-consuming. As a result, only a small number of simulation runs are conducted and the history-matching results are normally unsatisfactory. This is particularly evident when the reservoir has a long production history and the quality of production data is poor. The inadequacy of the history-matching results frequently leads to high uncertainty of production forecasting. To enhance the quality of the history-matching results and improve the confidence of production forecasts, we introduce a methodology using genetic programming (GP) to construct proxies for reservoir simulators. Acting as surrogates for the computer simulators, the “cheap” GP proxies can evaluate a large number (millions) of reservoir models within a very short time frame. With such a large sampling size, the reservoir history-matching results are more informative and the production forecasts are more reliable than those based on a small number of simulation models. We have developed a workflow which incorporates the two GP proxies into the history matching and production forecast process. Additionally, we conducted a case study to demonstrate the effectiveness of this approach. The study has revealed useful reservoir information and delivered more reliable production forecasts. All of these were accomplished without introducing new computer simulation runs.
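The proxy-screening idea described above can be sketched as follows; the quadratic stand-in for a GP-evolved proxy expression, the parameter names, and the acceptance threshold are all illustrative assumptions, not the authors' trained GP model:

```python
import random

# Hypothetical "cheap" proxy standing in for a GP-evolved expression that maps
# two reservoir parameters (e.g. a fault-transmissibility multiplier and an
# aquifer-strength factor) to a history-match error. A real GP proxy would be
# trained on a small number of full flow-simulation runs.
def gp_proxy_error(x1, x2):
    return (x1 - 0.3) ** 2 + 2.0 * (x2 - 0.7) ** 2

def screen_models(n_samples, threshold, seed=42):
    """Score a very large number of random parameter sets with the proxy and
    keep the good history-match candidates (error below the threshold)."""
    rng = random.Random(seed)
    accepted = []
    for _ in range(n_samples):
        x1, x2 = rng.random(), rng.random()
        if gp_proxy_error(x1, x2) < threshold:
            accepted.append((x1, x2))
    return accepted

good = screen_models(100_000, threshold=0.01)
# Only a small fraction of the sampled models are acceptable matches; the
# spread of the accepted set characterizes the remaining parameter uncertainty.
```

Because the proxy is orders of magnitude cheaper than a simulator call, the sample size can be raised to millions without new simulation runs, which is the source of the more informative match results claimed in the abstract.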

2021 ◽  
Author(s):  
Mokhles Mezghani ◽  
Mustafa AlIbrahim ◽  
Majdi Baddourah

Abstract Reservoir simulation is a key tool for predicting the dynamic behavior of a reservoir and optimizing its development. Fine-scale, CPU-demanding simulation grids are necessary to improve the accuracy of the simulation results. We propose a hybrid modeling approach that minimizes the weight of the full-physics model by dynamically building and updating an artificial intelligence (AI) based model. The AI model can be used to quickly mimic the full-physics (FP) model. The proposed methodology starts by running the FP model; an associated AI model is then systematically updated using the newly performed FP runs. Once the mismatch between the two models falls below a predefined cutoff, the FP model is switched off and only the AI model is used. The FP model is switched back on at the end of the exercise, either to confirm the AI model's decision and stop the study, or to reject that decision (high mismatch between the FP and AI models) and upgrade the AI model. The proposed workflow was applied to a synthetic reservoir model with the objective of matching the average reservoir pressure. For this study, a fine-scale simulation grid (approximately 50 million cells) was necessary to properly account for reservoir heterogeneity and improve the accuracy of the simulation results. A reservoir simulation using the FP model and 1,024 CPUs requires approximately 14 hours. Six parameters were selected for the optimization loop of this history-matching exercise. Therefore, a Latin Hypercube Sampling (LHS) design of seven FP runs is used to initiate the hybrid approach and build the first AI model. During history matching, only the AI model is used. At the convergence of the optimization loop, a final FP run is performed either to confirm convergence for the FP model or to reiterate the same approach starting from an LHS around the converged solution. The subsequent AI model is updated using all the FP simulations performed in the study.
This approach achieves a history match of very acceptable quality with far less computational resources and CPU time. CPU-intensive, multimillion-cell simulation models are commonly used in reservoir development, and completing a reservoir study within an acceptable timeframe is a real challenge in such situations. New concepts and techniques are needed to complete such studies successfully. The hybrid approach we propose shows very promising results in meeting this challenge.
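The LHS initialization and the FP/AI switching logic can be sketched as follows; the full-physics stand-in, the nearest-neighbour surrogate, and the cutoff value are illustrative assumptions, since the abstract does not specify the AI model's form:

```python
import random

def latin_hypercube(n_runs, n_params, seed=0):
    """Latin Hypercube Sampling in the unit cube: each parameter's range is
    split into n_runs strata, and each stratum is sampled exactly once."""
    rng = random.Random(seed)
    samples = [[0.0] * n_params for _ in range(n_runs)]
    for j in range(n_params):
        strata = list(range(n_runs))
        rng.shuffle(strata)
        for i in range(n_runs):
            samples[i][j] = (strata[i] + rng.random()) / n_runs
    return samples

# Stand-in for the expensive full-physics (FP) simulator: returns the
# average-reservoir-pressure mismatch for a parameter set (hypothetical).
def fp_mismatch(x):
    return sum((xi - 0.5) ** 2 for xi in x)

# Simple nearest-neighbour surrogate as the "AI model" placeholder.
def make_surrogate(points, values):
    def predict(x):
        _, v = min((sum((a - b) ** 2 for a, b in zip(p, x)), val)
                   for p, val in zip(points, values))
        return v
    return predict

# Initiate the hybrid loop: seven LHS runs of the FP model over the six
# history-matching parameters train the first AI model.
design = latin_hypercube(n_runs=7, n_params=6)
fp_values = [fp_mismatch(x) for x in design]
ai_model = make_surrogate(design, fp_values)

# During history matching only the AI model is called; the FP model is
# switched back on to verify the converged solution against the cutoff.
candidate = [0.5] * 6
cutoff = 0.05
converged = abs(ai_model(candidate) - fp_mismatch(candidate)) < cutoff
```

If the final FP check fails (high mismatch), the FP run is simply appended to the training set and the surrogate rebuilt, which mirrors the "reject and upgrade" branch of the workflow.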


2015 ◽  
Vol 18 (04) ◽  
pp. 481-494 ◽  
Author(s):  
Siavash Nejadi ◽  
Juliana Y. Leung ◽  
Japan J. Trivedi ◽  
Claudio Virues

Summary Advancements in horizontal-well drilling and multistage hydraulic fracturing have enabled economically viable gas production from tight formations. Reservoir-simulation models play an important role in production forecasting and field-development planning. To enhance their predictive capabilities and to capture the uncertainties in model parameters, one should calibrate stochastic reservoir models to both geologic and flow observations. In this paper, a novel approach to characterization and history matching of hydrocarbon production from a hydraulically fractured shale is presented. This new methodology includes generating multiple discrete-fracture-network (DFN) models, upscaling the models for numerical multiphase-flow simulation, and updating the DFN-model parameters with dynamic-flow responses. First, measurements from hydraulic-fracture treatment, petrophysical interpretation, and in-situ stress data are used to estimate the initial probability distribution of hydraulic-fracture and induced-microfracture parameters, and multiple initial DFN models are generated. Next, the DFN models are upscaled into an equivalent continuum dual-porosity model with analytical techniques. The upscaled models are subjected to flow simulation, and their production performances are compared with the actual responses. Finally, an assisted-history-matching algorithm is implemented to assess the uncertainties of the DFN-model parameters. Hydraulic-fracture parameters including half-length and transmissivity are updated, and the length, transmissivity, intensity, and spatial distribution of the induced fractures are also estimated. The proposed methodology is applied to facilitate characterization of fracture parameters of a multifractured shale-gas well in the Horn River basin. Fracture parameters and stimulated reservoir volume (SRV) derived from the updated DFN models are in agreement with estimates from microseismic interpretation and rate-transient analysis. 
The key advantage of this integrated assisted-history-matching approach is that uncertainties in fracture parameters are represented by the multiple equally probable DFN models and their upscaled flow-simulation models, which honor the hard data and match the dynamic production history. This work highlights the significance of uncertainties in SRV and hydraulic-fracture parameters. It also provides insight into the value of microseismic data when integrated into a rigorous production-history-matching workflow.
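The core idea of updating DFN parameter distributions from flow responses can be sketched with a rejection-style acceptance loop; the priors, the forward model, and the tolerance below are illustrative placeholders, not the Horn River case:

```python
import random

# Prior ranges for two hydraulic-fracture parameters (illustrative values,
# not the paper's): half-length in metres and a transmissivity multiplier.
PRIORS = {"half_length": (50.0, 250.0), "trans_mult": (0.1, 10.0)}

# Stand-in forward model mapping a DFN parameter set to simulated cumulative
# production; a real workflow would upscale the DFN to a dual-porosity grid
# and run multiphase-flow simulation.
def forward_model(p):
    return 0.02 * p["half_length"] * p["trans_mult"] ** 0.5

def assisted_history_match(observed, n_models=5000, tol=0.2, seed=1):
    """Keep the equally probable models whose simulated response lies within
    a relative tolerance of the observed production."""
    rng = random.Random(seed)
    kept = []
    for _ in range(n_models):
        p = {k: rng.uniform(lo, hi) for k, (lo, hi) in PRIORS.items()}
        if abs(forward_model(p) - observed) / observed < tol:
            kept.append(p)
    return kept

posterior = assisted_history_match(observed=5.0)
# The spread of the accepted parameter sets quantifies the remaining
# uncertainty in the fracture parameters after conditioning to flow data.
```

Real assisted-history-matching algorithms are far more sample-efficient than plain rejection, but the input/output contract, i.e. prior ranges in and an ensemble of equally probable conditioned models out, is the same.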


Author(s):  
Luís Augusto Nagasaki Costa ◽  
Célio Maschio ◽  
Denis José Schiozer

History matching for naturally fractured reservoirs is challenging because of the complexity of flow behavior in the fracture-matrix system. Calibrating these models in a history-matching procedure normally requires integration with geostatistical techniques (the "Big Loop", where history matching is integrated with reservoir modeling) for proper model characterization. In problems involving complex reservoir models, it is common to apply techniques such as sensitivity analysis to identify the most influential attributes and focus effort on what most impacts the response. Conventional Sensitivity Analysis (CSA), in which a subset of attributes is fixed at a unique value, may over-reduce the search space so that it is not properly explored. An alternative is Iterative Sensitivity Analysis (ISA), in which CSA is applied multiple times throughout the iterations. ISA follows three main steps: (a) CSA identifies Group i of influential attributes (i = 1, 2, 3, …, n); (b) the uncertainty of Group i is reduced, with the other attributes held at fixed values; and (c) the process returns to step (a) and repeats. Conducting CSA multiple times allows the identification of influential attributes hidden by the high uncertainty of the most influential ones. In this work, we assess three methods: Method 1 (ISA), Method 2 (CSA), and Method 3 (no sensitivity analysis, i.e., varying all uncertain attributes over a larger search space). Results showed that the number of simulation runs Method 1 needed to reach a similar matching quality of acceptable models dropped 24% compared to Method 3 and 12% compared to Method 2. In other words, Method 1 reached a similar quality of results with fewer simulations, so ISA can perform as well as CSA while demanding fewer simulations. All three methods identified the same five most influential attributes of the initial 18. Even with many uncertain attributes, only a small percentage is responsible for most of the variability of the responses, and their identification is essential for efficient history matching. For the case presented in this work, a few fracture attributes were responsible for most of that variability.
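The three ISA steps can be sketched as follows; the attribute names, weights, and the simple quadratic misfit are hypothetical stand-ins for the simulator response:

```python
# Minimal sketch of the ISA loop on a stand-in misfit function; the attribute
# names, weights, and ranges below are illustrative, not the paper's case.

def misfit(attrs):
    # A few attributes dominate the response variability, mirroring the
    # paper's observation that 5 of 18 attributes drive most of it.
    weights = {"frac_perm": 10.0, "frac_spacing": 5.0, "matrix_poro": 0.2,
               "kv_kh": 0.1, "aquifer": 0.05}
    return sum(w * (attrs[k] - 0.5) ** 2 for k, w in weights.items())

def isa(ranges, n_iters):
    """(a) CSA ranks the still-free attributes one at a time; (b) the top
    attribute's uncertainty is 'reduced' (removed from the free set here);
    (c) repeat, exposing attributes hidden behind more influential ones."""
    base = {k: 0.5 * (lo + hi) for k, (lo, hi) in ranges.items()}
    influential, free = [], dict(ranges)
    for _ in range(n_iters):
        effects = {}
        for k, (lo, hi) in free.items():
            trial = dict(base)           # all other attributes fixed (CSA)
            trial[k] = lo
            e_lo = abs(misfit(trial) - misfit(base))
            trial[k] = hi
            effects[k] = max(e_lo, abs(misfit(trial) - misfit(base)))
        top = max(effects, key=effects.get)
        influential.append(top)
        free.pop(top)  # its range would be narrowed before the next CSA pass
    return influential

ranges = {k: (0.0, 1.0) for k in
          ["frac_perm", "frac_spacing", "matrix_poro", "kv_kh", "aquifer"]}
ranking = isa(ranges, n_iters=3)
```

Each pass removes the currently dominant attribute from the free set, so attributes whose influence was masked in the first CSA pass can surface in later iterations.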


Geofluids ◽  
2020 ◽  
Vol 2020 ◽  
pp. 1-18
Author(s):  
Jaeyoung Park ◽  
Candra Janova

This paper introduces a flow-simulation-based reservoir modeling study of a two-well pad with a long production history and identical completion parameters in the Midland Basin. The study includes building a geologic model, history matching, well-performance prediction, and finding the optimum lateral well spacing in terms of oil volume and economic metrics. The reservoir model was constructed from a geologic model integrating well logs and core data near the target area. Next, a sensitivity analysis was performed on the reservoir simulation model to better understand which parameters most influence simulation results. History matching was then conducted to a satisfactory quality (less than 10% global error), and after model calibration the ranges of the history-matching parameters were substantially reduced. The population-based history-matching algorithm provides an ensemble of history-matched models, and the top 50 were selected to predict the range of Estimated Ultimate Recovery (EUR), showing that the P50 oil EUR is within the acceptable range of the deterministic EUR estimates. With the best history-matched model, we investigated the lateral-well-spacing sensitivity of the pad in terms of maximum recovery volume and economic benefit. The results show that, given the current completion design, well spacing tighter than the current practice in the area is less effective in terms of oil-volume recovery. However, the economic metrics suggest that additional monetary value can be realized at 150% of the current development assumption. The presented workflow provides a systematic approach to finding the optimum lateral well spacing per section in terms of volume and economic metrics under given economic assumptions, and it can readily be repeated to evaluate spacing optimization in other acreage.
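The top-50 ensemble selection and probabilistic EUR step can be sketched as follows; the model errors and EUR values are synthetic stand-ins for simulation output, and percentiles use an ascending convention here:

```python
import random

def history_match_population(n_models=500, seed=7):
    """Each candidate model carries a global history-match error and an oil
    EUR; a real workflow obtains both from flow simulation."""
    rng = random.Random(seed)
    return [(rng.uniform(0.0, 0.3), 1.0e6 * (1.0 + rng.gauss(0.0, 0.15)))
            for _ in range(n_models)]

def probabilistic_eur(models, top_n=50):
    """Keep the top_n best-matched models and report the 10th/50th/90th
    percentiles of their EUR distribution."""
    best = sorted(models)[:top_n]          # lowest global error first
    eurs = sorted(eur for _, eur in best)
    def pick(p):
        return eurs[min(len(eurs) - 1, int(p / 100.0 * len(eurs)))]
    return pick(10), pick(50), pick(90)

models = history_match_population()
p10, p50, p90 = probabilistic_eur(models)
# p50 is the median EUR of the 50 best history-matched models, the quantity
# compared against the deterministic EUR estimate in the study.
```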


Author(s):  
M. Syafwan

This paper presents a fit-for-purpose approach to mitigating zonal production-data allocation uncertainty during history matching of a reservoir simulation model when production logging data are limited. To avoid propagating perforation/production-zone allocation uncertainty at commingled wells into the history-matched reservoir model, only well-level production data from historical periods when production came from a single zone were used to calibrate the reservoir properties that determine initial volumetrics. Then, during periods of commingled production, average reservoir-pressure measurements were integrated into the model to allocate fluid production to the target reservoir. Finally, the periods constrained by dedicated well-level fluid production and average reservoir pressure were merged over the forty-eight-year history to construct a single history-matched reservoir model in preparation for waterflood performance forecasting. This innovative history-matching approach, which mitigates the impacts of production-allocation uncertainty by using different intervals of the historical data to calibrate model saturations and model pressures, has provided a new interpretation of OOIP and current recovery factor, as well as of drive mechanisms including aquifer strength and capillary pressure. Fluid allocation from the target reservoir in the history-matched model is 85% lower than previously estimated. The history-matched model was used as a quantitative forecasting and optimization tool to expand the recent waterflood with improved production-forecast reliability. The remaining-mobile-oil-saturation map and streamline-based waterflood diagnostics have improved understanding of injector-producer connectivity and swept pore volumes; e.g., current swept volumes are minor and well-centric, with limited indication of breakthrough at adjacent producers, resulting in high remaining mobile-oil saturation. 
Accordingly, the history-matched model provides a foundation to select new injection points, determine dedicated producer locations, and support optimized injection strategies to improve recovery.


2021 ◽  
Vol 21 (1) ◽  
Author(s):  
Aleksandr Bespalov ◽  
Anton Barchuk ◽  
Anssi Auvinen ◽  
Jaakko Nevalainen

Abstract Background Various simulation approaches for evaluation and decision making in cancer screening can be found in the literature. This paper presents an overview of approaches used to assess screening programs for breast, lung, colorectal, prostate, and cervical cancers. Our main objectives are to describe methodological approaches and trends for different cancer sites and study populations, and to evaluate the quality of cancer screening simulation studies. Methods A systematic literature search was performed in the Medline, Web of Science, and Scopus databases. The search time frame was limited to 1999–2018, and 7101 studies were found. Of these, 621 studies met the inclusion criteria, 587 full texts were retrieved, and 300 of the studies were chosen for analysis. Finally, 263 full texts were used in the analysis (37 were excluded during the analysis). A descriptive and trend analysis of the models was performed using a checklist created for the study. Results Currently, the most common methodological approaches in modeling cancer screening are individual-level Markov models (34% of the publications) and cohort-level Markov models (41%). The most commonly evaluated cancer types were breast (25%) and colorectal (24%) cancer. Studies on cervical cancer evaluated screening and vaccination (18%) or screening only (13%). Most studies have been conducted for North American (42%) and European (39%) populations. The number of studies with high quality scores increased over time. Conclusions Our findings suggest that future directions for cancer screening modeling include individual-level Markov models complemented by screening-trial data, and further effort in model validation and data openness.
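The cohort-level Markov approach the review identifies as most common (41% of publications) can be sketched minimally as follows; the three health states and all transition probabilities are illustrative, not calibrated to any cancer site:

```python
# Minimal cohort-level Markov model of a screening setting with three states:
# healthy, preclinical (screen-detectable), and clinical. Per-cycle transition
# probabilities are illustrative placeholders.
TRANSITIONS = {
    "healthy":     {"healthy": 0.98, "preclinical": 0.02, "clinical": 0.00},
    "preclinical": {"healthy": 0.00, "preclinical": 0.90, "clinical": 0.10},
    "clinical":    {"healthy": 0.00, "preclinical": 0.00, "clinical": 1.00},
}

def run_cohort(n_cycles, screen_sensitivity=0.0):
    """Propagate a cohort through the Markov chain; each cycle a screen
    detects a fraction of the preclinical state (early detection) and moves
    it to the clinical (diagnosed) state."""
    dist = {"healthy": 1.0, "preclinical": 0.0, "clinical": 0.0}
    for _ in range(n_cycles):
        new = {s: 0.0 for s in dist}
        for s, p in dist.items():
            for t, q in TRANSITIONS[s].items():
                new[t] += p * q
        detected = screen_sensitivity * new["preclinical"]
        new["preclinical"] -= detected
        new["clinical"] += detected
        dist = new
    return dist

no_screen = run_cohort(20)
with_screen = run_cohort(20, screen_sensitivity=0.8)
# With screening, cases are diagnosed earlier: the preclinical pool shrinks
# and the diagnosed (clinical) fraction rises at each horizon.
```

Individual-level Markov (microsimulation) models follow the same transition structure but simulate one person at a time, which lets them carry individual history and risk factors at the cost of Monte Carlo noise.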


2020 ◽  
pp. 3252-3265
Author(s):  
Nagham Jasim ◽  
Sameera M. Hamd-Allah ◽  
Hazim Abass

Increasing hydrocarbon recovery from tight reservoirs has been an essential goal of the oil industry in recent years. Building realistic dynamic simulation models and selecting and designing suitable development strategies for such reservoirs fundamentally require the construction of an accurate structural static model. The uncertainties in building 3D reservoir models are a real challenge for such micro- to nano-pore-scale structures. Based on data from 24 wells distributed throughout the Sadi tight formation, this study presents an application of building a 3D static model for a tight limestone oil reservoir in Iraq. The most common uncertainties confronted while building the model are illustrated, such as accurate estimation of cut-off permeability and porosity values. These values directly affect the calculated net-pay thickness of each layer in the reservoir and consequently the estimate of reservoir initial oil in place (IOIP). The main challenge in static modeling of such reservoirs is dealing with tight-reservoir characteristics, which cause major heterogeneity and complexities that are problematic for reservoir simulation modeling. Twenty-seven porosity and permeability measurements from the Sadi/Tanuma reservoir were used to validate the log-interpretation data for model construction. The results of the history-matching process for the constructed dynamic model are also presented, including data related to oil production, reservoir pressure, and well flowing pressure from the available production history.
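The cut-off sensitivity described above can be illustrated with a minimal net-pay and IOIP calculation; the layer properties, cut-off values, and drainage area below are hypothetical, not Sadi formation data:

```python
# Sketch of how porosity/permeability cut-offs propagate into net pay and
# initial oil in place (IOIP); all layer data below are illustrative.
# Each layer: (thickness m, porosity frac, permeability mD, water sat. frac)
LAYERS = [
    (5.0, 0.12, 0.80, 0.35),
    (3.0, 0.05, 0.05, 0.60),   # marginal layer, sensitive to the cut-offs
    (8.0, 0.10, 0.40, 0.40),
    (4.0, 0.03, 0.01, 0.70),   # below any reasonable cut-off
]

def net_pay_and_ioip(layers, phi_cut, k_cut, area_m2, bo=1.2):
    """Sum net thickness over layers passing both cut-offs and compute
    IOIP = area * sum(h * phi * (1 - Sw)) / Bo (reservoir-volume units)."""
    net_pay = 0.0
    pore_oil = 0.0
    for h, phi, k, sw in layers:
        if phi >= phi_cut and k >= k_cut:
            net_pay += h
            pore_oil += h * phi * (1.0 - sw)
    return net_pay, area_m2 * pore_oil / bo

pay, ioip = net_pay_and_ioip(LAYERS, phi_cut=0.08, k_cut=0.1, area_m2=1.0e6)
# Tight reservoirs are cut-off sensitive: relaxing the cut-offs admits the
# marginal 3 m layer and changes both net pay and IOIP materially.
```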


SPE Journal ◽  
2010 ◽  
Vol 15 (04) ◽  
pp. 1062-1076 ◽  
Author(s):  
A. Seiler ◽ 
S. I. Aanonsen ◽ 
G. Evensen ◽ 
J. C. Rivenæs

Summary Although large uncertainties are typically associated with reservoir structure, the reservoir geometry is usually fixed to a single interpretation in history-matching workflows, and focus is on the estimation of geological properties such as facies location, porosity, and permeability fields. Structural uncertainties can have significant effects on the bulk reservoir volume, well planning, and predictions of future production. In this paper, we consider an integrated reservoir-characterization workflow for structural-uncertainty assessment and continuous updating of the structural reservoir model by assimilation of production data. We address some of the challenges linked to structural-surface updating with the ensemble Kalman filter (EnKF). An ensemble of reservoir models, expressing explicitly the uncertainty resulting from seismic interpretation and time-to-depth conversion, is created. The top and bottom reservoir-horizon uncertainties are treated as parameters for assisted history matching and are updated by sequential assimilation of production data using the EnKF. To avoid modifications in the grid architecture and thus to ensure a fixed dimension of the state vector, an elastic-grid approach is proposed: the geometry of a base-case simulation grid is deformed to match the realizations of the top and bottom reservoir horizons. The method is applied to a synthetic example, and promising results are obtained. The result is an ensemble of history-matched structural models with reduced and quantified uncertainty. The updated ensemble of structures provides a more reliable characterization of the reservoir architecture and a better estimate of the field oil in place.
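The EnKF update of a structural parameter can be sketched in the scalar case as follows; the prior ensemble, the linear forward response, and the observation are illustrative assumptions, not the paper's synthetic example:

```python
import random

def enkf_update(ensemble, d_obs, forward, obs_std, seed=3):
    """Stochastic EnKF analysis step (1D): each member is shifted along the
    ensemble cross-covariance between the parameter and its predicted data."""
    rng = random.Random(seed)
    preds = [forward(x) for x in ensemble]
    n = len(ensemble)
    x_mean = sum(ensemble) / n
    d_mean = sum(preds) / n
    cov_xd = sum((x - x_mean) * (d - d_mean)
                 for x, d in zip(ensemble, preds)) / (n - 1)
    var_d = sum((d - d_mean) ** 2 for d in preds) / (n - 1)
    gain = cov_xd / (var_d + obs_std ** 2)   # Kalman gain, scalar case
    # perturb the observation for each member (stochastic EnKF variant)
    return [x + gain * (d_obs + rng.gauss(0.0, obs_std) - d)
            for x, d in zip(ensemble, preds)]

def forward(shift):
    # hypothetical response: a deeper top horizon gives higher pressure
    return 200.0 + 0.5 * shift

# Prior ensemble of top-horizon depth shifts (m), expressing the
# seismic-interpretation and time-to-depth uncertainty.
rng = random.Random(0)
prior = [rng.gauss(0.0, 10.0) for _ in range(100)]
posterior = enkf_update(prior, d_obs=203.0, forward=forward, obs_std=1.0)
# The posterior ensemble is pulled toward depth shifts consistent with the
# observed pressure, with reduced and quantified spread.
```

In the elastic-grid setting the same update is applied to every horizon-surface parameter in the state vector; deforming a fixed base-case grid to the updated horizons keeps the state dimension constant across assimilation steps.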


SPE Journal ◽  
2020 ◽  
Vol 25 (04) ◽  
pp. 2055-2066
Author(s):  
Sarath Pavan Ketineni ◽  
Subhash Kalla ◽  
Shauna Oppert ◽  
Travis Billiter

Summary Standard history-matching workflows use qualitative 4D seismic observations to assist in reservoir modeling and simulation. However, such workflows lack a robust framework for quantitatively integrating 4D seismic interpretations. 4D seismic or time-lapse-seismic interpretations provide valuable interwell saturation and pressure information, and quantitatively integrating this interwell data can help to constrain simulation parameters and improve the reliability of production modeling. In this paper, we outline technologies aimed at leveraging the value of 4D for reducing uncertainty in the range of history-matched models and improving the production forecast. The proposed 4D assisted-history-match (4DAHM) workflows use interpretations of 4D seismic anomalies for improving the reservoir-simulation models. Design of experiments is initially used to generate the probabilistic history-match simulations by varying the range of uncertain parameters (Schmidt and Launsby 1989; Montgomery 2017). Saturation maps are extracted from the production-history-matched (PHM) simulations and then compared with 4D predicted swept anomalies. An automated extraction method was created and is used to reconcile spatial sampling differences between 4D data and simulation output. Interpreted 4D data are compared with simulation output, and the mismatch generated is used as a 4D filter to refine the suite of reservoir-simulation models. The selected models are used to identify reservoir-simulation parameters that are sensitive for generating a good match. The application of 4DAHM workflows has resulted in reduced uncertainty in volumetric predictions of oil fields, probabilistic saturation S-curves at target locations, and fundamental changes to the dynamic model needed to improve the match to production data. Results from adopting this workflow in two different deepwater reservoirs are discussed. 
The workflows not only reduced uncertainty but also provided information on key performance indicators that are critical to obtaining a robust history match. In the first case study presented, the deepwater-oilfield 4DAHM resulted in a reduction of uncertainty by 20% of original oil in place (OOIP) and by 25% in estimated ultimate recoverable (EUR) oil in the P90 to P10 range estimates. In the second case study, the 4DAHM workflow exploited discrepancies between 4D seismic and simulation data to identify features that needed to be included in the dynamic model. Connectivity was increased through newly interpreted interchannel erosional contacts, as well as subseismic faults. Moreover, the workflow provided an improved drilling location with a higher probability of tapping unswept oil and a better EUR. The 4D filters constrained the suite of reservoir-simulation models and helped to identify four of 24 simulation parameters critical for success. The updated PHM models honor both the production data and the 4D interpretations, resulting in reduced uncertainty across the S-curve and, in this case, an increased P50 OOIP of 24% for a proposed infill drilling location, plus a significant cycle-time savings.
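The 4D-filter step can be sketched as follows; the binary swept maps, the mismatch metric, and the cut-off are illustrative placeholders for the paper's automated extraction and comparison:

```python
# Sketch of the 4D-filter idea: compare each simulation model's swept map
# against the 4D-seismic-interpreted anomaly and keep the best agreements.

def swept_mismatch(sim_map, seismic_map):
    """Fraction of cells where the simulated swept indicator disagrees with
    the 4D-interpreted swept anomaly (both are 0/1 grids of equal shape)."""
    cells = [(s, o) for row_s, row_o in zip(sim_map, seismic_map)
             for s, o in zip(row_s, row_o)]
    return sum(1 for s, o in cells if s != o) / len(cells)

def apply_4d_filter(models, seismic_map, cutoff):
    """Refine the suite of production-history-matched (PHM) models: keep
    those whose swept-anomaly mismatch is below the cut-off."""
    return [name for name, sim_map in models
            if swept_mismatch(sim_map, seismic_map) <= cutoff]

seismic = [[1, 1, 0], [1, 0, 0]]          # interpreted swept anomaly
models = [
    ("phm_01", [[1, 1, 0], [1, 0, 0]]),   # perfect agreement
    ("phm_02", [[1, 0, 0], [1, 0, 0]]),   # one cell off
    ("phm_03", [[0, 0, 1], [0, 1, 1]]),   # poor agreement, filtered out
]
kept = apply_4d_filter(models, seismic, cutoff=0.2)
```

Models passing the filter honor both the production history and the 4D interpretation, which is what tightens the S-curve of volumetric predictions.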

