Closing the Loop on a History Match for a Permian EOR Field Using Relative Permeability Data Uncertainty

2021 ◽  
Author(s):  
Usman Aslam ◽  
Jorge Burgos ◽  
Craig Williams ◽  
Shawn McCloskey ◽  
James Cooper ◽  
...  

Abstract Reservoir production forecasts are inherently uncertain due to the lack of quality data available to build predictive reservoir models. Multiple data types, including historical production, well tests (RFT/PLT), and time-lapse seismic data, are assimilated into reservoir models during the history matching process to improve predictability of the model. Traditionally, a ‘best estimate’ for relative permeability data is assumed during the history matching process, despite there being significant uncertainty in the relative permeability. Relative permeability governs multiphase flow in the reservoir; therefore, it is of significant importance for understanding the reservoir behavior as well as for model calibration, and hence for reliable production forecasts. Performing sensitivities around the ‘best estimate’ relative permeability case will cover only part of the uncertainty space, with no indication of the confidence that may be placed on these forecasts. In this paper, we present an application of a Bayesian framework for uncertainty assessment and efficient history matching of a Permian CO2 EOR field for reliable production forecasts. The study field has complex geology with over 65 years of historical data from primary recovery, waterflood, and CO2 injection. Relative permeability data from the field showed significant uncertainty, so we used uncertainties in the saturation endpoints as well as in the curvature of the relative permeability in multiple zones, by employing generalized Corey functions for relative permeability parameterization. Uncertainty in the relative permeability is propagated through a common platform integrator. An automated workflow generates the first set of relative permeability curves sampled from the prior distribution of saturation endpoints and Corey exponents, called ‘scoping runs’. These relative permeability curves are then passed to the reservoir simulator. 
The assumptions of uncertainties in the relative permeability data and other dynamic parameters are quickly validated by comparing the scoping runs and historical observations. By creating a mismatch or likelihood function, the Bayesian framework generates an ensemble of history matched models calibrated to the production data, which can then be used for reliable probabilistic forecasting. Several iterations during the manual history match did not yield an acceptable solution, as uncertainty in the relative permeability was ignored. An application of Bayesian inference accelerated by a proxy model found the relative permeability data to be one of the most influential parameters during the assisted history matching exercise. Incorporating the uncertainty in relative permeability data along with other dynamic parameters not only helped speed up the model calibration process, but also led to the identification of multiple history matched models. In addition, results show that the use of the Bayesian framework significantly reduced uncertainty in the most important dynamic parameters. The proposed approach allows incorporating previously ignored uncertainty in the relative permeability data in a systematic manner. The user-defined mismatch function increases the likelihood of obtaining an acceptable match, and its weights account for both the measurement uncertainty and the effect of simulation-model inaccuracies. The Bayesian framework considers the whole uncertainty space and not just the history match region, leading to the identification of multiple history matched models.
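The abstract does not reproduce the parameterization itself, but a minimal sketch of generating ‘scoping run’ curves from priors on Corey endpoints and exponents might look like the following. All prior ranges, function names, and the two-phase water/oil form are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def corey_relperm(sw, swc, sorw, krw_max, kro_max, nw, no):
    """Generalized Corey water/oil relative permeability curves.

    sw      : water saturation array
    swc     : connate (irreducible) water saturation endpoint
    sorw    : residual oil saturation endpoint
    krw_max : water relperm at residual oil; kro_max : oil relperm at swc
    nw, no  : Corey exponents controlling curvature
    """
    swn = np.clip((sw - swc) / (1.0 - swc - sorw), 0.0, 1.0)  # normalized saturation
    krw = krw_max * swn ** nw
    kro = kro_max * (1.0 - swn) ** no
    return krw, kro

def sample_prior_curves(n_curves, sw):
    """Draw 'scoping run' curves from uniform priors on endpoints and exponents
    (all ranges below are assumed for illustration)."""
    curves = []
    for _ in range(n_curves):
        swc = rng.uniform(0.10, 0.25)
        sorw = rng.uniform(0.15, 0.35)
        nw = rng.uniform(1.5, 4.0)
        no = rng.uniform(1.5, 4.0)
        krw_max = rng.uniform(0.2, 0.6)
        curves.append(corey_relperm(sw, swc, sorw, krw_max, 1.0, nw, no))
    return curves

sw = np.linspace(0.0, 1.0, 51)
ensemble = sample_prior_curves(100, sw)  # curves to pass to the simulator
```

Each sampled curve pair would then be written to simulator input for one scoping run, and the resulting mismatch against history drives the Bayesian update.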

2021 ◽  
Author(s):  
Ali Al-Turki ◽  
Obai Alnajjar ◽  
Majdi Baddourah ◽  
Babatunde Moriwawon

Abstract The algorithms and workflows have been developed to couple efficient model parameterization with stochastic, global optimization using a Multi-Objective Genetic Algorithm (MOGA) for global history matching, coupled with an advanced workflow for streamline sensitivity-based inversion for fine-tuning. During parameterization the low-rank subsets of most influencing reservoir parameters are identified and propagated to MOGA to perform the field-level history match. Data misfits between the field historical data and simulation data are calculated with multiple realizations of reservoir models that quantify and capture reservoir uncertainty. Each generation of the optimization algorithms reduces the data misfit relative to the previous iteration. This iterative process continues until a satisfactory field-level history match is reached or there are no further improvements. The fine-tuning process of well-connectivity calibration is then performed with a streamline sensitivity-based inversion algorithm to locally update the model to reduce well-level mismatch. In this study, an application of the proposed algorithms and workflow is demonstrated for model calibration and history matching. The synthetic reservoir model used in this study is discretized into millions of grid cells with hundreds of producer and injector wells. It is designed to generate several decades of production and injection history to evaluate and demonstrate the workflow. In field-level history matching, reservoir rock properties (e.g., permeability, fault transmissibility, etc.) are parameterized to conduct the global match of pressure and production rates. Grid Connectivity Transform (GCT) was used and assessed to parameterize the reservoir properties. In addition, the convergence rate and history match quality of MOGA were assessed during the field (global) history matching. 
Also, the effectiveness of the streamline-based inversion was evaluated by quantifying the additional improvement in history matching quality per well. The developed parametrization and optimization algorithms and workflows revealed the unique features of each of the algorithms for model calibration and history matching. This integrated workflow has successfully defined and carried uncertainty throughout the history matching process. Following the successful field-level history match, the well-level history matching was conducted using streamline sensitivity-based inversion, which further improved the history match quality and conditioned the model to historical production and injection data. In general, the workflow results in enhanced history match quality in a shorter turnaround time. The geological realism of the model is retained for robust prediction and development planning.
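As a toy illustration of the misfit-driven evolutionary loop described above (not the authors' MOGA, which is multi-objective and couples to a reservoir simulator), a single-generation genetic step over a low-rank parameter vector could be sketched as follows; all names and the toy "forward model" are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)

def data_misfit(simulated, observed, sigma):
    """Weighted least-squares mismatch between simulated and historical data."""
    return np.sum(((simulated - observed) / sigma) ** 2)

def evolve(population, objectives, mutate_scale=0.1):
    """One generation: binary-tournament selection, arithmetic crossover,
    Gaussian mutation. 'objectives' maps a parameter vector to a vector of
    misfits (one per data type); here they are summed into a single score."""
    scores = np.array([np.sum(objectives(p)) for p in population])
    n = len(population)
    children = []
    for _ in range(n):
        i, j = rng.integers(0, n, 2)
        a = population[i] if scores[i] < scores[j] else population[j]
        i, j = rng.integers(0, n, 2)
        b = population[i] if scores[i] < scores[j] else population[j]
        w = rng.random()
        children.append(w * a + (1 - w) * b + rng.normal(0, mutate_scale, a.shape))
    return children

# Toy problem: recover a 5-component low-rank parameter vector from two
# hypothetical data types (e.g., pressure and rate misfits).
truth = np.array([0.3, -0.5, 1.2, 0.0, 0.8])
obj = lambda p: np.array([data_misfit(p[:3], truth[:3], 0.1),
                          data_misfit(p[3:], truth[3:], 0.1)])
pop = [rng.normal(0.0, 1.0, 5) for _ in range(40)]
for _ in range(60):
    pop = evolve(pop, obj)
best = min(pop, key=lambda p: np.sum(obj(p)))
```

In the actual workflow, each population member is one realization of GCT coefficients and each objective is a field-level data misfit, so uncertainty is carried by the surviving population rather than a single best model.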


2021 ◽  
Author(s):  
Carlos Esteban Alfonso ◽  
Frédérique Fournier ◽  
Victor Alcobia

Abstract The determination of petrophysical rock-types often lacks the inclusion of measured multiphase flow properties such as the relative permeability curves. This is either the consequence of a limited number of SCAL relative permeability experiments, or due to the difficulty of linking the relative permeability characteristics to standard rock-types stemming from porosity, permeability and capillary pressure. However, as soon as the number of relative permeability curves is significant, they can be processed with the machine learning methodology presented in this paper. The process leads to an automatic definition of relative permeability based rock-types, from a precise and objective characterization of the curve shapes, which would not be achieved with a manual process. It improves the characterization of petrophysical rock-types, prior to their use in static and dynamic modeling. The machine learning approach analyzes the shapes of curves for their automatic classification. It develops a pattern recognition process combining the use of principal component analysis with an unsupervised clustering scheme. Before this, the set of relative permeability curves is pre-processed (normalization with the integration of irreducible water and residual oil saturations for the SCAL relative permeability samples from an imbibition experiment) and integrated under fractional flow curves. Fractional flow curves proved to be an effective way to unify the relative permeability of the two fluid phases in a single curve that characterizes the displacement efficiency of the rock sample's pore system. The methodology has been tested on a real data set from a carbonate reservoir having a significant number of relative permeability curves available for the study, in addition to capillary pressure, porosity and permeability data. 
The results evidenced the successful grouping of the relative permeability samples, according to their fractional flow curves, which allowed the classification of the rocks from poorest to best displacement efficiency. This demonstrates the feasibility of the machine learning process for automatically defining rock-types from relative permeability data. The fractional flow rock-types were compared to rock-types obtained from capillary pressure analysis. The results indicated a lack of correspondence between the two series of rock-types, which testifies to the additional information brought by the relative permeability data in a rock-typing study. Our results also underscore the importance of having good quality SCAL experiments, with an accurate characterization of the saturation end-points, which are used for the normalization of the curves, and a consistent sampling for both capillary pressure and relative permeability measurements.
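A minimal sketch of the pattern-recognition chain described above — fractional flow curves, PCA on the curve shapes, then unsupervised clustering — on synthetic Corey-type inputs, using plain NumPy in place of any specific toolkit (all parameter ranges and viscosities are assumed for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)

def fractional_flow(sw_norm, krw_max, kro_max, nw, no, muw=0.5, muo=2.0):
    """Water fractional flow fw = (krw/muw) / (krw/muw + kro/muo),
    built from normalized Corey-type relative permeability curves."""
    krw = krw_max * sw_norm ** nw
    kro = kro_max * (1.0 - sw_norm) ** no
    return (krw / muw) / (krw / muw + kro / muo)

# Synthetic stand-in for normalized SCAL samples: two families with
# poorer vs better displacement efficiency (parameter ranges assumed).
s = np.linspace(0.0, 1.0, 40)
curves = np.array(
    [fractional_flow(s, 0.3, 1.0, rng.uniform(1.5, 2.0), rng.uniform(3.5, 4.0))
     for _ in range(15)]
    + [fractional_flow(s, 0.6, 1.0, rng.uniform(3.5, 4.0), rng.uniform(1.5, 2.0))
       for _ in range(15)])

# Shape characterization: PCA via SVD of the mean-centered curves.
X = curves - curves.mean(axis=0)
_, _, vt = np.linalg.svd(X, full_matrices=False)
scores = X @ vt[:3].T  # first three principal-component scores

# Unsupervised clustering: a tiny k-means (k=2) on the PCA scores.
centers = scores[rng.choice(len(scores), 2, replace=False)]
for _ in range(20):
    dist = np.linalg.norm(scores[:, None, :] - centers[None, :, :], axis=2)
    labels = dist.argmin(axis=1)
    centers = np.array([scores[labels == k].mean(axis=0) if np.any(labels == k)
                        else centers[k] for k in (0, 1)])
```

The cluster labels then play the role of relative permeability based rock-types, ranked by the displacement efficiency of their mean fractional flow curves.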


2021 ◽  
Author(s):  
M. A. Borregales Reverón ◽  
H. H. Holm ◽  
O. Møyner ◽  
S. Krogstad ◽  
K.-A. Lie

Abstract The Ensemble Smoother with Multiple Data Assimilation (ES-MDA) method has been popular for petroleum reservoir history matching. However, the increasing inclusion of automatic differentiation in reservoir models opens the possibility to history-match models using gradient-based optimization. Here, we discuss, study, and compare ES-MDA and gradient-based optimization for history-matching waterflooding models. We apply these two methods to history match reduced GPSNet-type models. To study the methods, we use implementations of ES-MDA and gradient-based optimization in the open-source MATLAB Reservoir Simulation Toolbox (MRST), and compare the methods in terms of history-matching quality and computational efficiency. We show complementary advantages of both ES-MDA and gradient-based optimization. ES-MDA is suitable when an exact gradient is not available and provides a satisfactory forecast of future production that often envelops the reference history data. On the other hand, gradient-based optimization is efficient if the exact gradient is available, as it then requires a low number of model evaluations. If the exact gradient is not available, an approximate gradient or ES-MDA are good alternatives, giving equivalent results in terms of computational cost and prediction quality.
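A compact sketch of the ES-MDA update on a toy linear forward model may help fix ideas; the inflation schedule, ensemble size, and forward model below are illustrative assumptions, and MRST's actual implementation differs:

```python
import numpy as np

rng = np.random.default_rng(3)

def es_mda(m_prior, forward, d_obs, cd, n_assimilations=4):
    """Ensemble Smoother with Multiple Data Assimilation (ES-MDA).

    m_prior : (n_ens, n_param) prior parameter ensemble
    forward : maps one parameter vector to predicted data
    d_obs   : (n_data,) observed data
    cd      : (n_data,) measurement-error variances
    Each assimilation uses inflated noise alpha*cd with sum(1/alpha) = 1;
    here alpha = n_assimilations for every step (the simplest schedule).
    """
    m = m_prior.copy()
    alpha = float(n_assimilations)
    for _ in range(n_assimilations):
        d = np.array([forward(mi) for mi in m])            # predicted data
        dm = m - m.mean(axis=0)
        dd = d - d.mean(axis=0)
        cmd = dm.T @ dd / (len(m) - 1)                     # cross-covariance
        cdd = dd.T @ dd / (len(m) - 1)                     # data covariance
        gain = cmd @ np.linalg.inv(cdd + alpha * np.diag(cd))
        noise = rng.normal(0.0, np.sqrt(alpha * cd), size=d.shape)
        m = m + (d_obs + noise - d) @ gain.T               # Kalman-like update
    return m

# Toy linear "waterflood" forward model: d = G m
G = rng.normal(size=(8, 3))
m_true = np.array([1.0, -0.5, 2.0])
d_obs = G @ m_true
cd = np.full(8, 1e-4)
prior = rng.normal(0.0, 1.0, size=(200, 3))
posterior = es_mda(prior, lambda mi: G @ mi, d_obs, cd)
```

Note the trade-off the paper highlights: each assimilation costs one forward run per ensemble member, whereas a gradient-based matcher with an exact adjoint needs far fewer model evaluations.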


2019 ◽  
Vol 89 ◽  
pp. 01004
Author(s):  
Dylan Shaw ◽  
Peyman Mostaghimi ◽  
Furqan Hussain ◽  
Ryan T. Armstrong

Due to the poroelasticity of coal, both porosity and permeability change over the life of the field as pore pressure decreases and effective stress increases. The relative permeability also changes as the effective stress regime shifts from one state to another. This paper examines coal relative permeability trends for changes in effective stress. The unsteady-state technique was used to determine experimental relative permeability curves, which were then corrected for the capillary-end effect through history matching. A modified Brooks-Corey correlation was sufficient for generating relative permeability curves and was successfully used to history match the laboratory data. Analysis of the corrected curves indicates that as effective stress increases, gas relative permeability increases, irreducible water saturation increases and the relative permeability cross-point shifts to the right.
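A sketch of a modified Brooks-Corey gas/water form of the kind used for such history matching, with endpoint and exponent values assumed only to illustrate the reported trend (higher effective stress: higher irreducible water saturation, higher gas relative permeability, cross-point shifted right):

```python
import numpy as np

def brooks_corey(sw, swirr, sgr, krw_end, krg_end, nw, ng):
    """Modified Brooks-Corey gas/water relative permeability.

    swirr : irreducible water saturation; sgr : residual gas saturation
    krw_end, krg_end : endpoint relative permeabilities
    nw, ng : Corey exponents fitted during history matching
    """
    swn = np.clip((sw - swirr) / (1.0 - swirr - sgr), 0.0, 1.0)
    krw = krw_end * swn ** nw
    krg = krg_end * (1.0 - swn) ** ng
    return krw, krg

def cross_point(sw, krw, krg):
    """Water saturation where the two curves cross (nearest grid point)."""
    return sw[np.argmin(np.abs(krw - krg))]

sw = np.linspace(0.0, 1.0, 201)
# Illustrative low- vs high-effective-stress parameter sets (values assumed):
krw_lo, krg_lo = brooks_corey(sw, 0.20, 0.05, 1.0, 1.0, 3.0, 2.0)
krw_hi, krg_hi = brooks_corey(sw, 0.35, 0.05, 1.0, 1.0, 3.0, 1.5)
```

With these assumed parameters, the high-stress case reproduces the paper's qualitative findings: the gas curve sits higher at a given saturation and the cross-point moves to higher water saturation.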


2009 ◽  
Vol 12 (03) ◽  
pp. 446-454 ◽  
Author(s):  
Frode Georgsen ◽  
Anne R. Syversveen ◽  
Ragnar Hauge ◽  
Jan I. Tollefsrud ◽  
Morten Fismen

Summary The possibility of updating reservoir models with new well information is important for good reservoir management. The process of going from drilling a new well to updating the static model and history matching the new model is often time-consuming. This paper presents new algorithms that allow the rapid updating of object-based facies models by further development of already existing models. An existing facies realization is adjusted to match new well observations by changing objects locally or adding/removing objects if required. Parts of the realization that are not influenced by the new wells are not changed. A local update of a specified region of the reservoir can be performed, leaving the rest of the reservoir unchanged or only minimally changed by the new wells. In this method, the main focus is the algorithm implemented to honor the well conditioning. The effect of this algorithm on different object models is presented through several case studies. These studies show how the local update consistently includes new information while leaving the rest of the realization unperturbed, thereby preserving the good history match. Introduction Rapid updating of static and dynamic reservoir models is important for reservoir management. Continual maintenance of history-matched models allows for right-time decisions to optimize the reservoir performance. The process of going from drilling a new well to updating the static model and history matching the new model is often time-consuming. Static reservoir models and history matches are updated only intermittently, and there is typically a 1- to 2-year delay between the drilling of a new well and the generation of a reliable history-matched model that incorporates the new information. This paper presents new algorithms that allow rapid updating of static reservoir models when new wells are drilled. 
The static-model update is designed to keep as much of the existing history match as possible by locally adjusting the existing static model to the new well data. As the name implies, object models use a set of facies objects to generate a facies realization. Stochastic object-modeling algorithms have been developed to improve the representation of facies architectures in complex heterogeneous reservoirs and, thereby, to obtain more-realistic dynamic behavior of the reservoir models. We consider the main advantages of object models to be the ability to create geologically realistic facies elements (objects) and control the interaction between them, to explicitly correlate observations between wells (connectivity), and to apply intraobject petrophysical trends.


SPE Journal ◽  
2006 ◽  
Vol 11 (04) ◽  
pp. 464-479 ◽  
Author(s):  
B. Todd Hoffman ◽  
Jef K. Caers ◽  
Xian-Huan Wen ◽  
Sebastien B. Strebelle

Summary This paper presents an innovative methodology to integrate prior geologic information, well-log data, seismic data, and production data into a consistent 3D reservoir model. Furthermore, the method is applied to a real channel reservoir off the African coast. The methodology relies on the probability-perturbation method (PPM). Perturbing probabilities rather than actual petrophysical properties guarantees that the conceptual geologic model is maintained and that any history-matching-related artifacts are avoided. Reservoir models that match all types of data are likely to have more predictive power than models in which some data are not honored. The first part of the paper reviews the details of the PPM, and the next part of this paper describes the additional work that is required to history-match real reservoirs using this method. Then, a geological description of the reservoir case study is provided, and the procedure to build 3D reservoir models that are only conditioned to the static data is covered. Because of the character of the field, the channels are modeled with a multiple-point geostatistical method. The channel locations are perturbed in a manner such that the oil, water, and gas rates from the reservoir more accurately match the rates observed in the field. Two different geologic scenarios are used, and multiple history-matched models are generated for each scenario. The reservoir has been producing for approximately 5 years, but the models are matched only to the first 3 years of production. Afterward, to check predictive power, the matched models are run for the last 1½ years, and the results compare favorably with the field data. Introduction Reservoir models are constructed to better understand reservoir behavior and to better predict reservoir response. Economic decisions are often based on the predictions from reservoir models; therefore, such predictions need to be as accurate as possible. 
To achieve this goal, the reservoir model should honor all sources of data, including well-log, seismic, geologic information, and dynamic (production rate and pressure) data. Incorporating dynamic data into the reservoir model is generally known as history matching. History matching is difficult because it poses a nonlinear inverse problem in the sense that the relationship between the reservoir model parameters and the dynamic data is highly nonlinear and multiple solutions are available. Therefore, history matching is often done with a trial-and-error method. In real-world applications of history matching, reservoir engineers manually modify an initial model provided by geoscientists until the production data are matched. The initial model is built based on geological and seismic data. While attempts are usually made to honor these other data as much as possible, often the history-matched models are unrealistic from a geological (and geophysical) point of view. For example, permeability is often altered to increase or decrease flow in areas where a mismatch is observed; however, the permeability alterations usually come in the form of box-shaped or pipe-shaped geometries centered around wells or between wells and tend to be devoid of any geological considerations. The primary focus lies in obtaining a history match.
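The core PPM idea — perturbing a probability field rather than the petrophysical properties themselves, then line-searching a single perturbation parameter r against the production mismatch — can be caricatured as follows. Independent indicator draws stand in for the multiple-point geostatistical simulation, and all names and the toy mismatch are illustrative:

```python
import numpy as np

rng = np.random.default_rng(4)

def perturb_probability(i0, p_marginal, r):
    """PPM blend: r=0 reproduces the current indicator realization i0 (0/1);
    r=1 draws a fresh realization from the prior probability model."""
    return (1.0 - r) * i0 + r * p_marginal

def draw_realization(prob):
    """Sample an indicator field from a probability field. A real PPM
    implementation would use multiple-point geostatistics conditioned to
    wells; independent draws are used here only for illustration."""
    return (rng.random(prob.shape) < prob).astype(float)

def history_match_step(i0, p_marginal, mismatch, r_grid):
    """One outer PPM iteration: a 1D search over the scalar r."""
    best_r, best_i, best_val = None, None, np.inf
    for r in r_grid:
        i_new = draw_realization(perturb_probability(i0, p_marginal, r))
        val = mismatch(i_new)
        if val < best_val:
            best_r, best_i, best_val = r, i_new, val
    return best_r, best_i, best_val

# Toy example: match the net-to-gross implied by hypothetical production data.
i0 = draw_realization(np.full(1000, 0.25))
mismatch = lambda i: abs(i.mean() - 0.40)       # stand-in for a flow mismatch
r, i1, val = history_match_step(i0, np.full(1000, 0.40), mismatch,
                                np.linspace(0.0, 1.0, 11))
```

Because only the probability field is perturbed, every intermediate realization remains a legitimate draw from the geostatistical prior, which is what keeps the history-matched models geologically consistent.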


SPE Journal ◽  
2020 ◽  
Vol 25 (06) ◽  
pp. 3265-3279
Author(s):  
Hamidreza Hamdi ◽  
Hamid Behmanesh ◽  
Christopher R. Clarkson

Summary Rate-transient analysis (RTA) is a useful reservoir/hydraulic fracture characterization method that can be applied to multifractured horizontal wells (MFHWs) producing from low-permeability (tight) and shale reservoirs. In this paper, we applied a recently developed three-phase RTA technique to the analysis of production data from an MFHW completed in a low-permeability volatile oil reservoir in the Western Canadian Sedimentary Basin. This RTA technique is used to analyze the transient linear flow regime for wells operated under constant flowing bottomhole pressure (BHP) conditions. With this method, the slope of the square-root-of-time plot applied to any of the producing phases can be used to directly calculate the linear flow parameter xf√k without defining pseudovariables. The method requires a set of input pressure/volume/temperature (PVT) data and an estimate of two-phase relative permeability curves. For the field case studied herein, the PVT model is constructed by tuning an equation of state (EOS) from a set of PVT experiments, while the relative permeability curves are estimated from numerical model history-matching results. The subject well, an MFHW completed in 15 stages, produces oil, water, and gas at a nearly constant (measured downhole) flowing BHP. This well is completed in a low-permeability, near-critical volatile oil system. For this field case, application of the recently proposed RTA method leads to an estimate of xf√k that is in close agreement (within 7%) with the results of a numerical model history match performed in parallel. The RTA method also provides pressure–saturation (P–S) relationships for all three phases that are within 2% of those derived from the numerical model. The derived P–S relationships are central to the use of other RTA methods that require calculation of multiphase pseudovariables. 
The three-phase RTA technique developed herein is a simple yet rigorous and accurate alternative to numerical model history matching for estimating xf√k when fluid properties and relative permeability data are available.
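The linear-flow diagnostic underlying such methods is the straight-line reciprocal-rate-versus-square-root-of-time plot for constant-BHP production. A sketch of extracting its slope from synthetic data follows; the constant that converts the slope to xf√k (bundling PVT properties, net pay, and drawdown) is deliberately omitted, and the synthetic rate model is an assumption for illustration:

```python
import numpy as np

def linear_flow_slope(t, q):
    """Fit the transient linear-flow signature 1/q = m*sqrt(t) + b.
    Under constant-BHP linear flow, the linear flow parameter xf*sqrt(k)
    is inversely proportional to the slope m, with a proportionality
    constant (not shown) that depends on fluid properties, net pay,
    and drawdown."""
    m, b = np.polyfit(np.sqrt(t), 1.0 / q, 1)
    return m, b

# Synthetic constant-BHP linear-flow data: rate declines as 1/sqrt(t).
t = np.linspace(1.0, 400.0, 200)        # producing time, days
m_true, b_true = 0.002, 0.01            # assumed slope and intercept
q = 1.0 / (m_true * np.sqrt(t) + b_true)
m_fit, b_fit = linear_flow_slope(t, q)
```

In the three-phase extension, the same slope is read off the square-root-of-time plot for any of the producing phases, which is what removes the need for pseudovariables.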


2018 ◽  
Vol 58 (2) ◽  
pp. 683 ◽  
Author(s):  
Peter Behrenbruch ◽  
Tuan G. Hoang ◽  
Khang D. Bui ◽  
Minh Triet Do Huu ◽  
Tony Kennaird

The Laminaria field, located offshore in the Timor Sea, is one of Australia’s premier oil developments, operated for many years by Woodside Energy Ltd. First production was achieved in 1999 using a state-of-the-art floating production storage and offloading vessel, the largest deployed in Australian waters. As is typical, dynamic reservoir simulation was used to predict reservoir performance and forecast production and ultimate recovery. Initial models, using special core analysis (SCAL) laboratory data and pseudos, covered a range of approaches, field and conceptual models. Initial coarser models also used straight-line relative permeability curves. These models were later refined during history matching. The success of simulation studies depends critically on optimal gridding, particularly vertical definition. An objective of the study presented is to demonstrate the importance of optimal and detailed vertical zonation using Routine Core Analysis data and a range of Hydraulic Flow Zone Unit models. In this regard, the performance of a fine-scale model is compared with three alternative, more traditional and coarser models. Secondly, the choice of SCAL rock parameters, particularly relative permeability, may also have a significant impact. This paper discusses the use of the more recently developed Carman-Kozeny based SCAL models: the Modified Carman-Kozeny Purcell (MCKP) model for capillary pressure and the 2-phase Modified Carman-Kozeny (2p-MCK) model for relative permeability. These models compare favourably with industry standard approaches, the use of Leverett J-functions for capillary pressure and the Modified Brooks-Corey model for relative permeability. The benefit of the MCK-based models is that they have better functionality and far better adherence to actual laboratory data.
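For reference, the industry-standard Leverett J-function mentioned above scales capillary pressure between rocks via J = C·Pc·√(k/φ)/(σ·cosθ). A sketch in common oilfield units (Pc in psi, k in md, σ in dyn/cm, with C ≈ 0.21645 — unit conventions vary between sources, and the sample values below are assumed):

```python
import numpy as np

C = 0.21645  # common oilfield constant for psi / md / (dyn/cm); conventions vary

def leverett_j(pc, sigma, theta_deg, k_md, phi):
    """Leverett J-function: J(Sw) = C * pc * sqrt(k/phi) / (sigma*cos(theta))."""
    return C * pc * np.sqrt(k_md / phi) / (sigma * np.cos(np.radians(theta_deg)))

def pc_from_j(j, sigma, theta_deg, k_md, phi):
    """Invert the J-function to scale a capillary pressure curve to another rock."""
    return j * sigma * np.cos(np.radians(theta_deg)) / (C * np.sqrt(k_md / phi))

# Scale a measured Pc curve from a 100-md plug to a 10-md reservoir rock
# at the same porosity (illustrative numbers):
pc_lab = np.array([1.0, 2.0, 5.0, 15.0])              # psi
j = leverett_j(pc_lab, 30.0, 0.0, 100.0, 0.20)
pc_reservoir = pc_from_j(j, 30.0, 0.0, 10.0, 0.20)
```

The MCKP and 2p-MCK models the paper advocates replace this single-curve scaling with functional forms fitted directly to the laboratory data, which is the source of the "far better adherence" claim.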


2020 ◽  
Author(s):  
Konrad Wojnar ◽  
Jon Sætrom ◽  
Tore Felix Munck ◽  
Martha Stunell ◽  
Stig Sviland-Østre ◽  
...  

Abstract The aim of the study was to create an ensemble of equiprobable models that could be used for improving the reservoir management of the Vilje field. Qualitative and quantitative workflows were developed to systematically and efficiently screen, analyze and history match an ensemble of reservoir simulation models to production and 4D seismic data. The goal of developing the workflows is to increase the utilization of data from 4D seismic surveys for reservoir characterization. The qualitative and quantitative workflows are presented, describing their benefits and challenges. The data conditioning produced a set of history matched reservoir models which could be used in the field development decision making process. The proposed workflows allowed for identification of outlying prior and posterior models based on key features where observed data was not covered by the synthetic 4D seismic realizations. As a result, suggestions for a more robust parameterization of the ensemble were made to improve data coverage. The existing history matching workflow efficiently integrated with the quantitative 4D seismic history matching workflow allowing for the conditioning of the reservoir models to production and 4D data. Thus, the predictability of the models was improved. This paper proposes a systematic and efficient workflow using ensemble-based methods to simultaneously screen, analyze and history match production and 4D seismic data. The proposed workflow improves the usability of 4D seismic data for reservoir characterization, and in turn, for the reservoir management and the decision-making processes.


SPE Journal ◽  
2009 ◽  
Vol 15 (02) ◽  
pp. 509-525 ◽  
Author(s):  
Yudou Wang ◽  
Gaoming Li ◽  
Albert C. Reynolds

Summary With the ensemble Kalman filter (EnKF) or smoother (EnKS), it is easy to adjust a wide variety of model parameters by assimilation of dynamic data. We focus first on the case where realizations and estimates of the depths of the initial fluid contacts, as well as gridblock rock-property fields, are generated by matching production data with the EnKS. Then we add the parameters defining power-law relative permeability curves to the set of parameters estimated by assimilating production data with EnKS. The efficiency of EnKF and EnKS arises because data are assimilated sequentially in time, and so "history matching data" requires only one forward run of the reservoir simulator for each ensemble member. For EnKS and EnKF to yield reliable characterizations of the uncertainty in model parameters and future performance predictions, the updated reservoir-simulation variables (e.g., saturations and pressures) must be statistically consistent with the realizations of these variables that would be obtained by rerunning the simulator from time zero using the updated model parameters. This statistical consistency can be established only under assumptions of Gaussianity and linearity that do not normally hold. Here, we use iterative EnKS methods that are statistically consistent, and show that, for the problems considered here, iteration significantly improves the performance of EnKS.
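A minimal sketch of the sequential EnKF analysis step on an augmented state (one uncertain parameter plus one dynamic variable), with a toy linear forecast model standing in for the reservoir simulator; all values and the decline model are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(5)

def enkf_update(ensemble, predicted, d_obs, cd):
    """One EnKF analysis step on an augmented state (parameters + dynamic
    variables). ensemble: (n_ens, n_state); predicted: (n_ens, n_data)."""
    a = ensemble - ensemble.mean(axis=0)
    d = predicted - predicted.mean(axis=0)
    gain = (a.T @ d / (len(ensemble) - 1)) @ np.linalg.inv(
        d.T @ d / (len(ensemble) - 1) + np.diag(cd))
    perturbed = d_obs + rng.normal(0.0, np.sqrt(cd), size=predicted.shape)
    return ensemble + (perturbed - predicted) @ gain.T

# Sequential assimilation: the state is [parameter, pressure]; pressure
# "declines" each step at a rate set by the uncertain parameter.
n_ens = 100
state = np.column_stack([rng.normal(1.0, 0.5, n_ens),   # uncertain parameter
                         np.full(n_ens, 10.0)])         # known initial pressure
true_param, p_true = 1.5, 10.0
for _ in range(10):
    state[:, 1] -= 0.1 * state[:, 0]      # forecast step, member by member
    p_true -= 0.1 * true_param
    obs = p_true + rng.normal(0.0, 0.01)  # noisy pressure observation
    state = enkf_update(state, state[:, 1:2], np.array([obs]), np.array([1e-4]))
param_est = state[:, 0].mean()
```

The sequential structure is what makes each assimilation cost only one forward run per member; the statistical-consistency issue the paper addresses arises because the updated dynamic variable (here, pressure) is adjusted directly rather than re-simulated from time zero with the updated parameter.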

