Bridging the Gap Between Material Balance and Reservoir Simulation for History Matching and Probabilistic Forecasting Using Machine Learning

2021
Author(s): Nigel H. Goodwin

Abstract

Objectives/Scope

Methods for efficient probabilistic history matching and forecasting have been available for complex reservoir studies for nearly 20 years. These require a surprisingly small number of reservoir simulation runs (typically fewer than 200). Nowadays, the bottleneck for reservoir decision support is building and maintaining a reservoir simulation model. This paper describes an approach which does not require a reservoir simulation model, is data driven, and includes a physics model based on material balance. It can be useful where a full simulation model is not economically justified, or where rapid decisions need to be made.

Methods, Procedures, Process

Previous work has described the use of proxy models and Hamiltonian Markov chain Monte Carlo to produce valid probabilistic forecasts. To generate a data-driven model, we take historical measurements of rates and pressures at each well and apply multivariate time-series analysis to generate a set of differential-algebraic equations (DAE) which can be integrated over time using a fully implicit solver. We combine the time-series models with material balance equations, including a simple PVT and Z-factor model. The parameters are adjusted in a fully Bayesian manner to generate an ensemble of models and a probabilistic forecast. The use of a DAE distinguishes the approach from conventional time-series analysis, where an ARIMA or state-space model is used and is normally reliable only for short-term forecasting.

Results, Observations, Conclusions

We apply these techniques to the Volve reservoir model and obtain a good history match, while the effort of building a reservoir model has been removed. We demonstrate the feasibility of simple physics models and open up the possibility of combining physics models with machine learning models, so that the most appropriate approach can be used depending on resources and reservoir complexity. We have bridged the gap between pure machine learning models and full reservoir simulation.

Novel/Additive Information

The use of multivariate time-series analysis to generate a set of ordinary differential equations is novel. The extension of previously described probabilistic forecasting to a generalised model has many possible applications within and outside the oil and gas industry, and is not restricted to reservoir simulation.
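
To make the material balance coupling concrete, here is a minimal Python sketch of a single-tank gas material balance treated as a DAE: the differential part integrates cumulative production, and the algebraic part closes the p/Z relation at each step. The linear Z-factor model, the rate function, and all numerical values are illustrative assumptions, not the paper's formulation; the data-driven time-series rate term and the Bayesian ensemble layer are omitted, and the simple march below stands in for a fully implicit solver.

```python
# Minimal single-tank gas material balance as a DAE sketch.
# All names, models, and values are illustrative assumptions.
import numpy as np
from scipy.optimize import brentq

G = 5.0e10       # gas initially in place, sm3 (assumed)
P_INIT = 300.0   # initial reservoir pressure, bar (assumed)

def z_factor(p):
    """Toy linear Z-factor model; a real study would fit PVT data."""
    return 1.0 - 8.0e-4 * p

def rate(t):
    """Placeholder field rate in sm3/day; the paper derives this
    term from multivariate time series fitted to measured data."""
    return 2.0e6 * np.exp(-1.0e-4 * t)

def pressure_from_gp(gp):
    """Algebraic closure: solve p/Z(p) = (p_i/Z_i) * (1 - Gp/G)."""
    rhs = (P_INIT / z_factor(P_INIT)) * (1.0 - gp / G)
    return brentq(lambda p: p / z_factor(p) - rhs, 1.0, P_INIT)

# March in time: integrate dGp/dt = q(t), then close the algebraic
# equation at every step (a stand-in for a fully implicit DAE solver).
t, dt, gp = 0.0, 30.0, 0.0
for _ in range(120):
    t += dt
    gp += rate(t) * dt
    p = pressure_from_gp(gp)
print(f"after {t:.0f} days: Gp = {gp:.3e} sm3, p = {p:.1f} bar")
```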

2021
Vol 73 (07)
pp. 44-45
Author(s): Chris Carpenter

This article, written by JPT Technology Editor Chris Carpenter, contains highlights of paper SPE 201693, "Subsurface Analytics Case Study: Reservoir Simulation and Modeling of a Highly Complex Offshore Field in Malaysia Using Artificial Intelligence and Machine Learning," by Rahim Masoudi, SPE, Petronas; Shahab D. Mohaghegh, SPE, West Virginia University; and Daniel Yingling, Intelligent Solutions, et al., prepared for the 2020 SPE Annual Technical Conference and Exhibition, originally scheduled to be held in Denver, 5–7 October. The paper has not been peer reviewed.

Using commercial numerical reservoir simulators to build a full-field reservoir model while simultaneously history matching multiple dynamic variables for a highly complex offshore mature field in Malaysia had proved challenging. In the complete paper, the authors demonstrate how artificial intelligence (AI) and machine learning can be used to build a purely data-driven reservoir simulation model that successfully history matches all dynamic variables for wells in this field and subsequently can be used for production forecasting. This synopsis concentrates on the process used, while the complete paper provides the results of the fully automated history matching.

Subsurface Analytics

In the presented technique, which the authors call subsurface analytics, data-driven pattern-recognition technologies are used to embed the physics of fluid flow through porous media and to create a model by discovering the best, most appropriate relationships between all measured data in each reservoir. This is an alternative to starting with the construction of mathematical equations to model the physics of fluid flow through porous media, followed by modification of geological models in order to achieve a history match. The key characteristics of subsurface analytics are that no interpretations, assumptions, or complex initial geological models (and thus no upscaling) are involved. Furthermore, the main series of dynamic variables used to build the model is measured at the surface, while other major static, and sometimes even dynamic, characteristics are based on subsurface measurements, making this approach a combination of reservoir and wellbore-simulation models rather than merely a reservoir model. The history-matching process of subsurface analytics is completely automated.

Top-Down Modeling (TDM)

TDM is a data-driven reservoir modeling approach within subsurface analytics that uses AI and machine learning to develop full-field reservoir models based on measurements rather than on solutions of governing equations. TDM integrates all available field measurements into a full-field reservoir model and matches the historical production of all individual wells in a mature field with a single AI-based model. The model is validated through blind history matching, after which it can forecast the field's behavior on a well-by-well basis. Because TDM is a data-driven approach, quality assurance/quality control (QA/QC) of the input data is paramount before embarking on the modeling process, to ensure that the artificial neural network (ANN) is trained properly on a reliable data set. This includes understanding data availability and magnitude, analyzing well-by-well production performance trends, and identifying data anomalies.
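
As a rough illustration of the TDM idea (not the authors' implementation), the sketch below fits a single ANN to synthetic well measurements, applies a simple outlier filter as a stand-in for the QA/QC step, and validates on withheld data in the spirit of blind history matching. All feature names, data-generating functions, and network settings are assumptions.

```python
# Illustrative TDM-style sketch: one data-driven model mapping field
# measurements to well production, blind-tested on held-out data.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 2000
# Stand-ins for per-well measurements: choke size, tubing-head
# pressure, water cut, net pay, porosity (all synthetic).
X = rng.uniform(size=(n, 5))
y = 50*X[:, 0] - 30*X[:, 1] + 20*X[:, 3]*X[:, 4] + rng.normal(0, 1, n)

# QA/QC stand-in: discard gross outliers before training.
ok = np.abs(y - np.median(y)) < 5 * np.std(y)
X, y = X[ok], y[ok]

X_train, X_blind, y_train, y_blind = train_test_split(
    X, y, test_size=0.2, random_state=0)
scaler = StandardScaler().fit(X_train)
ann = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000,
                   random_state=0)
ann.fit(scaler.transform(X_train), y_train)
# "Blind history match": score on data withheld from training.
print("blind R^2:", ann.score(scaler.transform(X_blind), y_blind))
```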


2021
Author(s): Bjørn Egil Ludvigsen, Mohan Sharma

Abstract

Well performance calibration after history matching a reservoir simulation model ensures that the wells give realistic rates during the prediction phase. The calibration involves adjusting well model parameters to match observed production rates at specified backpressure(s). This process is usually so time-consuming that, with traditional approaches using one reservoir model with hundreds of high-productivity wells, calibration would take months. The application of uncertainty-centric workflows for reservoir modeling and history matching results in many acceptable matches for phase rates and flowing bottom-hole pressure (BHP). This makes well calibration even more challenging for an ensemble comprising a large number of simulation models, as the existing approaches are not scalable.

The productivity index (PI) integrates reservoir and well performance, and most of the pressure drop occurs within one to two grid blocks around the well, depending upon the model resolution. A workflow has been set up to fix the history-to-prediction transition by calibrating the PI of each well in a history-matched simulation model. The simulation PI can be modified by changing the permeability-thickness product (Kh) or skin, or by applying a PI multiplier as a correction. For a history-matched ensemble with a range in water cut and gas-oil ratio, the proposed workflow involves running flowing-gradient calculations for each well, from the observed THP and the simulated phase rates, to calculate a target BHP. A PI multiplier is then calculated for that well and model that shifts the simulated BHP to the target BHP, applied as a local update to reduce the extent of the jump.

An ensemble of history-matched models with a range in water cut and gas-oil ratio has a variation in required BHP unique to each case. With the well calibration performed correctly, the jump observed in rates while switching from history to prediction can be eliminated or significantly reduced. The prediction thus yields reliable rates if the wells are run on pressure control, and a reliable plateau if the wells are run on group control. This reduces the risk of under- or over-predicting ultimate hydrocarbon recovery from the field and the project's cash flow. It also allows running sensitivities to backpressure, tubing design, and other equipment constraints to optimize reservoir performance and facilities design. The proposed workflow, which dynamically couples reservoir simulation and well performance modeling, takes a few seconds to run per well, making it fit for purpose for a large ensemble of simulation models with a large number of wells.
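
A minimal sketch of the per-well calibration arithmetic described above, assuming a purely hydrostatic flowing-gradient estimate (a real workflow would use a wellbore-hydraulics/VLP model with friction and the simulated phase rates). All function names and values are illustrative, not from the paper.

```python
# Per-well PI multiplier sketch: shift simulated BHP to a target BHP
# derived from observed THP, at the same simulated rate.
def target_bhp(thp_bar, tvd_m, mixture_density_kgm3):
    """Flowing-gradient estimate of BHP from observed THP.
    Friction is ignored here; a VLP correlation would include it."""
    g = 9.81
    return thp_bar + mixture_density_kgm3 * g * tvd_m / 1.0e5

def pi_multiplier(p_res, bhp_sim, bhp_target):
    """Multiplier that moves the simulated BHP to the target BHP at
    the same simulated rate: PI_target / PI_sim, using
    PI = q / (p_res - BHP)."""
    return (p_res - bhp_sim) / (p_res - bhp_target)

# One well of one ensemble member (all values assumed):
p_res = 250.0     # grid-block pressure, bar
bhp_sim = 180.0   # BHP the simulator needed for the observed rate, bar
bhp_tgt = target_bhp(thp_bar=40.0, tvd_m=2500.0,
                     mixture_density_kgm3=700.0)
print(f"target BHP = {bhp_tgt:.1f} bar, "
      f"PI multiplier = {pi_multiplier(p_res, bhp_sim, bhp_tgt):.2f}")
```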


2021
Author(s): Mohamed Shams

Abstract

This paper presents a field application of the bee colony optimization algorithm in assisting the history match of a real reservoir simulation model. Bee colony optimization is a technique inspired by the natural optimization behavior that honeybees exhibit when searching for food. The way honeybees search for food sources in the vicinity of their nest inspired computer science researchers to apply the same principles to create optimization models and techniques. In this work, the bee colony optimization mechanism is used as the optimization algorithm in an assisted history matching (AHM) workflow applied to a simulation model of the WD-X field, on production since 2004. The resulting history-matched model is compared with that obtained using one of the most widely applied commercial AHM software tools. The results of this work indicate that using the bee colony algorithm as the optimization technique in the assisted history matching workflow provides a noticeable enhancement in both match quality and the time required to achieve a reasonable match.
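
For readers unfamiliar with the algorithm, the following is a simplified artificial-bee-colony loop minimizing a toy history-match misfit; in the actual workflow the objective would be evaluated by a reservoir simulation run. Colony size, trial limit, parameter bounds, and the quadratic misfit are illustrative choices, not the paper's settings, and the employed and onlooker phases are collapsed into one greedy pass.

```python
# Simplified artificial-bee-colony (ABC) optimization sketch.
import numpy as np

rng = np.random.default_rng(1)
dim, n_food, limit, n_iter = 3, 10, 20, 200
lo, hi = np.full(dim, 0.1), np.full(dim, 10.0)

def misfit(x):
    """Stand-in objective: distance of (perm multiplier, skin,
    aquifer strength) from a synthetic 'truth' case. A real AHM
    workflow would run the simulator and compare to history."""
    return np.sum((x - np.array([2.0, 4.0, 1.5]))**2)

foods = rng.uniform(lo, hi, (n_food, dim))   # candidate models
scores = np.array([misfit(f) for f in foods])
trials = np.zeros(n_food, dtype=int)

for _ in range(n_iter):
    for i in range(n_food):
        k = (i + 1 + rng.integers(n_food - 1)) % n_food  # partner != i
        j = rng.integers(dim)
        cand = foods[i].copy()
        cand[j] += rng.uniform(-1, 1) * (foods[i][j] - foods[k][j])
        cand = np.clip(cand, lo, hi)
        s = misfit(cand)
        if s < scores[i]:                    # greedy selection
            foods[i], scores[i], trials[i] = cand, s, 0
        else:
            trials[i] += 1
    worn = trials > limit                    # scouts abandon stale sources
    foods[worn] = rng.uniform(lo, hi, (int(worn.sum()), dim))
    scores[worn] = [misfit(f) for f in foods[worn]]
    trials[worn] = 0

best = scores.argmin()
print("best parameters:", foods[best], "misfit:", scores[best])
```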


2018
Vol 6 (3)
pp. T601-T611
Author(s): Juliana Maia Carvalho dos Santos, Alessandra Davolio, Denis Jose Schiozer, Colin MacBeth

Time-lapse (or 4D) seismic attributes are extensively used as inputs to history-matching workflows. However, this integration can bring problems if performed incorrectly. Some of the uncertainties in seismic acquisition, processing, and interpretation can be inadvertently incorporated into the reservoir simulation model, yielding an erroneous production forecast. Very often, the information provided by 4D seismic is noisy or ambiguous. For this reason, it is necessary to estimate the level of confidence in the data before transferring it to the simulation model. The methodology presented in this paper aims to diagnose which information from 4D seismic we are confident enough to include in the model. Two passes of seismic interpretation are proposed: the first to understand the character and quality of the seismic data, and the second to compare the simulation-to-seismic synthetic response with the observed seismic signal. The methodology is applied to the Norne field benchmark case, in which we find several examples of inconsistencies between the synthetic and real responses and evaluate whether these are caused by simulation model inaccuracy or by uncertainties in the observed seismic itself. After a careful qualitative and semiquantitative analysis, the confidence level of the interpretation is determined, and simulation model updates can be suggested according to the outcome of this analysis. The main contribution of this work is to introduce a diagnostic step that classifies the reliability of the seismic interpretation, considering the uncertainties inherent in these data. The results indicate that medium to high interpretation confidence can be achieved even for poorly repeated data.
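
One simple semiquantitative check in the spirit of the second interpretation pass might correlate the simulation-to-seismic synthetic map with the observed 4D map and bin the result into confidence classes. The synthetic data, the correlation metric, and the thresholds below are assumptions for illustration, not the paper's procedure.

```python
# Toy confidence classification for a synthetic-vs-observed 4D map.
import numpy as np

rng = np.random.default_rng(2)
synthetic = rng.normal(size=(50, 50))                 # modeled 4D map
observed = 0.6 * synthetic + 0.8 * rng.normal(size=(50, 50))  # noisy

r = np.corrcoef(synthetic.ravel(), observed.ravel())[0, 1]
if r > 0.7:        # thresholds are illustrative assumptions
    level = "high"
elif r > 0.4:
    level = "medium"
else:
    level = "low"
print(f"map correlation r = {r:.2f} -> {level} interpretation confidence")
```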


Fluids
2019
Vol 4 (3)
pp. 126
Author(s): Shohreh Amini, Shahab Mohaghegh

Reservoir simulation models are the major tools for studying fluid flow behavior in hydrocarbon reservoirs. These models are constructed from geological models, which are developed by integrating data from geology, geophysics, and petrophysics. As the complexity of a reservoir simulation model increases, so does the computation time; any comprehensive study that involves thousands of simulation runs therefore requires a very long period of time. Several efforts have been made to develop proxy models that can substitute for complex reservoir simulation models. These proxy models aim to generate the outputs of the numerical fluid-flow models in a very short period of time. This research is focused on developing a proxy fluid-flow model using artificial intelligence and machine learning techniques. In this work, the proxy model is developed for a real CO2 sequestration project in which the objective is to evaluate the dynamic reservoir parameters (pressure, saturation, and CO2 mole fraction) under various CO2 injection scenarios. The resulting data-driven model is able to generate pressure, saturation, and CO2 mole fraction throughout the reservoir with significantly less computational effort and in a considerably shorter period of time than the numerical reservoir simulation model.
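
A toy sketch of such a proxy (not the authors' model): a neural network learns the mapping from injection scenario, cell location, and time to the dynamic outputs, trained on samples that in the real workflow would come from numerical simulation runs. The synthetic data-generating functions, feature layout, and network settings are illustrative assumptions.

```python
# Proxy-model sketch: learn (injection rate, x, y, time) ->
# (pressure, gas saturation, CO2 mole fraction).
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(3)
n = 5000
X = rng.uniform(size=(n, 4))       # [inj rate, x, y, time], scaled 0-1
dist = np.hypot(X[:, 1] - 0.5, X[:, 2] - 0.5)   # distance from injector
# Outputs kept on comparable scales (pressure in hundreds of bar):
pressure = 2.0 + 0.8 * X[:, 0] * X[:, 3] - 0.5 * dist
saturation = np.clip(X[:, 0] * X[:, 3] - 0.4 * dist, 0.0, 1.0)
mole_frac = 0.9 * saturation
Y = np.column_stack([pressure, saturation, mole_frac])

# In the real study, each (X, Y) pair would come from a simulator run.
proxy = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=1500,
                     random_state=0).fit(X[:4000], Y[:4000])
# Once trained, the proxy evaluates a scenario in milliseconds rather
# than the runtime of a full numerical simulation.
print("held-out R^2:", proxy.score(X[4000:], Y[4000:]))
```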

