Estimation of Initial Hydrocarbon Saturation Applying Machine Learning Under Petrophysical Uncertainty

2021, pp. 1-16
Author(s): Pascale Neff, Dominik Steineder, Barbara Stummer, Torsten Clemens

Summary: The initial hydrocarbon saturation has a major effect on field-development planning and resource estimation. However, it rests on indirect measurements from spatially distributed wells, combined through saturation-height modeling with uncertain parameters. Because of the multitude of parameters, assisted-matching methods require trade-offs in the quality of the objective functions used for the various observed data. Applying machine learning (ML) in a Bayesian framework helps overcome these challenges. In the present study, the methodology is used to derive posterior parameter distributions for saturation-height modeling that honor the petrophysical uncertainty in a field. The results are used for dynamic-model initialization and will be applied for forecasting under uncertainty. To determine the initial hydrocarbon saturation of the dynamic numerical model, the saturation-height model (SHM) must be conditioned to the petrophysically interpreted logs. A total of 2,500 geological realizations were generated to cover the interpreted ranges of porosity, permeability, and saturation for 15 wells. For the SHM, 12 parameters and their ranges were introduced. Latin hypercube sampling was used to generate a training set for ML models using the random forest algorithm. The trained ML models were conditioned to the petrophysical log-derived saturation data. To ensure fieldwide consistency of the dynamic numerical models, only parameter combinations honoring the interpreted saturation range for all wells were selected. The presented method allows for consistent initialization and for rejection of parameters that do not fit the observed data. In our case study, the most significant observation concerns the posterior parameter-distribution ranges, which are narrowed down dramatically; for example, the free-water-level (FWL) range is reduced from 645–670 m subsea level (mSS) to 656–668 mSS.
Furthermore, the SHM parameters are shown to be independent; thus, the resulting posterior parameter ranges for the SHM can be used for conditioning the models to production data and for subsequent hydrocarbon-production forecasting. Additional observations can be made from the ML results, such as the correlation between wells; this allows for identifying groups of wells that behave similarly, favor the same parameter combinations, and potentially belong to the same compartment.
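The rejection-style conditioning described above can be sketched in a few lines. The following is a minimal illustration, not the study's actual workflow: it Latin-hypercube-samples a reduced set of four hypothetical SHM parameters (FWL, entry height, a pore-size exponent, and irreducible water saturation), evaluates a toy Brooks-Corey-style saturation-height function at a single assumed well depth, trains a random-forest proxy, and keeps only parameter combinations whose predicted saturation honors an assumed log-interpreted range:

```python
import numpy as np
from scipy.stats import qmc
from sklearn.ensemble import RandomForestRegressor

# 1) Latin hypercube sample over (hypothetical) SHM parameters:
#    FWL [mSS], entry height [m], pore-size exponent, irreducible Sw
bounds_lo = np.array([645.0, 0.5, 0.5, 0.05])
bounds_hi = np.array([670.0, 5.0, 3.0, 0.30])
sampler = qmc.LatinHypercube(d=4, seed=0)
params = qmc.scale(sampler.random(n=2500), bounds_lo, bounds_hi)

# 2) Toy Brooks-Corey-style saturation-height forward model at one well,
#    perforated at an assumed depth of 640 mSS (h = FWL - depth)
def shm_sw(p, depth=640.0):
    fwl, h_entry, lam, swirr = p.T
    h = np.maximum(fwl - depth, 1e-6)  # height above free-water level
    return swirr + (1.0 - swirr) * np.minimum(h_entry / h, 1.0) ** lam

sw = shm_sw(params)

# 3) Train a random-forest proxy of the forward model
proxy = RandomForestRegressor(n_estimators=100, random_state=0).fit(params, sw)

# 4) Condition: keep only combinations whose proxy-predicted saturation
#    honors the (assumed) log-interpreted saturation window
obs_lo, obs_hi = 0.15, 0.35
pred = proxy.predict(params)
posterior = params[(pred >= obs_lo) & (pred <= obs_hi)]
print(f"accepted {len(posterior)} of {len(params)} samples")
print(f"posterior FWL range: {posterior[:, 0].min():.1f}-{posterior[:, 0].max():.1f} mSS")
```

In the study itself, 12 parameters, 2,500 realizations, and 15 wells are involved, and a combination is kept only if it honors the interpreted range at every well; the narrowing of the posterior FWL range reported above arises from exactly this kind of rejection step.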

2020, Vol 52 (1), pp. 447-453
Author(s): I. Robertson

Abstract: The Erskine high-pressure–high-temperature gas condensate field was the first such field developed on the UK Continental Shelf. Since production started in 1997, the field has produced over 350 bcf of gas and 70 MMbbl of condensate. The reservoir pressure has depleted from an initial pressure of 960 bar (13,920 psi) down to 140–400 bar (2,030–5,800 psi), resulting in some compaction and sand production in some of the wells. Free-water production has led to the formation of wellbore scale, which has required interventions to remove. The reservoirs are sandstones of the Jurassic Puffin, Pentland and Heather formations. Estimates of hydrocarbons in place made using production and pressure data compare favourably with the initial estimates made during field-development planning, although the Pentland Formation volume is some 20% below the sanction estimate. Several major field outages have occurred, such as a condensate fire in 2010 and a blockage of the multiphase export pipeline in 2007. In addition, the field has experienced flow-assurance problems related to scale and wax deposition. A new pipeline section was installed in 2018 to bypass a full pipeline blockage caused by wax deposition.


2019
Author(s): Kasper Van Mens, Joran Lokkerbol, Richard Janssen, Robert de Lange, Bea Tiemens

BACKGROUND: It remains a challenge to predict which treatment will work for which patient in mental healthcare. OBJECTIVE: In this study we compare machine learning algorithms that predict, during treatment, which patients will not benefit from brief mental health treatment, and we present trade-offs that must be considered before an algorithm can be used in clinical practice. METHODS: Using an anonymized dataset containing routine outcome monitoring data from a mental healthcare organization in the Netherlands (n = 2,655), we applied three machine learning algorithms to predict treatment outcome. The algorithms were internally validated with cross-validation on a training sample (n = 1,860) and externally validated on an unseen test sample (n = 795). RESULTS: The performance of the three algorithms did not differ significantly on the test set. With a default classification cut-off at 0.5 predicted probability, the extreme gradient boosting algorithm showed the highest positive predictive value (PPV) of 0.71 (0.61–0.77), with a sensitivity of 0.35 (0.29–0.41) and an area under the curve of 0.78. A trade-off can be made between PPV and sensitivity by choosing different cut-off probabilities. With a cut-off at 0.63, the PPV increased to 0.87 and the sensitivity dropped to 0.17. With a cut-off at 0.38, the PPV decreased to 0.61 and the sensitivity increased to 0.57. CONCLUSIONS: Machine learning can be used to predict treatment outcomes based on routine monitoring data. This allows practitioners to choose their own trade-off between being selective and more certain versus inclusive and less certain.
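The cut-off trade-off described in the abstract is easy to reproduce on synthetic data. The sketch below (hypothetical features and labels generated by scikit-learn, not the study's routine outcome-monitoring data) trains a simple classifier on the same train/test split sizes and sweeps the classification cut-off, showing how raising it trades sensitivity for PPV:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import precision_score, recall_score

# Synthetic stand-in for the n = 2,655 dataset (all values illustrative)
X, y = make_classification(n_samples=2655, n_features=10, random_state=0)
X_train, y_train = X[:1860], y[:1860]  # internal training sample
X_test, y_test = X[1860:], y[1860:]    # unseen test sample

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
proba = clf.predict_proba(X_test)[:, 1]

# Sweep the cut-off: a higher cut-off predicts fewer positives,
# which raises PPV (precision) and lowers sensitivity (recall)
for cutoff in (0.38, 0.50, 0.63):
    pred = (proba >= cutoff).astype(int)
    print(f"cut-off {cutoff:.2f}: "
          f"PPV={precision_score(y_test, pred):.2f}, "
          f"sensitivity={recall_score(y_test, pred):.2f}")
```

The exact numbers depend on the data and model; the monotone fall in sensitivity as the cut-off rises is the general behaviour the abstract reports.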


2021, Vol 23 (1), pp. 32-41
Author(s): Pieter Delobelle, Paul Temple, Gilles Perrouin, Benoit Frénay, Patrick Heymans, ...

Machine learning is being integrated into a growing number of critical systems with far-reaching impacts on society. Because of this widespread use and the concerns it raises, unexpected behaviour and unfair decision processes are coming under increasing scrutiny. Individuals, as well as organisations, notice, test, and criticize unfair results to hold model designers and deployers accountable. We offer a framework that assists these groups in mitigating unfair representations stemming from the training datasets. Our framework relies on two inter-operating adversaries to improve fairness. First, a model is trained with the goal of preventing the guessing of protected attributes' values while limiting utility losses. This first step optimizes the model's parameters for fairness. Second, the framework leverages evasion attacks from adversarial machine learning to generate new examples that will be misclassified. These new examples are then used to retrain and improve the model of the first step. These two steps are applied iteratively until a significant improvement in fairness is obtained. We evaluated our framework on well-studied datasets from the fairness literature, including COMPAS, where it can surpass other approaches with respect to demographic parity, equality of opportunity, and the model's utility. We investigated the trade-offs between these targets in terms of model hyperparameters, illustrated the subtle difficulties of mitigating unfairness, and highlighted how our framework can assist model designers.
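The two fairness targets named above have direct empirical estimators. The following sketch (synthetic predictions, labels, and a synthetic binary protected attribute, not the paper's models or data) computes the demographic-parity gap and the equal-opportunity gap, i.e., the true-positive-rate gap, between two groups:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy predictions, labels, and a binary protected attribute (all synthetic)
n = 1000
protected = rng.integers(0, 2, n)  # group membership
y_true = rng.integers(0, 2, n)
y_pred = rng.integers(0, 2, n)

def demographic_parity_diff(y_pred, group):
    # |P(yhat=1 | g=0) - P(yhat=1 | g=1)|: gap in positive-prediction rates
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

def equal_opportunity_diff(y_true, y_pred, group):
    # |P(yhat=1 | y=1, g=0) - P(yhat=1 | y=1, g=1)|: true-positive-rate gap
    tpr = [y_pred[(group == g) & (y_true == 1)].mean() for g in (0, 1)]
    return abs(tpr[0] - tpr[1])

print("demographic parity gap:", demographic_parity_diff(y_pred, protected))
print("equal opportunity gap:", equal_opportunity_diff(y_true, y_pred, protected))
```

In the framework described above, metrics like these would be evaluated after each adversarial retraining round to decide whether the fairness improvement is sufficient to stop iterating.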


Energies, 2021, Vol 14 (4), pp. 1055
Author(s): Qian Sun, William Ampomah, Junyu You, Martha Cather, Robert Balch

Machine-learning technologies have demonstrated robust capabilities in solving many petroleum engineering problems. Their accurate predictions and fast computational speed make tractable a large volume of time-consuming engineering processes such as history matching and field-development optimization. The Southwest Regional Partnership on Carbon Sequestration (SWP) project requires rigorous history-matching and multi-objective optimization processes, which play to the strengths of machine-learning approaches. Although machine-learning proxy models are trained and validated before being applied to practical problems, their error margin inevitably introduces uncertainty into the results. In this paper, a hybrid numerical/machine-learning workflow for solving various optimization problems is presented. By coupling expert machine-learning proxies with a global optimizer, the workflow solves the history-matching and CO2 water-alternating-gas (WAG) design problems with low computational overhead. The history-matching work considers the heterogeneity of the multiphase relative-permeability characteristics, and the CO2-WAG injection design takes multiple techno-economic objective functions into account. This work trained an expert response surface, a support vector machine, and a multilayer neural network as proxy models to effectively learn the high-dimensional nonlinear data structure. The proposed workflow revisits the high-fidelity numerical simulator for validation purposes. The experience gained from this work provides valuable guidance for similar CO2 enhanced oil recovery (EOR) projects.
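Coupling a trained proxy with a global optimizer, as this workflow does, can be illustrated on a toy problem. In the sketch below, an analytic function stands in for the expensive simulator, a support vector regressor serves as the proxy, and SciPy's differential evolution searches the proxy; the final step revisits the "simulator" at the proxy optimum, mirroring the validation step the workflow suggests (the function, bounds, and hyperparameters are all assumptions):

```python
import numpy as np
from scipy.optimize import differential_evolution
from sklearn.svm import SVR

rng = np.random.default_rng(0)

# Hypothetical stand-in for an expensive simulator response, e.g. a
# techno-economic objective of two WAG design variables
def simulator(x):
    return np.sin(3 * x[..., 0]) * np.cos(2 * x[..., 1]) + 0.5 * x[..., 0]

# Train an SVR proxy on a set of "simulator runs" (the training set)
X_train = rng.uniform(-1, 1, size=(400, 2))
y_train = simulator(X_train)
proxy = SVR(C=10.0, gamma="scale").fit(X_train, y_train)

# Couple the proxy with a global optimizer: the optimizer queries the
# cheap proxy, never the expensive simulator
res = differential_evolution(
    lambda x: proxy.predict(x.reshape(1, -1))[0],
    bounds=[(-1, 1), (-1, 1)], seed=0, maxiter=50)

# Revisit the high-fidelity "simulator" at the proxy optimum for validation
print("proxy optimum:", res.x, "proxy value:", res.fun)
print("simulator value at optimum:", simulator(res.x))
```

If the proxy and simulator values disagree substantially at the optimum, the proxy's error margin is biasing the result, which is exactly why the workflow recommends the validation step.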


Author(s): Atheer Dheyauldeen, Omar Al-Fatlawi, Md Mofazzal Hossain

Abstract: The main role of infill drilling is either to add incremental reserves to the existing ones by intersecting newly undrained (virgin) regions or to accelerate production from currently depleted areas. Accelerating reserves by increasing drainage in tight formations can be beneficial considering the time value of money and the cost of additional wells. However, the maximum benefit is realized when infill wells produce mostly incremental recoveries (recoveries from virgin formations). Therefore, the prediction of incremental and accelerated recovery is crucial in field-development planning, as it helps optimize infill wells while assuring the long-term economic sustainability of the project. Several approaches are presented in the literature to determine incremental and accelerated recovery and areas for infill drilling. However, the majority of these methods require large volumes of expensive data and very time-consuming simulation studies. In this study, two qualitative techniques are proposed for the estimation of incremental and accelerated recovery based on readily available production data. In the first technique, accelerated and incremental recovery, and thus infill drilling, are predicted from the trend of cumulative production (Gp) versus the square root of time. This approach is most applicable to tight formations, given their long period of transient linear flow. The second technique is based on multi-well Blasingame type-curve analysis. This technique is best applied when the production of the parent wells reaches the boundary-dominated flow (BDF) region before the successive infill wells start producing. These techniques are important in field-development planning because the flow regime in tight formations changes gradually from transient flow (early times) to BDF (late times) as production continues.
Despite the different approaches and methods, the field case studies demonstrate that an accurate framework for strategic well planning, including prediction of the optimum well locations, is critical, especially for realizing the commercial benefit (i.e., increasing and accelerating reserves) of an infill drilling campaign. The proposed framework and the findings of this study also provide new insight into infill drilling campaigns, including the importance of better evaluating infill-drilling performance in tight formations, which ultimately supports informed decisions regarding future development plans.
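The first technique rests on the fact that, during transient linear flow, Gp grows linearly with the square root of time. A minimal sketch with synthetic data (the slope, noise level, units, and 36-month fitting window are assumptions): fit the early-time trend of Gp versus sqrt(t), then compare later production against the extrapolation; sustained production above the trend would indicate incremental rather than merely accelerated recovery:

```python
import numpy as np

# Synthetic monthly cumulative production for a tight well: transient
# linear flow implies Gp ~ m*sqrt(t) + b (all numbers illustrative)
t = np.arange(1, 61)  # months on production
rng = np.random.default_rng(0)
gp = 12.0 * np.sqrt(t) + rng.normal(0.0, 0.5, t.size)  # e.g. MMscf

# Fit the linear-flow trend on the early data (first 36 months)
sqrt_t = np.sqrt(t)
m, b = np.polyfit(sqrt_t[:36], gp[:36], 1)

# Extrapolate the trend and compare: production tracking the trend
# suggests acceleration of existing reserves, while a sustained excess
# above it suggests incremental recovery from an undrained region
expected = m * sqrt_t + b
excess = gp - expected
print(f"linear-flow slope m = {m:.2f} per sqrt(month)")
print(f"late-time mean deviation from trend: {excess[36:].mean():+.2f}")
```

Here the synthetic well follows the trend throughout, so the late-time deviation stays near zero; a new infill well contacting virgin rock would show a persistent positive deviation instead.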


2017, Vol 75 (1), pp. 30-42
Author(s): Louis Legendre, Richard B Rivkin, Nianzhi Jiao

Abstract: This “Food for Thought” article examines the potential uses of several novel scientific and technological developments, currently available or under development, to significantly advance or supplement existing experimental approaches to the study of water-column biogeochemical processes (WCB-processes). After examining the complementary roles of observation, experiments, and numerical models in studying WCB-processes, we focus on the main experimental approaches: free-water in situ experiments, and at-sea and on-land meso- and macrocosms. We identify some incompletely resolved aspects of marine WCB-processes and explore advanced experimental approaches that could reduce their uncertainties. We examine three such approaches: free-water experiments of lengthened duration using bioArgo floats and gliders, at-sea mesocosms deployed several hundred metres below the sea surface using new biogeochemical sensors, and 50-m-tall on-land macrocosms. These approaches could lead to significant progress in concepts related to marine WCB-processes.


2021, pp. 1-18
Author(s): Gisela Vanegas, John Nejedlik, Pascale Neff, Torsten Clemens

Summary: Forecasting production from hydrocarbon fields is challenging because of the large number of uncertain model parameters and the multitude of observed data that are measured. The large number of model parameters leads to uncertainty in the production forecast. Changing operating conditions [e.g., implementation of improved oil recovery or enhanced oil recovery (EOR)] results in model parameters becoming sensitive in the forecast that were not sensitive during the production history. Hence, simulation approaches need to be able to address uncertainty in model parameters as well as condition numerical models to a multitude of different observed data. Sampling from distributions of various geological and dynamic parameters allows for the generation of an ensemble of numerical models that can be falsified against the different observed data using principal-component analysis (PCA). If the numerical models are not falsified, machine-learning (ML) approaches can be used to generate a large set of parameter combinations that can be conditioned to the different observed data. The data conditioning is followed by a final step ensuring that parameter interactions are covered. The methodology was applied to a sandstone oil reservoir with more than 70 years of production history and dozens of wells. The resulting ensemble of numerical models is conditioned to all observed data. Furthermore, the posterior model-parameter distributions differ from the prior distributions only where the observed data are informative for the model parameters. Hence, changes in operating conditions can be forecast under uncertainty, which is essential when parameters that are nonsensitive in the history become sensitive in the forecast.
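One simple way to implement a PCA-based falsification check is sketched below on synthetic data (the ensemble, observation vectors, component count, and reconstruction-error threshold are all assumptions, not the paper's exact criterion): fit PCA to the ensemble of simulated responses and flag observed data that the ensemble's principal components cannot reconstruct at least as well as the worst ensemble member:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

# Hypothetical ensemble: each row is one numerical model's simulated
# response at the observation points (e.g. rates at matched dates)
ensemble = rng.normal(0.0, 1.0, size=(200, 50))
observed_far = rng.normal(5.0, 1.0, size=50)  # data the prior cannot explain

# Fit PCA to the ensemble and record how well it reconstructs each member
pca = PCA(n_components=5).fit(ensemble)
recon = pca.inverse_transform(pca.transform(ensemble))
threshold = np.linalg.norm(ensemble - recon, axis=1).max()

def prior_falsified(obs):
    # If the observed data cannot be reconstructed from the ensemble's
    # principal components any better than the worst ensemble member,
    # the prior ensemble is deemed falsified for these observations
    resid = obs - pca.inverse_transform(pca.transform(obs.reshape(1, -1)))[0]
    return bool(np.linalg.norm(resid) > threshold)

print("ensemble member falsifies prior:", prior_falsified(ensemble[0]))
print("far-off observations falsify prior:", prior_falsified(observed_far))
```

Only if the check passes (the prior is not falsified) would the workflow proceed to ML-based generation of parameter combinations and data conditioning.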


2021
Author(s): Subba Ramarao Rachapudi Venkata, Nagaraju Reddicharla, Shamma Saeed Alshehhi, Indra Utama, Saber Mubarak Al Nuimi, ...

Abstract: Mature hydrocarbon fields deteriorate continuously, and the selection of well interventions becomes a critical task in achieving higher business value. Time-consuming simulation models and a classical decision-making approach make it difficult to rapidly identify the best underperforming, potential rig and rig-less candidates. Therefore, the objective of this paper is to demonstrate an automated solution with data-driven machine learning (ML) and AI-assisted workflows that prioritize the intervention opportunities that can deliver a higher sustainable oil rate and profitability. The solution starts by establishing a customized database using inputs from various sources, including production and completion data, flat files, and simulation models. Data gathering and the technical and economic calculations were automated to eliminate repetitive, low-value tasks. The second layer of the solution comprises tailor-made workflows to analyze well performance, logs, and the output of simulation models (static reservoir model, well models) along with historical events. These workflows combine current best practices for the integrated assessment of subsurface opportunities through analytical computations with machine-learning-driven techniques for ranking the well-intervention opportunities, taking implementation complexity into consideration. The outcome of the automated process is a comprehensive list of future well-intervention candidates, such as well conversion to gas lift, water shutoff, stimulation, and nitrogen kick-off opportunities. The opportunity ranking is completed with an AI-assisted scoring system that takes input from technical, financial, and implementation-risk scores. In addition, intuitive dashboards were built and tailored with the involvement of the management and engineering departments to track the opportunity-maturation process.
The advisory system has been implemented and tested in a giant mature field with over 300 wells. The solution identified more techno-economically feasible opportunities within hours instead of weeks or months, with a reduced risk of failure, resulting in an improved economic success rate. The first set of opportunities is under implementation, with an expected gain of USD 2.5 million within the first year and recurring gains expected in subsequent years. The ranked opportunities are incorporated into the business plan, RMP plans, and the drilling and workover schedule in accordance with field-development targets. This advisory system helps maximize profitability and minimize CAPEX and OPEX, and it further increases the utilization of production-optimization models by 30%. The system is currently implemented in one ADNOC Onshore field and is expected to be scaled to other fields based on consistent value creation. A hybrid physics- and machine-learning-based approach led to the development of automated workflows that identify and rank inactive strings, candidates for conversion to gas lift, and underperforming wells, resulting in successful cost optimization and production gains.
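A scoring system that combines technical, financial, and implementation-risk inputs can be as simple as a weighted sum. The sketch below is purely illustrative; the well names, scores, and weights are invented, not the paper's calibrated values:

```python
# Hypothetical intervention candidates with technical, financial and
# implementation-risk scores on a common 0-1 scale (all values invented)
candidates = {
    "W-101 gas-lift conversion": (0.8, 0.9, 0.2),
    "W-204 water shutoff":       (0.6, 0.7, 0.4),
    "W-317 stimulation":         (0.9, 0.5, 0.6),
    "W-552 nitrogen kick-off":   (0.4, 0.6, 0.1),
}

# Weighted score: reward technical and financial merit, penalize
# implementation risk (weights are assumptions)
w_tech, w_fin, w_risk = 0.4, 0.4, 0.2

def score(tech, fin, risk):
    return w_tech * tech + w_fin * fin - w_risk * risk

# Rank opportunities from highest to lowest score
ranked = sorted(candidates.items(), key=lambda kv: score(*kv[1]), reverse=True)
for name, s in ranked:
    print(f"{score(*s):.2f}  {name}")
```

In the described system, the scores themselves come from the analytical and ML layers (well performance, economics, risk of failure), and the ranked list feeds the dashboards used to track opportunity maturation.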

