Geology-Driven EUR Forecasting in Unconventional Fields

2021 ◽  
Author(s):  
Cenk Temizel ◽  
Celal Hakan Canbaz ◽  
Hasanain Alsaheib ◽  
Kirill Yanidis ◽  
Karthik Balaji ◽  
...  

Abstract EUR (Estimated Ultimate Recovery) forecasting in unconventional fields is a difficult process because the physics of the production mechanism in such systems is hard to model or forecast. Machine learning (ML)-based EUR prediction is also challenging because of operational issues and the quality of historical production data. Geology-driven EUR forecasting, once established, offers EUR forecasts that are not affected by operational issues such as shut-ins. This study illustrates the overall methodology in intelligent fields with real-time data flow and model updates, enabling optimization of well placement in addition to EUR forecasting for individual wells. A synthetic but realistic model that captures the physics is used to generate input data for training the ML model, where spatially distributed geological parameters including, but not limited to, porosity, permeability, and saturation describe the production values and ultimately the EUR. The completion is held fixed while formation characteristics vary across the field, leading to location-dependent production performance and hence to well placement optimization based on EUR forecasts from the geological parameters. The algorithm not only predicts the EUR of an individual well but also selects optimum well locations. Because the training data include interfering wells, the model is capable of capturing the pattern of well interference. Even though a synthetic but realistic reservoir model is constructed to generate the data for training the ML model, in practice it is not easy to (1) obtain the input parameters needed to build a robust reservoir simulation model, or (2) understand and model the physics of fluid flow and production in unconventionals, which makes building real models a complex and time-consuming task.
Thus, data-driven approaches like this one help speed up reservoir management and development decisions with reasonable approximations relative to numerical models and solutions. The application of machine learning in intelligent fields is also explained, where the models are dynamically updated and trained with new data. Geology-driven EUR forecasting is relatively new in the industry. In this study, we extend it to optimize well placement in intelligent fields in unconventionals, going beyond existing studies in the literature.
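As a rough illustration of the mapping the abstract describes, the sketch below trains a regressor from spatially distributed geological parameters to EUR and ranks candidate locations for well placement. Everything here is an assumption for illustration: the field values are synthetic, the Random Forest choice and the EUR response formula are stand-ins, not the authors' actual model.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Hypothetical synthetic field: each location carries spatially
# distributed geological attributes (porosity, permeability, saturation).
n_locations = 500
porosity = rng.uniform(0.03, 0.12, n_locations)
permeability = rng.lognormal(mean=-1.0, sigma=0.5, size=n_locations)
saturation = rng.uniform(0.4, 0.8, n_locations)
X = np.column_stack([porosity, permeability, saturation])

# Assumed response standing in for simulator-generated EUR values.
eur = 1e5 * porosity * permeability * saturation + rng.normal(0, 50, n_locations)

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X, eur)

# Forecast EUR at new candidate locations and rank them for well placement.
candidates = np.column_stack([
    rng.uniform(0.03, 0.12, 50),
    rng.lognormal(-1.0, 0.5, 50),
    rng.uniform(0.4, 0.8, 50),
])
predicted_eur = model.predict(candidates)
best = int(np.argmax(predicted_eur))
```

In an intelligent-field setting, `model.fit` would be re-run as new production data arrive, which is the dynamic update the abstract mentions.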

Animals ◽  
2021 ◽  
Vol 11 (5) ◽  
pp. 1305
Author(s):  
Marco Bovo ◽  
Miki Agrusti ◽  
Stefano Benni ◽  
Daniele Torreggiani ◽  
Patrizia Tassinari

Precision Livestock Farming (PLF) relies on several technological approaches to acquire, in the most efficient way, precise and real-time data concerning the production and welfare of individual animals. In the dairy sector, PLF devices are being increasingly adopted, automatic milking systems (AMSs) are becoming increasingly widespread, and monitoring systems for animals and environmental conditions are becoming common tools in herd management. As a consequence, a great amount of daily recorded data on individual animals is available to farmers and could be used effectively to calibrate numerical models for predicting future animal production trends. Machine learning approaches in PLF are nowadays considered an extremely promising line of research for livestock farms, and their application in dairy cattle farming would increase the sustainability and efficiency of the sector. This study aims to define, train, and test a model developed through machine learning techniques, adopting a Random Forest algorithm, with the main goal of assessing the trend in the daily milk yield of a single cow in relation to environmental conditions. The model was calibrated and tested on data collected from 91 lactating cows of a dairy farm located in northern Italy, equipped with an AMS and thermo-hygrometric sensors, during the years 2016–2017. In the statistical model, which has seven predictor features, the daily milk yield is evaluated as a function of the position of the day in the lactation curve and the indoor barn conditions, expressed as the daily average temperature-humidity index (THI) on the same day and on each of the five previous days. In this way, extreme hot conditions inducing heat-stress effects can be considered in the model's yield predictions.
The average relative prediction error of the milk yield of each cow is about 18% of daily production, and only 2% of the total milk production.
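The seven-feature setup (day in milk plus six THI values) can be sketched as below. The data, the Wood-type lactation curve, and the heat-stress penalty above THI 72 are illustrative assumptions standing in for the 91-cow dataset, not the authors' calibrated model.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)

# Hypothetical cow-days: position in the lactation curve plus the barn
# THI on the same day and on each of the five previous days.
n_samples = 1000
day_in_milk = rng.integers(1, 305, n_samples)
thi = rng.uniform(50, 85, (n_samples, 6))    # THI for day d, d-1, ..., d-5
X = np.column_stack([day_in_milk, thi])      # the seven predictor features

# Assumed yield: a lactation-curve shape minus a penalty when the
# average THI over the window exceeds a heat-stress threshold of 72.
curve = 30 * (day_in_milk / 50.0) ** 0.2 * np.exp(-0.003 * day_in_milk)
heat_penalty = 0.3 * np.clip(thi - 72, 0, None).mean(axis=1)
milk_yield = curve - heat_penalty + rng.normal(0, 1.5, n_samples)

model = RandomForestRegressor(n_estimators=100, random_state=1)
model.fit(X, milk_yield)
pred = model.predict(X[:5])
```

The lagged THI columns are what let the model capture delayed heat-stress effects on yield.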


2021 ◽  
Author(s):  
Hanane Zermane ◽  
Abbes Drardja

Abstract Strengthening production plants and process control functions contributes to a global improvement of manufacturing systems because of their cross-functional characteristics in industry. Companies have established various innovative and operational strategies to remain competitive and increase their value. Machine Learning (ML) techniques have become an enticing option for addressing industrial issues in the current manufacturing sector since the emergence of Industry 4.0 and the extensive integration of paradigms such as big data, cloud computing, high computational power, and enormous storage capacity. Implementing a system that can identify faults early, to avoid critical situations on the production line and in its environment, is crucial. One powerful machine learning algorithm for this purpose is Random Forest (RF). This ensemble learning algorithm is applied to fault diagnosis, classification of real-time SCADA data, and prediction of the state of the production line. Random Forest proved to be the better classifier, with 95% accuracy; by comparison, the SVM model reached 94.18%, the K-NN model about 93.83%, logistic regression 80.25%, and the decision tree model about 83.73%. The excellent experimental results of the Random Forest model show the merits of this implementation for production performance, ensuring predictive maintenance and avoiding wasted energy.
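A comparison of this kind can be reproduced in outline as follows; the synthetic dataset below merely stands in for labelled SCADA process readings, and none of the accuracies it produces correspond to the figures reported in the abstract.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Synthetic stand-in for labelled SCADA readings (fault vs. normal state).
X, y = make_classification(n_samples=2000, n_features=12, n_informative=8,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Fit each candidate classifier and score it on the held-out test split.
models = {
    "random_forest": RandomForestClassifier(n_estimators=100, random_state=0),
    "svm": SVC(),
    "knn": KNeighborsClassifier(),
}
scores = {name: accuracy_score(y_te, m.fit(X_tr, y_tr).predict(X_te))
          for name, m in models.items()}
```

Ranking the `scores` dictionary reproduces the model-selection step; in a deployment the winning classifier would then score incoming SCADA records in real time.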


2020 ◽  
Vol 7 (Supplement_1) ◽  
pp. S162-S163
Author(s):  
Guillermo Rodriguez-Nava ◽  
Daniela Patricia Trelles-Garcia ◽  
Maria Adriana Yanez-Bello ◽  
Chul Won Chung ◽  
Sana Chaudry ◽  
...  

Abstract Background As the ongoing COVID-19 pandemic develops, there is a need for prediction rules to guide clinical decisions. Previous reports have identified risk factors using statistical inference models. The primary goal of these models is to characterize the relationship between variables and outcomes, not to make predictions. In contrast, the primary purpose of machine learning is to obtain a model that can make repeatable predictions. The objective of this study is to develop decision rules tailored to our patient population to predict ICU admission and death in patients with COVID-19. Methods We used a de-identified dataset of hospitalized adults with COVID-19 admitted to our community hospital between March 2020 and June 2020. We used a Random Forest algorithm to build the prediction models for ICU admission and death. Random Forest is one of the most powerful machine learning algorithms; it leverages the power of multiple randomly created decision trees for making decisions. Results 313 patients were included; 237 patients were used to train each model, 26 were used for testing, and 50 for validation. A total of 16 variables, selected according to their availability in the Emergency Department, were fit into the models. For the survival model, the combination of age >57 years, the presence of altered mental status, procalcitonin ≥3.0 ng/mL, a respiratory rate >22, and a blood urea nitrogen >32 mg/dL resulted in a decision rule with an accuracy of 98.7% in training, 73.1% in testing, and 70% in validation (Table 1, Figure 1). For the ICU admission model, the combination of age <82 years, a systolic blood pressure ≤94 mm Hg, an oxygen saturation ≤93%, a lactate dehydrogenase >591 IU/L, and a lactic acid >1.5 mmol/L resulted in a decision rule with an accuracy of 99.6% in training, 80.8% in testing, and 82% in validation (Table 2, Figure 2). Table 1.
Measures of Performance in Predicting Inpatient Mortality Conclusion We created decision rules using machine learning to predict ICU admission or death in patients with COVID-19. Although there are variables previously described with statistical inference, these decision rules are customized to our patient population; furthermore, we can continue to train the models fitting more data with new patients to create even more accurate prediction rules. Figure 1. Receiver Operating Characteristic (ROC) Curve for Inpatient Mortality Table 2. Measures of Performance in Predicting Intensive Care Unit Admission Figure 2. Receiver Operating Characteristic (ROC) Curve for Intensive Care Unit Admission Disclosures All Authors: No reported disclosures
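The survival rule's thresholds can be re-encoded directly as code. The sketch below assumes the criteria combine conjunctively, which is an interpretation of "the combination of" in the abstract; it is an illustrative re-encoding, not the validated clinical tool, and must not be used for care decisions.

```python
def mortality_rule(age, altered_mental_status, procalcitonin, resp_rate, bun):
    """High-risk flag using the thresholds reported for the survival model.

    Assumes all criteria must be met (conjunctive rule); units as in the
    abstract: procalcitonin in ng/mL, BUN in mg/dL.
    """
    return (age > 57
            and altered_mental_status
            and procalcitonin >= 3.0
            and resp_rate > 22
            and bun > 32)

# Example patient meeting every criterion.
print(mortality_rule(70, True, 4.1, 26, 40))  # → True
```

The ICU admission rule (age <82, SBP ≤94 mm Hg, SpO2 ≤93%, LDH >591 IU/L, lactate >1.5 mmol/L) could be encoded the same way.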


Author(s):  
Atheer Alahmed ◽  
Amal Alrasheedi ◽  
Maha Alharbi ◽  
Norah Alrebdi ◽  
Marwan Aleasa ◽  
...  

2021 ◽  
pp. 1-18
Author(s):  
Gisela Vanegas ◽  
John Nejedlik ◽  
Pascale Neff ◽  
Torsten Clemens

Summary Forecasting production from hydrocarbon fields is challenging because of the large number of uncertain model parameters and the multitude of observed data that are measured. The large number of model parameters leads to uncertainty in the production forecast from hydrocarbon fields. Changing operating conditions [e.g., implementation of improved oil recovery or enhanced oil recovery (EOR)] results in model parameters becoming sensitive in the forecast that were not sensitive during the production history. Hence, simulation approaches need to be able to address uncertainty in model parameters as well as conditioning numerical models to a multitude of different observed data. Sampling from distributions of various geological and dynamic parameters allows for the generation of an ensemble of numerical models that could be falsified using principal-component analysis (PCA) for different observed data. If the numerical models are not falsified, machine-learning (ML) approaches can be used to generate a large set of parameter combinations that can be conditioned to the different observed data. The data conditioning is followed by a final step ensuring that parameter interactions are covered. The methodology was applied to a sandstone oil reservoir with more than 70 years of production history containing dozens of wells. The resulting ensemble of numerical models is conditioned to all observed data. Furthermore, the resulting posterior-model parameter distributions are only modified from the prior-model parameter distributions if the observed data are informative for the model parameters. Hence, changes in operating conditions can be forecast under uncertainty, which is essential if nonsensitive parameters in the history are sensitive in the forecast.
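The falsification step can be pictured as follows: project each ensemble member's simulated observations into a low-dimensional PCA space and discard members that fall far from the observed data. The ensemble, the distance measure, and the cut-off below are all illustrative assumptions, not the paper's workflow details.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(2)

# Hypothetical ensemble: each row is one numerical model's simulated
# production history (e.g. rates at 100 time steps).
n_models, n_obs = 200, 100
ensemble = rng.normal(0, 1, (n_models, n_obs)).cumsum(axis=1)
# Synthetic "observed history", close to member 0 by construction.
observed = ensemble[0] + rng.normal(0, 0.1, n_obs)

# Project the ensemble and the observed data into PCA space.
pca = PCA(n_components=3).fit(ensemble)
scores = pca.transform(ensemble)
obs_score = pca.transform(observed[None, :])[0]

# Falsify members whose PCA scores lie far from the observed data,
# scaled by the ensemble spread (2.0 is an illustrative cut-off).
spread = scores.std(axis=0)
dist = np.abs((scores - obs_score) / spread).max(axis=1)
kept = dist < 2.0
```

The surviving members (`kept`) would then feed the ML step that generates parameter combinations conditioned to the different observed data.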


2021 ◽  
Author(s):  
Hamid Pourpak ◽  
Samuel Taubert ◽  
Marios Theodorakopoulos ◽  
Arnaud Lefebvre-Prudencio ◽  
Chay Pointer ◽  
...  

Abstract The Diyab play is an emerging unconventional play in the Middle East. To date, reservoir characterization assessments have proved adequate productivity of the play in the United Arab Emirates (UAE). In this paper, an advanced simulation and modeling workflow is presented, applied to selected wells in an appraisal area by integrating geological, geomechanical, and hydraulic fracturing data. The results will be used to optimize future well landing points, well spacing, and completion designs, enhancing the Stimulated Rock Volume (SRV) and the consequent production. A 3D static model was built by propagating across the appraisal area all subsurface static properties from core-calibrated petrophysical and geomechanical logs originating from vertical pilot wells. In addition, a Discrete Fracture Network (DFN) derived from numerous image logs was imported into the model. Afterwards, completion data from one multistage hydraulically fractured horizontal well were integrated into the sector model. Simulations of hydraulic fracturing were performed and the sector model was calibrated to the real hydraulic fracturing data. Different scenarios for the fracture height were tested, considering uncertainties related to the fracture barriers. This allowed a better understanding of fracture propagation and SRV creation in the reservoir at the main target. In the last step, production resulting from the SRV was simulated and calibrated to the field data. Finally, the calibrated parameters were applied to the newly drilled nearby horizontal wells in the same area, which were hydraulically fractured with different completion designs, and the simulated SRVs of the new wells were compared with the one calculated for the previous well.
Applying a fully integrated geology, geomechanics, completion, and production workflow has helped us understand the impact of geology, natural fractures, rock mechanical properties, and stress regimes on the SRV geometry for the unconventional Diyab play. This work also highlights the importance of data acquisition, reservoir characterization, and the SRV simulation calibration process. The fully integrated workflow will allow an optimized completion strategy, well landing, and spacing for future horizontal wells. This multidisciplinary simulation workflow, applied to the Diyab unconventional play in onshore UAE, illustrated the most important parameters impacting SRV creation and production in the Diyab formation for the studied area. Multiple simulation scenarios and calibration runs showed how sensitive the SRV can be to different parameters, and how well placement and fracture jobs can be improved to enhance SRV creation and, ultimately, production performance.


2021 ◽  
Author(s):  
Nagaraju Reddicharla ◽  
Subba Ramarao Rachapudi ◽  
Indra Utama ◽  
Furqan Ahmed Khan ◽  
Prabhker Reddy Vanam ◽  
...  

Abstract Well testing is one of the vital processes in reservoir performance monitoring. As a field matures and its well stock grows, testing becomes a tedious job in terms of resources (MPFM and test separators), and this affects production quota delivery. In addition, test data validation and approval follow a business process that can take up to 10 days before a well test is accepted or rejected. Almost 10,000 well tests were conducted, and around 10 to 15% of them were rejected statistically per year. The objective of this paper is to develop a methodology to reduce well test rejections and to raise a timely flag for operator intervention to recommence the well test. The case study was applied in a mature field, producing for 40 years, for which a good volume of historical well test data is available. This paper discusses the development of a data-driven well test data analyzer and optimizer supported by artificial intelligence (AI) for wells tested using MPFM, in a two-stage approach. The motivating idea is to ingest historical and real-time data together with the well model performance curve, and to score the quality of the well test data so that the operator is flagged in real time. The ML prediction results help testing operations and can cut the test acceptance turnaround drastically, from 10 days to hours. In the second layer, an unsupervised model built on historical data helps identify the parameters that drive well test rejection, for example test duration, choke size, and GOR. The outcome of the modeling will be incorporated into updates of the well test procedure and testing philosophy. This approach is under evaluation in one of the assets of ADNOC Onshore. The results are expected to reduce well test rejections by at least 5%, further optimizing the resources required and improving the back-allocation process.
Furthermore, real-time flagging of test quality will help reduce the validation cycle from 10 days to hours, improving the well testing cycle process. The methodology improves integrated reservoir management compliance with well testing requirements in assets where resources are limited, and it is envisioned to be integrated with a full-field digital oilfield implementation. This is a novel application of machine learning and artificial intelligence to well testing: it maximizes the utilization of real-time data to create an advisory system that improves test data quality monitoring and supports timely decision-making to reduce well test rejections.
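The two-stage approach can be outlined as below: a supervised classifier flags likely-invalid tests in real time, and an unsupervised model groups rejected tests to surface the parameters behind rejection. The features, labelling rule, and algorithm choices are illustrative assumptions, not the asset's actual system.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)

# Hypothetical historical MPFM well tests: duration (h), choke size, GOR.
n_tests = 1000
X = np.column_stack([
    rng.uniform(2, 24, n_tests),      # test duration
    rng.uniform(20, 64, n_tests),     # choke size
    rng.uniform(200, 1200, n_tests),  # gas-oil ratio
])
# Assumed labelling: short tests are more likely rejected, plus some
# random rejections, mimicking a 10-15% historical rejection rate.
rejected = (X[:, 0] < 4) | (rng.random(n_tests) < 0.05)

# Stage 1: supervised model flags a likely-invalid test in real time.
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, rejected)
flag = clf.predict(X[:1])[0]

# Stage 2: cluster the rejected tests to surface the recurring
# parameter patterns (duration, choke, GOR) behind rejections.
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X[rejected])
```

In operation, Stage 1 would score each live test so the operator can recommence it immediately instead of waiting out the validation cycle.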


Author(s):  
Ahmed Imteaj ◽  
M. Hadi Amini

Federated Learning (FL) is a recently introduced distributed machine learning technique that allows available network clients to perform model training at the edge rather than sharing their data with a centralized server. Unlike conventional distributed machine learning approaches, the hallmark feature of FL is that local computation and model generation happen on the client side, ultimately protecting sensitive information. Most existing FL approaches assume that each FL client has sufficient computational resources and can accomplish a given task without facing any resource-related issues. However, in a heterogeneous Internet of Things (IoT) environment, a major portion of the FL clients may face low resource availability (e.g., lower computational power, limited bandwidth, and battery life). Consequently, resource-constrained FL clients may respond very slowly or may be unable to execute the expected number of local iterations. Further, any FL client can inject an inappropriate model during a training phase, which can prolong convergence time and waste the resources of all network clients. In this paper, we propose a novel tri-layer FL scheme, the Federated Proximal, Activity and Resource-Aware Lightweight model (FedPARL), that reduces model size by performing sample-based pruning, avoids misbehaved clients by examining their trust scores, and allows partial amounts of work based on clients' resource availability. The pruning mechanism is particularly useful when dealing with resource-constrained FL-based IoT (FL-IoT) clients: the lightweight training model consumes fewer resources to reach a target convergence. We evaluate each interested client's resource availability before assigning a task, monitor their activities, and update their trust scores based on their previous performance.
To tackle system and statistical heterogeneity, we adapt a re-parameterization and generalization of the current state-of-the-art Federated Averaging (FedAvg) algorithm. This modification of FedAvg allows clients to perform variable or partial amounts of work in light of their resource constraints. We demonstrate that simultaneously coupling pruning, resource and activity awareness, and the re-parameterization of FedAvg leads to more robust convergence of FL in IoT environments.
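The server-side combination of trust filtering and partial work can be sketched as follows. The weighting-by-completed-steps scheme and the function names are illustrative assumptions in the spirit of the abstract, not the exact FedPARL procedure.

```python
import numpy as np

def aggregate_partial(global_w, client_updates, trust_scores, min_trust=0.5):
    """FedAvg-style aggregation tolerating variable amounts of local work.

    client_updates: list of (weights, n_local_steps) pairs. Clients that
    completed fewer local steps still contribute, weighted by the work
    they actually did; low-trust clients are excluded entirely.
    """
    kept = [(w, steps) for (w, steps), t in zip(client_updates, trust_scores)
            if t >= min_trust]
    if not kept:
        return global_w  # no trustworthy updates this round
    total = sum(steps for _, steps in kept)
    return sum(w * (steps / total) for w, steps in kept)

global_w = np.zeros(4)
updates = [
    (np.ones(4), 10),        # full-work client
    (2 * np.ones(4), 5),     # resource-constrained client, half the steps
    (np.full(4, 9.0), 10),   # misbehaving client (outlier update)
]
trust = [0.9, 0.8, 0.2]      # third client falls below the trust cut-off
new_w = aggregate_partial(global_w, updates, trust)
```

Here the outlier is dropped by its trust score, and the two honest clients are blended in proportion to their completed local steps.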


Author(s):  
Alban Farchi ◽  
Patrick Laloyaux ◽  
Massimo Bonavita ◽  
Marc Bocquet

<p>Recent developments in machine learning (ML) have demonstrated impressive skills in reproducing complex spatiotemporal processes. However, contrary to data assimilation (DA), the underlying assumption behind ML methods is that the system is fully observed and without noise, which is rarely the case in numerical weather prediction. To circumvent this issue, it is possible to embed the ML problem into a DA formalism characterised by a cost function similar to that of the weak-constraint 4D-Var (Bocquet et al., 2019; Bocquet et al., 2020). In practice, ML and DA are combined to solve the problem: DA is used to estimate the state of the system while ML is used to estimate the full model. </p><p>In realistic systems, the model dynamics can be very complex and it may not be possible to reconstruct them from scratch. An alternative is to learn the model error of an already existing model using the same approach combining DA and ML. In this presentation, we test the feasibility of this method using a quasi-geostrophic (QG) model. After a brief description of the QG model, we introduce a realistic model error to be learnt. We then assess the potential of ML methods to reconstruct this model error, first with perfect (full and noiseless) observations and then with sparse and noisy observations. We show in either case to what extent the trained ML models correct the mid-term forecasts. Finally, we show how the trained ML models can be used in a DA system and to what extent they correct the analysis.</p><p>Bocquet, M., Brajard, J., Carrassi, A., and Bertino, L.: Data assimilation as a learning tool to infer ordinary differential equation representations of dynamical models, Nonlin. Processes Geophys., 26, 143–162, 2019</p><p>Bocquet, M., Brajard, J., Carrassi, A., and Bertino, L.: Bayesian inference of chaotic dynamics by merging data assimilation, machine learning and expectation-maximization, Foundations of Data Science, 2 (1), 55-80, 2020</p><p>Farchi, A., Laloyaux, P., Bonavita, M., and Bocquet, M.: Using machine learning to correct model error in data assimilation and forecast applications, arxiv:2010.12605, submitted. </p>
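One way to picture the "learn the model error of an existing model" idea is the sketch below: analysis increments from a DA cycle are regressed on the state to learn a correction to an imperfect forecast model. The damping-error model, the tiny state size, and the linear Ridge surrogate (in place of a neural network) are all illustrative assumptions, not the QG setup of the presentation.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(4)

# Hypothetical DA output: x_a are analysis states, and the increment
# x_a(t+1) - M(x_a(t)) approximates the model error tendency.
n_cycles, n_state = 500, 8

def imperfect_model(x):
    return 0.95 * x  # stand-in forecast model with a damping error

x_a = rng.normal(0, 1, (n_cycles, n_state))
true_error = 0.05 * x_a                    # error the ML step should recover
x_next = imperfect_model(x_a) + true_error
increments = x_next - imperfect_model(x_a)

# ML step: regress the increments on the state to learn a model-error
# correction, then apply it on top of the imperfect model's forecast.
ml_correction = Ridge(alpha=1e-6).fit(x_a, increments)
corrected_forecast = imperfect_model(x_a) + ml_correction.predict(x_a)
```

With sparse and noisy observations, `x_a` would itself come from a DA analysis rather than being known exactly, which is the harder case the presentation examines.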

