Subsurface Back Allocation: Calculating Production and Injection Allocation by Layer in a Multilayered Waterflood Using a Combination of Machine Learning and Reservoir Physics

2021 ◽  
Author(s):  
Javad Rafiee ◽  
Carlos Mario Calad Serrano ◽  
Pallav Sarma ◽  
Sebastian Plotno ◽  
Fernando Gutierrez

Abstract Allocation of injection and production by layer is required for several production and reservoir engineering workflows, including reserves estimation, water injection conformance, and identification of workover and infill drilling candidates. In cases of commingled production, the allocation to layers is unknown; running production logging tools is expensive and not always possible. Current industry practice relies on simplified approaches such as K*H-based allocation, which provides a static and inaccurate approximation of the allocation factors; this manual approach requires trial and error and can take several weeks in complex fields. This paper presents a novel technique to solve this problem using a combination of reservoir physics and machine learning. The methodology consists of four stages:

1. Data Entry: production at the well level (commingled), injection at the layer level, and injection patterns or a connectivity map (optional).
2. Gross Match: to match gross production for each well, the tool solves for time-varying layer-level injection allocation factors using a total material balance equation across all wells (see the sketch after this abstract).
3. Phase Match: using the allocation factors from the previous step, the tool automatically tunes various petrophysical parameters (e.g., porosity, relative permeability) in the physics model for each injector-producer pair across all connected layers to match the oil and water production in each producer. An ensemble of several models can be run simultaneously to account for the probabilistic nature of the problem.
4. Output: steps 2 and 3 can be performed at the pattern level for all connected patterns or for the whole field.

The application of the technology in a complex field with 80+ layers in Southern Argentina is demonstrated as a case study of the benefits of adopting the technology.
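The abstract does not spell out the gross-match formulation, so the following is a minimal sketch of how time-varying injection allocation factors could be fitted to commingled gross production with a bounded least-squares material balance. It assumes a simple voidage-replacement balance, a known producer-layer connectivity matrix, and illustrative names throughout; the paper's actual physics model is richer than this.

```python
import numpy as np
from scipy.optimize import lsq_linear

def fit_allocation_factors(inj_by_layer, gross_prod, connectivity):
    """
    Illustrative gross-match step (names and formulation are assumptions,
    not the paper's implementation).

    inj_by_layer : (n_layers,)  injection rate per layer at one timestep
    gross_prod   : (n_wells,)   commingled gross production per producer
    connectivity : (n_wells, n_layers) 1 where a producer is connected to a
                   layer, 0 otherwise

    Solves min ||A f - q|| with 0 <= f <= 1, where A[w, l] = connectivity *
    injection, i.e. a simple voidage-replacement balance with no
    compressibility or aquifer term.
    """
    A = connectivity * inj_by_layer[None, :]       # layer contribution to each well
    res = lsq_linear(A, gross_prod, bounds=(0.0, 1.0))
    return res.x                                   # allocation factors at this timestep

# Tiny synthetic example: 2 producers, 3 layers
inj = np.array([800.0, 500.0, 300.0])              # injection per layer
conn = np.array([[1, 1, 0],                        # producer 1 sees layers 1-2
                 [0, 1, 1]])                       # producer 2 sees layers 2-3
gross = np.array([600.0, 450.0])                   # observed commingled production
print(np.round(fit_allocation_factors(inj, gross, conn), 3))
```

Repeating the fit per timestep gives the time-varying factors mentioned in the abstract; the phase-match stage would then tune petrophysical parameters on top of these factors.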

2021 ◽  
Author(s):  
Vu-Linh Nguyen ◽  
Mohammad Hossein Shaker ◽  
Eyke Hüllermeier

Abstract Various strategies for active learning have been proposed in the machine learning literature. In uncertainty sampling, which is among the most popular approaches, the active learner sequentially queries the labels of those instances for which its current prediction is maximally uncertain. The predictions, as well as the measures used to quantify the degree of uncertainty, such as entropy, are traditionally of a probabilistic nature. Yet, alternative approaches to capturing uncertainty in machine learning, along with corresponding uncertainty measures, have been proposed in recent years. In particular, some of these measures seek to distinguish different sources of uncertainty and to separate different types, such as the reducible (epistemic) and the irreducible (aleatoric) part of the total uncertainty in a prediction. The goal of this paper is to elaborate on the usefulness of such measures for uncertainty sampling and to compare their performance in active learning. To this end, we instantiate uncertainty sampling with different measures, analyze the properties of the resulting sampling strategies, and compare them in an experimental study.
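The abstract does not list the concrete measures, so the sketch below instantiates uncertainty sampling with the common entropy-based decomposition of total uncertainty into aleatoric and epistemic parts over an ensemble (here a scikit-learn random forest stands in for the learner). This is one possible instantiation under those assumptions, not necessarily the measures compared in the paper; all names are illustrative.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

def uncertainty_scores(ensemble_probs):
    """
    ensemble_probs : (n_members, n_samples, n_classes) class probabilities.
    Entropy-based decomposition: total = entropy of the mean prediction,
    aleatoric = mean member entropy, epistemic = total - aleatoric.
    """
    eps = 1e-12
    mean_p = ensemble_probs.mean(axis=0)
    total = -(mean_p * np.log(mean_p + eps)).sum(axis=1)
    member_ent = -(ensemble_probs * np.log(ensemble_probs + eps)).sum(axis=2)
    aleatoric = member_ent.mean(axis=0)
    epistemic = total - aleatoric
    return total, aleatoric, epistemic

def query_index(forest, X_pool, measure="epistemic"):
    """Uncertainty sampling: pick the pool instance maximising the chosen measure."""
    probs = np.stack([tree.predict_proba(X_pool) for tree in forest.estimators_])
    total, aleatoric, epistemic = uncertainty_scores(probs)
    scores = {"total": total, "aleatoric": aleatoric, "epistemic": epistemic}[measure]
    return int(np.argmax(scores))

# Toy usage: train on a small labelled set, query from the remaining pool
X, y = make_classification(n_samples=300, n_features=8, random_state=0)
forest = RandomForestClassifier(n_estimators=50, random_state=0).fit(X[:50], y[:50])
print("next query:", query_index(forest, X[50:], measure="epistemic"))
```

Swapping `measure` between "total", "aleatoric", and "epistemic" reproduces the kind of comparison the paper describes, albeit with this particular entropy-based family of measures.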


i-com ◽  
2021 ◽  
Vol 20 (1) ◽  
pp. 19-32
Author(s):  
Daniel Buschek ◽  
Charlotte Anlauff ◽  
Florian Lachner

Abstract This paper reflects on a case study of a user-centred concept development process for a Machine Learning (ML) based design tool, conducted at an industry partner. The resulting concept uses ML to match graphical user interface elements in sketches on paper to their digital counterparts in order to create consistent wireframes. A user study (N=20) with a working prototype shows that designers prefer this concept to the previous manual procedure. Reflecting on our process and findings, we discuss lessons learned for developing ML tools that respect practitioners' needs and practices.
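The abstract does not describe the matching model itself; as a loose illustration, the sketch below frames sketch-to-widget matching as nearest-neighbour classification of pre-segmented, equally sized grayscale element crops. The crop preparation, labels, and classifier choice are all assumptions, not the tool described in the paper.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def flatten_crops(crops):
    """crops: (n, h, w) grayscale element crops already cut out of the sketch."""
    return crops.reshape(len(crops), -1).astype(float) / 255.0

def train_matcher(labelled_crops, labels, k=3):
    """labels: digital widget types, e.g. 'button' or 'checkbox' (illustrative)."""
    clf = KNeighborsClassifier(n_neighbors=k)
    clf.fit(flatten_crops(labelled_crops), labels)
    return clf

def match_to_widgets(clf, sketch_crops):
    """Returns the predicted digital counterpart for each sketched element."""
    return clf.predict(flatten_crops(sketch_crops))

# Toy usage with random 32x32 crops standing in for scanned sketch elements
rng = np.random.default_rng(0)
train = rng.integers(0, 256, size=(20, 32, 32))
labels = ["button"] * 10 + ["checkbox"] * 10
clf = train_matcher(train, labels)
print(match_to_widgets(clf, rng.integers(0, 256, size=(3, 32, 32))))
```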


2021 ◽  
Vol 11 (13) ◽  
pp. 5826
Author(s):  
Evangelos Axiotis ◽  
Andreas Kontogiannis ◽  
Eleftherios Kalpoutzakis ◽  
George Giannakopoulos

Ethnopharmacology experts face several challenges when identifying and retrieving documents and resources related to their scientific focus. The volume of sources that need to be monitored, the variety of formats used, and the uneven quality of language use across sources present some of what we call "big data" challenges in the analysis of these data. This study aims to understand if and how experts can be supported effectively through intelligent tools in the task of ethnopharmacological literature research. To this end, we use a real case study of ethnopharmacology research focused on the southern Balkans and the coastal zone of Asia Minor, and we propose a methodology for more efficient research in ethnopharmacology. Our work follows an "expert-apprentice" paradigm in an automatic URL extraction process, through crawling, where the apprentice is a machine learning (ML) algorithm combining active learning (AL) and reinforcement learning (RL), and the expert is the human researcher. ML-powered research improved the effectiveness and efficiency of the domain expert by 3.1 and 5.14 times, respectively, retrieving a total of 420 relevant ethnopharmacological documents in only 7 h versus an estimated 36 h of human-expert effort. Therefore, using artificial intelligence (AI) tools to support the researcher can boost the efficiency and effectiveness of the identification and retrieval of appropriate documents.
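The abstract gives only the high-level expert-apprentice idea, so the sketch below shows one way such a crawling loop could be wired together: a TF-IDF relevance classifier acts as the apprentice, uncertain pages are routed to the human expert, and links from relevant pages are prioritised as a crude reward signal. `fetch_page`, `extract_links`, and `ask_expert` are hypothetical stand-ins; the paper's actual AL/RL formulation is not reproduced here.

```python
from collections import deque
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

def crawl(seed_urls, fetch_page, extract_links, ask_expert, budget=100):
    """Expert-apprentice focused crawl (illustrative only).

    fetch_page(url) -> page text or None; extract_links(text, url) -> iterable
    of URLs; ask_expert(url, text) -> 1 (relevant) or 0 (not relevant).
    """
    frontier = deque(seed_urls)
    texts, labels = [], []                         # labelled pages seen so far
    vec = TfidfVectorizer(max_features=5000)
    clf = LogisticRegression(max_iter=1000)
    relevant = []

    while frontier and budget > 0:
        url = frontier.popleft()
        text = fetch_page(url)
        if text is None:
            continue
        budget -= 1

        if len(set(labels)) < 2:                   # expert bootstraps the apprentice
            label = ask_expert(url, text)
        else:
            clf.fit(vec.fit_transform(texts), labels)
            p = clf.predict_proba(vec.transform([text]))[0, 1]
            # active learning: only uncertain pages go back to the expert;
            # confident model predictions are kept as weak labels (a simplification)
            label = ask_expert(url, text) if 0.35 < p < 0.65 else int(p >= 0.5)

        texts.append(text)
        labels.append(label)
        if label == 1:                             # "reward": expand relevant pages first
            relevant.append(url)
            frontier.extendleft(extract_links(text, url))
        else:
            frontier.extend(extract_links(text, url))
    return relevant
```

Plugging in a real fetcher, link extractor, and expert interface would give a working expert-in-the-loop crawler; the thresholds and weak-labelling policy here are arbitrary choices for the sketch.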


Energies ◽  
2021 ◽  
Vol 14 (5) ◽  
pp. 1377
Author(s):  
Musaab I. Magzoub ◽  
Raj Kiran ◽  
Saeed Salehi ◽  
Ibnelwaleed A. Hussein ◽  
Mustafa S. Nasser

The traditional way to mitigate loss circulation in drilling operations is to use preventative and curative materials. However, it is difficult to quantify how much of each material, and in what combination, is needed to produce customized rheological properties. In this study, machine learning (ML) is used to develop a framework that identifies material compositions for loss circulation applications based on the desired rheological characteristics. The relation between the rheological properties and the mud components of polyacrylamide/polyethyleneimine (PAM/PEI)-based mud is assessed experimentally. Four ML algorithms were implemented to model the rheological data for various mud components at different concentrations and testing conditions: (a) k-Nearest Neighbor, (b) Random Forest, (c) Gradient Boosting, and (d) AdaBoost. The Gradient Boosting model showed the highest accuracy (91% and 74% for plastic and apparent viscosity, respectively), and its predictions can be used in subsequent hydraulic calculations. Overall, the experimental study presented in this paper, together with the proposed ML-based framework, adds valuable information to the design of PAM/PEI-based muds. The ML models allowed a wide range of rheology assessments for various drilling fluid formulations, with a mean accuracy of up to 91%. The case study showed that, with the appropriate combination of materials, reasonable rheological properties could be achieved to prevent loss circulation by managing the equivalent circulating density (ECD).
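As an illustration of the model comparison described above, the sketch below fits the four regressor families on a synthetic stand-in for the rheology data and compares them by cross-validated R². The feature layout, the synthetic target, and the use of R² as the accuracy metric are assumptions; the authors' experimental data are not reproduced here.

```python
import numpy as np
from sklearn.ensemble import (RandomForestRegressor, GradientBoostingRegressor,
                              AdaBoostRegressor)
from sklearn.neighbors import KNeighborsRegressor
from sklearn.model_selection import cross_val_score

# Hypothetical column layout: PAM, PEI and additive concentrations plus test
# temperature as inputs; plastic viscosity (PV) as the target.
rng = np.random.default_rng(0)
X = rng.uniform(size=(120, 4))                                   # scaled inputs
y = 30 * X[:, 0] + 15 * X[:, 1] - 5 * X[:, 3] + rng.normal(0, 1, 120)  # synthetic PV

models = {
    "k-NN": KNeighborsRegressor(n_neighbors=5),
    "Random Forest": RandomForestRegressor(n_estimators=200, random_state=0),
    "Gradient Boosting": GradientBoostingRegressor(random_state=0),
    "AdaBoost": AdaBoostRegressor(random_state=0),
}
for name, model in models.items():
    r2 = cross_val_score(model, X, y, cv=5, scoring="r2").mean()
    print(f"{name:18s} mean CV R^2 = {r2:.2f}")
```

Replacing the synthetic arrays with the measured mud compositions and viscosities would reproduce the kind of comparison the abstract reports, with the best-performing model then used for the hydraulic calculations.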

