history match
Recently Published Documents

TOTAL DOCUMENTS: 171 (FIVE YEARS: 37)
H-INDEX: 10 (FIVE YEARS: 1)

2021
Author(s): Samat Ramatullayev, Shi Su, Coriolan Rat, Alaa Maarouf, Monica Mihai, ...

Abstract Brownfield field development plans (FDP) must be revisited on a regular basis to ensure the generation of production enhancement opportunities and to unlock challenging untapped reserves. However, for decades the conventional workflows have remained largely unchanged, inefficient, and time-consuming. The aim of this paper is to demonstrate that combining cutting-edge cloud computing with artificial intelligence (AI) and machine learning (ML) solutions enables an optimization plan to be delivered in weeks rather than months, with higher confidence. During this FDP optimization process, every stage necessitates the use of smart components (AI and ML techniques), from reservoir/production data analytics to history matching and forecasting. A combined cloud computing and AI solution is introduced. First, several static and dynamic uncertainty parameters are identified, inherited from static modelling and the history match. Second, elastic cloud computing is harnessed to run hundreds to thousands of history match scenarios over the uncertainty parameters in a much shorter period. AI techniques are then applied to extract the dominant features and determine their most likely values. During the FDP optimization process, data liberation paved the way for intelligent well placement, which identifies "sweet spots" using a probabilistic approach and facilitates the identification and quantification of by-passed oil. AI-assisted analytics revealed how the gas-oil ratio behavior of wells drilled at various locations in the field changed over time, and explained why this behavior was observed in one region of the reservoir while a nearby reservoir did not suffer from the same phenomenon. The cloud computing technology allowed hundreds of uncertainty cases to be screened with a high-resolution reservoir simulator within an hour. The results of the screening runs were fed into an AI optimizer, which produced the best possible combination of uncertainty parameters, resulting in an ensemble of history-matched cases with the lowest mismatch objective functions. We used an intuitive history matching analysis solution that visualizes the mismatch quality of all wells across various parameters in an automated manner to assess the history matching quality of an ensemble of cases. Finally, the cloud ecosystem's data liberation capability enabled the implementation of an intelligent algorithm for identifying new infill wells. The approach serves as a benchmark for optimizing the FDP of any reservoir orders of magnitude faster than conventional workflows. The methodology is unique in that it uses cloud computing and cutting-edge AI methods to create an integrated intelligent framework for FDP that generates rapid insights and reliable results, accelerates decision-making, and speeds up the entire process by orders of magnitude.
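A hypothetical sketch of the screening-and-ranking step described above, assuming illustrative parameter ranges and a toy stand-in for the simulator call (the abstract's actual cloud tooling is not public): sample the uncertainty space, evaluate a mismatch objective for each scenario in parallel, and keep the lowest-mismatch cases as the history-matched ensemble.

```python
# Minimal sketch; run_simulation, OBSERVED, and all ranges are illustrative.
from concurrent.futures import ProcessPoolExecutor
import numpy as np

rng = np.random.default_rng(42)

PARAM_RANGES = {                       # uncertainty parameters (hypothetical)
    "perm_multiplier": (0.5, 2.0),
    "aquifer_strength": (0.1, 10.0),
    "fault_transmissibility": (0.0, 1.0),
}

OBSERVED = np.linspace(1.0, 0.6, 12)   # toy 12-month observed rate history

def run_simulation(case):
    """Stand-in for the cloud reservoir-simulator call (toy response)."""
    decline = np.linspace(1.0, 0.5, 12) * case["perm_multiplier"]
    return decline * (1.0 - 0.2 * case["fault_transmissibility"])

def sample_case():
    """Draw one scenario by uniform sampling of each uncertainty range."""
    return {k: rng.uniform(lo, hi) for k, (lo, hi) in PARAM_RANGES.items()}

def mismatch(case):
    """RMS mismatch objective between simulated and observed rates."""
    return float(np.sqrt(np.mean((run_simulation(case) - OBSERVED) ** 2)))

if __name__ == "__main__":
    cases = [sample_case() for _ in range(500)]
    # The elastic cloud fans these out to many machines; locally, processes.
    with ProcessPoolExecutor() as pool:
        scores = list(pool.map(mismatch, cases))
    # Keep the lowest-mismatch cases as the history-matched ensemble.
    ensemble = [c for _, c in sorted(zip(scores, cases), key=lambda t: t[0])][:50]
    print(ensemble[0])
```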


2021
Author(s): Giorgio Fighera, Ernesto Della Rossa, Patrizia Anastasi, Mohammed Amr Aly, Tiziano Diamanti

Abstract Improvements in reservoir simulation runtimes, thanks to GPU-based simulators and the increasing computational power of modern HPC systems, are paving the way for the massive employment of Ensemble History Matching (EHM) techniques, which are intrinsically parallel. Here we present the results of a comparative study between a newly developed EHM tool that leverages GPU parallelism and a commercial third-party EHM software used as a benchmark. Both are tested on a real case. The reservoir chosen for the comparison has a production history of 3 years and 15 wells comprising oil producers and water and gas injectors. The EHM algorithm used is the Ensemble Smoother with Multiple Data Assimilation (ESMDA), and both tools have access to the same computational resources. The EHM problem was stated in the same way for both tools. The objective function considers well oil production, water cut, bottom-hole pressure, and gas-oil ratio. Porosity and horizontal permeability are used as 3D grid parameters in the update algorithm, along with nine scalar parameters for anisotropy ratios, Corey exponents, and fault transmissibility multipliers. Both the presented tool and the benchmark obtained a satisfactory history match quality. The benchmark tool took around 11.2 hours to complete, while the proposed tool took only 1.5 hours. The two tools performed similar updates on the scalar parameters, with only minor discrepancies. Updates on the 3D grid properties, however, show significant local differences. The updated ensemble for the benchmark reached extreme values of porosity and permeability, distributed in a heterogeneous way; these distributions are quite unlikely in some model regions given the initial geological characterization of the reservoir. The updated ensemble for the presented tool did not reach extreme values in either porosity or permeability, and the resulting property distributions are close to those of the initial ensemble; we can therefore conclude that the ensemble was successfully updated while preserving the geological characterization of the reservoir. Analysis suggests that this discrepancy is due to the different way our EHM code handles inactive cells in the grid update calculations compared to the benchmark, highlighting that statistics including inactive cells must be carefully managed to preserve the geological distribution represented in the initial ensemble. The presented EHM tool was developed from scratch to be fully parallel and to leverage the abundant computational resources available. Moreover, the ESMDA implementation was tweaked to improve the reservoir update by carefully managing inactive cells. A comparison against a benchmark showed that the proposed EHM tool achieved similar history match quality while improving both the computation time and the geological realism of the updated ensemble.
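For reference, a minimal numpy sketch of one ES-MDA analysis step following the standard formulation (not the authors' GPU code); array shapes and variable names are illustrative. The comment on masking ties back to the inactive-cell pitfall discussed above.

```python
import numpy as np

def esmda_update(M, D, d_obs, Cd_diag, alpha, rng):
    """One ES-MDA analysis step (standard formulation).

    M       : (n_param, n_ens) prior ensemble of model parameters
    D       : (n_data, n_ens)  simulated data for each ensemble member
    d_obs   : (n_data,)        observed data
    Cd_diag : (n_data,)        measurement-error variances
    alpha   : inflation coefficient for this step (the sum of 1/alpha_i
              over all assimilation steps must equal 1)

    When M holds 3D grid properties, rows for inactive cells should be
    masked out before computing the statistics below -- the pitfall the
    abstract highlights.
    """
    n_ens = M.shape[1]
    dM = M - M.mean(axis=1, keepdims=True)
    dD = D - D.mean(axis=1, keepdims=True)
    C_md = dM @ dD.T / (n_ens - 1)        # parameter-data cross-covariance
    C_dd = dD @ dD.T / (n_ens - 1)        # data auto-covariance
    # Perturb observations with inflated measurement noise
    D_obs = d_obs[:, None] + rng.normal(size=D.shape) * np.sqrt(alpha * Cd_diag)[:, None]
    K = C_md @ np.linalg.pinv(C_dd + alpha * np.diag(Cd_diag))
    return M + K @ (D_obs - D)
```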


2021
Author(s): M. A. Borregales Reverón, H. H. Holm, O. Møyner, S. Krogstad, K.-A. Lie

Abstract The Ensemble Smoother with Multiple Data Assimilation (ES-MDA) method has been popular for petroleum reservoir history matching. However, the increasing inclusion of automatic differentiation in reservoir models opens the possibility of history-matching models using gradient-based optimization. Here, we discuss, study, and compare ES-MDA and gradient-based optimization for history-matching waterflooding models. We apply both methods to history-match reduced GPSNet-type models, using implementations of ES-MDA and gradient-based optimization in the open-source MATLAB Reservoir Simulation Toolbox (MRST), and compare the methods in terms of history-matching quality and computational efficiency. We show the complementary advantages of both ES-MDA and gradient-based optimization. ES-MDA is suitable when an exact gradient is not available and provides a satisfactory forecast of future production that often envelops the reference history data. On the other hand, gradient-based optimization is efficient if the exact gradient is available, since it then requires a low number of model evaluations. If the exact gradient is not available, using an approximate gradient or ES-MDA are good alternatives that give equivalent results in terms of computational cost and prediction quality.
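A minimal sketch of the gradient-based route under an assumed toy forward model: scipy's least_squares approximates the gradient by finite differences, corresponding to the "approximate gradient" case; in MRST the exact gradient would come from adjoints/automatic differentiation. All names and the forward model are illustrative.

```python
import numpy as np
from scipy.optimize import least_squares

t = np.linspace(0.0, 3.0, 30)              # 3 years of history, toy scale
theta_true = np.array([0.8, 1.5])          # hidden "true" parameters

def forward(theta):
    """Toy waterflood response: exponential decline shaped by theta."""
    return theta[0] * np.exp(-theta[1] * t)

d_obs = forward(theta_true) + 0.01 * np.random.default_rng(0).normal(size=t.size)

def residuals(theta):
    return forward(theta) - d_obs          # mismatch to be minimized

# Finite-difference Jacobian stands in for the exact (adjoint/AD) gradient
fit = least_squares(residuals, x0=np.array([0.5, 1.0]))
print(fit.x)   # converges near theta_true in few model evaluations
```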


2021
Author(s): Son K. Hoang, Tung V. Tran, Tan N. Nguyen, Tu A. Truong, Duy H. Pham, ...

Abstract This study aims to apply machine learning (ML) to make the history matching (HM) process easier, faster, more accurate, and more reliable by determining whether Local Grid Refinement (LGR) with transmissibility multipliers is needed to history-match gas-condensate wells producing from geologically complex reservoirs, and how that LGR should be set up to successfully history-match those production wells. The main challenges in history-matching gas-condensate production from the Hai Thach wells are the large effect of condensate banking (condensate blockage), flow baffles created by the sub-seismic fault network, complex reservoir distribution and connectivity, highly uncertain HIIP, and the lack of PVT information for most reservoirs. In this study, ML was applied to analyze production data using synthetic samples generated by a very large number of compositional sector models, so that the need for LGR could be identified before the HM process and the required LGR setup could also be determined. The proposed method provided better models in a much shorter time and improved the efficiency and reliability of the dynamic modeling process. More than 500 synthetic samples were generated using compositional sector models and divided into training and test sets. Supervised classification algorithms, including logistic regression; Gaussian, Bernoulli, and multinomial naïve Bayes; linear discriminant analysis; support vector machines; K-nearest neighbors; and decision trees, as well as an artificial neural network (ANN), were applied to the data sets to determine the need for LGR in HM. The best algorithm was found to be the decision tree classifier, with 100% and 99% accuracy on the training and test sets, respectively. The size of the LGR area could also be determined reasonably well, at 89% and 87% accuracy on the training and test sets, respectively, and the range of the transmissibility multiplier at 97% and 91% accuracy, respectively. Moreover, the ML model was validated using actual production and HM data. This new method of applying ML in dynamic modeling and HM of challenging gas-condensate wells in geologically complex reservoirs has been successfully applied to the high-pressure high-temperature Hai Thach field offshore Vietnam. The proposed method eliminated many trial-and-error simulation runs and provided better and more reliable dynamic models.
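A hedged sketch (not the study's code) of the classification step: a decision tree trained on synthetic samples to predict whether LGR is needed. The feature names and labeling rule are hypothetical stand-ins for the production-data attributes derived from the compositional sector models.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(1)
n = 500                                     # ~500 synthetic samples, as above
X = np.column_stack([
    rng.uniform(0.1, 0.9, n),   # e.g. condensate-gas ratio decline slope
    rng.uniform(0.0, 1.0, n),   # e.g. normalized drawdown
    rng.uniform(0.0, 1.0, n),   # e.g. productivity-index loss fraction
])
# Toy labeling rule: a strong condensate-banking signature => LGR needed
y = ((X[:, 0] > 0.5) & (X[:, 2] > 0.4)).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=1)
clf = DecisionTreeClassifier(max_depth=4, random_state=1).fit(X_tr, y_tr)
print(accuracy_score(y_tr, clf.predict(X_tr)),   # train accuracy
      accuracy_score(y_te, clf.predict(X_te)))   # test accuracy
```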


2021
Author(s): Srungeer Simha, Manu Ujjwal, Gaurav Modi

Abstract Capacitance resistance modeling (CRM) is a data-driven analytical technique for waterflood optimization developed in the early 2000s. The popular implementation uses only production/injection data as input and makes the simplifying assumptions that pressure is maintained and that injection is the primary driver of production. While these assumptions make CRM a quick, plug-and-play technique that can easily be replicated between assets, they also lead to major pitfalls, as they are often invalid. This study explores these pitfalls and discusses workarounds and mitigations to improve the reliability of CRM. CRM was used as a waterflood optimization technique for three onshore oil fields, each having hundreds of active wells, multiple stacked reservoirs, and over 15 years of pattern waterflood development. The CRM algorithm was implemented in Python and consists of four modules: 1) a connectivity solver, in which connectivity between injectors and producers is quantified using a two-year history match period; 2) a fractional-flow solver, in which oil rates are established as a function of injection rates; 3) a verification module, a blind test to assess history match quality; and 4) a waterflood optimizer, which redistributes water between injectors, subject to facility constraints, and estimates the potential oil gain. Additionally, CRM results were interpreted and validated using an integrated visualization dashboard. The two main issues encountered while using CRM in this study were 1) poor history match (HM) quality and 2) very high run times, on the order of tens of hours, due to the large number of wells. The poor HM was attributed to significant noise in the production data, aquifer support contributing to production, and well interventions, such as water shut-offs and re-perforations, contributing to oil production. These issues were mitigated, and the HM improved, using data cleaning techniques such as smoothing, outlier removal, and the use of pseudo aquifer injectors for material balance. However, these techniques are not foolproof, because CRM relies only on trends between producers and injectors for waterflood optimization. Runtime, however, was reduced to a couple of hours by breaking the reservoir into sectors and using parallelization.
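The study's own Python code is not shown; the following is a minimal sketch of the standard CRM rate recursion for a single producer, q(t_k) = q(t_{k-1})·e^(-Δt/τ) + (1 − e^(-Δt/τ))·Σ_i f_i·I_i(t_k), with injector gains f_i (connectivities) and time constant τ fitted by least squares over a history-match window. The synthetic injection data and initial rate q0 are illustrative.

```python
import numpy as np
from scipy.optimize import least_squares

dt = 1.0                                     # monthly timestep
# 3 injectors, 24 months of synthetic injection rates
I = np.abs(np.random.default_rng(2).normal(500.0, 50.0, size=(3, 24)))

def crm_rates(params, q0=400.0):
    """CRM recursion: params = [tau, f1, f2, f3]."""
    tau, f = params[0], params[1:]
    decay = np.exp(-dt / tau)
    q, prev = np.empty(I.shape[1]), q0
    for k in range(I.shape[1]):
        prev = prev * decay + (1.0 - decay) * f @ I[:, k]
        q[k] = prev
    return q

q_obs = crm_rates(np.array([6.0, 0.3, 0.2, 0.1]))  # synthetic "history"

def residuals(params):
    return crm_rates(params) - q_obs               # history-match misfit

fit = least_squares(residuals, x0=np.array([3.0, 0.2, 0.2, 0.2]),
                    bounds=([0.1, 0.0, 0.0, 0.0], [50.0, 1.0, 1.0, 1.0]))
print(fit.x)   # recovered tau and connectivities f_i
```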


2021
Author(s): Ali Al-Turki, Obai Alnajjar, Majdi Baddourah, Babatunde Moriwawon

Abstract Algorithms and workflows have been developed to couple efficient model parameterization with stochastic, global optimization using a Multi-Objective Genetic Algorithm (MOGA) for global history matching, coupled with an advanced workflow for streamline sensitivity-based inversion for fine-tuning. During parameterization, the low-rank subsets of the most influential reservoir parameters are identified and propagated to MOGA to perform the field-level history match. Data misfits between the historical field data and simulation data are calculated over multiple realizations of reservoir models that quantify and capture reservoir uncertainty. Each generation of the optimization algorithm reduces the data misfit relative to the previous iteration, and this iterative process continues until a satisfactory field-level history match is reached or no further improvement is possible. A fine-tuning process of well-connectivity calibration is then performed with a streamline sensitivity-based inversion algorithm to locally update the model and reduce well-level mismatch. In this study, an application of the proposed algorithms and workflow is demonstrated for model calibration and history matching. The synthetic reservoir model used in this study is discretized into millions of grid cells with hundreds of producer and injector wells. It is designed to generate several decades of production and injection history to evaluate and demonstrate the workflow. In field-level history matching, reservoir rock properties (e.g., permeability, fault transmissibility) are parameterized to conduct the global match of pressure and production rates. The Grid Connectivity Transform (GCT) was used and assessed to parameterize the reservoir properties. In addition, the convergence rate and history match quality of MOGA were assessed during the field (global) history matching, and the effectiveness of the streamline-based inversion was evaluated by quantifying the additional improvement in history match quality per well. The developed parameterization and optimization algorithms and workflows revealed the unique features of each algorithm for model calibration and history matching. This integrated workflow successfully defined and carried uncertainty throughout the history matching process. Following the successful field-level history match, well-level history matching was conducted using streamline sensitivity-based inversion, which further improved the history match quality and conditioned the model to the historical production and injection data. In general, the workflow results in enhanced history match quality in a shorter turnaround time, and the geological realism of the model is retained for robust prediction and development planning.
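A toy illustration of the multi-objective GA selection loop (not the paper's implementation): candidate parameter vectors are scored on two misfit objectives, the non-dominated (Pareto-front) members survive as parents, and crossover plus mutation refills the population. The objectives and dimensions are placeholders.

```python
import numpy as np

rng = np.random.default_rng(3)
POP, N_PARAM, GENS = 40, 5, 50

def objectives(x):
    """Two hypothetical misfit terms to minimize jointly
    (e.g. pressure misfit and rate misfit)."""
    return np.array([np.sum((x - 0.3) ** 2), np.sum((x - 0.7) ** 2)])

def dominates(a, b):
    """True if objective vector a Pareto-dominates b."""
    return bool(np.all(a <= b) and np.any(a < b))

pop = rng.uniform(0.0, 1.0, size=(POP, N_PARAM))
for _ in range(GENS):
    scores = np.array([objectives(x) for x in pop])
    # Non-dominated members form the Pareto front and survive as parents
    front = [i for i in range(POP)
             if not any(dominates(scores[j], scores[i]) for j in range(POP))]
    parents = pop[front]
    children = []
    while len(parents) + len(children) < POP:
        a, b = parents[rng.integers(len(parents), size=2)]
        mix = rng.uniform(size=N_PARAM)                  # blend crossover
        child = mix * a + (1.0 - mix) * b
        child += rng.normal(0.0, 0.02, size=N_PARAM)     # mutation
        children.append(np.clip(child, 0.0, 1.0))
    pop = np.vstack([parents] + children)[:POP]

print(np.round(pop[:3], 2))   # members trading off the two misfits
```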


2021
Author(s): Nurudeen Oluwatosin Yusuf, Lynn Silpngarmlers

Abstract The Reservoir-H sequence, comprising three reservoirs (H1, H2, and H3), is one of the most complex reservoirs in the Niger Delta. With a combined well count in excess of sixty producers and injectors and a production history of more than fifty-five years, the reservoir has had a history of challenging simulation studies with only average water-cut matches, resulting in new wells experiencing high water breakthrough from the onset. In the latest effort, an assisted history match using a genetic algorithm was employed. This is a two-step approach: identification of all relevant history match parameters for the three reservoirs, followed by fine-tuning of the pressure and saturation history match using the genetic algorithm. The approach identified the aquifer assumptions (architecture and transmissibility) as a critical factor in successfully matching the wells in these reservoirs. In addition to the pressure and saturation matches, infill opportunities were further validated by tracking current reservoir fluid contacts with the model. The current model has significantly improved the overall water-cut match in more than ten wells that historically had water-breakthrough challenges, while using principally global history-match parameters. The elimination of many local changes in the current model is expected to improve both the reliability and the shelf life of the model. Also, the variance between estimated and actual gas-oil and oil-water contacts around infill locations is less than five feet, indicating good predictability of the model. To save development cost, multiple opportunities identified in these reservoirs are to be targeted with dual strings. Additional savings were realized by reducing the overall simulation studies timeline by four months.
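The abstract does not detail how the relevant history-match parameters were identified; one common screening approach consistent with the description is a one-at-a-time (tornado) sensitivity scan, sketched below with hypothetical parameter names and a placeholder misfit function standing in for a simulator run.

```python
import numpy as np

BASE = {"aquifer_size": 1.0, "aquifer_transmissibility": 1.0,
        "kv_kh_ratio": 0.1, "relperm_exponent": 2.0}
SPAN = 0.5   # perturb each parameter +/-50% around its base value

def misfit(params):
    """Placeholder: weighted pressure + water-cut mismatch from one run."""
    return (2.0 * abs(params["aquifer_transmissibility"] - 0.6)
            + 1.0 * abs(params["aquifer_size"] - 1.3)
            + 0.1 * abs(params["kv_kh_ratio"] - 0.1))

ranking = []
for name, base_val in BASE.items():
    lo, hi = dict(BASE), dict(BASE)
    lo[name] = base_val * (1.0 - SPAN)
    hi[name] = base_val * (1.0 + SPAN)
    ranking.append((name, abs(misfit(hi) - misfit(lo))))  # tornado width

# Parameters with the widest effect are passed on to the genetic algorithm
for name, effect in sorted(ranking, key=lambda t: -t[1]):
    print(f"{name:26s} effect on misfit: {effect:.3f}")
```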


Energies, 2021, Vol 14 (14), pp. 4324
Author(s): Vladislav Brkić, Ivan Zelenika, Petar Mijić, Igor Medved

The storage of natural gas in geological structures such as depleted fields, aquifers, and salt caverns plays an important role in a gas supply system, as it balances fluctuations in gas demand and price. Hydraulic loss due to fluid flow through gas storage production equipment, and the interfering effect of unequal productivity indices among storage wells, may have an important influence on gas storage performance. An integrated mathematical model is developed based on production data from an underground gas storage facility, and the hydraulic loss is determined with it. A real test case consisting of a gas storage reservoir linked to the surface facility is analysed. The mathematical model uses an experimentally determined pressure drop coefficient for the chokes. The base case scenario, created using real gas storage facility data, achieves a good history match with the given parameters of the gas storage reservoir. Using the history match simulation case as an initial scenario (base case), two different scenarios are created to determine the injection and withdrawal performance of the gas storage field. The results indicate that the pressure drop across the chokes, even when fully open, acts as a constraint in an underground gas storage facility and has a significant impact on gas storage operations and deliverability.
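The paper's exact hydraulic model is not given here; as one plausible form, a choke treated as a local resistance with an experimentally determined coefficient K gives Δp = K·ρ·v²/2. The sketch below uses this assumed form with illustrative numbers (rate at line conditions, bore size, and K are all hypothetical).

```python
import math

def choke_pressure_drop(q_m3_per_s: float, rho_kg_m3: float,
                        bore_diameter_m: float, K: float) -> float:
    """Pressure drop (Pa) across a choke modeled as a local resistance:
    dp = K * rho * v**2 / 2, with velocity from rate and bore area."""
    area = math.pi * bore_diameter_m ** 2 / 4.0
    velocity = q_m3_per_s / area
    return K * rho_kg_m3 * velocity ** 2 / 2.0

# Example: 0.5 m3/s of gas at 60 kg/m3 through an 80 mm bore, K = 1.8
dp = choke_pressure_drop(0.5, 60.0, 0.08, K=1.8)
print(f"choke pressure drop ~ {dp / 1e5:.1f} bar")
```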


2021, Vol 40 (7), pp. 494-501
Author(s): Jean-Paul van Gestel

In 2019, the fourth ocean-bottom-node survey was acquired over Atlantis Field. This survey was quickly processed to provide useful time-lapse (4D) observations two months after the end of acquisition. The time-lapse observations were immediately valuable in placing wells, refining final drilling target locations, updating well prioritization, and sequencing production and water-injection wells. These data are indispensable pieces of information that bring geophysicists and reservoir engineers together and focus the conversation on key remaining uncertainties such as fault transmissibilities and drainage areas. Time-lapse observations can confirm the key conceptual models already in place, but they are even more valuable when they highlight alternative models that have not yet been considered. The lessons learned from the acquisition, processing, analysis, interpretation, and integration of the data are shared. Some of these lessons are reiterations of previous work, but several new ones originated from the 2019 acquisition. This was the first time independent simultaneous sources were successfully deployed to acquire a time-lapse survey, resulting in a much faster and less expensive acquisition. In addition, full-waveform inversion was used as the main tool to update the velocity model, enabling a much faster processing turnaround. The fast turnaround made it possible to incorporate the latest acquisition to better constrain the velocity model update, and the updated velocity model was used for the final time-lapse migration. In the integration stage, a 4D-assisted history-match workflow was engaged to update the reservoir model history match. All of these upgrades led to an overall faster, less expensive, and better way to incorporate the acquired data into final business decisions.
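The abstract is qualitative; as background (not a metric the author names), a widely used way to quantify time-lapse signal and repeatability between baseline and monitor traces is the NRMS difference, sketched here on synthetic data.

```python
import numpy as np

def rms(x: np.ndarray) -> float:
    return float(np.sqrt(np.mean(x ** 2)))

def nrms(baseline: np.ndarray, monitor: np.ndarray) -> float:
    """NRMS (%) = 200 * RMS(monitor - baseline) / (RMS(monitor) + RMS(baseline))."""
    return 200.0 * rms(monitor - baseline) / (rms(monitor) + rms(baseline))

rng = np.random.default_rng(4)
base = rng.normal(size=1000)                 # toy baseline trace
mon = base + 0.1 * rng.normal(size=1000)     # small production-related change
print(f"NRMS = {nrms(base, mon):.1f}%")      # low values => repeatable survey
```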

