history matching
Recently Published Documents


TOTAL DOCUMENTS: 2044 (FIVE YEARS: 469)

H-INDEX: 42 (FIVE YEARS: 8)

Lithosphere ◽  
2022 ◽  
Vol 2022 (Special 1) ◽  
Author(s):  
Yingfei Sui ◽  
Chuanzhi Cui ◽  
Zhen Wang ◽  
Yong Yang ◽  
Peifeng Jia

Abstract Interlayer interference is severe during water-flooding development, especially when a reservoir is produced by commingling multiple layers. Implementing measures to mitigate interlayer interference requires that the production performance parameters and remaining-oil distribution of each layer be clearly defined, and accurate production splitting of the oil wells is the key. In this paper, the five-spot pattern is simplified to a single-well commingled-production model centered on the oil well. Accurate production-splitting results are obtained through automatic history matching of single-well production performance. Comparison between the results of this method and those of reservoir numerical simulation shows that the method is simple, accurate, and practical. In field application, for multilayer commingled-production reservoirs that lack an accurate numerical simulation model, the method can quickly and accurately split reservoir production using development performance data.
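The splitting idea can be sketched in a few lines. The toy below is a hypothetical two-layer version (not the paper's full model): each layer's rate follows a simple productivity-index relation q_i = alpha_i * J * (p_res_i - p_wf), and the layer allocation fraction alpha is history matched against the observed total-rate series by grid search.

```python
import numpy as np

def simulate_total_rate(alpha1, p_res, p_wf, J=1.0):
    """Total commingled rate for layer fractions (alpha1, 1 - alpha1).

    p_res: (n_times, 2) per-layer reservoir pressures,
    p_wf:  (n_times,)  flowing bottom-hole pressure.
    """
    alpha = np.array([alpha1, 1.0 - alpha1])
    return J * (alpha * (p_res - p_wf[:, None])).sum(axis=1)

def match_split(q_obs, p_res, p_wf, n_grid=101):
    """Grid-search the layer-1 fraction that minimizes the rate mismatch."""
    grid = np.linspace(0.0, 1.0, n_grid)
    sse = [np.sum((simulate_total_rate(a, p_res, p_wf) - q_obs) ** 2)
           for a in grid]
    return grid[int(np.argmin(sse))]
```

Because the two layers deplete at different rates, the total-rate history alone identifies the split; the paper's method applies the same matching principle to the full single-well performance data.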


2021 ◽  
pp. 1-20
Author(s):  
Youjun Lee ◽  
Byeongcheol Kang ◽  
Joonyi Kim ◽  
Jonggeun Choe

Abstract Reservoir characterization is one of the essential procedures for decision making. However, conventional inversion methods for history matching lose geological information and perform poorly when applied to channel reservoirs. We therefore propose a model-regeneration scheme for reliable uncertainty quantification of channel reservoirs that does not rely on conventional model inversion. The proposed method consists of three parts: feature extraction, model selection, and model generation. In the feature-extraction part, drainage-area localization and the discrete cosine transform are adopted to extract channel features in the near-wellbore area. In the model-selection part, K-means clustering and an ensemble ranking method are utilized to select models with characteristics similar to the true reservoir. In the last part, deep convolutional generative adversarial networks (DCGAN) and transfer learning are applied to generate new models similar to the selected ones. After generation, the model-selection process is repeated to choose final models from among the selected and generated models. These final models are used to quantify the uncertainty of a channel reservoir by predicting their future production. Applied to three different channel fields, the proposed scheme provides reliable models for production forecasts with reduced uncertainty. The analyses show that the scheme effectively characterizes channel features and increases the probability that models similar to the true model exist in the ensemble.
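Two of the named ingredients, DCT-based feature extraction and K-means model selection, can be illustrated compactly. This is a sketch on hypothetical facies maps, not the paper's implementation: low-frequency DCT coefficients serve as channel features, and a tiny deterministic K-means groups ensemble members with similar character.

```python
import numpy as np

def dct2(x):
    """Orthonormal 2-D DCT-II of a square map via the cosine basis matrix."""
    n = x.shape[0]
    k, m = np.arange(n)[:, None], np.arange(n)[None, :]
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * m + 1) * k / (2 * n))
    C[0] /= np.sqrt(2.0)   # DC row normalization
    return C @ x @ C.T

def features(facies_map, keep=4):
    """Flatten the top-left keep x keep (low-frequency) DCT block."""
    return dct2(facies_map)[:keep, :keep].ravel()

def kmeans(X, k, n_iter=20):
    """Tiny K-means with deterministic farthest-point initialization."""
    centers = [X[0]]
    for _ in range(k - 1):
        d = np.min([((X - c) ** 2).sum(axis=1) for c in centers], axis=0)
        centers.append(X[np.argmax(d)])
    centers = np.array(centers, dtype=float)
    for _ in range(n_iter):
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels
```

Keeping only the low-frequency block is what makes the feature robust: channel geometry survives, grid-cell noise does not.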


Mathematics ◽  
2021 ◽  
Vol 9 (24) ◽  
pp. 3289
Author(s):  
Emil N. Musakaev ◽  
Sergey P. Rodionov ◽  
Nail G. Musakaev

A three-dimensional numerical hydrodynamic model describes the development of oil and gas fields fairly accurately, and has good predictive properties, only if high-quality input data and comprehensive information about the reservoir are available. Under high input-data uncertainty, measurement errors, and significant time and resource costs for processing and analyzing large amounts of data, the use of such models may be unjustified and can lead to ill-posed problems in which either the uniqueness of the solution or its stability is violated. A well-known remedy for these problems is regularization, i.e., the addition of a priori information. In contrast to full-scale modeling, reduced-physics models are currently under active development; they are used, first of all, when an operational decision is required and computational resources are limited. One of the most popular simplified models is the material balance model, which directly captures the relationship between reservoir pressure, flow rates, and the integral reservoir characteristics. In this paper, a hierarchical approach to the oil-field waterflooding control problem is proposed, using material balance models in successive approximations: first for the field as a whole, then for hydrodynamically connected blocks of the field, and then for individual wells. When moving from one level of model detail to the next, the modeling results from the previous levels of the hierarchy are used as additional regularizing information, which ultimately makes it possible to correctly solve the history matching problem (identification of the filtration model) under incomplete input information.
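The field-level material balance at the top of the hierarchy fits in a few lines. A minimal single-tank sketch (consistent units assumed; the numbers in the check below are hypothetical): average reservoir pressure responds to net withdrawal through total compressibility.

```python
def tank_pressure(p_init, c_t, pore_volume, injected, produced):
    """Single-tank material balance:
    p = p_init + (V_injected - V_produced) / (c_t * V_pore)."""
    return p_init + (injected - produced) / (c_t * pore_volume)
```

Balanced injection and production keeps pressure flat, while net withdrawal depletes the tank; the block level of the hierarchy adds inter-block transfer terms to the same balance.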


2021 ◽  
Author(s):  
Son Hoang ◽  
Tung Tran ◽  
Tan Nguyen ◽  
Tu Truong ◽  
Duy Pham ◽  
...  

Abstract This paper reports a successful case study of applying machine learning to improve the history matching process, making it easier, less time-consuming, and more accurate, by determining whether Local Grid Refinement (LGR) with a transmissibility multiplier is needed to history match gas-condensate wells producing from geologically complex reservoirs, and by determining the required LGR setup for those gas-condensate producers. History matching the Hai Thach gas-condensate production wells is extremely challenging due to the combined effects of condensate banking, a sub-seismic fault network, complex reservoir distribution and connectivity, uncertain HIIP, and a lack of PVT data for most reservoirs. In fact, for some wells, many trial simulation runs were conducted before it became clear that LGR with a transmissibility multiplier was required to obtain a good history match. To minimize this time-consuming trial-and-error process, machine learning was applied in this study to analyze production data using synthetic samples generated by a very large number of compositional sector models, so that the need for LGR could be identified before the history matching process begins. Furthermore, the machine learning application could also determine the required LGR setup. The method helped provide better models in a much shorter time and greatly improved the efficiency and reliability of the dynamic modeling process. More than 500 synthetic samples were generated using compositional sector models and divided into separate training and test sets. Multiple classification algorithms, such as logistic regression, Gaussian naive Bayes, Bernoulli naive Bayes, multinomial naive Bayes, linear discriminant analysis, support vector machine, K-nearest neighbors, and decision tree, as well as artificial neural networks, were applied to predict whether LGR was used in the sector models. The best algorithm was found to be the decision tree classifier, with 100% accuracy on the training set and 99% accuracy on the test set. The LGR setup (size of the LGR area and range of the transmissibility multiplier) was also best predicted by the decision tree classifier, with 91% accuracy on the training set and 88% accuracy on the test set. The machine learning model was validated using actual production data and the dynamic models of history-matched wells. Finally, using the machine learning predictions on wells with poor history matching results, their dynamic models were updated and significantly improved.
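The classifiers named above come from standard ML libraries; as a self-contained illustration of the core mechanism, here is a one-split decision stump (the building block of the decision tree that performed best), trained by exhaustive threshold search on hypothetical synthetic features.

```python
import numpy as np

def fit_stump(X, y):
    """Search every (feature, threshold, label assignment) for the split
    with the lowest misclassification rate on the training set."""
    best = (np.inf, None)
    for f in range(X.shape[1]):
        for t in np.unique(X[:, f]):
            for left_label in (0, 1):
                pred = np.where(X[:, f] <= t, left_label, 1 - left_label)
                err = np.mean(pred != y)
                if err < best[0]:
                    best = (err, (f, t, left_label))
    return best[1]

def predict_stump(model, X):
    """Apply the fitted (feature, threshold, left_label) rule."""
    f, t, left_label = model
    return np.where(X[:, f] <= t, left_label, 1 - left_label)
```

A full decision tree recurses this search inside each branch; on well-separated production-data features (as in the study's "LGR needed or not" labels), even one split can classify cleanly.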


2021 ◽  
Author(s):  
Mokhles Mezghani ◽  
Mustafa AlIbrahim ◽  
Majdi Baddourah

Abstract Reservoir simulation is a key tool for predicting the dynamic behavior of a reservoir and optimizing its development. Fine-scale, CPU-demanding simulation grids are necessary to improve the accuracy of the simulation results. We propose a hybrid modeling approach that minimizes the weight of the full-physics (FP) model by dynamically building and updating an artificial intelligence (AI) based model that can quickly mimic it. The methodology starts by running the FP model; an associated AI model is systematically updated using the newly performed FP runs. Once the mismatch between the two models falls below a predefined cutoff, the FP model is switched off and only the AI model is used. The FP model is switched back on at the end of the exercise, either to confirm the AI model's decision and stop the study, or to reject that decision (high mismatch between the FP and AI models) and upgrade the AI model. The proposed workflow was applied to a synthetic reservoir model with the objective of matching the average reservoir pressure. For this study, a fine-scale simulation grid (approximately 50 million cells) was necessary to properly account for reservoir heterogeneity and improve the accuracy of the simulation results. A reservoir simulation using the FP model and 1024 CPUs requires approximately 14 hours. Six parameters were selected for the optimization loop, so a Latin Hypercube Sampling (LHS) of seven FP runs was used to initiate the hybrid approach and build the first AI model. During history matching, only the AI model is used. At the convergence of the optimization loop, a final FP run is performed, either to confirm convergence for the FP model or to re-iterate the same approach starting from an LHS around the converged solution; the next AI model is then updated using all the FP simulations performed in the study. This approach achieves a history match of very acceptable quality with far less computational resources and CPU time. CPU-intensive, multimillion-cell simulation models are commonly used in reservoir development, and completing a reservoir study in an acceptable timeframe is a real challenge in such situations; new concepts and techniques are needed to complete such studies successfully. The hybrid approach we propose shows very promising results for handling this challenge.
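The switch-over logic can be sketched with a hypothetical one-parameter response standing in for the 50-million-cell FP model: a handful of FP runs train a polynomial proxy (the "AI" model of this toy), the search then runs on the proxy alone, and a single confirming FP run measures the final mismatch.

```python
import numpy as np

def fp_model(x):
    """Hypothetical stand-in for the expensive full-physics simulator:
    a smooth response (e.g. pressure-match quality vs. one parameter)."""
    return -(x - 2.0) ** 2 + 3.0

# 1) Seed the proxy with a few FP runs (the study used a 7-point
#    Latin Hypercube over its six parameters).
x_train = np.linspace(0.0, 3.0, 7)
y_train = np.array([fp_model(x) for x in x_train])
coef = np.polyfit(x_train, y_train, deg=3)       # cheap proxy model

# 2) Optimize on the cheap proxy only (dense grid search here).
x_grid = np.linspace(0.0, 3.0, 301)
x_best = x_grid[np.argmax(np.polyval(coef, x_grid))]

# 3) One confirming FP run; if the mismatch exceeded a cutoff, the
#    loop would restart with an LHS around x_best.
mismatch = abs(fp_model(x_best) - np.polyval(coef, x_best))
```

The economics are the point: every iteration of the optimization loop costs a polynomial evaluation instead of a 14-hour, 1024-CPU simulation, and only the confirmation step pays full price.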


2021 ◽  
Author(s):  
Samuel Aderemi ◽  
Husain Ali Al Lawati ◽  
Mansura Khalfan Al Rawahy ◽  
Hassan Kolivand ◽  
Manish Kumar Singh ◽  
...  

Abstract This paper presents an innovative and practical workflow implemented in a southern Oman asset. The asset consists of three isolated accumulations (fields, or structures) that differ in rock and fluid properties. Each structure has multiple stacked members of the Gharif and Alkhlata formations. Oil production started in 1986, with more than 60 commingled wells. The accumulations are not only structurally and stratigraphically complicated but also dynamically complex, with numerous uncertain inputs. Assisting the history matching process with a modern optimization-based technique was impossible due to the structural complexity of the reservoirs and the magnitude of the uncertain parameters. A structured history-matching approach, the Stratigraphic Method (SM), was therefore adopted, guided by suitable subsurface physics and adjusting multiple uncertain parameters simultaneously within the uncertainty envelope to mimic the model response. An essential step in this method is the preliminary analysis, which integrates various geological and engineering data to understand the reservoir behavior and the physics controlling the reservoir dynamics. The first step in history-matching these models was to adjust the critical water saturation to correct the numerical water production while honoring capillary-gravity equilibrium and reservoir fluid-flow dynamics. The significance of adjusting the critical water saturation before modifying other parameters, and the causes of this numerical water production, are discussed. Subsequently, the other major uncertain parameters were identified and modified, while localized adjustments were avoided except in two wells; this local change was guided by a streamlined technique to ensure minimal model modification and retain geological realism. Overall, acceptable model-calibration results were achieved. The novelty of the history-matching framework lies in how the numerical water production was controlled above the transition zone and how the reservoir dynamics were understood from the limited data.
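The first matching step above raises the critical water saturation so that connate water above the transition zone is immobile. A minimal sketch with a Corey-type water relative permeability curve (endpoint and exponent values are hypothetical) shows the mechanism: any water saturation at or below the critical value has zero water mobility, which removes spurious numerical water production.

```python
def corey_krw(sw, swcr, sorw, krw_max=0.3, n=2.0):
    """Corey-type water relative permeability: zero at or below the
    critical water saturation swcr, rising to krw_max at sw = 1 - sorw."""
    if sw <= swcr:
        return 0.0
    s = min((sw - swcr) / (1.0 - swcr - sorw), 1.0)  # normalized saturation
    return krw_max * s ** n
```

For example, a cell initialized at sw = 0.20 produces water when swcr = 0.15 but is correctly immobile once swcr is raised above 0.20, which is the essence of the correction described above.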


2021 ◽  
Author(s):  
Ryan Santoso ◽  
Xupeng He ◽  
Marwa Alsinan ◽  
Ruben Figueroa Hernandez ◽  
Hyung Kwak ◽  
...  

Abstract History matching is a critical step in the reservoir management process that synchronizes the simulation model with the production data. The history-matched model can be used for planning optimum field development and for performing optimization and uncertainty quantification. We present a novel history matching workflow based on a Bayesian framework that accommodates subsurface uncertainties. Our workflow involves three model resolutions within the Bayesian framework: 1) a coarse low-fidelity model to update the prior range, 2) a fine low-fidelity model to represent the high-fidelity model, and 3) a high-fidelity model to reconstruct the real response. The low-fidelity models are constructed from multivariate polynomial functions, while the high-fidelity model is the reservoir simulation model. We first develop the coarse low-fidelity model using a two-level Design of Experiment (DoE), which aims to provide a better prior. We then use Latin Hypercube Sampling (LHS) to construct the fine low-fidelity model deployed in the Bayesian runs, where we use the Metropolis-Hastings algorithm. Finally, the posterior is fed into the high-fidelity model to evaluate the matching quality. This work demonstrates the importance of including uncertainties in history matching: the Bayesian framework allows robust uncertainty quantification within reservoir history matching. Under a uniform prior, the convergence of the Bayesian runs is very sensitive to the parameter ranges; when the solution is far from the mean of the parameter ranges, the Bayesian inference introduces bias and deviates from the observed data. Our results show that updating the prior from the coarse low-fidelity model accelerates the Bayesian convergence and improves the match. Bayesian inference requires a huge number of runs to produce an accurate posterior, and running the high-fidelity model many times is expensive. Our workflow tackles this problem by deploying a fine low-fidelity model to represent the high-fidelity model in the main runs; this fine low-fidelity model is fast to run while honoring the physics and accuracy of the high-fidelity model. We also use ANOVA sensitivity analysis to measure the importance of each parameter; the ranking highlights the significant parameters that may contribute to the matching accuracy. We demonstrate the workflow on a geothermal reservoir with static and operational uncertainties, where it produces accurate matches of the thermal recovery factor and produced-enthalpy rate with physically consistent posteriors. The proposed method is generic and can be readily applied within existing history-matching workflows in reservoir simulation.
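The Metropolis-Hastings step on a polynomial low-fidelity model can be sketched directly (the response function, observed value, and noise level below are all hypothetical): because the proxy replaces the simulator inside the chain, thousands of posterior samples cost almost nothing.

```python
import numpy as np

rng = np.random.default_rng(1)

def low_fidelity(theta):
    """Polynomial proxy standing in for the fine low-fidelity model."""
    return 2.0 * theta + 0.5 * theta ** 2

y_obs, sigma = 3.0, 0.2       # observed response and noise (hypothetical)

def log_post(theta, lo=-5.0, hi=5.0):
    """Gaussian likelihood under a uniform prior on [lo, hi]."""
    if not lo <= theta <= hi:
        return -np.inf
    r = (y_obs - low_fidelity(theta)) / sigma
    return -0.5 * r * r

theta, lp = 0.0, log_post(0.0)
chain = []
for _ in range(5000):
    prop = theta + 0.3 * rng.standard_normal()   # random-walk proposal
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:      # Metropolis acceptance
        theta, lp = prop, lp_prop
    chain.append(theta)
posterior = np.array(chain[1000:])               # discard burn-in
```

The prior range `[lo, hi]` is exactly what the coarse low-fidelity stage narrows in the workflow above: a tighter, better-centered range is what accelerates the chain's convergence.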

