Gravity Drainage System: Investigation and Field Development Using Geological Modelling and Reservoir Simulation

2021
Author(s):
Obinna Somadina Ezeaneche
Robinson Osita Madu
Ishioma Bridget Oshilike
Orrelo Jerry Athoja
Mike Obi Onyekonwu

Abstract Proper understanding of a reservoir's producing mechanism forms the backbone of optimal fluid recovery in any reservoir. Such understanding is usually fostered by detailed petrophysical evaluation, structural interpretation, geological description and modelling, as well as production performance assessment prior to history matching and reservoir simulation. In this study, gravity drainage was identified as the primary producing mechanism in reservoir X, located in the Niger Delta province, and this required proper model calibration by varying the vertical anisotropy ratio based on identified facies, as against a single-value method that does not capture heterogeneity properly. Using structural maps generated from interpretation of seismic data, together with petrophysical parameters from available well logs and core data such as porosity, permeability and facies descriptions based on environment of deposition, a geological model capturing the structural dips, facies distribution and well locations was built. Dynamic modelling was conducted on the base case model and also on low and high case conceptual models to capture different structural dips of the reservoir. The history match of the base case model shows that varying the vertical anisotropy ratio (i.e. kv/kh) by identified facies across the system captures heterogeneity more effectively than the more common single deterministic value. In addition, gas segregated fastest in the high case model, which has the steepest dip, compared to the base and low case models. An improved saturation match in the dynamic model was achieved, in line with the geological description and the observed reservoir performance. Quick-win scenarios were identified, yielding additional reserves of over 1 MMSTB. Structural control, facies type, reservoir thickness and the nature of oil volatility are therefore the key factors driving the gravity drainage mechanism.
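The facies-based kv/kh assignment described in this abstract can be sketched in a few lines. The facies codes, permeabilities and anisotropy values below are purely illustrative, not the study's calibrated numbers:

```python
import numpy as np

# Hypothetical facies codes along a grid column:
# 0 = channel sand, 1 = heterolithics, 2 = shale
facies = np.array([0, 0, 1, 2, 1, 0])
kh = np.array([850.0, 900.0, 120.0, 5.0, 95.0, 780.0])  # horizontal permeability, mD

# Facies-dependent kv/kh ratios (illustrative values) instead of a
# single deterministic value applied everywhere
KV_KH_BY_FACIES = {0: 0.5, 1: 0.1, 2: 0.01}

kvkh = np.array([KV_KH_BY_FACIES[f] for f in facies])
kv = kh * kvkh  # vertical permeability honours the facies heterogeneity
```

A single-value method would collapse `KV_KH_BY_FACIES` to one constant, which is exactly the loss of heterogeneity the study argues against.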

PETRO
2018
Vol 4 (4)
Author(s):
Muhamad Taufan Azhari

<p>Reservoir simulation is an area of reservoir engineering in which computer models are used to predict the flow of fluids through porous media. A reservoir simulation study proceeds through several steps: data preparation, model and grid construction, initialization, history matching and prediction. Initialization is performed to match the OOIP, the total initial hydrocarbon filling the reservoir, against the control volume estimated with the volumetric method.</p><p>To obtain reliable forecasts, the development scenarios of TR Field Layer X are predicted over 30 years (from 2014 until January 2044). Four development scenarios are considered: Scenario 1 (Base Case), Scenario 2 (Base Case + reopening of inactive wells), Scenario 3 (Scenario 2 + infill production wells), and Scenario 4 (Scenario 2 + 5-spot pattern of infill injection wells).</p>
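The volumetric OOIP that initialization is checked against follows the standard formula N = 7758·A·h·φ·(1 − Sw)/Bo. A minimal sketch with illustrative inputs (not the TR Field values):

```python
def ooip_stb(area_acres, thickness_ft, porosity, sw, bo):
    """Volumetric original oil in place in stock-tank barrels.

    7758 bbl per acre-ft converts reservoir pore volume to barrels;
    (1 - sw) leaves the hydrocarbon-filled fraction, and Bo shrinks
    reservoir barrels to stock-tank barrels.
    """
    return 7758.0 * area_acres * thickness_ft * porosity * (1.0 - sw) / bo

# Illustrative numbers only
n = ooip_stb(area_acres=640, thickness_ft=50, porosity=0.22, sw=0.30, bo=1.2)
# n is roughly 31.9 MMSTB for these inputs
```

In a simulation study, initialization is accepted when the simulator's in-place volume agrees with this volumetric estimate within a small tolerance.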


1986
Vol 26 (1)
pp. 447
Author(s):
A.M. Younes
G.O. Morrell
A.B. Thompson

The West Kingfish Field in the Gippsland Basin, offshore Victoria, has been developed from the West Kingfish platform by Esso Australia Ltd (operator) and BHP Petroleum. The structure is an essentially separate, largely stratigraphic accumulation that forms the western flank of the Kingfish feature. A total of 19 development wells were drilled from the West Kingfish platform between October 1982 and May 1984. Information provided by these wells was used in a West Kingfish post-development geologic study and a reservoir simulation study. As a result of these studies, the estimated recoverable oil volume has been increased 55 per cent to 27.0 stock tank gigalitres (170 million stock tank barrels). The studies also formed the technical basis for obtaining new oil classification of the P-1.1 reservoir, the only sand body that has been found in the Gurnard Formation in the Kingfish area. The simulation study was accomplished with a very high level of efficiency owing to the extensive and effective use of computer graphics technology in model construction, history matching and predictions. Computer graphics technology has also been used to present the simulation study results in an understandable way to audiences with various backgrounds: a portable microcomputer stores hundreds of graphic displays, which are projected with a large-screen video projector. Presentations using this display technology have been well received and have been very successful in conveying the results of a complex reservoir simulation study and in identifying future field development opportunities.


Author(s):
Denis José Schiozer
Antonio Alberto de Souza dos Santos
Susana Margarida de Graça Santos
João Carlos von Hohendorff Filho

This work describes a new methodology for integrated decision analysis in the development and management of petroleum fields considering reservoir simulation, risk analysis, history matching, uncertainty reduction, representative models, and production strategy selection under uncertainty. Based on the concept of closed-loop reservoir management, we establish 12 steps to assist engineers in model updating and production optimization under uncertainty. The methodology is applied to UNISIM-I-D, a benchmark case based on the Namorado field in the Campos Basin, Brazil. The results show that the method is suitable for use in practical applications of complex reservoirs in different field stages (development and management). First, uncertainty is characterized in detail and then scenarios are generated using an efficient sampling technique, which reduces the number of evaluations and is suitable for use with numerical reservoir simulation. We then perform multi-objective history-matching procedures, integrating static data (geostatistical realizations generated using reservoir information) and dynamic data (well production and pressure) to reduce uncertainty and thus provide a set of matched models for production forecasts. We select a small set of Representative Models (RMs) for decision risk analysis, integrating reservoir, economic and other uncertainties to base decisions on risk-return techniques. We optimize the production strategies for (1) each individual RM to obtain different specialized solutions for field development and (2) all RMs simultaneously in a probabilistic procedure to obtain a robust strategy. While the second approach ensures the best performance under uncertainty, the first provides valuable insights for the expected value of information and flexibility analyses. Finally, we integrate reservoir and production systems to ensure realistic production forecasts. 
This methodology uses reservoir simulations, not proxy models, to reliably predict field performance. It is efficient, easy to use and compatible with real-time operations, even in complex cases where computational time is restrictive.
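The risk-return trade-off between strategies optimized per representative model and a robust strategy can be illustrated with a toy expected-value calculation; the NPVs, RM probabilities and the penalty weight below are invented for illustration, not taken from the UNISIM-I-D benchmark:

```python
import numpy as np

# Hypothetical NPVs (MM$) of three candidate strategies evaluated on four
# representative models (RMs); prob holds illustrative RM weights.
npv = np.array([
    [120.0,  95.0, 140.0,  60.0],  # strategy A (specialized, high upside)
    [110.0, 105.0, 115.0, 100.0],  # strategy B (balanced)
    [150.0,  70.0, 160.0,  40.0],  # strategy C (aggressive)
])
prob = np.array([0.3, 0.2, 0.3, 0.2])

emv = npv @ prob                    # expected monetary value per strategy
downside = npv.min(axis=1)          # worst case across RMs, a simple risk proxy
score = emv - 0.5 * (emv - downside)  # penalize spread between EMV and worst case
robust = int(np.argmax(score))      # index of the robust (risk-adjusted) choice
```

Here strategy C has the highest EMV but the worst downside, so the risk-adjusted score favours the balanced strategy, mirroring how a probabilistic procedure over all RMs yields a robust strategy while per-RM optima inform value-of-information analyses.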


2001
Vol 4 (06)
pp. 502-508
Author(s):
W.J. Milliken
A.S. Emanuel
A. Chakravarty

Summary The use of 3D streamline methodologies as an alternative to finite-difference (FD) simulation has become more common in the oil industry during the past few years. When the assumptions for its application are satisfied, results from streamline simulation compare very well with those from FD and typically require less than 10% of the central processing unit (CPU) resources. The speed of 3D streamline simulation (3DSM) lends itself not just to simulation, but also to other components of the reservoir simulation work process. This characteristic is particularly true of history matching. History matching is frequently the most tedious and time-consuming part of a reservoir simulation study. In this paper, we describe a novel method that uses 3D streamline paths to assist in history matching either 3D streamline or FD models. We designated this technique Assisted History Matching (AHM) to distinguish it from automated history-matching techniques. In this manuscript, we describe this technique and its application to three reservoir simulation studies. The example models range in size from 10⁵ to 10⁶ gridblocks and contain as many as several hundred wells. These applications have led to refinements of the AHM methodology, the incorporation of several new algorithms, and some insights into the processes typically employed in history matching. Introduction The advent of powerful geostatistical modeling techniques has led to the development of very large (>10⁷ cells) geocellular reservoir models. These models capture, in greater detail than before, the heterogeneity in porosity, permeability, and lithology that is critical to accurate simulation of reservoir performance. Three-dimensional streamline simulation has received considerable attention over the past several years because of its potential as an alternative to traditional FD methods for the simulation of these very large models. While 3DSM is a powerful simulation tool, it also has a number of other uses.
The speed of 3DSM is ideal for such applications as geologic/geostatistical model screening,1 reservoir scoping, and history matching (the focus of this paper). In this manuscript, we describe the technique and present three example reservoir applications that demonstrate its utility. The AHM Technique The models used in reservoir simulation today contain details of structure and heterogeneity that are orders of magnitude greater than those used just 10 years ago. However, there is still (and probably always will be) a large degree of uncertainty in the property descriptions. Geologic data are typically scattered and imprecise. Laboratory measurements of core properties, for example, often show an order of magnitude variation in permeability for any given porosity and several orders of magnitude variation over the data set. Upscaling replaces geologic detail with estimates of effective properties for aggregated data, placing another level of approximation on the resulting model. It is unlikely that any geologic model will match the observed reservoir performance perfectly, and history matching continues to be the technique by which the adjustments are made to the geologic model to achieve a match between model and historical reservoir performance. Ref. 2 provides a good presentation of traditional history-matching techniques. History matching by definition is an ill-posed problem: there are more unknowns than there are constraints to the problem. Indeed, any reservoir simulation engineer knows that there is always more than one way to history match a given reservoir model. It is the responsibility of the simulation engineer to make only those changes that are consistent with the reservoir geology. AHM was designed to facilitate these changes. As defined here, AHM is different from automated history matching and traditional history-matching techniques. 
Generically, traditional history matching involves five key steps: (1) simulation and identification of the differences between model predictions and observed performance; (2) determination of the gridblocks in the model that require change; (3) designation of the property(ies) that require change, and what those changes are; (4) implementation of the changes in the simulation input data; and (5) iteration on the above steps until a satisfactory match is achieved. The two principal uncertainties in this process lie in Steps 2 and 3, both of which are empirical and tedious and frequently involve ad hoc decisions with an unknown impact on the ultimate results. AHM is designed to simplify this process: it uses 3DSM to facilitate Steps 2 and 3 and thus minimize the ad hoc nature of the process. AHM uses an underlying 3DSM model to determine the streamline paths in the reservoir. These streamlines describe the principal flow paths in the model and represent the paths along which the fluids flow from source (injector or aquifer) to sink (producer). By tracing all the streamlines from a given well, the gridblocks through which fluids flow to that well are identified. This process, in essence, replaces Step 2 with one rooted in the fluid-flow calculation. Once these gridblocks are identified, changes can be made according to any (geologically reasonable) algorithm desired; here, this is carried out by a simple program that largely replaces Step 4. Fig. 1 illustrates the concept. The AHM process is based on the assumption that a history match is achieved by altering the geologic properties along the flow paths connecting a producing well to its flow source. The source may be a water injector, gas injector, aquifer, or gas cap; however, the drive mechanism must be a displacement along a definable path. Because the technique relies upon identification of the flow paths, it is assumed that the grid is sufficiently detailed to resolve the flow paths.
In very coarse grids, a single gridblock may intersect the flow to several wells, and satisfactory history matching in this case may not be possible with AHM. For streamline-simulation models, the calculation model provides the path directly. For FD simulation, a streamline model incorporating the same structure and geologic parameters as the simulation model is used to calculate the streamlines defining the flow paths.
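The core idea of identifying which gridblocks feed a well by tracing streamlines can be sketched as follows. This toy tracer uses simple Euler steps on a 2D cell-centred velocity field (production streamline simulators use Pollock's semi-analytical tracing instead), and the grid and velocities are invented:

```python
import numpy as np

def trace_streamline(vx, vy, start, n_steps=200, dt=0.1):
    """Trace one streamline through a 2D cell-centred velocity field and
    return the set of (i, j) gridblocks it crosses.

    vx, vy : (ny, nx) arrays of cell velocities
    start  : (x, y) launch point in grid coordinates
    """
    ny, nx = vx.shape
    x, y = start
    visited = set()
    for _ in range(n_steps):
        i, j = int(y), int(x)
        if not (0 <= i < ny and 0 <= j < nx):
            break                    # streamline left the model
        visited.add((i, j))
        x += dt * vx[i, j]           # forward Euler step along the velocity
        y += dt * vy[i, j]
    return visited

# Uniform left-to-right flow: every block in row 2 lies on the flow path
# from the left boundary (source) to a producer on the right edge.
vx = np.ones((5, 5))
vy = np.zeros((5, 5))
blocks = trace_streamline(vx, vy, start=(0.5, 2.5))
```

Tracing all streamlines that terminate at a producer and taking the union of their visited blocks gives exactly the set of cells whose properties AHM would consider modifying (Step 2 of the traditional workflow).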


PETRO
2018
Vol 5 (1)
Author(s):
Muhamad Taufan Azhari
Maman Djumantara

<div class="WordSection1"><p><strong>SARI</strong></p><p>Reservoir simulation is part of petroleum engineering, specifically reservoir engineering, in which computer models are used to predict the flow of fluids through porous media. A reservoir simulation process begins with several steps: data preparation, construction of the model and its grid, initialization, matching of production data with the simulation (<em>history matching</em>), and prediction of the production performance of the simulated model. Initialization is performed to match the OOIP, the total initial hydrocarbon filling the reservoir, with the initial OOIP of the static model.</p><p>To obtain an accurate production performance forecast, the development plan for Layer X of the TR Field is based on predicting reservoir performance over 30 years of production (until January 2044). Four development scenarios are planned in this study: Scenario 1 (<em>Base Case</em>), Scenario 2 (<em>Base Case</em> + reopening of inactive wells), Scenario 3 (Scenario 2 + infill production wells), and Scenario 4 (Scenario 2 + <em>5 spot</em>-pattern infill injection wells).</p><p><strong>ABSTRACT</strong></p><p>Reservoir simulation is an area of reservoir engineering in which computer models are used to predict the flow of fluids through porous media. A reservoir simulation study proceeds through several steps: data preparation, model and grid construction, initialization, history matching and prediction. Initialization is performed to match the OOIP, the total initial hydrocarbon filling the reservoir, against the control volume estimated with the volumetric method.</p><p>To obtain reliable forecasts, the development scenarios of TR Field Layer X are predicted over 30 years (from 2014 until January 2044). Four development scenarios are considered: Scenario 1 (Base Case), Scenario 2 (Base Case + reopening of inactive wells), Scenario 3 (Scenario 2 + infill production wells), and Scenario 4 (Scenario 2 + 5-spot pattern of infill injection wells).</p><p>Keywords: reservoir simulation, reservoir simulator, history matching</p></div>


2021
Author(s):
E. Noviyanto

This paper presents the application of a probabilistic history matching and prediction workflow to a real field case in Indonesia. The main objective of this approach is to capture subsurface uncertainty for better reservoir understanding, so as to manage risk and make better decisions for further field development. The field is very complex, with an updated geological concept of multi-level reservoirs; it has more than a hundred wells and has been producing for 70 years. An existing multi-realization static reservoir model was built to determine the range of probabilistic in-place volumes. Variations in fluid contacts, lithology/facies distribution, porosity distribution and net-to-gross maps are the main differences among these cases. The structural model and reservoir properties from three pre-defined cases were imported into the integrated modelling software, excluding the water saturation model. The static-dynamic model building process was then recorded in a common workflow for integration and automated rebuilding of model variants. For effective probabilistic model initialization, an automatic capillary pressure adjustment was chosen. Subsequently, experimental design and optimization were run to manage the probabilistic history matching effectively. Parameter screening and ranking tools were also used to update the uncertainty design for the next iteration. The number of history-match variants was managed by applying acceptable-match criteria and clustering. Twenty equiprobable history-matched variants were selected to be carried over to the prediction phase, and three selected remaining-oil-saturation distribution maps were assessed for waterflood pattern design. Having reduced the parameter uncertainty through history matching, base case and waterflood scenario predictions were run for the twenty variants.
Incremental cumulative oil is in the range of 14.81 MMSTB to 16.96 MMSTB, equivalent to an incremental recovery factor of 5% to 5.4%. This range reflects the static and dynamic input parameter uncertainty examined in this study. The high side of the recovery factor from the waterflood scenario is 21.6%, which indicates considerable remaining unswept oil. These results were used to recommend future work activities to recover more hydrocarbon from this 70-year-old oil field. This paper demonstrates the first application of probabilistic dynamic modelling in the company, including a first step toward integrating static and dynamic variable uncertainty for this field. The workflow will be used as a guideline for other field applications in the future.


2021
Author(s):
Ryan Santoso
Xupeng He
Marwa Alsinan
Ruben Figueroa Hernandez
Hyung Kwak
...

Abstract History matching is a critical step within the reservoir management process to synchronize the simulation model with the production data. The history-matched model can be used for planning optimum field development and performing optimization and uncertainty quantification. We present a novel history matching workflow based on a Bayesian framework that accommodates subsurface uncertainties. Our workflow involves three different model resolutions within the Bayesian framework: (1) a coarse low-fidelity model to update the prior range, (2) a fine low-fidelity model to represent the high-fidelity model, and (3) a high-fidelity model to reconstruct the real response. The low-fidelity model is constructed from a multivariate polynomial function, while the high-fidelity model is the reservoir simulation model itself. We first develop a coarse low-fidelity model using a two-level Design of Experiment (DoE), which aims to provide a better prior. We then use Latin Hypercube Sampling (LHS) to construct the fine low-fidelity model deployed in the Bayesian runs, where we use the Metropolis-Hastings algorithm. Finally, the posterior is fed into the high-fidelity model to evaluate the matching quality. This work demonstrates the importance of including uncertainties in history matching. The Bayesian framework allows robust uncertainty quantification within reservoir history matching. Under a uniform prior, Bayesian convergence is very sensitive to the parameter ranges: when the solution is far from the mean of the parameter ranges, the inference introduces bias and deviates from the observed data. Our results show that updating the prior from the coarse low-fidelity model accelerates Bayesian convergence and improves the match. Bayesian inference requires a huge number of runs to produce an accurate posterior, and running the high-fidelity model multiple times is expensive.
Our workflow tackles this problem by deploying a fine low-fidelity model to represent the high-fidelity model in the main runs. This fine low-fidelity model is fast to run while honoring the physics and accuracy of the high-fidelity model. We also use ANOVA sensitivity analysis to measure the importance of each parameter; the ranking highlights the significant parameters that contribute most to the matching accuracy. We demonstrate our workflow on a geothermal reservoir with static and operational uncertainties. It produces accurate matches of the thermal recovery factor and produced-enthalpy rate, with physically consistent posteriors. We present a novel workflow to account for uncertainty in reservoir history matching involving multi-resolution interaction. The proposed method is generic and can be readily applied within existing history-matching workflows in reservoir simulation.
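A skeletal version of the Metropolis-Hastings sampling on a polynomial low-fidelity proxy might look like the following. The proxy coefficients, prior bounds and noise level are invented, and the real workflow fits multivariate polynomials via DoE/LHS rather than hand-writing them:

```python
import numpy as np

rng = np.random.default_rng(0)

# Low-fidelity proxy: a polynomial response surface standing in for the
# reservoir simulator (coefficients are illustrative only).
def proxy(theta):
    k, phi = theta
    return 2.0 * k + 0.5 * k * phi + phi ** 2

obs = proxy(np.array([1.5, 0.8]))   # synthetic "observed" response
sigma = 0.1                          # assumed observation-noise level

def log_post(theta):
    # Uniform prior on [0, 3]^2, Gaussian likelihood on the proxy misfit
    if np.any(theta < 0) or np.any(theta > 3):
        return -np.inf
    return -0.5 * ((proxy(theta) - obs) / sigma) ** 2

# Metropolis-Hastings with a Gaussian random-walk proposal
theta = np.array([1.0, 1.0])
lp = log_post(theta)
samples = []
for _ in range(5000):
    prop = theta + rng.normal(scale=0.1, size=2)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:   # accept/reject step
        theta, lp = prop, lp_prop
    samples.append(theta.copy())
samples = np.array(samples)
```

Because each proposal only costs a polynomial evaluation, thousands of MCMC steps are affordable; only the resulting posterior samples need to be pushed through the expensive high-fidelity simulator for validation.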


2021
Author(s):
Nis Ilyani Mohmad
Danu Ismadi
Nor Hajjar Salleh
Amirul Nur Romle
Syarifah Puteh Syed Abd Rahim
...

Abstract History matching is one of the paramount steps in reservoir model validation, used to describe, analyze and mimic the overall behavior of reservoir performance. Performing history matching on highly faulted, multi-layered reservoirs is always challenging, especially when the wells are completed across multiple zones, either single-selective or with dual strings. The complexity is compounded by uncertainties in production allocation, well history and downhole equipment integrity over time. It is common practice in deterministic history matching with reservoir numerical simulation to modify both static and dynamic model parameters within the subsurface uncertainty window. However, for multi-layered reservoirs completed with dual strings, a parameter that most often gets overlooked is leakage between the completion strings, which tremendously impacts the history match. The objective of this paper is to introduce a dual-string leak diagnostic methodology drawing on several disciplines, and to demonstrate the impact of the dual-string leak phenomenon on history matching. The methodology includes production logging tool evaluation, well production performance and recovery factor analysis. Possible factors that give rise to string leaks, including material corrosion from high CO2 and sand production, are also discussed. We demonstrate how the leak phenomenon can be mimicked in the reservoir numerical model, and discuss the risks to future infill well identification if it is not incorporated. The dual-string leak diagnosis and its application in numerical simulation are illustrated in a case study of Field "D", a multi-layered sandstone reservoir in Malaysia with almost three decades of production.
This proven leak identification and reservoir model history matching methodology has been replicated for all the fault blocks across the field. It potentially unlocks more than 100 MMSTB of additional oil recovery by drilling more oil producers and water injectors in future drilling campaigns.


Energies
2021
Vol 14 (4)
pp. 1055
Author(s):
Qian Sun
William Ampomah
Junyu You
Martha Cather
Robert Balch

Machine-learning technologies have exhibited robust competence in solving many petroleum engineering problems. Their accurate predictions and fast computational speed enable large volumes of time-consuming engineering processes such as history matching and field development optimization. The Southwest Regional Partnership on Carbon Sequestration (SWP) project requires rigorous history-matching and multi-objective optimization processes, which fit the strengths of machine-learning approaches. Although machine-learning proxy models are trained and validated before being deployed on practical problems, their error margin inevitably introduces uncertainty into the results. In this paper, a hybrid numerical/machine-learning workflow for solving various optimization problems is presented. By coupling expert machine-learning proxies with a global optimizer, the workflow solves the history-matching and CO2 water-alternating-gas (WAG) design problems with low computational overhead. The history-matching work considers the heterogeneity of multiphase relative permeability characteristics, and the CO2-WAG injection design takes multiple techno-economic objective functions into account. This work trained an expert response surface, a support vector machine, and a multi-layer neural network as proxy models to effectively learn the high-dimensional nonlinear data structure. The proposed workflow revisits the high-fidelity numerical simulator for validation purposes. The experience gained from this work provides valuable guiding insights for similar CO2 enhanced oil recovery (EOR) projects.
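The proxy-plus-global-optimizer coupling can be sketched with a toy one-parameter design problem. The "simulator" here is an invented analytic function (not a real reservoir simulator), the proxy is a simple polynomial response surface, and SciPy's differential evolution stands in for the global optimizer:

```python
import numpy as np
from scipy.optimize import differential_evolution

# Stand-in "high-fidelity simulator": NPV as a function of a single WAG
# design variable x (an invented analytic function for illustration).
def simulator(x):
    return -(x - 1.2) ** 2 + 5.0

# Train a quadratic response-surface proxy on a handful of "simulator" runs
X = np.linspace(0.0, 3.0, 7)
y = simulator(X)
proxy = np.poly1d(np.polyfit(X, y, deg=2))

# The global optimizer searches the cheap proxy, not the expensive simulator
res = differential_evolution(lambda v: -proxy(v[0]), bounds=[(0.0, 3.0)], seed=1)
best = res.x[0]

# Revisit the high-fidelity "simulator" to validate the proxy optimum,
# mirroring the validation step the workflow recommends
validated_npv = simulator(best)
```

In the real workflow the proxy would be a trained response surface, SVM or neural network over many WAG parameters, but the loop is the same: cheap proxy evaluations inside the optimizer, one expensive simulation to confirm the answer.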


Energies
2021
Vol 14 (11)
pp. 3137
Author(s):
Amine Tadjer
Reider B. Bratvold
Remus G. Hanea

Production forecasting is the basis for decision making in the oil and gas industry and can be quite challenging, especially given the complexity of geological modeling of the subsurface. To help solve this problem, assisted history matching built on ensemble-based analysis, such as the ensemble smoother and the ensemble Kalman filter, is useful for estimating models that preserve geological realism and have predictive capability. These methods tend, however, to be computationally demanding, as they require a large ensemble size for stable convergence. In this paper, we propose a novel method of uncertainty quantification and reservoir model calibration with much-reduced computation time. The approach is based on a sequential combination of nonlinear dimensionality-reduction techniques (t-distributed stochastic neighbor embedding or the Gaussian process latent variable model) and K-means clustering, together with the data assimilation method ensemble smoother with multiple data assimilation. The cluster analysis with t-distributed stochastic neighbor embedding or the Gaussian process latent variable model is used to reduce the number of initial geostatistical realizations and select a set of optimal reservoir models whose production performance is similar to the reference model. We then apply the ensemble smoother with multiple data assimilation to provide reliable assimilation results. Experimental results based on the Brugge field case data verify the efficiency of the proposed approach.
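A single assimilation step of ES-MDA (ensemble smoother with multiple data assimilation) can be sketched in NumPy. The toy linear forward model, ensemble size and inflation schedule below are illustrative, not the Brugge setup:

```python
import numpy as np

rng = np.random.default_rng(0)

def esmda_update(M, D, d_obs, Cd, alpha):
    """One ES-MDA assimilation step.

    M : (n_param, n_ens) parameter ensemble
    D : (n_data, n_ens) predicted data for each member
    Cd is the observation-noise covariance, inflated by alpha
    (the 1/alpha values over all steps must sum to 1).
    """
    ne = M.shape[1]
    Mc = M - M.mean(axis=1, keepdims=True)
    Dc = D - D.mean(axis=1, keepdims=True)
    Cmd = Mc @ Dc.T / (ne - 1)          # cross-covariance params vs data
    Cdd = Dc @ Dc.T / (ne - 1)          # data auto-covariance
    # Perturb observations with inflated noise, one draw per member
    noise = rng.multivariate_normal(np.zeros(len(d_obs)), alpha * Cd, size=ne).T
    K = Cmd @ np.linalg.inv(Cdd + alpha * Cd)
    return M + K @ (d_obs[:, None] + noise - D)

# Toy problem: forward model d = 2*m, truth m = 1.5 so d_obs = 3.0
forward = lambda M: 2.0 * M
M = rng.normal(1.0, 1.0, size=(1, 200))   # prior ensemble
d_obs = np.array([3.0])
Cd = np.array([[0.01]])
for alpha in [4.0, 4.0, 4.0, 4.0]:        # 4 steps, sum(1/alpha) = 1
    M = esmda_update(M, forward(M), d_obs, Cd, alpha)
```

After the four inflated updates the ensemble mean is pulled from the prior mean of 1.0 toward the truth of 1.5, which is the behaviour the multiple-data-assimilation schedule is designed to achieve on nonlinear problems as well.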

