Modelling and Analysis on High-Fidelity Fine-Scale Reservoir Simulation in Mature Waterflooding Reservoir

Author(s):  
Wu Shuhong ◽  
Dong Jiangyan ◽  
Li Hua ◽  
Li Qiaoyun ◽  
Wang Baohua ◽  
...  
2021 ◽  
Author(s):  
Mokhles Mezghani ◽  
Mustafa AlIbrahim ◽  
Majdi Baddourah

Abstract Reservoir simulation is a key tool for predicting the dynamic behavior of a reservoir and optimizing its development. Fine-scale, CPU-demanding simulation grids are necessary to improve the accuracy of the simulation results. We propose a hybrid modeling approach that minimizes the weight of the full-physics model by dynamically building and updating an artificial intelligence (AI) based model. The AI model can be used to quickly mimic the full-physics (FP) model. The methodology we propose starts by running the FP model; an associated AI model is then systematically updated using the newly performed FP runs. Once the mismatch between the two models falls below a predefined cutoff, the FP model is switched off and only the AI model is used. The FP model is switched on at the end of the exercise either to confirm the AI model decision and stop the study, or to reject this decision (high mismatch between the FP and AI models) and upgrade the AI model. The proposed workflow was applied to a synthetic reservoir model, where the objective is to match the average reservoir pressure. For this study, to better account for reservoir heterogeneity, a fine-scale simulation grid (approximately 50 million cells) is necessary to improve the accuracy of the reservoir simulation results. Reservoir simulation using the FP model and 1,024 CPUs requires approximately 14 hours. During this history-matching exercise, six parameters were selected to be part of the optimization loop. Therefore, a Latin Hypercube Sampling (LHS) using seven FP runs is used to initiate the hybrid approach and build the first AI model. During history matching, only the AI model is used. At the convergence of the optimization loop, a final FP run is performed either to confirm the convergence of the FP model or to re-iterate the same approach, starting from an LHS around the converged solution. The subsequent AI model is updated using all the FP simulations performed in the study.
This approach achieves a history match of very acceptable quality with far fewer computational resources and much less CPU time. CPU-intensive, multimillion-cell simulation models are commonly used in reservoir development, and completing a reservoir study in an acceptable timeframe is a real challenge in such situations. New concepts and techniques are needed to complete such studies successfully. The hybrid approach we propose shows very promising results in handling this challenge.
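The surrogate-assisted loop described in this abstract can be sketched as follows. This is a minimal illustration, not the authors' code: the `full_physics` response, the quadratic least-squares proxy standing in for the AI model, and all parameter values are our own assumptions (two parameters instead of six, for brevity).

```python
import numpy as np

rng = np.random.default_rng(0)

def full_physics(x):
    # Hypothetical stand-in for the expensive 50-million-cell FP simulator:
    # returns an "average reservoir pressure" for a 2-parameter model.
    return 3000.0 + 40.0 * x[0] - 25.0 * x[1] + 5.0 * x[0] * x[1]

def latin_hypercube(n, d):
    # One sample per stratum in each dimension, independently permuted per column.
    u = (np.arange(n)[:, None] + rng.random((n, d))) / n
    for j in range(d):
        u[:, j] = u[rng.permutation(n), j]
    return u

def fit_surrogate(X, y):
    # Quadratic least-squares proxy standing in for the paper's AI model.
    def basis(x):
        x = np.atleast_2d(x)
        return np.column_stack([np.ones(len(x)), x, x**2, x[:, :1] * x[:, 1:]])
    coef, *_ = np.linalg.lstsq(basis(X), y, rcond=None)
    return lambda x: basis(x) @ coef

# Initialize with an LHS design of FP runs (the paper uses 7 runs for 6
# parameters; here 2 parameters for brevity).
X = latin_hypercube(7, 2)
y = np.array([full_physics(x) for x in X])
proxy = fit_surrogate(X, y)

target = 3010.0   # observed average reservoir pressure (made up)
cutoff = 1.0      # acceptable FP-vs-proxy mismatch

for _ in range(5):
    # History-match on the cheap proxy only.
    cand = latin_hypercube(200, 2)
    best = cand[np.argmin(np.abs(proxy(cand) - target))]
    # Switch the FP model back on to confirm or reject the proxy's decision.
    fp = full_physics(best)
    if abs(fp - proxy(best)[0]) < cutoff:
        break   # proxy confirmed; stop the study
    # High mismatch: add the new FP run and upgrade the proxy.
    X, y = np.vstack([X, best]), np.append(y, fp)
    proxy = fit_surrogate(X, y)
```

Because the assumed response lies in the span of the quadratic basis, the first confirmation run already agrees with the proxy here; a harder response would trigger the upgrade branch and re-fit.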


2016 ◽  
Vol 18 (2-3) ◽  
pp. 93-102 ◽  
Author(s):  
Zheng Li ◽  
Shuhong Wu ◽  
Chen-Song Zhang ◽  
Jinchao Xu ◽  
Chunsheng Feng ◽  
...  

SPE Journal ◽  
2006 ◽  
Vol 11 (03) ◽  
pp. 317-327 ◽  
Author(s):  
Martin Mlacnik ◽  
Louis J. Durlofsky ◽  
Zoltan E. Heinemann

Summary A technique for the sequential generation of perpendicular-bisectional (PEBI) grids adapted to flow information is presented and applied. The procedure includes a fine-scale flow solution, the generation of an initial streamline-isopotential grid, grid optimization, and upscaling. The grid optimization is accomplished through application of a hybrid procedure with gradient and Laplacian smoothing steps, while the upscaling is based on a global-local procedure that makes use of the global solution used in the grid-determination step. The overall procedure is successfully applied to a complex channelized reservoir model involving changing well conditions. The gridding and upscaling procedures presented here may also be suitable for use with other types of structured or unstructured grid systems. Introduction Modern geological and geostatistical tools provide highly detailed descriptions of the spatial variation of reservoir properties, resulting in fine-grid models consisting of 10^7 to 10^8 gridblocks. As a consequence of this high level of detail, these models cannot be used directly in numerical reservoir simulators, but need to be coarsened significantly. Coarsening requires the averaging of rock parameters from the fine scale to the coarse scale. This process is referred to as upscaling. For simulation of flow in porous media, the upscaling of permeability is of particular interest. A large body of literature exists on this topic; for a comprehensive review of existing techniques, see Durlofsky (2005). To preserve as much of the geological information of the fine grid as possible, the grid coarsening should not be performed uniformly, but with more refinement in areas that are expected to have large impact on the flow, including structural features, such as faults. Although grid-generation techniques based on purely static, nonflow-based considerations have been shown to produce reasonable results (Garcia et al. 
1992), the application of flow-based grids is often preferable. Flow-based grids require the solution of some type of fine-scale problem. They are then constructed by exploiting the information obtained from streamlines (and possibly isopotentials) either directly or indirectly. Depending on the type of grid used, points will be defined as cell vertices or nodes, resulting in either a corner-point geometry or point-distributed grid. Several gridding techniques for reservoir simulation have been introduced along these lines, as we now discuss.
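As a sketch of the smoothing ingredient of the grid-optimization step named above, damped Laplacian smoothing relaxes each free node toward the centroid of its neighbours. The function name, damping weight, and the toy node set below are ours, not the paper's:

```python
import numpy as np

def laplacian_smooth(points, neighbors, fixed, iters=50, w=0.5):
    # Damped Laplacian smoothing: each free node moves a fraction w of the way
    # toward the centroid of its neighbours; boundary (fixed) nodes stay put.
    pts = np.asarray(points, float).copy()
    for _ in range(iters):
        new = pts.copy()
        for i, nbrs in neighbors.items():
            if i not in fixed:
                new[i] = (1.0 - w) * pts[i] + w * pts[nbrs].mean(axis=0)
        pts = new
    return pts

# A distorted interior node of a unit square relaxes to the centroid (0.5, 0.5).
pts = [[0, 0], [1, 0], [1, 1], [0, 1], [0.9, 0.1]]
out = laplacian_smooth(pts, {4: [0, 1, 2, 3]}, fixed={0, 1, 2, 3})
print(out[4])
```

In the paper this smoothing is alternated with gradient-based steps; here only the Laplacian half is shown.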


SPE Journal ◽  
2020 ◽  
Vol 25 (04) ◽  
pp. 1981-1999 ◽  
Author(s):  
Victor S. Rios ◽  
Luiz O. S. Santos ◽  
Denis J. Schiozer

Summary Field-scale representation of highly heterogeneous reservoirs remains a challenge in numerical reservoir simulation. In such reservoirs, detailed geological models are important to properly represent key heterogeneities. However, high computational costs and long simulation run times make these detailed models infeasible to use in dynamic evaluations. Therefore, the scaling up of geological models is a key step in reservoir-engineering studies to reduce computational time. Scaling up must be carefully performed to maintain integrity; both truncation errors and the smoothing of subgrid heterogeneities can cause significant errors. This work evaluates the latter—the effect of averaging small-scale heterogeneities in the upscaling process—and proposes a new upscaling technique to overcome the associated limitations. The technique relies on splitting the porous medium into two levels, guided by flow- and storage-capacity analysis and the Lorenz coefficient (LC), both calculated with static properties (permeability and porosity) from a fine-scale reference model. This technique allows the adaptation of a fine, highly heterogeneous geological model to a coarse-scale simulation model in a dual-porosity/dual-permeability (DP/DP) approach and represents the main reservoir heterogeneities and possible preferential paths. The new upscaling technique is applied to different reservoir-simulation models with water injection and immiscible gas injection as recovery methods. In deterministic and probabilistic studies, we show that the resulting coarse-scale dual-permeability models are more accurate and can better reproduce the fine-scale results at different upscaling ratios (URs), without using any simulation results from the reference fine-scale simulation models, as some of the current alternative upscaling methods do.
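The static screening quantities named in this abstract can be computed directly from permeability and porosity. A minimal sketch (our own implementation, with unit layer thicknesses by default) of the flow-capacity/storage-capacity curve and the Lorenz coefficient:

```python
import numpy as np

def lorenz_coefficient(k, phi, h=None):
    # Flow-capacity (F) vs. storage-capacity (C) curve from static properties.
    k, phi = np.asarray(k, float), np.asarray(phi, float)
    h = np.ones_like(k) if h is None else np.asarray(h, float)
    order = np.argsort(k / phi)[::-1]          # most conductive intervals first
    F = np.cumsum((k * h)[order])
    C = np.cumsum((phi * h)[order])
    F = np.insert(F / F[-1], 0, 0.0)           # cumulative flow capacity
    C = np.insert(C / C[-1], 0, 0.0)           # cumulative storage capacity
    # LC = twice the area between the F(C) curve and the 45-degree line.
    area = np.sum((F[1:] + F[:-1]) * np.diff(C)) / 2.0
    return 2.0 * (area - 0.5)                  # 0 = homogeneous, -> 1 = extreme

# A homogeneous stack gives LC = 0; strong permeability contrast pushes LC toward 1.
print(lorenz_coefficient([10.0, 10.0, 10.0], [0.2, 0.2, 0.2]))
print(lorenz_coefficient([1000.0, 1.0, 1.0], [0.2, 0.2, 0.2]))
```

How the LC then guides the two-level split of the porous medium is the paper's contribution and is not reproduced here.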


SPE Journal ◽  
2008 ◽  
Vol 13 (01) ◽  
pp. 68-76 ◽  
Author(s):  
Pinggang Zhang ◽  
Gillian E. Pickup ◽  
Michael A. Christie

Summary Geologists often generate highly heterogeneous descriptions of reservoirs, containing complex structures that are likely to give rise to very tortuous flow paths. However, these models contain too many grid cells for multiphase flow simulation, and the number of cells must be reduced by upscaling for reservoir simulation. Conventional upscaling methods often have difficulty representing tortuous flow paths, mainly because of inappropriate assumptions concerning the boundary conditions. An accurate and practical upscaling method is therefore required to preserve the flow features caused by a highly heterogeneous fine-scale geological description. In this paper, the problems encountered in routinely used upscaling approaches are outlined, and a more accurate and practical way of performing upscaling is proposed. The new upscaling method, Well Drive Upscaling (WDU), employs the wells and the actual reservoir boundary conditions (e.g., faults and physical boundaries of the geological model). The main advantage of this method is that the dominant flow paths can be preserved, and thus the geological knowledge can be assimilated appropriately. The new method was first applied to a synthetic model with a tortuous channel and is shown to be a significant improvement over the traditional approach. A sensitivity study on the scale-up factor using a benchmark model shows the advantage of the method at various scale-up factors. The method was then applied to a model of a field in the central North Sea, which involves three-phase flow. In the cases studied, the WDU method produced results comparable to the dynamic Pore Volume Weighted approach, which involves running the fine-grid simulation and computing appropriate relative permeabilities and interblock transmissibilities. The new method makes the upscaling process practical, and our tests show it to be more accurate than traditional methods. 
Introduction The heterogeneity observed in a field is generally high, and the geological structures therein can be complex. From a geological point of view, it would be ideal to represent each facies boundary, both vertically and horizontally, by a gridblock boundary (Mallet 1997; Deutsch and Tran 2002). Also, if distinct layering exists within a genetic unit, a further split into subunits is desirable. In practice, reservoir models are usually created at the scale of meters or less vertically and 100 meters or less areally [and each block itself may have involved small-scale upscaling (Pickup et al. 2005)]. In many cases, detailed reservoir modeling for a highly heterogeneous reservoir may result in a large number of grid cells (e.g., 10^6 grid cells or more). This large number of grid cells prohibits direct simulation of the reservoir, especially for a very heterogeneous reservoir model. This is because, apart from the limitation of computational power, the high level of heterogeneity often makes it difficult to obtain a converged solution. The problem becomes more severe when simulations involve three-phase flow. To perform reservoir simulation on a highly heterogeneous geological model within a reasonable time frame, we have to apply appropriate upscaling techniques to reduce the number of grid cells, so as to speed up the reservoir simulation and thus the field-development-planning process. Although a number of upscaling methods have been developed in the past few decades (Pickup et al. 2005; Christie 1996, 2001), they are often not satisfactory and have been discussed in a number of critical reviews (Barker and Thibeau 1997; Farmer 2002). The main conflict in the application of the current upscaling techniques lies in balancing the accuracy and practicality of the methods. There are two main problems that cause this conflict. 
The first is the use of inappropriate boundary conditions in single-phase upscaling, which is likely to reduce accuracy; the second is the impracticality of dynamic two-phase upscaling, which should (in theory) be more accurate. Details of these methods have been discussed in a number of reviews on upscaling (Christie 1996, 2001; Barker and Thibeau 1997; Farmer 2002; Renard and de Marsily 1997), so a complete review of upscaling methods will not be presented here. However, we outline one of the commonly used methods: the pressure-solution method for upscaling single-phase flow. In this method, a single-phase pressure solve is carried out in each coarse cell in turn, and Darcy's law is used to calculate the effective permeability tensor (Christie 1996). In order to solve the pressure equation, boundary conditions must be applied to each cell. (This is referred to as the local upscaling method.) A typical example is the no-flow, or constant-pressure, boundary condition, where the pressure is fixed at either end of the region of a coarse block and no flow is allowed through the sides. Other boundary conditions include linear-pressure and periodic boundary conditions (Farmer 2002). Such boundary conditions, however, may differ significantly from the actual boundary conditions within a heterogeneous fine-scale model (Chen et al. 2003; Zhang 2006). A highly heterogeneous reservoir model often produces tortuous flow paths, and it is difficult to generalize a flow pattern on the boundaries of one coarse block and apply it to all the coarse blocks in a reservoir model. The flow paths for a coarse model may be completely different from the original fine-scale geological model response when inappropriate boundary conditions are applied (i.e., the effect of geological structure may be lost). 
A multiphase flow simulation from such a coarse model will not honor the small-scale geological structure either, even if we ignore the error caused by multiphase flow effects in the upscaling process.
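The local pressure-solve method outlined above can be illustrated on a single coarse block. The sketch below is our own finite-difference implementation (unit cell sizes and viscosity): pressure is fixed on the left and right faces, the top and bottom are sealed, single-phase flow is solved, and an effective permeability is backed out from Darcy's law. For layers parallel to the flow it recovers the arithmetic average, as expected.

```python
import numpy as np

def effective_kx(k, dp=1.0):
    # Local pressure-solve upscaling for one coarse block (unit cell sizes and
    # viscosity): pressure fixed on the left/right faces, no flow through the
    # top/bottom sides, effective permeability backed out from Darcy's law.
    ny, nx = k.shape
    idx = lambda i, j: i * nx + j
    A = np.zeros((ny * nx, ny * nx))
    b = np.zeros(ny * nx)
    harm = lambda k1, k2: 2.0 * k1 * k2 / (k1 + k2)   # interface transmissibility
    for i in range(ny):
        for j in range(nx):
            m, diag = idx(i, j), 0.0
            for di, dj in ((0, 1), (0, -1), (1, 0), (-1, 0)):
                ii, jj = i + di, j + dj
                if 0 <= ii < ny and 0 <= jj < nx:     # interior connections
                    t = harm(k[i, j], k[ii, jj])
                    A[m, idx(ii, jj)] -= t
                    diag += t
            if j == 0:                 # half-cell link to the left face, p = dp
                diag += 2.0 * k[i, j]
                b[m] += 2.0 * k[i, j] * dp
            if j == nx - 1:            # half-cell link to the right face, p = 0
                diag += 2.0 * k[i, j]
            A[m, m] = diag
    p = np.linalg.solve(A, b).reshape(ny, nx)
    q = np.sum(2.0 * k[:, -1] * p[:, -1])             # outflow through right face
    return q * nx / (ny * dp)                         # Darcy: k_eff = q L / (A dp)

# Two layers parallel to the flow: the pressure solve recovers the
# arithmetic average (100 + 1) / 2 = 50.5.
k = np.array([[100.0] * 4, [1.0] * 4])
print(effective_kx(k))
```

For a tortuous channel, as the paper argues, these sealed-side boundary conditions can misrepresent the true fine-scale fluxes, which is exactly the limitation WDU addresses.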


SPE Journal ◽  
2014 ◽  
Vol 19 (05) ◽  
pp. 832-844 ◽  
Author(s):  
Faruk O. Alpak ◽  
Frans van der Vlugt

Summary A set of algorithms, called the shale-drape function (SDF), has been developed that incorporates bounding shales (shale drapes) for channels, channel belts (also known as meander belts), lobes, and lobe complexes in 3D geologic models used for reservoir simulation. Shale drapes can have a significant impact on the recovery efficiency of clastic reservoirs. Therefore, they need to be modeled when present in significant quantities (in general, more than 50 to 70% in terms of areal coverage). The function incorporates shale drapes into a geologic model with an iterative process that creates shale layers over the entire surface of reservoir objects and then places ellipsoid-shaped holes into the shale surfaces until a desired areal coverage is reached. The recommended workflow grids the simulation model along the boundaries of stratigraphic objects, thereby ensuring that the shales can be realistically represented in the fine-scale geomodel and preserved in the post-upscaling simulation model.
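The iterative hole-placement idea is easy to picture on a 2D coverage grid. Below is a toy sketch only: the grid size, hole radii, and names are our assumptions, and the SDF itself operates on the 3D surfaces of geologic objects rather than a flat mask.

```python
import numpy as np

rng = np.random.default_rng(2)

def punch_holes(shape, target_coverage, hole_ax=6.0, hole_ay=3.0):
    # Start from a complete shale layer and punch ellipse-shaped holes at
    # random centers until areal coverage drops to the target fraction.
    ny, nx = shape
    shale = np.ones(shape, bool)
    yy, xx = np.mgrid[0:ny, 0:nx]
    while shale.mean() > target_coverage:
        cy, cx = rng.integers(0, ny), rng.integers(0, nx)
        hole = ((xx - cx) / hole_ax) ** 2 + ((yy - cy) / hole_ay) ** 2 <= 1.0
        shale &= ~hole
    return shale

# Punch holes until the shale covers at most 60% of the area.
drape = punch_holes((100, 100), 0.6)
print(drape.mean())   # areal coverage at or just below the target
```

Because each hole removes a small fraction of the area, the final coverage lands just below the target rather than overshooting it badly, mirroring the "until a desired areal coverage is reached" stopping rule.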


1999 ◽  
Vol 2 (04) ◽  
pp. 368-376 ◽  
Author(s):  
H.A. Tchelepi ◽  
L.J. Durlofsky ◽  
W.H. Chen ◽  
A. Bernath ◽  
M.C.H. Chien

Summary Scale up and parallel reservoir simulation represent two distinct approaches for the simulation of highly detailed geological or geostatistical reservoir models. In this paper, we discuss the complementary use of these two approaches for practical, large scale reservoir simulation problems. We first review our recently developed approaches for upscaling and parallel reservoir simulation. Then, several practical large scale modeling problems, which include simulations of multiple realizations of a waterflood pattern element, a four well sector model, and a large, 130 well segment model, are addressed. It is shown that, for the pattern waterflood model, significantly coarsened models provide reliable results for many aspects of the reservoir flow. However, the simulation of at least some of the fine scale geostatistical realizations, accomplished using our parallel reservoir simulation technology, is useful in determining the appropriate level of scale up. For models with a large number of wells, the upscaled models can lose accuracy as the grid is coarsened. In these cases, although field-wide performance can still be predicted with reasonable accuracy, parallel reservoir simulation is required to maintain sufficiently refined models capable of accurate flow results on a well by well basis. Finally, some issues concerning the use of highly detailed models in practical simulation studies are discussed. Introduction Reservoir description and flow modeling capabilities continue to benefit from advances in computing hardware and software technologies. However, the level of detail typically included in reservoir characterizations continues to exceed the capabilities of traditional reservoir flow simulators by a significant margin. This resolution gap, due to the much larger computational requirements of flow simulation, has driven the development of two specific technologies: scale up and parallel reservoir simulation. 
These two technologies represent very distinct approaches—scale up methods attempt to coarsen the simulation model to fit the hardware, while parallel reservoir simulation technology attempts to extend computing capabilities to accommodate the detailed model. The purpose of this paper is to present and discuss ways in which to utilize these two technologies in a complementary fashion for the solution of practical large scale reservoir simulation problems. Toward this end, we first discuss our previously developed capabilities for scale up1,2 and parallel reservoir simulation.3 Next, the two technologies are applied to several reservoirs represented via highly detailed (i.e., on the order of 1 million cells) geostatistical models. Various production scenarios are considered. It will be shown how the direct simulation of the highly detailed models (using parallel reservoir simulation technology on an IBM SP) can be used to assess and guide the scale up procedure and to establish the appropriate level of coarsening allowable. We will show that, once this level is established, upscaled models can be used to evaluate multiple geostatistical realizations. We additionally apply the detailed simulation results to develop general guidelines for the degree of scale up allowable for various types of simulation models; e.g., pattern, sector and large segment models. Our general conclusion is that our scale up technology, as currently used, is quite reliable when sufficient refinement is maintained in the coarsened model. We show that when many wells are to be simulated, the upscaled models can begin to lose accuracy, particularly when well by well production is considered. This is due in part to the fact that, in the coarse models, wells are separated by very few grid blocks, and degradation in accuracy results. 
There have been many previous studies directed toward the development of parallel reservoir simulation technology and many studies aimed at the development of scale up techniques. To our knowledge, this is the first effort that considers the complementary use of both. Here we will very briefly review the recent literature on both parallel reservoir simulation and upscaling techniques. For more complete discussions of previous work, refer to Refs. 1-3. Traditional techniques for upscaling rely on the use of pseudorelative permeabilities. Although often applied in practice, the use of pseudorelative permeabilities can lead to inaccuracies in some cases.4,5 This is largely due to the high degree of process dependency inherent in the pseudorelative permeability approach; i.e., pseudorelative permeability curves are really only appropriate for the conditions for which they are generated. The deficiencies in the traditional pseudorelative permeability methodology have motivated work in several areas. This includes the generation of more robust pseudorelative permeabilities,6,7 the use of higher moments of the fine scale variables,5 and the nonuniform coarsening approach applied in this study (discussed in Nonuniform Coarsening Method for Scale Up). Generalizations of the nonuniform coarsening approach described in Refs. 1 and 2 have also been presented.8,9 Parallel reservoir simulation is an area of active research. Recent publications emphasize the development of scalable algorithms designed to run efficiently on a variety of parallel platforms.10–13 Most recent implementations involve distributed memory platforms such as a cluster of workstations. The typical size of a simulation model run in parallel is on the order of 1 (or a few) million grid blocks, though results for a 16.5 million cell model have been reported.11 Most parallel implementations are based on message passing techniques such as the message passing interface standard (MPI). 
Several of the parallel simulation algorithms, including our own, are based on a multilevel domain decomposition approach. This entails communication between domains in a manner analogous to that used in standard domain decomposition approaches.


2021 ◽  
Author(s):  
Ryan Santoso ◽  
Xupeng He ◽  
Marwa Alsinan ◽  
Ruben Figueroa Hernandez ◽  
Hyung Kwak ◽  
...  

Abstract History matching is a critical step within the reservoir management process to synchronize the simulation model with the production data. The history-matched model can be used for planning optimum field development and for performing optimization and uncertainty quantification. We present a novel history-matching workflow based on a Bayesian framework that accommodates subsurface uncertainties. Our workflow involves three different model resolutions within the Bayesian framework: 1) a coarse low-fidelity model to update the prior range, 2) a fine low-fidelity model to represent the high-fidelity model, and 3) a high-fidelity model to reconstruct the real response. The low-fidelity model is constructed from a multivariate polynomial function, while the high-fidelity model is based on the reservoir simulation model. We first develop a coarse low-fidelity model using a two-level Design of Experiments (DoE), which aims to provide a better prior. We then use Latin Hypercube Sampling (LHS) to construct the fine low-fidelity model to be deployed in the Bayesian runs, where we use the Metropolis-Hastings algorithm. Finally, the posterior is fed into the high-fidelity model to evaluate the matching quality. This work demonstrates the importance of including uncertainties in history matching. The Bayesian framework allows robust uncertainty quantification within reservoir history matching. Under a uniform prior, the convergence of the Bayesian inference is very sensitive to the parameter ranges. When the solution is far from the mean of the parameter ranges, the Bayesian inference introduces bias and deviates from the observed data. Our results show that updating the prior from the coarse low-fidelity model accelerates the Bayesian convergence and improves the quality of the match. Bayesian inference requires a huge number of runs to produce an accurate posterior, and running the high-fidelity model multiple times is expensive. 
Our workflow tackles this problem by deploying a fine low-fidelity model to represent the high-fidelity model in the main runs. This fine low-fidelity model is fast to run, while it honors the physics and accuracy of the high-fidelity model. We also use ANOVA sensitivity analysis to measure the importance of each parameter; the ranking highlights the significant parameters that contribute most to the matching accuracy. We demonstrate our workflow on a geothermal reservoir with static and operational uncertainties. Our workflow produces an accurate match of the thermal recovery factor and produced-enthalpy rate, with physically consistent posteriors. We present a novel workflow to account for uncertainty in reservoir history matching involving multi-resolution interaction. The proposed method is generic and can be readily applied within existing history-matching workflows in reservoir simulation.
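The Metropolis-Hastings step at the core of the Bayesian runs can be sketched with a cheap proxy standing in for the fine low-fidelity model. The response function, prior bounds, noise level, and step size below are our own toy choices, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(1)

def proxy(theta):
    # Fine low-fidelity stand-in for the simulator (hypothetical polynomial
    # response, in the spirit of the multivariate-polynomial proxy above).
    return 1.5 * theta[0] + 0.5 * theta[1] ** 2

obs, sigma = 2.0, 0.1      # observed datum and assumed measurement noise

def log_post(theta, lo=-3.0, hi=3.0):
    if np.any(theta < lo) or np.any(theta > hi):
        return -np.inf     # outside the uniform prior's support
    return -0.5 * ((proxy(theta) - obs) / sigma) ** 2

theta, lp = np.zeros(2), log_post(np.zeros(2))
chain = []
for _ in range(5000):
    prop = theta + 0.3 * rng.standard_normal(2)    # random-walk proposal
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:        # Metropolis-Hastings accept
        theta, lp = prop, lp_prop
    chain.append(theta.copy())
post = np.asarray(chain)[1000:]                    # discard burn-in

# The posterior predictive should reproduce the observed datum within noise.
post_mean = np.mean([proxy(t) for t in post])
print(round(post_mean, 2))
```

The abstract's point about prior sensitivity shows up directly here: narrowing the uniform bounds around the solution (the role of the coarse low-fidelity model) shortens the burn-in needed before the chain finds the high-likelihood region.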

