Adaptive Multiscale Streamline Simulation and Inversion for High-Resolution Geomodels

SPE Journal ◽  
2008 ◽  
Vol 13 (01) ◽  
pp. 99-111 ◽  
Author(s):  
Vegard R. Stenerud ◽  
Vegard Kippe ◽  
Knut-Andreas Lie ◽  
Akhil Datta-Gupta

Summary A particularly efficient reservoir simulator can be obtained by combining a recent multiscale mixed finite-element flow solver with a streamline method for computing fluid transport. This multiscale-streamline method has proved to be a promising approach for fast flow simulations on high-resolution geologic models with multimillion grid cells. The multiscale method solves the pressure equation on a coarse grid while preserving important fine-scale details in the velocity field. Fine-scale heterogeneity is accounted for through a set of generalized, heterogeneous basis functions that are computed numerically by solving local flow problems. When included in the coarse-grid equations, the basis functions ensure that the global equations are consistent with the local properties of the underlying differential operators. The multiscale method offers a substantial gain in computation speed, without significant loss of accuracy, when the basis functions are updated infrequently throughout a dynamic simulation. In this paper, we propose to combine the multiscale-streamline method with a recent "generalized travel-time inversion" method to derive a fast and robust method for history matching high-resolution geocellular models. A key point in the new method is the use of sensitivities that are calculated analytically along streamlines with little computational overhead. The sensitivities are used in the travel-time inversion formulation to give a robust quasilinear method that typically converges in a few iterations and avoids much of the time-consuming trial-and-error seen in manual history matching. Moreover, the sensitivities are used to ensure that basis functions are updated adaptively, and only in areas with relatively large sensitivity to the production response. The sensitivity-based adaptive approach allows us to selectively update only a fraction of the total number of basis functions, which gives substantial savings in computation time for the forward flow simulations. We demonstrate the power and utility of our approach using a simple 2D model and a highly detailed 3D geomodel. The 3D simulation model consists of more than 1,000,000 cells with 69 producing wells; using our proposed approach, history matching over a period of 7 years is accomplished in less than 20 minutes on an ordinary workstation PC.

Introduction It is well known that geomodels derived from static data only, such as geological, seismic, well-log, and core data, often fail to reproduce the production history. Reconciling geomodels with the dynamic response of the reservoir is critical for building reliable reservoir models. In the past few years, there have been significant developments in the area of dynamic-data integration through the use of inverse modeling. Streamline methods have shown great promise in this regard (Vasco et al. 1999; Wang and Kovscek 2000; Milliken et al. 2001; He et al. 2002; Al-Harbi et al. 2005; Cheng et al. 2006). Streamline-based methods have the advantage that they are highly efficient "forward" simulators and allow production-response sensitivities to be computed analytically from a single flow simulation (Vasco et al. 1999; He et al. 2002; Al-Harbi et al. 2005; Cheng et al. 2006). Sensitivities describe the change in production responses caused by small perturbations in reservoir properties, such as porosity and permeability, and they are a vital part of many methods for integrating dynamic data.
Even though streamline simulators provide fast forward simulation compared with a full finite-difference simulation in 3D, the forward simulation is still the most time-consuming part of the history-matching process. A streamline simulation consists of two steps that are repeated: (1) solution of a 3D pressure equation to compute flow velocities; and (2) solution of 1D transport equations for evolving fluid compositions along a representative set of streamlines, followed by a mapping back to the underlying pressure grid. The first step is referred to as the "pressure step" and is often the most time-consuming. Consequently, history matching and flow simulation are usually performed on upscaled simulation models, which imposes the need for a subsequent downscaling if the dynamic data are to be integrated into the geomodel. Upscaling and downscaling may result in loss of important fine-scale information.
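The sensitivity-based adaptive updating described in this abstract reduces to a simple selection rule. The sketch below is a minimal illustration, not the authors' implementation; the function name blocks_to_update, the 90% share, and the mock per-block sensitivities are assumptions. It picks the smallest set of coarse blocks that carries a given share of the total streamline sensitivity, so only the basis functions on those blocks would be recomputed before the next pressure step.

```python
# Minimal sketch (assumed names/threshold) of sensitivity-based adaptive
# selection of multiscale basis functions to update between pressure solves.
import numpy as np

def blocks_to_update(block_sens, share=0.9):
    """Return indices of the fewest coarse blocks whose summed |sensitivity|
    accounts for at least `share` of the total; only these basis functions
    would be recomputed."""
    s = np.abs(np.asarray(block_sens, dtype=float))
    order = np.argsort(s)[::-1]             # blocks by decreasing sensitivity
    cum = np.cumsum(s[order]) / s.sum()     # cumulative share of the total
    k = int(np.searchsorted(cum, share)) + 1
    return order[:k]

rng = np.random.default_rng(0)
sens = rng.lognormal(size=200)              # mock per-block sensitivities
upd = blocks_to_update(sens, share=0.9)
print(f"update {upd.size} of 200 basis functions")
```

With a heavy-tailed sensitivity field, as streamline sensitivities typically are, only a small fraction of blocks clears the threshold, which is the source of the reported speedup in the forward simulations.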

2021 ◽  
Author(s):  
Mokhles Mezghani ◽  
Mustafa AlIbrahim ◽  
Majdi Baddourah

Abstract Reservoir simulation is a key tool for predicting the dynamic behavior of a reservoir and optimizing its development. Fine-scale, CPU-demanding simulation grids are necessary to improve the accuracy of the simulation results. We propose a hybrid modeling approach that minimizes the use of the full-physics (FP) model by dynamically building and updating an artificial-intelligence (AI) based model. The AI model can be used to quickly mimic the FP model. The proposed methodology starts by running the FP model; an associated AI model is then systematically updated using the newly performed FP runs. Once the mismatch between the two models falls below a predefined cutoff, the FP model is switched off and only the AI model is used. The FP model is switched on again at the end of the exercise, either to confirm the AI model's decision and stop the study, or to reject that decision (high mismatch between the FP and AI models) and upgrade the AI model. The proposed workflow was applied to a synthetic reservoir model with the objective of matching the average reservoir pressure. For this study, a fine-scale simulation grid (approximately 50 million cells) was necessary to account properly for reservoir heterogeneity and improve the accuracy of the simulation results. One reservoir simulation using the FP model and 1,024 CPUs requires approximately 14 hours. Six parameters were selected for the optimization loop in this history-matching exercise; therefore, a Latin hypercube sampling (LHS) of seven FP runs was used to initiate the hybrid approach and build the first AI model. During history matching, only the AI model is used. At convergence of the optimization loop, a final FP run is performed, either to confirm convergence for the FP model or to reiterate the same approach starting from an LHS around the converged solution; the next AI model is then updated using all FP simulations performed in the study. This approach achieves a history match of very acceptable quality with far fewer computational resources and much less CPU time. CPU-intensive, multimillion-cell simulation models are commonly used in reservoir development, and completing a reservoir study in an acceptable timeframe is a real challenge in such situations, so new concepts and techniques are genuinely needed. The hybrid approach we propose shows very promising results for handling this challenge.
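The hybrid FP/AI loop can be illustrated compactly. The sketch below is an assumption-laden illustration, not the authors' setup: the toy full_physics stand-in, the quadratic least-squares surrogate playing the role of the "AI model", and the 1% mismatch cutoff are all invented for the example. It does follow the abstract's structure of six parameters and a seven-run LHS seed.

```python
# Minimal sketch of the hybrid loop: seed with LHS runs of the expensive
# full-physics (FP) model, fit a cheap surrogate, and call FP again only
# to verify the surrogate or to enrich its training set.
import numpy as np
from scipy.stats import qmc

def full_physics(x):
    """Stand-in for the expensive FP simulator (average reservoir pressure)."""
    return 250.0 + 30.0 * np.sin(x @ np.arange(1, x.size + 1))

def fit_surrogate(X, y):
    """Least-squares quadratic surrogate, standing in for the 'AI model'."""
    A = np.hstack([np.ones((len(X), 1)), X, X**2])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return lambda x: np.hstack([1.0, x, x**2]) @ coef

dim = 6                                              # six matching parameters
X = qmc.LatinHypercube(d=dim, seed=1).random(n=7)    # seven seeding FP runs
y = np.array([full_physics(x) for x in X])
ai = fit_surrogate(X, y)

x_trial = np.full(dim, 0.5)                          # candidate from optimizer
fp_val = full_physics(x_trial)                       # verification FP run
if abs(ai(x_trial) - fp_val) / abs(fp_val) < 0.01:   # predefined cutoff
    print("mismatch below cutoff: continue with AI model only")
else:
    X = np.vstack([X, x_trial]); y = np.append(y, fp_val)
    ai = fit_surrogate(X, y)                         # upgrade the AI model
```

In the paper's workflow the expensive call is a 14-hour, 1,024-CPU simulation, so every FP evaluation avoided by the surrogate translates directly into the reported savings.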


2015 ◽  
Vol 138 (1) ◽  
Author(s):  
Jihoon Park ◽  
Jeongwoo Jin ◽  
Jonggeun Choe

For decision making, it is crucial to have proper reservoir characterization and uncertainty assessment of reservoir performance. Because initial models constructed with limited data have high uncertainty, it is essential to integrate both static and dynamic data for reliable future predictions. Uncertainty quantification is computationally demanding because a single history match requires many iterative forward simulations and optimizations, and multiple realizations of the reservoir model must be computed. In this paper, a methodology is proposed to quantify uncertainties rapidly by combining streamline-based inversion with distance-based clustering. The distance between two reservoir models is defined as the norm of the difference of their generalized travel time (GTT) vectors. Reservoir models are then grouped according to these distances, and representative models are selected from each group. Inversions are performed on the representative models instead of on all models. We use generalized travel-time inversion (GTTI) for the integration of dynamic data, to overcome high nonlinearity and to take advantage of its computational efficiency. It is verified that the proposed method gathers models with both similar dynamic responses and similar permeability distributions. It also assesses the uncertainty of reservoir performance reliably, while reducing the amount of computation significantly by using only the representative models.
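The clustering step lends itself to a short illustration. The sketch below is not the paper's code; the synthetic GTT vectors, the choice of five k-means clusters, and the medoid rule for picking representatives are assumptions. It groups realizations by the norm of GTT-vector differences, exactly the distance the abstract defines, and forwards one representative per group to the expensive inversion.

```python
# Minimal sketch: cluster realizations by generalized-travel-time (GTT)
# vectors and invert only one representative (medoid) per cluster.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
gtt = rng.normal(size=(100, 12))   # 100 realizations x 12 well GTT shifts (mock)

km = KMeans(n_clusters=5, n_init=10, random_state=0).fit(gtt)
representatives = []
for c in range(5):
    members = np.flatnonzero(km.labels_ == c)
    # medoid: the member closest (Euclidean norm) to the cluster center
    d = np.linalg.norm(gtt[members] - km.cluster_centers_[c], axis=1)
    representatives.append(int(members[np.argmin(d)]))
print("invert only models:", sorted(representatives))
```

Euclidean k-means is consistent with the norm-of-GTT-differences distance, which is why models in the same group end up with both similar dynamic responses and similar property fields.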


2006 ◽  
Vol 9 (01) ◽  
pp. 15-23 ◽  
Author(s):  
Ajay K. Samantray ◽  
Qasem M. Dashti ◽  
Eddie Ma ◽  
Pradeep S. Kumar

Summary Nine multimillion-cell geostatistical earth models of the Marrat reservoir in Magwa field, Kuwait, were upscaled for streamline (SL) screening and finite-difference (FD) flow simulation. The scaleup strategy consisted of (1) maintaining square areal blocks over the oil column, (2) upscaling to the largest areal-block size (200 x 200 m) compatible with 125-acre well spacing, (3) upscaling to less than 1 million gridblocks for SL screening, and (4) upscaling to less than 250,000 gridblocks for FD flow simulation. Chevron's in-house scaleup software program, SCP, which employs a single-phase flow-based process for upscaling nonuniform 3D grids, was used, and several iterations of scaleup were made to optimize the result. Sensitivity tests suggest that a uniform scaled-up grid overestimates breakthrough time compared to the fine model, and the post-breakthrough fractional flow also remains higher than in the fine model. However, preserving high-flow-rate layers in a nonuniform scaled-up model was key to matching the front-tracking behavior of the fine model. The scaled-up model was coarsened in areas of low average layer flow, because less refinement is needed in these areas to still match the flow behavior of the fine model (see the sketch after the Introduction below). The final ratio of pre- to post-scaleup grid sizes was 6:1 for SL and 21:1 for FD simulation. Several checks were made to verify the accuracy of scaleup, including comparison of pre- and post-scaleup fractional-flow curves in terms of breakthrough time and post-breakthrough curve shape, cross-sectional permeabilities, global porosity histograms, porosity/permeability clouds, visual comparison of heterogeneity, and earth-model and scaled-up volumetrics. The scaled-up models were screened using the 3D SL technique. The results helped bracket the flow behavior of the different earth models and identify the model that best tracks the historical performance data. Initiating the full-field history-matching process with the geologic model that most closely matched field performance in the screening stage minimized the amount of history matching and reduced the time and effort required. It also avoided the application of unrealistic changes to the geologic model to match production history. The study suggests that single realizations of "best-guess" geostatistical models are not guaranteed to offer the best history match and performance prediction. Multiple earth models must be built to capture the range of heterogeneity and assess its impact on reservoir flow behavior.

Introduction The widespread use of geostatistics during the last decade has offered us both opportunities and challenges. It has been possible to capture vertical and areal heterogeneities, measured by well logs and inferred from the depositional environments, at a very fine scale, with 0.1- to 0.3-m vertical and 20- to 100-m areal resolution (Hobbet et al. 2000; Dashti et al. 2002; Aly et al. 1999; Haldorsen and Damsleth 1990; Haldorsen and Damsleth 1993). It also has been possible to generate a large number of realizations to assess the uncertainty in reservoir descriptions and performance predictions (Sharif and MacDonald 2001). These multiple realizations variously account for uncertainties in structure, stratigraphy, and petrophysical properties. Although impressive, the fine-scale geological models usually run into several millions of cells, and current computing technology prevents us from simulating such multimillion-cell models on practical time scales.
This requires a translation of the detailed grids to a coarser, computationally manageable level without compromising the gross flow behavior of the original fine-scale model or the anticipated reservoir performance. This translation is commonly referred to as upscaling (Christie 1996; Durlofsky et al. 1996; Chawathe and Taggart 2001; Ates et al. 2003). The other challenge is to quantify the uncertainty while keeping the number of realizations manageable. This requires identifying the uncertainties with the greatest potential impact and arriving at an optimal combination to capture the extremes. Further, these models require a screening and ranking process to assess their relative ability to track historical field performance and to help minimize the number of models carried forward to comprehensive flow simulations (Milliken et al. 2001; Samier et al. 2002; Chakravarty et al. 2000; Lolomari et al. 2000; Albertão et al. 2001; Baker et al. 2001; Ates et al. 2003). In some situations, a single realization of the best-guess geostatistical model is carried forward for conventional flow simulation, and uncertainties are quantified with parametric techniques such as Monte Carlo evaluations (Hobbet et al. 2000; Dashti et al. 2002). Using the case study of this Middle Eastern carbonate reservoir, the paper describes the upscaling, uncertainty management, and SL screening process used to arrive at a single reference model that optimally combines the uncertainties and provides the best history match and performance forecast from full-field flow simulation. Fig. 1 presents the details of the workflow used.
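The nonuniform vertical coarsening described in the summary (preserve high-flow layers, merge runs of low-flow layers) can be sketched as a simple grouping rule. This is not SCP's algorithm; the threshold rule, the name layer_grouping, and the toy flow profile are assumptions made for illustration.

```python
# Minimal sketch of nonuniform vertical coarsening: fine layers carrying a
# large share of the flow map one-to-one to coarse layers, while consecutive
# low-flow layers are merged into single coarse layers.
import numpy as np

def layer_grouping(layer_flow, keep_frac=0.5, max_merge=4):
    """Return a list of fine-layer index groups; each group becomes one
    coarse layer in the scaled-up grid."""
    cutoff = keep_frac * layer_flow.max()
    groups, run = [], []
    for k, q in enumerate(layer_flow):
        if q >= cutoff:                      # high-flow layer: preserve as-is
            if run:
                groups.append(run); run = []
            groups.append([k])
        else:                                # low-flow layer: merge into a run
            run.append(k)
            if len(run) == max_merge:
                groups.append(run); run = []
    if run:
        groups.append(run)
    return groups

flow = np.array([0.1, 0.2, 5.0, 0.3, 0.1, 0.2, 4.2, 0.1])  # mock layer flows
print(layer_grouping(flow))   # -> [[0, 1], [2], [3, 4, 5], [6], [7]]
```

The rule keeps the front-tracking layers intact, which is the property the authors found essential for matching the fine model's breakthrough behavior, while the merged low-flow runs deliver the 6:1 and 21:1 grid-size reductions.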


Author(s):  
D. T. van Daalen ◽  
H. N. C. M. van der Heijden ◽  
R. H. Rietdijk ◽  
A. Stopin ◽  
J. C. M. Goudswaard ◽  
...  

SPE Journal ◽  
2017 ◽  
Vol 22 (04) ◽  
pp. 1261-1279 ◽  
Author(s):  
Shingo Watanabe ◽  
Jichao Han ◽  
Gill Hetz ◽  
Akhil Datta-Gupta ◽  
Michael J. King ◽  
...  

Summary We present an efficient history-matching technique that simultaneously integrates 4D repeat seismic surveys with well-production data. This approach is particularly well-suited for the calibration of the reservoir properties of high-resolution geologic models because the seismic data are areally dense but sparse in time, whereas the production data are finely sampled in time but spatially averaged. The joint history matching is performed by use of streamline-based sensitivities derived from either finite-difference or streamline-based flow simulation. For the most part, earlier approaches have focused on the role of saturation changes, but the effects of pressure have largely been ignored. Here, we present a streamline-based semianalytic approach for computing model-parameter sensitivities, accounting for both pressure and saturation effects. The novelty of the method lies in the semianalytic sensitivity computations, making it computationally efficient for high-resolution geologic models. The approach is implemented by use of a finite-difference simulator incorporating the detailed physics. Its efficacy is demonstrated by use of both synthetic and field applications. For both the synthetic and the field cases, the advantages of incorporating the time-lapse variations are clear, seen through the improved estimation of the permeability distribution, the pressure profile, the evolution of the fluid saturation, and the swept volumes.
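The joint update underlying this approach amounts to a weighted, regularized least-squares problem over the stacked sensitivities. The sketch below is illustrative only, not the authors' semianalytic computation: the random mock sensitivity matrices, the data weights, and the damping value are assumptions.

```python
# Minimal sketch of a joint seismic + production Gauss-Newton-type update:
# stack the two sensitivity matrices, weight each data type, and solve a
# damped least-squares system for one model update.
import numpy as np
from scipy.sparse.linalg import lsqr

rng = np.random.default_rng(0)
n_cells = 500
G_prod = rng.normal(size=(40, n_cells))    # production-data sensitivities (mock)
G_seis = rng.normal(size=(300, n_cells))   # 4D-seismic sensitivities (mock)
r_prod = rng.normal(size=40)               # production-data misfit (mock)
r_seis = rng.normal(size=300)              # seismic misfit (mock)

w_p, w_s, beta = 1.0, 0.3, 0.1             # data-type weights and damping
A = np.vstack([w_p * G_prod, w_s * G_seis])
b = np.concatenate([w_p * r_prod, w_s * r_seis])
dm = lsqr(A, b, damp=beta)[0]              # one update to the model parameters
print("update norm:", np.linalg.norm(dm))
```

The weights reflect the abstract's point about complementary coverage: seismic rows are areally dense but few in time, production rows are dense in time but spatially averaged, and both constrain the same parameter vector.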


SPE Journal ◽  
2002 ◽  
Vol 7 (02) ◽  
pp. 113-122 ◽  
Author(s):  
Zhan Wu ◽  
Akhil Datta-Gupta
