Upscaling and 3D Streamline Screening of Several Multimillion-Cell Earth Models for Flow Simulation

2006 ◽  
Vol 9 (01) ◽  
pp. 15-23 ◽  
Author(s):  
Ajay K. Samantray ◽  
Qasem M. Dashti ◽  
Eddie Ma ◽  
Pradeep S. Kumar

Summary Nine multimillion-cell geostatistical earth models of the Marrat reservoir in Magwa field, Kuwait, were upscaled for streamline (SL) screening and finite-difference (FD) flow simulation. The scaleup strategy consisted of (1) maintaining square areal blocks over the oil column, (2) upscaling to the largest areal-block size (200 x 200 m) compatible with 125-acre well spacing, (3) upscaling to less than 1 million gridblocks for SL screening, and (4) upscaling to less than 250,000 gridblocks for FD flow simulation. Chevron's in-house scaleup software program, SCP, was used for scaleup. SCP employs a single-phase flow-based process for upscaling nonuniform 3D grids. Several iterations of scaleup were made to optimize the result. Sensitivity tests suggest that a uniform scaled-up grid overestimates breakthrough time compared to the fine model, and the post-breakthrough fractional flow also remains higher than in the fine model. However, preserving high-flow-rate layers in a nonuniform scaled-up model was key to matching the front-tracking behavior of the fine model. The scaled-up model was coarsened in areas of low average layer flow because less refinement is needed in these areas to still match the flow behavior of the fine model. The final ratio of pre- to post-scaleup grid sizes was 6:1 for SL and 21:1 for FD simulation. Several checks were made to verify the accuracy of scaleup. These include comparison of pre- and post-scaleup fractional-flow curves in terms of breakthrough time and post-breakthrough curve shape, cross-sectional permeabilities, global porosity histograms, porosity/permeability clouds, visual comparison of heterogeneity, and earth-model and scaled-up volumetrics. The scaled-up models were screened using the 3D SL technique. The results helped in bracketing the flow behavior of the different earth models and in identifying the model that best tracks the historical performance data.
By initiating the full-field history-matching process with the geologic model that most closely matched the field performance in the screening stage, the amount of history matching was minimized, and the time and effort required were reduced. The application of unrealistic changes to the geologic model to match production history was also avoided. The study suggests that single realizations of "best-guess" geostatistical models are not guaranteed to offer the best history match and performance prediction. Multiple earth models must be built to capture the range of heterogeneity and assess its impact on reservoir flow behavior. Introduction The widespread use of geostatistics during the last decade has offered us both opportunities and challenges. It has become possible to capture vertical and areal heterogeneities measured by well logs and inferred from the depositional environments at a very fine scale, with 0.1- to 0.3-m vertical and 20- to 100-m areal resolution (Hobbet et al. 2000; Dashti et al. 2002; Aly et al. 1999; Haldorsen and Damsleth 1990; Haldorsen and Damsleth 1993). It has also been possible to generate a large number of realizations to assess the uncertainty in reservoir descriptions and performance predictions (Sharif and MacDonald 2001). These multiple realizations variously account for uncertainties in structure, stratigraphy, and petrophysical properties. Although impressive, fine-scale geological models usually run to several million cells, and current computing technology prevents us from simulating such multimillion-cell models on practical time scales. This requires a translation of the detailed grids to a coarser, computationally manageable level without compromising the gross flow behavior of the original fine-scale model and the anticipated reservoir performance. This translation is commonly referred to as upscaling (Christie 1996; Durlofsky et al. 1996; Chawathe and Taggart 2001; Ates et al. 2003).
The other challenge is to quantify the uncertainty while keeping the number of realizations manageable. This requires identifying the uncertainties with the greatest potential impact and arriving at an optimal combination to capture the extremes. Further, these models require a screening and ranking process to assess their relative ability to track historical field performance and to minimize the number of models that need to be considered for comprehensive flow simulations (Milliken et al. 2001; Samier et al. 2002; Chakravarty et al. 2000; Lolomari et al. 2000; Albertão et al. 2001; Baker et al. 2001; Ates et al. 2003). In some situations, a single realization of the best-guess geostatistical model is carried forward for conventional flow simulation, and uncertainties are quantified with parametric techniques such as Monte Carlo evaluations (Hobbet et al. 2000; Dashti et al. 2002). Using the case study of this Middle Eastern carbonate reservoir, the paper describes the upscaling, uncertainty management, and SL screening process used to arrive at a single reference model that optimally combines the uncertainties and provides the best history match and performance forecast from full-field flow simulation. Fig. 1 presents the details of the workflow used.
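As a point of reference for the single-phase flow-based scaleup described above: any flow-based upscaled permeability is bracketed by two analytic end-members, the thickness-weighted harmonic and arithmetic averages. A minimal sketch with hypothetical layer values (this is not SCP's actual algorithm, which solves local flow problems numerically):

```python
import numpy as np

def harmonic_mean(k, h):
    """Series flow (across layers): thickness-weighted harmonic average."""
    return h.sum() / (h / k).sum()

def arithmetic_mean(k, h):
    """Parallel flow (along layers): thickness-weighted arithmetic average."""
    return (k * h).sum() / h.sum()

# hypothetical 4-layer column: permeability (md) and thickness (m)
k = np.array([100.0, 10.0, 500.0, 50.0])
h = np.array([0.3, 0.1, 0.2, 0.3])

kv = harmonic_mean(k, h)    # lower bound: effective vertical permeability
kh = arithmetic_mean(k, h)  # upper bound: effective horizontal permeability
# any flow-based upscaled value for this column lies between kv and kh
```

The gap between the two averages is one way to see why preserving high-flow-rate layers matters: a uniform coarsening implicitly averages them away.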

2021 ◽  
Author(s):  
Mokhles Mezghani ◽  
Mustafa AlIbrahim ◽  
Majdi Baddourah

Abstract Reservoir simulation is a key tool for predicting the dynamic behavior of a reservoir and optimizing its development. Fine-scale, CPU-demanding simulation grids are necessary to improve the accuracy of the simulation results. We propose a hybrid modeling approach that minimizes the weight of the full-physics (FP) model by dynamically building and updating an artificial intelligence (AI) based model that can quickly mimic the FP model. The proposed methodology starts by running the FP model; an associated AI model is then systematically updated using the newly performed FP runs. Once the mismatch between the two models falls below a predefined cutoff, the FP model is switched off and only the AI model is used. The FP model is switched on again at the end of the exercise, either to confirm the AI model's decision and stop the study, or to reject that decision (high mismatch between the FP and AI models) and upgrade the AI model. The proposed workflow was applied to a synthetic reservoir model with the objective of matching the average reservoir pressure. For this study, to better account for reservoir heterogeneity, a fine-scale simulation grid (approximately 50 million cells) was necessary to improve the accuracy of the reservoir simulation results. Reservoir simulation using the FP model and 1,024 CPUs requires approximately 14 hours. During this history-matching exercise, six parameters were selected to be part of the optimization loop. Therefore, a Latin hypercube sampling (LHS) of seven FP runs was used to initiate the hybrid approach and build the first AI model. During history matching, only the AI model is used. At the convergence of the optimization loop, a final FP run is performed, either to confirm the convergence for the FP model or to reiterate the same approach starting from an LHS around the converged solution. The subsequent AI model is updated using all the FP simulations performed in the study.
This approach achieves the history match with very acceptable match quality, but with far fewer computational resources and much less CPU time. CPU-intensive, multimillion-cell simulation models are commonly utilized in reservoir development, and completing a reservoir study in an acceptable timeframe is a real challenge in such situations. New concepts and techniques are needed to complete such studies successfully, and the hybrid approach we propose shows very promising results in handling this challenge.
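The switch-over logic of the hybrid loop can be sketched with a stand-in for the FP simulator and a simple least-squares quadratic surrogate playing the role of the AI model; the response surface, cutoff value, and surrogate form below are all hypothetical, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def full_physics(x):
    """Stand-in for the CPU-intensive FP simulator (hypothetical response)."""
    return np.sin(x[..., 0]) + 0.5 * x[..., 1] ** 2

def latin_hypercube(n, dim, rng):
    """Basic LHS on [0, 1]^dim: one stratified sample per interval, per axis."""
    samples = (np.arange(n)[:, None] + rng.random((n, dim))) / n
    for d in range(dim):
        samples[:, d] = samples[rng.permutation(n), d]
    return samples

def features(X):
    """Quadratic polynomial basis for the surrogate."""
    x1, x2 = X[:, 0], X[:, 1]
    return np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])

# seven FP runs from an LHS design initiate the surrogate, as in the paper
X = latin_hypercube(7, 2, rng)
coef, *_ = np.linalg.lstsq(features(X), full_physics(X), rcond=None)

def ai_model(X):
    return features(X) @ coef

# switch-over check: keep using the cheap AI model while its mismatch to the
# FP model stays below a (hypothetical) cutoff
X_check = rng.random((200, 2))
mismatch = float(np.max(np.abs(ai_model(X_check) - full_physics(X_check))))
use_ai = mismatch < 0.2
```

In the paper's workflow the mismatch check is what decides whether the FP model is switched back on; here it is evaluated on a random probe set for illustration.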


Geofluids ◽  
2020 ◽  
Vol 2020 ◽  
pp. 1-18
Author(s):  
Jaeyoung Park ◽  
Candra Janova

This paper introduces a flow-simulation-based reservoir modeling study of a two-well pad with a long production history and identical completion parameters in the Midland Basin. The study includes building a geologic model, history matching, well performance prediction, and finding the optimum lateral well spacing in terms of oil volume and economic metrics. The reservoir model was constructed from a geologic model integrating well logs and core data near the target area. Next, a sensitivity analysis was performed on the reservoir simulation model to better understand which parameters influence the simulation results. History matching was then conducted to a satisfactory quality (less than 10% global error), and after model calibration the ranges of the history-matching parameters were substantially reduced. The population-based history-matching algorithm provides an ensemble of history-matched models, and the top 50 models were selected to predict the range of Estimated Ultimate Recovery (EUR), showing that the P50 oil EUR is within the acceptable range of the deterministic EUR estimates. With the best history-matched model, we investigated the lateral well-spacing sensitivity of the pad in terms of maximum recovery volume and economic benefit. The results show that, given the current completion design, well spacing tighter than the current practice in the area is less effective in terms of oil volume recovery. However, the economic metrics suggest that additional monetary value can be realized at 150% of the current development spacing assumption. The presented workflow provides a systematic approach to finding the optimum lateral well spacing in terms of volume and economic metrics per section under given economic assumptions, and it can be readily repeated to evaluate spacing optimization in other acreage.


SPE Journal ◽  
2008 ◽  
Vol 13 (01) ◽  
pp. 99-111 ◽  
Author(s):  
Vegard R. Stenerud ◽  
Vegard Kippe ◽  
Knut-Andreas Lie ◽  
Akhil Datta-Gupta

Summary A particularly efficient reservoir simulator can be obtained by combining a recent multiscale mixed finite-element flow solver with a streamline method for computing fluid transport. This multiscale-streamline method has been shown to be a promising approach for fast flow simulations on high-resolution geologic models with millions of grid cells. The multiscale method solves the pressure equation on a coarse grid while preserving important fine-scale details in the velocity field. Fine-scale heterogeneity is accounted for through a set of generalized, heterogeneous basis functions that are computed numerically by solving local flow problems. When included in the coarse-grid equations, the basis functions ensure that the global equations are consistent with the local properties of the underlying differential operators. The multiscale method offers a substantial gain in computation speed, without significant loss of accuracy, when basis functions are updated infrequently throughout a dynamic simulation. In this paper, we propose to combine the multiscale-streamline method with a recent "generalized travel-time inversion" method to derive a fast and robust method for history matching high-resolution geocellular models. A key point in the new method is the use of sensitivities that are calculated analytically along streamlines with little computational overhead. The sensitivities are used in the travel-time inversion formulation to give a robust quasilinear method that typically converges in a few iterations and generally avoids much of the time-consuming trial-and-error seen in manual history matching. Moreover, the sensitivities are used to ensure that basis functions are adaptively updated only in areas with relatively large sensitivity to the production response.
The sensitivity-based adaptive approach allows us to selectively update only a fraction of the total number of basis functions, which gives substantial savings in computation time for the forward flow simulations. We demonstrate the power and utility of our approach using a simple 2D model and a highly detailed 3D geomodel. The 3D simulation model consists of more than 1,000,000 cells with 69 producing wells. Using our proposed approach, history matching over a period of 7 years is accomplished in less than 20 minutes on an ordinary workstation PC. Introduction It is well known that geomodels derived from static data only—such as geological, seismic, well-log, and core data—often fail to reproduce the production history. Reconciling geomodels to the dynamic response of the reservoir is critical for building reliable reservoir models. In the past few years, there have been significant developments in the area of dynamic data integration through the use of inverse modeling. Streamline methods have shown great promise in this regard (Vasco et al. 1999; Wang and Kovscek 2000; Milliken et al. 2001; He et al. 2002; Al-Harbi et al. 2005; Cheng et al. 2006). Streamline-based methods have the advantages that they are highly efficient "forward" simulators and allow production-response sensitivities to be computed analytically using a single flow simulation (Vasco et al. 1999; He et al. 2002; Al-Harbi et al. 2005; Cheng et al. 2006). Sensitivities describe the change in production responses caused by small perturbations in reservoir properties such as porosity and permeability and are a vital part of many methods for integrating dynamic data. Even though streamline simulators provide fast forward simulation compared with a full finite-difference simulation in 3D, the forward simulation is still the most time-consuming part of the history-matching process. 
A streamline simulation consists of two steps that are repeated: (1) solution of a 3D pressure equation to compute flow velocities; and (2) solution of 1D transport equations for evolving fluid compositions along representative sets of streamlines, followed by a mapping back to the underlying pressure grid. The first step is referred to as the "pressure step" and is often the most time-consuming. Consequently, history matching and flow simulation are usually performed on upscaled simulation models, which imposes the need for a subsequent downscaling if the dynamic data are to be integrated in the geomodel. Upscaling and downscaling may result in loss of important fine-scale information.
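The repeated two-step loop can be miniaturized: a uniform time-of-flight spacing stands in for the output of the 3D pressure step, and the transport step becomes an explicit upwind solve of the 1D equation along a single streamline. The relative permeability curves, viscosities, and discretization below are assumed for illustration, not taken from the paper:

```python
import numpy as np

def frac_flow(s, mu_w=1.0, mu_o=5.0):
    """Water fractional flow for assumed quadratic relative permeabilities."""
    mw = s**2 / mu_w
    mo = (1.0 - s)**2 / mu_o
    return mw / (mw + mo)

def transport_1d(s, dtau, dt, n_steps):
    """Step 2: explicit upwind solve of ds/dt + df/dtau = 0 along one
    streamline, with pure water injected at the inlet (f = 1 there)."""
    for _ in range(n_steps):
        f = frac_flow(s)
        f_in = np.concatenate(([1.0], f[:-1]))   # inlet boundary condition
        s = np.clip(s + dt / dtau * (f_in - f), 0.0, 1.0)
    return s

# step 1 (the 3D pressure solve) is not modeled; a uniform time-of-flight
# spacing along one hypothetical streamline stands in for its output
n = 100
s = transport_1d(np.zeros(n), dtau=1.0 / n, dt=0.002, n_steps=150)
```

Under the CFL limit dt <= dtau / max|f'(s)| the scheme is stable: a swept zone builds up behind the water front while cells beyond it stay at the initial saturation.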


2008 ◽  
Vol 11 (04) ◽  
pp. 759-767 ◽  
Author(s):  
C. Shah Kabir ◽  
Nidhal I. Mohammed ◽  
Manoj K. Choudhary

Summary Understanding reservoir behavior is the key to reservoir management. This study shows how energy modeling with rapid material-balance techniques, followed by numerical simulations with streamline and finite-difference methods, aided understanding of reservoir-flow behavior. South Rumaila's elongated Zubair reservoir experiences uneven aquifer support from the western and eastern flanks. This uneven pressure support prompted injection in the weaker eastern flank to boost reservoir energy. We learned that aquifer influx provided nearly 95% of the reservoir's energy in its 50-year producing life, with water injection contributing less than 5% of the total energy supply. The west-to-east aquifer energy support ratio is approximately 29:1, indicating the dominance of aquifer support in the west. Streamline simulations with a 663,000-cell model corroborated many of the findings learned during the material-balance phase of this study. Cursory adjustments to aquifer properties led to an acceptable match with pulse-neutron-capture (PNC)-derived time-lapse oil/water contact (OWC) surfaces. This global-matching approach sped up the history-matching exercise in that the performance of most wells was reproduced without resorting to local adjustments of the cell properties. The history-matched model showed that the top layers contained the attic oil owing to lack of perforations. Lessons learned from this study include the idea that material-balance work should precede any numerical flow-simulation study because it provides invaluable insights into reservoir-drive mechanisms and the integrity of various input data, besides giving a rapid assessment of the reservoir's flow behavior. Credible material-balance work leaves very little room for adjustment of original hydrocarbons in place, which constitutes an excellent starting point for numerical models.
Introduction Before the advent of widespread use of computers and numeric simulators, material-balance (MB) studies were the norm for reservoir management. In this context, Stewart et al. (1954), Irby et al. (1962), and McEwen (1962) presented useful studies. The most popular MB methods include those of Havlena and Odeh (1963), Campbell and Campbell (1978), and Tehrani (1985), among others. Pletcher (2002) provides a comprehensive review of the available MB techniques. In the modern era, classical MB studies seldom precede full-field numeric modeling, presumably because MB is implicit in that approach. Nonetheless, we think valuable lessons can be learned from analytic MB studies at a fraction of the time needed for detailed numeric modeling preceded by geologic modeling. Of course, the value and amount of information derived from a multicell numeric model cannot be compared to that of a single-cell MB model. But an analytic MB study can be an excellent precursor to any detailed 3D modeling effort. Although this point has been made by others (Dake 1994; Pletcher 2002), practice has lagged conventional wisdom. In this paper, we attempt to show the value of a zero-dimensional MB study prior to detailed 3D numeric modeling, using both streamline and finite-difference methods. Streamline simulations sped up the history-matching effort by a factor of three. However, we used the finite-difference approach in prediction runs for its greater flexibility in invoking various producing rules. Initially, the MB study provided key learnings about gross reservoir behavior very rapidly. In particular, the energy contributions made by different drive mechanisms, such as uneven natural water influx and water injection, were of great interest for ongoing reservoir-management activities. Estimating the in-place hydrocarbon volume and the relative strength of the aquifer in the western and eastern flanks constituted key objectives of this study segment.
Following the MB segment of the study, we pursued full-field match of historical data (pressure and OWC) with a streamline flow simulator to take advantage of rapid turnaround time. Thereafter, prediction runs were made with the finite-difference model to answer the ongoing water-injection question in the eastern flank of the reservoir. We learned that water injection should be turned off for improved sweep, leading to increased ultimate oil recovery. In addition, the numeric models identified the presence of remaining oil in the attic for future exploitation.
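The energy split quoted above (roughly 95% aquifer influx, under 5% injection) is, in material-balance terms, a set of drive indices: each expansion or influx term divided by total reservoir withdrawal. A toy bookkeeping sketch, with hypothetical volumes shaped to echo that split rather than the paper's actual Havlena-Odeh-style equation set:

```python
def drive_indices(oil_expansion, water_influx, water_injected, withdrawal):
    """Each energy term divided by total reservoir withdrawal.
    All volumes in reservoir barrels; hypothetical three-term balance."""
    return {
        "depletion": oil_expansion / withdrawal,
        "aquifer": water_influx / withdrawal,
        "injection": water_injected / withdrawal,
    }

# hypothetical volumes echoing the ~95% aquifer / <5% injection split
idx = drive_indices(oil_expansion=1.0e6, water_influx=95.0e6,
                    water_injected=4.0e6, withdrawal=100.0e6)
# for a consistent balance, the drive indices sum to 1
assert abs(sum(idx.values()) - 1.0) < 1e-9
```

The summation check is the same consistency test that makes credible MB work "leave very little room" for adjusting original hydrocarbons in place.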


2020 ◽  
Vol 13 (2) ◽  
pp. 1
Author(s):  
E. M. Samogim ◽  
T. C. Oliveira ◽  
Z. N. Figueiredo ◽  
J. M. B. Vanini

Two types of combine header (platform) are currently available on the market for soybean harvesting: a conventional header with a revolving reel with metal or plastic teeth that causes the cut crop to fall into a cross auger, and the "draper" header, which uses a fabric or rubber apron instead of a cross auger. Few performance tests of these combine headers for soybean have been carried out in Mato Grosso State. The aim of this work was to evaluate quantitative soybean harvesting losses and performance using the two types of combine header at four travel speeds. The experiment was conducted during the 2014/15 soybean season at the Tamboril farm in the municipality of Pontes e Lacerda, State of Mato Grosso. A randomized-block experimental design was used, evaluating four forward harvesting speeds (4 km h-1, 5 km h-1, 6 km h-1, and 7 km h-1). Natural crop losses, losses caused by the combine harvester (combine header, internal mechanisms, and total losses), and the field performance of each combine were evaluated. Data were submitted to analysis of variance by the F test, and means were compared by the Tukey test at 5% probability. The results show that the draper header presents a smaller total loss and a higher harvested yield when compared with the conventional cross-auger header.


Author(s):  
Jian Liu ◽  
Yong Yu ◽  
Chenqi Zhu ◽  
Yu Zhang

The finite volume method (FVM)-based computational fluid dynamics (CFD) technology has been applied in the non-invasive diagnosis of coronary artery stenosis. Nonetheless, FVM is a time-consuming process. In addition to FVM, the lattice Boltzmann method (LBM) is used in fluid flow simulation. Unlike FVM, which solves the Navier–Stokes equations, LBM directly solves the simplified Boltzmann equation, thus saving computational time. In this study, 12 patients with left anterior descending (LAD) stenosis, diagnosed by CTA, are analysed using FVM and LBM. The velocities, pressures, and wall shear stress (WSS) predicted using FVM and LBM are compared for each patient. In particular, the ratio of the average to the maximum speed at the stenotic part, characterising the degree of stenosis, is compared. Finally, the gold standard for LAD stenosis, invasive fractional flow reserve (FFR), is applied to justify the simulation results. Our results show that LBM and FVM are consistent in blood flow simulation. In regions with a high degree of stenosis, the local flow patterns in the two solvers are slightly different, resulting in minor differences in local WSS estimation and blood speed ratio estimation. Notably, these differences do not result in an inconsistent estimation. Comparison with invasive FFR shows that, in most cases, the non-invasive diagnosis is consistent with FFR measurements. However, in some cases, the non-invasive diagnosis either underestimates or overestimates the degree of stenosis. This deviation is caused by the difference between physiological and simulation conditions, which remains the biggest challenge faced by all CFD-based non-invasive diagnostic methods.


Author(s):  
Karl E. Barth ◽  
Gregory K. Michaelson ◽  
Adam D. Roh ◽  
Robert M. Tennant

This paper focuses on the field performance of a modular press-brake-formed tub girder (PBFTG) system in short-span bridge applications. The scope of this project was to conduct a live-load field test on West Virginia State Project no. S322-37-3.29 00, a bridge utilizing PBFTGs located near Ranger, West Virginia. The modular PBFTG is a shallow trapezoidal box girder cold-formed with press brakes from standard mill plate widths and thicknesses. A technical working group within the Steel Market Development Institute's Short Span Steel Bridge Alliance, led by the current authors, was charged with the development of this concept. Research on PBFTGs has included analyzing the flexural bending capacity using experimental testing and analytical methods. This paper presents the experimental testing procedures and performance of a composite PBFTG bridge.


Author(s):  
Mitsugu Yamaguchi ◽  
Tatsuaki Furumoto ◽  
Shuuji Inagaki ◽  
Masao Tsuji ◽  
Yoshiki Ochiai ◽  
...  

Abstract In die-casting and injection molding, a conformal cooling channel is applied inside the dies and molds to reduce the cycle time. When the internal face of the channel is rough, both cooling performance and tool life are negatively affected. Many methods for finishing the internal face of such channels have been proposed. However, the effects of the channel diameter on the flow of a low-viscosity finishing media and its finishing characteristics for H13 steel have not yet been reported in the literature. This study addresses these deficiencies through the following: the fluid flow in a channel was computationally simulated; the flow behavior of abrasive grains was observed using a high-speed camera; and the internal face of the channel was finished using the flow of a fluid containing abrasive grains. The flow velocity of the fluid with the abrasive grains increases as the channel diameter decreases, and the velocity gradient is low throughout the channel. This enables a reduction in the surface roughness over a short period and ensures uniform finishing in the central region of the channel; however, over-polishing occurs owing to the centrifugal force generated in the entrance region, which causes the form accuracy of the channel to partially deteriorate. The outcomes of this study demonstrate that the observational findings for the finishing process are consistent with the flow simulation results. The flow simulation can be instrumental in designing channel diameters and internal pressures to ensure efficient and uniform finishing for such channels.


2022 ◽  
Vol 14 (1) ◽  
pp. 168781402110704
Author(s):  
Zhuang Dong ◽  
Jian Yang ◽  
Chendi Zhu ◽  
Dimitrios Chronopoulos ◽  
Tianyun Li

This study investigates the vibration power flow behavior and performance of inerter-based vibration isolators mounted on finite and infinite flexible beam structures. Two configurations of vibration isolators with spring, damper, and inerter, as well as different rigidities of finite and infinite foundation structures, are considered. Both the time-averaged power flow transmission and the force transmissibility are studied and used as indices to evaluate the isolation performance. Comparisons are made between the two proposed configurations of inerter-based isolators and the conventional spring-damper isolators to show the potential performance benefits of including an inerter for effective vibration isolation. It is shown that by configuring the inerter, spring, and damper in parallel in the isolator, anti-peaks are introduced in the time-averaged transmitted power and force transmissibility at specific frequencies such that the vibration transmission to the foundation can be greatly suppressed. When the inerter is connected in series with a spring-damper unit and then in parallel with a spring, considerable improvement in vibration isolation can be achieved near the original peak frequency while maintaining good high-frequency isolation performance. The study provides a better understanding of the effects of adding inerters to vibration isolators mounted on a flexible foundation, and it benefits enhanced designs of inerter-based vibration suppression systems.
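The paper's analysis treats flexible beam foundations, but the anti-peak mechanism it describes already appears in the textbook rigid-base case for the parallel spring-damper-inerter configuration. A sketch with hypothetical parameters (not the paper's model or values):

```python
import numpy as np

# Rigid-base sketch only: mass m isolated by a parallel spring (k),
# damper (c) and inerter (b); all parameter values are hypothetical.
m, k, c, b = 1.0, 1.0, 0.05, 0.25

def force_transmissibility(w):
    """|transmitted force| / |applied force| at angular frequency w."""
    num = k - b * w**2 + 1j * c * w           # force path through the isolator
    den = k - (m + b) * w**2 + 1j * c * w     # equation of motion of the mass
    return np.abs(num / den)

w = np.linspace(0.01, 5.0, 5000)
T = force_transmissibility(w)
w_anti = w[np.argmin(T)]   # the inerter's anti-peak, near sqrt(k/b) = 2.0
```

The anti-peak sits where the spring and inerter forces cancel (k = b*w^2); without the inerter (b = 0) the numerator never loses its stiffness term and no anti-peak exists, which is the qualitative benefit the abstract describes.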


2021 ◽  
Author(s):  
Hasan Al-Ibadi ◽  
Karl Stephen ◽  
Eric Mackay

Abstract We introduce a pseudoisation method to upscale polymer flooding in order to capture the flow behaviour of fine-scale models. The method is also designed to improve the predictability of pressure profiles, and it controls the numerical dispersion of coarse-grid models so that the flow behaviour of the fine-scale model is reproduced. To upscale polymer flooding, three levels of analysis are required: we need to honour (a) the fractional-flow solution, (b) the water and oil mobility, and (c) appropriate upscaling of single-phase flow. The outcome of this analysis is a single pseudo relative permeability set that honours the modification that polymer applies to the water viscosity without explicitly changing it. The shape of the relative permeability curves can be chosen to honour the fractional-flow solution of the fine scale using the analytical solution. This can result in a monotonic pseudo relative permeability set, and we call it the Fractional-Flow method. To capture the pressure profile as well, individual relative permeability curves must be chosen appropriately for each phase to ensure the correct total mobility. For polymer flooding, the changes to the water relative permeability include the change to the water viscosity implicitly, thus avoiding the need to include a polymer solute. We call this type of upscaling the Fractional-Flow-Mobility control method. Numerical solutions of the upscaled models obtained using this method were validated against fine-scale models for a 1D homogeneous model as well as for 3D models with randomly distributed permeability for various geological realisations. The recovery factor and water cut matched the fine-scale model very well, and the pressure profile was reasonably predictable using the Fractional-Flow-Mobility control method.
Both the Fractional-Flow and Fractional-Flow-Mobility control methods can be calculated in advance, without running a fine-scale model, because the analysis is based on the analytical solution, even though it may produce a non-monotonic pseudo relative permeability curve. This simplifies the polymer model so that it is much easier and faster to simulate, and it offers the opportunity to quickly predict oil- and water-phase behaviour.
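The core identity behind the pseudoisation, folding the polymer's water-viscosity multiplier into a pseudo water relative permeability so that no polymer solute needs to be transported, can be checked in a few lines. The Corey-type curves, viscosities, and thickening factor below are assumed for illustration:

```python
import numpy as np

mu_w, mu_o = 0.5, 5.0   # cP, hypothetical
thickening = 10.0       # assumed polymer viscosity multiplier on water

def krw(s):
    return s**2          # assumed fine-scale Corey-type curves

def kro(s):
    return (1.0 - s)**2

def frac_flow(s, krw_fn, mu_w_eff):
    """Water fractional flow from phase mobilities."""
    mw = krw_fn(s) / mu_w_eff
    mo = kro(s) / mu_o
    return mw / (mw + mo)

s = np.linspace(0.0, 1.0, 101)
# explicit route: keep krw, thicken the water viscosity (polymer transported)
f_explicit = frac_flow(s, krw, mu_w * thickening)
# pseudo route: divide krw by the multiplier, leave the viscosity untouched
f_pseudo = frac_flow(s, lambda x: krw(x) / thickening, mu_w)
```

Because krw enters the fractional flow only through the water mobility krw/mu_w, dividing krw by the thickening factor is algebraically identical to multiplying the water viscosity by it, which is why the pseudo curve reproduces the polymer fractional-flow solution without a solute.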

