A Parallel Reservoir Simulator for Large-Scale Reservoir Simulation

2002 ◽  
Vol 5 (01) ◽  
pp. 11-23 ◽  
Author(s):  
A.H. Dogru ◽  
H.A. Sunaidi ◽  
L.S. Fung ◽  
W.A. Habiballah ◽  
N. Al-Zamel ◽  
...  

Summary A new parallel, black-oil-production reservoir simulator (Powers) has been developed and fully integrated into the pre- and post-processing graphical environment. Its primary use is to simulate the giant oil and gas reservoirs of the Middle East using millions of cells. The new simulator was designed for parallelism and scalability, with the aim of making megacell simulation a day-to-day reservoir-management tool. Upon its completion, the parallel simulator was validated against published benchmark problems and other industrial simulators. Several giant oil-reservoir studies have been conducted with million-cell descriptions. This paper presents the model formulation, parallel linear solver, parallel locally refined grids, and parallel well management. The benefits of megacell simulation models are illustrated by a real field example used to confirm bypassed oil zones and to obtain a history match in a short time period. With the new technology, preprocessing, construction, running, and post-processing of megacell models is finally practical. A typical history-match run for a field with 30 to 50 years of production takes only a few hours. Introduction With the development of early parallel computers, the attractive speed of these machines drew the attention of oil-industry researchers. Initial questions centered on the following: Can one develop a truly parallel reservoir-simulator code? What type of hardware and programming languages should be chosen? Unlike seismic algorithms, reservoir-simulation algorithms are well known not to be naturally parallel; they are more recursive, and their variables display strong dependencies on one another (strong coupling and nonlinearity). This poses a major challenge for parallelization. On the other hand, if one could develop a parallel code, the speed of computation would increase by at least an order of magnitude; as a result, many large problems could be handled.
This capability would also aid our understanding of fluid flow in complex reservoirs. Additionally, proper handling of reservoir heterogeneities should result in more realistic predictions. Another benefit of megacell description is the minimization of upscaling effects and numerical dispersion. Megacell simulation has a natural application in the world's giant oil and gas reservoirs. For example, a grid size of 50 m or less is widely used for small and medium-sized reservoirs. In contrast, many giant reservoirs in the Middle East use a gridblock size of 250 m or larger; this easily yields a model with more than 1 million cells. It is therefore of particular interest to have a megacell description and still be able to run fast. Such capability is important for the day-to-day reservoir management of these fields. This paper is organized as follows: first, the relevant work in the petroleum-reservoir-simulation literature is reviewed. This is followed by a description of the new parallel simulator and a presentation of the numerical solution and parallelism strategies. (Details of the data structures, well handling, and parallel input/output operations are placed in the appendices.) The main text also contains brief descriptions of the parallel linear solver, locally refined grids, well management, and megacell pre- and post-processing. Next, we address performance and parallel scalability; this is a key section that demonstrates the degree of parallelization of the simulator. The last section presents four real field simulation examples. These cases cover all stages of the simulator and provide actual central-processing-unit (CPU) execution times for each case. As a byproduct, the benefits of megacell simulation are demonstrated by two examples: locating bypassed oil zones, and obtaining a quicker history match. Details of each section can be found in the appendices.
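As a rough illustration of the cell counts mentioned above, the sketch below computes the model size for a uniform Cartesian grid. The 250 m gridblock size comes from the text; the field dimensions and layer count are hypothetical stand-ins for a giant Middle East reservoir.

```python
def cell_count(length_m, width_m, dx_m, layers):
    """Areal cell count times layer count for a uniform Cartesian grid."""
    nx = length_m // dx_m
    ny = width_m // dx_m
    return nx * ny * layers

# Hypothetical 50 km x 25 km giant field, 100 layers, 250 m gridblocks:
giant = cell_count(50_000, 25_000, 250, 100)
print(giant)  # 200 * 100 * 100 = 2,000,000 cells
```

Even this coarse 250 m areal description exceeds a million cells, which is why the paper treats megacell capability as a prerequisite for day-to-day use on such fields.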
Previous Work In the 1980s, research on parallel reservoir simulation intensified with the further development of shared-memory and distributed-memory machines. In 1987, Scott et al.1 presented a Multiple Instruction Multiple Data (MIMD) approach to reservoir simulation. Chien2 investigated parallel processing on shared-memory computers. In early 1990, Li3 presented a parallelized version of a commercial simulator on a shared-memory Cray computer. For distributed-memory machines, Wheeler4 developed a black-oil simulator on a hypercube in 1989. In the early 1990s, Killough and Bhogeswara5 presented a compositional simulator on an Intel iPSC/860, and Rutledge et al.6 developed an Implicit Pressure Explicit Saturation (IMPES) black-oil reservoir simulator for the CM-2 machine. They showed that reservoir models of over 2 million cells could be run on this type of machine with 65,536 processors, and they stated that computational speeds on the order of 1 gigaflop in matrix construction and solution were achievable. In mid-1995, more investigators published reservoir-simulation papers focused on distributed-memory machines. Kaarstad7 presented a 2D oil/water research simulator running on a 16,384-processor MasPar MP-2 machine and showed that a model problem with 1 million gridpoints could be solved in a few minutes of computer time. Rame and Delshad8 parallelized a chemical-flooding code (UTCHEM) and tested it for scalability on a variety of systems, including the Intel iPSC/960, CM-5, Kendall Square, and Cray T3D.

Author(s):  
Anita Theresa Panjaitan ◽  
Rachmat Sudibjo ◽  
Sri Fenny

Y Field, located about 28 km southeast of Jakarta, was discovered in 1989. Three wells have been drilled and suspended. The initial gas in place (IGIP) of the field is 40.53 BSCF. The field will be developed in 2011. In this study, a reservoir simulation model was built to determine the optimum development strategy for the field. The model consists of 1,575,064 grid cells built in a black-oil simulator. Two field development scenarios were defined, with and without a compressor. Simulation results show that the recovery factor at the end of the contract is 61.40% and 62.14% for Scenarios I and II without a compressor, respectively. When a compressor is applied, the recovery factors of Scenarios I and II are 68.78% and 74.58%, respectively. Based on the economic parameters, Scenario II with a compressor is the most attractive case; its IRR, POT, and NPV are 41%, 2.9 years, and 14,808 MUS$, respectively.


2001 ◽  
Vol 4 (02) ◽  
pp. 114-120 ◽  
Author(s):  
V.J. Zapata ◽  
W.M. Brummett ◽  
M.E. Osborne ◽  
D.J. Van Nispen

Summary One of the most perplexing and difficult challenges in the industry is deciding how to develop a new oil or gas field. It is necessary to estimate recoverable reserves, design the most efficient exploitation strategy, decide where and when to drill wells and install surface facilities, and predict the rate of production. This requires a clear understanding of energy distribution and fluid movements throughout the entire system, under any given operational scenario or market-demand situation. Even after a reservoir-development plan is selected, there are many possible facility designs, each with different investment and operating costs. An important, but not always considered, fact is that each facility scheme could result in different future production rates owing to the various types, sizes, and configurations of fluid-flow facilities. Selecting the best design for the asset requires the most accurate production forecasts possible over the full life cycle. No other single technology provides this insight as well as tightly coupled reservoir and facility simulation does, because it combines all pertinent geological and engineering data into a single, comprehensive, dynamic model of the entire oilfield flow system. An integrated oilfield simulation system accounts for all dynamic flow effects and provides a test environment for quickly and accurately comparing alternative designs. This paper provides a brief background of this technology and reviews a major development project where it is currently being applied. Finally, we describe some recent significant advances in the technology that make it more stable, accurate, and rigorous. Introduction Finite-difference reservoir simulation is widely used to predict the production performance of oil and gas fields.
This is usually done in a "stand-alone" mode, where individual well performance is commonly calculated from pregenerated multiphase wellbore flow tables that cover various ranges of wellhead and bottomhole pressures, gas/oil ratios (GOR's) and water/oil ratios (WOR's). The reservoir simulator determines the predicted production rate from these tables, normally assuming a fixed wellhead pressure and using a flowing bottomhole pressure calculated by the reservoir simulator. With this scheme it is not possible to consider the changing flow-resistance effects of the piping system as various fluids merge or split in the surface network. Neglecting this interaction of the surface network can, in many cases, introduce substantial errors into predicted performance. Basing multimillion- (in some cases, billion-) dollar exploitation designs on performance predictions that are suboptimal can be very detrimental to the asset's long-range profitability. To help eliminate this problem, considerable attention is being given to coupling reservoir simulators and multiphase facility network simulators to improve the accuracy of forecasting. Landscape Surface-network simulation technology was first introduced in 1976.1 Although successfully applied in selected cases, the concept was not widely adopted because of the excessive additional computing demands on computers of that era. In those earlier applications, the time consumed by the facility calculations could actually exceed the reservoir calculations.2,3 As computer performance has increased by orders of magnitude, this has become less of an issue. Reservoir model sizes have increased dramatically with much finer grids that take advantage of the increased computer power, but there was no need for a corresponding increase in the size of the facility models. 
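The stand-alone scheme described above amounts to a table lookup: the reservoir simulator interpolates a pregenerated wellbore flow table to get a production rate. A minimal sketch of that idea follows; the table values, the fixed-wellhead-pressure assumption, and the one-dimensional (bottomhole-pressure-only) lookup are all simplifying assumptions, not the actual table structure of any particular simulator.

```python
import bisect

# Hypothetical pregenerated wellbore flow table for one fixed wellhead
# pressure: flowing bottomhole pressure (psia) -> surface oil rate (STB/D).
bhp_nodes = [1000.0, 2000.0, 3000.0, 4000.0]
rate_nodes = [0.0, 1500.0, 3200.0, 5000.0]

def rate_from_table(bhp):
    """Linear interpolation between tabulated nodes; clamp at the ends."""
    if bhp <= bhp_nodes[0]:
        return rate_nodes[0]
    if bhp >= bhp_nodes[-1]:
        return rate_nodes[-1]
    i = bisect.bisect_right(bhp_nodes, bhp) - 1
    t = (bhp - bhp_nodes[i]) / (bhp_nodes[i + 1] - bhp_nodes[i])
    return rate_nodes[i] + t * (rate_nodes[i + 1] - rate_nodes[i])

print(rate_from_table(2500.0))  # halfway between 1500 and 3200 -> 2350.0
```

A real table adds further dimensions (GOR, WOR, wellhead pressure), but the key limitation is visible even here: the table is frozen at generation time, so changing flow resistance in the shared surface network never feeds back into the lookup.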
Today, with tightly coupled reservoir/wellbore/surface models, the facility calculations are a fairly small part of the overall computing time, and there is considerable effort in the industry to build these types of systems.4,5 Chevron's current tightly coupled oilfield simulation system is CHEARS®/PIPESOFT-2™. CHEARS® is a fully implicit 3D reservoir simulator with black-oil, compositional, thermal, miscible, and polymer formulations. It has fully implicit dual-porosity and dual-permeability options and unlimited multiple-level local grid refinement. PIPESOFT-2™ is a comprehensive multiphase wellbore/surface-network simulator. It has black-oil, compositional, CO2, steam, and non-Newtonian fluid capabilities, and it can solve any type of complex nested looping, both surface and subsurface. The coupling is done at the wellbore completion interval, which is the natural domain boundary between the flow systems. We refer to our implementation as "tightly coupled" because information is dynamically exchanged directly between the simulators without any intermediate intervention. A simple representation of the interaction between the simulators is shown in Fig. 1. Gorgon Field Example The following is an example of how this technology is currently being used. The Gorgon field is a Triassic gas accumulation estimated to contain over 20 Tscf of gas, located 130 km offshore northwest Australia in 300 m of water (Fig. 2). It is currently undergoing development studies for an LNG project. Field and Model Description. The field is 45 km long and 9 km wide, and it comprises more than 2000 m of Triassic fluvial Mungaroo formation in angular discordance with a Jurassic-age unconformity. It has been subdivided into 11 vertical intervals (or zones) on the basis of regional sequence boundaries and depositional systems. These 11 zones were first modeled individually with an object-based modeling technique before being stacked into a 715-layer full-field geologic model.
This model was subsequently scaled up to a 46-layer reservoir simulation model, reducing the size of the model from 4.5 million cells to 290,000 cells. While the scaleup process preserved the original 11 zone boundaries, the majority of the layers were located in regions identified as key flow units. In addition to vertical subdivision, seismic and appraisal well data suggest structural compartmentalization, resulting in six major fault blocks. After deactivating appropriate cells, the final simulation model contained 50,000 active cells and was initialized with 35 independent pressure regions. Each of these regions corresponds to a single zone in a single fault block.
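The tight coupling at the completion interval described in this abstract can be caricatured as a fixed-point exchange: the reservoir side offers a rate for a given bottomhole pressure, the network side returns the bottomhole pressure required to move that rate to surface, and the two iterate until they agree. The sketch below uses hypothetical linear stand-ins for both simulators; it illustrates the exchange pattern only, not the actual CHEARS®/PIPESOFT-2™ algorithms.

```python
# Toy reservoir/network coupling at the completion interval.
P_RES = 4000.0   # reservoir pressure, psia (hypothetical)
PI = 2.0         # productivity index, STB/D/psi (hypothetical)

def reservoir_inflow(bhp):
    """Reservoir side: rate delivered for a given bottomhole pressure."""
    return PI * (P_RES - bhp)

def network_bhp(rate):
    """Network side: bottomhole pressure needed to push `rate` through
    the wellbore/surface system (hypothetical linear pressure-drop model)."""
    return 1000.0 + 0.5 * rate

# Fixed-point iteration: exchange rate and BHP until both sides agree.
bhp = 2000.0
for _ in range(50):
    q = reservoir_inflow(bhp)
    new_bhp = network_bhp(q)
    if abs(new_bhp - bhp) < 1e-6:
        break
    bhp = 0.5 * (bhp + new_bhp)   # damping for stability
print(round(q), round(bhp))       # converges where inflow and outflow balance
```

For these stand-in models the balance point is BHP = 2,500 psia at 3,000 STB/D; a stand-alone table lookup with a frozen network curve would miss any shift in `network_bhp` caused by other wells sharing the surface system.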


1984 ◽  
Vol 24 (1) ◽  
pp. 170 ◽  
Author(s):  
S.T. Henzell ◽  
A.A. Young ◽  
A.K. Khurana

A three-dimensional, single-phase reservoir simulation model of the entire Gippsland Basin aquifer system, together with its oil and gas reservoirs, was first developed in 1973. It was replaced by an improved version in 1975. Now, after fifteen years of production, pressure predictions from the model still compare very well with data obtained from current exploration and development wells. The model, consisting of 4186 grid blocks, incorporates the geological description and the pressure and fluid distribution of the basin. The geological description includes porosity, net-to-gross ratio, and permeability, with fluid properties representing the aquifer. A well-established initial pressure/depth relationship for the Gippsland Basin is included in the model. Although it is a single-phase model, oil and gas reservoirs are represented by pseudo rock and fluid properties. The model is regularly updated with historical and forecast production rates in order to predict pressure behaviour, and therefore aquifer strength, in various areal and stratigraphic locations in the basin. Such information is essential for defining external boundary conditions in individual reservoir simulation models and assists in gas deliverability forecasts. In exploration wells, measured pressures are compared with model predictions to help understand the degree of pressure communication with the basin aquifer and hence the level of pressure support. Detailed predictions of the pressure gradients expected in both exploration and development wells are often of assistance in identifying fluid contacts, overpressure, and reservoirs with limited communication with the aquifer.


2021 ◽  
Author(s):  
Usuf Middya ◽  
Abdulrahman Manea ◽  
Maitham Alhubail ◽  
Todd Ferguson ◽  
Thomas Byer ◽  
...  

Abstract Reservoir simulation computational costs have been growing continuously because of high-resolution reservoir characterization, increasing model complexity, and uncertainty-analysis workflows. Reducing simulation costs by upscaling is often necessary to meet operational requirements, but fast-evolving HPC technologies offer opportunities to reduce cost without compromising fidelity. This work presents a novel in-house massively parallel full-physics reservoir simulator running on the emerging GPU architecture. Almost all of the simulation kernels have been designed and implemented to honor the GPU SIMD programming paradigm. These kernels include physical-property calculations, phase-equilibrium computations, Jacobian construction, linear and nonlinear solvers, and wells. Novel techniques are devised in various kernels to expose enough parallelism and to ensure that the control- and data-flow patterns are well suited to the GPU environment. Mixed-precision computation is also employed when appropriate (e.g., in derivative calculation) to reduce computational costs without compromising solution accuracy. The GPU implementation of the simulator is tested and benchmarked using various reservoir models, ranging from the synthetic SPE10 benchmark (Christie & Blunt, 2001) to several industrial-scale models. These real field models range in size from tens of millions of cells to more than a billion cells, with black-oil and multicomponent compositional fluids. The GPU simulator is benchmarked on the IBM AC922 massively parallel architecture with tens of NVIDIA Volta V100 GPUs. To compare performance with CPU architectures, an optimized CPU implementation of the simulator is benchmarked on the IBM AC922 CPUs and on a cluster consisting of thousands of Intel Haswell-EP Xeon® E5-2680 v3 CPUs. A detailed analysis of several numerical experiments comparing the simulator's performance on the GPU and CPU architectures is presented.
In almost all of the cases, the analysis shows that the use of hardware acceleration offers substantial benefits in terms of wall time and power consumption. This novel in-house full-physics, black-oil and compositional reservoir simulator employs several novel techniques in various simulation kernels to ensure full utilization of the GPU resources. Detailed analysis is presented to highlight the simulator performance in terms of runtime reduction, parallel scalability and power savings.
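The mixed-precision idea mentioned in this abstract, reduced precision in derivative calculation while keeping the residual in full precision, can be sketched with a toy Newton iteration. The scalar residual, the finite-difference step, and the use of Python's `struct` module to emulate float32 rounding are all illustrative assumptions; the paper's actual kernels and precision choices are not specified here.

```python
import struct

def to_f32(x):
    """Round a Python float (double) to IEEE-754 single precision."""
    return struct.unpack('f', struct.pack('f', x))[0]

def residual(p):
    """Toy nonlinear residual, evaluated in double precision."""
    return p * p - 2.0

def derivative_f32(p, h=1e-3):
    """Finite-difference derivative rounded to single precision: the
    Jacobian only steers Newton, so reduced precision is tolerable."""
    return to_f32((residual(p + h) - residual(p)) / h)

# Mixed-precision Newton: double-precision residual, reduced-precision Jacobian.
p = 1.0
for _ in range(20):
    r = residual(p)                 # convergence is judged in full precision
    if abs(r) < 1e-12:
        break
    p -= r / derivative_f32(p)      # update uses the cheap derivative
print(p)                            # close to sqrt(2)
```

The inexact Jacobian only slows the contraction rate slightly; the converged answer is still accurate to double precision because the residual check is done in full precision, which is the essence of the cost-without-accuracy-loss claim.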


2015 ◽  
Author(s):  
A. Kozlova ◽  
Z. Li ◽  
J.R. Natvig ◽  
S. Watanabe ◽  
Y. Zhou ◽  
...  

SPE Journal ◽  
2016 ◽  
Vol 21 (06) ◽  
pp. 2049-2061 ◽  
Author(s):  
A. Kozlova ◽  
Z. Li ◽  
J.R. Natvig ◽  
S. Watanabe ◽  
Y. Zhou ◽  
...  

Summary Simulation technology is constantly evolving to take advantage of the best available computational algorithms and computing hardware. A new technology is being jointly developed by an integrated energy company and a service company to provide a step change in reservoir-simulator performance. Multiscale methods have developed rapidly during the past few years, and multiscale technology promises to improve simulation run time by an order of magnitude compared with current simulator performance in traditional reservoir-engineering workflows. Following that trend, the two companies have been collaborating on a multiscale algorithm that significantly increases the performance of reservoir simulators. In this paper, we report the development of multiscale black-oil reservoir-simulation technology in a reservoir simulator used by the industry, as well as the performance and accuracy of the results obtained with this implementation. The multiscale method has proved to be accurate and reliable for large real-data models, and the new solver is capable of solving very large models an order of magnitude faster than the current commercial version of the solver.


2007 ◽  
Vol 10 (05) ◽  
pp. 489-499 ◽  
Author(s):  
Kassem Ghorayeb ◽  
Jonathan Anthony Holmes

Summary Black-oil reservoir simulation still has wide application in the petroleum industry because it is far less demanding computationally than compositional simulation. A principal limitation of black-oil reservoir simulation, however, is that it does not provide the detailed compositional information necessary for surface-process modeling. Black-oil delumping overcomes this limitation by converting a black-oil wellstream into a compositional wellstream, enabling the composition and component molar rates of a production well in a black-oil reservoir simulation to be reconstituted. We present a comprehensive black-oil delumping method based primarily on the compositional information generated in the depletion process that is used initially to provide data for the black-oil simulation in a typical workflow. Examples presented in this paper show the accuracy of this method for different depletion processes: natural depletion, water injection, and gas injection. The paper also presents a technique for accurately applying the black-oil delumping method to wells encountering crossflow. Introduction With advances in computing speed, it is becoming more typical to use a fully compositional fluid description in hydrocarbon reservoir simulation. However, the faster computers become, the stronger the simulation engineer's tendency to build more challenging (and thus more CPU-intensive) models, and compositional simulation of today's multimillion-cell models is still practically infeasible. Black-oil fluid representation is a proven technique that continues to find wide application in reservoir simulation. However, an important limitation of black-oil reservoir simulation is the lack of detailed compositional information necessary for surface-process modeling. The black-oil delumping technique described in this paper provides the needed compositional information, yet adds negligible computational time to the simulation.
Delumping a black-oil wellstream consists of retrieving the detailed component molar rates needed to convert the black-oil wellstream into a compositional wellstream; it reconstitutes the composition and component molar rates of the production stream. Black-oil delumping can be achieved with differing degrees of accuracy, using options that range from setting a constant oil and gas composition for the whole run to using the results of a depletion process: constant-volume depletion (CVD), constant-composition expansion (CCE), or differential liberation (DL). The simplest method is to assign a fixed composition (component mole fractions) to the stock-tank oil and gas. This can be applied over the whole reservoir, or, if the hydrocarbon mixture properties vary across the reservoir, different oil and gas compositions can be reassigned at any time during the run. Some black-oil simulators have an API-tracking feature that allows oils with different properties to mix within the reservoir; the pressure/volume/temperature (PVT) properties of the oil mixture are parameterized by the oil surface density. To provide a delumping option compatible with API tracking, stock-tank oil and gas compositions may be tabulated against the density of oil at surface conditions.
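The simplest delumping option described above, fixed stock-tank oil and gas compositions, can be sketched directly: convert surface oil and gas rates to molar rates, then split each by the assigned mole fractions. The three pseudocomponents, the compositions, the stock-tank-oil molar density, and the standard-conditions molar volume below are all hypothetical illustration values, not the paper's data.

```python
# Fixed stock-tank compositions (mole fractions, hypothetical values).
X_OIL = {"C1": 0.05, "C2-C6": 0.25, "C7+": 0.70}
Y_GAS = {"C1": 0.85, "C2-C6": 0.14, "C7+": 0.01}

SCF_PER_LBMOL = 379.5   # molar volume of gas at standard conditions

def delump(q_oil_stb, q_gas_scf, oil_lbmol_per_stb=1.2):
    """Convert black-oil surface rates into component molar rates (lb-mol/D).
    `oil_lbmol_per_stb` is a hypothetical stock-tank-oil molar density."""
    n_oil = q_oil_stb * oil_lbmol_per_stb       # moles of stock-tank oil
    n_gas = q_gas_scf / SCF_PER_LBMOL           # moles of surface gas
    return {c: n_oil * X_OIL[c] + n_gas * Y_GAS[c] for c in X_OIL}

rates = delump(q_oil_stb=1000.0, q_gas_scf=800_000.0)
for comp, n in rates.items():
    print(comp, round(n, 1))
```

The more accurate options in the paper replace these fixed compositions with compositions taken from the depletion process (CVD, CCE, or DL) used to generate the black-oil PVT tables in the first place, but the reconstitution step has the same shape.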

