Practical Use of Scale Up and Parallel Reservoir Simulation Technologies in Field Studies

1999 ◽  
Vol 2 (04) ◽  
pp. 368-376 ◽  
Author(s):  
H.A. Tchelepi ◽  
L.J. Durlofsky ◽  
W.H. Chen ◽  
A. Bernath ◽  
M.C.H. Chien

Summary Scale up and parallel reservoir simulation represent two distinct approaches for the simulation of highly detailed geological or geostatistical reservoir models. In this paper, we discuss the complementary use of these two approaches for practical, large scale reservoir simulation problems. We first review our recently developed approaches for upscaling and parallel reservoir simulation. Then, several practical large scale modeling problems are addressed, including simulations of multiple realizations of a waterflood pattern element, a four-well sector model, and a large, 130-well segment model. It is shown that, for the pattern waterflood model, significantly coarsened models provide reliable results for many aspects of the reservoir flow. However, the simulation of at least some of the fine scale geostatistical realizations, accomplished using our parallel reservoir simulation technology, is useful in determining the appropriate level of scale up. For models with a large number of wells, the upscaled models can lose accuracy as the grid is coarsened. In these cases, although field-wide performance can still be predicted with reasonable accuracy, parallel reservoir simulation is required to maintain sufficiently refined models capable of producing accurate flow results on a well-by-well basis. Finally, some issues concerning the use of highly detailed models in practical simulation studies are discussed.

Introduction Reservoir description and flow modeling capabilities continue to benefit from advances in computing hardware and software technologies. However, the level of detail typically included in reservoir characterizations continues to exceed the capabilities of traditional reservoir flow simulators by a significant margin. This resolution gap, due to the much larger computational requirements of flow simulation, has driven the development of two specific technologies: scale up and parallel reservoir simulation. The two represent very distinct approaches: scale up methods attempt to coarsen the simulation model to fit the hardware, while parallel reservoir simulation technology attempts to extend computing capabilities to accommodate the detailed model. The purpose of this paper is to present and discuss ways in which to utilize these two technologies in a complementary fashion for the solution of practical large scale reservoir simulation problems. Toward this end, we first discuss our previously developed capabilities for scale up1,2 and parallel reservoir simulation.3 Next, the two technologies are applied to several reservoirs represented via highly detailed (i.e., on the order of 1 million cells) geostatistical models. Various production scenarios are considered. It will be shown how the direct simulation of the highly detailed models (using parallel reservoir simulation technology on an IBM SP) can be used to assess and guide the scale up procedure and to establish the appropriate level of coarsening allowable. We will show that, once this level is established, upscaled models can be used to evaluate multiple geostatistical realizations. We additionally apply the detailed simulation results to develop general guidelines for the degree of scale up allowable for various types of simulation models, e.g., pattern, sector, and large segment models. Our general conclusion is that our scale up technology, as currently used, is quite reliable when sufficient refinement is maintained in the coarsened model.
We show that when many wells are to be simulated, the upscaled models can begin to lose accuracy, particularly when well-by-well production is considered. This is due in part to the fact that, in the coarse models, wells are separated by only a few grid blocks, which degrades accuracy.

There have been many previous studies directed toward the development of parallel reservoir simulation technology and many studies aimed at the development of scale up techniques. To our knowledge, this is the first effort that considers the complementary use of both. Here we very briefly review the recent literature on both parallel reservoir simulation and upscaling techniques. For more complete discussions of previous work, refer to Refs. 1-3. Traditional techniques for upscaling rely on the use of pseudorelative permeabilities. Although often applied in practice, the use of pseudorelative permeabilities can lead to inaccuracies in some cases.4,5 This is largely due to the high degree of process dependency inherent in the pseudorelative permeability approach; i.e., pseudorelative permeability curves are really only appropriate for the conditions under which they are generated. The deficiencies in the traditional pseudorelative permeability methodology have motivated work in several areas. This includes the generation of more robust pseudorelative permeabilities,6,7 the use of higher moments of the fine scale variables,5 and the nonuniform coarsening approach applied in this study (discussed in Nonuniform Coarsening Method for Scale Up). Generalizations of the nonuniform coarsening approach described in Refs. 1 and 2 have also been presented.8,9

Parallel reservoir simulation is an area of active research. Recent publications emphasize the development of scalable algorithms designed to run efficiently on a variety of parallel platforms.10–13 Most recent implementations involve distributed memory platforms such as clusters of workstations. The typical size of a simulation model run in parallel is on the order of 1 (or a few) million grid blocks, though results for a 16.5 million cell model have been reported.11 Most parallel implementations are based on message passing techniques such as the Message Passing Interface (MPI) standard. Several of the parallel simulation algorithms, including our own, are based on a multilevel domain decomposition approach. This entails communication between domains in a manner analogous to that used in standard domain decomposition approaches.
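To illustrate the interdomain communication the last sentence refers to, here is a minimal sketch of a ghost-cell (halo) exchange in a one-dimensional domain decomposition, assuming mpi4py is available; it is an illustrative sketch, not the authors' simulator, and the array names and sizes are hypothetical.

```python
# Minimal sketch of halo (ghost-cell) exchange between neighboring
# subdomains in a 1D domain decomposition; illustrative only.
# Run with, e.g.: mpirun -n 4 python halo_exchange.py
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

n_local = 100                 # interior cells owned by this process
u = np.zeros(n_local + 2)     # one ghost cell at each end
u[1:-1] = rank                # placeholder cell values (e.g., pressure)

left = rank - 1 if rank > 0 else MPI.PROC_NULL
right = rank + 1 if rank < size - 1 else MPI.PROC_NULL

# Each solver iteration, boundary values are exchanged with both
# neighbors so interface fluxes can be computed consistently.
comm.Sendrecv(sendbuf=u[1:2], dest=left, recvbuf=u[-1:], source=right)
comm.Sendrecv(sendbuf=u[-2:-1], dest=right, recvbuf=u[0:1], source=left)
```

In a multilevel scheme of the kind cited above, the same exchange pattern is applied at each grid level, which keeps per-iteration communication proportional to the subdomain surface rather than its volume.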

2021 ◽  
Vol 119 (1) ◽  
pp. e2113750119
Author(s):  
Arthur N. Montanari ◽  
Chao Duan ◽  
Luis A. Aguirre ◽  
Adilson E. Motter

The quantitative understanding and precise control of complex dynamical systems can only be achieved by observing their internal states via measurement and/or estimation. In large-scale dynamical networks, it is often difficult or physically impossible to have enough sensor nodes to make the system fully observable. Even if the system is in principle observable, high dimensionality poses fundamental limits on the computational tractability and performance of a full-state observer. To overcome the curse of dimensionality, we instead require the system to be functionally observable, meaning that a targeted subset of state variables can be reconstructed from the available measurements. Here, we develop a graph-based theory of functional observability, which leads to highly scalable algorithms to 1) determine the minimal set of required sensors and 2) design the corresponding state observer of minimum order. Compared with the full-state observer, the proposed functional observer achieves the same estimation quality with substantially less sensing and fewer computational resources, making it suitable for large-scale networks. We apply the proposed methods to the detection of cyberattacks in power grids from limited phase measurement data and the inference of the prevalence rate of infection during an epidemic under limited testing conditions. The applications demonstrate that the functional observer can significantly scale up our ability to explore otherwise inaccessible dynamical processes on complex networks.
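As a concrete companion to the abstract (a dense-matrix sketch under stated assumptions, not the paper's scalable graph-based algorithms), the classical algebraic test for functional observability checks that appending the target matrix F to the observability matrix of (A, C) does not increase its rank:

```python
# Minimal functional-observability test: the targeted functional z = F x
# is reconstructible from measurements y = C x iff
# rank([O; F]) == rank(O), where O stacks C, CA, ..., CA^(n-1).
import numpy as np

def observability_matrix(A, C):
    n = A.shape[0]
    blocks, M = [], C.copy()
    for _ in range(n):
        blocks.append(M)
        M = M @ A
    return np.vstack(blocks)

def functionally_observable(A, C, F, tol=1e-9):
    O = observability_matrix(A, C)
    return (np.linalg.matrix_rank(np.vstack([O, F]), tol)
            == np.linalg.matrix_rank(O, tol))

# Toy system: the sensor reads only x2 + x3, so the full state is NOT
# observable, yet the functional z = x2 + x3 is.
A = np.diag([1.0, 2.0, 2.0])
C = np.array([[0.0, 1.0, 1.0]])
print(functionally_observable(A, C, np.array([[0.0, 1.0, 1.0]])))  # True
print(functionally_observable(A, C, np.array([[1.0, 0.0, 0.0]])))  # False
```

The rank test makes the dimensionality argument explicit: a functional observer only needs to reconstruct the subspace spanned by F, which is why it can be of much lower order than a full-state observer.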


SPE Journal ◽  
2014 ◽  
Vol 19 (05) ◽  
pp. 832-844 ◽  
Author(s):  
Faruk O. Alpak ◽  
Frans van der Vlugt

Summary A set of algorithms, called the shale-drape function (SDF), has been developed that incorporates bounding shales (shale drapes) for channels, channel belts (also known as meander belts), lobes, and lobe complexes in 3D geologic models used for reservoir simulation. Shale drapes can have a significant impact on the recovery efficiency of clastic reservoirs. Therefore, they need to be modeled when present in significant quantities (in general, more than 50 to 70% in terms of areal coverage). The function incorporates shale drapes into a geologic model with an iterative process that creates shale layers over the entire surface of reservoir objects and then places ellipsoid-shaped holes into the shale surfaces until a desired areal coverage is reached. The application workflow recommends gridding the simulation model along the boundaries of stratigraphic objects, thereby ensuring that the shales can be realistically represented in the fine-scale geomodel and preserved in the post-upscaling simulation model.
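To make the iterative hole-punching step concrete, here is a schematic sketch that carves elliptical holes into a fully draped 2D surface grid until a target areal coverage is reached; the grid representation and hole-size distributions are assumptions for illustration, not the proprietary SDF.

```python
# Schematic sketch of shale-drape hole punching: start from a fully
# draped surface and remove elliptical patches until the remaining
# shale coverage drops to the target fraction. Illustrative only.
import numpy as np

rng = np.random.default_rng(0)

def punch_holes(nx, ny, target_coverage, a_mean=6.0, b_mean=3.0):
    shale = np.ones((nx, ny), dtype=bool)   # True = shale present
    xx, yy = np.meshgrid(np.arange(nx), np.arange(ny), indexing="ij")
    while shale.mean() > target_coverage:
        cx, cy = rng.uniform(0, nx), rng.uniform(0, ny)          # hole center
        a, b = rng.exponential(a_mean), rng.exponential(b_mean)  # semi-axes
        hole = (((xx - cx) / max(a, 1.0)) ** 2
                + ((yy - cy) / max(b, 1.0)) ** 2) <= 1.0
        shale[hole] = False                  # carve an elliptical hole
    return shale

drape = punch_holes(200, 100, target_coverage=0.60)
print(f"areal shale coverage: {drape.mean():.2f}")
```

Iterating until the measured coverage matches the target mirrors the abstract's description; in the real workflow the draped surfaces follow the 3D geometry of channels and lobes rather than a flat grid.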


SPE Journal ◽  
2008 ◽  
Vol 13 (04) ◽  
pp. 382-391 ◽  
Author(s):  
Vibeke Eilen J. Haugen ◽  
Geir Naevdal ◽  
Lars-Joergen Natvik ◽  
Geir Evensen ◽  
Aina M. Berg ◽  
...  

Summary This paper applies the ensemble Kalman filter (EnKF) to history match a North Sea field model. This is, as far as we know, one of the first published studies in which the EnKF is applied in a realistic setting using real production data. The reservoir-simulation model has approximately 45,000 active grid cells, and 5 years of production data are assimilated. The estimated parameters consist of the permeability and porosity fields, and the results are compared with a model previously established using a manual history-matching procedure. It was found that the EnKF estimate improved the match to the production data. This study therefore supported previous findings from synthetic models that the EnKF may provide a useful tool for history matching reservoir parameters such as the permeability and porosity fields.

Introduction The EnKF developed by Evensen (1994, 2003, 2007) is a statistical method suitable for data assimilation in large-scale nonlinear models. It is a Monte Carlo method in which model uncertainty is represented by an ensemble of realizations. The prediction of the estimate and its uncertainty is performed by ensemble integration using the reservoir-simulation model. The method provides error estimates at any time based on information from the ensemble. When production data are available, a variance-minimizing scheme is used to update the realizations. The EnKF provides a general and model-independent formulation and can be used to improve the estimates of both the parameters and variables in the model. The method has previously been applied in a number of applications [e.g., in dynamical ocean models (Haugen and Evensen 2002), in model systems describing ocean ecosystems (Natvik and Evensen 2003a, 2003b), and in applications within meteorology (Houtekamer et al. 2005)]. This shows that the EnKF is capable of handling different types of complex and nonlinear model systems. The method was first introduced into the petroleum industry in studies related to well-flow modeling (Lorentzen et al. 2001, 2003). Nævdal et al. (2002) used the EnKF in a reservoir application to estimate model permeability, focusing on a near-well reservoir model. They showed that there could be great benefit from using the EnKF to improve the model through parameter estimation, and that this could lead to improved predictions. Nævdal et al. (2005) showed promising results estimating the permeability as a continuous field variable in a 2D field-like example. Gu and Oliver (2005) examined the EnKF for combined parameter and state estimation in a standardized reservoir test case. Gao et al. (2006) compared the EnKF with the randomized-maximum-likelihood method and pointed out several similarities between the methods. Liu and Oliver (2005a, 2005b) examined the EnKF for facies estimation in a reservoir-simulation model. This is a highly nonlinear problem in which the probability-density function for the petrophysical properties becomes multimodal, and it is not clear how the EnKF can best handle this. A method was proposed in which the facies distribution for each ensemble member is represented by two normally distributed Gaussian fields using a method called truncated pluri-Gaussian simulation (Lantuéjoul 2002). Wen and Chen (2006) provided another discussion of the EnKF for estimation of the permeability field in a 2D reservoir-simulation model and examined the effect of the ensemble size. Lorentzen et al. (2005) focused on the sensitivity of the results with respect to the choice of initial ensemble using the PUNQ-S3 model. Skjervheim et al. (2007) used the EnKF to assimilate 4D seismic data. It was shown that the EnKF can handle these large data sets and that a positive impact could be found despite the high noise level in the data.

The EnKF has some important advantages when compared with traditional assisted history-matching methods: the result is an ensemble of history-matched models that are all possible model realizations. The data are processed sequentially in time, meaning that new data are easily accounted for when they arrive. The method allows for simultaneous estimation of a huge number of poorly known parameters, such as fields of properties defined in each grid cell. By analyzing the EnKF update equations, it is seen that the actual degrees of freedom in the estimation problem are limited to the ensemble size; one is still able to update the most important features of large-scale models. A limitation of the EnKF is the fact that its computations are based on first- and second-order moments, and there are problems that are difficult to handle, particularly when the probability distributions are multimodal (e.g., when representing a bimodal channel facies distribution). This paper considers the use of the EnKF for estimating dynamic and static parameters, focusing on permeability and porosity, in a field model of a StatoilHydro-operated field in the North Sea. The largest uncertainty in the model is expected to be related to the permeability values, especially in the upper part of the reservoir, where the uncertainty may be as large as 30%.
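To make the variance-minimizing update concrete, the following is a minimal numpy sketch of a generic EnKF analysis step with a linear observation operator and perturbed observations; the names and dimensions are illustrative assumptions, not the implementation used in this study.

```python
# Minimal EnKF analysis step: the forecast covariance is approximated
# from ensemble anomalies and each realization is nudged toward a
# perturbed copy of the observations. Illustrative sketch only.
import numpy as np

rng = np.random.default_rng(42)

def enkf_update(X, H, d, R):
    """X: (n_state, n_ens) forecast ensemble; H: (n_obs, n_state)
    linear observation operator; d: (n_obs,) data; R: obs covariance."""
    n_ens = X.shape[1]
    A = X - X.mean(axis=1, keepdims=True)            # ensemble anomalies
    S = H @ A                                        # anomalies in obs space
    C = S @ S.T / (n_ens - 1) + R                    # innovation covariance
    D = d[:, None] + rng.multivariate_normal(
        np.zeros(len(d)), R, n_ens).T                # perturbed observations
    K = (A @ S.T / (n_ens - 1)) @ np.linalg.inv(C)   # Kalman gain
    return X + K @ (D - H @ X)                       # updated ensemble

# Toy example: update a 2-variable state from one noisy measurement.
X = rng.normal(size=(2, 50))
H = np.array([[1.0, 0.0]])
d = np.array([0.8])
R = np.array([[0.1]])
print(enkf_update(X, H, d, R).mean(axis=1))
```

Because the gain is built from at most n_ens - 1 independent anomalies, the update acts within the ensemble subspace, which is the "degrees of freedom limited to the ensemble size" point made above.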


2021 ◽  
Author(s):  
Humberto Parra ◽  
Kristian Mogensen ◽  
Abdulla Alobeidli

Abstract Reservoir simulation models aim to reproduce, at well, sector, and field level, the pressure and production behavior observed in the historical data. The size and resolution of the models are essentially capped by the available computational resources, as the numerical computations are complex and hardware demanding. For this reason, using simulation models to understand inter-field communication at the regional level has always been a challenge and is rarely pursued; such analyses are usually relegated to simple material balance to evaluate influxes, which lacks the lateral flow vectors needed to identify where volumes are coming from, especially in cases of multiple field interactions. The work presented in this paper illustrates the value of merging existing field-level simulation models into a large-scale regional simulation grid in order to understand pressure disturbances observed in multiple fields offshore Abu Dhabi. Merging simulation models represents a major challenge considering the wide variety of approaches used in the original models and their differing geological complexity, fluid characteristics, depletion regimes, and field development strategies. In this study, thousands of wells and six structures with different fluid and equilibrium regions were used to build the largest reservoir simulation model in Abu Dhabi. The data integration aims to replicate the existing static and dynamic models while addressing, in parallel, the lateral and vertical upscaling issues that arise when moving from very fine to coarser grids. The implications of the change of scale for the repeatability of the HCIIP volumes and the impact of pseudo relative permeability curves on the history match were carefully analyzed during the process. Evaluating the impact of these simplifications on the overall quality of the model was of paramount importance, interrogating whether they affect the model's capability to assess pressure communication and influxes among the fields. The regional simulation model made it possible to understand the effects of the peripheral water injection of a giant field on the nearby satellite fields, as well as the effects of these interactions on pressure and oil saturation changes through time. Fields and structures separated by large distances (20 and 40 km away) can eventually see pressure disturbances after very long periods of time (up to 300 psi over a couple of decades in some cases). Although the evidence for pressure changes is very clear and supported by time-lapse RFT/MDT data, the work also showed that saturation changes are not very evident, or can be considered marginal, between fields separated by large distances. This work represents an alternative and more accurate approach for evaluating communication between nearby fields and for quantifying the boundary conditions needed to restore models to their original state before nearby interference, allowing proper initialization of the fine-scale simulation models at pre-production status.


Author(s):  
S. Pragati ◽  
S. Kuldeep ◽  
S. Ashok ◽  
M. Satheesh

One of the challenges in the treatment of disease is the delivery of efficacious medication at an appropriate concentration to the site of action in a controlled and continual manner. Nanoparticles represent an important particulate carrier system developed accordingly. Nanoparticles are solid colloidal particles ranging in size from 1 to 1000 nm and composed of macromolecular material. Nanoparticles can be polymeric or lipidic (SLNs). Industry estimates suggest that approximately 40% of lipophilic drug candidates fail due to solubility and formulation stability issues, prompting significant research activity in advanced lipophile delivery technologies. Solid lipid nanoparticle technology represents a promising new approach to lipophile drug delivery, and solid lipid nanoparticles (SLNs) are an important advancement in this area. The bioacceptable and biodegradable nature of SLNs makes them less toxic than polymeric nanoparticles. Together with their small size, which prolongs circulation time in blood, their feasible scale up to large-scale production, and the absence of a burst effect, this makes them interesting candidates for study. In the present review, this new approach is discussed in terms of preparation, advantages, characterization, and special features.


2020 ◽  
Vol 27 (2) ◽  
pp. 105-110 ◽  
Author(s):  
Niaz Ahmad ◽  
Muhammad Aamer Mehmood ◽  
Sana Malik

In recent years, microalgae have emerged as an alternative platform for the large-scale production of recombinant proteins for different commercial applications. As a production platform, it has several advantages, including rapid growth, easy scale up, and the ability to grow with or without an external carbon source. Genetic transformation of several species has been established. Of these, Chlamydomonas reinhardtii has become particularly attractive for its potential to express foreign proteins inexpensively. All three of its genomes – nuclear, mitochondrial, and chloroplastic – have been sequenced. As a result, a wealth of information about its genetic machinery and protein expression mechanisms (transcription, translation, and post-translational modifications) is available. Over the years, various molecular tools have been developed for the manipulation of all these genomes. Various studies show that transformation of the chloroplast genome has several advantages over nuclear transformation from the biopharming point of view. According to a recent survey, over 100 recombinant proteins have been expressed in algal chloroplasts. However, the expression levels achieved in the algal chloroplast are generally lower than those achieved in the chloroplasts of higher plants. Work is therefore needed to make algal chloroplast transformation commercially competitive. In this review, we discuss some examples from algal research that could play a role in making the algal chloroplast commercially successful.


2021 ◽  
Vol 102 (8) ◽  
pp. 8-13
Author(s):  
Thomas Hatch

Taking advantage of the possibilities for learning outside of school requires us to build on what we know about why it is so hard to sustain and scale up unconventional educational experiences within conventional schools. To illustrate the opportunities and challenges, Thomas Hatch describes a large-scale approach to project-based learning developed in a camp in New Hampshire and incorporated in a Brooklyn school, a trip-based program in Detroit, and Singapore’s systemic embrace of learning outside school. By understanding the conditions that can sustain alternative instructional practices, educators can find places to challenge the boundaries of schooling and create visions of the possible that exceed current constraints.


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Mulalo M. Muluvhahothe ◽  
Grant S. Joseph ◽  
Colleen L. Seymour ◽  
Thinandavha C. Munyai ◽  
Stefan H. Foord

Abstract High-altitude-adapted ectotherms can escape competition from dominant species by tolerating low temperatures at cooler elevations, but climate change is eroding such advantages. Studies evaluating broad-scale impacts of global change on high-altitude organisms often overlook the mitigating role of biotic factors. Yet, at fine spatial scales, vegetation-associated microclimates provide refuges from climatic extremes. Using one of the largest standardised data sets collected to date, we tested how ant species composition and functional diversity (i.e., the range and value of species traits found within assemblages) respond to large-scale abiotic factors (altitude, aspect) and fine-scale factors (vegetation, soil structure) along an elevational gradient in tropical Africa. Altitude emerged as the principal factor explaining species composition. Analysis of the nestedness and turnover components of beta diversity indicated that ant assemblages are specific to each elevation, so species are not filtered out but replaced with new species as elevation increases. Similarity of assemblages over time (assessed using beta decay) did not change significantly at low and mid elevations but declined at the highest elevations. Assemblages also differed between northern and southern mountain aspects, although at the highest elevations, composition was restricted to a set of species found on both aspects. Functional diversity was not explained by large-scale variables like elevation, but by factors associated with elevation that operate at fine scales (i.e., temperature and habitat structure). Our findings highlight the significance of fine-scale variables in predicting organisms' responses to changing temperature, offering management possibilities that might dilute climate change impacts, and urging caution when predicting assemblage responses using climate models alone.
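For readers unfamiliar with the decomposition used above, here is a minimal sketch of Baselga's pairwise partition of Sørensen dissimilarity into turnover and nestedness components; the species sets are hypothetical and this is not the study's analysis code.

```python
# Baselga-style partition of pairwise beta diversity: total Sorensen
# dissimilarity = turnover (Simpson) + nestedness-resultant component.
def beta_partition(site1, site2):
    """site1, site2: sets of species observed at two elevations."""
    a = len(site1 & site2)                  # shared species
    b = len(site1 - site2)                  # unique to site 1
    c = len(site2 - site1)                  # unique to site 2
    beta_sor = (b + c) / (2 * a + b + c)    # total dissimilarity
    beta_sim = min(b, c) / (a + min(b, c))  # turnover component
    return beta_sor, beta_sim, beta_sor - beta_sim  # last term: nestedness

low = {"ant_A", "ant_B", "ant_C", "ant_D"}
high = {"ant_C", "ant_D", "ant_E", "ant_F"}
print(beta_partition(low, high))   # (0.5, 0.5, 0.0): pure turnover
```

A turnover-dominated partition, as in this toy pair, is exactly the pattern the abstract reports: species are replaced, not merely lost, as elevation increases.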


Energies ◽  
2021 ◽  
Vol 14 (10) ◽  
pp. 2833
Author(s):  
Paolo Civiero ◽  
Jordi Pascual ◽  
Joaquim Arcas Abella ◽  
Ander Bilbao Figuero ◽  
Jaume Salom

In this paper, we provide a view of the ongoing PEDRERA project, whose main scope is to design a district simulation model able to produce and analyze reliable predictions of potential business scenarios for large-scale retrofitting actions, and to evaluate the overall co-benefits resulting from the renovation process of a cluster of buildings. In line with this purpose and with a Positive Energy Districts (PEDs) approach, the model combines systemized data, at both building and district scale, from multiple sources and domains. A sensitivity analysis of 200 scenarios provided quick insight into how results change once inputs are defined, and how the expected results answer stakeholders' requirements. In order to enable smart input analysis and to appraise wide-ranging sets of Key Performance Indicators (KPIs) suited to each stakeholder and design-phase target, the model is currently being implemented in the urbanZEB tool's web platform.


2021 ◽  
pp. 037957212098250
Author(s):  
Jennifer K. Foley ◽  
Kristina D. Michaux ◽  
Bho Mudyahoto ◽  
Laira Kyazike ◽  
Binu Cherian ◽  
...  

Background: Micronutrient deficiencies affect over one quarter of the world’s population. Biofortification is an evidence-based nutrition strategy that addresses some of the most common and preventable global micronutrient gaps and can help improve the health of millions of people. Since 2013, HarvestPlus and a consortium of collaborators have made impressive progress in the enrichment of staple crops with essential micronutrients through conventional plant breeding. Objective: To review and highlight lessons learned from multiple large-scale delivery strategies used by HarvestPlus to scale up biofortification across different country and crop contexts. Results: India has strong public and private sector pearl millet breeding programs and a robust commercial seed sector. To scale up pearl millet, HarvestPlus established partnerships with public and private seed companies, which facilitated the rapid commercialization of products and engagement of farmers in delivery activities. In Nigeria, HarvestPlus stimulated the initial acceptance and popularization of vitamin A cassava using a host of creative approaches, including “crowding in” delivery partners, innovative promotional programs, and development of intermediate raw materials for industry and novel food products. In Uganda, orange sweet potato (OSP) is a traditional subsistence crop. Due to this, and the lack of formal seed systems and markets, HarvestPlus established a network of partnerships with community-based nongovernmental organizations and vine multipliers to popularize and scale up delivery of OSP. Conclusions: The impact of biofortification ultimately depends on the development of sustainable markets for biofortified seeds and products. These results illustrate the need for context-specific, innovative solutions to promote widespread adoption.

