Unlocking Ensemble History Matching Potential with Parallelism and Careful Data Management

2021 ◽  
Author(s):  
Giorgio Fighera ◽  
Ernesto Della Rossa ◽  
Patrizia Anastasi ◽  
Mohammed Amr Aly ◽  
Tiziano Diamanti

Abstract Improvements in reservoir simulation run time brought by GPU-based simulators, together with the increasing computational power of modern HPC systems, are paving the way for widespread adoption of Ensemble History Matching (EHM) techniques, which are intrinsically parallel. Here we present the results of a comparative study between a newly developed EHM tool that leverages GPU parallelism and a commercial third-party EHM software used as a benchmark. Both are tested on a real case. The reservoir chosen for the comparison has a production history of 3 years with 15 wells, comprising oil producers and water and gas injectors. The EHM algorithm used is the Ensemble Smoother with Multiple Data Assimilation (ESMDA), and both tools have access to the same computational resources. The EHM problem was stated in the same way for both tools. The objective function considers well oil production, water cut, bottom-hole pressure, and gas-oil ratio. Porosity and horizontal permeability are used as 3D grid parameters in the update algorithm, along with nine scalar parameters for anisotropy ratios, Corey exponents, and fault transmissibility multipliers. Both the presented tool and the benchmark obtained a satisfactory history match quality. The benchmark tool took around 11.2 hours to complete, while the proposed tool took only 1.5 hours. The two tools performed similar updates on the scalar parameters, with only minor discrepancies. Updates on the 3D grid properties, instead, show significant local differences. The updated ensemble for the benchmark reached extreme values of porosity and permeability, distributed in a heterogeneous way; these distributions are quite unlikely in some model regions given the initial geological characterization of the reservoir. The updated ensemble for the presented tool did not reach extreme values in either porosity or permeability, and the resulting property distributions remain close to those of the initial ensemble; we therefore conclude that we were able to update the ensemble successfully while preserving the geological characterization of the reservoir. Analysis suggests that this discrepancy is due to the different way in which our EHM code handles inactive grid cells in the update calculations compared to the benchmark, highlighting that statistics involving inactive cells must be carefully managed to preserve the geological distribution represented in the initial ensemble. The presented EHM tool was developed from scratch to be fully parallel and to leverage the abundant computational resources now available. Moreover, the ESMDA implementation was tweaked to improve the reservoir update by carefully managing inactive cells. A comparison against a benchmark showed that the proposed EHM tool achieved similar history match quality while improving both the computation time and the geological realism of the updated ensemble.
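For readers unfamiliar with ESMDA, a minimal sketch of a single analysis step is given below (standard formulation with Gaussian observation perturbations and an inflation coefficient alpha; all names and shapes are illustrative and this is not the code of the tool described above).

import numpy as np

def esmda_update(M, D, d_obs, Cd, alpha, rng=np.random.default_rng(0)):
    """One ES-MDA analysis step.

    M     : (Nm, Ne) ensemble of model parameters (columns = realizations)
    D     : (Nd, Ne) simulated data for each realization
    d_obs : (Nd,)    observed data
    Cd    : (Nd,)    observation-error variances (diagonal covariance)
    alpha : inflation coefficient (the sum of 1/alpha over all assimilations equals 1)
    """
    Ne = M.shape[1]
    # Perturb the observations with inflated noise, one perturbation per realization.
    E = rng.normal(0.0, np.sqrt(alpha * Cd)[:, None], size=(len(d_obs), Ne))
    D_obs = d_obs[:, None] + E

    # Ensemble anomalies (deviations from the ensemble mean).
    dM = M - M.mean(axis=1, keepdims=True)
    dD = D - D.mean(axis=1, keepdims=True)

    # Cross-covariance C_MD and data covariance C_DD estimated from the ensemble.
    C_MD = dM @ dD.T / (Ne - 1)
    C_DD = dD @ dD.T / (Ne - 1)

    # Kalman-type gain and parameter update.
    K = C_MD @ np.linalg.inv(C_DD + alpha * np.diag(Cd))
    return M + K @ (D_obs - D)

One way to keep inactive-cell statistics from distorting such an update, in the spirit of the careful handling the abstract alludes to, is to mask inactive grid cells out of M before forming the anomalies and to write the updated values back only to the active cells.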

2021 ◽  
Author(s):  
Ali Al-Turki ◽  
Obai Alnajjar ◽  
Majdi Baddourah ◽  
Babatunde Moriwawon

Abstract Algorithms and workflows have been developed to couple efficient model parameterization with stochastic, global optimization using a Multi-Objective Genetic Algorithm (MOGA) for global history matching, complemented by an advanced workflow for streamline sensitivity-based inversion for fine-tuning. During parameterization, low-rank subsets of the most influential reservoir parameters are identified and propagated to the MOGA to perform the field-level history match. Data misfits between field historical data and simulation data are calculated over multiple realizations of the reservoir model, which quantify and capture reservoir uncertainty. Each generation of the optimization algorithm reduces the data misfit relative to the previous iteration, and this iterative process continues until a satisfactory field-level history match is reached or no further improvement is obtained. Well-connectivity calibration is then performed with a streamline sensitivity-based inversion algorithm to locally update the model and reduce well-level mismatch. In this study, an application of the proposed algorithms and workflow is demonstrated for model calibration and history matching. The synthetic reservoir model used in this study is discretized into millions of grid cells with hundreds of producer and injector wells, and is designed to generate several decades of production and injection history to evaluate and demonstrate the workflow. In field-level history matching, reservoir rock properties (e.g., permeability, fault transmissibility) are parameterized to conduct the global match of pressure and production rates. The Grid Connectivity Transform (GCT) was used and assessed to parameterize the reservoir properties. In addition, the convergence rate and history match quality of the MOGA were assessed during the field (global) history matching, and the effectiveness of the streamline-based inversion was evaluated by quantifying the additional improvement in history matching quality per well. The developed parameterization and optimization algorithms and workflows revealed the unique features of each algorithm for model calibration and history matching. This integrated workflow has successfully defined and carried uncertainty throughout the history matching process. Following the successful field-level history match, well-level history matching was conducted using streamline sensitivity-based inversion, which further improved the history match quality and conditioned the model to historical production and injection data. Overall, the workflow yields enhanced history match quality in a shorter turnaround time, and the geological realism of the model is retained for robust prediction and development planning.
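The GCT parameterization referred to above is commonly described in the literature as projecting grid properties onto the smoothest eigenvectors of the grid's connectivity (graph) Laplacian, so that an optimizer only has to vary a handful of coefficients. A minimal sketch along those lines follows; the grid size, mode count, and variable names are illustrative, not taken from the study.

import numpy as np

def gct_basis(nx, ny, n_modes):
    """Smoothest eigenvectors of a 2D grid's connectivity (graph) Laplacian."""
    n = nx * ny
    A = np.zeros((n, n))
    for j in range(ny):
        for i in range(nx):
            c = j * nx + i
            if i + 1 < nx:                       # east neighbour
                A[c, c + 1] = A[c + 1, c] = 1.0
            if j + 1 < ny:                       # north neighbour
                A[c, c + nx] = A[c + nx, c] = 1.0
    L = np.diag(A.sum(axis=1)) - A               # graph Laplacian of the cell-connectivity graph
    _, vecs = np.linalg.eigh(L)                  # eigenvalues returned in ascending order
    return vecs[:, :n_modes]                     # low-frequency basis, shape (n_cells, n_modes)

# Low-rank parameterization: a property-multiplier field controlled by a few coefficients,
# which is what an optimizer such as a MOGA would actually vary.
phi = gct_basis(nx=30, ny=30, n_modes=20)
coeffs = np.random.default_rng(1).normal(size=20)
log_perm_multiplier = (phi @ coeffs).reshape(30, 30)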


2003 ◽  
Vol 125 (5) ◽  
pp. 839-844 ◽  
Author(s):  
Weixue Tian ◽  
Wilson K. S. Chiu

In the zonal method, considerable computational resources are needed to calculate the direct exchange areas (DEA) among isothermal zones, because the required integrals have up to six dimensions and the integrands contain strong singularities when two zones are adjacent or overlapping (self-irradiation). This paper discusses a special transformation of variables that reduces a double integral to several single integrals. The technique was originally presented by Erkku (1959) for calculating DEA on a uniform zone system in a cylindrical enclosure; however, non-uniform zones are needed for applications with large thermal gradients, so we extend the technique to calculate DEA for non-uniform zones in an axisymmetric cylindrical system. A six-fold reduction in computational time was observed in calculating DEA compared with cases without the variable transformation. It is shown that the accuracy and efficiency of the radiative heat flux estimate improve when a non-uniform zone system is used, and all DEA are computed with reasonable accuracy without resorting to the conservation equations. Results compare well with analytical solutions and with the numerical results of previous researchers. The technique can be readily extended to rectangular enclosures, with a similar reduction in computation time expected.
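The following is not Erkku's specific transformation but a generic illustration of the idea: for a translation-invariant kernel f(x − y), the substitution u = x − y collapses a double integral over two intervals into a single integral weighted by an overlap length, which is far cheaper to evaluate. The kernel and interval lengths below are arbitrary placeholders.

import numpy as np
from scipy.integrate import quad, dblquad

a, b = 2.0, 3.0
f = lambda u: 1.0 / (1.0 + u * u)        # stand-in for a smooth exchange kernel f(x - y)

# Direct 2D quadrature of I = integral over x in [0,a], y in [0,b] of f(x - y).
I2, _ = dblquad(lambda y, x: f(x - y), 0, a, 0, b)

# Substitution u = x - y reduces it to a single integral weighted by the overlap length w(u).
w = lambda u: max(0.0, min(a, b + u) - max(0.0, u))
I1, _ = quad(lambda u: f(u) * w(u), -b, a, points=[a - b, 0.0])

print(I2, I1)   # the two results agree; the 1D form needs far fewer kernel evaluations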


2021 ◽  
Author(s):  
Manish Kumar Choudhary ◽  
Gaurav Mahanti ◽  
Yogesh Rana ◽  
Sai Venkata Garimella ◽  
Arfan Ali ◽  
...  

Abstract Field X is one of the largest oil fields in Brunei and has been producing since the 1970s. The field consists of a large faulted anticlinal structure of shallow-marine Miocene sediments. It has over 500 compartments and has been produced under waterflood since the 1980s through more than 400 conduits on over 50 platforms. A comprehensive review of water injection performance was undertaken in 2019 to assess remaining oil and identify infill opportunities. Large uncertainties in reservoir properties, connectivity, and fluid contacts required that data across multiple disciplines be integrated to identify new opportunities. It was recognized early on that integrated analysis of surveillance data and more than 40 years of production history would be critical for understanding field performance. Reviews were therefore first initiated using sand maps and analytical techniques. Tracer surveys, reservoir pressures, salinity measurements, and Production Logging Tool (PLT) data were all analyzed to understand waterflood progression and to define connectivity scenarios. A complete review of well logs, core data from over 30 wells, and outcrop studies was carried out as part of the modelling workflow, and this understanding was used to construct a new facies-based static model. In parallel, key dynamic inputs such as PVT analysis reports and special core analysis studies were analyzed to update the dynamic modelling components. Prior to initiating full-field history matching, a comprehensive impact analysis of the key dynamic uncertainties (e.g., production allocation, connectivity, and varying aquifer strength) was conducted. An Assisted History Matching (AHM) workflow was applied, which helped identify the high-impact inputs that could be varied for history matching, and adjoint techniques were also used to identify other plausible geological scenarios. The integrated review identified over 50 new opportunities that could potentially increase recovery by over 10%. The new static model identified upside in Stock Tank Oil Initially In Place (STOIIP) which, if realized, could further increase the ultimate recoverable volume. The use of AHM helped reduce iterations and achieve multiple history-matched models, which can be used to quantify forecast uncertainty. The new opportunities have helped revitalize the mature field and have the potential to increase production by over 50%; a dedicated team is now maturing these opportunities. The robust methodology of integrating surveillance data with simulation modelling described in this paper is generic and could be useful in present-day brown-field development practice as an effective and economic means of sustaining oil production and maximizing ultimate recovery. It is essential that all surveillance and production history data are analyzed together before attempting any detailed modelling exercise; new models should then be constructed that conform to the surveillance information and capture the reservoir uncertainties. In large oil fields with long production histories and allocation uncertainties, quantitative assessment of history match quality and of infill-well Ultimate Recovery (UR) estimates is always a challenge. Hence, a composite History Match Quality Indicator (HMQI) was designed with appropriate weighting of rate, cumulative, and reservoir pressure mismatches and of water-breakthrough timing delays.
Spatial variation maps of the HMQI were then generated for different zones across the entire field and used to understand, and appropriately discount, the oil recovery of each infill well. It is also critical that facies variation is properly captured in the models to better understand waterfront movement and locate remaining oil. Dynamic modelling of a mature field with a long production history can be quite challenging on its own, and it is imperative that new numerical techniques are used to increase efficiency.
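A composite indicator of this kind can be sketched as a weighted sum of normalized mismatch terms, one per data type. The weights, normalizations, and function names below are illustrative placeholders, not the values used in the study.

import numpy as np

def hmqi(rate_obs, rate_sim, cum_obs, cum_sim, p_obs, p_sim,
         wbt_obs_days, wbt_sim_days,
         weights=(0.3, 0.3, 0.25, 0.15)):      # illustrative weighting of the four terms
    """Composite History Match Quality Indicator for one well (0 = perfect match)."""
    def nrmse(obs, sim):
        obs, sim = np.asarray(obs, float), np.asarray(sim, float)
        return np.sqrt(np.mean((sim - obs) ** 2)) / (np.ptp(obs) + 1e-12)

    terms = np.array([
        nrmse(rate_obs, rate_sim),                       # rate mismatch
        nrmse(cum_obs, cum_sim),                         # cumulative mismatch
        nrmse(p_obs, p_sim),                             # reservoir-pressure mismatch
        abs(wbt_sim_days - wbt_obs_days) / 365.0,        # water-breakthrough delay, in years
    ])
    return float(np.dot(weights, terms))

Mapping this per-well value over the field, zone by zone, is then what produces the HMQI spatial variation maps described above.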


2022 ◽  
Vol 16 (1) ◽  
pp. 0-0

Secure and efficient authentication is a major concern in cloud computing because data are shared between the cloud server and the user over the internet. This paper proposes an efficient Hashing, Encryption, and Chebyshev (HEC)-based authentication scheme to secure data communication. Formal and informal security analyses demonstrate that the proposed HEC-based authentication approach provides data security in the cloud more efficiently. The approach addresses the identified security issues and ensures the privacy and data security of the cloud user. Moreover, the proposed HEC-based authentication approach makes the system more robust and secure, and it has been verified in multiple scenarios. The proposed approach also requires less computation time and memory than existing authentication techniques: for a 100 KB data size, the measured computation time and memory footprint are 26 ms and 1878 bytes, respectively.
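The abstract does not detail the scheme, but Chebyshev-based authentication protocols typically rely on the semigroup property of Chebyshev polynomials, T_r(T_s(x)) = T_rs(x) = T_s(T_r(x)), which enables a Diffie-Hellman-style key agreement. The snippet below is only a numeric illustration of that property under this assumption (real schemes use extended Chebyshev maps over large finite fields), not the paper's protocol.

import numpy as np

def chebyshev(n, x):
    """Chebyshev polynomial T_n(x) on [-1, 1], computed as cos(n * arccos(x))."""
    return np.cos(n * np.arccos(x))

x = 0.53                      # public seed in [-1, 1]
r, s = 87, 131                # private exponents of the two parties (toy values)

# Each side publishes T_n(x) of its own secret...
A, B = chebyshev(r, x), chebyshev(s, x)

# ...and the semigroup property gives both parties the same shared value.
shared_1 = chebyshev(s, A)    # T_s(T_r(x))
shared_2 = chebyshev(r, B)    # T_r(T_s(x))
assert np.isclose(shared_1, shared_2) and np.isclose(shared_1, chebyshev(r * s, x))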


2010 ◽  
Vol 3 (6) ◽  
pp. 1555-1568 ◽  
Author(s):  
B. Mijling ◽  
O. N. E. Tuinder ◽  
R. F. van Oss ◽  
R. J. van der A

Abstract. The Ozone Profile Algorithm (OPERA), developed at KNMI, retrieves the vertical ozone distribution from nadir spectral satellite measurements of backscattered sunlight in the ultraviolet and visible wavelength range. To produce consistent global datasets the algorithm needs good global performance, while a short computation time facilitates its use in near-real-time applications. To test the global performance of the algorithm, we examine the convergence behaviour as a diagnostic tool for ozone profile retrievals from the GOME instrument (on board ERS-2) for February and October 1998. In this way, we uncover different classes of retrieval problems, related to the South Atlantic Anomaly, low cloud fractions over deserts, desert dust outflow over the ocean, and the Intertropical Convergence Zone. The influence on retrieval performance of the first guess and of the external input data, including the ozone cross-sections and the ozone climatologies, is also investigated. By using a priori ozone profiles selected on the expected total ozone column, retrieval problems due to anomalous ozone distributions (such as in the ozone hole) can be avoided. Applying these algorithm adaptations improves the convergence statistics considerably, not only increasing the number of successful retrievals but also reducing the average computation time, owing to fewer iteration steps per retrieval. For February 1998, non-convergence was brought down from 10.7% to 2.1%, while the mean number of iteration steps (which dominates the computational time) dropped by 26%, from 5.11 to 3.79.
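The a priori selection step can be read as a simple nearest-match lookup against a column-classified climatology; the sketch below shows that idea only, with purely invented climatology values and layer counts, not OPERA's actual tables.

import numpy as np

# Illustrative column-classified climatology: total ozone column (DU) -> partial columns per layer (DU).
climatology = {
    225.0: np.array([10.0, 25.0, 60.0, 80.0, 35.0, 15.0]),   # low-column (e.g. ozone-hole) profile
    325.0: np.array([12.0, 30.0, 95.0, 120.0, 48.0, 20.0]),  # mid-range profile
    425.0: np.array([14.0, 35.0, 130.0, 165.0, 58.0, 23.0]), # high-column profile
}

def select_a_priori(expected_total_column_du):
    """Return the climatological profile whose total column best matches the expectation."""
    key = min(climatology, key=lambda c: abs(c - expected_total_column_du))
    return climatology[key]

prior = select_a_priori(240.0)   # an anomalous scene gets a low-column prior instead of a standard one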


Geophysics ◽  
2013 ◽  
Vol 78 (1) ◽  
pp. V1-V9 ◽  
Author(s):  
Zhonghuan Chen ◽  
Sergey Fomel ◽  
Wenkai Lu

When plane-wave destruction (PWD) is implemented with implicit finite differences, the local slope is estimated by an iterative algorithm. We propose an analytical estimator of the local slope based on a convergence analysis of that iterative algorithm. Using the analytical estimator, we design a noniterative method to estimate slopes with a three-point PWD filter. Compared with iterative estimation, the proposed method needs only one regularization step, which reduces computation time significantly. With directional decoupling of the plane-wave filter, the proposed algorithm is also applicable to 3D slope estimation. Synthetic and field experiments demonstrate that the proposed algorithm yields correct slope estimates in a shorter computation time.
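The paper's analytical three-point PWD estimator is not reproduced here; as a deliberately simpler stand-in, the notion of "local slope" can be illustrated with a gradient-based least-squares estimate derived from the plane-wave equation P_x + sigma * P_t = 0. Window size and the synthetic example are arbitrary.

import numpy as np
from scipy.ndimage import uniform_filter

def local_slope(data, smooth=9):
    """Gradient-based local slope estimate (samples per trace) for a 2D section data[t, x].

    A local plane wave satisfies P_x + sigma * P_t = 0, so a least-squares estimate over a
    small window is sigma = -<P_x P_t> / <P_t P_t>.
    """
    Pt, Px = np.gradient(data)                      # derivatives along the time and trace axes
    num = uniform_filter(Px * Pt, size=smooth)
    den = uniform_filter(Pt * Pt, size=smooth)
    return -num / (den + 1e-12)

# Synthetic dipping event: the estimated slope should come out near 0.4 samples per trace.
t = np.arange(200)[:, None]
x = np.arange(100)[None, :]
section = np.exp(-0.5 * ((t - 80 - 0.4 * x) / 3.0) ** 2)
sigma = local_slope(section)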


Author(s):  
Jérôme Limido ◽  
Mohamed Trabia ◽  
Shawoon Roy ◽  
Brendan O’Toole ◽  
Richard Jennings ◽  
...  

A series of experiments was performed at the University of Nevada, Las Vegas (UNLV) Center for Materials and Structures to study the plastic deformation of metallic plates under hypervelocity impact, using a two-stage light-gas gun. In these experiments, cylindrical Lexan projectiles were fired at A36 steel target plates at velocities ranging from 4.5 to 6.0 km/s. The experiments were designed to produce a front-side impact crater and a permanent bulge on the back surface of the target without complete perforation of the plate. Free-surface velocities on the back surface of the target plate were measured using the newly developed Multiplexed Photonic Doppler Velocimetry (MPDV) system. To simulate such experiments, Lagrangian smoothed particle hydrodynamics (SPH) is typically used to avoid the problems associated with mesh instability. Despite their intrinsic suitability for simulating violent impacts, particle methods have drawbacks that can considerably affect their accuracy and performance, including lack of interpolation completeness, tensile instability, and spurious pressure oscillations. Moreover, computational time is a strong limitation that often forces the use of reduced 2D axisymmetric models. To address these shortcomings, the IMPETUS Afea Solver® implements a newly developed SPH formulation that resolves the issues of spurious pressures and tensile instability. The algorithm takes full advantage of GPU technology to parallelize the computation and opens the door to running large 3D models (20,000,000 particles). The combination of accurate algorithms and drastically reduced computation time now makes it possible to run high-fidelity hypervelocity impact models.
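For orientation, the core SPH interpolation step (summation density with the standard cubic-spline kernel) can be sketched as below. This is a generic, brute-force O(N^2) illustration, not the proprietary IMPETUS formulation; the particle spacing, smoothing length, and material density are placeholder values.

import numpy as np

def cubic_spline_w(r, h):
    """Standard 3D cubic-spline SPH kernel W(r, h)."""
    q = r / h
    sigma = 1.0 / (np.pi * h ** 3)                 # 3D normalization constant
    w = np.where(q < 1.0, 1.0 - 1.5 * q**2 + 0.75 * q**3,
        np.where(q < 2.0, 0.25 * (2.0 - q) ** 3, 0.0))
    return sigma * w

def density_summation(positions, masses, h):
    """rho_i = sum_j m_j W(|x_i - x_j|, h) over all particles (all-pairs, for illustration only)."""
    diff = positions[:, None, :] - positions[None, :, :]
    r = np.linalg.norm(diff, axis=-1)
    return (masses[None, :] * cubic_spline_w(r, h)).sum(axis=1)

# Example: a small block of equally spaced particles of a nominally 7850 kg/m^3 (steel-like) material.
spacing = 1e-3
grid = np.mgrid[0:10, 0:10, 0:10].reshape(3, -1).T * spacing
m = np.full(len(grid), 7850.0 * spacing ** 3)
rho = density_summation(grid, m, h=1.3 * spacing)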


Jurnal INKOM ◽  
2014 ◽  
Vol 8 (1) ◽  
pp. 29 ◽  
Author(s):  
Arnida Lailatul Latifah ◽  
Adi Nurhadiyatna

This paper proposes parallel algorithms for the precipitation input to flood modelling, in particular the spatial distribution of rainfall. As an important input to flood modelling, the spatial distribution of rainfall is always needed to precondition the model. Two interpolation methods are discussed, Inverse Distance Weighting (IDW) and Ordinary Kriging (OK), and both are developed as parallel algorithms in order to reduce computation time. To measure computational efficiency, the performance of the parallel algorithms is compared with that of the serial algorithms for both methods. The findings indicate that: (1) the computation time of the OK algorithm is up to 23% longer than that of IDW; (2) the computation time of both OK and IDW increases linearly with the number of cells/points; (3) the computation time of the parallel algorithms decays exponentially with the number of processors, with a decay factor of 0.52 for IDW and 0.53 for OK; and (4) the parallel algorithms achieve near-ideal speed-up.
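A minimal IDW sketch is shown below; the data-parallel split over grid chunks only mimics the spirit of distributing cells over processors and is not the paper's implementation. Station counts, grid size, and the power parameter are illustrative.

import numpy as np
from multiprocessing import Pool

def idw(grid_xy, station_xy, station_rain, power=2.0):
    """Inverse-distance-weighted rainfall at each grid point from gauge observations."""
    d = np.linalg.norm(grid_xy[:, None, :] - station_xy[None, :, :], axis=-1)
    w = 1.0 / np.maximum(d, 1e-9) ** power
    return (w * station_rain[None, :]).sum(axis=1) / w.sum(axis=1)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    stations = rng.uniform(0, 100, size=(30, 2))          # gauge locations (km)
    rainfall = rng.uniform(0, 50, size=30)                 # observed rainfall (mm)
    gx, gy = np.meshgrid(np.linspace(0, 100, 500), np.linspace(0, 100, 500))
    grid = np.column_stack([gx.ravel(), gy.ravel()])

    # Simple data-parallel split: each worker interpolates a chunk of the grid cells.
    chunks = np.array_split(grid, 4)
    with Pool(4) as pool:
        parts = pool.starmap(idw, [(c, stations, rainfall) for c in chunks])
    field = np.concatenate(parts).reshape(gx.shape)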


2021 ◽  
Author(s):  
Brett W. Larsen ◽  
Shaul Druckmann

Abstract Lateral and recurrent connections are ubiquitous in biological neural circuits. The strong computational abilities of feedforward networks have been extensively studied; on the other hand, while certain roles for lateral and recurrent connections in specific computations have been described, a more complete understanding of the role and advantages of recurrent computations that might explain their prevalence remains an important open challenge. Previous key studies by Minsky and later by Roelfsema argued that the sequential, parallel computations for which recurrent networks are well suited can be highly effective approaches to complex computational problems. Such "tag propagation" algorithms perform repeated, local propagation of information and were introduced in the context of detecting connectedness, a task that is challenging for feedforward networks. Here, we advance the understanding of the utility of lateral and recurrent computation by first performing a large-scale empirical study of neural architectures for the computation of connectedness to explore feedforward solutions more fully and establish robustly the importance of recurrent architectures. In addition, we highlight a tradeoff between computation time and performance and demonstrate hybrid feedforward/recurrent models that perform well even in the presence of varying computational time limitations. We then generalize tag propagation architectures to multiple, interacting propagating tags and demonstrate that these are efficient computational substrates for more general computations by introducing and solving an abstracted biologically inspired decision-making task. More generally, our work clarifies and expands the set of computational tasks that can be solved efficiently by recurrent computation, yielding hypotheses for structure in population activity that may be present in such tasks.

Author Summary Lateral and recurrent connections are ubiquitous in biological neural circuits; intriguingly, this stands in contrast to the majority of current-day artificial neural network research which primarily uses feedforward architectures except in the context of temporal sequences. This raises the possibility that part of the difference in computational capabilities between real neural circuits and artificial neural networks is accounted for by the role of recurrent connections, and as a result a more detailed understanding of the computational role played by such connections is of great importance. Making effective comparisons between architectures is a subtle challenge, however, and in this paper we leverage the computational capabilities of large-scale machine learning to robustly explore how differences in architectures affect a network's ability to learn a task. We first focus on the task of determining whether two pixels are connected in an image which has an elegant and efficient recurrent solution: propagate a connected label or tag along paths. Inspired by this solution, we show that it can be generalized in many ways, including propagating multiple tags at once and changing the computation performed on the result of the propagation. To illustrate these generalizations, we introduce an abstracted decision-making task related to foraging in which an animal must determine whether it can avoid predators in a random environment. Our results shed light on the set of computational tasks that can be solved efficiently by recurrent computation and how these solutions may appear in neural activity.
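A minimal sketch of the tag-propagation idea for connectedness: a tag seeded at one pixel is repeatedly propagated to neighbouring foreground pixels until it stops spreading, after which connectedness to any other pixel can be read off. This is a plain illustration of the algorithmic principle, not one of the trained architectures studied in the paper.

import numpy as np

def connected(image, seed, target, max_steps=None):
    """Repeated local propagation of a tag from `seed`; returns True if `target` gets tagged.

    image        : 2D binary array of foreground (1) / background (0) pixels
    seed, target : (row, col) coordinates of foreground pixels
    """
    tag = np.zeros_like(image, dtype=bool)
    tag[seed] = True
    steps = max_steps or image.size
    for _ in range(steps):
        # Propagate the tag to the 4-neighbourhood, but only onto foreground pixels.
        spread = tag.copy()
        spread[1:, :] |= tag[:-1, :]
        spread[:-1, :] |= tag[1:, :]
        spread[:, 1:] |= tag[:, :-1]
        spread[:, :-1] |= tag[:, 1:]
        spread &= image.astype(bool)
        if (spread == tag).all():       # propagation has converged
            break
        tag = spread
    return bool(tag[target])

img = np.array([[1, 1, 0, 1],
                [0, 1, 0, 1],
                [0, 1, 0, 1]])
print(connected(img, (0, 0), (2, 1)))   # True: same curve
print(connected(img, (0, 0), (0, 3)))   # False: separated by background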


Author(s):  
Johannes Felix Simon Brachmann ◽  
Andreas Baumgartner ◽  
Peter Gege

The Calibration Home Base (CHB) is an optical laboratory designed for the calibration of imaging spectrometers for the VNIR/SWIR wavelength range. Radiometric, spectral, and geometric calibration, as well as the characterization of sensor signal dependency on polarization, are performed in a precise and highly automated fashion, which allows a wide range of time-consuming measurements to be carried out efficiently. The implementation of ISO 9001 standards in all procedures ensures traceable quality of results. Spectral measurements in the wavelength range 380–1000 nm are performed with a wavelength uncertainty of ±0.1 nm, while an uncertainty of ±0.2 nm is reached in the range 1000–2500 nm. Geometric measurements are performed at increments of 1.7 µrad across track and 7.6 µrad along track. Radiometric measurements reach an absolute uncertainty of ±3% (k=1). Sensor artifacts, such as those caused by stray light, will be characterizable and correctable in the near future. For now, the CHB is suitable for the characterization of pushbroom sensors, spectrometers, and cameras; it is planned to extend the CHB's capabilities in the near future so that snapshot hyperspectral imagers can be characterized as well. The calibration services of the CHB are open to third-party customers from research institutes as well as industry.

