Quantifying Uncertainty for the PUNQ-S3 Problem in a Bayesian Setting With RML and EnKF

SPE Journal ◽  
2006 ◽  
Vol 11 (04) ◽  
pp. 506-515 ◽  
Author(s):  
Guohua Gao ◽  
Mohammad Zafari ◽  
Albert C. Reynolds

Summary The well-known PUNQ-S3 reservoir model is a synthetic problem formulated to test the ability of various methods and research groups to quantify the uncertainty in the prediction of cumulative oil production. Previous results reported on this project suggest that the randomized maximum likelihood (RML) method gives a biased characterization of the uncertainty. A major objective of this paper is to show that this claim is incorrect. With a correct implementation of the RML method within a Bayesian framework, we show that RML does an adequate job of sampling the a posteriori distribution for the PUNQ problem. In particular, the true predicted oil production lies within the band of predictions generated with the RML method and is not biased. We also apply the ensemble Kalman filter (EnKF) method to the PUNQ data set and show that this method also gives a reasonable quantification of the uncertainty in performance predictions, with an uncertainty range similar to the one obtained with RML.

Introduction We consider conditioning models to production data in a Bayesian framework and wish to generate a suite (ensemble) of models which represents a correct sampling of the conditional probability density function (pdf). By predicting future reservoir performance with each realization, we obtain a characterization of the uncertainty in predicted performance. Both the rejection algorithm and Markov chain Monte Carlo (MCMC) are theoretically sound sampling procedures, but they are too computationally inefficient for practical applications (Liu and Oliver 2003). Oliver et al. (1996) and Kitanidis (1986) independently proposed the randomized maximum likelihood (RML) method to generate an approximate sampling of the a posteriori pdf. Two different proofs (Oliver 1996; Reynolds et al. 1999) have been presented which show that the RML method samples the posterior pdf correctly if the data are linearly related to the model; however, no rigorous theoretical foundation exists for the method when the relation between data and model is nonlinear, which is the case for production data. Computational results indicate that the RML method generates a reasonable characterization of uncertainty for single-phase flow (Oliver et al. 1996; Reynolds et al. 1999; Liu and Oliver 2003). Our first objective is to show that, contrary to a previous claim (Floris 2001), RML gives a reasonable characterization of the uncertainty in predicted performance for the PUNQ-S3 problem; our second objective is to compare the quantification of uncertainty obtained with RML with the one obtained with the ensemble Kalman filter (EnKF). The PUNQ-S3 reservoir is a synthetic model based on an actual reservoir (Floris et al. 2001; Barker et al. 2001). The problem was set up as a test case to allow various research groups to test their own methodology for the characterization of the uncertainty in reservoir performance predictions, given some geologic information on the reservoir, hard data at well gridblocks, and some scattered production data from the first 8 years of production. Participants were then asked to predict cumulative oil production for 16.5 years of total production and to characterize the uncertainty in this prediction.
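The RML recipe referenced above admits a compact illustration: draw a perturbed prior realization and a perturbed set of observations, then history match by minimizing the resulting regularized objective. The following is a minimal sketch under invented assumptions (a two-parameter model and a toy forward function toy_forward standing in for the reservoir simulator); it is not the paper's implementation.

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

def toy_forward(m):
    # Stand-in for the reservoir simulator g(m); purely illustrative.
    return np.array([np.exp(0.5 * m[0]) + m[1], m[0] * m[1]])

m_prior = np.array([0.0, 1.0])       # prior mean
C_M = np.diag([1.0, 1.0])            # prior covariance
C_D = np.diag([0.01, 0.01])          # data-error covariance
d_obs = np.array([2.0, 0.5])         # "observed" production data (invented)

C_M_inv, C_D_inv = np.linalg.inv(C_M), np.linalg.inv(C_D)

def rml_realization():
    # Step 1: perturb the prior mean and the observed data.
    m_uc = rng.multivariate_normal(m_prior, C_M)
    d_uc = rng.multivariate_normal(d_obs, C_D)
    # Step 2: minimize the RML objective (automatic history matching).
    def objective(m):
        rm, rd = m - m_uc, toy_forward(m) - d_uc
        return 0.5 * rm @ C_M_inv @ rm + 0.5 * rd @ C_D_inv @ rd
    return minimize(objective, m_uc, method="BFGS").x

ensemble = [rml_realization() for _ in range(50)]  # suite of realizations

Predicting future performance with each member of this ensemble and reading off percentiles of the outcomes yields the kind of uncertainty band referred to in the summary.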

SPE Journal ◽  
2007 ◽  
Vol 12 (03) ◽  
pp. 382-391 ◽  
Author(s):  
Mohammad Zafari ◽  
Albert Coburn Reynolds

Summary Recently, the ensemble Kalman filter (EnKF) has gained popularity in atmospheric science for the assimilation of data and the assessment of uncertainty in forecasts for complex, large-scale problems. A handful of papers have discussed reservoir characterization applications of the EnKF, which can easily and quickly be coupled with any reservoir simulator. Neither adjoint code nor specific knowledge of simulator numerics is required for implementation of the EnKF. Moreover, data are assimilated (matched) as they become available; a suite of plausible reservoir models (the ensemble, or set of realizations) is continuously updated to honor new data without rematching data assimilated previously. Because of these features, the method is far more efficient for history matching dynamic data than automatic history matching based on optimization algorithms. Moreover, the set of realizations provides a way to evaluate the uncertainty in reservoir description and performance predictions. Here we establish a firm theoretical relation between randomized maximum likelihood and the ensemble Kalman filter. Although we have previously generated reservoir characterization examples where the method worked well, here we also provide examples where EnKF does not provide a reliable characterization of uncertainty.

Introduction Our main interest is in characterizing the uncertainty in reservoir description and reservoir performance predictions in order to optimize reservoir management. To do so, we wish to generate a suite of plausible reservoir models (realizations) that are consistent with all information and data. If the set of models is obtained by correctly sampling the a posteriori pdf, then the set gives a characterization of the uncertainty in the reservoir model. Thus, by predicting future reservoir performance with each of the realizations and calculating statistics on the set of outcomes, one can evaluate the uncertainty in reservoir performance predictions.
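The analysis step that makes this sequential assimilation possible can be written in a few lines. The sketch below is a generic stochastic EnKF update with perturbed observations, stated under the usual assumptions (ensemble stored column-wise, Gaussian data errors, a placeholder forward map); it is not tied to any particular simulator or to the paper's test cases.

import numpy as np

def enkf_update(E, d_obs, C_D, forward, rng):
    # E: n_params x n_ens ensemble; forward maps E to n_data x n_ens predictions.
    n_ens = E.shape[1]
    D = forward(E)
    # Perturb the observations: one realization of the data error per member.
    d_pert = d_obs[:, None] + rng.multivariate_normal(
        np.zeros(len(d_obs)), C_D, size=n_ens).T
    Ea = E - E.mean(axis=1, keepdims=True)   # state anomalies
    Da = D - D.mean(axis=1, keepdims=True)   # predicted-data anomalies
    C_md = Ea @ Da.T / (n_ens - 1)           # state/data cross-covariance
    C_dd = Da @ Da.T / (n_ens - 1)           # predicted-data covariance
    K = C_md @ np.linalg.inv(C_dd + C_D)     # Kalman gain
    return E + K @ (d_pert - D)              # updated ensemble

Because each call only moves the current ensemble toward the newly available data, previously assimilated data are never rematched, which is the efficiency property emphasized above.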


2005 ◽  
Vol 8 (03) ◽  
pp. 214-223 ◽  
Author(s):  
Fengjun Zhang ◽  
Jan-Arild Skjervheim ◽  
Albert C. Reynolds ◽  
Dean S. Oliver

Summary The Bayesian framework allows one to integrate production and static data into an a posteriori probability density function (pdf) for reservoir variables (model parameters). The problem of generating realizations of the reservoir variables for the assessment of uncertainty in reservoir description or predicted reservoir performance then becomes a problem of sampling this a posteriori pdf to obtain a suite of realizations. Generation of a realization by the randomized-maximum-likelihood method requires the minimization of an objective function that includes production-data misfit terms and a model misfit term that arises from a prior model constructed from static data. Minimization of this objective function with an optimization algorithm is equivalent to the automatic history matching of production data, with a prior model constructed from static data providing regularization. Because of the computational cost of computing sensitivity coefficients and the need to solve matrix problems involving the covariance matrix for the prior model, this approach has not been applied to problems in which the number of data and the number of reservoir-model parameters are both large and the forward problem is solved by a conventional finite-difference simulator. In this work, we illustrate that these computational-efficiency problems can be overcome by using a scaled limited-memory Broyden-Fletcher-Goldfarb-Shanno (LBFGS) algorithm to minimize the objective function and by using approximate computational stencils to approximate the multiplication of a vector by the prior covariance matrix or its inverse. Implementation of the LBFGS method requires only the gradient of the objective function, which can be obtained from a single solution of the adjoint problem; individual sensitivity coefficients are not needed. We apply the overall process to two examples. The first is a true field example in which a realization of log permeabilities at 26,019 gridblocks is generated by the automatic history matching of pressure data, and the second is a pseudo field example that provides a very rough approximation to a North Sea reservoir, in which a realization of log permeabilities at 9,750 gridblocks is computed by the automatic history matching of gas/oil ratio (GOR) and pressure data.

Introduction Bayes' theorem provides a general framework for updating a pdf as new data or information on the model becomes available. The Bayesian setting offers a distinct advantage: if one can generate a suite of realizations that represents a correct sampling of the a posteriori pdf, then the suite of samples provides an assessment of the uncertainty in reservoir variables. Moreover, by predicting future reservoir performance under proposed operating conditions for each realization, one can characterize the uncertainty in future performance predictions by constructing statistics for the set of outcomes. Liu and Oliver have recently presented a comparison of methods for sampling the a posteriori pdf. Their results indicate that the randomized-maximum-likelihood method is adequate for evaluating uncertainty with a relatively limited number of samples. In this work, we consider the case in which a prior geostatistical model constructed from static data is available and is represented by a multivariate Gaussian pdf. Then, the a posteriori pdf conditional to production data is such that calculation of the maximum a posteriori estimate, or generation of a realization by the randomized-maximum-likelihood method, is equivalent to the minimization of an appropriate objective function. History-matching problems of interest to us involve a few thousand to tens of thousands of reservoir variables and a few hundred to a few thousand production data. Thus, an optimization algorithm suitable for large-scale problems is needed. Our belief is that nongradient-based algorithms such as simulated annealing and the genetic algorithm are not competitive with gradient-based algorithms in terms of computational efficiency. Classical gradient-based algorithms such as Gauss-Newton and Levenberg-Marquardt typically converge fairly quickly and have been applied successfully to automatic history matching for both single-phase- and multiphase-flow problems, but no multiphase-flow example considered in these papers involved more than 1,500 reservoir variables. For single-phase-flow problems, He et al. and Reynolds et al. have generated realizations of models involving up to 12,500 reservoir variables by automatic history matching of pressure data. However, they used a procedure based on their generalization of the method of Carter et al. to calculate sensitivity coefficients; this method assumes that the partial-differential equation solved by the reservoir simulator is linear, and so it does not apply to multiphase-flow problems.
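As a concrete (toy) illustration of the optimization just described, the snippet below minimizes the usual Bayesian objective with scipy's L-BFGS-B. The forward model is a made-up linear operator so the gradient can be written in closed form; in the paper the gradient comes from a single adjoint solve and C_M^{-1} multiplications are approximated with computational stencils, neither of which is reproduced here.

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
n_m, n_d = 200, 40                      # toy problem sizes
G = rng.standard_normal((n_d, n_m))     # invented linear forward operator
m_prior = np.zeros(n_m)
d_obs = G @ rng.standard_normal(n_m) + 0.1 * rng.standard_normal(n_d)
C_M_inv = np.eye(n_m)                   # stand-in for the stencil-based C_M^{-1}
C_D_inv = np.eye(n_d) / 0.01            # data weighted by inverse error variance

def obj_and_grad(m):
    rm, rd = m - m_prior, G @ m - d_obs
    val = 0.5 * rm @ C_M_inv @ rm + 0.5 * rd @ C_D_inv @ rd
    grad = C_M_inv @ rm + G.T @ (C_D_inv @ rd)   # plays the role of the adjoint gradient
    return val, grad

res = minimize(obj_and_grad, m_prior, jac=True, method="L-BFGS-B")
m_map = res.x   # maximum a posteriori estimate of the model parameters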


SPE Journal ◽  
2018 ◽  
Vol 23 (02) ◽  
pp. 449-466 ◽  
Author(s):  
Siavash Hakim Elahi ◽  
Behnam Jafarpour

Summary Hydraulic fracturing is performed to enable production from low-permeability and organic-rich shale-oil/gas reservoirs by stimulating the rock to increase its permeability. Characterization and imaging of hydraulically induced fractures are critical for accurate prediction of production and of the stimulated reservoir volume (SRV). Recorded tracer concentrations during flowback and historical production data can reveal important information about fracture and matrix properties, including fracture geometry, hydraulic conductivity, and natural-fracture density. However, the complexity and uncertainty in fracture and reservoir descriptions, coupled with data limitations, complicate the estimation of these properties. In this paper, tracer-test and production data are used for dynamic characterization of important parameters of hydraulically fractured reservoirs, including matrix permeability and porosity, planar-fracture half-length and hydraulic conductivity, discrete-fracture-network (DFN) density and conductivity, and fracture-closing (conductivity-decline) rate during production. The ensemble Kalman filter (EnKF) is used to update uncertain model parameters by sequentially assimilating first the tracer-test data and then the production data. The results indicate that the tracer-test and production data carry complementary information for estimating fracture half-length and conductivity, with the former more sensitive to hydraulic conductivity and the latter more affected by fracture half-length. For characterization of the DFN, a stochastic representation is adopted, and the parameters of the stochastic model are updated along with the matrix and hydraulic-fracture properties. Numerical examples are presented to investigate the sensitivity of the observed production and tracer-test data to fracture and matrix properties and to evaluate the EnKF performance in estimating these parameters.
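A hedged sketch of the two-stage assimilation order described above, with tracer data assimilated first and production data second. The parameter vector, the linear observation operators, and all numbers are synthetic placeholders, not the paper's fracture and matrix models.

import numpy as np

rng = np.random.default_rng(2)
n_par, n_ens = 5, 100                    # e.g. matrix perm, porosity, x_f, C_f, DFN density
E = rng.standard_normal((n_par, n_ens))  # prior parameter ensemble

def analysis(E, H, d_obs, sd):
    # Stochastic EnKF update with a linear observation operator H and
    # independent data errors of standard deviation sd.
    n = E.shape[1]
    D = H @ E
    d_pert = d_obs[:, None] + sd * rng.standard_normal((len(d_obs), n))
    Ea = E - E.mean(axis=1, keepdims=True)
    Da = D - D.mean(axis=1, keepdims=True)
    C_dd = Da @ Da.T / (n - 1) + sd**2 * np.eye(len(d_obs))
    return E + (Ea @ Da.T / (n - 1)) @ np.linalg.solve(C_dd, d_pert - D)

H_tracer = rng.standard_normal((3, n_par))   # toy sensitivity of tracer data
H_prod = rng.standard_normal((4, n_par))     # toy sensitivity of production data
E = analysis(E, H_tracer, rng.standard_normal(3), 0.1)  # stage 1: tracer test
E = analysis(E, H_prod, rng.standard_normal(4), 0.1)    # stage 2: production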


Polymers ◽  
2021 ◽  
Vol 13 (11) ◽  
pp. 1686
Author(s):  
Andrey Galukhin ◽  
Roman Nosov ◽  
Ilya Nikolaev ◽  
Elena Melnikova ◽  
Daut Islamov ◽  
...  

A new rigid tricyanate ester consisting of seven conjugated aromatic units is synthesized, and its structure is confirmed by X-ray analysis. This ester undergoes thermally stimulated polymerization in the liquid state. Conventional and temperature-modulated differential scanning calorimetry techniques are employed to study the polymerization kinetics. A transition of the polymerization from a kinetically to a diffusion-controlled regime is detected. Kinetic analysis is performed by combining isoconversional and model-based computations. It demonstrates that polymerization of the present monomer in the kinetically controlled regime can be described as a quasi-single-step, autocatalytic process. The diffusion contribution is parameterized by the Fournier model. The kinetic analysis is complemented by characterization of the thermal properties of the corresponding polymerization product by means of thermogravimetric and thermomechanical analyses. Overall, the obtained experimental results are consistent with our hypothesis about the relation between the rigidity and functionality of the cyanate ester monomer, on the one hand, and its reactivity and the glass-transition temperature of the corresponding polymer, on the other.
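To make the kinetic statement concrete, here is an illustrative rate expression of the kind the abstract describes: a Kamal-type autocatalytic term multiplied by a diffusion factor of the Fournier form. The functional forms are standard in cure-kinetics work, but every parameter value below is invented, not fitted to the paper's data.

import numpy as np

R = 8.314  # gas constant, J/(mol K)

def cure_rate(alpha, T, A=1e7, Ea=8e4, m=0.7, n=1.5, alpha_f=0.80, C=0.04):
    # d(alpha)/dt = k(T) * alpha^m * (1 - alpha)^n * f_diff(alpha)
    k = A * np.exp(-Ea / (R * T))                   # Arrhenius rate constant
    f_chem = alpha**m * (1.0 - alpha)**n            # autocatalytic model
    f_diff = 2.0 / (1.0 + np.exp((alpha - alpha_f) / C)) - 1.0  # Fournier factor
    return k * f_chem * max(f_diff, 0.0)            # cure stalls once diffusion dominates

# Explicit-Euler cure curve at constant temperature (illustrative only).
T, dt, alpha = 500.0, 0.1, 1e-3
history = []
for _ in range(20000):
    alpha += cure_rate(alpha, T) * dt
    history.append(alpha)

The factor f_diff is what produces the kinetic-to-diffusion-control transition detected by the calorimetry experiments: it stays near 1 at low conversion and collapses around alpha_f.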


2020 ◽  
Vol 70 (6) ◽  
pp. 1275-1288
Author(s):  
Abd El-Mohsen Badawy ◽  
Miroslav Haviar ◽  
Miroslav Ploščica

Abstract The notion of a congruence pair for principal MS-algebras, simpler than the one given by Beazer for K2-algebras [6], is introduced. It is proved that the congruences of a principal MS-algebra L correspond to the MS-congruence pairs on the simpler substructures L°° and D(L) of L that were associated with L in [4].

An analogue of a well-known problem of Grätzer [11: Problem 57], formulated for distributive p-algebras, which asks for a characterization of the congruence lattices in terms of congruence pairs, is presented here for principal MS-algebras (Problem 1). Unlike a recent solution to such a problem for principal p-algebras in [2], it is demonstrated here, on the class of principal MS-algebras, that a possible solution to the problem, though not very descriptive, can be simple and elegant.

As a step toward a more descriptive solution of Problem 1, the special case is then considered in which a principal MS-algebra L is a perfect extension of its greatest Stone subalgebra LS. It is shown that this happens exactly when the de Morgan subalgebra L°° of L is a perfect extension of the Boolean algebra B(L). Two examples illustrating when this special case occurs and when it does not are presented.


Geosciences ◽  
2021 ◽  
Vol 11 (1) ◽  
pp. 28
Author(s):  
Gaetano Festa ◽  
Guido Maria Adinolfi ◽  
Alessandro Caruso ◽  
Simona Colombelli ◽  
Grazia De Landro ◽  
...  

Seismic sequences are a powerful tool to locally infer geometrical and mechanical properties of faults and fault systems. In this study, we provide detailed locations and characterization of the events of the 3–7 July 2020 Irpinia sequence (southern Italy), which occurred at the northern tip of the main segment that ruptured during the 1980 Irpinia earthquake. Using an autocorrelation technique, we detected more than 340 events within the sequence, with local magnitudes ranging between −0.5 and 3.0. We then performed double-difference locations, source-parameter estimation, and focal-mechanism determination for the highest-quality events. We found that the sequence ruptured an asperity with a size of about 800 m, along a fault structure with a strike compatible with that of the main segment of the 1980 Irpinia earthquake, and a dip of 50–55° at depths of 10.5–12 km and 60–65° at shallower depths (7.5–9 km). The low stress-drop release (average of 0.64 MPa) indicates a fluid-driven initiation mechanism for the sequence. We also evaluated the performance of the earthquake early warning systems running in real time during the sequence, retrieving a minimum blind-zone size in the area of about 15 km.
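For reference, stress drops like the 0.64 MPa average quoted above are commonly obtained from spectral source parameters via the Brune/Eshelby circular-crack relations. The sketch below uses those standard formulas with illustrative inputs; it is not the paper's actual processing chain, and the constants (S-wave speed, Brune constant k) are assumptions.

import numpy as np

def stress_drop(M0, fc, beta=3500.0, k=0.37):
    # M0: seismic moment (N m); fc: corner frequency (Hz);
    # beta: S-wave speed (m/s); k: Brune model constant.
    r = k * beta / fc                  # source radius (m)
    return 7.0 * M0 / (16.0 * r**3)    # Eshelby static stress drop (Pa)

# Example: a magnitude ~2 event with a 10 Hz corner frequency.
M0 = 10.0 ** (1.5 * 2.0 + 9.1)         # standard Mw-to-M0 conversion (N m)
print(stress_drop(M0, fc=10.0) / 1e6, "MPa")   # sub-MPa, same order as reported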


Minerals ◽  
2020 ◽  
Vol 11 (1) ◽  
pp. 39
Author(s):  
Mariana Lemos ◽  
Teresa Valente ◽  
Paula Marinho Reis ◽  
Rita Fonseca ◽  
Itamar Delbem ◽  
...  

For more than 30 years, sulfide gold ores were treated in metallurgical plants located in Nova Lima, Minas Gerais, Brazil, and accumulated in the Cocoruto tailings dam. Both flotation and leaching tailings from a deactivated circuit, as well as roasted and leaching tailings from an ongoing plant, were studied for their acid mine drainage potential and element mobility. Detailed characterization of both tailings types indicates the presence of fine-grained material hosting substantial amounts of sulfides that exhibit distinct geochemical and mineralogical characteristics. The samples from the ongoing plant show high grades of Fe in the form of oxides, cyanide, and sulfates. In contrast, samples from the old circuit show higher average concentrations of Al (0.88%), Ca (2.4%), Mg (0.96%), and Mn (0.17%), present as silicates and carbonates. These samples also show relics of preserved sulfides, such as pyrite and pyrrhotite. Concentrations of Zn, Cu, Au, and As are higher in the tailings of the ongoing circuit, while Cr and Hg stand out in the tailings of the deactivated circuit. Although the obtained results show that the sulfide wastes do not tend to generate acid mine drainage, leaching tests indicate the possibility of mobilization of toxic elements, namely As and Mn in the old circuit, and Sb, As, Fe, Ni, and Se in the tailings of the plant that is still operating. This work highlights the need for proper management and control of tailings dams even in alkaline drainage environments such as that of the Cocoruto dam. Furthermore, strong knowledge of the tailings' geochemical and mineralogical dynamics would be pivotal to support long-term decisions on waste management and disposal.
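Acid mine drainage potential of the kind assessed above is often screened with a simple acid-base accounting balance. The function below encodes that generic screen; the paper's specific test protocol is not stated here, and the example values are invented.

def napp(sulfide_s_percent, anc):
    # Net acid-producing potential (kg H2SO4 per tonne): MPA - ANC, where the
    # maximum potential acidity MPA assumes all sulfide sulfur oxidizes to
    # H2SO4 (about 30.6 kg H2SO4/t per 1 wt% S).
    mpa = 30.6 * sulfide_s_percent
    return mpa - anc

# NAPP < 0 suggests non-acid-forming waste, as reported for these tailings.
print(napp(sulfide_s_percent=0.8, anc=60.0))   # negative; numbers illustrative only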


Molecules ◽  
2021 ◽  
Vol 26 (13) ◽  
pp. 3842
Author(s):  
Alessandro D’Alessandro ◽  
Daniele Ballestrieri ◽  
Lorenzo Strani ◽  
Marina Cocchi ◽  
Caterina Durante

Basil is a plant known worldwide for its culinary and health attributes. It comprises more than 150 species and many more chemotypes, owing to the ease with which it cross-breeds. Each species and each chemotype has a typical aroma pattern, and selecting the proper one is crucial for the food industry. Twelve basil varieties have been studied over three years (2018–2020), as have four different cuts. To characterize the aroma profile, nine typical basil flavour molecules were selected using gas chromatography–mass spectrometry coupled with olfactometry (GC–MS/O). The concentrations of the nine selected molecules were measured by an ultra-fast GC e-nose, and Principal Component Analysis (PCA) was applied to detect possible differences among the samples. The PCA results highlighted differences between harvesting years, mainly for 2018, whereas no observable clusters were found for varieties and cuts, probably because of the combined effects of the investigated factors. For this reason, the ANOVA Simultaneous Component Analysis (ASCA) methodology was applied to a balanced, a posteriori designed dataset. All the considered factors and interactions were statistically significant (p < 0.05) in explaining differences between the basil aroma profiles, with the more relevant effects being variety and year.
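The chemometric step described above reduces a samples-by-molecules concentration table to a few principal components. Below is a minimal sketch with scikit-learn, using random placeholder data in place of the e-nose measurements (144 samples = 12 varieties x 3 years x 4 cuts is assumed purely for illustration).

import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

rng = np.random.default_rng(3)
X = rng.standard_normal((144, 9))   # placeholder: 144 samples x 9 flavour molecules

# Autoscale, then project onto the first two principal components.
scores = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(X))
# Colouring the score plot by year, variety, and cut is how the clustering
# (or its absence) discussed above would be inspected.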

