Using Bayesian Model Probability for Ranking Different Prior Scenarios in Reservoir History Matching

SPE Journal ◽  
2019 ◽  
Vol 24 (04) ◽  
pp. 1490-1507 ◽  
Author(s):  
Sigurd Ivar Aanonsen ◽  
Svenn Tveit ◽  
Mathias Alerini

Summary This paper considers Bayesian methods to discriminate between models depending on posterior model probability. When applying ensemble-based methods for model updating or history matching, the uncertainties in the parameters are typically assumed to be univariate Gaussian random fields. In reality, however, there are often several alternative scenarios that are plausible a priori. We take that into account by applying the concepts of model likelihood and model probability and suggest a method that uses importance sampling to estimate these quantities from the prior and posterior ensembles. In particular, we focus on the problem of conditioning a dynamic reservoir-simulation model to frequent 4D-seismic data (e.g., permanent-reservoir-monitoring data) by tuning the top reservoir surface given several alternative prior interpretations with uncertainty. However, the methodology can easily be applied to similar problems, such as fault location and reservoir compartmentalization. Although the estimated posterior model probabilities will be uncertain, the ranking of models according to estimated probabilities appears to be quite robust.
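The core idea of ranking prior scenarios by posterior model probability can be illustrated with a simplified Monte Carlo analogue of the estimator the abstract describes. The sketch below is not the paper's importance-sampling scheme: the forward model, observation, noise level, and the two scenario ensembles are all hypothetical, and the evidence p(d | M) is approximated by averaging the data likelihood over each prior ensemble.

```python
import numpy as np

rng = np.random.default_rng(0)

def model_evidence(prior_ensemble, forward, d_obs, sigma):
    """Monte Carlo estimate of p(d | M): average data likelihood over a prior ensemble."""
    preds = np.array([forward(m) for m in prior_ensemble])
    resid = preds - d_obs
    loglik = -0.5 * np.sum((resid / sigma) ** 2, axis=1)
    # log-sum-exp shift for numerical stability before averaging
    shift = loglik.max()
    return np.exp(shift) * np.mean(np.exp(loglik - shift))

# Hypothetical toy problem: a scalar "top-surface shift" m and a 2-point data vector
forward = lambda m: np.array([m, 2.0 * m])   # toy forward model, not a simulator
d_obs = np.array([1.0, 2.0])                 # synthetic observation (truth at m = 1)
sigma = 0.1                                  # assumed observation-error std

ens_A = rng.normal(1.0, 0.2, size=200)       # scenario A: prior centred near the truth
ens_B = rng.normal(3.0, 0.2, size=200)       # scenario B: prior centred far away

ev_A = model_evidence(ens_A, forward, d_obs, sigma)
ev_B = model_evidence(ens_B, forward, d_obs, sigma)

# Posterior model probability, assuming equal prior model probabilities
p_A = ev_A / (ev_A + ev_B)
# scenario A should receive essentially all of the posterior probability
```

Even when the individual evidence estimates are noisy, the resulting ranking of scenarios tends to be stable, which is the robustness property the abstract notes.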

2019 ◽  
Vol 23 (6) ◽  
pp. 1331-1347 ◽  
Author(s):  
Miguel Alfonzo ◽  
Dean S. Oliver

Abstract It is common in ensemble-based methods of history matching to evaluate the adequacy of the initial ensemble of models through visual comparison between actual observations and data predictions prior to data assimilation. If the model is appropriate, then the observed data should look plausible when compared to the distribution of realizations of simulated data. The principle of data coverage alone is, however, not an effective method for model criticism, as coverage can often be obtained by increasing the variability in a single model parameter. In this paper, we propose a methodology for determining the suitability of a model before data assimilation, aimed particularly at real cases with large numbers of model parameters, large amounts of data, and correlated observation errors. This model diagnostic is based on an approximation of the Mahalanobis distance between the observations and the ensemble of predictions in high-dimensional spaces. We applied our methodology to two different examples: a Gaussian example, which shows that our shrinkage estimate of the covariance matrix is a better discriminator of outliers than the pseudo-inverse and a diagonal approximation of this matrix; and an example using data from the Norne field. In this second test, we used actual production, repeat formation tester, and inverted seismic data to evaluate the suitability of the initial reservoir simulation model and seismic model. Despite the good data coverage, our model diagnostic suggested that model improvement was necessary. After modifying the model, it was validated against the observations and is now ready for history matching to production and seismic data. This shows that the proposed methodology for the evaluation of the adequacy of the model is suitable for large realistic problems.
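The diagnostic the abstract describes can be sketched in miniature: measure how far the observation vector lies from the ensemble of simulated data under a Mahalanobis metric, with the ensemble covariance stabilized by shrinkage toward its diagonal. This is a minimal illustration, not the paper's estimator; the shrinkage weight, ensemble, and observation vectors below are all assumed for the example.

```python
import numpy as np

rng = np.random.default_rng(1)

def shrinkage_mahalanobis(d_obs, D_pred, alpha=0.1):
    """Squared Mahalanobis distance from d_obs to the ensemble of predictions,
    using the shrinkage covariance (1 - alpha) * C_ens + alpha * diag(C_ens)."""
    mu = D_pred.mean(axis=0)
    C = np.cov(D_pred, rowvar=False)
    C_shrunk = (1.0 - alpha) * C + alpha * np.diag(np.diag(C))
    r = d_obs - mu
    return float(r @ np.linalg.solve(C_shrunk, r))

# Hypothetical ensemble of simulated data: 50 realizations of a 5-point data vector
D = rng.normal(size=(50, 5))

d_plausible = np.zeros(5)        # observation consistent with the ensemble
d_outlier = 5.0 * np.ones(5)     # observation far outside the predicted spread

dist_in = shrinkage_mahalanobis(d_plausible, D)
dist_out = shrinkage_mahalanobis(d_outlier, D)
```

The shrinkage target matters most when the number of data points exceeds the ensemble size, where the raw ensemble covariance is rank deficient and a pseudo-inverse can badly understate the distance, which is the comparison the Gaussian example in the abstract makes.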


2003 ◽  
Vol 10 (1) ◽  
pp. 15-25 ◽  
Author(s):  
M.W. Zehn ◽  
A. Saitov

Owing to manufacturing, composite materials and other materials show considerable uncertainties in wall thickness, fluctuations in material properties, and other parameters that are spatially distributed over the structure. These uncertainties have a random character and therefore cannot be reduced by any kind of mesh refinement within the FE model. What is needed is a suitable statistical approach to describing the parameter variation, one that captures both the statistics of the process and the spatial correlation of the parameters over the structure. This paper presents a solution for the spatially correlated simulation of parameter distributions arising from the manufacturing process or other causes, in a form suitable for inclusion in the FEA. The parameter-estimation methods used in updating algorithms for FE models depend on weighting matrices that must be chosen a priori. These weighting matrices are in most cases set by the engineering judgement of the analyst carrying out the updating procedure, based on an assessment of the uncertainty of the chosen parameters and of the measured and calculated results. With a statistical description of the spatial distribution at hand, a parameter weighting matrix for a Bayesian estimator can be calculated. Furthermore, it can be shown in principle that model updating can improve the probabilistic parameter distribution itself.
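A spatially correlated parameter field of the kind described can be sampled from a Gaussian random field with a chosen correlation model, and the inverse of the same covariance then provides a statistically grounded weighting matrix. The sketch below is a generic 1-D illustration with an exponential correlation kernel; the thickness statistics and correlation length are assumed values, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

# Element centres along the structure and hypothetical wall-thickness statistics
x = np.linspace(0.0, 1.0, 20)
mean, std, corr_len = 2.0, 0.05, 0.3   # assumed values for the example

# Exponential spatial correlation between element positions
dist = np.abs(x[:, None] - x[None, :])
C = std ** 2 * np.exp(-dist / corr_len)

# One realization of the spatially correlated thickness field via Cholesky sampling
L = np.linalg.cholesky(C + 1e-12 * np.eye(len(x)))  # small jitter for numerical safety
t = mean + L @ rng.standard_normal(len(x))

# The inverse covariance can serve as the parameter weighting matrix
# for a Bayesian estimator, replacing ad hoc engineering judgement
W = np.linalg.inv(C)
```

Because the field is sampled from an explicit covariance, neighbouring elements receive similar perturbations, which mesh refinement alone cannot reproduce.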


SPE Journal ◽  
2010 ◽  
Vol 15 (04) ◽  
pp. 1077-1088 ◽  
Author(s):  
F. Sedighi ◽  
K. D. Stephen

Summary Seismic history matching is the process of modifying a reservoir simulation model to reproduce the observed production data in addition to information gained through time-lapse (4D) seismic data. The search for good predictions requires that many models be generated, particularly if there is an interaction between the properties that we change and their effect on the misfit to observed data. In this paper, we introduce a method of improving search efficiency by estimating such interactions and partitioning the set of unknowns into noninteracting subspaces. We use regression analysis to identify the subspaces, which are then searched separately but simultaneously with an adapted version of the quasiglobal stochastic neighborhood algorithm. We have applied this approach to the Schiehallion field, located on the UK continental shelf. The field model, supplied by the operator, contains a large number of barriers that affect flow at different times during production, and their transmissibilities are highly uncertain. We find that we can successfully represent the misfit function as a second-order polynomial dependent on changes in barrier transmissibility. First, this enables us to identify the most important barriers, and, second, we can modify their transmissibilities efficiently by searching subgroups of the parameter space. Once the regression analysis has been performed, we reduce the number of models required to find a good match by an order of magnitude. By using 4D seismic data to condition saturation and pressure changes in history matching effectively, we have gained a greater insight into reservoir behavior and have been able to predict flow more accurately with an efficient inversion tool. We can now determine unswept areas and make better business decisions.
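The partitioning step the abstract describes, representing the misfit as a second-order polynomial and reading off which parameters interact, can be sketched with a toy misfit function. Everything below is illustrative: the misfit, the three parameters, and the sampling scheme are invented, and the regression is an ordinary least-squares fit of a full quadratic model.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy misfit: parameters 0 and 1 interact; parameter 2 acts independently
def misfit(p):
    return (p[0] + p[1] - 1.0) ** 2 + (p[2] - 0.5) ** 2

# Sample the parameter space and evaluate the misfit
P = rng.uniform(-1.0, 1.0, size=(200, 3))
y = np.array([misfit(p) for p in P])

# Design matrix for a full second-order polynomial: 1, p_i, p_i * p_j (i <= j)
cols = [np.ones(len(P))]
cols += [P[:, i] for i in range(3)]
cols += [P[:, i] * P[:, j] for i in range(3) for j in range(i, 3)]
X = np.column_stack(cols)

coef, *_ = np.linalg.lstsq(X, y, rcond=None)

# Cross-term coefficients: indices 5, 6, 8 hold (p0*p1), (p0*p2), (p1*p2)
c01, c02, c12 = coef[5], coef[6], coef[8]
# Nonzero c01 flags parameters 0 and 1 as an interacting subgroup to be
# searched together; parameter 2 can be searched in its own subspace
```

Once the interaction structure is known, each noninteracting subspace can be searched separately, which is where the order-of-magnitude reduction in model evaluations reported in the abstract comes from.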

