Keynote Presentation: Efficient and Neutral Uncertainty Quantification for Discrete-facies Reservoir Models

Author(s):  
K. Mosegaard ◽  
Y. Melnikova ◽  
K.S. Cordua
2015 ◽  
Vol 138 (1) ◽  
Author(s):  
Jihoon Park ◽  
Jeongwoo Jin ◽  
Jonggeun Choe

For decision making, proper reservoir characterization and uncertainty assessment of reservoir performance are crucial. Since initial models constructed from limited data carry high uncertainty, integrating both static and dynamic data is essential for reliable future predictions. Uncertainty quantification is computationally demanding because a single history matching requires many iterative forward simulations and optimizations, and multiple realizations of the reservoir model must be computed. In this paper, a methodology is proposed to rapidly quantify uncertainty by combining streamline-based inversion with distance-based clustering. The distance between each pair of reservoir models is defined as the norm of the difference of their generalized travel time (GTT) vectors. Reservoir models are then grouped according to these distances, and representative models are selected from each group. Inversions are performed on the representative models instead of on all models. We use generalized travel time inversion (GTTI) to integrate dynamic data, which overcomes high nonlinearity and is computationally efficient. It is verified that the proposed method gathers models with both similar dynamic responses and similar permeability distributions. It also reliably assesses the uncertainty of reservoir performance while significantly reducing the amount of computation by using the representative models.
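The grouping step described above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the GTT vectors below are synthetic stand-ins (in practice they come from streamline simulation of each realization), and a simple greedy k-medoids-style rule stands in for the paper's clustering.

```python
# Sketch of distance-based grouping of reservoir models, assuming each
# model's dynamic response is summarized by a generalized-travel-time
# (GTT) vector. All data here are synthetic and illustrative.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical ensemble: 30 reservoir models, each with a 5-well GTT vector,
# drawn from three distinct response groups.
gtt = np.vstack([
    rng.normal(100.0, 5.0, size=(10, 5)),   # group A responses
    rng.normal(150.0, 5.0, size=(10, 5)),   # group B responses
    rng.normal(200.0, 5.0, size=(10, 5)),   # group C responses
])

# Distance between models i and j = norm of the difference of their GTT vectors.
dist = np.linalg.norm(gtt[:, None, :] - gtt[None, :, :], axis=2)

def group_and_pick(dist, k):
    """Greedy k-medoids-style grouping: pick k representatives, then assign
    every model to its nearest representative (a simple stand-in for the
    paper's distance-based clustering)."""
    medoids = [0]
    while len(medoids) < k:
        # Next representative = model farthest from all current ones.
        farthest = np.argmax(dist[:, medoids].min(axis=1))
        medoids.append(int(farthest))
    labels = np.argmin(dist[:, medoids], axis=1)
    return medoids, labels

reps, labels = group_and_pick(dist, k=3)
print("representative models:", reps)
print("group sizes:", np.bincount(labels))
```

Inversion would then be run only on the three representatives rather than on all 30 models.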


SPE Journal ◽  
2009 ◽  
Vol 15 (01) ◽  
pp. 31-38 ◽  
Author(s):  
Linah Mohamed ◽  
Mike Christie ◽  
Vasily Demyanov

Summary History matching and uncertainty quantification are currently two important research topics in reservoir simulation. In the Bayesian approach, we start with prior information about a reservoir (e.g., from analog outcrop data) and update our reservoir models with observations (e.g., from production data or time-lapse seismic). The goal of this activity is often to generate multiple models that match the history and to use those models to quantify uncertainties in predictions of reservoir performance. A critical aspect of generating multiple history-matched models is the sampling algorithm used to generate them. Algorithms that have been studied include gradient methods, genetic algorithms, and the ensemble Kalman filter (EnKF). This paper investigates the efficiency of three stochastic sampling algorithms: the Hamiltonian Monte Carlo (HMC) algorithm, the Particle Swarm Optimization (PSO) algorithm, and the Neighbourhood Algorithm (NA). HMC is a Markov chain Monte Carlo (MCMC) technique that uses Hamiltonian dynamics to achieve larger jumps than are possible with other MCMC techniques. PSO is a swarm-intelligence algorithm that uses dynamics similar to HMC to guide the search but incorporates acceleration and damping parameters to provide rapid convergence to possibly multiple minima. NA is a sampling technique that uses the properties of Voronoi cells in high dimensions to obtain multiple history-matched models. The algorithms are compared by generating multiple history-matched reservoir models and comparing the Bayesian credible intervals (p10-p50-p90) produced by each algorithm. We show that all the algorithms are able to find equivalent match qualities for this example, but that some algorithms find good-fitting models quickly, whereas others find a more diverse set of models in parameter space.
The effects of the different sampling of model-parameter space are compared in terms of the p10-p50-p90 uncertainty envelopes in forecast oil rate. These results show that algorithms based on Hamiltonian dynamics and swarm-intelligence concepts have the potential to be effective tools for uncertainty quantification in the oil industry.
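Of the three samplers compared, PSO is the simplest to state compactly. The sketch below is illustrative only: the "misfit" is a toy quadratic stand-in for a simulation-vs-history mismatch (with a known minimum at (3, -2)), and the inertia and acceleration settings are conventional textbook values, not those used in the paper.

```python
# Minimal particle-swarm optimization (PSO) sketch on a toy
# history-match misfit. All settings are illustrative.
import numpy as np

rng = np.random.default_rng(1)

def misfit(x):
    # Toy stand-in for a simulation-vs-history mismatch; minimum at (3, -2).
    return np.sum((x - np.array([3.0, -2.0])) ** 2, axis=-1)

n_particles, n_iters = 20, 200
w, c1, c2 = 0.7, 1.5, 1.5           # inertia, cognitive and social accelerations

pos = rng.uniform(-10, 10, size=(n_particles, 2))
vel = np.zeros_like(pos)
pbest = pos.copy()                   # each particle's best position so far
pbest_val = misfit(pbest)
gbest = pbest[np.argmin(pbest_val)]  # swarm-wide best position

for _ in range(n_iters):
    r1, r2 = rng.random((2, n_particles, 1))
    # Velocity update: damped inertia plus attraction toward personal
    # and global bests -- this is what drives the rapid convergence
    # the abstract refers to.
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    val = misfit(pos)
    improved = val < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], val[improved]
    gbest = pbest[np.argmin(pbest_val)]

print("best parameters:", gbest, "misfit:", float(misfit(gbest)))
```

In a real study, `misfit` would wrap a reservoir simulator, and the accepted models (not just the single best) would feed the p10-p50-p90 envelopes.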


2015 ◽  
Author(s):  
Hamidreza Hamdi ◽  
Yasin Hajizadeh ◽  
Mario Costa Sousa

SPE Journal ◽  
2016 ◽  
Vol 21 (04) ◽  
pp. 1192-1203 ◽  
Author(s):  
A. Alkhatib ◽  
M. Babaei

Summary Reservoir heterogeneity can be detrimental to the success of surfactant/polymer enhanced-oil-recovery (EOR) processes. It is therefore important to evaluate the effect of uncertainty in reservoir heterogeneity on the performance of surfactant/polymer EOR. Usually, a Monte Carlo sampling approach is used, in which a number of stochastic reservoir-model realizations are generated and numerical simulation is then performed to obtain a certain objective function, such as the recovery factor. However, Monte Carlo simulation (MCS) has a slow convergence rate and requires a large number of samples to produce accurate results, which can be computationally expensive for large, complex reservoir models. This study applies a multiscale approach, the multilevel Monte Carlo (MLMC) method, to improve the efficiency of uncertainty quantification. The method performs a small number of expensive simulations on the fine-scale model and a large number of less-expensive simulations on coarser upscaled models, and then combines the results to produce the quantities of interest. Its purpose is to reduce computational cost while maintaining the accuracy of the fine-scale model. The results of this approach are compared with a reference MCS that uses a large number of simulations on the fine-scale model. Other advantages of the MLMC method are its nonintrusiveness and its scalability to an increasing number of uncertainties. This study uses the MLMC method to efficiently quantify the effect of uncertainty in heterogeneity on the recovery factor of a chemical EOR process, specifically surfactant/polymer flooding. The permeability field is taken as the random input. The method is first demonstrated on a Gaussian 3D reservoir model. Different coarsening algorithms, such as the renormalization method and the pressure-solver method (PSM), are used and compared.
The results are compared with running Monte Carlo on the fine-scale model at a computational cost equal to that of the MLMC method. Both sets of results are then compared with the reference case, which uses a large number of runs of the fine-scale model. The method is then extended to a channelized, non-Gaussian 3D reservoir model incorporating multiphase upscaling. The results show that it is possible to robustly quantify spatial uncertainty for a surfactant/polymer EOR process while greatly reducing the computational requirement, by up to two orders of magnitude compared with traditional Monte Carlo, for both the Gaussian and non-Gaussian reservoir models. The method can easily be extended to quantify spatial uncertainty for other EOR processes, such as carbon dioxide (CO2) EOR. Other possible extensions of the method are also discussed.
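The MLMC estimator described above rests on a telescoping sum: the expectation on the finest level L is written as E[P_L] = E[P_0] + Σ_l E[P_l − P_{l−1}], with many cheap samples of the coarse term and few samples of each (coupled) fine correction. The sketch below illustrates the estimator on a toy analytic "simulator" whose level-dependent shift stands in for discretization bias; the function, level count, and sample allocation are all illustrative, not the paper's.

```python
# Multilevel Monte Carlo (MLMC) telescoping estimator on a toy problem.
# The "simulator" and sample allocation are illustrative stand-ins.
import numpy as np

rng = np.random.default_rng(2)

def model(theta, level):
    # Toy stand-in for a reservoir simulation at a given grid resolution:
    # coarser levels (small `level`) carry a discretization shift that
    # shrinks as the level grows.
    return np.sin(theta + 2.0 ** (-level))

def sample(n):
    # Random input -- stand-in for the uncertain permeability field.
    return rng.normal(0.0, 1.0, size=n)

levels = [0, 1, 2, 3]
samples_per_level = [4000, 1000, 250, 60]   # many cheap coarse, few fine runs

# Telescoping sum: E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}],
# with each correction estimated on its own coupled samples
# (the SAME theta is fed to both adjacent levels).
estimate = 0.0
for level, n in zip(levels, samples_per_level):
    theta = sample(n)
    if level == 0:
        estimate += np.mean(model(theta, 0))
    else:
        estimate += np.mean(model(theta, level) - model(theta, level - 1))

print("MLMC estimate of E[P_3]:", estimate)
```

Because the level corrections have small variance, most of the sampling effort sits on the cheap coarse level, which is the source of the cost reduction the abstract reports.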


SPE Journal ◽  
2012 ◽  
Vol 17 (03) ◽  
pp. 865-873 ◽  
Author(s):  
Asaad Abdollahzadeh ◽  
Alan Reynolds ◽  
Mike Christie ◽  
David Corne ◽  
Brian Davies ◽  
...  

Summary Prudent decision making in subsurface assets requires reservoir uncertainty quantification. In a typical uncertainty-quantification study, reservoir models must be updated with the observed response from the reservoir through a process known as history matching. This involves solving an inverse problem: finding reservoir models that produce, under simulation, a response similar to that of the real reservoir. However, this requires multiple expensive multiphase-flow simulations. Uncertainty-quantification studies therefore employ optimization techniques to find acceptable models to be used in prediction. Different optimization algorithms and search strategies are presented in the literature, but they are generally unsatisfactory because of slow convergence to the optimal regions of the global search space and, more importantly, failure to find multiple acceptable reservoir models. In this context, a new approach is offered by estimation-of-distribution algorithms (EDAs). EDAs are population-based algorithms that use models to estimate the probability distribution of promising solutions and then generate new candidate solutions from it. This paper explores the application of EDAs with both univariate and multivariate models. We discuss two histogram-based univariate models and one multivariate model, the Bayesian optimization algorithm (BOA), which employs Bayesian networks for modeling. By considering possible interactions between variables and exploiting explicitly stored knowledge of such interactions, EDAs can accelerate the search process while preserving search diversity. Unlike most existing approaches applied to uncertainty quantification, the Bayesian network allows the BOA to build solutions using flexible rules learned from the models obtained, rather than fixed rules, leading to better solutions and improved convergence.
The BOA is naturally suited to finding good solutions in complex high-dimensional spaces, such as those typical of reservoir-uncertainty quantification. We demonstrate the effectiveness of EDAs on the well-known synthetic PUNQ-S3 case with multiple wells, which allows us to verify the methodology in a well-controlled setting. Results show better estimation of uncertainty when compared with some traditional population-based algorithms.
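The EDA loop the abstract describes — sample a population from a probability model, select the promising solutions, re-estimate the model — is easiest to see in its univariate form. The sketch below is a UMDA-style univariate EDA (akin to the histogram-based models the paper mentions, not the Bayesian-network BOA) on the toy OneMax problem; the problem and all parameters are illustrative.

```python
# Minimal univariate estimation-of-distribution algorithm (UMDA-style)
# on the toy OneMax problem: maximize the number of 1-bits. The full BOA
# would additionally learn a Bayesian network over the variables.
import numpy as np

rng = np.random.default_rng(3)

n_bits, pop_size, n_select, n_gens = 30, 100, 30, 40
p = np.full(n_bits, 0.5)             # estimated per-bit probability of a 1

for _ in range(n_gens):
    # Sample a population of candidate solutions from the current model.
    pop = (rng.random((pop_size, n_bits)) < p).astype(int)
    fitness = pop.sum(axis=1)        # OneMax objective
    # Select the promising solutions (truncation selection).
    elite = pop[np.argsort(fitness)[-n_select:]]
    # Re-estimate the distribution from the elite, clipped away from
    # 0/1 to preserve search diversity.
    p = np.clip(elite.mean(axis=0), 0.05, 0.95)

best = (rng.random((pop_size, n_bits)) < p).astype(int).sum(axis=1).max()
print("best OneMax fitness:", best, "of", n_bits)
```

In the history-matching setting, the bit string would encode reservoir parameters and the fitness would be (negative) simulation misfit; the BOA replaces the independent per-bit probabilities with a learned Bayesian network that captures interactions between variables.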


2001 ◽  
Vol 60 (3) ◽  
pp. 161-178 ◽  
Author(s):  
Jean A. Rondal

Predominantly non-etiological conceptions have dominated the field of mental retardation (MR) since the discovery of the genetic etiology of Down syndrome (DS) in the sixties. However, contemporary approaches are becoming more etiologically oriented. Important differences across MR syndromes of genetic origin are being documented, particularly in the cognition and language domains, differences not explicable in terms of psychometric level, motivation, or other dimensions. This paper highlights the major difficulties observed in the oral language development of individuals with genetic syndromes of mental retardation. The extent of inter- and within-syndrome variability is evaluated. Possible brain underpinnings of the behavioural differences are envisaged. Cases of atypically favourable language development in MR individuals are also summarized and explanatory variables discussed. It is suggested that differences in brain architectures, originating in neurological development and having genetic origins, may largely explain the syndromic as well as the individual within-syndrome variability documented. Lastly, the major implications of the above points for current debates about modularity and developmental connectionism are spelt out.

