Population-Based Optimization Algorithms for History Matching and Uncertainty Quantification: PUNQ-S3

2011, Vol 63 (04), pp. 99-101
Author(s): Dennis Denney
SPE Journal, 2012, Vol 17 (03), pp. 865-873
Author(s): Asaad Abdollahzadeh, Alan Reynolds, Mike Christie, David Corne, Brian Davies, et al.

Summary: Prudent decision making in subsurface assets requires reservoir uncertainty quantification. In a typical uncertainty-quantification study, reservoir models must be updated using the observed response from the reservoir by a process known as history matching. This involves solving an inverse problem: finding reservoir models that produce, under simulation, a similar response to that of the real reservoir. However, this requires multiple expensive multiphase-flow simulations. Thus, uncertainty-quantification studies employ optimization techniques to find acceptable models to be used in prediction. Different optimization algorithms and search strategies are presented in the literature, but they are generally unsatisfactory because of slow convergence to the optimal regions of the global search space and, more importantly, failure to find multiple acceptable reservoir models. In this context, a new approach is offered by estimation-of-distribution algorithms (EDAs). EDAs are population-based algorithms that use models to estimate the probability distribution of promising solutions and then generate new candidate solutions. This paper explores the application of EDAs, including univariate and multivariate models. We discuss two histogram-based univariate models and one multivariate model, the Bayesian optimization algorithm (BOA), which employs Bayesian networks for modeling. By considering possible interactions between variables and exploiting explicitly stored knowledge of such interactions, EDAs can accelerate the search process while preserving search diversity. Unlike most existing approaches applied to uncertainty quantification, the Bayesian network allows the BOA to build solutions using flexible rules learned from the models obtained, rather than fixed rules, leading to better solutions and improved convergence.
The BOA is naturally suited to finding good solutions in complex high-dimensional spaces, such as those typical in reservoir-uncertainty quantification. We demonstrate the effectiveness of the EDAs by applying them to the well-known synthetic PUNQ-S3 case with multiple wells, which allows us to verify the methodology in a well-controlled setting. Results show better estimation of uncertainty when compared with other traditional population-based algorithms.
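As an illustration of the histogram-based univariate EDAs discussed above, the sketch below evolves a population by repeatedly fitting a per-variable histogram to the selected elite and resampling new candidates from it. This is a minimal stand-in, not the authors' implementation: the objective, population sizes, and bin counts are illustrative assumptions, and a cheap sphere function replaces the expensive multiphase-flow simulation.

```python
import numpy as np

def sphere(pop):
    """Toy objective standing in for an expensive reservoir simulation."""
    return np.sum(pop**2, axis=1)

def histogram_eda(objective, dim=5, pop_size=100, n_gen=50,
                  n_select=30, n_bins=10, lo=-5.0, hi=5.0, seed=0):
    """Minimal univariate (histogram-based) EDA: model each variable's
    marginal distribution over the selected elite, then resample."""
    rng = np.random.default_rng(seed)
    pop = rng.uniform(lo, hi, size=(pop_size, dim))
    best_x, best_f = None, np.inf
    edges = np.linspace(lo, hi, n_bins + 1)
    for _ in range(n_gen):
        f = objective(pop)
        order = np.argsort(f)
        if f[order[0]] < best_f:
            best_f, best_x = f[order[0]], pop[order[0]].copy()
        elite = pop[order[:n_select]]          # truncation selection
        new_pop = np.empty_like(pop)
        for j in range(dim):
            counts, _ = np.histogram(elite[:, j], bins=edges)
            # Laplace-style smoothing keeps every bin reachable
            probs = (counts + 1e-6) / (counts.sum() + n_bins * 1e-6)
            bins = rng.choice(n_bins, size=pop_size, p=probs)
            new_pop[:, j] = rng.uniform(edges[bins], edges[bins + 1])
        pop = new_pop
    return best_x, best_f
```

Because each variable is modeled independently, this univariate scheme cannot capture the interactions between variables that the BOA's Bayesian network represents explicitly.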


2019, Vol 2 (3), pp. 508-517
Author(s): Ferda Nur Arıcı, Ersin Kaya

Optimization is the process of searching for the most suitable solution to a problem within an acceptable time. The algorithms that solve optimization problems are called optimization algorithms. In the literature, there are many optimization algorithms with different characteristics, and they can exhibit different behaviors depending on the size, characteristics, and complexity of the optimization problem. In this study, six well-known population-based optimization algorithms were used: the artificial algae algorithm (AAA), the artificial bee colony algorithm (ABC), the differential evolution algorithm (DE), the genetic algorithm (GA), the gravitational search algorithm (GSA), and particle swarm optimization (PSO). These six algorithms were run on the CEC'17 test functions. Based on the experimental results, the algorithms were compared and their performances were evaluated.
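For concreteness, one of the six compared algorithms can be sketched in a few lines. The following is a textbook global-best PSO run on a Rastrigin-style multimodal function of the kind found in the CEC'17 suite; the parameters (inertia weight, acceleration coefficients, population size) are generic defaults, not the settings used in the study.

```python
import numpy as np

def rastrigin(x):
    """Multimodal benchmark function; global minimum 0 at the origin."""
    return 10 * x.size + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

def pso(objective, dim=5, n_particles=40, n_iter=200,
        w=0.7, c1=1.5, c2=1.5, lo=-5.12, hi=5.12, seed=0):
    """Textbook global-best particle swarm optimization."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, (n_particles, dim))
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_f = np.array([objective(p) for p in x])
    g = pbest[np.argmin(pbest_f)].copy()   # global best position
    g_f = pbest_f.min()
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, dim))
        # velocity: inertia + cognitive pull + social pull
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        f = np.array([objective(p) for p in x])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        if pbest_f.min() < g_f:
            g_f = pbest_f.min()
            g = pbest[np.argmin(pbest_f)].copy()
    return g, g_f
```

Swapping in a different benchmark or dimension shows the behavior the paragraph describes: the same algorithm can perform very differently depending on the problem's modality and size.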


Mathematics, 2021, Vol 9 (11), pp. 1190
Author(s): Mohammad Dehghani, Zeinab Montazeri, Štěpán Hubálovský

There are many optimization problems in different disciplines of science that must be solved using an appropriate method. Population-based optimization algorithms are among the most efficient ways to solve such problems: they can provide appropriate solutions based on a random search of the problem space, without the need for gradient or derivative information. In this paper, a new optimization algorithm called the Group Mean-Based Optimizer (GMBO) is presented; it can be applied to solve optimization problems in various fields of science. The main idea in designing the GMBO is to use the information from different members of the algorithm population more effectively, based on two selected groups, termed the good group and the bad group. Two new composite members are obtained by averaging each of these groups, and these composites are used to update the population members. The various stages of the GMBO are described and mathematically modeled. The performance of the GMBO in providing a suitable quasi-optimal solution is evaluated on a set of 23 standard objective functions of different types: unimodal, high-dimensional multimodal, and fixed-dimensional multimodal. In addition, the optimization results obtained from the proposed GMBO were compared with those of eight other widely used optimization algorithms: the Marine Predators Algorithm (MPA), the Tunicate Swarm Algorithm (TSA), the Whale Optimization Algorithm (WOA), the Grey Wolf Optimizer (GWO), Teaching–Learning-Based Optimization (TLBO), the Gravitational Search Algorithm (GSA), Particle Swarm Optimization (PSO), and the Genetic Algorithm (GA).
The optimization results indicated the acceptable performance of the proposed GMBO, and, based on the analysis and comparison of the results, the GMBO was found to be superior to and more competitive than the other eight algorithms.
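The group-mean idea described above can be sketched as follows. This is an illustrative reading of the abstract, not the published GMBO: the exact update rule, group sizes, and selection scheme are assumptions, and a simple quadratic objective stands in for the 23 benchmark functions.

```python
import numpy as np

def quad(x):
    """Simple unimodal test objective; global minimum 0 at the origin."""
    return float(np.sum(x**2))

def gmbo_sketch(objective, dim=5, pop_size=50, n_iter=100,
                group_size=10, lo=-5.0, hi=5.0, seed=0):
    """Sketch of the group-mean idea: average a 'good' group and a 'bad'
    group, then use the two composite members to update the population.
    The specific update rule below is an assumption, not the paper's."""
    rng = np.random.default_rng(seed)
    pop = rng.uniform(lo, hi, (pop_size, dim))
    f = np.array([objective(p) for p in pop])
    for _ in range(n_iter):
        order = np.argsort(f)
        good_mean = pop[order[:group_size]].mean(axis=0)   # composite member 1
        bad_mean = pop[order[-group_size:]].mean(axis=0)   # composite member 2
        r1, r2 = rng.random((2, pop_size, dim))
        # move toward the good composite and away from the bad composite
        cand = pop + r1 * (good_mean - pop) - r2 * (bad_mean - pop)
        cand = np.clip(cand, lo, hi)
        f_cand = np.array([objective(c) for c in cand])
        better = f_cand < f                 # greedy replacement
        pop[better], f[better] = cand[better], f_cand[better]
    best = np.argmin(f)
    return pop[best], f[best]
```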


2013, Vol 50, pp. 4-15
Author(s): D. Arnold, V. Demyanov, D. Tatum, M. Christie, T. Rojas, et al.

Energies, 2021, Vol 14 (6), pp. 1557
Author(s): Amine Tadjer, Reidar B. Bratvold

Carbon capture and storage (CCS) has increasingly looked like a promising strategy to reduce CO2 emissions and meet the Paris agreement's climate target. To ensure that CCS is safe and successful, an efficient monitoring program must be implemented to prevent storage-reservoir leakage and contamination of drinking water in groundwater aquifers. However, the geological properties of geologic CO2 sequestration (GCS) sites are not known with certainty, which makes it difficult to predict the behavior of the injected gases, CO2-brine leakage rates through wellbores, and CO2 plume migration. Significant effort is required to observe how CO2 behaves in reservoirs. A key question is: will the CO2 injection and storage behave as expected, and can we anticipate leakages? History matching of reservoir models can reduce uncertainty towards a predictive strategy, but it can prove challenging to develop a set of history-matched models that preserve geological realism. A new Bayesian evidential learning (BEL) protocol for uncertainty quantification has been proposed in the literature as an alternative to the model-space inversion of the history-matching approach. Accordingly, an ensemble of prior geological models was generated by Monte Carlo simulation from the prior distribution, followed by direct forecasting (DF) for joint uncertainty quantification. The goal of this work is to use prior models to identify a statistical relationship between the data variables and the prediction variables over the ensemble, without any explicit model inversion. The paper also introduces a new DF implementation using an ensemble smoother and shows that the new implementation can make the computation more robust than the standard method. The Utsira saline aquifer, west of Norway, is used to exemplify BEL's ability to predict the CO2 mass and leakages and to improve decision support for CO2 storage projects.
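The direct-forecasting idea (learning a statistical relation between data variables and the prediction variable over a prior Monte Carlo ensemble, with no model inversion) can be sketched with a toy linear forward model. Everything here is an illustrative assumption: the forward model, ensemble size, and the plain least-squares regression standing in for the paper's DF and ensemble-smoother machinery.

```python
import numpy as np

rng = np.random.default_rng(0)

# Prior Monte Carlo ensemble: each "model" is a hidden property vector m.
n_ens, n_m = 500, 3
M = rng.normal(size=(n_ens, n_m))

def simulate(m):
    """Stand-in forward model returning (data variables d, prediction h).
    In a real GCS study this would be a multiphase-flow simulation."""
    d = np.array([m[0] + 0.5 * m[1],
                  m[1] - 0.3 * m[2]]) + 0.05 * rng.normal(size=2)
    h = m[0] - m[2] + 0.05 * rng.normal()   # forecast, e.g. leaked CO2 mass
    return d, h

D = np.empty((n_ens, 2))
H = np.empty(n_ens)
for i in range(n_ens):
    D[i], H[i] = simulate(M[i])

# Direct forecasting: regress the prediction on the data variables over the
# prior ensemble -- a statistical d -> h relation, with no model inversion.
A = np.column_stack([D, np.ones(n_ens)])
coef, *_ = np.linalg.lstsq(A, H, rcond=None)

d_obs = np.array([0.8, -0.2])              # hypothetical field observation
h_forecast = coef[:2] @ d_obs + coef[2]    # forecast directly from the data
```

Once the d-to-h relation is learned from the prior ensemble, a field observation is mapped straight to a forecast, which is the key contrast with inversion-based history matching.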

