A learning-from-data approach with soft clustering and path relinking to the history-matching problem

2021 ◽  
Vol 11 (7) ◽  
pp. 3045-3077
Author(s):  
Cristina C. B. Cavalcante ◽  
Cid C. de Souza ◽  
Célio Maschio ◽  
Denis Schiozer ◽  
Anderson Rocha

Abstract
History matching is an important reservoir engineering process whereby the values of uncertain attributes of a reservoir model are changed to find models that have a better chance of reproducing the performance of an actual reservoir. As a typical inverse and ill-posed problem, different combinations of reservoir uncertain attributes lead to equally well-matched models, and the success of a history-matching approach is usually measured in terms of its ability to efficiently find multiple history-matched models inside the search space defined by the parameterization of the problem (multiple matched models have a higher chance of better representing the reservoir performance forecast). While studies on history-matching approaches have produced remarkable progress over the last two decades, given the uniqueness of each reservoir's history-matching problem, no strategy has proven effective for all cases, and finding alternative, efficient, and effective history-matching methodologies is still a research challenge. In this work, we introduce a learning-from-data approach with path relinking and soft clustering to the history-matching problem. The proposed algorithm is designed to learn the patterns of input attributes that are associated with good matching quality from the set of available solutions, and it has two stages that handle different types of reservoir uncertain attributes. In each stage, the algorithm continuously evaluates the data of all available solutions and, based on the acquired information, dynamically decides what needs to be changed, where the changes shall take place, and how such changes will occur in order to generate new (and hopefully better) solutions. We validate our approach using the UNISIM-I-H benchmark, a complex synthetic case constructed with real data from the Namorado Field, Campos Basin, Brazil.
Experimental results indicate the potential of the proposed approach in finding models with significantly better history-matching quality. Considering a global misfit quality metric, the final best solutions found by our approach are up to 77% better than the corresponding initial best solutions in the datasets used in the experiments. Moreover, compared with previous work for the same benchmark, the proposed learning-from-data approach is competitive regarding the quality of solutions found and, above all, it offers a significant reduction (up to 30 × less) in the number of simulations.
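The abstract does not give pseudocode, so as a rough illustration of the path-relinking ingredient, here is a minimal sketch: walk from an initiating solution toward a guiding solution one attribute at a time, keeping the best intermediate found. The function name, the index-order walk, and the generic misfit callable are assumptions for this sketch, not the authors' algorithm (practical variants pick the best move at each step or randomize the order).

```python
def path_relinking(initiating, guiding, misfit):
    """Walk from `initiating` toward `guiding`, adopting one differing
    attribute per step, and return the best intermediate solution found
    (lower misfit is better)."""
    current = list(initiating)
    best, best_val = list(current), misfit(current)
    for i in range(len(current)):
        if current[i] == guiding[i]:
            continue                      # attribute already agrees
        current[i] = guiding[i]           # adopt the guiding value
        val = misfit(current)
        if val < best_val:
            best, best_val = list(current), val
    return best, best_val
```

In a history-matching setting the misfit callable would wrap a reservoir simulation run; here it is left abstract.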

Geophysics ◽  
2012 ◽  
Vol 77 (1) ◽  
pp. M1-M16 ◽  
Author(s):  
Juan Luis Fernández Martínez ◽  
Tapan Mukerji ◽  
Esperanza García Gonzalo ◽  
Amit Suman

History matching provides reservoir engineers with an improved spatial distribution of physical properties for forecasting the reservoir response in field management. The ill-posed character of the history-matching problem yields nonuniqueness and numerical instabilities that increase with reservoir complexity. These features may cause local optimization methods to produce unpredictable results, unable to discriminate among the multiple models that fit the observed data (production history). The high dimensionality of the inverse problem also impedes estimation of uncertainties using classical Markov chain Monte Carlo methods. We attenuated the ill-conditioned character of this history-matching inverse problem by reducing the model complexity using a spatial principal component basis and by combining flow production measurements and time-lapse seismic crosswell tomographic images as observables. Additionally, the inverse problem was solved in a stochastic framework. For this purpose, we used a family of particle swarm optimization (PSO) optimizers that have been deduced from a physical analogy of the swarm system. For a synthetic sand-and-shale reservoir, we analyzed the performance of the different PSO optimizers, in terms of both exploration and convergence rate, for two reservoir models of different complexity and under different levels of white Gaussian noise added to the synthetic observed data. We demonstrated that the PSO optimizers have a very good convergence rate for this example and, in addition, provide approximate measures of uncertainty around the optimum facies model. The PSO algorithms are robust in the presence of noise, which is always the case for real data.
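For readers unfamiliar with the PSO family mentioned above, a plain global-best variant can be sketched in a few lines. The parameter values, bounds, and sphere-style usage below are illustrative defaults, not the settings used in the paper:

```python
import random

def pso_minimize(f, dim, n_particles=20, iters=100,
                 w=0.7, c1=1.5, c2=1.5, bounds=(-5.0, 5.0), seed=0):
    """Global-best particle swarm optimization sketch: each particle's
    velocity mixes inertia, attraction to its personal best, and
    attraction to the swarm's global best."""
    rng = random.Random(seed)
    lo, hi = bounds
    x = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    v = [[0.0] * dim for _ in range(n_particles)]
    pbest = [xi[:] for xi in x]
    pbest_val = [f(xi) for xi in x]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                v[i][d] = (w * v[i][d]
                           + c1 * r1 * (pbest[i][d] - x[i][d])
                           + c2 * r2 * (gbest[d] - x[i][d]))
                x[i][d] = min(hi, max(lo, x[i][d] + v[i][d]))  # clamp to box
            val = f(x[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = x[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = x[i][:], val
    return gbest, gbest_val
```

The paper's optimizers are variants of this scheme with parameters deduced from a physical analysis of the swarm system; in a history-matching application, f would wrap the model-to-data misfit.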


2020 ◽  
Vol 94 ◽  
pp. 103767
Author(s):  
Cristina C.B. Cavalcante ◽  
Célio Maschio ◽  
Denis Schiozer ◽  
Anderson Rocha

Author(s):  
Prachi Agrawal ◽  
Talari Ganesh ◽  
Ali Wagdy Mohamed

Abstract
This article proposes a novel binary version of the recently developed gaining-sharing knowledge-based optimization algorithm (GSK) to solve binary optimization problems. The GSK algorithm is based on the concept of how humans acquire and share knowledge during their life span. The binary version of GSK, named the novel binary gaining-sharing knowledge-based optimization algorithm (NBGSK), depends mainly on two binary stages: a binary junior gaining-sharing stage and a binary senior gaining-sharing stage with knowledge factor 1. These two stages enable NBGSK to explore and exploit the search space efficiently and effectively to solve problems in binary space. Moreover, to enhance the performance of NBGSK and prevent solutions from becoming trapped in local optima, NBGSK with population size reduction (PR-NBGSK) is introduced. It decreases the population size gradually with a linear function. The proposed NBGSK and PR-NBGSK are applied to a set of knapsack instances with small and large dimensions, which shows that NBGSK and PR-NBGSK are more efficient and effective in terms of convergence, robustness, and accuracy.
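The linear population-size reduction used by PR-NBGSK can be sketched directly from the description above. The schedule below (initial and minimum sizes, and keeping the fittest individuals when the population shrinks) uses illustrative values and an LSHADE-style formula as assumptions; the abstract only states that the size decreases linearly:

```python
def linear_pop_size(t, max_iters, n_init=100, n_min=12):
    """Linearly shrinking population size: n_init at t = 0 down to
    n_min at t = max_iters (rounded to the nearest integer)."""
    return round(n_init + (n_min - n_init) * t / max_iters)

def shrink_population(pop, fitness, new_size):
    """Keep the `new_size` fittest individuals (lower fitness is better)
    when the schedule calls for a smaller population."""
    order = sorted(range(len(pop)), key=lambda i: fitness[i])[:new_size]
    return [pop[i] for i in order], [fitness[i] for i in order]
```

At each generation the optimizer would compute the scheduled size and, if it has dropped, discard the worst individuals before the next gaining-sharing stages.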


Author(s):  
Geir Evensen

Abstract
It is common to formulate the history-matching problem using Bayes' theorem. From Bayes' theorem, the conditional probability density function (pdf) of the uncertain model parameters is proportional to the prior pdf of the model parameters, multiplied by the likelihood of the measurements. The static model parameters are random variables characterizing the reservoir model, while the observations include, e.g., historical rates of oil, gas, and water produced from the wells. The reservoir prediction model is assumed perfect, and there are no errors besides those in the static parameters. However, this formulation is flawed. The historical rate data only approximately represent the real production of the reservoir and contain errors. History-matching methods usually take these errors into account in the conditioning but neglect them when forcing the simulation model by the observed rates during the historical integration. Thus, the model prediction depends on some of the same data used in the conditioning. The paper presents a formulation of Bayes' theorem that considers the data dependency of the simulation model. In the new formulation, one must update both the poorly known model parameters and the rate-data errors. The result is an improved posterior ensemble of prediction models that better cover the observations with more substantial and realistic uncertainty. The implementation accounts correctly for correlated measurement errors and demonstrates the critical role of these correlations in reducing the update's magnitude. The paper also shows the consistency of the subspace inversion scheme by Evensen (Ocean Dyn. 54, 539–560 2004) in the case with correlated measurement errors and demonstrates its accuracy when using a "larger" ensemble of perturbations to represent the measurement error covariance matrix.
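The standard formulation described in the opening sentences can be written compactly. With x the static model parameters and d the observed rate data (notation chosen here, not the paper's):

```latex
% Classical history-matching posterior: prior times likelihood
f(x \mid d) \;\propto\; f(x)\, f(d \mid x)
```

In the revised formulation one updates the rate-data errors, say e, jointly with x, schematically f(x, e | d) ∝ f(x) f(e) f(d | x, e); the factored prior is a simplifying assumption for this sketch, and the paper's actual derivation handles correlated measurement errors.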


2004 ◽  
Author(s):  
Guohua Gao ◽  
Mohammad Zafari ◽  
A.C. Reynolds

2022 ◽  
Vol 13 (1) ◽  
pp. 0-0

This article proposes a novel binary version of the recently developed gaining-sharing knowledge-based optimization algorithm (GSK) to solve binary optimization problems. The GSK algorithm is based on the concept of how humans acquire and share knowledge during their life span. The discrete binary version of GSK, named the novel binary gaining-sharing knowledge-based optimization algorithm (DBGSK), depends mainly on two binary stages: a binary junior gaining-sharing stage and a binary senior gaining-sharing stage with knowledge factor 1. These two stages enable DBGSK to explore and exploit the search space efficiently and effectively to solve problems in binary space. An improved scheduling of the technical counselling process for utilization of the electricity from solar energy power stations is introduced. The scheduling aims at the best utilization of the available daytime for the counselling group. In this regard, a new application problem, called the Travelling Counselling Problem (TCP), is presented, and a nonlinear binary model is introduced with a real application.


Author(s):  
M. Rajendra ◽  
K. Shankar

A novel two-stage Improved Radial Basis Function (IRBF) neural network for the damage identification of a multimember structure in the frequency domain is presented. The improvement of the proposed IRBF network is carried out in two stages: a conventional RBF network is used in the first stage for preliminary damage prediction, and in the second stage a reduced-search-space moving technique is used to minimize the prediction error. The network is trained with fractional frequency change ratios (FFCs) and damage signature indices (DSIs) as effective input patterns and the corresponding damage severity values as output patterns. The patterns are searched at different damage levels by the Latin hypercube sampling (LHS) technique. The performance of the novel IRBF method is compared with the conventional RBF and genetic algorithm (GA) methods, and it is found to be a good multiple-member damage identification strategy in terms of accuracy and precision, with less computational effort.
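The conventional RBF building block used in the first stage can be illustrated by its forward pass alone. The Gaussian basis, the shared width parameter, and the function name below are assumptions for this sketch; training the weights (typically by least squares on the FFC/DSI patterns) is omitted:

```python
import math

def rbf_predict(x, centers, weights, sigma=1.0):
    """Forward pass of a Gaussian radial basis function network:
    y = sum_j w_j * exp(-||x - c_j||^2 / (2 * sigma^2))."""
    y = 0.0
    for c, w in zip(centers, weights):
        d2 = sum((xi - ci) ** 2 for xi, ci in zip(x, c))  # squared distance
        y += w * math.exp(-d2 / (2.0 * sigma ** 2))
    return y
```

In the damage-identification setting, x would be a pattern of FFCs and DSIs and the output a vector of damage severities (one such scalar unit per output).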


2006 ◽  
Author(s):  
Sergio Henrique Guerra Sousa ◽  
Celio Maschio ◽  
Denis Jose Schiozer

Geophysics ◽  
1991 ◽  
Vol 56 (8) ◽  
pp. 1244-1251 ◽  
Author(s):  
A. L. R. Rosa ◽  
T. J. Ulrych

The widespread occurrence of subtle trap accumulations offshore Brazil has led to the development of a high-resolution processing scheme that helps delineate these features. The process consists of three stages, the first of which is deterministic and stochastic deconvolution. The second stage is the deconvolution of the residual wavelet by means of spectral modeling. The last stage consists of the correction of the color of the reflectivity function using a model developed for the area. An important conclusion drawn from the model is that the acoustic impedance is not white; rather, it is as red as the corresponding reflectivity is blue. Successful results from the application of the proposed technique to real data indicate that the color compensation is of second-order importance compared with the first two stages of the proposed scheme.
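The core operation behind wavelet deconvolution stages like these is a stabilized spectral division of the data spectrum by the wavelet spectrum. The water-level stabilization below is a common textbook choice, not necessarily the authors' exact scheme, and the spectra are passed in as precomputed lists of complex samples:

```python
def spectral_division(data_spec, wavelet_spec, water_level=1e-3):
    """Frequency-domain deconvolution D(f)/W(f), with a water level
    guarding against division by very small |W(f)|."""
    eps = water_level * max(abs(w) for w in wavelet_spec)
    out = []
    for d, w in zip(data_spec, wavelet_spec):
        denom = max(abs(w), eps)          # clamp small wavelet amplitudes
        out.append(d * w.conjugate() / (denom * denom))
    return out
```

Where the wavelet amplitude vanishes, the clamped denominator keeps the output bounded instead of blowing up the noise at those frequencies.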

