A Comparison Study on Algorithms in Reservoir Automatic History Matching

2012 ◽  
Vol 518-523 ◽  
pp. 4376-4379
Author(s):  
Bao Yi Jiang ◽  
Zhi Ping Li

With the increase in computational capability, numerical reservoir simulation has become an essential tool for reservoir engineering. To minimize the objective function involved in the history matching procedure, optimization algorithms must be applied. This paper focuses on the optimization algorithms used in automatic history matching.
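For reference, the objective function minimized in history matching is typically a weighted least-squares misfit between observed and simulated production data; a generic form (the specific weighting used in the paper is not stated in the abstract) is

$$ O(\mathbf{m}) = \sum_{k=1}^{N_d} w_k \left[ d_k^{\mathrm{obs}} - d_k^{\mathrm{calc}}(\mathbf{m}) \right]^2 , $$

where m is the vector of reservoir parameters being adjusted, d_obs the observed data, d_calc(m) the corresponding simulated values, and w_k the data weights.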

2013 ◽  
Vol 748 ◽  
pp. 614-618
Author(s):  
Bao Yi Jiang ◽  
Zhi Ping Li ◽  
Cheng Wen Zhang ◽  
Xi Gang Wang

Numerical reservoir models are constructed from limited available static and dynamic data, and history matching is the process of adjusting model parameters to find a set of values for which the reservoir simulation prediction matches the observed historical production data. To minimize the objective function involved in the history matching procedure, optimization algorithms must be applied. This paper focuses on the optimization algorithms used in automatic history matching, and several of these algorithms are compared.
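To make the workflow concrete, the following is a minimal sketch of automatic history matching posed as objective-function minimization. The simulator stand-in, data, and choice of a derivative-free optimizer are illustrative assumptions, not the models or algorithms compared in the paper.

```python
# Minimal sketch: history matching as minimization of a least-squares misfit.
# The "simulate" function and observed data are hypothetical placeholders.
import numpy as np
from scipy.optimize import minimize

d_obs = np.array([1520.0, 1480.0, 1445.0, 1410.0])   # observed pressures (illustrative)

def simulate(params):
    """Stand-in for a reservoir simulator run: maps parameters
    (e.g., permeability and porosity multipliers) to predicted data."""
    perm_mult, poro_mult = params
    decline = 40.0 * perm_mult / poro_mult
    return 1520.0 - decline * np.arange(len(d_obs))

def objective(params):
    """Least-squares misfit between observed and simulated data."""
    residual = d_obs - simulate(params)
    return float(residual @ residual)

result = minimize(objective, x0=[1.0, 1.0], method="Nelder-Mead")
print(result.x, result.fun)
```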


1972 ◽  
Vol 12 (06) ◽  
pp. 508-514 ◽  
Author(s):  
L. Kent Thomas ◽  
L.J. Hellums ◽  
G.M. Reheis

Abstract This paper presents a nonlinear optimization technique that automatically varies reservoir parameters to obtain a history match of field performance. The method is based on the classical Gauss-Newton least-squares procedure. The range of each parameter is restricted by a box-type constraint and special provisions are included to handle highly nonlinear cases. Any combination of reservoir parameters may be used as the optimization variables and any set or sets of field data may be included in the match. Several history matches are presented, including examples from previous papers for comparison. In each of these examples, the technique presented here resulted in equivalent history matches in as few or fewer simulation runs. Introduction The history matching phase of reservoir simulation usually requires a trial-and-error procedure of adjusting various reservoir parameters and then calculating field performance. This procedure is continued until an acceptable match between field and calculated performance has been obtained and can become quite tedious and time consuming, even with a small number of reservoir parameters, because of the interaction between the parameters and calculated performance. Recently various automatic or semiautomatic history-matching techniques have been introduced. Jacquard and Jain presented a technique based on a version of the method of steepest descent. They did not consider their method to be fully operational, however, due to the lack of experience with convergence. Jahns presented a method based on the Gauss-Newton equation with a stepwise solution for speeding convergence, but his procedure still required a large number of reservoir simulations to proceed to a solution. Coats et al. presented a workable automatic history-matching procedure based on least-squares and linear programming. The method presented by Slater and Durrer is based on a gradient method and linear programming. In their paper they mention the difficulty of choosing a step size for their gradient method, especially for problems involving low values of porosity and permeability. They also point out the need for a fairly small range on their reservoir description parameters for highly nonlinear problems. Thus, work in this area to date has resulted either in techniques based on a linear parameter-error dependence or in nonlinear techniques which require a considerable number of simulation runs. The method presented here is a nonlinear algorithm that will match both linear and nonlinear systems in a reasonable number of simulations. HISTORY MATCHING In a reservoir simulation, various performance data for the field, such as well pressures, gas-oil ratios, and water-oil ratios, are used as the basis for the match. During the matching of these performance data, certain reservoir and fluid parameters are assumed to be known while other, less reliable data, forming the set (x1, x2, ..., xn), are varied to achieve a match. The objective of the history-matching procedure presented in this paper is to minimize, in a least-squares sense, the error between the set of observed and calculated performance data, Fk(x1, x2, ..., xn). SPEJ P. 508
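For illustration, a minimal sketch of a Gauss-Newton least-squares update with a box-type constraint on each parameter is given below. The residual function, synthetic data, and finite-difference Jacobian are illustrative assumptions, not the authors' implementation.

```python
# Gauss-Newton least-squares with box constraints (illustrative sketch).
import numpy as np

def residuals(x, d_obs):
    """Hypothetical forward model: predicted data minus observed data."""
    t = np.arange(len(d_obs))
    return x[0] * np.exp(-x[1] * t) - d_obs

def jacobian(x, d_obs, eps=1e-6):
    """Finite-difference Jacobian of the residuals (one model run per parameter)."""
    r0 = residuals(x, d_obs)
    J = np.zeros((len(r0), len(x)))
    for i in range(len(x)):
        xp = x.copy()
        xp[i] += eps
        J[:, i] = (residuals(xp, d_obs) - r0) / eps
    return J

def gauss_newton(x0, d_obs, lower, upper, n_iter=20):
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        r = residuals(x, d_obs)
        J = jacobian(x, d_obs)
        step = np.linalg.solve(J.T @ J, -J.T @ r)   # Gauss-Newton step
        x = np.clip(x + step, lower, upper)          # box-type constraint
    return x

d_obs = 3.0 * np.exp(-0.5 * np.arange(6)) + 0.01     # synthetic "field" data
print(gauss_newton([1.0, 1.0], d_obs, lower=[0.1, 0.05], upper=[10.0, 2.0]))
```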


1980 ◽  
Vol 20 (06) ◽  
pp. 521-532 ◽  
Author(s):  
A.T. Watson ◽  
J.H. Seinfeld ◽  
G.R. Gavalas ◽  
P.T. Woo

Abstract An automatic history-matching algorithm based on an optimal control approach has been formulated for joint estimation of spatially varying permeability and porosity and coefficients of relative permeability functions in two-phase reservoirs. The algorithm uses pressure and production rate data simultaneously. The performance of the algorithm for the waterflooding of one- and two-dimensional hypothetical reservoirs is examined, and properties associated with the parameter estimation problem are discussed. Introduction There has been considerable interest in the development of automatic history-matching algorithms. Most of the published work to date on automatic history matching has been devoted to single-phase reservoirs in which the unknown parameters to be estimated are often the reservoir porosity (or storage) and absolute permeability (or transmissibility). In the single-phase problem, the objective function usually consists of the deviations between the predicted and measured reservoir pressures at the wells. Parameter estimation, or history matching, in multiphase reservoirs is fundamentally more difficult than in single-phase reservoirs. The multiphase equations are nonlinear, and in addition to the porosity and absolute permeability, the relative permeabilities of each phase may be unknown and subject to estimation. Measurements of the relative rates of flow of oil, water, and gas at the wells also may be available for the objective function. The aspect of the reservoir history-matching problem that distinguishes it from other parameter estimation problems in science and engineering is the large dimensionality of both the system state and the unknown parameters. As a result of this large dimensionality, computational efficiency becomes a prime consideration in the implementation of an automatic history-matching method. In all parameter estimation methods, a trade-off exists between the amount of computation performed per iteration and the speed of convergence of the method. An important saving in computing time was realized in single-phase automatic history matching through the introduction of optimal control theory as a method for calculating the gradient of the objective function with respect to the unknown parameters. This technique currently is limited to first-order gradient methods. First-order gradient methods generally converge more slowly than those of higher order. Nevertheless, the amount of computation required per iteration is significantly less than that required for higher-order optimization methods; thus, first-order methods are attractive for automatic history matching. The optimal control algorithm for automatic history matching has been shown to produce excellent results when applied to field problems. Therefore, the first approach to the development of a general automatic history-matching algorithm for multiphase reservoirs would seem to proceed through the development of an optimal control approach for calculating the gradient of the objective function with respect to the parameters for use in a first-order method. SPEJ P. 521
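The structure described above (one forward simulation, one backward adjoint sweep to obtain the gradient, then a first-order update) can be illustrated on a deliberately tiny single-parameter decline model. The model p[t+1] = (1 - alpha*m)*p[t], the synthetic data, and the step size below are assumptions made purely for the sketch; the paper's optimal control formulation is for full two-phase reservoir equations.

```python
# Adjoint-gradient steepest descent on a toy decline model (illustrative sketch).
import numpy as np

ALPHA, P0, T = 0.1, 1000.0, 20
d_obs = 1000.0 * (1 - ALPHA * 0.7) ** np.arange(1, T + 1)   # synthetic data, true m = 0.7

def forward(m):
    p = np.empty(T + 1)
    p[0] = P0
    for t in range(T):
        p[t + 1] = (1 - ALPHA * m) * p[t]
    return p

def adjoint_gradient(m):
    """One forward sweep plus one backward (adjoint) sweep gives dJ/dm."""
    p = forward(m)
    misfit = p[1:] - d_obs
    lam = np.zeros(T + 1)
    lam[T] = -2.0 * misfit[T - 1]
    for t in range(T - 1, 0, -1):                            # backward adjoint sweep
        lam[t] = (1 - ALPHA * m) * lam[t + 1] - 2.0 * misfit[t - 1]
    grad = sum(lam[t + 1] * ALPHA * p[t] for t in range(T))
    return float(np.sum(misfit ** 2)), grad

m, lr = 0.2, 1e-8                                            # first-order gradient update
for _ in range(200):
    J, g = adjoint_gradient(m)
    m -= lr * g
print(m, J)
```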


2005 ◽  
Vol 8 (03) ◽  
pp. 214-223 ◽  
Author(s):  
Fengjun Zhang ◽  
Jan-Arild Skjervheim ◽  
Albert C. Reynolds ◽  
Dean S. Oliver

Summary The Bayesian framework allows one to integrate production and static data into an a posteriori probability density function (pdf) for reservoir variables (model parameters). The problem of generating realizations of the reservoir variables for the assessment of uncertainty in reservoir description or predicted reservoir performance then becomes a problem of sampling this a posteriori pdf to obtain a suite of realizations. Generation of a realization by the randomized-maximum-likelihood method requires the minimization of an objective function that includes production-data misfit terms and a model misfit term that arises from a prior model constructed from static data. Minimization of this objective function with an optimization algorithm is equivalent to the automatic history matching of production data, with a prior model constructed from static data providing regularization. Because of the computational cost of computing sensitivity coefficients and the need to solve matrix problems involving the covariance matrix for the prior model, this approach has not been applied to problems in which the number of data and the number of reservoir-model parameters are both large and the forward problem is solved by a conventional finite-difference simulator. In this work, we illustrate that computational efficiency problems can be overcome by using a scaled limited-memory Broyden-Fletcher-Goldfarb-Shanno (LBFGS) algorithm to minimize the objective function and by using approximate computational stencils to approximate the multiplication of a vector by the prior covariance matrix or its inverse. Implementation of the LBFGS method requires only the gradient of the objective function, which can be obtained from a single solution of the adjoint problem; individual sensitivity coefficients are not needed. We apply the overall process to two examples. The first is a true field example in which a realization of log permeabilities at 26,019 gridblocks is generated by the automatic history matching of pressure data, and the second is a pseudo field example that provides a very rough approximation to a North Sea reservoir in which a realization of log permeabilities at 9,750 gridblocks is computed by the automatic history matching of gas/oil ratio (GOR) and pressure data. Introduction The Bayes theorem provides a general framework for updating a pdf as new data or information on the model becomes available. The Bayesian setting offers a distinct advantage. If one can generate a suite of realizations that represent a correct sampling of the a posteriori pdf, then the suite of samples provides an assessment of the uncertainty in reservoir variables. Moreover, by predicting future reservoir performance under proposed operating conditions for each realization, one can characterize the uncertainty in future performance predictions by constructing statistics for the set of outcomes. Liu and Oliver have recently presented a comparison of methods for sampling the a posteriori pdf. Their results indicate that the randomized-maximum-likelihood method is adequate for evaluating uncertainty with a relatively limited number of samples. In this work, we consider the case in which a prior geostatistical model constructed from static data is available and is represented by a multivariate Gaussian pdf.
Then, the a posteriori pdf conditional to production data is such that calculation of the maximum a posteriori estimate or generation of a realization by the randomized-maximum-likelihood method is equivalent to the minimization of an appropriate objective function. History-matching problems of interest to us involve a few thousand to tens of thousands of reservoir variables and a few hundred to a few thousand production data. Thus, an optimization algorithm suitable for large-scale problems is needed. Our belief is that nongradient-based algorithms such as simulated annealing and the genetic algorithm are not competitive with gradient-based algorithms in terms of computational efficiency. Classical gradient-based algorithms such as the Gauss-Newton and Levenberg-Marquardt methods typically converge fairly quickly and have been applied successfully to automatic history matching for both single-phase- and multiphase-flow problems. No multiphase-flow example considered in these papers involved more than 1,500 reservoir variables. For single-phase-flow problems, He et al. and Reynolds et al. have generated realizations of models involving up to 12,500 reservoir variables by automatic history matching of pressure data. However, they used a procedure based on their generalization of the method of Carter et al. to calculate sensitivity coefficients; this method assumes that the partial-differential equation solved by reservoir simulation is linear and does not apply for multiphase-flow problems.
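As an illustration of the regularized objective and the limited-memory BFGS minimization discussed above, the sketch below uses SciPy's L-BFGS-B on a small linear stand-in problem. The forward operator, data, and diagonal prior covariance are assumptions made for the sketch; the paper uses an adjoint-based gradient and stencil approximations of the full prior covariance, which are not reproduced here.

```python
# LBFGS minimization of a prior-regularized history-matching objective (sketch).
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
m_prior = np.zeros(50)                       # prior mean of log-permeability field
c_prior_diag = 0.25 * np.ones(50)            # diagonal stand-in for the prior covariance
G = rng.normal(size=(30, 50))                # linearized "simulator" operator (illustrative)
d_obs = G @ rng.normal(scale=0.5, size=50)   # synthetic production data
c_d_diag = 0.01 * np.ones(30)                # data-error variances

def objective_and_gradient(m):
    """O(m) = 0.5*(g(m)-d)' C_D^-1 (g(m)-d) + 0.5*(m-m_prior)' C_M^-1 (m-m_prior)."""
    r = G @ m - d_obs
    dm = m - m_prior
    obj = 0.5 * np.sum(r**2 / c_d_diag) + 0.5 * np.sum(dm**2 / c_prior_diag)
    grad = G.T @ (r / c_d_diag) + dm / c_prior_diag
    return obj, grad

res = minimize(objective_and_gradient, x0=m_prior, jac=True, method="L-BFGS-B")
print(res.fun, res.nit)
```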


PETRO ◽  
2018 ◽  
Vol 5 (1) ◽  
Author(s):  
Maria Irmina Widyastuti ◽  
Maman Djumantara

Reservoir simulation is an area of reservoir engineering in which computer models are used to predict the flow of fluids through porous media. The reservoir simulation process consists of several steps: data preparation, model and grid construction, initialization, history matching, and prediction. The initialization step matches the original oil in place (OOIP), i.e., the total initial hydrocarbon filling the reservoir control volume, against the volumetric estimate.

To determine the optimum development plan, field development was predicted for 22 years (until December 2035). Five scenarios were considered: the first is the basecase, the second is scenario 1 plus workovers, the third is scenario 1 plus infill wells, the fourth is scenario 1 plus peripheral injection, and the fifth is scenario 1 plus a 5-spot injection pattern. The recovery factors obtained are 31.05% for the first scenario, 31.53% for the second, 34.12% for the third, 33.75% for the fourth, and 37.04% for the fifth.

Keywords: reservoir simulation, reservoir simulator, history matching


2021 ◽  
Author(s):  
Mohamed Shams ◽  
Ahmed El-Banbi ◽  
M. Helmy Sayyouh

Abstract The bee colony optimization technique is a stochastic population-based optimization algorithm inspired by the natural foraging behavior of honey bees searching for food. The bee colony optimization algorithm has been successfully applied to various real-world optimization problems, mostly in routing, transportation, and scheduling. This paper introduces the bee colony optimization method as the optimization technique in the reservoir engineering assisted history matching procedure. The superiority of the proposed optimization algorithm is validated by comparing its performance with two other advanced nature-inspired optimization techniques (genetic and particle swarm optimization algorithms) on three synthetic assisted history matching problems. In addition, this paper presents the application of the bee colony optimization technique in assisting the history match of a full-field reservoir simulation model of a mature gas-cap reservoir with 28 years of history. The resulting history-matched model is compared with those obtained using a manual history matching procedure and using the optimization algorithm most widely applied in commercial assisted history matching software tools. The results of this work indicate that employing the bee colony algorithm as the optimization technique in the assisted history matching workflow yields a noticeable enhancement in match quality and in the time required to achieve a reasonable match.
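For readers unfamiliar with the method, the sketch below shows a minimal artificial bee colony optimizer applied to a toy history-matching objective. The two-parameter decline model, the bounds, and the colony settings are illustrative assumptions, not the authors' setup.

```python
# Artificial bee colony (ABC) optimizer on a toy history-matching misfit (sketch).
import numpy as np

rng = np.random.default_rng(1)
lower, upper = np.array([0.1, 0.01]), np.array([5000.0, 1.0])
t = np.arange(1, 25)
d_obs = 2000.0 * np.exp(-0.15 * t)                 # synthetic production history

def misfit(x):
    """Sum-of-squares mismatch between observed and modeled rates."""
    q0, decline = x
    return float(np.sum((d_obs - q0 * np.exp(-decline * t)) ** 2))

def abc_optimize(n_sources=20, n_iter=200, limit=30):
    foods = rng.uniform(lower, upper, size=(n_sources, 2))
    costs = np.array([misfit(x) for x in foods])
    trials = np.zeros(n_sources, dtype=int)

    def try_neighbor(i):
        k = rng.choice([s for s in range(n_sources) if s != i])
        j = rng.integers(2)
        cand = foods[i].copy()
        cand[j] += rng.uniform(-1, 1) * (foods[i, j] - foods[k, j])
        cand = np.clip(cand, lower, upper)
        c = misfit(cand)
        if c < costs[i]:
            foods[i], costs[i], trials[i] = cand, c, 0
        else:
            trials[i] += 1

    for _ in range(n_iter):
        for i in range(n_sources):                  # employed bee phase
            try_neighbor(i)
        fitness = 1.0 / (1.0 + costs)               # onlooker bee phase
        probs = fitness / fitness.sum()
        for i in rng.choice(n_sources, size=n_sources, p=probs):
            try_neighbor(i)
        for i in np.where(trials > limit)[0]:       # scout bee phase
            foods[i] = rng.uniform(lower, upper)
            costs[i], trials[i] = misfit(foods[i]), 0

    best = np.argmin(costs)
    return foods[best], costs[best]

print(abc_optimize())
```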

