Choosing Objective Function for Conditioning on Production Data

Author(s):  
R. Hauge ◽  
O. J. Arntzen ◽  
H. Soleng
1999 ◽  
Vol 2 (05) ◽  
pp. 470-477 ◽  
Author(s):  
Daniel Rahon ◽  
Paul Francis Edoa ◽  
Mohamed Masmoudi

Summary This paper discusses a method which helps identify the geometry of geological features in an oil reservoir by history matching of production data. Following an initial study on single-phase flow applied to well tests (Rahon, D., Edoa, P. F., and Masmoudi, M.: "Inversion of Geological Shapes in Reservoir Engineering Using Well Tests and History Matching of Production Data," paper SPE 38656 presented at the 1997 SPE Annual Technical Conference and Exhibition, San Antonio, Texas, 5–8 October.), the research presented here was conducted in a multiphase flow context. This method provides information on the limits of a reservoir being explored, the position and size of faults, and the thickness and dimensions of channels. The approach consists of matching numerical flow simulation results with production measurements. This is achieved by modifying the geometry of the geological model. The identification of geometric parameters is based on the solution of an inverse problem and boils down to minimizing an objective function integrating the production data. The minimization algorithm is rendered very efficient by calculating the gradients of the objective function with respect to perturbations of these geometric parameters. This leads to a better characterization of the shape, the dimension, and the position of sedimentary bodies. Several examples are presented in this paper, in particular, an application of the method in a two-phase water/oil case.

Introduction A number of semiautomatic history matching techniques have been developed in recent years to assist the reservoir engineer in the reservoir-characterization task. These techniques are generally based on the solution of an inverse problem via the minimization of an objective function and require the use of a numerical simulator.
The matching parameters of the inverse problem comprise two types of properties: petrophysical (porosity and permeability) and geometric (position, shape, and size of the sedimentary bodies present in the reservoir). To be efficient, minimization algorithms require the calculation of simulated-production gradients with respect to the matching parameters. Such gradients are usually calculated by differentiating the discrete state equations solved in the numerical simulator (Refs. 1 through 5) or by using a so-called adjoint-state method (Refs. 6 and 7). Therefore, most of these gradient-based methods only allow the identification of petrophysical parameters, which appear explicitly in the discrete equations of state. The case of geometric parameters is much more complex, as the gradients of the objective function with respect to these parameters cannot be determined directly from the flow equations. Recent works (Refs. 8 through 10) have handled this problem by defining geological objects using mathematical functions to describe porosity or permeability fields, but generalizing these solutions to complex geological models remains difficult. The method proposed in this paper is well suited to complex geometries and heterogeneous environments. The history-matching parameters are the geometric elements that describe the geological objects generated, for example, with a geomodeling tool. A complete description of the method, with the calculation of the sensitivities, was presented in Ref. 11 within the particular framework of single-phase flow adapted to well-test interpretation. In this paper we introduce an extension of the method to the multiphase equations in order to match production data. Several examples are presented, illustrating the efficiency of this technique in a two-phase context.

Description of the Method The objective is to develop an automatic or semiautomatic history matching method which allows identification of the geometric parameters that describe geological shapes using a numerical simulator.
To be efficient, the optimization process requires the calculation of objective-function gradients with respect to the parameters. With usual fluid-flow simulators based on a regular grid or corner-point geometry, the conventional methods for calculating well-response gradients from the discrete equations are not readily usable when dealing with geometric parameters, because these parameters do not appear explicitly in the model equations. With these kinds of structured models, the solution is to derive the expression of the sensitivities of the objective function for the continuous problem using mathematical theory and then to compute a discrete set of gradients.

Sensitivity Calculation. Here, we present a sensitivity calculation for the displacement of a geological body in a two-phase water/oil flow context.

State Equations. Let Ω be a two- or three-dimensional spatial domain with boundary Γ, and let ]0,T[ be the time interval covering the pressure history. We assume that capillary pressure is negligible. The pressure p and the water saturation S corresponding to a two-phase flow in the domain Ω are governed by the following equations:

∂(φ(p)(1−S))/∂t − ∇·( (k k_ro(S)/μ_o) ∇(p + ρ_o g z) ) = q_o/ρ_o,
∂(φ(p)S)/∂t − ∇·( (k k_rw(S)/μ_w) ∇(p + ρ_w g z) ) = q_w/ρ_w,
(x, y, z) ∈ Ω, t ∈ ]0,T[, .......... (1)

with a no-flux boundary condition on Γ and an initial equilibrium condition.
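The structure of such an objective function, and a gradient with respect to a geometric parameter, can be sketched numerically. The toy example below is purely illustrative: the exponential forward model, the parameter theta (standing in for, say, a distance to a sealing fault), and the weights are hypothetical, and the paper computes sensitivities analytically rather than by finite differences.

```python
import numpy as np

def objective(simulated, observed, weights=None):
    """Weighted sum of squared residuals between simulated and observed data."""
    simulated = np.asarray(simulated, dtype=float)
    observed = np.asarray(observed, dtype=float)
    if weights is None:
        weights = np.ones_like(observed)
    residual = simulated - observed
    return 0.5 * np.sum(weights * residual**2)

def forward_model(theta, t):
    # Toy stand-in for a flow simulator: a pressure-decline response that
    # depends on a single geometric parameter theta.
    return 100.0 * np.exp(-t / theta)

def gradient_fd(theta, t, observed, h=1e-6):
    # Forward-difference gradient of the objective with respect to theta:
    # perturb the parameter, rerun the forward model, and difference.
    f0 = objective(forward_model(theta, t), observed)
    f1 = objective(forward_model(theta + h, t), observed)
    return (f1 - f0) / h
```

At the matching parameter value the gradient vanishes; away from it, the gradient points the minimization algorithm toward a better geometry.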


SPE Journal ◽  
2007 ◽  
Vol 12 (04) ◽  
pp. 408-419 ◽  
Author(s):  
Baoyan Li ◽  
Francois Friedmann

Summary History matching is an inverse problem in which an engineer calibrates key geological/fluid flow parameters by fitting a simulator's output to the real reservoir production history. It has no unique solution because of insufficient constraints. History-match solutions are obtained by searching for minima of an objective function below a preselected threshold value. Experimental design and response surface methodologies provide an efficient approach to build proxies of objective functions (OF) for history matching. The search for minima can then be easily performed on the proxies of the OF as long as their accuracy is acceptable. In this paper, we first introduce a novel experimental design methodology for semi-automatically selecting the sampling points, which are used to improve the accuracy of constructed proxies of the nonlinear OF. This method is based on derivatives of the constructed proxies. We propose an iterative procedure for history matching, applying this new design methodology. To obtain the global optima, the proxies of an objective function are initially constructed on the global parameter space. They are iteratively improved until adequate accuracy is achieved. We locate subspaces in the vicinity of the optima regions using a clustering technique to improve the accuracy of the reconstructed OF in these subspaces. We test this novel methodology and history-matching procedure with two waterflooded reservoir models. One model is the Imperial College fault model (Tavassoli et al. 2004). It contains a large bank of simulation runs. The other is a modified version of the SPE9 (Killough 1995) benchmark problem. We demonstrate the efficiency of this newly developed history-matching technique.

Introduction History matching (Eide et al. 1994; Landa and Güyagüler 2003) is an inverse problem in which an engineer calibrates key geological/fluid flow parameters of reservoirs by fitting a reservoir simulator's output to the real reservoir production history.
It has no unique solution because of insufficient constraints. Traditional history matching is performed with a semi-empirical approach, based on the engineer's understanding of the field production behavior. Usually, the model parameters are adjusted using a one-factor-at-a-time approach. History matching can be very time consuming, because many simulation runs may be required to obtain a good fit. Attempts have been made to automate the history-matching process by using optimal control theory (Chen et al. 1974) and gradient techniques (Gomez et al. 2001). Also, design of experiments (DOE) and response surface methodologies (RSM) (Eide et al. 1994; Box and Wilson 1987; Montgomery 2001; Box and Hunter 1957; Box and Wilson 1951; Damsleth et al. 1992; Egeland et al. 1992; Friedmann et al. 2003) were introduced in the late 1990s to guide automatic history matching. The goal of these automatic methods is to achieve history-matching techniques considerably faster than the traditional method. History matching is an optimization problem. The objective is to find the best of all possible sets of geological/fluid flow parameters to fit the production data of reservoirs. To assess the quality of the match, we define an OF (Atallah 1999). For history-matching problems, an objective function is usually defined as a distance (Landa and Güyagüler 2003) between a simulator's output and the reservoir production data. History-matching solutions are obtained by searching for minima of the objective function. Experimental design and response surface methodologies provide an efficient approach to build up hypersurfaces (Kecman 2001) of objective functions (i.e., proxies of objective functions constructed with a limited number of simulation runs for history matching). The search for minima can then be easily performed on these proxies as long as their accuracy is acceptable. The efficiency of this technique depends on constructing adequately accurate proxies of the objective functions.
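The proxy idea can be illustrated with a toy one-dimensional sketch (hypothetical; the paper's procedure uses multidimensional designs, derivative-based sampling, and clustering, none of which are reproduced here): sample an expensive objective function at a few design points, fit a quadratic response surface, and minimize the cheap proxy analytically.

```python
import numpy as np

def true_objective(x):
    # Toy stand-in for an expensive simulation-based misfit.
    return (x - 3.0) ** 2 + 1.0

# "Design points": parameter values at which the simulator would be run.
design = np.array([0.0, 1.0, 2.0, 4.0, 5.0])
samples = true_objective(design)

# Least-squares fit of a quadratic proxy a*x^2 + b*x + c.
coeffs = np.polyfit(design, samples, deg=2)
a, b, c = coeffs

# Minimum of the convex quadratic proxy, found analytically.
x_opt = -b / (2.0 * a)
```

In an iterative workflow, new design points would then be added near x_opt and the proxy refitted until its accuracy is acceptable.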


2005 ◽  
Vol 8 (03) ◽  
pp. 214-223 ◽  
Author(s):  
Fengjun Zhang ◽  
Jan-Arild Skjervheim ◽  
Albert C. Reynolds ◽  
Dean S. Oliver

Summary The Bayesian framework allows one to integrate production and static data into an a posteriori probability density function (pdf) for reservoir variables (model parameters). The problem of generating realizations of the reservoir variables for the assessment of uncertainty in reservoir description or predicted reservoir performance then becomes a problem of sampling this a posteriori pdf to obtain a suite of realizations. Generation of a realization by the randomized-maximum-likelihood method requires the minimization of an objective function that includes production-data misfit terms and a model misfit term that arises from a prior model constructed from static data. Minimization of this objective function with an optimization algorithm is equivalent to the automatic history matching of production data, with a prior model constructed from static data providing regularization. Because of the computational cost of computing sensitivity coefficients and the need to solve matrix problems involving the covariance matrix for the prior model, this approach has not been applied to problems in which the number of data and the number of reservoir-model parameters are both large and the forward problem is solved by a conventional finite-difference simulator. In this work, we illustrate that computational efficiency problems can be overcome by using a scaled limited-memory Broyden-Fletcher-Goldfarb-Shanno (LBFGS) algorithm to minimize the objective function and by using approximate computational stencils to approximate the multiplication of a vector by the prior covariance matrix or its inverse. Implementation of the LBFGS method requires only the gradient of the objective function, which can be obtained from a single solution of the adjoint problem; individual sensitivity coefficients are not needed. We apply the overall process to two examples.
The first is a true field example in which a realization of log permeabilities at 26,019 gridblocks is generated by the automatic history matching of pressure data, and the second is a pseudo field example that provides a very rough approximation to a North Sea reservoir in which a realization of log permeabilities at 9,750 gridblocks is computed by the automatic history matching of gas/oil ratio (GOR) and pressure data.

Introduction Bayes' theorem provides a general framework for updating a pdf as new data or information on the model becomes available. The Bayesian setting offers a distinct advantage. If one can generate a suite of realizations that represent a correct sampling of the a posteriori pdf, then the suite of samples provides an assessment of the uncertainty in reservoir variables. Moreover, by predicting future reservoir performance under proposed operating conditions for each realization, one can characterize the uncertainty in future performance predictions by constructing statistics for the set of outcomes. Liu and Oliver have recently presented a comparison of methods for sampling the a posteriori pdf. Their results indicate that the randomized-maximum-likelihood method is adequate for evaluating uncertainty with a relatively limited number of samples. In this work, we consider the case in which a prior geostatistical model constructed from static data is available and is represented by a multivariate Gaussian pdf. Then, the a posteriori pdf conditional to production data is such that calculation of the maximum a posteriori estimate or generation of a realization by the randomized-maximum-likelihood method is equivalent to the minimization of an appropriate objective function. History-matching problems of interest to us involve a few thousand to tens of thousands of reservoir variables and a few hundred to a few thousand production data. Thus, an optimization algorithm suitable for large-scale problems is needed.
Our belief is that nongradient-based algorithms such as simulated annealing and the genetic algorithm are not competitive with gradient-based algorithms in terms of computational efficiency. Classical gradient-based algorithms such as the Gauss-Newton and Levenberg-Marquardt methods typically converge fairly quickly and have been applied successfully to automatic history matching for both single-phase- and multiphase-flow problems. No multiphase-flow example considered in these papers involved more than 1,500 reservoir variables. For single-phase-flow problems, He et al. and Reynolds et al. have generated realizations of models involving up to 12,500 reservoir variables by automatic history matching of pressure data. However, they used a procedure based on their generalization of the method of Carter et al. to calculate sensitivity coefficients; this method assumes that the partial-differential equation solved by reservoir simulation is linear and does not apply for multiphase-flow problems.
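The shape of the randomized-maximum-likelihood objective (data-misfit term plus prior model-misfit term) can be sketched on a small linear toy problem. This is illustrative only: the paper minimizes with a scaled LBFGS algorithm and obtains the gradient from one adjoint solve, whereas the stand-in below uses plain gradient descent and an explicit linear operator G, with hypothetical small covariance matrices.

```python
import numpy as np

rng = np.random.default_rng(0)
n_model, n_data = 6, 4
G = rng.standard_normal((n_data, n_model))   # linearized forward operator (toy)
m_prior = np.zeros(n_model)                  # prior model (from static data)
m_true = rng.standard_normal(n_model)
d_obs = G @ m_true                           # noise-free "production data"
Cm_inv = np.eye(n_model)                     # inverse prior covariance (toy)
Cd_inv = np.eye(n_data)                      # inverse data covariance (toy)

def objective_and_gradient(m):
    # J(m) = 0.5 ||G m - d||^2_Cd + 0.5 ||m - m_prior||^2_Cm
    r = G @ m - d_obs
    J = 0.5 * r @ Cd_inv @ r + 0.5 * (m - m_prior) @ Cm_inv @ (m - m_prior)
    grad = G.T @ Cd_inv @ r + Cm_inv @ (m - m_prior)
    return J, grad

m = m_prior.copy()
for _ in range(500):
    J, g = objective_and_gradient(m)
    m -= 0.05 * g   # fixed step; LBFGS would instead build a quasi-Newton direction
```

Note that only the gradient is needed in the loop, which is what makes an adjoint-based LBFGS attractive at field scale: no individual sensitivity coefficients are formed.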


2007 ◽  
Vol 10 (03) ◽  
pp. 233-240 ◽  
Author(s):  
Alberto Cominelli ◽  
Fabrizio Ferdinandi ◽  
Pierre Claude de Montleau ◽  
Roberto Rossi

Summary Reservoir management is based on the prediction of reservoir performance by means of numerical-simulation models. Reliable predictions require that the numerical model mimic the production history. Therefore, the numerical model is modified to match the production data. This process is termed history matching (HM). From a mathematical viewpoint, HM is an optimization problem, where the target is to minimize an objective function quantifying the misfit between observed and simulated production data. One of the main problems in HM is the choice of an effective parameterization—a set of reservoir properties that can be plausibly altered to get a history-matched model. This issue is known as a parameter-identification problem, and its solution usually represents a significant step in HM projects. In this paper, we propose a practical implementation of a multiscale approach aimed at identifying effective parameterizations in real-life HM problems. The approach requires the availability of gradient simulators capable of providing the user with derivatives of the objective function with respect to the parameters at hand. Objective-function derivatives can then be used in a multiscale setting to define a sequence of richer and richer parameterizations. At each step of the sequence, the matching of the production data is improved by means of a gradient-based optimization. The methodology was validated on a synthetic case and was applied to history match the simulation model of a North Sea oil reservoir. The proposed methodology can be considered a practical solution for parameter-identification problems in many real cases until sound methodologies (primarily adaptive multiscale estimation of parameters) become available in commercial software programs.

Introduction Predictions of reservoir behavior require the definition of subsurface properties at the scale of the simulation grid cells.
At this scale, a reliable description of the porous media requires us to build a reservoir model by integrating all the available sources of data. By their nature, we can categorize the data as prior and production data. Prior data can be seen as "direct" measures or representations of the reservoir properties. Production data include flow measurements collected at wells [e.g., water cut, gas/oil ratio (GOR), and shut-in pressure] and time-lapse seismic data. Prior data are directly incorporated in the setup of the reservoir model, typically in the framework of well-established reservoir-characterization workflows.
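The multiscale sequence of richer and richer parameterizations can be sketched on a toy 1D problem (hypothetical, not the authors' implementation): start from a single coarse parameter over the whole grid, optimize the match, then split the zonation and re-optimize. Here the "simulator" is a direct least-squares comparison against a known fine-scale field, and the per-zone optimum (the zone mean) stands in for the gradient-based optimization at each step.

```python
import numpy as np

n_cells = 8
true_field = np.array([1.0, 1.0, 1.0, 1.0, 5.0, 5.0, 5.0, 5.0])

def model_from_zones(values, zones):
    """Expand one constant value per zone into a fine-scale field."""
    field = np.empty(n_cells)
    for v, (lo, hi) in zip(values, zones):
        field[lo:hi] = v
    return field

def best_fit_values(zones):
    # For a least-squares misfit, the optimal constant per zone is the mean
    # of the target over that zone (stand-in for gradient-based optimization).
    return [true_field[lo:hi].mean() for lo, hi in zones]

def refine(zones):
    """Split every zone of more than one cell in two: a richer parameterization."""
    out = []
    for lo, hi in zones:
        mid = (lo + hi) // 2
        out.extend([(lo, mid), (mid, hi)] if hi - lo > 1 else [(lo, hi)])
    return out

zones = [(0, n_cells)]                 # coarsest parameterization
history = []
for _ in range(3):                     # sequence of richer parameterizations
    values = best_fit_values(zones)
    field = model_from_zones(values, zones)
    history.append(0.5 * np.sum((field - true_field) ** 2))
    zones = refine(zones)
```

An adaptive variant would use objective-function derivatives to decide which zones to split, rather than splitting all of them uniformly as above.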


2018 ◽  
Vol 141 (2) ◽  
Author(s):  
Zhaoqi Fan ◽  
Daoyong Yang ◽  
Di Chai ◽  
Xiaoli Li

The iterative ensemble smoother (IES) algorithm has been extensively used to implicitly and inversely determine model parameters by assimilating measured/reference production profiles. The performance of IES algorithms is usually challenged by the simultaneous assimilation of all production data and by the multiple iterations required to handle the inherent nonlinearity between production profiles and model parameters. In this paper, a modified IES algorithm has been proposed and validated to improve the efficiency and accuracy of the IES algorithm on the standard test model (i.e., the PUNQ-S3 model). More specifically, a recursive approach is used to optimize the screening process for the damping factor, improving the efficiency of the IES algorithm without compromising history-matching performance, because an inappropriate damping factor potentially yields more iterations and significantly increased computational expense. In addition, a normalization method is proposed to revamp the sensitivity matrix by minimizing the data heterogeneity associated with the model-parameter matrix and the production-data matrix in the updating processes of the IES algorithm. The coefficients of relative permeability and capillary pressure are included in the model-parameter matrix, which is iteratively estimated by assimilating the reference production data (i.e., well bottomhole pressure (WBHP), gas/oil ratio, and water cut) of five production wells. Three scenarios are designed to separately demonstrate the competence of the modified IES algorithm by comparing the objective-function reduction, history-matched production-profile convergence, model-parameter variance reduction, and the relative permeability and capillary pressure of each scenario. It has been found from the PUNQ-S3 model that the computational expense can be reduced by 50% when comparing the modified and original IES algorithms.
Also, a larger objective-function reduction, an improved history-matched production profile, and a decreased model-parameter variance have been achieved by using the modified IES algorithm, resulting in a further reduced deviation between the reference and the estimated relative permeability and capillary pressure in comparison to those obtained from the original IES algorithm. Consequently, the modified IES algorithm, integrated with the recursive approach and the normalization method, has been substantiated to be robust and pragmatic for improving the performance of IES algorithms in terms of reducing computational expense and improving accuracy.
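A single damped ensemble-smoother update can be sketched on a linear toy model (illustrative only: the paper's recursive damping-factor search, sensitivity-matrix normalization, and PUNQ-S3 forward model are not reproduced; the operator G, ensemble size, and damping value below are hypothetical).

```python
import numpy as np

rng = np.random.default_rng(1)
n_ens, n_param, n_data = 50, 3, 5
G = rng.standard_normal((n_data, n_param))   # toy linear forward operator
m_true = np.array([1.0, -2.0, 0.5])
d_obs = G @ m_true                           # reference "production data"

M = rng.standard_normal((n_param, n_ens))    # prior parameter ensemble
obs_err = 0.01                               # observation-error variance (toy)
damping = 0.5                                # damping factor (illustrative value)

for _ in range(10):                          # iterations of the smoother
    D = G @ M                                # predicted data ensemble
    Mm = M - M.mean(axis=1, keepdims=True)   # parameter anomalies
    Dm = D - D.mean(axis=1, keepdims=True)   # data anomalies
    C_md = Mm @ Dm.T / (n_ens - 1)           # parameter/data cross-covariance
    C_dd = Dm @ Dm.T / (n_ens - 1) + obs_err * np.eye(n_data)
    K = C_md @ np.linalg.inv(C_dd)           # Kalman-style gain
    perturbed = d_obs[:, None] + obs_err * rng.standard_normal((n_data, n_ens))
    M = M + damping * K @ (perturbed - D)    # damped ensemble update
```

The damping factor scales each update step; too large a value can overshoot on nonlinear problems, while too small a value inflates the iteration count, which is the trade-off the paper's recursive screening targets.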


Author(s):  
Umeshkannan P ◽  
Muthurajan KG

Developed countries continuously consume large amounts of energy in all forms, including electricity, using advanced technologies. Developing nations' energy usage is rising quickly but remains low relative to their populations, and their methods of generating power are not as advanced as those of developed nations. The objective function of this linear programming model is to maximize the average efficiency of power generation in India for 2020 by giving preference to energy-efficient technologies. The model is subject to various constraints such as potential, demand, running cost, hydrogen/carbon ratio, isolated load, emissions, and already installed capacities. The TORA package is used to solve this linear program. Coal, gas, hydro, and nuclear sources can supply around 87% of the power requirement. It is concluded that power can be produced at an overall efficiency of 37% while meeting a huge demand of 1,300,000 GWh of electricity. The objective function shows a scenario of high average efficiency with a 9% share of renewables. The maximum value is restricted by the low efficiencies of renewable sources, emission constraints on fossil fuels, and cost restrictions on some of the efficient technologies. The model shows that a maximum of 18% of the total requirement can be met by renewables alone, which reduces the average efficiency to 35.8%. Improving renewable-source technologies and adding capacity to them at regular intervals will enhance their role and standing against fossil fuels in the future. The work involves conceptualizing, modeling, gathering data to be used in the model, and presenting different scenarios for the same objective.
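The allocation problem can be sketched in simplified form (a hypothetical stand-in for the TORA model: only demand and per-source potential constraints are kept, and all efficiencies and capacities below are invented for illustration). With just these two constraint types, maximizing the generation-weighted average efficiency reduces to a greedy fill from the most efficient source down.

```python
# Each entry: (name, conversion efficiency, maximum generation in GWh).
# All figures are hypothetical, not the paper's data.
sources = [
    ("gas",       0.50, 300_000),
    ("coal",      0.38, 700_000),
    ("hydro",     0.85, 150_000),
    ("nuclear",   0.33, 100_000),
    ("renewable", 0.25, 250_000),
]
demand = 1_300_000  # GWh of electricity to be met

def allocate(sources, demand):
    """Greedily dispatch the most efficient sources first, up to their caps."""
    plan, remaining = {}, demand
    for name, eff, cap in sorted(sources, key=lambda s: -s[1]):
        take = min(cap, remaining)
        plan[name] = take
        remaining -= take
    return plan, remaining

plan, shortfall = allocate(sources, demand)
avg_eff = sum(plan[n] * e for n, e, _ in sources) / sum(plan.values())
```

The full model additionally carries emission, cost, H/C-ratio, and isolated-load constraints, which is why it requires a general LP solver rather than this greedy shortcut.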

