A New Parameterization Method for Large-Scale Reservoir History Matching

2015 ◽  
Vol 733 ◽  
pp. 156-160
Author(s):  
Xia Yan ◽  
Jun Li ◽  
Hui Zhao

A novel and simple parameterization method using an ensemble of unconditional model realizations is applied to reduce the dimension of the misfit objective function in large-scale history-matching problems. The major advantage of this parameterization method is that singular value decomposition (SVD) is avoided entirely, eliminating the time and cost of decomposing huge matrices and computing their eigenvectors during parameterization. After the objective function is transformed from the high-dimensional space to a lower-dimensional one by parameterization, a Monte Carlo approach is introduced to evaluate the gradient information in the lower-dimensional domain. Unlike adjoint-gradient algorithms, the gradient in our method is estimated by a stochastic Monte Carlo method, which can be easily coupled with different numerical simulators and avoids complicated adjoint code. Once the estimated gradient information is obtained, any gradient-based algorithm can be used to optimize the objective function. The Monte Carlo algorithm combined with the parameterization method is applied to the Brugge reservoir field. The results show that the present method gives a good estimate of reservoir properties and decreases geological uncertainty without SVD while reaching a lower final objective-function value, providing a more efficient and practical approach to history matching in large-scale fields.
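The two ingredients of the abstract, an SVD-free ensemble parameterization and a Monte Carlo gradient estimate in the reduced space, can be sketched as follows. This is a minimal illustration with a linear toy "simulator" and made-up sizes, not the authors' implementation; the parameterization here simply spans the model space with deviations of random realizations from their mean.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "simulator" misfit: 0.5 * ||G m - d_obs||^2 stands in for the
# history-matching objective evaluated by a reservoir simulator.
def objective(m, d_obs, G):
    r = G @ m - d_obs
    return 0.5 * (r @ r)

# Ensemble-based parameterization: m(x) = m_prior + A @ x, where the
# columns of A are deviations of unconditional realizations from their
# mean. No SVD of A is required; x lives in the small ensemble space.
n_m, n_e, n_d = 50, 5, 20            # model size, ensemble size, data size
G = rng.normal(size=(n_d, n_m))
d_obs = G @ rng.normal(size=n_m)     # synthetic "observed" data
ensemble = rng.normal(size=(n_m, n_e))
m_prior = ensemble.mean(axis=1)
A = ensemble - m_prior[:, None]

def f(x):
    return objective(m_prior + A @ x, d_obs, G)

# Monte Carlo gradient estimate: average finite differences along random
# Gaussian directions (a simple stochastic approximation, no adjoint code).
def mc_gradient(f, x, n_samples=50, eps=1e-4):
    g = np.zeros_like(x)
    for _ in range(n_samples):
        u = rng.standard_normal(x.size)
        g += (f(x + eps * u) - f(x)) / eps * u
    return g / n_samples

# Any gradient-based method can now run in the reduced space; plain
# steepest descent with a hand-tuned step is used here.
x = np.zeros(n_e)
for _ in range(200):
    x -= 2e-4 * mc_gradient(f, x)
```

The expectation of each sampled term is the true reduced-space gradient, so the average converges to it as the number of samples grows, at the cost of one extra objective evaluation per sample.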

SPE Journal ◽  
2007 ◽  
Vol 12 (02) ◽  
pp. 196-208 ◽  
Author(s):  
Guohua Gao ◽  
Gaoming Li ◽  
Albert Coburn Reynolds

Summary For large-scale history-matching problems, optimization algorithms that require only the gradient of the objective function and avoid explicit computation of the Hessian appear to be the best approach. Unfortunately, such algorithms have not been used extensively in practice because computing the gradient of the objective function by the adjoint method requires explicit knowledge of the simulator numerics and expertise in simulator development. Here we apply the simultaneous perturbation stochastic approximation (SPSA) method to history match multiphase flow production data. SPSA, which has recently attracted considerable international attention in a variety of disciplines, can be easily combined with any reservoir simulator to do automatic history matching. The SPSA method uses stochastic simultaneous perturbation of all parameters to generate a downhill search direction at each iteration. The theoretical basis for this probabilistic perturbation is that the expectation of the generated search direction is the steepest-descent direction. We present modifications that improve the convergence behavior of the SPSA algorithm for history matching and compare its performance with the steepest-descent, gradual-deformation, and LBFGS algorithms. Although the convergence properties of the SPSA algorithm are not nearly as good as those of our most recent implementation of a quasi-Newton method (LBFGS), the SPSA algorithm is not simulator specific, and only a few hours of work are required to combine SPSA with any commercial reservoir simulator for automatic history matching. To the best of our knowledge, this is the first introduction of SPSA into the history-matching literature; thus, we make considerable effort to put it in a proper context.
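The core SPSA iteration described above is short enough to sketch directly: all parameters are perturbed simultaneously with a random ±1 vector, and a single two-sided difference yields an unbiased search direction. The quadratic objective and gain constants below are illustrative stand-ins, not the paper's history-matching setup.

```python
import numpy as np

rng = np.random.default_rng(1)

def spsa_step(f, x, a, c):
    """One SPSA iteration: perturb all parameters at once with a random
    Bernoulli (+/-1) vector and form a two-sided gradient estimate whose
    expectation is the true gradient."""
    delta = rng.choice([-1.0, 1.0], size=x.size)
    g_hat = (f(x + c * delta) - f(x - c * delta)) / (2.0 * c) / delta
    return x - a * g_hat

# Toy stand-in for the history-matching objective: quadratic misfit
# between the parameters and "observed data".
d_obs = np.array([1.0, -2.0, 0.5])
f = lambda x: np.sum((x - d_obs) ** 2)

x = np.zeros(3)
for k in range(1, 501):
    # Standard decaying SPSA gain sequences a_k = a/k^0.602, c_k = c/k^0.101.
    x = spsa_step(f, x, a=0.1 / k ** 0.602, c=0.1 / k ** 0.101)
```

Only two objective evaluations are needed per iteration regardless of the number of parameters, which is what makes SPSA attractive when each evaluation is a full reservoir-simulation run.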


2019 ◽  
Author(s):  
Guohua Gao ◽  
Hao Jiang ◽  
Chaohui Chen ◽  
Jeroen C. Vink ◽  
Yaakoub El Khamra ◽  
...  

SPE Journal ◽  
2006 ◽  
Vol 11 (01) ◽  
pp. 5-17 ◽  
Author(s):  
Guohua Gao ◽  
Albert C. Reynolds

Summary For large-scale history-matching problems, where it is not feasible to compute individual sensitivity coefficients, the limited-memory Broyden-Fletcher-Goldfarb-Shanno (LBFGS) method is an efficient optimization algorithm (Zhang and Reynolds, 2002; Zhang, 2002). However, computational experiments reveal that the original implementation of LBFGS may (1) converge to a model that gives an unacceptable match of production data; (2) generate a bad search direction that either leads to false convergence or forces a restart with the steepest-descent direction, which radically reduces the convergence rate; or (3) exhibit overshooting and undershooting, i.e., converge to a vector of model parameters containing some abnormally high or low values that are physically unreasonable. Overshooting and undershooting can occur even though all history-matching problems are formulated in a Bayesian framework with a prior model providing regularization. We show that the rate of convergence and the robustness of the algorithm can be significantly improved by (1) a more robust line-search algorithm, motivated by the theoretical result that the Wolfe conditions should be satisfied; (2) application of a data-damping procedure at early iterations; or (3) enforcing constraints on the model parameters. Computational experiments also indicate that a simple rescaling of the model parameters before applying the optimization algorithm can improve its convergence properties, although the scaling procedure used cannot be theoretically validated. Introduction Minimization of a smooth objective function is customarily done using a gradient-based optimization algorithm such as the Gauss-Newton (GN) method or the Levenberg-Marquardt (LM) algorithm.
The standard implementations of these algorithms (Tan and Kalogerakis, 1991; Wu et al., 1999; Li et al., 2003), however, require the computation of all sensitivity coefficients in order to formulate the Hessian matrix. We are interested in history-matching problems where the number of data to be matched ranges from a few hundred to several thousand and the number of reservoir variables or model parameters to be estimated or simulated ranges from a few hundred to a hundred thousand or more. For the larger problems in this range, the computer resources required to compute all sensitivity coefficients would prohibit the use of the standard Gauss-Newton and Levenberg-Marquardt algorithms. Even for the smallest problems in this range, computing all sensitivity coefficients may not be feasible, as the resulting GN and LM algorithms may require the equivalent of several hundred simulation runs. The relative computational efficiency of GN, LM, nonlinear-conjugate-gradient, and quasi-Newton methods has been discussed in some detail by Zhang and Reynolds (2002) and Zhang (2002).
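The LBFGS machinery the summary refers to, a two-loop recursion that approximates Newton steps from recent gradients plus a sufficient-decrease line search, can be sketched on a toy Bayesian-style objective. Everything below (the linear "simulator" G, the sizes, the Armijo-only line search in place of a full Wolfe search) is an illustrative assumption, not the authors' implementation.

```python
import numpy as np

def lbfgs_direction(g, s_list, y_list):
    """Two-loop recursion: approximate -H^{-1} g from the recent parameter
    steps s_k and gradient changes y_k without ever forming the Hessian."""
    q = g.copy()
    alphas = []
    for s, y in zip(reversed(s_list), reversed(y_list)):
        a = (s @ q) / (y @ s)
        q -= a * y
        alphas.append(a)
    if s_list:
        # Scale by gamma = s.y / y.y as the initial inverse-Hessian guess.
        q *= (s_list[-1] @ y_list[-1]) / (y_list[-1] @ y_list[-1])
    for (s, y), a in zip(zip(s_list, y_list), reversed(alphas)):
        b = (y @ q) / (y @ s)
        q += (a - b) * s
    return -q

# Toy Bayesian-style objective: data misfit plus a prior (regularization)
# term, with a linear map G standing in for the reservoir simulator.
def f_and_grad(m, d_obs, G, m_pr):
    r = G @ m - d_obs
    return 0.5 * (r @ r) + 0.5 * (m - m_pr) @ (m - m_pr), G.T @ r + (m - m_pr)

rng = np.random.default_rng(2)
G = rng.normal(size=(20, 40))
m_pr = np.zeros(40)
d_obs = G @ rng.normal(size=40)

m = m_pr.copy()
fx, g = f_and_grad(m, d_obs, G, m_pr)
s_list, y_list = [], []
for _ in range(50):
    d = lbfgs_direction(g, s_list[-5:], y_list[-5:])
    # Backtracking line search enforcing the sufficient-decrease (Armijo)
    # part of the Wolfe conditions; a full Wolfe search would also check
    # the curvature condition.
    t = 1.0
    for _ in range(40):
        f_new, g_new = f_and_grad(m + t * d, d_obs, G, m_pr)
        if f_new <= fx + 1e-4 * t * (g @ d):
            break
        t *= 0.5
    s, y = t * d, g_new - g
    if y @ s > 1e-12:  # keep only positive-curvature pairs
        s_list.append(s)
        y_list.append(y)
    m, fx, g = m + t * d, f_new, g_new
```

Storing only the last five (s, y) pairs keeps memory linear in the number of parameters, which is the point of the limited-memory variant for the problem sizes quoted above.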


1975 ◽  
Vol 15 (01) ◽  
pp. 19-38 ◽  
Author(s):  
Wen H. Chen ◽  
John H. Seinfeld

Abstract This paper considers the problem of estimating the shape of a petroleum reservoir on the basis of pressure data from wells within the boundaries of the reservoir. It is assumed that the reservoir properties, such as permeability and porosity, are known but that the location of the boundary is unknown. Thus, this paper addresses a new class of history-matching problems in which the boundary position is the reservoir property to be estimated. The problem is formulated as an optimal-control problem, the location of the boundary being the control variable. Two iterative methods are derived for determining the boundary location that minimizes a functional depending on the deviation between observed and predicted pressures at the wells. The steepest-descent algorithm is illustrated in two sample problems: (1) estimating the radius of a bounded circular reservoir with a centrally located well, and (2) estimating the shape of a two-dimensional, single-phase reservoir with a constant-pressure outer boundary. Introduction A problem of substantial economic importance is the determination of the size and shape of a reservoir. Seismic data serve to define early the probable area occupied by the reservoir; however, a means of using initial well-pressure data to determine further the volume and shape of the reservoir would be valuable. On the basis of representing the pressure behavior in a single-phase bounded reservoir in terms of an eigenfunction expansion, Gavalas and Seinfeld have shown how the total pore volume of an arbitrarily shaped reservoir can be estimated from late transient pressure data at the completed wells.
We consider here the related problem of estimating the shape (or the location of the boundary) of a reservoir from pressure data at an arbitrary number of wells. For reasons of economy, the time allowable for closing wells is limited. It is important, therefore, that any method developed for estimating the shape of a reservoir be applicable, in principle, from the time at which the wells are completed until the current time. Thus, the problem we consider here may be viewed as one in the general realm of history matching, but one in which the boundary location is the property to be estimated rather than the reservoir's physical properties. The formulation in the present study assumes that everything is known about the reservoir except its boundary. In actual practice, the reverse is generally true. (By the time sufficient information is available regarding the spatial distribution of permeability and porosity, the boundaries may be fairly well known.) Nevertheless, relatively early in the life of a reservoir, when initial drillstem tests have served to identify an approximate distribution of properties, it may be of some importance to attempt to estimate the reservoir shape. Since knowledge of reservoir properties such as permeability and porosity is at best a result of initial estimates from well testing, core data, etc., the assumption that these properties are known will, of course, lead only to an approximate reservoir boundary. As the physical properties are identified more accurately, the reservoir boundary can be more accurately estimated.
It is the object of this paper to formulate in a general manner, develop, and initially test computational algorithms for the class of history-matching problems in which the boundary is the unknown property. There are virtually no prior available results on estimating the location of the boundary of a region over which the dependent variable(s) are governed by partial differential equations. The method developed here, based on the variation of a functional on a variable region, is applicable to a system governed by a set of nonlinear partial differential equations with general boundary conditions. The derivation of necessary conditions for optimality and the development of two computational gradient algorithms for determining the optimal boundary are presented in the Appendix. To illustrate the steepest-descent algorithm, we present two computational examples using simulated reservoir data.
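The first sample problem, recovering the radius of a bounded circular reservoir from well pressures by steepest descent on a pressure-misfit functional, can be caricatured in a few lines. The forward model below (pseudo-steady-state depletion proportional to 1/R², with made-up constants) and the finite-difference gradient are stand-ins for the paper's PDE formulation and optimal-control gradient; only the steepest-descent idea is illustrated.

```python
import numpy as np

# Crude stand-in for the bounded circular reservoir: in pseudo-steady
# state the depletion rate scales with 1/(pore volume) ~ 1/R^2. The
# constants p_init and c are arbitrary illustrative values.
def pressure(R, t, p_init=3000.0, c=5.0e4):
    return p_init - c * t / R**2

t = np.linspace(1.0, 30.0, 30)
R_true = 800.0
p_obs = pressure(R_true, t)      # synthetic "observed" well pressures

# Misfit functional: deviation between observed and predicted pressures.
def misfit(R):
    return np.sum((pressure(R, t) - p_obs) ** 2)

# Steepest descent on the misfit with a central finite-difference
# gradient; the step size is hand-tuned for this toy problem.
R, h, step = 500.0, 1e-3, 50.0
for _ in range(2000):
    g = (misfit(R + h) - misfit(R - h)) / (2 * h)
    R -= step * g
```

In the paper the gradient with respect to the boundary location comes from the variational (adjoint) analysis in the Appendix rather than finite differences, which is what makes the approach tractable for a genuinely two-dimensional boundary.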


Author(s):  
E. A. Efimenko ◽  
M. Yu. Bekkiev ◽  
D. R. Mayilyan ◽  
A. S. Chepurnenko

Abstract. Aim. The purpose of the study is to determine the optimal location of supports used in the floor slab of an industrial building. Method. In order to determine the optimal arrangement of the columns, a Monte Carlo algorithm was used in combination with the finite element method. The calculation was carried out on the basis of the theory of elastic thin plates. Results. The article presents a solution to the problem of determining the optimal location of a given number n of point supports of a floor slab from the condition of a minimum objective function. The maximum deflection of the slab, the potential energy of deformation, and the consumption of reinforcement were selected as objective functions. The selection of reinforcement was carried out in accordance with current generally accepted standards for the design of reinforced concrete structures. The calculations were performed using a program developed by the authors in the MATLAB computing environment. Results are given for n = 3, 4, 5. An algorithm modified for a large number of supports n is presented, alongside a comparison of the basic and modified algorithms for n = 25. The possibility of a significant reduction in plate deformations with an irregular arrangement of supports, compared with a regular distribution, is shown. Conclusion. A method based on the Monte Carlo approach is proposed for finding the rational locations of a given number of point supports of a floor slab from the condition of minimum deflection, potential strain energy, and consumption of reinforcement materials. This technique is suitable for arbitrary slab configurations and arbitrary loads. A modification of the algorithm suitable for a large number of supports is presented. The test example shows that the maximum deflection can be reduced by 42% when using an irregular support configuration compared with regular column spacing.
In the examples considered, the positions of all the supports were treated as unknown, but the developed algorithm easily accommodates stationary supports whose positions do not change.
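The basic Monte Carlo search the abstract describes, sample random support layouts and keep the one minimizing the objective, can be sketched as follows. The deflection surrogate below (fourth power of the distance to the nearest support, on a unit-square slab) is a deliberate simplification standing in for the paper's elastic-plate finite element analysis; only the search loop is representative.

```python
import numpy as np

rng = np.random.default_rng(3)

# Evaluation grid over a unit-square slab under notionally uniform load.
grid = np.stack(np.meshgrid(np.linspace(0, 1, 21),
                            np.linspace(0, 1, 21)), axis=-1).reshape(-1, 2)

# Toy surrogate objective: deflection at a point grows roughly like
# span^4 in plate theory, so use (distance to nearest support)^4 and
# take the maximum over the grid. A real evaluation would run an FEM
# plate model instead.
def max_deflection(supports):
    dist = np.linalg.norm(grid[:, None, :] - supports[None, :, :], axis=-1)
    return np.max(dist.min(axis=1) ** 4)

# Basic Monte Carlo algorithm: sample random layouts of n point supports
# and keep the best layout found.
n_supports, best, best_layout = 4, np.inf, None
for _ in range(3000):
    layout = rng.random((n_supports, 2))
    val = max_deflection(layout)
    if val < best:
        best, best_layout = val, layout
```

Stationary supports are accommodated by fixing some rows of `layout` and sampling only the remaining ones, which is the modification mentioned at the end of the abstract.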


Materials ◽  
2020 ◽  
Vol 13 (17) ◽  
pp. 3696
Author(s):  
Artur Chrobak ◽  
Grzegorz Ziółkowski ◽  
Dariusz Chrobak ◽  
Grażyna Chełkowska

This paper refers to Monte Carlo magnetic simulations for large-scale systems. We propose scaling rules that facilitate the analysis of mesoscopic objects using a relatively small number of system nodes. In our model, each node represents a volume defined by an enlargement factor. As a consequence of this approach, the parameters describing magnetic interactions at the atomic level must also be re-scaled, taking into account detailed thermodynamic balance as well as energetic equivalence between the real and re-scaled systems. The accuracy and efficiency of the model are demonstrated through an analysis of size effects on the magnetic-moment configuration for various characteristic objects. As shown, the proposed scaling rules, applied to the disorder-based cluster Monte Carlo algorithm, can be considered suitable tools for designing new magnetic materials and a way to include low-level or first-principles calculations in finite-element Monte Carlo magnetic simulations.
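The general idea, one coarse node standing for a block of atomic moments, with the coupling re-scaled so the coarse system remains energetically equivalent, can be illustrated with a plain Metropolis sweep on an Ising-like lattice. The k**(2/3) scaling factor below is a placeholder assumption for illustration only; the paper derives its rules from detailed thermodynamic balance, and it uses a disorder-based cluster algorithm rather than single-spin Metropolis.

```python
import numpy as np

rng = np.random.default_rng(4)

def metropolis_sweep(spins, J, T):
    """One Metropolis sweep over a 2D Ising-like lattice with nearest-
    neighbor coupling J and periodic boundaries."""
    L = spins.shape[0]
    for _ in range(L * L):
        i, j = rng.integers(L, size=2)
        nb = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
              + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
        dE = 2.0 * J * spins[i, j] * nb
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            spins[i, j] *= -1

# Re-scaling idea: a node representing a block of k atomic moments needs
# an effective coupling so the coarse lattice reproduces the energetics
# of the atomic one. The k**(2/3) (surface-like) factor is an assumed
# illustrative form, not the paper's derived rule.
k = 8                              # enlargement factor (atoms per node)
J_coarse = 1.0 * k ** (2.0 / 3.0)  # re-scaled coupling (assumed form)

spins = np.ones((16, 16))          # start from the ordered state
for _ in range(100):
    metropolis_sweep(spins, J_coarse, T=1.0)
magnetization = abs(spins.mean())
```

At this temperature the re-scaled coupling keeps the coarse lattice deep in the ordered phase, so the magnetization stays near saturation, which is the kind of consistency check one would run against the fine-grained system.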


SPE Journal ◽  
2017 ◽  
Vol 22 (06) ◽  
pp. 1999-2011 ◽  
Author(s):  
Guohua Gao ◽  
Hao Jiang ◽  
Paul van Hagen ◽  
Jeroen C. Vink ◽  
Terence Wells

Summary Solving the Gauss-Newton trust-region subproblem (TRS) with traditional solvers involves solving a symmetric linear system whose dimension equals the number of uncertain parameters, which is extremely computationally expensive for history-matching problems with a large number of uncertain parameters. A new trust-region (TR) solver is developed to save both memory usage and computational cost, and its performance is compared with the well-known direct TR solver using factorization and the iterative TR solver using the conjugate-gradient approach. With application of the matrix inverse lemma, the original TRS is transformed to a new problem that involves solving a linear system whose dimension equals the number of observed data. For history-matching problems in which the number of uncertain parameters is much larger than the number of observed data, both memory usage and central-processing-unit (CPU) time can be significantly reduced compared with solving the original problem directly. An auto-adaptive power-law transformation technique is developed to transform the original strongly nonlinear function into a new function that behaves more like a linear function. Finally, the Newton-Raphson method with some modifications is applied to solve the TRS. The proposed approach is applied to find best-match solutions in Bayesian-style assisted-history-matching (AHM) problems. It is first validated on a set of synthetic test problems with different numbers of uncertain parameters and different numbers of observed data. In terms of efficiency, the new approach is shown to significantly reduce both the computational cost and memory usage compared with the direct TR solver of the GALAHAD optimization library (see http://www.galahad.rl.ac.uk/doc.html). In terms of robustness, the new approach significantly reduces the risk of failing to find the correct solution, compared with the iterative TR solver of the GALAHAD optimization library.
Our numerical results indicate that the new solver can solve large-scale TRSs with reasonably small amounts of CPU time (in seconds) and memory (in MB). Compared with the CPU time and memory used to complete one reservoir-simulation run for the same problem (in hours and in GB), the cost of finding the best-match parameter values using our new TR solver is negligible. The proposed approach has been implemented in our in-house reservoir simulation and history-matching system and has been validated on a real reservoir-simulation model. This illustrates the main result of this paper: the development of a robust Gauss-Newton TR approach that is applicable to large-scale history-matching problems with negligible extra cost in CPU and memory.
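The dimension-reduction step the summary describes rests on the matrix inverse (Sherman-Morrison-Woodbury) lemma: a damped Gauss-Newton system in the large model space can be solved through a system whose size is only the number of observed data. The sketch below demonstrates the algebra on random matrices with a fixed damping parameter; the paper's solver additionally adapts the damping to the trust-region radius.

```python
import numpy as np

rng = np.random.default_rng(5)
n_d, n_m = 20, 2000               # few observed data, many parameters
J = rng.normal(size=(n_d, n_m))   # sensitivity (Jacobian) matrix
r = rng.normal(size=n_d)          # data residuals
lam = 0.5                         # damping (trust-region) parameter
b = -J.T @ r                      # right-hand side of the GN system

# Direct solve in model space: an n_m x n_m system, O(n_m^3) work and
# an n_m x n_m matrix held in memory.
x_direct = np.linalg.solve(lam * np.eye(n_m) + J.T @ J, b)

# Matrix inverse lemma: the same solution via an n_d x n_d system, using
#   (lam*I + J^T J)^{-1} = (1/lam) * (I - J^T (lam*I + J J^T)^{-1} J).
w = np.linalg.solve(lam * np.eye(n_d) + J @ J.T, J @ b)
x_woodbury = (b - J.T @ w) / lam
```

With n_m in the hundreds of thousands the direct system becomes infeasible to form, while the Woodbury route still only factorizes an n_d x n_d matrix, which is the source of the memory and CPU savings reported above.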

