A Gauss–Newton Trust Region Solver for Large Scale History Matching Problems

SPE Journal ◽  
2017 ◽  
Vol 22 (06) ◽  
pp. 1999-2011 ◽  
Author(s):  
Guohua Gao ◽  
Hao Jiang ◽  
Paul van Hagen ◽  
Jeroen C. Vink ◽  
Terence Wells

Summary Solving the Gauss-Newton trust-region subproblem (TRS) with traditional solvers involves solving a symmetric linear system with dimension equal to the number of uncertain parameters, which is extremely computationally expensive for history-matching problems with a large number of uncertain parameters. A new trust-region (TR) solver is developed to save both memory usage and computational cost, and its performance is compared with the well-known direct TR solver using factorization and the iterative TR solver using the conjugate-gradient approach. With application of the matrix inverse lemma, the original TRS is transformed into a new problem that requires solving a linear system with dimension equal to the number of observed data. For history-matching problems in which the number of uncertain parameters is much larger than the number of observed data, both memory usage and central-processing-unit (CPU) time can be significantly reduced compared with solving the original problem directly. An auto-adaptive power-law transformation technique is developed to transform the original strongly nonlinear function into a new function that behaves more like a linear function. Finally, the Newton-Raphson method with some modifications is applied to solve the transformed TRS. The proposed approach is applied to find best-match solutions in Bayesian-style assisted-history-matching (AHM) problems. It is first validated on a set of synthetic test problems with different numbers of uncertain parameters and different numbers of observed data. In terms of efficiency, the new approach is shown to significantly reduce both the computational cost and memory usage compared with the direct TR solver of the GALAHAD optimization library (see http://www.galahad.rl.ac.uk/doc.html). In terms of robustness, the new approach significantly reduces the risk of failing to find the correct solution compared with the iterative TR solver of the GALAHAD optimization library. Our numerical results indicate that the new solver can solve large-scale TRSs with reasonably small amounts of CPU time (in seconds) and memory (in MB). Compared with the CPU time and memory used to complete one reservoir-simulation run for the same problem (in hours and in GB), the cost of finding the best-match parameter values with the new TR solver is negligible. The proposed approach has been implemented in our in-house reservoir-simulation and history-matching system and has been validated on a real-reservoir-simulation model. This illustrates the main result of this paper: the development of a robust Gauss-Newton TR approach that is applicable to large-scale history-matching problems with negligible extra cost in CPU time and memory.
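
The cost saving described above can be sketched in a few lines of NumPy. The following is a minimal, illustrative implementation (not the authors' code, and without the auto-adaptive power-law transformation they describe): it applies the matrix inverse lemma so that each trial step only requires solving an m x m system, where m is the number of observed data, and adjusts the damping parameter with a Newton-type iteration on the secular equation.

```python
import numpy as np

def gn_trs_step(J, r, delta, max_iter=50, tol=1e-8):
    """Gauss-Newton trust-region step for the case n >> m (sketch only).

    Solves min_s ||r + J s||^2 subject to ||s|| <= delta, where J is the
    m x n Jacobian of the residuals r. Instead of factoring the n x n
    matrix J^T J + lam*I, the matrix-inverse (Woodbury) identity gives
        s(lam) = -J^T (J J^T + lam I_m)^{-1} r,
    so only m x m systems are solved while lam is increased until
    ||s(lam)|| is close to delta. J J^T is assumed nonsingular here.
    """
    m, _ = J.shape
    JJt = J @ J.T                                   # m x m, cheap when m << n
    eye = np.eye(m)

    def step(lam):
        y = np.linalg.solve(JJt + lam * eye, r)
        return -J.T @ y, y

    s, y = step(0.0)
    if np.linalg.norm(s) <= delta:                  # interior (full GN) step
        return s

    lam = 1.0
    for _ in range(max_iter):
        s, y = step(lam)
        ns = np.linalg.norm(s)
        if abs(ns - delta) <= tol * delta:
            break
        # d s / d lam = J^T (J J^T + lam I)^{-1} y, one extra m x m solve
        ds = J.T @ np.linalg.solve(JJt + lam * eye, y)
        dnorm = (s @ ds) / ns                       # d ||s|| / d lam (negative)
        phi = 1.0 / ns - 1.0 / delta                # secular equation
        dphi = -dnorm / ns**2
        lam = max(lam - phi / dphi, 1e-12)          # Newton update, kept positive
    return s
```

Because every iteration only touches m x m systems and matrix-vector products with J, memory scales with the number of observed data rather than with the number of uncertain parameters.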


SPE Journal ◽  
2006 ◽  
Vol 11 (01) ◽  
pp. 5-17 ◽  
Author(s):  
Guohua Gao ◽  
Albert C. Reynolds

Summary For large-scale history-matching problems, where it is not feasible to compute individual sensitivity coefficients, the limited-memory Broyden-Fletcher-Goldfarb-Shanno (LBFGS) method is an efficient optimization algorithm (Zhang and Reynolds, 2002; Zhang, 2002). However, computational experiments reveal that application of the original implementation of LBFGS may encounter the following problems: converge to a model which gives an unacceptable match of production data; generate a bad search direction that either leads to false convergence or a restart with the steepest-descent direction, which radically reduces the convergence rate; or exhibit overshooting and undershooting, i.e., converge to a vector of model parameters which contains some abnormally high or low values that are physically unreasonable. Overshooting and undershooting can occur even though all history-matching problems are formulated in a Bayesian framework with a prior model providing regularization. We show that the rate of convergence and the robustness of the algorithm can be significantly improved by: a more robust line-search algorithm motivated by the theoretical result that the Wolfe conditions should be satisfied; an application of a data-damping procedure at early iterations; or enforcing constraints on the model parameters. Computational experiments also indicate that a simple rescaling of model parameters prior to application of the optimization algorithm can improve the convergence properties of the algorithm, although the scaling procedure used cannot be theoretically validated.

Introduction Minimization of a smooth objective function is customarily done using a gradient-based optimization algorithm such as the Gauss-Newton (GN) method or Levenberg-Marquardt (LM) algorithm. The standard implementations of these algorithms (Tan and Kalogerakis, 1991; Wu et al., 1999; Li et al., 2003), however, require the computation of all sensitivity coefficients in order to formulate the Hessian matrix. We are interested in history-matching problems where the number of data to be matched ranges from a few hundred to several thousand and the number of reservoir variables or model parameters to be estimated or simulated ranges from a few hundred to a hundred thousand or more. For the larger problems in this range, the computer resources required to compute all sensitivity coefficients would prohibit the use of the standard Gauss-Newton and Levenberg-Marquardt algorithms. Even for the smallest problems in this range, computation of all sensitivity coefficients may not be feasible, as the resulting GN and LM algorithms may require the equivalent of several hundred simulation runs. The relative computational efficiency of GN, LM, nonlinear conjugate-gradient, and quasi-Newton methods has been discussed in some detail by Zhang and Reynolds (2002) and Zhang (2002).
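
As a concrete illustration of the line-search safeguard mentioned above, the sketch below checks the (weak) Wolfe conditions for a candidate step length. The function and constants are a hypothetical helper for illustration, not the authors' implementation.

```python
def satisfies_wolfe(f, grad, x, p, alpha, c1=1e-4, c2=0.9):
    """Check the weak Wolfe conditions for step length alpha along
    search direction p (hypothetical helper, illustration only).

    f    : callable returning the objective value at a point
    grad : callable returning the gradient vector at a point
    """
    fx = f(x)
    gx_p = grad(x) @ p                  # directional derivative at x
    x_new = x + alpha * p
    sufficient_decrease = f(x_new) <= fx + c1 * alpha * gx_p
    curvature = grad(x_new) @ p >= c2 * gx_p
    return sufficient_decrease and curvature
```

A line search that backtracks or interpolates until this check passes keeps the curvature information fed to LBFGS positive, which is what preserves a positive-definite Hessian approximation and avoids the bad search directions described above.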


2015 ◽  
Vol 733 ◽  
pp. 156-160
Author(s):  
Xia Yan ◽  
Jun Li ◽  
Hui Zhao

A novel and simple parameterization method using an ensemble of unconditional model realizations is applied to decrease the dimension of the misfit objective function in large-scale history-matching problems. The major advantage of this parameterization method is that the singular-value-decomposition (SVD) calculation is completely avoided, which saves the time and cost of the huge matrix decomposition and eigenvector computations in the parameterization process. After the objective function is transformed from a higher dimension to a lower dimension by parameterization, a Monte Carlo approach is introduced to evaluate the gradient information in the lower-dimensional domain. Unlike adjoint-gradient algorithms, the gradient in our method is estimated by a Monte Carlo stochastic method, which can be easily coupled with different numerical simulators and avoids complicated adjoint code. Once the estimated gradient information is obtained, any gradient-based algorithm can be implemented to optimize the objective function. The Monte Carlo algorithm combined with the parameterization method is applied to the Brugge reservoir field. The results show that the present method gives a good estimation of reservoir properties and decreases the geological uncertainty without SVD, while reaching a lower final objective-function value, which provides a more efficient and useful way to perform history matching in large-scale fields.
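
A minimal sketch of the kind of stochastic gradient evaluation described above (an assumed form, not the authors' code): the full-dimensional model is written as a linear combination of unconditional ensemble realizations, and the gradient with respect to the low-dimensional coefficients is estimated by averaging random perturbations of those coefficients.

```python
import numpy as np

def mc_gradient(obj, xi, n_samples=30, sigma=0.1, rng=None):
    """Monte Carlo estimate of the gradient of obj at the reduced-space
    coefficients xi (hypothetical helper, illustration only).

    obj : callable mapping reduced coefficients to the misfit value,
          e.g. obj(xi) = misfit(m_prior + E @ xi), where the columns of E
          are (centered) unconditional ensemble realizations.
    """
    rng = np.random.default_rng() if rng is None else rng
    f0 = obj(xi)
    grad = np.zeros_like(xi)
    for _ in range(n_samples):
        dz = sigma * rng.standard_normal(xi.size)
        grad += (obj(xi + dz) - f0) * dz / sigma**2   # slope estimate per sample
    return grad / n_samples
```

Each sample costs one simulator evaluation and no adjoint code, which is the trade-off the abstract highlights.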


SPE Journal ◽  
2019 ◽  
Vol 25 (01) ◽  
pp. 037-055
Author(s):  
Guohua Gao ◽  
Hao Jiang ◽  
Chaohui Chen ◽  
Jeroen C. Vink ◽  
Yaakoub El Khamra ◽  
...  

Summary It has been demonstrated that the Gaussian-mixture-model (GMM) fitting method can construct a GMM that more accurately approximates the posterior probability density function (PDF) by conditioning reservoir models to production data. However, the number of degrees of freedom (DOFs) for all unknown GMM parameters might become huge for large-scale history-matching problems. A new formulation of GMM fitting with a reduced number of DOFs is proposed in this paper to save memory use and reduce computational cost. The performance of the new method is benchmarked against other methods using test problems with different numbers of uncertain parameters. The new method performs more efficiently than the full-rank GMM fitting formulation, reducing the memory use and computational cost by a factor of 5 to 10. Although it is less efficient than the simple GMM approximation dependent on local linearization (L-GMM), it achieves much higher accuracy, reducing the error by a factor of 20 to 600. Finally, the new method together with the parallelized acceptance/rejection (A/R) algorithm is applied to a synthetic history-matching problem for demonstration.
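
To make the degrees-of-freedom issue concrete, the small sketch below (illustration only; it uses scikit-learn's generic GMM rather than the paper's reduced-DOF formulation) compares the covariance parameter counts of a full-covariance and a diagonal-covariance mixture, which is the same kind of saving the proposed formulation targets for large parameter dimensions.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# With k components in d dimensions, full covariances need k*d*(d+1)/2
# parameters, while diagonal covariances need only k*d. The paper's
# reduced-DOF formulation pursues this kind of saving in a different way.
d, k = 50, 4
samples = np.random.default_rng(0).standard_normal((2000, d))

full = GaussianMixture(n_components=k, covariance_type="full", random_state=0).fit(samples)
diag = GaussianMixture(n_components=k, covariance_type="diag", random_state=0).fit(samples)

print("covariance DOFs:", {"full": k * d * (d + 1) // 2, "diag": k * d})
```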


2021 ◽  
Author(s):  
Mohammed Amr Aly ◽  
Patrizia Anastasi ◽  
Giorgio Fighera ◽  
Ernesto Della Rossa

Abstract Ensemble approaches are increasingly used for history matching, including with large-scale models. However, their iterative nature and the high computational resources they require demand careful and consistent parameterization of the initial ensemble of models, to avoid repeated and time-consuming attempts before an acceptable match is achieved. The objective of this work is to introduce ensemble-based data-analytic techniques to validate the starting ensemble and identify potential parameterization problems early, with significant time savings. These techniques are based on the same definition of the mismatch between the initial-ensemble simulation results and the historical data that is used by ensemble algorithms. In fact, a notion of distance among ensemble realizations can be introduced using the mismatch, opening the possibility of using statistical analytic techniques such as multidimensional scaling and generalized sensitivity analysis. In this way, a clear and immediate view of ensemble behavior can be quickly explored. Combining these views with advanced correlation analysis, a fast assessment of ensemble consistency with the observed data and with the physical understanding of the reservoir is then possible. The application of the proposed methodology to real ensemble history-matching studies shows that the approach is very effective in identifying whether a specific initial ensemble has an adequate parameterization to start a successful computational loop of data assimilation. Insufficient variability, due to poor capture of the reservoir performance, can be investigated both at field scale and at well scale by data-analytics computations. The information contained in ensemble mismatches of relevant quantities, such as water breakthrough and gas/oil ratio, is then evaluated in a systematic way. The analysis often reveals where and which uncertainties do not have enough variability to explain the historical data. It also allows detection of the role of apparently inconsistent parameters. In principle, it is possible to start the heavy iterative computation even with an initial ensemble for which the analytics tools show potential difficulties and problems. However, experience with large-scale models shows that the probability of obtaining a good match in these situations is very low, leading to a time-consuming revision of the entire process. On the contrary, if the ensemble is validated, the iterative large-scale computations achieve a good calibration with a consistency that enables predictive ability. As a new and interesting feature of the proposed methodology, advanced ensemble data-analytics techniques are able to give clues and suggestions, in advance, about which parameters could be sources of potential history-matching problems. In this way, it is possible to anticipate the revision of uncertainties directly on the initial ensemble, for example by modifying ranges, introducing new parameters, and better tuning other ensemble factors, such as localization and observation tolerances, that control the ultimate match quality.
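
A minimal sketch of the mismatch-based distance and multidimensional-scaling view described above (an assumed data layout, not the authors' workflow): each realization is reduced to its vector of normalized mismatches, pairwise distances are computed in that space, and MDS projects the ensemble onto a 2D map for quick visual screening.

```python
import numpy as np
from sklearn.manifold import MDS

def ensemble_mismatch_map(sim, obs, sigma):
    """Project ensemble realizations onto a 2D map based on their
    normalized mismatches (hypothetical helper, illustration only).

    sim   : (n_real, n_data) simulated responses of the initial ensemble
    obs   : (n_data,) observed data
    sigma : (n_data,) observation tolerances used for normalization
    """
    mis = (sim - obs) / sigma                       # normalized mismatch per realization
    # Euclidean distance between realizations in mismatch space
    dist = np.linalg.norm(mis[:, None, :] - mis[None, :, :], axis=-1)
    coords = MDS(n_components=2, dissimilarity="precomputed",
                 random_state=0).fit_transform(dist)
    return coords                                   # one 2D point per realization
```

Clusters or outliers in this map give the kind of early warning about insufficient variability that the abstract describes, before any expensive data-assimilation loop is started.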


SPE Journal ◽  
2007 ◽  
Vol 12 (02) ◽  
pp. 196-208 ◽  
Author(s):  
Guohua Gao ◽  
Gaoming Li ◽  
Albert Coburn Reynolds

Summary For large-scale history-matching problems, optimization algorithms that require only the gradient of the objective function and avoid explicit computation of the Hessian appear to be the best approach. Unfortunately, such algorithms have not been extensively used in practice because computation of the gradient of the objective function by the adjoint method requires explicit knowledge of the simulator numerics and expertise in simulation development. Here we apply the simultaneous perturbation stochastic approximation (SPSA) method to history match multiphase-flow production data. SPSA, which has recently attracted considerable international attention in a variety of disciplines, can be easily combined with any reservoir simulator to do automatic history matching. The SPSA method uses stochastic simultaneous perturbation of all parameters to generate a downhill search direction at each iteration. The theoretical basis for this probabilistic perturbation is that the expectation of the search direction generated is the steepest-descent direction. We present modifications to improve the convergence behavior of the SPSA algorithm for history matching and compare its performance to the steepest-descent, gradual-deformation, and LBFGS algorithms. Although the convergence properties of the SPSA algorithm are not nearly as good as those of our most recent implementation of a quasi-Newton method (LBFGS), the SPSA algorithm is not simulator specific, and it requires only a few hours of work to combine SPSA with any commercial reservoir simulator to do automatic history matching. To the best of our knowledge, this is the first introduction of SPSA into the history-matching literature. Thus, we make a considerable effort to put it in a proper context.
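
The core of SPSA is the two-sided simultaneous perturbation sketched below (illustration only; the step-size schedules and the authors' modifications are omitted): every parameter is perturbed at once with a random +/-1 vector, so each search direction costs just two objective (simulation) evaluations regardless of the number of parameters.

```python
import numpy as np

def spsa_search_direction(obj, x, c=0.1, rng=None):
    """One SPSA gradient approximation and the resulting downhill
    direction (illustrative sketch only).

    obj : callable returning the history-matching objective value
    x   : current parameter vector
    c   : perturbation size
    """
    rng = np.random.default_rng() if rng is None else rng
    delta = rng.choice([-1.0, 1.0], size=x.size)        # Bernoulli +/-1 perturbation
    g_hat = (obj(x + c * delta) - obj(x - c * delta)) / (2.0 * c) * (1.0 / delta)
    return -g_hat       # downhill direction; its expectation aligns with steepest descent
```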


2021 ◽  
Vol 11 (2) ◽  
pp. 563
Author(s):  
Tuong Phuoc Tho ◽  
Nguyen Truong Thinh

In construction, large-scale 3D printing methods are used to build houses quickly, based on computer-aided design. Currently, the construction industry is beginning to apply many 3D-printing technologies to create buildings that require a quick construction time and complex structures that classical methods cannot produce. In this paper, a Cable-Driven Parallel Robot (CDPR) is described for the 3D printing of concrete for building a house. The CDPR structure is designed to be suitable for 3D printing in a large workspace. A linear-programming algorithm is used to quickly solve the inverse kinematic problem with the force-equilibrium condition for the moving platform; this method is suitable for the flexible configuration of a CDPR corresponding to various spaces. Cable sagging is also analyzed with the trust-region dogleg algorithm to increase the accuracy of the inverse kinematic solution used to control the robot through basic trajectory-interpolation movements. The paper also covers the design and analysis of a concrete extruder for the 3D-printing method. The analytical results are verified experimentally on a prototype of the CDPR to evaluate the workability and suitability of this design. The results show that the design is suitable for 3D printing in construction, with high precision and stable trajectory printing. The robot configuration can be easily adjusted and calculated to suit the construction space, while maintaining rigidity as well as an adequate operating space. The actuators are compact, easy to disassemble and move, and capable of accommodating a wide variety of dimensions.
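
The force-equilibrium step mentioned above can be posed as a small linear program; the sketch below (an assumed formulation, not the authors' exact one) finds cable tensions that balance the external wrench on the moving platform while respecting tension limits.

```python
import numpy as np
from scipy.optimize import linprog

def cable_tensions(W, wrench, t_min=10.0, t_max=2000.0):
    """Tension distribution for a cable-driven parallel robot, posed as a
    linear program (illustrative sketch only).

    W      : (6, n_cables) structure/wrench matrix for the current pose
    wrench : (6,) external wrench (gravity, printing loads) to balance
    """
    n = W.shape[1]
    res = linprog(c=np.ones(n),                 # minimize total cable tension
                  A_eq=W, b_eq=wrench,
                  bounds=[(t_min, t_max)] * n,
                  method="highs")
    if not res.success:
        raise ValueError("pose is outside the feasible (wrench-closure) workspace")
    return res.x                                # one tension per cable
```

Because the LP is tiny (one variable per cable), it can be re-solved at every trajectory point, which is what makes it attractive for flexible CDPR configurations.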


SPE Journal ◽  
2019 ◽  
Vol 24 (04) ◽  
pp. 1508-1525
Author(s):  
Mengbi Yao ◽  
Haibin Chang ◽  
Xiang Li ◽  
Dongxiao Zhang

Summary Naturally or hydraulically fractured reservoirs usually contain fractures at various scales. Among these fractures, large-scale fractures might strongly affect fluid flow, making them essential for production behavior. Areas with densely populated small-scale fractures might also affect the flow capacity of the region and contribute to production. However, because of limited information, locating each small-scale fracture individually is impossible. The coexistence of different fracture scales also constitutes a great challenge for history matching. In this work, an integrated approach is proposed to inverse model multiscale fractures hierarchically using dynamic production data. In the proposed method, a hybrid of an embedded discrete fracture model (EDFM) and a dual-porosity/dual-permeability (DPDP) model is devised to parameterize multiscale fractures. The large-scale fractures are explicitly modeled by EDFM with Hough-transform-based parameterization to maintain their geometrical details. For the area with densely populated small-scale fractures, a truncated Gaussian field is applied to capture its spatial distribution, and then the DPDP model is used to model this fracture area. After the parameterization, an iterative history-matching method is used to inversely model the flow in a fractured reservoir. Several synthetic cases, including one case with single-scale fractures and three cases with multiscale fractures, are designed to test the performance of the proposed approach.
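
A minimal 2D sketch of a Hough-transform-style fracture parameterization in the spirit of the approach above (assumed parameter names, illustration only): a large-scale fracture is described by the distance and angle of its supporting line plus a midpoint offset and half-length, and converted back to segment endpoints for the EDFM grid.

```python
import numpy as np

def fracture_endpoints(rho, theta, s_mid, half_length):
    """Convert Hough-style parameters of a 2D fracture into its two
    endpoints (hypothetical parameterization, illustration only).

    rho, theta  : signed distance and normal angle of the supporting line
    s_mid       : position of the fracture midpoint along the line
    half_length : half of the fracture length
    """
    normal = np.array([np.cos(theta), np.sin(theta)])
    tangent = np.array([-np.sin(theta), np.cos(theta)])   # direction along the line
    midpoint = rho * normal + s_mid * tangent
    return midpoint - half_length * tangent, midpoint + half_length * tangent
```

Updating (rho, theta, s_mid, half_length) during history matching moves and rotates the whole fracture smoothly, which is the appeal of this kind of geometric parameterization.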

