An Improved Implementation of the LBFGS Algorithm for Automatic History Matching

SPE Journal ◽  
2006 ◽  
Vol 11 (01) ◽  
pp. 5-17 ◽  
Author(s):  
Guohua Gao ◽  
Albert C. Reynolds

Summary For large-scale history-matching problems, where it is not feasible to compute individual sensitivity coefficients, the limited-memory Broyden-Fletcher-Goldfarb-Shanno (LBFGS) algorithm is an efficient optimization method (Zhang and Reynolds, 2002; Zhang, 2002). However, computational experiments reveal that the original implementation of LBFGS may encounter the following problems: (1) convergence to a model that gives an unacceptable match of production data; (2) generation of a bad search direction that either leads to false convergence or to a restart with the steepest-descent direction, which radically reduces the convergence rate; (3) overshooting and undershooting, i.e., convergence to a vector of model parameters containing some abnormally high or low values that are physically unreasonable. Overshooting and undershooting can occur even though all history-matching problems are formulated in a Bayesian framework with a prior model providing regularization. We show that the rate of convergence and the robustness of the algorithm can be significantly improved by (1) a more robust line-search algorithm motivated by the theoretical result that the Wolfe conditions should be satisfied, (2) application of a data-damping procedure at early iterations, or (3) enforcement of constraints on the model parameters. Computational experiments also indicate that a simple rescaling of the model parameters before applying the optimization algorithm can improve its convergence properties, although the scaling procedure used cannot be theoretically validated. Introduction Minimization of a smooth objective function is customarily done with a gradient-based optimization algorithm such as the Gauss-Newton (GN) method or the Levenberg-Marquardt (LM) algorithm. The standard implementations of these algorithms (Tan and Kalogerakis, 1991; Wu et al., 1999; Li et al., 2003), however, require the computation of all sensitivity coefficients in order to formulate the Hessian matrix. We are interested in history-matching problems where the number of data to be matched ranges from a few hundred to several thousand and the number of reservoir variables or model parameters to be estimated or simulated ranges from a few hundred to a hundred thousand or more. For the larger problems in this range, the computer resources required to compute all sensitivity coefficients would prohibit the use of the standard Gauss-Newton and Levenberg-Marquardt algorithms. Even for the smallest problems in this range, computing all sensitivity coefficients may not be feasible, because the resulting GN and LM algorithms may require the equivalent of several hundred simulation runs. The relative computational efficiency of GN, LM, nonlinear conjugate-gradient, and quasi-Newton methods has been discussed in some detail by Zhang and Reynolds (2002) and Zhang (2002).
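
Below is a minimal sketch, in Python with SciPy, of the setup the summary describes: a Bayesian history-matching objective with prior regularization, minimized under bound constraints by an LBFGS variant whose line search enforces Wolfe-type conditions. The forward model g, the covariances, and the bounds are illustrative placeholders, not the authors' setup.

```python
# Minimal sketch (not the authors' code): Bayesian history matching
# minimized with bound-constrained LBFGS. g, covariances, and bounds
# are illustrative placeholders.
import numpy as np
from scipy.optimize import minimize

n_m, n_d = 50, 5                             # model parameters, observed data

def g(m):
    # hypothetical forward (reservoir-simulation) model: predicted data
    return np.tanh(m[:n_d])

d_obs = np.array([0.1, 0.3, 0.2, 0.4, 0.0])  # synthetic observed data
m_prior = np.zeros(n_m)                      # prior model mean
Cm_inv = np.eye(n_m)                         # inverse prior covariance (regularization)
Cd_inv = np.eye(n_d)                         # inverse data covariance

def objective(m):
    r = g(m) - d_obs                         # data mismatch
    dm = m - m_prior                         # deviation from the prior
    return 0.5 * r @ Cd_inv @ r + 0.5 * dm @ Cm_inv @ dm

# L-BFGS-B's line search enforces sufficient-decrease and curvature
# (Wolfe-type) conditions; the bounds guard against the over- and
# undershooting of model parameters described above.
bounds = [(-3.0, 3.0)] * n_m
res = minimize(objective, m_prior, method="L-BFGS-B", bounds=bounds)
print(res.fun, res.nit)
```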


Author(s):  
Xiaotian Hao ◽  
Junqi Jin ◽  
Jianye Hao ◽  
Jin Li ◽  
Weixun Wang ◽  
...  

Bipartite b-matching is fundamental in algorithm design and has been widely applied in diverse settings such as economic markets and labor markets. These practical problems usually exhibit two distinct features, large scale and dynamics, which require the matching algorithm to be executed repeatedly at regular intervals. However, existing exact and approximate algorithms usually fail in such settings because they require either intolerable running time or too many computational resources. To address this issue, based on the key observation that successive matching instances do not vary much, we propose NeuSearcher, which leverages knowledge learned from previously solved instances to solve new problem instances. Specifically, we design a multichannel graph neural network to predict the threshold of the matched edges, by which the search region can be significantly reduced. We further propose a parallel heuristic search algorithm that iteratively improves the solution quality until convergence. Experiments on both open and industrial datasets demonstrate that NeuSearcher is 2 to 3 times faster while achieving exactly the same matching solutions as state-of-the-art approximation approaches.
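
The following is a loose, illustrative sketch of the pruning idea only: a greedy bipartite b-matching in which a predicted edge-weight threshold restricts the search region. The fixed threshold stands in for the multichannel graph-neural-network prediction, and the simple greedy loop stands in for the parallel heuristic search.

```python
# Illustrative sketch (not the NeuSearcher code): greedy bipartite
# b-matching where a predicted weight threshold prunes the edges
# considered, mimicking the reduced search region described above.
from collections import defaultdict

def greedy_b_matching(edges, b_left, b_right, threshold):
    # edges: list of (u, v, w); b_left/b_right: per-vertex capacities
    used_l, used_r = defaultdict(int), defaultdict(int)
    matching, total = [], 0.0
    # consider only edges above the (learned, here fixed) threshold
    for u, v, w in sorted(edges, key=lambda e: -e[2]):
        if w < threshold:
            break                      # edges are sorted, so stop here
        if used_l[u] < b_left[u] and used_r[v] < b_right[v]:
            matching.append((u, v))
            used_l[u] += 1
            used_r[v] += 1
            total += w
    return matching, total

edges = [(0, 0, 5.0), (0, 1, 3.0), (1, 0, 4.0), (1, 1, 1.0)]
print(greedy_b_matching(edges, {0: 1, 1: 1}, {0: 1, 1: 2}, 2.0))
```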


SPE Journal ◽  
2017 ◽  
Vol 22 (06) ◽  
pp. 1999-2011 ◽  
Author(s):  
Guohua Gao ◽  
Hao Jiang ◽  
Paul van Hagen ◽  
Jeroen C. Vink ◽  
Terence Wells

Summary Solving the Gauss-Newton trust-region subproblem (TRS) with traditional solvers involves solving a symmetric linear system whose dimension equals the number of uncertain parameters, which is extremely computationally expensive for history-matching problems with a large number of uncertain parameters. A new trust-region (TR) solver is developed to save both memory usage and computational cost, and its performance is compared with the well-known direct TR solver using factorization and the iterative TR solver using the conjugate-gradient approach. By application of the matrix-inversion lemma, the original TRS is transformed into a new problem that involves solving a linear system whose dimension equals the number of observed data. For history-matching problems in which the number of uncertain parameters is much larger than the number of observed data, both memory usage and central-processing-unit (CPU) time can be significantly reduced compared with solving the original problem directly. An auto-adaptive power-law transformation technique is developed to transform the original strongly nonlinear function into a new function that behaves more like a linear function. Finally, the Newton-Raphson method with some modifications is applied to solve the TRS. The proposed approach is applied to find best-match solutions in Bayesian-style assisted-history-matching (AHM) problems. It is first validated on a set of synthetic test problems with different numbers of uncertain parameters and different numbers of observed data. In terms of efficiency, the new approach significantly reduces both the computational cost and the memory usage compared with the direct TR solver of the GALAHAD optimization library (see http://www.galahad.rl.ac.uk/doc.html). In terms of robustness, the new approach significantly reduces the risk of failing to find the correct solution compared with the iterative TR solver of the GALAHAD optimization library. Our numerical results indicate that the new solver can solve large-scale TRSs with reasonably small amounts of CPU time (in seconds) and memory (in MB). Compared with the CPU time and memory used to complete one reservoir-simulation run for the same problem (in hours and in GB), the cost of finding the best-match parameter values with our new TR solver is negligible. The proposed approach has been implemented in our in-house reservoir-simulation and history-matching system and has been validated on a real reservoir-simulation model. This illustrates the main result of this paper: the development of a robust Gauss-Newton TR approach that is applicable to large-scale history-matching problems with negligible extra cost in CPU and memory.
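
The core linear-algebra trick can be sketched compactly. Assuming a Gauss-Newton/Levenberg-type step (lam·I + JᵀJ)s = −Jᵀr with an Nd×Nm Jacobian J, the matrix-inversion (push-through) lemma replaces the Nm×Nm solve with an Nd×Nd solve; the TR-radius logic, power-law transformation, and modified Newton-Raphson iteration of the paper are not shown.

```python
# Hedged sketch of the dimension-reduction idea only. The step
#     (lam*I_m + J^T J) s = -J^T r        (Nm x Nm system, Nm huge)
# is computed from an Nd x Nd solve when Nd << Nm, via the identity
#     (lam*I_m + J^T J)^-1 J^T = J^T (lam*I_d + J J^T)^-1.
import numpy as np

def gn_step_small(J, r, lam):
    # J: (Nd, Nm) Jacobian, r: (Nd,) residuals, lam > 0
    Nd = J.shape[0]
    return -J.T @ np.linalg.solve(lam * np.eye(Nd) + J @ J.T, r)

rng = np.random.default_rng(0)
Nd, Nm = 20, 2000                       # few data, many parameters
J = rng.standard_normal((Nd, Nm))
r = rng.standard_normal(Nd)

s_small = gn_step_small(J, r, 0.5)      # 20x20 solve
s_full = np.linalg.solve(0.5 * np.eye(Nm) + J.T @ J, -J.T @ r)  # 2000x2000
print(np.allclose(s_small, s_full))     # True: identical steps
```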


Author(s):  
Ali A. Abbasi ◽  
M. T. Ahmadian

To better understand the mechanical properties of biological cells, characterization and investigation of their material behavior is necessary. In this paper, a hyperelastic Neo-Hookean material model is used to characterize the mechanical properties of the mouse oocyte cell. The cell is assumed to behave as a continuous, isotropic, nonlinear, and homogeneous material. The nonlinear hyperelastic model parameters are then extracted by matching the experimental data with finite-element (FE) simulation results using the Levenberg-Marquardt optimization algorithm. Experimental data for the mouse oocyte were taken from the literature. An advantage of the developed model is that it can be used to calculate an accurate reaction force on a surgical instrument, or to compute deformation or force in virtual-reality-based medical simulations.
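
A hedged sketch of the fitting loop follows: Levenberg-Marquardt adjusts a single stiffness-like parameter until a placeholder force model matches synthetic "experimental" data. In the paper the placeholder would be a full FE simulation of the oocyte, and the fitted parameters are those of the Neo-Hookean model; everything below is illustrative.

```python
# Illustrative sketch, not the paper's FE pipeline: Levenberg-Marquardt
# fit of one stiffness-like parameter mu by matching a placeholder
# force model to synthetic experimental indentation data.
import numpy as np
from scipy.optimize import least_squares

depth = np.linspace(0.0, 1.0, 20)                  # indentation depth
f_exp = 1.8 * depth * (1.0 + 0.5 * depth)          # stand-in measurements

def simulated_force(mu, d):
    # placeholder for the FE-predicted reaction force at depth d
    return mu * d * (1.0 + 0.5 * d)

res = least_squares(lambda p: simulated_force(p[0], depth) - f_exp,
                    x0=[1.0], method="lm")
print(res.x)   # recovered parameter (~1.8 for this synthetic data)
```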


2019 ◽  
Vol 2019 ◽  
pp. 1-6
Author(s):  
Miao Liu ◽  
Zhenxing Sun ◽  
Yan-chang Liu ◽  
Cun Zhao

The 5G network is a heterogeneous, large-scale network. Cognitive radio (CR) technology can be used to make selections based on communication time, communication resources, and communication requirements so as to improve the performance of the whole communication system. A CR system based on wavelet packets offers better flexibility and bandwidth efficiency. An optimal wavelet-packet-filter optimization algorithm is proposed in this paper to guarantee the unlicensed user's (un-LU) data-transmission rate and optimize system performance. An intelligent search algorithm is used to obtain the optimal wavelet filter. The simulation results show that the intercarrier-interference (ICI) and bit-error-rate (BER) performance of the new optimal wavelet-filter algorithm, which sacrifices none of the un-LU's subcarriers, is better than that of three other subcarrier-masking algorithms.
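
As a rough illustration of the selection step only (not the paper's algorithm), candidate wavelet filters can be scored by the out-of-band energy of their frequency response, a crude proxy for ICI; the candidate filters and the scoring rule below are assumptions.

```python
# Hedged sketch: score candidate lowpass wavelet filters by stopband
# energy of their frequency response, a rough proxy for the ICI
# discussed above, and pick the best-scoring one.
import numpy as np

CANDIDATES = {                       # approximate lowpass filter taps
    "haar": [0.7071, 0.7071],
    "db2":  [0.4830, 0.8365, 0.2241, -0.1294],
}

def stopband_energy(h, n_fft=512):
    H = np.abs(np.fft.rfft(h, n_fft))
    H /= H.max()
    return float(np.sum(H[n_fft // 4:] ** 2))   # energy beyond pi/2

best = min(CANDIDATES, key=lambda k: stopband_energy(CANDIDATES[k]))
print(best, {k: round(stopband_energy(v), 3) for k, v in CANDIDATES.items()})
```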


Author(s):  
Ali A. Abbasi ◽  
M. T. Ahmadian

Analysis and investigation of the relations between the different parts of biological cells, such as the biomembrane, cytoplasm, and nucleus, can help to better understand their behavior and material properties. In this paper, the whole-cell elastic properties of mouse oocyte and embryo cells are first computed by inverse finite-element analysis with the Levenberg-Marquardt optimization algorithm; second, using the derived whole-cell properties together with biomembrane properties from the literature, the mechanical properties of the cytoplasm are characterized. For modeling, the cell is assumed to behave as a continuous, isotropic, nonlinear, and homogeneous material. Matching the experimental forces with the forces from the finite-element (FE) simulation by the Levenberg-Marquardt optimization algorithm gives the nonlinear hyperelastic model parameters for the whole cell. Experimental data for mouse oocyte and embryo cells were taken from the literature.
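
A hedged sketch of the two-stage idea: fit a whole-cell parameter first, then, holding a literature value for the membrane fixed, fit the cytoplasm parameter. The series-spring coupling below is an illustrative placeholder for the FE model, not the paper's mechanics.

```python
# Sketch of the two-stage characterization (all models are toy
# stand-ins): stage 1 fits a whole-cell parameter; stage 2 fits the
# cytoplasm parameter with the membrane value taken from literature.
import numpy as np
from scipy.optimize import least_squares

d = np.linspace(0.0, 1.0, 15)
f_exp = 2.4 * d                          # synthetic whole-cell force data

# stage 1: whole-cell stiffness from force matching
whole = least_squares(lambda p: p[0] * d - f_exp, x0=[1.0]).x[0]

mu_membrane = 4.0                        # taken as known (literature)

def whole_from_parts(mu_cyto):
    # placeholder coupling: membrane and cytoplasm as springs in series
    return 1.0 / (1.0 / mu_membrane + 1.0 / mu_cyto)

# stage 2: cytoplasm stiffness consistent with the whole-cell fit
cyto = least_squares(lambda p: whole_from_parts(p[0]) - whole, x0=[1.0]).x[0]
print(whole, cyto)
```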


2015 ◽  
Vol 733 ◽  
pp. 156-160
Author(s):  
Xia Yan ◽  
Jun Li ◽  
Hui Zhao

A novel and simple parameterization method using an ensemble of unconditional model realizations is applied to decrease the dimension of the misfit objective function in large-scale history-matching problems. The major advantage of this parameterization method is that the singular-value-decomposition (SVD) calculation is completely avoided, which saves the time and cost of huge matrix decompositions and eigenvector computations in the parameterization process. After the objective function is transformed from a higher-dimensional to a lower-dimensional domain by parameterization, a Monte Carlo approach is introduced to evaluate the gradient information in the lower-dimensional domain. Unlike adjoint-gradient algorithms, our method estimates the gradient with a Monte Carlo stochastic method, which can easily be coupled with different numerical simulators and avoids complicated adjoint code. Once the estimated gradient information is obtained, any gradient-based algorithm can be used to optimize the objective function. The Monte Carlo algorithm combined with the parameterization method is applied to the Brugge reservoir field. The results show that the present method gives a good estimate of reservoir properties and decreases the geological uncertainty without SVD while reaching a lower final objective-function value, providing a more efficient and useful approach for history matching in large-scale fields.
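
A small sketch of the two ingredients, under stated assumptions: the model is expressed as a combination of unconditional realizations (so no SVD is needed), and the reduced-space gradient is estimated by Monte Carlo perturbation rather than an adjoint. The toy objective, ensemble, and all sizes are placeholders.

```python
# Hedged sketch: (1) reduce dimension via an ensemble of unconditional
# realizations, (2) estimate the reduced-space gradient by Monte Carlo
# perturbation, then run plain gradient descent.
import numpy as np

rng = np.random.default_rng(1)
Nm, Ne = 1000, 20                        # model size, ensemble size
E = rng.standard_normal((Nm, Ne))        # unconditional realizations
m_true = E @ rng.standard_normal(Ne)     # synthetic "truth"

def objective(x):                        # misfit in the reduced space
    m = E @ x                            # map back to full model space
    return float(np.sum((m - m_true) ** 2))

def mc_gradient(x, n_samples=30, sigma=0.1):
    # stochastic estimate: E[(f(x+dx)-f(x)) * dx] / sigma^2 ~ grad f(x)
    g = np.zeros_like(x)
    f0 = objective(x)
    for _ in range(n_samples):
        dx = sigma * rng.standard_normal(x.size)
        g += (objective(x + dx) - f0) * dx / sigma**2
    return g / n_samples

x = np.zeros(Ne)
for _ in range(150):                     # simple gradient descent
    x -= 1e-4 * mc_gradient(x)
print(objective(x))                      # misfit drops far below start
```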


SPE Journal ◽  
2019 ◽  
Vol 25 (01) ◽  
pp. 037-055
Author(s):  
Guohua Gao ◽  
Hao Jiang ◽  
Chaohui Chen ◽  
Jeroen C. Vink ◽  
Yaakoub El Khamra ◽  
...  

Summary It has been demonstrated that the Gaussian-mixture-model (GMM) fitting method can construct a GMM that more accurately approximates the posterior probability density function (PDF) by conditioning reservoir models to production data. However, the number of degrees of freedom (DOFs) for all unknown GMM parameters might become huge for large-scale history-matching problems. A new formulation of GMM fitting with a reduced number of DOFs is proposed in this paper to save memory use and reduce computational cost. The performance of the new method is benchmarked against other methods using test problems with different numbers of uncertain parameters. The new method performs more efficiently than the full-rank GMM fitting formulation, reducing the memory use and computational cost by a factor of 5 to 10. Although it is less efficient than the simple GMM approximation dependent on local linearization (L-GMM), it achieves much higher accuracy, reducing the error by a factor of 20 to 600. Finally, the new method together with the parallelized acceptance/rejection (A/R) algorithm is applied to a synthetic history-matching problem for demonstration.
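
A back-of-the-envelope count (not the paper's reduced-DOF formulation) shows why the DOFs of a GMM with full covariance matrices explode with the model dimension, which is the problem the new formulation addresses.

```python
# Rough DOF count for a k-component GMM in dimension d:
# weights (k-1) + means (k*d) + covariances (full: k*d*(d+1)/2,
# diagonal: k*d). Full covariances grow quadratically in d.
k = 3                                    # mixture components
for d in (10, 100, 1000):                # model-space dimension
    dof_full = (k - 1) + k * d + k * d * (d + 1) // 2
    dof_diag = (k - 1) + k * d + k * d
    print(d, dof_full, dof_diag)
```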


SPE Journal ◽  
2012 ◽  
Vol 17 (02) ◽  
pp. 402-417 ◽  
Author(s):  
A. A. Awotunde ◽  
R. N. Horne

Summary In history matching, one of the challenges in using gradient-based Newton algorithms (e.g., Gauss-Newton and Levenberg-Marquardt) to solve the inverse problem is the huge cost associated with computing the sensitivity matrix. Although Newton-type algorithms converge faster than most other gradient-based inverse-solution algorithms, their use is limited to small- and medium-scale problems in which the sensitivity coefficients are easily and quickly computed. Modelers often use less-efficient algorithms (e.g., conjugate-gradient and quasi-Newton) for large-scale problems because these algorithms avoid the direct computation of sensitivity coefficients. To find a direction of descent, such algorithms often use the less-precise curvature information contained in the gradient of the objective function. Using a sensitivity matrix gives more-complete information about the curvature of the function; however, this comes at a significant computational cost for large-scale problems. An improved adjoint-sensitivity computation is presented for the time-dependent partial-differential equations describing multiphase flow in hydrocarbon reservoirs. The method combines a wavelet parameterization of the data space with the adjoint-sensitivity formulation to reduce the cost of computing sensitivities. This reduction in cost is achieved by reducing the size of the linear system of equations that is typically solved to obtain the sensitivities. This cost-saving technique makes solving an inverse problem with algorithms such as Levenberg-Marquardt and Gauss-Newton viable for large multiphase-flow history-matching problems. The effectiveness of the approach is demonstrated on two numerical examples involving multiphase flow in a reservoir with several production and injection wells.
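
A hedged sketch of the data-space compression that drives the cost saving: the data are expanded in a wavelet basis and only the significant coefficients are kept, so that far fewer sensitivities (one per retained coefficient rather than one per datum) need to be computed. This uses PyWavelets; the signal, wavelet choice, and threshold are illustrative.

```python
# Hedged sketch of wavelet parameterization of the data space: keep
# only significant wavelet coefficients of a well-response time series,
# shrinking the number of data "directions" needing sensitivities.
import numpy as np
import pywt

t = np.linspace(0.0, 1.0, 256)
data = np.exp(-3 * t) + 0.01 * np.random.default_rng(0).standard_normal(256)

coeffs = pywt.wavedec(data, "db4", level=4)
flat, slices = pywt.coeffs_to_array(coeffs)
keep = np.abs(flat) > 0.05               # significant coefficients only
print(f"sensitivity solves: {keep.sum()} instead of {flat.size}")

# verify the retained coefficients still represent the data well
flat[~keep] = 0.0
recon = pywt.waverec(
    pywt.array_to_coeffs(flat, slices, output_format="wavedec"), "db4")
print(np.max(np.abs(recon - data)))      # small reconstruction error
```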

