Orthogonal wavelets and stochastic gradient-based algorithms

Author(s): S. Attallah, M. Najim

Navigation, 2016, Vol. 63(1), pp. 39-52
Author(s): Negin Sokhandan, Ali Broumandan, James T. Curran, Gérard Lachapelle

SPE Journal, 2014, Vol. 19(05), pp. 873-890
Author(s): Xia Yan, Albert C. Reynolds

Summary: Optimization algorithms that incorporate a stochastic gradient [such as simultaneous-perturbation stochastic approximation (SPSA), simplex, and EnOpt] are easy to implement in conjunction with any reservoir simulator. However, for realistic problems, a stochastic gradient provides only a rough approximation of the true gradient; in particular, the angle between a stochastic gradient and the associated true gradient is typically far from zero, even though a properly computed stochastic gradient usually represents an uphill direction. This paper develops a more robust optimization procedure by replacing the components of largest magnitude of the stochastic gradient with a finite-difference (FD) approximation of the pertinent partial derivatives. In essence, the objective of the method is to determine which components of the unknown true gradient are most important and then replace the corresponding components of the stochastic gradient with more-accurate FD approximations. This modified gradient can then be used in a gradient-based optimization algorithm to find the minimum or maximum of a given cost function. Our focus application is the estimation of optimal well controls, but the method could also be used for other applications, including history matching.
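The core idea lends itself to a compact sketch. Below is a minimal, illustrative Python example (not the authors' code): an SPSA-style stochastic gradient is formed, its k largest-magnitude components are replaced by central finite-difference partials, and the hybrid gradient drives plain gradient ascent on a toy quadratic standing in for a well-control objective. The function names, step size, perturbation size c, and choice of k are assumptions made for this sketch.

```python
import numpy as np

def spsa_gradient(f, x, c=1e-2, rng=None):
    """One-sample simultaneous-perturbation (SPSA) gradient estimate of f at x."""
    rng = np.random.default_rng() if rng is None else rng
    delta = rng.choice([-1.0, 1.0], size=x.shape)  # Rademacher perturbation
    return (f(x + c * delta) - f(x - c * delta)) / (2.0 * c) * delta

def hybrid_gradient(f, x, k, c=1e-2, rng=None):
    """Replace the k largest-magnitude components of a stochastic gradient
    with central finite-difference approximations of the corresponding partials."""
    g = spsa_gradient(f, x, c, rng)
    idx = np.argsort(np.abs(g))[-k:]  # components deemed most important
    for i in idx:
        e = np.zeros_like(x)
        e[i] = 1.0
        g[i] = (f(x + c * e) - f(x - c * e)) / (2.0 * c)  # more-accurate FD partial
    return g

if __name__ == "__main__":
    # Toy usage: maximize a concave quadratic (stand-in for an NPV-type objective).
    rng = np.random.default_rng(0)
    target = lambda u: -np.sum((u - 1.0) ** 2)
    u = np.zeros(8)
    for _ in range(200):
        u += 0.05 * hybrid_gradient(target, u, k=2, rng=rng)
    print(np.round(u, 3))  # should approach the optimum at 1.0
```

Here k trades accuracy against cost: each replaced component costs two extra objective (simulator) evaluations per iteration, while the remaining components keep the cheap but noisy SPSA estimates.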


2021, Vol. 0(0), pp. 0
Author(s): Ruilin Li, Xin Wang, Hongyuan Zha, Molei Tao

Many Markov Chain Monte Carlo (MCMC) methods leverage gradient information of the potential function of the target distribution to explore the sample space efficiently. However, computing gradients can be expensive for large-scale applications, such as those in contemporary machine learning. Stochastic Gradient (SG-)MCMC methods approximate gradients by stochastic ones, commonly via uniformly subsampled data points, and achieve improved computational efficiency, albeit at the price of introducing sampling error. We propose a non-uniform subsampling scheme to improve the sampling accuracy. The proposed exponentially weighted stochastic gradient (EWSG) method is designed so that a non-uniform-SG-MCMC method mimics the statistical behavior of a batch-gradient-MCMC method, and hence the inaccuracy due to the SG approximation is reduced. EWSG differs from classical variance reduction (VR) techniques in that it focuses on the entire distribution instead of just the variance; nevertheless, its reduced local variance is also proved. EWSG can also be viewed as an extension of the importance-sampling idea, successful for stochastic-gradient-based optimization, to sampling tasks. In our practical implementation of EWSG, the non-uniform subsampling is performed efficiently via a Metropolis-Hastings chain on the data index, which is coupled to the MCMC algorithm. Numerical experiments are provided, not only to demonstrate EWSG's effectiveness but also to guide hyperparameter choices and to validate our non-asymptotic global error bound despite the approximations in the implementation. Notably, while statistical accuracy is improved, convergence speed can be comparable to the uniform version, which renders EWSG a practical alternative to VR (although EWSG and VR can also be combined).
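The coupling of a Metropolis-Hastings chain on the data index to a stochastic-gradient sampler can be illustrated with a small Python toy (not the authors' implementation): an SGLD-style Langevin update uses a single-datum gradient, and that datum's index is selected by a short MH chain whose weights grow exponentially with the per-datum gradient magnitude. The specific weight formula, the temperature beta, the step size, and the Gaussian-mean model are placeholders for this sketch; the exact EWSG weights are derived in the paper.

```python
import numpy as np

# Toy data set and a Gaussian-mean model: sample theta | x_1..x_N.
rng = np.random.default_rng(1)
N, sigma = 1000, 1.0
data = rng.normal(2.0, sigma, size=N)

def grad_log_lik(theta, i):
    """Gradient (w.r.t. theta) of log p(x_i | theta) for a unit-variance Gaussian mean model."""
    return (data[i] - theta) / sigma**2

def grad_log_prior(theta):
    return -theta  # standard normal prior

# Illustrative exponential weighting of data indices; beta and this formula are
# placeholders, not the EWSG weights derived in the paper.
beta = 0.1
def index_weight(theta, i):
    return np.exp(beta * abs(grad_log_lik(theta, i)))

def ewsg_style_sgld(theta0=0.0, step=1e-4, iters=5000, mh_moves=5):
    """SGLD-style sampler whose single-datum gradient is chosen by a
    Metropolis-Hastings chain on the data index, coupled to the current state."""
    theta, i = theta0, 0
    samples = []
    for _ in range(iters):
        # MH chain on the index: propose a uniform index, accept by weight ratio.
        for _ in range(mh_moves):
            j = int(rng.integers(N))
            if rng.random() < index_weight(theta, j) / index_weight(theta, i):
                i = j
        g = grad_log_prior(theta) + N * grad_log_lik(theta, i)  # stochastic gradient
        theta += 0.5 * step * g + np.sqrt(step) * rng.normal()  # Langevin update
        samples.append(theta)
    return np.array(samples)

draws = ewsg_style_sgld()
print(draws[2000:].mean())  # rough check: should land near the posterior mean (~2.0)
```

Setting beta = 0 makes every index proposal accepted, recovering uniform subsampling, so the overhead of the index chain relative to plain SG-MCMC is just the mh_moves extra per-datum gradient evaluations per step.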

