An Adaptive Gaussian Process-Based Search for Stochastically Constrained Optimization via Simulation

Author(s):  
Wenjie Chen ◽  
Hainan Guo ◽  
Kwok-Leung Tsui


Author(s):  
Mark Semelhago ◽  
Barry L. Nelson ◽  
Eunhye Song ◽  
Andreas Wächter

Inference-based optimization via simulation, which substitutes Gaussian process (GP) learning for the structural properties exploited in mathematical programming, is a powerful paradigm that has been shown to be remarkably effective in problems of modest feasible-region size and decision-variable dimension. The limitation to “modest” problems is a result of the computational overhead and numerical challenges encountered in computing the GP conditional (posterior) distribution on each iteration. In this paper, we substantially expand the size of discrete-decision-variable optimization-via-simulation problems that can be attacked in this way by exploiting a particular GP—discrete Gaussian Markov random fields—and carefully tailored computational methods. The result is the rapid Gaussian Markov Improvement Algorithm (rGMIA), an algorithm that delivers both a global convergence guarantee and finite-sample optimality-gap inference for significantly larger problems. Between infrequent evaluations of the global conditional distribution, rGMIA applies the full power of GP learning to rapidly search smaller sets of promising feasible solutions that need not be spatially close. We carefully document the computational savings via complexity analysis and an extensive empirical study. Summary of Contribution: The broad topic of the paper is optimization via simulation, which means optimizing some performance measure of a system that may only be estimated by executing a stochastic, discrete-event simulation. Stochastic simulation is a core topic and method of operations research. The focus of this paper is on significantly speeding up the computations underlying an existing method that is based on Gaussian process learning, where the underlying Gaussian process is a discrete Gaussian Markov random field. This speed-up is accomplished by employing smart computational linear algebra, state-of-the-art algorithms, and a careful divide-and-conquer evaluation strategy. As illustrations, we solve problems of significantly greater size than any other existing algorithm with similar guarantees can handle.
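The computational leverage described above comes from the Gaussian Markov random field's sparse precision matrix: conditioning on observed solutions requires sparse solves rather than dense covariance inversions. The following is a minimal illustrative sketch of that idea, not the authors' rGMIA implementation; the path-graph precision matrix, node count, and observation pattern are assumptions chosen only to show the mechanics of sparse conditioning.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

rng = np.random.default_rng(0)
n = 200  # feasible solutions laid out on a 1-D lattice (illustrative)

# Precision matrix Q = path-graph Laplacian + 0.1*I (positive definite).
# GMRF methods exploit sparsity of Q, not of the (dense) covariance Q^{-1}.
main = 2.0 * np.ones(n)
main[0] = main[-1] = 1.0
Q = sp.diags([-np.ones(n - 1), main + 0.1, -np.ones(n - 1)],
             [-1, 0, 1], format="csr")

mu = np.zeros(n)                        # prior mean
obs = np.arange(0, n, 20)               # nodes already "simulated"
rest = np.setdiff1d(np.arange(n), obs)  # nodes still to be inferred
y = rng.normal(size=obs.size)           # observed values at obs

# Conditional mean of the unobserved nodes given the observations:
#   mu_{rest|obs} = mu_rest - Q_rr^{-1} Q_ro (y - mu_obs)
# One sparse solve with the still-sparse sub-precision Q_rr.
Q_rr = Q[rest][:, rest].tocsc()
Q_ro = Q[rest][:, obs]
cond_mean = mu[rest] - spsolve(Q_rr, Q_ro @ (y - mu[obs]))
print(cond_mean.shape)  # (190,)
```

The same identity is what makes a "rapid search" over a small promising set cheap: only the rows and columns of the sparse precision touching that set enter the solve.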


2007 ◽  
Vol 44 (02) ◽  
pp. 393-408 ◽  
Author(s):  
Allan Sly

Multifractional Brownian motion is a Gaussian process which has changing scaling properties generated by varying the local Hölder exponent. We show that multifractional Brownian motion is very sensitive to changes in the selected Hölder exponent and has extreme changes in magnitude. We suggest an alternative stochastic process, called integrated fractional white noise, which retains the important local properties but avoids the undesirable oscillations in magnitude. We also show how the Hölder exponent can be estimated locally from discrete data in this model.
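The abstract's final point, estimating the Hölder exponent locally from discrete data, can be illustrated on ordinary fractional Brownian motion, where the exponent is a constant Hurst parameter H. The sketch below uses the standard scaling of quadratic variations, E[(X(t+δ) − X(t))²] = δ^{2H}; the grid size, the value H = 0.7, and the two-lag log-ratio estimator are assumptions for illustration, not the estimator proposed in the article.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 512
t = np.arange(1, n + 1) / n
H = 0.7  # true Hurst (local Hölder) exponent, assumed for the demo

# Exact fBm sample via Cholesky of the covariance
#   K(s, u) = 0.5 * (s^{2H} + u^{2H} - |s - u|^{2H}).
s, u = np.meshgrid(t, t, indexing="ij")
K = 0.5 * (s ** (2 * H) + u ** (2 * H) - np.abs(s - u) ** (2 * H))
X = np.linalg.cholesky(K + 1e-12 * np.eye(n)) @ rng.normal(size=n)

# Quadratic variations at lags 1 and 2 scale as (k/n)^{2H}, so their
# log-ratio recovers H from the discrete sample alone:
#   v2 / v1 = 2^{2H}  =>  H = 0.5 * log2(v2 / v1).
v1 = np.mean((X[1:] - X[:-1]) ** 2)
v2 = np.mean((X[2:] - X[:-2]) ** 2)
H_hat = 0.5 * np.log2(v2 / v1)
print(H_hat)
```

For a multifractional process the same ratio would be computed in a sliding window, trading variance of the estimate against resolution of the time-varying exponent.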


1987 ◽  
Vol 26 (03) ◽  
pp. 117-123
Author(s):  
P. Tautu ◽  
G. Wagner

Summary: A continuous-parameter, stationary Gaussian process is introduced as a first approach to a probabilistic representation of the phenotype inheritance process. Under specific assumptions about the components of the covariance function, it can describe the temporal behaviour of the “cancer-proneness phenotype” (CPF) as a quantitative continuous trait. Upcrossing a fixed level (“threshold”) u and reaching level zero are the two extremes of the Gaussian process considered; it is assumed that they may be interpreted as the transformation of CPF into a “neoplastic disease phenotype” or as non-proneness to cancer, respectively.
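The threshold-upcrossing event central to this model has a classical quantitative handle: for a zero-mean, unit-variance stationary Gaussian process with covariance r(τ), Rice's formula gives the expected rate of upcrossings of level u as (1/2π)·√(−r″(0))·exp(−u²/2). The sketch below checks this by simulation; the squared-exponential covariance, grid, and level are illustrative assumptions, not the covariance components of the article's model.

```python
import numpy as np

rng = np.random.default_rng(2)
T, dt, ell, u = 50.0, 0.05, 1.0, 1.0
t = np.arange(0.0, T, dt)
n = t.size

# Unit-variance stationary process with squared-exponential covariance
#   r(tau) = exp(-tau^2 / (2 ell^2)),  so  -r''(0) = 1 / ell^2.
tau = t[:, None] - t[None, :]
K = np.exp(-tau ** 2 / (2 * ell ** 2))
L = np.linalg.cholesky(K + 1e-8 * np.eye(n))  # jitter for stability

# Rice's formula: expected upcrossings of level u per unit time.
rice_rate = np.sqrt(1.0 / ell ** 2) / (2 * np.pi) * np.exp(-u ** 2 / 2)

counts = []
for _ in range(20):
    x = L @ rng.normal(size=n)
    # An upcrossing occurs where the path moves from below u to >= u.
    counts.append(np.sum((x[:-1] < u) & (x[1:] >= u)))
mean_count = np.mean(counts)
print(mean_count, rice_rate * T)  # empirical vs. Rice prediction over [0, T]
```

Raising the threshold u makes upcrossings exponentially rarer, which is the mechanism by which the model controls the rate of transformation events.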

