Statistics of Robust Optimization: A Generalized Empirical Likelihood Approach

Author(s):  
John C. Duchi ◽  
Peter W. Glynn ◽  
Hongseok Namkoong

We study statistical inference and distributionally robust solution methods for stochastic optimization problems, focusing on confidence intervals for optimal values and solutions that achieve exact coverage asymptotically. We develop a generalized empirical likelihood framework—based on distributional uncertainty sets constructed from nonparametric f-divergence balls—for Hadamard differentiable functionals, and in particular, stochastic optimization problems. As consequences of this theory, we provide a principled method for choosing the size of distributional uncertainty regions that yields one- and two-sided confidence intervals with exact coverage. We also give an asymptotic expansion for our distributionally robust formulation, showing how robustification regularizes problems by their variance. Finally, we show that optimizers of the distributionally robust formulations we study enjoy (essentially) the same consistency properties as those in classical sample average approximations. Our general approach applies to quickly mixing stationary sequences, including geometrically ergodic Harris recurrent Markov chains.
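A worked equation may help fix ideas. The following is a sketch, under the assumption of a χ²-divergence ball (one concrete f-divergence; the notation is illustrative, not necessarily the paper's), of the robust formulation and the variance expansion the abstract describes:

```latex
% Distributionally robust objective over an f-divergence ball of radius rho/n
% centered at the empirical distribution \hat{P}_n (illustrative notation).
\[
  u_n(\theta) \;=\; \sup_{P \,:\, D_f(P \,\|\, \hat{P}_n) \,\le\, \rho/n}
  \mathbb{E}_P\bigl[\ell(\theta; X)\bigr]
\]
% Asymptotic expansion: robustification regularizes by the sample variance.
\[
  u_n(\theta) \;=\; \mathbb{E}_{\hat{P}_n}\bigl[\ell(\theta; X)\bigr]
  \;+\; \sqrt{\tfrac{\rho}{n}\,\mathrm{Var}_{\hat{P}_n}\bigl(\ell(\theta; X)\bigr)}
  \;+\; o_P\bigl(n^{-1/2}\bigr)
\]
```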

Author(s):  
Edward Anderson ◽  
Andy Philpott

Sample average approximation is a popular approach to solving stochastic optimization problems. It has been widely observed that some form of robustification of these problems often improves the out-of-sample performance of the solution estimators. In estimation problems, this improvement boils down to a trade-off between the opposing effects of bias and shrinkage. This paper aims to characterize the features of more general optimization problems that exhibit this behaviour when a distributionally robust version of the sample average approximation problem is used. The paper restricts attention to quadratic problems for which sample average approximation solutions are unbiased and shows that, for small amounts of robustification, expected out-of-sample performance can be calculated and depends on the type of distributionally robust model used and on properties of the underlying ground-truth probability distribution of the random variables. The paper was written as part of a New Zealand-funded research project that aimed to improve stochastic optimization methods in the electric power industry. The authors have worked together in this domain for the past 25 years.
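As a hedged, self-contained illustration of the bias/shrinkage trade-off described above (a toy quadratic estimation problem of our own devising, not the paper's model), the sketch below shrinks the unbiased SAA solution toward zero and evaluates the exact out-of-sample cost:

```python
# Toy illustration (not from the paper): SAA for min_x E[(x - xi)^2] gives the
# unbiased sample mean; a small shrinkage toward 0 trades bias for variance
# and can improve the out-of-sample cost E[(x - xi)^2] = (x - mu)^2 + Var(xi).
import numpy as np

rng = np.random.default_rng(0)
mu, sigma, n, reps = 1.0, 2.0, 20, 100_000

# SAA solutions across replications: sample means ~ N(mu, sigma^2 / n).
xbar = rng.normal(mu, sigma / np.sqrt(n), size=reps)
for lam in (0.0, 0.05, 0.1, 0.2):               # shrinkage levels
    x = (1.0 - lam) * xbar                      # shrunk estimator
    out_of_sample = (x - mu) ** 2 + sigma ** 2  # exact out-of-sample cost
    print(f"lambda={lam:.2f}  mean out-of-sample cost={out_of_sample.mean():.4f}")
```

For these parameters, a small amount of shrinkage lowers the average out-of-sample cost relative to the unshrunk SAA solution, even though it introduces bias.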


Author(s):  
M. Hoffhues ◽  
W. Römisch ◽  
T. M. Surowiec

The vast majority of stochastic optimization problems require the approximation of the underlying probability measure, e.g., by sampling or using observations. It is therefore crucial to understand the dependence of the optimal value and optimal solutions on these approximations as the sample size increases or more data becomes available. Due to the weak convergence properties of sequences of probability measures, there is no guarantee that these quantities will exhibit favorable asymptotic properties. We consider a class of infinite-dimensional stochastic optimization problems inspired by recent work on PDE-constrained optimization as well as functional data analysis. For this class of problems, we provide both qualitative and quantitative stability results on the optimal value and optimal solutions. In both cases, we make use of the method of probability metrics. The optimal values are shown to be Lipschitz continuous with respect to a minimal information metric and consequently, under further regularity assumptions, with respect to certain Fortet-Mourier and Wasserstein metrics. We prove that even in the most favorable setting, the solutions are at best Hölder continuous with respect to changes in the underlying measure. The theoretical results are tested in the context of Monte Carlo approximation for a numerical example involving PDE-constrained optimization under uncertainty.
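A hedged sketch of the generic stability estimate behind the "minimal information metric" statement (standard in quantitative stability analysis of stochastic programs; the notation is ours, not necessarily the paper's):

```latex
% With v(P) = inf_{x in X} E_P[f(x, xi)], the optimal value is Lipschitz
% with respect to the minimal information (integral probability) metric
% generated by the integrands f(x, .), x in X.
\[
  \bigl| v(P) - v(Q) \bigr|
  \;\le\; \sup_{x \in X}
  \Bigl| \int f(x,\xi)\, \mathrm{d}P(\xi) - \int f(x,\xi)\, \mathrm{d}Q(\xi) \Bigr|
  \;=:\; d_{\mathcal{F}}(P, Q),
  \qquad \mathcal{F} = \{\, f(x,\cdot) : x \in X \,\}.
\]
```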


2012 ◽  
Vol 215-216 ◽  
pp. 133-137
Author(s):  
Guo Shao Su ◽  
Yan Zhang ◽  
Zhen Xing Wu ◽  
Liu Bin Yan

The covariance matrix adaptation evolution strategy (CMA-ES) is a relatively recent evolutionary algorithm that has become a powerful tool for solving highly nonlinear, multi-modal optimization problems. In many real-world optimization problems, multiple optima must be located in the search space, and evaluating candidate solutions can require thousands of fitness-function evaluations, each of which is time-consuming or expensive. Conventional stochastic optimization methods therefore face a particular challenge when the number of function evaluations is very large. To overcome this high computational cost, a truss optimization method based on the CMA-ES algorithm is proposed and applied to the section and shape optimization of trusses. The results show that the method is feasible and offers high accuracy, high efficiency, and ease of implementation.
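For concreteness, here is a minimal sketch of the standard ask/tell loop in the open-source pycma implementation of CMA-ES; the quadratic objective is a hypothetical stand-in for the expensive truss fitness evaluation and is not from the original paper:

```python
# Minimal ask/tell loop with the open-source pycma package (pip install cma).
# The objective below is a hypothetical stand-in for an expensive truss
# analysis (e.g., weight plus penalties for stress/displacement violations).
import cma

def fitness(x):
    # Toy quadratic with optimum at (1, ..., 1); in practice this would be
    # the truss section/shape evaluation.
    return sum((xi - 1.0) ** 2 for xi in x)

es = cma.CMAEvolutionStrategy(10 * [0.0], 0.5)  # start point, initial step size
while not es.stop():
    candidates = es.ask()                       # sample population from N(m, sigma^2 * C)
    es.tell(candidates, [fitness(c) for c in candidates])  # adapt m, sigma, C
print(es.result.xbest)                          # best solution found
```

The ask/tell structure is what makes CMA-ES attractive when evaluations are costly: each generation requests only a small population of candidates, and the mean, step size, and covariance matrix are adapted from their ranked fitnesses.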


2021 ◽  
Author(s):  
Xiting Gong ◽  
Tong Wang

Preservation Results for Proving Additively Convex Value Functions for High-Dimensional Stochastic Optimization Problems

