Smooth sample average approximation of stationary points in nonsmooth stochastic optimization and applications

2008 ◽  
Vol 119 (2) ◽  
pp. 371-401 ◽  
Author(s):  
Huifu Xu ◽  
Dali Zhang

2011 ◽
Vol 28 (06) ◽  
pp. 755-771 ◽  
Author(s):  
YONGCHAO LIU ◽  
GUI-HUA LIN

The regularization method proposed by Scholtes (2001) is a well-recognized approach for deterministic mathematical programs with complementarity constraints (MPCC). Meng and Xu (2006) applied the approach, coupled with Monte Carlo techniques, to solve a class of one-stage stochastic MPCCs and presented promising numerical results; however, they did not present any convergence analysis of the regularized sample approximation method. In this paper, we fill this gap. Specifically, we consider a general class of one-stage stochastic mathematical programs with complementarity constraints in which the objective and constraint functions are expected values of random functions. We carry out an extensive convergence analysis of the regularized sample average approximation problems, including convergence of the statistical estimators of optimal solutions and of C-stationary, M-stationary, and B-stationary points as the sample size increases and the regularization parameter tends to zero.
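To illustrate the scheme the abstract describes, the toy sketch below combines sample average approximation with a Scholtes-style regularization: the complementarity condition 0 ≤ x1 ⊥ x2 ≥ 0 is relaxed to x1 ≥ 0, x2 ≥ 0, x1·x2 ≤ t, and the relaxed SAA problem is re-solved as t shrinks. The specific objective, sample distribution, and solver are assumptions for illustration only, not the paper's formulation.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
xi = rng.uniform(-1.0, 1.0, size=2000)   # i.i.d. samples of the random parameter

def saa_objective(x):
    # sample average approximation of E[(x1 - 1 - xi)^2] + (x2 - 2)^2
    return np.mean((x[0] - 1.0 - xi) ** 2) + (x[1] - 2.0) ** 2

def solve_regularized(t, x0):
    # Scholtes-style relaxation of 0 <= x1 complementary to x2 >= 0:
    # keep x1 >= 0, x2 >= 0, and replace complementarity by x1 * x2 <= t
    cons = [
        {"type": "ineq", "fun": lambda x: x[0]},
        {"type": "ineq", "fun": lambda x: x[1]},
        {"type": "ineq", "fun": lambda x: t - x[0] * x[1]},
    ]
    return minimize(saa_objective, x0, constraints=cons, method="SLSQP").x

x = np.array([0.5, 0.5])
for t in [1.0, 0.1, 0.01, 1e-4]:
    x = solve_regularized(t, x)   # warm-start while the regularization parameter shrinks
```

As t tends to zero, the iterates approach a point satisfying the original complementarity constraint (here, x1 near 0 and x2 near 2). The convergence theory in the paper concerns the joint limit in which the sample size also grows.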


2013 ◽  
Vol 303-306 ◽  
pp. 1319-1322
Author(s):  
Yun Yun Nie

Min-max stochastic optimization is an important class of problems in stochastic optimization, with wide applications in areas such as inventory theory, robust optimization, and engineering. In this paper, we present a sample average approximation (SAA) method for a class of min-max stochastic optimization problems, based on a nonlinear Lagrangian function. Convergence of the SAA estimators is analyzed by means of epi-convergence theory, in the case where the Lagrange multiplier vector is optimal and the parameter is small enough.
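The sketch below illustrates the general idea of smoothing a min-max SAA problem with a parameterized function of the component objectives. It uses a log-sum-exp smoothing of the pointwise maximum, which is one classical choice in the spirit of nonlinear Lagrangian schemes; the paper's specific nonlinear Lagrangian, problem class, and data below are assumptions for illustration.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
xi = rng.normal(size=(3, 500))   # samples for each of 3 scenario functions (hypothetical data)
a = np.array([1.0, 2.0, 3.0])

def saa_components(x):
    # SAA of E[(x - a_i * xi_i)^2] for each scenario i
    return np.array([np.mean((x[0] - a[i] * xi[i]) ** 2) for i in range(3)])

def smoothed_max(x, p=0.05):
    # log-sum-exp smoothing of max_i f_i(x); recovers the max as p -> 0
    f = saa_components(x)
    return p * np.logaddexp.reduce(f / p)

res = minimize(smoothed_max, x0=[1.0], method="BFGS")
```

Solving the smooth surrogate for a small smoothing parameter approximates the min-max SAA solution; the paper's epi-convergence analysis addresses what happens as the sample size grows and the parameter shrinks.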


Author(s):  
Edward Anderson ◽  
Andy Philpott

Sample average approximation is a popular approach to solving stochastic optimization problems. It has been widely observed that some form of robustification of these problems often improves the out-of-sample performance of the solution estimators. In estimation problems, this improvement boils down to a trade-off between the opposing effects of bias and shrinkage. This paper aims to characterize the features of more general optimization problems that exhibit this behaviour when a distributionally robust version of the sample average approximation problem is used. The paper restricts attention to quadratic problems for which sample average approximation solutions are unbiased, and shows that expected out-of-sample performance can be calculated for small amounts of robustification and depends on the type of distributionally robust model used and on properties of the underlying ground-truth probability distribution of the random variables. The paper was written as part of a New Zealand-funded research project that aimed to improve stochastic optimization methods in the electric power industry. The authors of the paper have worked together in this domain for the past 25 years.
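The bias-shrinkage trade-off the abstract describes can be seen in a minimal quadratic example: for min over x of E[(x - xi)^2], the SAA solution is the (unbiased) sample mean, and a small robustification acts like shrinkage toward zero. The shrinkage factor c below is an assumed stand-in for the effect of a small distributionally robust perturbation, not the paper's model; the Monte Carlo setup is illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
mu, sigma, n, reps = 0.1, 1.0, 10, 20000
c = 0.9   # assumed shrinkage factor induced by a small robustification

saa_err, robust_err = 0.0, 0.0
for _ in range(reps):
    sample = rng.normal(mu, sigma, size=n)
    x_saa = sample.mean()    # SAA solution of min_x E[(x - xi)^2] is the sample mean
    x_rob = c * x_saa        # robustified estimator: shrink toward zero
    # out-of-sample cost is (x - mu)^2 + sigma^2; sigma^2 is common, so compare (x - mu)^2
    saa_err += (x_saa - mu) ** 2
    robust_err += (x_rob - mu) ** 2

avg_saa = saa_err / reps        # approx sigma^2 / n = 0.1
avg_robust = robust_err / reps  # approx c^2 * sigma^2 / n + (1 - c)^2 * mu^2
```

When the true mean is small relative to the sampling noise, the variance reduction from shrinkage outweighs the squared bias it introduces, so the robustified estimator has better expected out-of-sample performance; the paper characterizes when this happens for more general quadratic problems.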

