Penalty Function with Memory for Discrete Optimization via Simulation with Stochastic Constraints

2015 · Vol. 63(5) · pp. 1195–1212 · Author(s): Chuljin Park, Seong-Hee Kim
2021 · Vol. 31(4) · pp. 1–26 · Author(s): Jungmin Han, Seong-Hee Kim, Chuljin Park

The penalty function with memory (PFM) of Park and Kim [2015] was proposed for discrete optimization via simulation (DOvS) problems with multiple stochastic constraints, where the performance measures of both the objective and the constraints can be estimated only by stochastic simulation. The original PFM is shown to perform well, finding a true best feasible solution with higher probability than competing methods even when constraints are tight or near-tight. However, PFM applies simple budget allocation rules (e.g., assigning an equal number of additional observations) to the solutions sampled at each search iteration and uses a rather complicated penalty sequence with several user-specified parameters. In this article, we propose an improved version of PFM, namely IPFM, which can combine PFM with any simulation budget allocation procedure that satisfies certain conditions within a general DOvS framework. We present a version of a simulation budget allocation procedure useful for IPFM and introduce a new penalty sequence, namely PS2+, which is simpler than the original penalty sequence yet retains convergence properties within IPFM and delivers better finite-sample performance. Asymptotic convergence properties of IPFM with PS2+ are proved. Our numerical results show that the proposed method greatly improves both efficiency and accuracy compared with the original PFM.
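For readers unfamiliar with penalty-based DOvS, the following minimal Python sketch illustrates the general idea of a penalized objective whose penalty accumulates ("remembers") when a solution repeatedly appears infeasible. The solution space, the toy simulators, and the additive penalty update are illustrative assumptions only; the sketch does not implement the authors' PFM, IPFM, or the PS2+ sequence.

```python
import random
from collections import defaultdict

# Illustrative sketch of a penalized-objective search for DOvS with one
# stochastic constraint E[G(x)] <= 0. All names and update rules below are
# hypothetical placeholders, not the article's PFM/IPFM or PS2+.

SOLUTIONS = list(range(10))                  # finite solution space {0, ..., 9}

def simulate_objective(x):
    """Noisy observation of the objective H(x) (to be minimized)."""
    return (x - 6) ** 2 + random.gauss(0, 1)

def simulate_constraint(x):
    """Noisy observation of G(x); feasible means E[G(x)] <= 0."""
    return (4 - x) + random.gauss(0, 1)

obj_sum = defaultdict(float)
con_sum = defaultdict(float)
count = defaultdict(int)
penalty = defaultdict(float)

for iteration in range(200):
    x = random.choice(SOLUTIONS)             # sample a solution to visit
    n_extra = 5                               # equal extra budget per visit (simple rule)
    for _ in range(n_extra):
        obj_sum[x] += simulate_objective(x)
        con_sum[x] += simulate_constraint(x)
        count[x] += 1
    con_bar = con_sum[x] / count[x]
    # "Memory": the penalty for x accumulates whenever x looks infeasible,
    # so persistently infeasible solutions are pushed out of contention.
    if con_bar > 0:
        penalty[x] += con_bar
    else:
        penalty[x] = max(0.0, penalty[x] - 1.0)

# Report the visited solution with the smallest penalized sample mean.
best = min(count, key=lambda x: obj_sum[x] / count[x] + penalty[x])
print("estimated best feasible solution:", best)
```

In the article's IPFM, the equal-budget rule used here would be replaced by a simulation budget allocation procedure satisfying the stated conditions, and the ad hoc penalty update would be replaced by a penalty sequence such as PS2+ with proven convergence properties.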

