Analysis of Linear Convergence of a (1 + 1)-ES with Augmented Lagrangian Constraint Handling

Author(s):  
Asma Atamna ◽  
Anne Auger ◽  
Nikolaus Hansen

2021 ◽  
Author(s):  
Yuan Jin ◽  
Zheyi Yang ◽  
Shiran Dai ◽  
Yann Lebret ◽  
Olivier Jung

Abstract: Many engineering problems involve complex constraints that are computationally costly to evaluate. To reduce the overall numerical cost, such constrained optimization problems are solved via surrogate models constructed on a Design of Experiments (DoE). Complex constraints, however, may lead to an infeasible initial DoE, which is problematic for the subsequent sequential optimization. In this study, we address constrained optimization problems in a Bayesian optimization framework. A comparative study is conducted to evaluate the performance of three constraint-handling approaches, namely Expected Feasible Improvement (EFI), the slack Augmented Lagrangian method (AL), and Expected Improvement with Probabilistic Support Vector Machine (EIPSVM), with feasible or infeasible initial DoEs. AL is capable of starting sequential optimization from an infeasible initial DoE, while EFI requires extra a priori enrichment to find at least one feasible sample. Empirical experiments are performed on both analytical functions and a low-pressure turbine disc design problem. On these benchmark problems, EFI and AL show overall similar performance on problems with inequality constraints, whereas the performance of EIPSVM is strongly affected by its hyperparameter values. In addition, we show evidence that, with appropriate handling of an infeasible initial DoE, EFI does not necessarily underperform AL when solving optimization problems with mixed inequality and equality constraints.
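As a rough illustration of the EFI criterion compared above (a minimal sketch, not the authors' implementation), the snippet below multiplies the standard Expected Improvement by the Gaussian probability of feasibility for a single inequality constraint g(x) <= 0; the GP posterior means and standard deviations (`mu_f`, `sigma_f`, `mu_g`, `sigma_g`) are assumed to come from surrogate models fitted on the DoE.

```python
import numpy as np
from scipy.stats import norm

def expected_feasible_improvement(mu_f, sigma_f, mu_g, sigma_g, f_best):
    """Sketch of EFI(x) = EI(x) * P(g(x) <= 0) for minimization.

    mu_f, sigma_f -- GP posterior mean/std of the objective at x (assumed given)
    mu_g, sigma_g -- GP posterior mean/std of the constraint at x (assumed given)
    f_best        -- best feasible objective value observed so far
    """
    sigma_f = max(sigma_f, 1e-12)                    # guard against zero variance
    z = (f_best - mu_f) / sigma_f
    ei = sigma_f * (z * norm.cdf(z) + norm.pdf(z))   # expected improvement
    pof = norm.cdf(-mu_g / max(sigma_g, 1e-12))      # probability of feasibility
    return ei * pof
```

With several independent constraints, the feasibility probabilities would simply be multiplied together; note that EFI needs a feasible `f_best`, which is exactly why an infeasible initial DoE requires the extra enrichment step mentioned above.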


2021 ◽  
Vol 3 (1) ◽  
pp. 89-117
Author(s):  
Yangyang Xu

First-order methods (FOMs) have been widely used for solving large-scale problems. However, many existing works consider only unconstrained problems or problems with simple constraints. In this paper, we develop two FOMs for constrained convex programs whose constraint set is described by affine equations and smooth nonlinear inequalities. Both methods are based on the classical augmented Lagrangian function. They update the multipliers in the same way as the augmented Lagrangian method (ALM) but use different primal updates: the first method performs a single proximal gradient step on the primal variable at each iteration, and the second is a block-update version of the first. For the first method, we establish global convergence of its iterates as well as global sublinear and local linear convergence rates; for the second, we show a global sublinear convergence result in expectation. Numerical experiments on basis pursuit denoising, convex quadratically constrained quadratic programs, and the Neyman-Pearson classification problem demonstrate the empirical performance of the proposed methods, whose numerical behavior closely matches the established theoretical results.
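To make the primal-dual structure of the first method concrete, here is a minimal sketch under simplifying assumptions: affine equality constraints only, a smooth objective (so the proximal step reduces to a plain gradient step), and fixed step size and penalty parameter. The names `grad_f`, `beta`, and `eta` are placeholders, not the paper's notation or tuned values.

```python
import numpy as np

def alm_single_gradient_step(grad_f, A, b, x0, beta=1.0, eta=0.1, iters=500):
    """Sketch: ALM-style multiplier updates combined with a single
    primal gradient step per iteration, for min f(x) s.t. Ax = b."""
    x = x0.copy()
    lam = np.zeros(A.shape[0])                       # multipliers for Ax = b
    for _ in range(iters):
        # gradient of the augmented Lagrangian w.r.t. x
        g = grad_f(x) + A.T @ (lam + beta * (A @ x - b))
        x = x - eta * g                              # one primal gradient step
        lam = lam + beta * (A @ x - b)               # ALM multiplier update
    return x, lam

# Toy usage: min 0.5*||x||^2 s.t. x1 + x2 = 1 (least-norm solution)
A = np.array([[1.0, 1.0]])
b = np.array([1.0])
x_opt, lam_opt = alm_single_gradient_step(lambda x: x, A, b, x0=np.zeros(2))
# x_opt is approximately [0.5, 0.5]
```

Each iteration costs a single gradient evaluation, in contrast to the classical ALM, which solves the primal subproblem (approximately) to optimality before every multiplier update; this is what makes the method a first-order scheme.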

