On invariance and linear convergence of evolution strategies with augmented Lagrangian constraint handling

2020 ◽  
Vol 832 ◽  
pp. 68-97
Author(s):  
Asma Atamna ◽  
Anne Auger ◽  
Nikolaus Hansen
2021 ◽  
pp. 1-25
Author(s):  
Tobias Glasmachers ◽  
Oswin Krause

Abstract The class of algorithms called Hessian Estimation Evolution Strategies (HE-ESs) update the covariance matrix of their sampling distribution by directly estimating the curvature of the objective function. The approach is practically efficient, as attested by respectable performance on the BBOB testbed, even on rather irregular functions. In this paper we formally prove two strong guarantees for the (1+4)-HE-ES, a minimal elitist member of the family: stability of the covariance matrix update, and as a consequence, linear convergence on all convex quadratic problems at a rate that is independent of the problem instance.
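The core idea behind the HE-ES family, estimating curvature of the objective directly from function evaluations, can be illustrated with a central finite difference along a sampling direction. This is a minimal sketch of the estimation principle only, not the actual (1+4)-HE-ES covariance update; the function names and the quadratic test problem are illustrative assumptions.

```python
import numpy as np

def curvature_estimate(f, x, d, s=1e-3):
    # Central finite difference along direction d:
    # for a function f with Hessian H, this approximates d^T H d at x
    # (and is exact, up to rounding, when f is quadratic).
    return (f(x + s * d) - 2.0 * f(x) + f(x - s * d)) / s**2

# Example on a convex quadratic f(x) = 0.5 x^T A x
A = np.diag([1.0, 100.0])
f = lambda x: 0.5 * x @ A @ x
x = np.array([1.0, 1.0])
c1 = curvature_estimate(f, x, np.array([1.0, 0.0]))  # curvature ~ 1
c2 = curvature_estimate(f, x, np.array([0.0, 1.0]))  # curvature ~ 100
```

An HE-ES-style method uses such estimates to rescale the sampling distribution so that directions of high curvature receive proportionally smaller variance.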


2021 ◽  
Author(s):  
Yuan Jin ◽  
Zheyi Yang ◽  
Shiran Dai ◽  
Yann Lebret ◽  
Olivier Jung

Abstract Many engineering problems involve complex constraints that can be computationally costly to evaluate. To reduce the overall numerical cost, such constrained optimization problems are solved via surrogate models constructed on a Design of Experiment (DoE). Meanwhile, complex constraints may lead to an infeasible initial DoE, which can be problematic for the subsequent sequential optimization. In this study, we address constrained optimization problems in a Bayesian optimization framework. A comparative study is conducted to evaluate the performance in constraint handling, with a feasible or infeasible initial DoE, of three approaches, namely Expected Feasible Improvement (EFI), the slack Augmented Lagrangian method (AL), and Expected Improvement with Probabilistic Support Vector Machine (EIPSVM). AL is capable of starting sequential optimization with an infeasible initial DoE, while EFI requires extra a priori enrichment to find at least one feasible sample. Empirical experiments are performed on both analytical functions and a low-pressure turbine disc design problem. Through these benchmark problems, EFI and AL are shown to have overall similar performance on problems with inequality constraints. However, the performance of EIPSVM is strongly affected by the corresponding hyperparameter values. In addition, we show evidence that, with an appropriate handling of an infeasible initial DoE, EFI does not necessarily underperform compared with AL when solving optimization problems with mixed inequality and equality constraints.
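The EFI criterion compared above is commonly computed as the expected improvement of the objective weighted by the posterior probability of feasibility. The following sketch assumes Gaussian posteriors for the objective and a single constraint g(x) <= 0 at a candidate point; it illustrates the standard acquisition formula, not the authors' implementation.

```python
from math import erf, exp, pi, sqrt

def norm_cdf(z):
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def norm_pdf(z):
    return exp(-0.5 * z * z) / sqrt(2.0 * pi)

def expected_improvement(mu, sigma, f_best):
    # EI for minimization under a Gaussian posterior N(mu, sigma^2).
    if sigma <= 0.0:
        return max(f_best - mu, 0.0)
    z = (f_best - mu) / sigma
    return (f_best - mu) * norm_cdf(z) + sigma * norm_pdf(z)

def expected_feasible_improvement(mu, sigma, f_best, mu_c, sigma_c):
    # EFI = EI weighted by the probability that the constraint
    # g(x) <= 0 holds under its own Gaussian posterior N(mu_c, sigma_c^2).
    prob_feasible = norm_cdf((0.0 - mu_c) / sigma_c)
    return expected_improvement(mu, sigma, f_best) * prob_feasible

ei = expected_improvement(0.0, 1.0, 1.0)
efi_sure = expected_feasible_improvement(0.0, 1.0, 1.0, -10.0, 1.0)   # almost surely feasible
efi_unlikely = expected_feasible_improvement(0.0, 1.0, 1.0, 10.0, 1.0)  # almost surely infeasible
```

When the constraint is almost surely satisfied, EFI reduces to plain EI; when feasibility is unlikely, the candidate is heavily discounted, which is why an all-infeasible initial DoE starves EFI of useful signal.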


2021 ◽  
Vol 3 (1) ◽  
pp. 89-117
Author(s):  
Yangyang Xu

First-order methods (FOMs) have been widely used for solving large-scale problems. However, many existing works only consider unconstrained problems or those with simple constraints. In this paper, we develop two FOMs for constrained convex programs, where the constraint set is represented by affine equations and smooth nonlinear inequalities. Both methods are based on the classical augmented Lagrangian function. They update the multipliers in the same way as the augmented Lagrangian method (ALM) but use different primal updates. The first method, at each iteration, performs a single proximal gradient step on the primal variable, and the second method is a block-update version of the first one. For the first method, we establish its global iterate convergence as well as global sublinear and local linear convergence, and for the second method, we show a global sublinear convergence result in expectation. Numerical experiments are carried out on basis pursuit denoising, convex quadratically constrained quadratic programs, and the Neyman-Pearson classification problem to show the empirical performance of the proposed methods. Their numerical behavior closely matches the established theoretical results.
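The first method's structure, a single proximal gradient step on the augmented Lagrangian followed by the classical ALM multiplier update, can be sketched as follows for a smooth objective with affine constraints Ax = b (so the proximal operator is the identity). The step sizes, iteration count, and toy problem are illustrative assumptions, not the paper's parameter choices.

```python
import numpy as np

def alm_prox_grad(grad_f, A, b, x0, beta=1.0, eta=0.1, iters=500):
    # Linearized ALM: one (proximal) gradient step on the augmented
    # Lagrangian per iteration, then the standard multiplier update.
    x, y = x0.copy(), np.zeros(A.shape[0])
    for _ in range(iters):
        r = A @ x - b                          # constraint residual
        g = grad_f(x) + A.T @ (y + beta * r)   # gradient of aug. Lagrangian in x
        x = x - eta * g                        # primal step (prox of 0 = identity)
        y = y + beta * (A @ x - b)             # dual (multiplier) update as in ALM
    return x, y

# Example: min 0.5||x||^2  s.t.  x1 + x2 = 1  ->  solution (0.5, 0.5)
A = np.array([[1.0, 1.0]])
b = np.array([1.0])
x, y = alm_prox_grad(lambda x: x, A, b, np.zeros(2))
```

Replacing the single primal step with updates over randomly sampled coordinate blocks gives the block version, for which the abstract's convergence-in-expectation result applies.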


2010 ◽  
Vol 2010 ◽  
pp. 1-11 ◽  
Author(s):  
Oliver Kramer

Evolution strategies are successful global optimization methods. In many practical numerical problems, constraints are not explicitly given. Evolution strategies therefore have to incorporate techniques for optimizing in restricted solution spaces. Well-known constraint-handling techniques are penalty and multiobjective approaches. Past work has shown that, in particular, an ill-conditioned alignment between the coordinate system of the Gaussian mutation and the constraint boundaries leads to premature convergence. Covariance matrix adaptation evolution strategies offer a solution to this alignment problem. Finally, metamodeling of the constraint boundary leads to significant savings in constraint function calls and to a speedup through repairing infeasible solutions. This work gives a brief overview of constraint-handling methods for evolution strategies by demonstrating the approaches experimentally on two exemplary constrained problems.
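As an illustration of the penalty approach discussed above, the following sketch runs a simple (1+1)-ES with 1/5th-success-rule style step-size adaptation on a quadratically penalized objective. The penalty weight, adaptation constants, and test problem are illustrative assumptions, not taken from the survey.

```python
import numpy as np

rng = np.random.default_rng(0)

def penalized(f, gs, x, rho=1e3):
    # Quadratic penalty: add rho * max(g_i(x), 0)^2 for each
    # inequality constraint g_i(x) <= 0.
    return f(x) + rho * sum(max(g(x), 0.0) ** 2 for g in gs)

def one_plus_one_es(f, gs, x, sigma=0.5, iters=3000):
    # Elitist (1+1)-ES on the penalized objective with multiplicative
    # step-size adaptation (expand on success, shrink on failure).
    fx = penalized(f, gs, x)
    for _ in range(iters):
        child = x + sigma * rng.standard_normal(x.size)
        fc = penalized(f, gs, child)
        if fc <= fx:
            x, fx = child, fc
            sigma *= 1.22   # success: enlarge step size
        else:
            sigma *= 0.95   # failure: shrink step size
    return x

# Example: min x1 + x2  subject to  x1^2 + x2^2 <= 1
# constrained optimum at (-1/sqrt(2), -1/sqrt(2)), value -sqrt(2)
x = one_plus_one_es(lambda x: x[0] + x[1],
                    [lambda x: x[0] ** 2 + x[1] ** 2 - 1.0],
                    np.zeros(2))
```

The penalty pulls the search back toward the feasible region, but a finite penalty weight leaves the incumbent slightly infeasible, which is one motivation for the repair and metamodeling techniques surveyed above.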

