Online Black-Box Algorithm Portfolios for Continuous Optimization

Author(s): Petr Baudiš, Petr Pošík

Author(s): Laurens Bliek, Sicco Verwer, Mathijs de Weerdt

Abstract: When a black-box optimization objective can only be evaluated with costly or noisy measurements, most standard optimization algorithms are ill-suited to finding the optimal solution. Specialized algorithms that deal with exactly this situation make use of surrogate models. These models are usually continuous and smooth, which is beneficial for continuous optimization problems, but not necessarily for combinatorial problems. However, we show that by choosing the basis functions of the surrogate model in a certain way, the optimal solution of the surrogate model can be guaranteed to be integer. On the problem of finding robust routes for a noise-perturbed traveling-salesman benchmark, this approach outperforms random search, simulated annealing, and one Bayesian optimization algorithm, while performing on par with another Bayesian optimization algorithm; on a convex binary optimization problem with a large number of variables, it outperforms all compared algorithms.
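The abstract does not spell the method out, but the loop it describes (fit a surrogate to the noisy evaluations, minimize the surrogate, evaluate the black box at that minimizer, repeat) is compact enough to sketch. The Python below is a minimal illustration only: the ReLU basis with kinks on integer points, the least-squares fit, the brute-force integer search, and the toy objective are all simplifying assumptions of this sketch, not the authors' exact construction.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(0)  # shared by the toy objective and initial design

def relu_features(x, centers):
    # ReLU basis functions with kinks at integer points (plus a bias term):
    # the fitted surrogate is piecewise linear, so its minimum over the box
    # is attained at an integer point.
    x = np.asarray(x, dtype=float)
    return np.concatenate([[1.0], np.maximum(0.0, x[:, None] - centers[None, :]).ravel()])

def fit_surrogate(X, y, centers):
    # Least-squares fit of the surrogate weights to the noisy evaluations.
    Phi = np.stack([relu_features(x, centers) for x in X])
    w, *_ = np.linalg.lstsq(Phi, np.asarray(y), rcond=None)
    return w

def minimize_surrogate(w, centers, lower, upper, dim):
    # Brute-force search over the integer grid (fine for a toy problem; the
    # integer-minimum guarantee is what enables a smarter search in general).
    best, best_val = None, np.inf
    for cand in product(range(lower, upper + 1), repeat=dim):
        val = relu_features(np.array(cand), centers) @ w
        if val < best_val:
            best, best_val = np.array(cand), val
    return best

def noisy_objective(x):
    # Toy noisy black box: a convex quadratic plus measurement noise.
    return float(np.sum((np.asarray(x) - 3.0) ** 2) + 0.1 * rng.standard_normal())

dim, lower, upper = 2, 0, 5
centers = np.arange(lower, upper + 1, dtype=float)  # integer kink locations
X = [rng.integers(lower, upper + 1, size=dim) for _ in range(5)]
y = [noisy_objective(x) for x in X]
for _ in range(20):  # surrogate-based optimization loop
    w = fit_surrogate(X, y, centers)
    X.append(minimize_surrogate(w, centers, lower, upper, dim))
    y.append(noisy_objective(X[-1]))
print("best point found:", X[int(np.argmin(y))])
```

In practice the surrogate minimization would exploit the integer-minimum guarantee with a dedicated solver rather than enumeration, and an exploration term would be added so the loop does not stall on a single point.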


2015, Vol. 317, pp. 224-245
Author(s): Mario A. Muñoz, Yuan Sun, Michael Kirley, Saman K. Halgamuge

2017, Vol. 25 (1), pp. 143-171
Author(s): Ilya Loshchilov

Limited-memory BFGS (L-BFGS; Liu and Nocedal, 1989) is often considered the method of choice for continuous optimization when first- or second-order information is available. However, using L-BFGS is complicated in a black-box scenario, where gradient information is unavailable and must therefore be estimated numerically. The accuracy of this estimation, obtained by finite-difference methods, is often problem-dependent and may lead to premature convergence of the algorithm. This article demonstrates an alternative to L-BFGS, the limited-memory covariance matrix adaptation evolution strategy (LM-CMA) proposed by Loshchilov (2014). LM-CMA is a stochastic derivative-free algorithm for numerical optimization of nonlinear, nonconvex problems. Inspired by L-BFGS, LM-CMA samples candidate solutions according to a covariance matrix reconstructed from m direction vectors selected during the optimization process. Decomposing the covariance matrix into Cholesky factors reduces the memory complexity to O(mn), where n is the number of decision variables. The time complexity of sampling one candidate solution is also O(mn), but amounts to only about 25 scalar-vector multiplications in practice. The algorithm has an important invariance property with respect to strictly increasing transformations of the objective function; such transformations do not compromise its ability to approach the optimum. LM-CMA outperforms the original CMA-ES and its large-scale versions on nonseparable ill-conditioned problems, by a factor that increases with problem dimension. Its invariance properties do not prevent it from performing comparably to L-BFGS on nontrivial large-scale smooth and nonsmooth optimization problems.
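The O(mn) sampling step is what makes the method practical at scale: the Cholesky factor is never formed as an n-by-n matrix but is applied to a fresh Gaussian vector through the m stored direction-vector pairs, one scalar-vector pass per pair. The sketch below illustrates only this reconstruction; the pair vectors P, V and the coefficient arrays a, b are placeholders for the quantities LM-CMA maintains during its update, which are not reproduced here.

```python
import numpy as np

def sample_candidate(mean, sigma, P, V, a, b, rng):
    """Sample x = mean + sigma * A z, with the Cholesky factor A stored
    implicitly as m rank-one factors (a_j, b_j, p_j, v_j).

    Each factor costs one dot product and two scalar-vector operations,
    so the whole reconstruction is O(m * n) in time and memory and the
    n x n covariance matrix is never formed explicitly.
    """
    z = rng.standard_normal(mean.size)        # isotropic Gaussian sample, O(n)
    for p_j, v_j, a_j, b_j in zip(P, V, a, b):
        z = a_j * z + b_j * (v_j @ z) * p_j   # apply one rank-one factor
    return mean + sigma * z

# Toy usage with random placeholder vectors (in LM-CMA these come from
# evolution paths selected during the run, not from random draws).
rng = np.random.default_rng(0)
n, m = 1000, 5                                # m << n direction vectors
P = [rng.standard_normal(n) for _ in range(m)]
V = [rng.standard_normal(n) for _ in range(m)]
a = np.full(m, 0.9)                           # placeholder coefficients
b = np.full(m, 0.1)
x = sample_candidate(np.zeros(n), 0.5, P, V, a, b, rng)
print(x.shape)
```

The design point to notice is that storage grows linearly in n for fixed m, which is why the approach remains feasible for the large-scale problems discussed above.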


