Automatic Algorithm Configuration
Recently Published Documents

TOTAL DOCUMENTS: 25 (five years: 10)
H-INDEX: 7 (five years: 3)

2021 ◽  
Author(s):  
Soheila Ghambari ◽  
Hojjat Rakhshani ◽  
Julien Lepagnot ◽  
Laetitia Jourdan ◽  
Lhassane Idoumghar

Abstract Optimization algorithms often have several critical setting parameters, and improving their empirical performance depends on tuning them. Manual configuration of such parameters is a tedious task that often yields unsatisfactory results. Therefore, several automatic algorithm configuration frameworks have been proposed to tune the parameters of a given algorithm for a set of problem instances. Although existing frameworks perform well on a variety of problems, there is still a trade-off between accuracy and budget requirements that needs to be addressed. This work investigates how an unbalanced distribution of the budget across configurations affects automatic algorithm configuration. Inspired by bandit-based approaches, the main goal is to find a better configuration that substantially improves the performance of the target algorithm while using a smaller runtime budget. In this work, the non-dominated sorting genetic algorithm II (NSGA-II) is used as the target algorithm, implemented in the jMetalPy software platform, and the multimodal multi-objective optimization (MMO) test suite of CEC'2020 is used as the set of test problems. We performed a comprehensive comparison with other well-known methods, including random search, Bayesian optimization, SMAC, ParamILS, irace, and MAC. The experimental results demonstrate the efficiency of the proposed approach for automatic algorithm configuration under a minimal time budget in comparison with the other competitors.
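To make the bandit-style idea of spending unequal budget across configurations concrete, the following is a minimal successive-halving sketch in plain Python. It illustrates the general budget-allocation principle only and is not the authors' exact procedure; `successive_halving`, `evaluate`, and the toy cost function are hypothetical names, where `evaluate(config, budget)` stands for running the target algorithm (e.g., NSGA-II) under the given budget and returning a cost to be minimized.

```python
import random

def successive_halving(configs, evaluate, min_budget=1, eta=2):
    """Allocate runtime budget unevenly: poor configurations are dropped
    early, while survivors receive geometrically more budget.

    configs  -- list of candidate parameter configurations
    evaluate -- callable (config, budget) -> cost (lower is better);
                hypothetical hook around the target algorithm
    """
    budget = min_budget
    survivors = list(configs)
    while len(survivors) > 1:
        scored = [(evaluate(c, budget), c) for c in survivors]
        scored.sort(key=lambda t: t[0])
        # keep the best 1/eta fraction, but always at least one configuration
        keep = max(1, len(scored) // eta)
        survivors = [c for _, c in scored[:keep]]
        budget *= eta  # surviving configurations get a larger budget next round
    return survivors[0]

if __name__ == "__main__":
    # toy example: "configurations" are mutation rates; the fake cost prefers
    # values near 0.1 and becomes less noisy as the budget grows
    candidates = [round(random.uniform(0.0, 1.0), 3) for _ in range(16)]
    def fake_evaluate(rate, budget):
        return abs(rate - 0.1) + random.gauss(0, 0.5 / budget)
    print(successive_halving(candidates, fake_evaluate))
```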


2020 ◽  
Vol 34 (03) ◽  
pp. 2384-2391
Author(s):  
Shengcai Liu ◽  
Ke Tang ◽  
Yunwei Lei ◽  
Xin Yao

Over the last decade, research on automated parameter tuning, often referred to as automatic algorithm configuration (AAC), has made significant progress. Although the usefulness of such tools has been widely recognized in real-world applications, the theoretical foundations of AAC are still very weak. This paper addresses this gap by studying the performance estimation problem in AAC. More specifically, it first establishes the universal best performance estimator in a practical setting, and then derives theoretical bounds on the estimation error, i.e., the difference between the training performance and the true performance of a parameter configuration, for finite and infinite configuration spaces respectively. These findings were verified in extensive experiments conducted on four algorithm configuration scenarios involving different problem domains. Moreover, insights for enhancing existing AAC methods are identified.
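For intuition about what an estimation-error bound of this kind can look like, here is a standard concentration argument written in illustrative notation; it is not the specific bound proved in the paper. It assumes the cost f(θ, π) of configuration θ on instance π is bounded in [a, b] and that training instances are drawn i.i.d. from the target distribution D.

```latex
% Training performance (sample mean over n i.i.d. instances) vs. true performance
\hat{u}_n(\theta) = \frac{1}{n}\sum_{i=1}^{n} f(\theta,\pi_i),
\qquad
u(\theta) = \mathbb{E}_{\pi\sim\mathcal{D}}\bigl[f(\theta,\pi)\bigr].

% Hoeffding's inequality bounds the estimation error of a single configuration
\Pr\bigl[\,|\hat{u}_n(\theta) - u(\theta)| \ge \epsilon\,\bigr]
  \le 2\exp\!\left(-\frac{2n\epsilon^2}{(b-a)^2}\right).

% For a finite configuration space $\Theta$, a union bound gives a uniform guarantee
\Pr\bigl[\,\exists\,\theta\in\Theta:\ |\hat{u}_n(\theta) - u(\theta)| \ge \epsilon\,\bigr]
  \le 2\,|\Theta|\,\exp\!\left(-\frac{2n\epsilon^2}{(b-a)^2}\right).
```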


10.29007/vpd6 ◽  
2019 ◽  
Author(s):  
Anastasia Leventi-Peetz ◽  
Oliver Zendel ◽  
Werner Lennartz ◽  
Kai Weber

Performing hundreds of test runs and a source-code analysis, we empirically identified improved parameter configurations for CryptoMiniSat (CMS) 5 for solving cryptographic CNF instances originating from algebraic known-plaintext attacks on three-round encryption of the Small AES-64 model cipher SR(3, 4, 4, 4). We were ultimately able to reconstruct 64-bit keys in under an hour of real time, which, to our knowledge, had not been achieved before, and in particular not without any assumptions or prior knowledge of key bits (for instance in the form of side channels, as in [11]). A statistical analysis of the non-deterministic solver runtimes was carried out, and command-line parameter combinations were defined that initially yielded median runtimes ranging from under an hour to a few hours. We then used an Automatic Algorithm Configuration (AAC) tool to systematically extend the search for even better solver configurations, which succeeded in delivering even shorter solving times. In this work we elaborate on the systematic procedure we followed to reach our results in a traceable and reproducible way. The ultimate focus of our investigations is to find out whether CMS, when appropriately tuned, is indeed capable of attacking even bigger and harder problems than the ones solved here. For the domain of cryptographic research, the duration of the solving time plays a minor role compared to the practical feasibility of finding a solution to the problem at all. The scalability of the results presented here is the subject of further investigations.
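A minimal sketch of the kind of runtime measurement such a study relies on: run the solver repeatedly on the same CNF instance with a candidate set of command-line options and report the median wall-clock time. The option list below is deliberately left as a placeholder to be filled from `cryptominisat5 --help`; only the executable name and the positional CNF argument are assumed, and `median_runtime` is a hypothetical helper, not part of any tool mentioned in the abstract.

```python
import statistics
import subprocess
import time

def median_runtime(cnf_path, options=(), repeats=5, timeout_s=7200):
    """Run the solver repeatedly with the given command-line options and
    return the median wall-clock time in seconds.

    `options` is a placeholder for a candidate parameter combination
    (e.g. flags taken from `cryptominisat5 --help`); none are hard-coded here.
    A run exceeding `timeout_s` raises subprocess.TimeoutExpired.
    """
    times = []
    for _ in range(repeats):
        start = time.perf_counter()
        # SAT solvers conventionally exit with code 10 (SAT) or 20 (UNSAT),
        # so a non-zero exit code is not treated as an error here.
        subprocess.run(
            ["cryptominisat5", *options, cnf_path],
            stdout=subprocess.DEVNULL,
            stderr=subprocess.DEVNULL,
            timeout=timeout_s,
        )
        times.append(time.perf_counter() - start)
    return statistics.median(times)

if __name__ == "__main__":
    # hypothetical instance file; options intentionally left empty
    print(median_runtime("sr_3_4_4_4_attack.cnf", options=[], repeats=3))
```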


2019 ◽  
Vol 27 (1) ◽  
pp. 147-171 ◽  
Author(s):  
Aymeric Blot ◽  
Marie-Éléonore Kessaci ◽  
Laetitia Jourdan ◽  
Holger H. Hoos

Automatic algorithm configuration (AAC) is becoming a key ingredient in the design of high-performance solvers for challenging optimisation problems. However, most existing work on AAC deals with configuration procedures that optimise a single performance metric of a given, single-objective algorithm. Of course, these configurators can also be used to optimise the performance of multi-objective algorithms, as measured by a single performance indicator. In this work, we demonstrate that better results can be obtained by using a native, multi-objective algorithm configuration procedure. Specifically, we compare three AAC approaches: one considering only the hypervolume indicator, a second optimising the weighted sum of hypervolume and spread, and a third that simultaneously optimises these complementary indicators, using a genuinely multi-objective approach. We assess these approaches by applying them to a highly parametric local search framework for two widely studied multi-objective optimisation problems, the bi-objective permutation flowshop and travelling salesman problems. Our results show that multi-objective algorithms are indeed best configured using a multi-objective configurator.
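As a concrete reference for the indicators discussed above, here is a small two-dimensional hypervolume computation for a minimization problem in plain Python. It is an illustrative helper, not the implementation used in the paper; `hypervolume_2d` is a hypothetical name, the front is assumed to contain only mutually non-dominated points, and the reference point is assumed to be worse than every front member in both objectives.

```python
def hypervolume_2d(front, reference):
    """Area dominated by a 2-D front with respect to a reference point.

    front     -- iterable of (f1, f2) objective vectors (minimization),
                 assumed mutually non-dominated
    reference -- (r1, r2) point dominated by every front member
    """
    # Sort by the first objective; the second objective then decreases,
    # so the dominated region decomposes into disjoint rectangles.
    pts = sorted(front)
    r1, r2 = reference
    volume, prev_f2 = 0.0, r2
    for f1, f2 in pts:
        volume += (r1 - f1) * (prev_f2 - f2)
        prev_f2 = f2
    return volume

if __name__ == "__main__":
    # toy non-dominated front of three points, reference point (5, 5)
    front = [(1.0, 4.0), (2.0, 2.0), (3.0, 1.0)]
    print(hypervolume_2d(front, reference=(5.0, 5.0)))  # prints 12.0
```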


2019 ◽  
Vol 27 (1) ◽  
pp. 129-145 ◽  
Author(s):  
Simon Wessing ◽  
Manuel López-Ibáñez

The configuration of algorithms is a laborious and difficult process. Thus, it is advisable to automate this task by using appropriate automatic configuration methods. The irace method is among the most widely used in the literature. By default, irace initializes its search process via uniform sampling of algorithm configurations. Although better initialization methods exist in the literature, the mixed-variable (numerical and categorical) nature of typical parameter spaces and the presence of conditional parameters make most of these methods inapplicable in practice. Here, we present an improved initialization method that overcomes these limitations by employing concepts from the design and analysis of computer experiments with branching and nested factors. Our results show that this initialization method is not only better, in some scenarios, than the uniform sampling used by the current version of irace, but also better than the initialization methods used in other automatic configuration tools.
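To illustrate why conditional (branched/nested) parameters complicate sampling-based initialization, here is a small sketch of uniform sampling over a hypothetical mixed-variable space in which a numerical parameter is only active when a categorical parent parameter enables it. The parameter names and ranges are invented for illustration and do not reflect irace's interface or the method proposed in the paper.

```python
import random

# Hypothetical mixed-variable space: `localsearch` is categorical, and
# `ls_iterations` is a nested parameter that only exists when localsearch != "none".
def sample_configuration(rng=random):
    config = {
        "population": rng.randint(10, 200),        # numerical (integer)
        "crossover_rate": rng.uniform(0.0, 1.0),   # numerical (real)
        "localsearch": rng.choice(["none", "first_improvement", "best_improvement"]),
    }
    # Conditional parameter: only sampled when its parent activates the branch,
    # which is exactly what standard space-filling designs do not handle natively.
    if config["localsearch"] != "none":
        config["ls_iterations"] = rng.randint(1, 100)
    return config

if __name__ == "__main__":
    random.seed(1)
    for _ in range(3):
        print(sample_configuration())
```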

