Tail bounds on hitting times of randomized search heuristics using variable drift analysis

Author(s):
P. K. Lehre
C. Witt

Abstract: Drift analysis is one of the state-of-the-art techniques for the runtime analysis of randomized search heuristics (RSHs) such as evolutionary algorithms (EAs), simulated annealing, etc. The vast majority of existing drift theorems yield bounds on the expected value of the hitting time for a target state, for example the set of optimal solutions, without making additional statements on the distribution of this time. We address this lack by providing a general drift theorem that includes bounds on the upper and lower tail of the hitting time distribution. The new tail bounds are applied to prove sharp-concentration results on the running time of a simple EA on standard benchmark problems, including the class of general linear functions. On all these problems, the probability of deviating by an r-factor in the lower-order terms of the expected time decreases exponentially with r. The usefulness of the theorem outside the theory of RSHs is demonstrated by deriving tail bounds on the number of cycles in random permutations. All these results handle a position-dependent (variable) drift that was not covered by previous drift theorems with tail bounds. Finally, user-friendly specializations of the general drift theorem are given.
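To give a feel for the sharp-concentration behaviour these tail bounds describe, the following Python sketch runs a simple (1+1) EA on the OneMax benchmark many times and reports the empirical fraction of runs whose hitting time exceeds the mean by a given factor. This is a hypothetical illustration under assumed parameters (OneMax, n = 100, 200 runs), not the paper's proof technique; as n grows, the reported tail fractions should shrink rapidly, mirroring the exponential decay in r stated above.

```python
# Illustrative Monte Carlo sketch (assumptions: OneMax, n = 100, 200 runs):
# empirically examine how the hitting time of a (1+1) EA concentrates
# around its expectation.
import random

def one_plus_one_ea_onemax(n, p=None, rng=random):
    """Run the (1+1) EA on OneMax with bit-flip probability p (default 1/n).
    Returns the number of iterations until the all-ones string is found."""
    p = 1.0 / n if p is None else p
    x = [rng.randrange(2) for _ in range(n)]
    fx = sum(x)
    t = 0
    while fx < n:
        t += 1
        # flip each bit independently with probability p
        y = [bit ^ 1 if rng.random() < p else bit for bit in x]
        fy = sum(y)
        if fy >= fx:  # elitist acceptance: keep the offspring if not worse
            x, fx = y, fy
    return t

if __name__ == "__main__":
    n, runs = 100, 200
    times = [one_plus_one_ea_onemax(n) for _ in range(runs)]
    mean = sum(times) / runs
    print(f"mean hitting time ~ {mean:.0f}")
    # Empirical upper tail: fraction of runs exceeding the mean by 10%, 20%, 30%
    for r in (0.1, 0.2, 0.3):
        frac = sum(t > (1 + r) * mean for t in times) / runs
        print(f"P(T > {1 + r:.1f} * mean) ~ {frac:.3f}")
```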

2013
Vol. 22 (2)
pp. 294-318

Author(s):
Carsten Witt

The analysis of randomized search heuristics on classes of functions is fundamental to the understanding of the underlying stochastic process and the development of suitable proof techniques. Recently, remarkable progress has been made in bounding the expected optimization time of a simple evolutionary algorithm, called (1+1) EA, on the class of linear functions. We improve the previously best known bound in this setting from (1.39 + o(1)) e n ln n to e n ln n + O(n) in expectation and with high probability, which is tight up to lower-order terms. Moreover, upper and lower bounds for arbitrary mutation probabilities p are derived, which imply expected polynomial optimization time as long as p = O((ln n)/n) and p = Ω(n^(-C)) for a constant C > 0, and which are tight if p = c/n for a constant c > 0. As a consequence, the standard mutation probability p = 1/n is optimal for all linear functions, and the (1+1) EA is found to be an optimal mutation-based algorithm. Furthermore, the algorithm turns out to be surprisingly robust since the large neighbourhood explored by the mutation operator does not disrupt the search.
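For reference, the algorithm analysed here, the (1+1) EA with bit-wise mutation probability p (standard choice p = 1/n), can be sketched in a few lines of Python. The concrete linear function below (random positive integer weights, n = 100) is an assumption chosen for illustration, not a setting taken from the paper.

```python
# Illustrative sketch (assumptions: random positive integer weights, n = 100):
# the (1+1) EA maximising a general linear function f(x) = sum_i w_i * x_i
# with standard mutation probability p = 1/n.
import random

def one_plus_one_ea_linear(weights, p=None, rng=random):
    """Maximise f(x) = sum(w_i * x_i) over bit strings with the (1+1) EA.
    Returns the number of iterations until the all-ones optimum is reached."""
    n = len(weights)
    p = 1.0 / n if p is None else p
    x = [rng.randrange(2) for _ in range(n)]
    fx = sum(w for w, bit in zip(weights, x) if bit)
    optimum = sum(weights)  # weights are positive, so the all-ones string is optimal
    t = 0
    while fx < optimum:
        t += 1
        # flip each bit independently with probability p
        y = [bit ^ 1 if rng.random() < p else bit for bit in x]
        fy = sum(w for w, bit in zip(weights, y) if bit)
        if fy >= fx:  # elitist acceptance: keep the offspring if not worse
            x, fx = y, fy
    return t

if __name__ == "__main__":
    n = 100
    weights = [random.randint(1, 100) for _ in range(n)]
    print("hitting time:", one_plus_one_ea_linear(weights))
```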

