Adaptive Genetic Local Search Algorithms for Solving Reliability Optimization Problems

2004 ◽ Vol 124 (10) ◽ pp. 1986-1990
Author(s): Minoru Mukuda, YoungSu Yun, Mitsuo Gen


2018 ◽ Vol 210 ◽ pp. 04052
Author(s): Nadia Abd-Alsabour

Local search algorithms play an important role when employed within optimization algorithms tackling numerous optimization problems, since they lead to better solutions. However, this is not practical in many applications, as the local search does not contribute to the search process. This issue has received little prior study, whether for traditional optimization algorithms or for parallel optimization algorithms. This paper investigates it for parallel optimization algorithms tackling high-dimensional subset problems. The results obtained yield useful recommendations.
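The abstract does not describe how the local search is coupled with the host optimization algorithm, so the following is only a minimal illustrative sketch of the common hybrid pattern for a subset problem; the objective evaluate, the bit-flip local_search, and the population loop are hypothetical placeholders, not the paper's method.

```python
import random

def evaluate(subset):
    # Hypothetical objective for a subset problem (e.g., item selection);
    # stands in for whatever fitness function the optimizer actually uses.
    return sum(subset)

def local_search(subset, evaluate, max_steps=50):
    # Simple bit-flip hill climbing applied to one candidate subset.
    best, best_score = subset[:], evaluate(subset)
    for _ in range(max_steps):
        i = random.randrange(len(best))
        neighbor = best[:]
        neighbor[i] = 1 - neighbor[i]   # flip inclusion of one item
        score = evaluate(neighbor)
        if score > best_score:          # accept only improvements
            best, best_score = neighbor, score
    return best

# Hybrid loop: candidates from a (hypothetical) population-based optimizer
# are refined by the local search before the next generation is produced.
population = [[random.randint(0, 1) for _ in range(100)] for _ in range(20)]
population = [local_search(ind, evaluate) for ind in population]
```

Whether this refinement step pays off in high-dimensional subset problems, especially under parallel execution, is exactly the question the paper examines.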


2014 ◽ Vol 2014 ◽ pp. 1-11
Author(s): Noureddine Bouhmala

The simplicity of the maximum satisfiability problem (MAX-SAT), combined with its applicability in many areas of artificial intelligence and computing science, has made it one of the fundamental optimization problems. This NP-complete problem refers to the task of finding a variable assignment that satisfies the maximum number of clauses (or the maximum sum of weights of satisfied clauses) in a Boolean formula. The Walksat algorithm is considered to be the main skeleton underlying almost all local search algorithms for MAX-SAT. Most local search algorithms, including Walksat, rely on the 1-flip neighborhood structure. This paper introduces a variable-neighborhood Walksat-based algorithm. The neighborhood structure can be combined easily with any local search algorithm. Its effectiveness is compared with that of existing algorithms using the 1-flip neighborhood structure and with solvers such as CCLS and Optimax from the eighth MAX-SAT evaluation.
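For concreteness, here is a minimal sketch of the 1-flip Walksat-style scheme the abstract refers to, not the paper's variable-neighborhood variant. The noise probability p, the flip budget, and the greedy rule (choosing the in-clause flip that maximizes satisfied clauses, a simplification of the usual break-count rule) are all assumptions made for illustration.

```python
import random

def num_satisfied(clauses, assignment):
    # Clauses are lists of non-zero literals: literal v means variable v is True,
    # -v means variable v is False. Count how many clauses the assignment satisfies.
    return sum(
        any((lit > 0) == assignment[abs(lit)] for lit in clause)
        for clause in clauses
    )

def walksat_maxsat(clauses, n_vars, p=0.5, max_flips=10000):
    # 1-flip Walksat-style local search that remembers the best assignment seen.
    assignment = {v: random.choice([True, False]) for v in range(1, n_vars + 1)}
    best, best_sat = dict(assignment), num_satisfied(clauses, assignment)
    for _ in range(max_flips):
        unsat = [c for c in clauses
                 if not any((lit > 0) == assignment[abs(lit)] for lit in c)]
        if not unsat:
            return assignment, len(clauses)      # every clause satisfied
        clause = random.choice(unsat)
        if random.random() < p:                  # noise step: random variable in clause
            var = abs(random.choice(clause))
        else:                                    # greedy step: best 1-flip in clause
            var = max((abs(lit) for lit in clause),
                      key=lambda v: num_satisfied(
                          clauses, {**assignment, v: not assignment[v]}))
        assignment[var] = not assignment[var]
        sat = num_satisfied(clauses, assignment)
        if sat > best_sat:
            best, best_sat = dict(assignment), sat
    return best, best_sat

# Example: (x1 ∨ ¬x2) ∧ (x2 ∨ x3) ∧ (¬x1 ∨ ¬x3)
clauses = [[1, -2], [2, 3], [-1, -3]]
print(walksat_maxsat(clauses, n_vars=3))
```

A variable-neighborhood variant, as studied in the paper, would additionally allow moves that flip more than one variable at a time when the 1-flip neighborhood stalls.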


2011 ◽ Vol 41 ◽ pp. 407-444
Author(s): A. György, L. Kocsis

Local search algorithms applied to optimization problems often suffer from getting trapped in a local optimum. The common solution for this deficiency is to restart the algorithm when no progress is observed. Alternatively, one can start multiple instances of a local search algorithm, and allocate computational resources (in particular, processing time) to the instances depending on their behavior. Hence, a multi-start strategy has to decide (dynamically) when to allocate additional resources to a particular instance and when to start new instances. In this paper we propose multi-start strategies motivated by works on multi-armed bandit problems and Lipschitz optimization with an unknown constant. The strategies continuously estimate the potential performance of each algorithm instance by supposing a convergence rate of the local search algorithm up to an unknown constant, and in every phase allocate resources to those instances that could converge to the optimum for a particular range of the constant. Asymptotic bounds are given on the performance of the strategies. In particular, we prove that at most a quadratic increase in the number of times the target function is evaluated is needed to achieve the performance of a local search algorithm started from the attraction region of the optimum. Experiments are provided using SPSA (Simultaneous Perturbation Stochastic Approximation) and k-means as local search algorithms, and the results indicate that the proposed strategies work well in practice, and, in all cases studied, need only logarithmically more evaluations of the target function as opposed to the theoretically suggested quadratic increase.
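The paper's actual strategies and their bounds are given in the article; the code below is only a toy sketch of the phase-based allocation idea, where each instance gets an optimistic estimate of the form value + c / steps for a guessed constant c and the most promising instance receives the next unit of budget. The objective, the step rule, and the estimate are hypothetical placeholders, not the authors' algorithm.

```python
import random

def make_instance():
    # One local search instance on a toy 1-D objective; instances differ only
    # in their random start point. Stands in for SPSA, k-means, or similar.
    x = random.uniform(-10.0, 10.0)
    return {"x": x, "value": -(x - 3.0) ** 2, "steps": 1}

def step(inst):
    # Advance an instance by one local search step with a shrinking step size.
    cand = inst["x"] + random.gauss(0.0, 1.0 / inst["steps"])
    val = -(cand - 3.0) ** 2
    if val > inst["value"]:
        inst["x"], inst["value"] = cand, val
    inst["steps"] += 1

def optimistic_estimate(inst, c=1.0):
    # Current value plus a rate-based bonus: assumes the instance could still
    # improve at rate c / steps for some unknown constant c.
    return inst["value"] + c / inst["steps"]

instances = [make_instance() for _ in range(5)]
for phase in range(100):
    # Each phase, the instance with the best optimistic estimate gets more budget.
    best = max(instances, key=optimistic_estimate)
    step(best)
print(max(inst["value"] for inst in instances))
```

Because the true constant is unknown, the strategies in the paper effectively hedge over a range of values of c rather than fixing a single one as this sketch does.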

