A Variable Neighborhood Walksat-Based Algorithm for MAX-SAT Problems

2014 ◽  
Vol 2014 ◽  
pp. 1-11 ◽  
Author(s):  
Noureddine Bouhmala

The simplicity of the maximum satisfiability problem (MAX-SAT), combined with its applicability in many areas of artificial intelligence and computing science, has made it one of the fundamental optimization problems. This NP-hard problem refers to the task of finding a variable assignment that satisfies the maximum number of clauses (or the maximum sum of weights of satisfied clauses) in a Boolean formula. The Walksat algorithm is considered to be the main skeleton underlying almost all local search algorithms for MAX-SAT. Most local search algorithms, including Walksat, rely on the 1-flip neighborhood structure. This paper introduces a variable neighborhood Walksat-based algorithm. The proposed neighborhood structure can easily be combined with any local search algorithm. Its effectiveness is compared with that of existing algorithms that use the 1-flip neighborhood structure, as well as with solvers such as CCLS and Optimax from the eighth MAX-SAT evaluation.
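
The 1-flip Walksat skeleton referred to above can be illustrated with a minimal sketch for unweighted MAX-SAT; this is a generic rendition of the standard scheme, not the paper's variable-neighborhood algorithm, and the function and parameter names (walksat_maxsat, noise, max_flips) are illustrative assumptions.

```python
import random

def walksat_maxsat(clauses, n_vars, max_flips=100000, noise=0.5, seed=0):
    """1-flip Walksat-style local search for unweighted MAX-SAT (sketch).

    clauses: list of clauses, each a list of non-zero ints (DIMACS style:
    literal v means variable v is true, -v means it is false), variables 1..n_vars.
    Returns the best assignment found (index 0 unused) and the number of
    clauses it satisfies.
    """
    rng = random.Random(seed)
    assign = [rng.choice([False, True]) for _ in range(n_vars + 1)]

    def is_sat(clause):
        return any((lit > 0) == assign[abs(lit)] for lit in clause)

    def num_sat():
        return sum(is_sat(c) for c in clauses)

    best, best_score = list(assign), num_sat()
    for _ in range(max_flips):
        unsat = [c for c in clauses if not is_sat(c)]
        if not unsat:
            break                                  # everything satisfied
        clause = rng.choice(unsat)                 # focus on an unsatisfied clause
        if rng.random() < noise:
            var = abs(rng.choice(clause))          # noisy step: random variable in the clause
        else:                                      # greedy step: best 1-flip within the clause
            def score_if_flipped(v):
                assign[v] = not assign[v]
                s = num_sat()
                assign[v] = not assign[v]
                return s
            var = max({abs(lit) for lit in clause}, key=score_if_flipped)
        assign[var] = not assign[var]
        score = num_sat()
        if score > best_score:
            best_score, best = score, list(assign)
    return best, best_score
```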

2011 ◽  
Vol 41 ◽  
pp. 407-444 ◽  
Author(s):  
A. György ◽  
L. Kocsis

Local search algorithms applied to optimization problems often suffer from getting trapped in a local optimum. The common solution for this deficiency is to restart the algorithm when no progress is observed. Alternatively, one can start multiple instances of a local search algorithm, and allocate computational resources (in particular, processing time) to the instances depending on their behavior. Hence, a multi-start strategy has to decide (dynamically) when to allocate additional resources to a particular instance and when to start new instances. In this paper we propose multi-start strategies motivated by works on multi-armed bandit problems and Lipschitz optimization with an unknown constant. The strategies continuously estimate the potential performance of each algorithm instance by supposing a convergence rate of the local search algorithm up to an unknown constant, and in every phase allocate resources to those instances that could converge to the optimum for a particular range of the constant. Asymptotic bounds are given on the performance of the strategies. In particular, we prove that at most a quadratic increase in the number of times the target function is evaluated is needed to achieve the performance of a local search algorithm started from the attraction region of the optimum. Experiments are provided using SPSA (Simultaneous Perturbation Stochastic Approximation) and k-means as local search algorithms, and the results indicate that the proposed strategies work well in practice, and, in all cases studied, need only logarithmically more evaluations of the target function as opposed to the theoretically suggested quadratic increase.
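
A schematic of how such a multi-start scheduler can be organised is sketched below. The dominance-based selection is a deliberate simplification of the paper's strategies (which rely on a convergence-rate model with an unknown constant), and the callback names new_run and step are assumptions, not taken from the paper.

```python
def multi_start(new_run, step, total_budget):
    """Schematic multi-start scheduler (a simplification, not the authors' exact rule).

    new_run() -> initial state of a fresh local-search instance.
    step(s)   -> (next_state, value) after one more target-function evaluation.
    Each phase starts one fresh instance and then gives one step to every
    instance that is not dominated by another (fewer evaluations used and a
    better value found), a crude stand-in for "could still be the best run
    for some value of the unknown convergence constant".
    """
    runs = []                                   # each run: [state, evals, best_value]
    used = 0
    while used < total_budget:
        state, val = step(new_run())            # launch a fresh instance
        runs.append([state, 1, val])
        used += 1
        candidates = [r for r in runs
                      if not any((o[1] <= r[1] and o[2] > r[2]) or
                                 (o[1] < r[1] and o[2] >= r[2])
                                 for o in runs if o is not r)]
        for r in candidates:                    # advance every candidate by one step
            if used >= total_budget:
                break
            r[0], v = step(r[0])
            r[1] += 1
            r[2] = max(r[2], v)
            used += 1
    return max(runs, key=lambda r: r[2])        # run with the best value found
```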


Author(s):  
Tal Ze'evi ◽  
Roie Zivan ◽  
Omer Lev

Partial Cooperation is a paradigm and a corresponding model, proposed to represent multi-agent systems in which agents are willing to cooperate to achieve a global goal, as long as some minimal threshold on their personal utility is satisfied. Distributed local search algorithms were proposed in order to solve asymmetric distributed constraint optimization problems (ADCOPs) in which agents are partially cooperative. We contribute by: 1) extending the partial cooperative model to allow it to represent dynamic cooperation intentions, affected by changes in agents’ wealth, in accordance with social studies literature. 2) proposing a novel local search algorithm in which agents receive indications of others’ preferences on their actions and thus, can perform actions that are socially beneficial. Our empirical study reveals the advantage of the proposed algorithm in multiple benchmarks. Specifically, on realistic meeting scheduling problems it overcomes limitations of standard local search algorithms.
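
The cooperation threshold at the heart of the model can be illustrated with a small sketch; this is an assumption-laden illustration, not the paper's exact formulation, and the names and the linear wealth adjustment are hypothetical.

```python
def willing_to_cooperate(proposed_utility, wealth, base_threshold, wealth_factor=0.1):
    """Illustrative partial-cooperation test (not the paper's exact model):
    an agent agrees to a socially beneficial value change only if its own
    utility under the proposed assignment stays above a cooperation threshold,
    and that threshold is relaxed as the agent's accumulated wealth grows,
    capturing dynamic cooperation intentions."""
    threshold = base_threshold - wealth_factor * wealth
    return proposed_utility >= threshold
```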


2008 ◽  
Vol 105 (40) ◽  
pp. 15253-15257 ◽  
Author(s):  
Mikko Alava ◽  
John Ardelius ◽  
Erik Aurell ◽  
Petteri Kaski ◽  
Supriya Krishnamurthy ◽  
...  

We study the performance of stochastic local search algorithms for random instances of the K-satisfiability (K-SAT) problem. We present a stochastic local search algorithm, ChainSAT, which moves in the energy landscape of a problem instance by never going upwards in energy. ChainSAT is a focused algorithm in the sense that it focuses on variables occurring in unsatisfied clauses. We show by extensive numerical investigations that ChainSAT and other focused algorithms solve large K-SAT instances almost surely in linear time, up to high clause-to-variable ratios α; for example, for K = 4 we observe linear-time performance well beyond the recently postulated clustering and condensation transitions in the solution space. The performance of ChainSAT is a surprise given that by design the algorithm gets trapped into the first local energy minimum it encounters, yet no such minima are encountered. We also study the geometry of the solution space as accessed by stochastic local search algorithms.
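
The two defining properties described above, focusing on unsatisfied clauses and never moving upwards in energy, can be sketched as follows; this is a simplified illustration, not the authors' exact pseudocode, and max_chain and the other names are assumptions.

```python
import random

def chainsat_like(clauses, n_vars, max_steps=100000, max_chain=16, seed=0):
    """Sketch of a ChainSAT-style focused, non-increasing search (simplified).
    A variable from an unsatisfied clause is flipped only if the number of
    unsatisfied clauses (the energy) does not increase; if the flip would raise
    the energy, the search 'chains' to a clause that the flip would break and
    tries a variable there instead.
    """
    rng = random.Random(seed)
    assign = [rng.choice([False, True]) for _ in range(n_vars + 1)]   # index 0 unused

    def is_sat(clause):
        return any((lit > 0) == assign[abs(lit)] for lit in clause)

    def energy():
        return sum(not is_sat(c) for c in clauses)

    for _ in range(max_steps):
        unsat = [c for c in clauses if not is_sat(c)]
        if not unsat:
            return assign                      # satisfying assignment found
        clause = rng.choice(unsat)
        for _ in range(max_chain):             # bounded chain length for the sketch
            v = abs(rng.choice(clause))
            before = energy()
            assign[v] = not assign[v]
            if energy() <= before:
                break                          # non-increasing flip: keep it
            assign[v] = not assign[v]          # undo and chain onwards

            def would_break(c):
                # c becomes unsatisfied iff its only currently-true literal is on v
                true_lits = [lit for lit in c if (lit > 0) == assign[abs(lit)]]
                return len(true_lits) == 1 and abs(true_lits[0]) == v

            breakable = [c for c in clauses if would_break(c)]
            if not breakable:
                break
            clause = rng.choice(breakable)
    return assign
```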


2018 ◽  
Vol 210 ◽  
pp. 04052 ◽  
Author(s):  
Nadia Abd-Alsabour

Local search algorithms play an important role when employed within optimization algorithms tackling numerous optimization problems, since they lead to better solutions. However, this is not worthwhile in many applications, as the local search does not contribute to the search process. This issue has received little prior study, either for traditional optimization algorithms or for parallel optimization algorithms. This paper investigates it for parallel optimization algorithms tackling high-dimensional subset problems. The results obtained yield noteworthy recommendations.


Author(s):  
Atheer Bassel ◽  
Hussein M. Haglan ◽  
Akeel Sh. Mahmoud

Optimization is normally carried out to address one or several objectives, in single-objective or multi-objective form. Some traditional optimization techniques are computationally burdensome and require exhaustive computation times. Many studies have therefore introduced new optimization techniques to address these issues. To assess the effectiveness of a proposed technique, implementation on several benchmark functions is crucial. In solving benchmark test functions, local search algorithms have been rigorously examined and applied to diverse tasks. This paper highlights different algorithms implemented to solve several problems. The capacity of local search algorithms to solve engineering optimization problems, including benchmark test functions, is reviewed. The use of local search algorithms, mainly Simulated Annealing (SA) and Great Deluge (GD), to solve different problems is presented. Improvements and hybridizations of local search and global search algorithms are also reviewed in this paper. Finally, benchmark test functions are recommended to those working with local search algorithms.
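
The two local search schemes the review centres on differ mainly in their acceptance rules; below is a minimal sketch of each for a generic minimisation problem. The cost/neighbor callbacks and parameter values are assumptions for illustration, not taken from the paper. For example, on a benchmark test function the neighbor callback could perturb one coordinate of the current solution by a small random amount.

```python
import math
import random

def simulated_annealing(cost, neighbor, x0, t0=1.0, cooling=0.995, iters=10000, seed=0):
    """Minimal Simulated Annealing sketch (minimisation): a worse neighbour is
    accepted with probability exp(-delta / T); the temperature T is cooled
    geometrically after every iteration."""
    rng = random.Random(seed)
    x, fx, temp = x0, cost(x0), t0
    best, fbest = x, fx
    for _ in range(iters):
        y = neighbor(x, rng)
        fy = cost(y)
        if fy <= fx or rng.random() < math.exp(-(fy - fx) / max(temp, 1e-12)):
            x, fx = y, fy
            if fx < fbest:
                best, fbest = x, fx
        temp *= cooling
    return best, fbest


def great_deluge(cost, neighbor, x0, rain_speed=1e-3, iters=10000, seed=0):
    """Minimal Great Deluge sketch (minimisation): a neighbour is accepted
    whenever its cost stays below a 'water level' that is lowered by a fixed
    amount (the rain speed) after every iteration."""
    rng = random.Random(seed)
    x, fx = x0, cost(x0)
    level, best, fbest = fx, x, fx
    for _ in range(iters):
        y = neighbor(x, rng)
        fy = cost(y)
        if fy <= level:
            x, fx = y, fy
            if fx < fbest:
                best, fbest = x, fx
        level -= rain_speed
    return best, fbest
```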


2010 ◽  
Vol 33 (7) ◽  
pp. 1127-1139
Author(s):  
Da-Ming ZHU ◽  
Shao-Han MA ◽  
Ping-Ping ZHANG
