VLSs: A Local Search Algorithm for Distributed Constraint Optimization Problems

Author(s):  
Fukui Li ◽  
Jingyuan He ◽  
Mingliang Zhou ◽  
Bin Fang

Local search algorithms are widely applied to solving large-scale distributed constraint optimization problems (DCOPs). The distributed stochastic algorithm (DSA) is a typical local search algorithm for DCOPs. However, DSA has some drawbacks, including easily falling into local optima and unfairness in assignment choice. This paper presents a novel local search algorithm, named VLSs, to address these issues. In VLSs, each agent samples values according to probabilities derived from their quality, which enables it to choose other promising values. Besides, each agent alternately performs a greedy choice among multiple parallel solutions to reduce the chance of falling into local optima, and periodically applies a variance adjustment mechanism to guide the search toward a relatively good initial solution. We prove the rationality of the variance adjustment mechanism and give a theoretical explanation of the impact of greedy choice among multiple parallel solutions. The experimental results show the superiority of VLSs over state-of-the-art DCOP algorithms.
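
As a rough illustration (our own sketch in Python, not code from the paper), the snippet below contrasts a classic DSA step with a probability-based sampling step; the names local_cost, dsa_step, sampled_step, and the softmax temperature are our assumptions, and the weighting is only one plausible way to realize sampling in proportion to assignment quality.

import math
import random

# A minimal sketch of DSA-style local search on a binary constraint
# graph, plus a probabilistic value sampler in the spirit of VLSs.
# `constraints` maps agent pairs (i, j) to cost tables over value pairs.

def local_cost(agent, value, assignment, constraints):
    # Sum of binary constraint costs incurred by `agent` taking `value`
    # against its neighbours' current assignments.
    total = 0
    for (i, j), table in constraints.items():
        if i == agent:
            total += table[(value, assignment[j])]
        elif j == agent:
            total += table[(assignment[i], value)]
    return total

def dsa_step(agent, domain, assignment, constraints, p=0.8):
    # Classic DSA: with activation probability p, jump to the locally
    # best value; otherwise keep the current one.
    best = min(domain, key=lambda v: local_cost(agent, v, assignment, constraints))
    if random.random() < p:
        assignment[agent] = best

def sampled_step(agent, domain, assignment, constraints, temperature=1.0):
    # Probabilistic variant: sample a value with probability decreasing
    # in its local cost, so non-greedy but promising values stay reachable.
    costs = [local_cost(agent, v, assignment, constraints) for v in domain]
    weights = [math.exp(-c / temperature) for c in costs]
    assignment[agent] = random.choices(domain, weights=weights)[0]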

2016 ◽  
Vol 2016 ◽  
pp. 1-7 ◽  
Author(s):  
Carolina Lagos ◽  
Guillermo Guerrero ◽  
Enrique Cabrera ◽  
Stefanie Niklander ◽  
Franklin Johnson ◽  
...  

A novel matheuristic approach is presented and tested on a well-known optimisation problem, namely, the capacitated facility location problem (CFLP). The algorithm combines local search and mathematical programming: the local search selects a subset of promising facilities, while mathematical programming is used to solve the resulting subproblem to optimality. The proposed local search is guided by instance-specific information such as installation costs and the distances between customers and facilities. The algorithm is tested on large instances of the CFLP, where neither local search nor mathematical programming alone is able to find good-quality solutions within acceptable computational times. Our approach is shown to be a very competitive alternative for solving large-scale instances of the CFLP.
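
The decomposition can be sketched as follows (our illustration, assuming splittable customer demand so that the fixed-open-set subproblem is a transportation LP, which scipy's linprog solves exactly); the one-flip neighbourhood and all names are ours, not the authors'.

import numpy as np
from scipy.optimize import linprog

def service_cost(open_idx, cap, demand, cost):
    # Exact transportation subproblem over the open facilities only.
    idx = list(open_idx)
    if cap[idx].sum() < demand.sum():
        return float("inf")          # not enough capacity to serve everyone
    k, n = len(idx), len(demand)
    c = cost[idx].ravel()            # variables x[i, j], row-major
    A_eq = np.zeros((n, k * n))      # each customer fully served
    for j in range(n):
        A_eq[j, j::n] = 1.0
    A_ub = np.zeros((k, k * n))      # each facility within capacity
    for i in range(k):
        A_ub[i, i * n:(i + 1) * n] = 1.0
    res = linprog(c, A_ub=A_ub, b_ub=cap[idx], A_eq=A_eq, b_eq=demand)
    return res.fun

def flip_search(fixed, cap, demand, cost):
    # One-flip local search over the open/closed status of facilities;
    # fixed, cap, demand are NumPy arrays, cost is an (m, n) array.
    m = len(fixed)
    open_set = set(range(m))         # start with everything open
    def total(s):
        return fixed[list(s)].sum() + service_cost(s, cap, demand, cost)
    best = total(open_set)
    improved = True
    while improved:
        improved = False
        for f in range(m):
            trial = open_set ^ {f}   # toggle facility f
            if trial and total(trial) < best:
                open_set, best = trial, total(trial)
                improved = True
    return open_set, best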


Author(s):  
Yanchen Deng ◽  
Runsheng Yu ◽  
Xinrun Wang ◽  
Bo An

Distributed constraint optimization problems (DCOPs) are a powerful model for multi-agent coordination and optimization, where information and controls are distributed among multiple agents by nature. Sampling-based algorithms are important incomplete techniques for solving medium-scale DCOPs. However, they use tables to exactly store all the information (e.g., costs, confidence bounds) needed to facilitate sampling, which limits their scalability. This paper tackles the limitation by incorporating deep neural networks into DCOP solving for the first time and presents a neural-based sampling scheme built upon regret matching. In the algorithm, each agent trains a neural network to approximate the regret related to its local problem and samples according to the estimated regret. Furthermore, to ensure exploration, we propose a regret rounding scheme that rounds small regret values up to positive numbers. We theoretically establish the regret bound of our algorithm, and extensive evaluations indicate that it can scale up to large-scale DCOPs and significantly outperform the state-of-the-art methods.
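
The sampling step alone might look like the following sketch (our illustration); in the actual algorithm the regret estimates would come from the trained network, and the floor eps stands in for the paper's regret rounding scheme.

import numpy as np

def regret_matching_probs(regrets, eps=1e-3):
    # Regret matching with a rounding floor: clip small and negative
    # regret estimates up to eps so every action keeps positive
    # sampling probability, preserving exploration.
    r = np.maximum(np.asarray(regrets, dtype=float), eps)
    return r / r.sum()

rng = np.random.default_rng(0)
estimated_regrets = [5.0, 0.0, -2.0, 1.0]   # hypothetical network outputs
p = regret_matching_probs(estimated_regrets)
action = rng.choice(len(p), p=p)            # sampled assignment index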


2011 ◽  
Vol 26 (4) ◽  
pp. 411-444 ◽  
Author(s):  
Archie C. Chapman ◽  
Alex Rogers ◽  
Nicholas R. Jennings ◽  
David S. Leslie

Distributed constraint optimization problems (DCOPs) are important in many areas of computer science and optimization. In a DCOP, each variable is controlled by one of many autonomous agents, who together have the joint goal of maximizing a global objective function. A wide variety of techniques have been explored to solve such problems, and here we focus on one of the main families, namely iterative approximate best-response algorithms used as local search algorithms for DCOPs. We define these algorithms as those in which, at each iteration, agents communicate only the states of the variables under their control to their neighbours on the constraint graph, and then reason about their next state based on the messages received from their neighbours. These algorithms include the distributed stochastic algorithm and stochastic coordination algorithms, the maximum-gain messaging algorithms, the families of fictitious play and adaptive play algorithms, and algorithms that use regret-based heuristics. This family of algorithms is commonly employed in real-world systems, as they can be used in domains where communication is difficult or costly, where it is appropriate to trade timeliness off against optimality, or where hardware limitations render complete or more computationally intensive algorithms unusable. However, until now, no overarching framework has existed for analyzing this broad family of algorithms, resulting in similar and overlapping work being published independently in several different literatures. The main contribution of this paper, then, is the development of a unified analytical framework for studying such algorithms. This framework is built on our insight that, when formulated as non-cooperative games, DCOPs form a subset of the class of potential games. This result allows us to prove convergence properties of iterative approximate best-response algorithms developed in the computer science literature using game-theoretic methods (which also shows that such algorithms can be applied to the more general problem of finding Nash equilibria in potential games), and, conversely, allows us to show that many game-theoretic algorithms can be used to solve DCOPs. By so doing, our framework can assist system designers by making clear the pros and cons of, and the synergies between, the various iterative approximate best-response DCOP algorithm components.
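
The key insight admits a compact illustration (a toy sketch of ours, not the paper's formalism): with the global objective serving as an exact potential, every improving unilateral move raises it, so sequential best response must terminate at a Nash equilibrium, i.e. a local optimum of the DCOP.

def local_utility(agent, value, assignment, constraints):
    # Utility `agent` derives from its binary constraints given the
    # neighbours' current values; `constraints` maps agent pairs to
    # payoff tables over value pairs.
    u = 0
    for (i, j), table in constraints.items():
        if i == agent:
            u += table[(value, assignment[j])]
        elif j == agent:
            u += table[(assignment[i], value)]
    return u

def best_response_dynamics(domains, constraints, assignment):
    # Sequential best response: since the sum of constraint utilities
    # is an exact potential, each improving move strictly increases it,
    # so the loop terminates at a Nash equilibrium.
    changed = True
    while changed:
        changed = False
        for agent, domain in domains.items():
            best = max(domain, key=lambda v: local_utility(agent, v, assignment, constraints))
            if (local_utility(agent, best, assignment, constraints)
                    > local_utility(agent, assignment[agent], assignment, constraints)):
                assignment[agent] = best
                changed = True
    return assignment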


2011 ◽  
Vol 148-149 ◽  
pp. 1248-1251 ◽  
Author(s):  
Xu Dong Wu

The iterated local search algorithm has been widely used for combinatorial optimization problems. This paper presents a new fuel consumption objective for the vehicle routing problem. A fuel consumption model based on vehicle load is introduced, and an improved iterated local search algorithm is applied to the problem. An initial solution is generated by the Solomon I1 algorithm, and the iterated local search algorithm is then used to optimize fuel consumption.
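
A load-dependent fuel objective of the kind described can be sketched as follows (a common linear model, our assumption rather than the paper's exact formulation): per-distance consumption is an empty-vehicle rate plus a term proportional to the load currently on board, which drops as customers are served.

def route_fuel(route, demand, dist, empty_rate=1.0, load_rate=0.05):
    # `route` is a list of customer ids, `demand` a dict of customer
    # demands, `dist` a distance matrix; node 0 is the depot.
    load = sum(demand[c] for c in route)   # leave the depot fully loaded
    fuel, prev = 0.0, 0
    for node in route + [0]:               # visit customers, return to depot
        fuel += dist[prev][node] * (empty_rate + load_rate * load)
        load -= demand.get(node, 0)        # unload at each customer
        prev = node
    return fuel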


2011 ◽  
Vol 41 ◽  
pp. 407-444 ◽  
Author(s):  
A. György ◽  
L. Kocsis

Local search algorithms applied to optimization problems often suffer from getting trapped in a local optimum. A common remedy for this deficiency is to restart the algorithm when no progress is observed. Alternatively, one can start multiple instances of a local search algorithm and allocate computational resources (in particular, processing time) to the instances depending on their behavior. Hence, a multi-start strategy has to decide (dynamically) when to allocate additional resources to a particular instance and when to start new instances. In this paper we propose multi-start strategies motivated by work on multi-armed bandit problems and Lipschitz optimization with an unknown constant. The strategies continuously estimate the potential performance of each algorithm instance by assuming a convergence rate of the local search algorithm up to an unknown constant, and in every phase allocate resources to those instances that could converge to the optimum for a particular range of the constant. Asymptotic bounds are given on the performance of the strategies. In particular, we prove that at most a quadratic increase in the number of times the target function is evaluated is needed to achieve the performance of a local search algorithm started from the attraction region of the optimum. Experiments are provided using SPSA (Simultaneous Perturbation Stochastic Approximation) and k-means as local search algorithms, and the results indicate that the proposed strategies work well in practice and, in all cases studied, need only logarithmically more evaluations of the target function than the quadratic increase suggested by the theory.
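
A much-simplified version of such an allocator might look like the sketch below (the instance interface, the grid of candidate constants, and the restart rule are all our assumptions, not the paper's exact strategy).

import math

def multi_start(new_instance, step, budget, c_grid=(0.1, 1.0, 10.0)):
    # Hypothetical API: new_instance() returns an object with fields
    # .value (current best objective) and .steps (steps run so far);
    # step(inst) runs one local search step and updates both fields.
    # Assume each instance converges roughly like v* + C/steps for an
    # unknown C; extrapolate optimistically over a grid of C values and
    # spend the next step on an instance that could still beat the
    # incumbent, starting a fresh instance otherwise.
    instances = [new_instance()]
    best = math.inf
    for _ in range(budget):
        def forecast(inst):
            # most optimistic reachable value over the assumed C range
            return min(inst.value - c / max(inst.steps, 1) for c in c_grid)
        cand = min(instances, key=forecast)
        if forecast(cand) >= best:
            instances.append(new_instance())   # nothing promising: restart
            cand = instances[-1]
        step(cand)
        best = min(best, cand.value)
    return best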


2009 ◽  
Vol 34 ◽  
pp. 61-88 ◽  
Author(s):  
A. Gershman ◽  
A. Meisels ◽  
R. Zivan

A new search algorithm for solving distributed constraint optimization problems (DisCOPs) is presented. Agents assign variables sequentially and compute bounds on partial assignments asynchronously. The asynchronous bounds computation is based on the propagation of partial assignments. The asynchronous forward-bounding algorithm (AFB) is a distributed optimization search algorithm that keeps one consistent partial assignment at all times. The algorithm is described in detail and its correctness is proven. Experimental evaluation shows that AFB outperforms synchronous branch and bound by many orders of magnitude and exhibits a phase transition as the tightness of the problem increases, an effect analogous to the phase transition observed when local consistency maintenance is applied to MaxCSPs. The AFB algorithm is further enhanced by the addition of a backjumping mechanism, resulting in the AFB-BJ algorithm. Distributed backjumping is based on accumulated information on the bounds of all values and on concurrently processing a queue of candidate goals for the next move back. The AFB-BJ algorithm is compared experimentally to other DisCOP algorithms (ADOPT, DPOP, OptAPO) and is shown to be a very efficient algorithm for DisCOPs.
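
The forward-bounding idea can be sketched synchronously as follows (our single-process illustration; the actual AFB is asynchronous and message-based): the bound on a partial assignment adds, for every unassigned variable, the cheapest cost any of its values can achieve against the assigned prefix.

import math

def bound(partial, order, domains, pair_cost):
    # pair_cost(a, va, b, vb) returns the binary constraint cost, and
    # should return 0 for unconstrained pairs of variables.
    assigned = [v for v in order if v in partial]
    base = sum(pair_cost(a, partial[a], b, partial[b])
               for k, a in enumerate(assigned) for b in assigned[:k])
    fwd = sum(min(sum(pair_cost(u, val, a, partial[a]) for a in assigned)
                  for val in domains[u])
              for u in order if u not in partial)
    return base + fwd

def branch_and_bound(order, domains, pair_cost, partial=None,
                     best=math.inf, best_sol=None):
    # Sequential branch and bound that prunes with the forward bound.
    partial = partial if partial is not None else {}
    if len(partial) == len(order):
        cost = bound(partial, order, domains, pair_cost)
        return (cost, dict(partial)) if cost < best else (best, best_sol)
    var = order[len(partial)]
    for val in domains[var]:
        partial[var] = val
        if bound(partial, order, domains, pair_cost) < best:   # forward prune
            best, best_sol = branch_and_bound(order, domains, pair_cost,
                                              partial, best, best_sol)
        del partial[var]
    return best, best_sol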


Symmetry ◽  
2018 ◽  
Vol 10 (11) ◽  
pp. 633 ◽  
Author(s):  
Jinsheng Gao ◽  
Xiaomin Zhu ◽  
Anbang Liu ◽  
Qingyang Meng ◽  
Runtong Zhang

This paper shows the results of our study on the pick-and-place optimization problem. To solve this problem efficiently, an iterated hybrid local search algorithm (IHLS), which combines local search with integer programming, is proposed. In the local search phase, a greedy algorithm with a distance-weight strategy and a convex-hull strategy is developed to determine the pick-and-place sequence; in the integer programming phase, an integer programming model is built to solve the feeder assignment problem. The experimental results show that the proposed IHLS algorithm has high computational efficiency. Furthermore, compared with the genetic algorithm and the memetic algorithm, the IHLS is less time-consuming and more suitable for solving large-scale problems.
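
The greedy sequencing step can be caricatured as a nearest-point rule (our stripped-down stand-in; the paper's greedy additionally uses the distance-weight and convex-hull strategies).

import math

def greedy_sequence(points, start=(0.0, 0.0)):
    # From the current head position, always pick the nearest
    # remaining placement point; points are (x, y) tuples.
    remaining = list(points)
    seq, pos = [], start
    while remaining:
        nxt = min(remaining, key=lambda p: math.dist(pos, p))
        seq.append(nxt)
        remaining.remove(nxt)
        pos = nxt
    return seq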


2018 ◽  
Vol 2018 ◽  
pp. 1-17 ◽  
Author(s):  
Jun Wang ◽  
Pengcheng Luo ◽  
Xinwu Hu ◽  
Xiaonan Zhang

We propose a hybrid discrete grey wolf optimizer (HDGWO) in this paper to solve the weapon target assignment (WTA) problem, a kind of nonlinear integer programming problem. To make the original grey wolf optimizer (GWO), which was developed only for problems with a continuous solution space, applicable in this context, we first modify it by adopting a decimal integer encoding to represent solutions (wolves) and presenting a modular position update method to update solutions in the discrete solution space. This yields a discrete grey wolf optimizer (DGWO), which we then combine with a local search algorithm (LSA) to obtain the HDGWO. Moreover, we introduce domain-specific knowledge into both the encoding method and the local search algorithm to compress the feasible solution space. Finally, we examine the feasibility and scalability of the HDGWO by applying it to a benchmark case and ten large-scale WTA problems. All running results are compared with those of a discrete particle swarm optimization (DPSO), a genetic algorithm with greedy eugenics (GAWGE), and an adaptive immune genetic algorithm (AIGA). The detailed analysis confirms the feasibility of the HDGWO on the benchmark case and demonstrates its scalability on large-scale WTA problems.
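
One way to picture the decimal integer encoding and a modular position update is the sketch below (our interpretation, not the paper's exact update formula): each gene assigns one weapon to a target index, every gene is pulled toward the three leader wolves, and a final mod keeps the result a feasible index.

import random

def modular_update(wolf, leaders, n_targets, a_max=2.0):
    # `wolf` and each leader (alpha, beta, delta) are lists of target
    # indices, one per weapon; the update averages GWO-style pulls
    # toward the leaders, rounds, and wraps modulo n_targets.
    new = []
    for i, x in enumerate(wolf):
        pulls = []
        for leader in leaders:
            a = random.uniform(-a_max, a_max)     # exploration coefficient
            pulls.append(leader[i] - a * abs(leader[i] - x))
        new.append(round(sum(pulls) / len(pulls)) % n_targets)
    return new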

