A Particle Swarm Based Algorithm for Functional Distributed Constraint Optimization Problems

2020 ◽  
Vol 34 (05) ◽  
pp. 7111-7118
Author(s):  
Moumita Choudhury ◽  
Saaduddin Mahmud ◽  
Md. Mosaddek Khan

Distributed Constraint Optimization Problems (DCOPs) are a widely studied constraint-handling framework. The objective of a DCOP algorithm is to optimize a global objective function that can be described as the aggregation of several distributed constraint cost functions. In a DCOP, each of these functions is defined over a set of discrete variables. However, in many applications, such as target tracking or sleep scheduling in sensor networks, continuous-valued variables are better suited than discrete ones. Considering this, Functional DCOPs (F-DCOPs) have been proposed to explicitly model problems containing continuous variables. Nevertheless, state-of-the-art F-DCOP approaches incur onerous memory or computation overhead. To address this issue, we propose a new F-DCOP algorithm, namely Particle Swarm based F-DCOP (PFD), which is inspired by the Particle Swarm Optimization (PSO) meta-heuristic. Although PSO has been successfully applied to many continuous optimization problems, its potential has not yet been exploited in F-DCOPs. Specifically, PFD devises a distributed method of solution construction while significantly reducing the computation and memory requirements. Moreover, we theoretically prove that PFD is an anytime algorithm. Finally, our empirical results indicate that PFD outperforms the state-of-the-art approaches in terms of solution quality and computation overhead.
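
For readers unfamiliar with PSO, the sketch below shows the standard centralized particle swarm update that PFD builds on; it is not the authors' distributed PFD algorithm, and the inertia/acceleration parameters, bounds, and the toy objective are illustrative assumptions.

```python
import random

def pso_minimize(cost, dim, bounds, n_particles=20, iters=100,
                 w=0.7, c1=1.5, c2=1.5):
    """Minimal particle swarm optimizer (illustrative, not the PFD algorithm).

    cost   -- objective mapping a list of floats to a float
    dim    -- number of continuous variables
    bounds -- (low, high) box constraints applied to every variable
    """
    low, high = bounds
    # Initialize positions uniformly at random and velocities at zero.
    pos = [[random.uniform(low, high) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                      # personal best positions
    pbest_cost = [cost(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_cost[i])
    gbest, gbest_cost = pbest[g][:], pbest_cost[g]   # global best

    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # Velocity update: inertia + cognitive pull + social pull.
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d], low), high)
            c = cost(pos[i])
            if c < pbest_cost[i]:
                pbest[i], pbest_cost[i] = pos[i][:], c
                if c < gbest_cost:
                    gbest, gbest_cost = pos[i][:], c
    return gbest, gbest_cost

# Toy continuous cost standing in for an aggregated F-DCOP objective.
best, best_cost = pso_minimize(lambda x: (x[0] - 1) ** 2 + (x[1] + 2) ** 2,
                               dim=2, bounds=(-10.0, 10.0))
print(best, best_cost)
```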

Author(s):  
Yanchen Deng ◽  
Ziyu Chen ◽  
Dingding Chen ◽  
Wenxin Zhang ◽  
Xingqiong Jiang

Asymmetric distributed constraint optimization problems (ADCOPs) are an emerging model for coordinating agents with personal preferences. However, existing inference-based complete algorithms that use local eliminations cannot be applied to ADCOPs, as parent agents would be required to transfer their private functions to their children. Rather than disclosing private functions explicitly to facilitate local eliminations, we solve the problem by enforcing delayed eliminations and propose AsymDPOP, the first inference-based complete algorithm for ADCOPs. To address the severe scalability problems incurred by delayed eliminations, we propose to reduce memory consumption by propagating a set of smaller utility tables instead of a joint utility table, and to reduce computation effort through sequential optimizations instead of joint optimizations. The empirical evaluation indicates that AsymDPOP significantly outperforms the state-of-the-art, as well as vanilla DPOP with the PEAV formulation.
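
To make the memory argument concrete, the following sketch performs a generic DPOP-style variable elimination over a set of factored utility tables, combining only the tables that mention the eliminated variable rather than materializing one joint table over all variables. It is a simplified centralized illustration under made-up table contents, not AsymDPOP's actual delayed-elimination message scheme.

```python
from itertools import product

def eliminate(var, domains, tables):
    """Eliminate `var` by minimizing the sum of the tables that mention it.

    tables  -- list of (scope, table) pairs; scope is a tuple of variable
               names and table maps assignment tuples over that scope to costs
    domains -- dict: variable name -> list of values
    Returns the untouched tables plus one new table over the remaining scope.
    """
    touched = [(s, t) for s, t in tables if var in s]
    rest = [(s, t) for s, t in tables if var not in s]
    # New scope: every variable the touched tables mention, minus `var`.
    new_scope = tuple(sorted({v for s, _ in touched for v in s} - {var}))
    new_table = {}
    for assignment in product(*(domains[v] for v in new_scope)):
        ctx = dict(zip(new_scope, assignment))
        new_table[assignment] = min(
            sum(t[tuple(dict(ctx, **{var: x})[v] for v in s)] for s, t in touched)
            for x in domains[var]
        )
    return rest + [(new_scope, new_table)]

# Toy example: two binary utility tables sharing variable "a".
domains = {"a": [0, 1], "b": [0, 1], "c": [0, 1]}
t1 = (("a", "b"), {(0, 0): 1, (0, 1): 3, (1, 0): 2, (1, 1): 0})
t2 = (("a", "c"), {(0, 0): 2, (0, 1): 1, (1, 0): 0, (1, 1): 4})
print(eliminate("a", domains, [t1, t2]))
```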


2020 ◽  
Vol 34 (05) ◽  
pp. 7087-7094
Author(s):  
Dingding Chen ◽  
Yanchen Deng ◽  
Ziyu Chen ◽  
Wenxing Zhang ◽  
Zhongshi He

Search and inference are two main strategies for optimally solving Distributed Constraint Optimization Problems (DCOPs). Recently, several algorithms have been proposed to combine their advantages. Unfortunately, such algorithms only use approximate inference as a one-shot preprocessing phase to construct initial lower bounds, which leads to inefficient pruning under a limited memory budget. On the other hand, iterative inference algorithms (e.g., MB-DPOP) perform context-based complete inference for all possible contexts but suffer from tremendous traffic overhead. In this paper, (i) hybridizing search with context-based inference, we propose a complete algorithm for DCOPs, named HS-CAI, in which inference utilizes the contexts derived from the search process to establish tight lower bounds, while search uses these bounds for efficient pruning and thereby reduces the contexts the inference must consider. Furthermore, (ii) we introduce a context evaluation mechanism that selects context patterns for inference, further reducing the overhead incurred by iterative inference. Finally, (iii) we prove the correctness of our algorithm, and the experimental results demonstrate its superiority over the state-of-the-art.
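
To see why tight lower bounds matter for pruning, here is a minimal centralized branch-and-bound sketch in which a pluggable lower-bound function discards partial assignments. It only illustrates the pruning mechanism; it is not the distributed HS-CAI protocol, and the toy problem and function names are assumptions.

```python
def branch_and_bound(variables, domains, cost_of, lower_bound):
    """variables: ordered list of names; domains: dict var -> values;
    cost_of(assignment): cost of constraints already fully assigned;
    lower_bound(assignment): admissible bound on the unassigned remainder."""
    best = {"cost": float("inf"), "assignment": None}

    def search(i, assignment):
        if i == len(variables):
            if cost_of(assignment) < best["cost"]:
                best["cost"], best["assignment"] = cost_of(assignment), dict(assignment)
            return
        var = variables[i]
        for value in domains[var]:
            assignment[var] = value
            # Prune: already-incurred cost plus an optimistic bound on the rest
            # must still beat the best complete solution found so far.
            if cost_of(assignment) + lower_bound(assignment) < best["cost"]:
                search(i + 1, assignment)
            del assignment[var]

    search(0, {})
    return best

# Toy problem: two binary variables, a unit penalty if they are equal.
doms = {"x": [0, 1], "y": [0, 1]}
cost = lambda a: 1 if len(a) == 2 and a.get("x") == a.get("y") else 0
print(branch_and_bound(["x", "y"], doms, cost, lambda a: 0))
```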


2020 ◽  
Author(s):  
Jesús Cerquides ◽  
Juan Antonio Rodríguez-Aguilar ◽  
Rémi Emonet ◽  
Gauthier Picard

In the context of solving large distributed constraint optimization problems, belief propagation and incomplete inference algorithms are methods of choice. However, when the problem structure is highly cyclic, these methods generally perform poorly due to non-convergence and the large number of exchanged messages. To improve the performance of the Max-Sum inference algorithm on cyclic constraint optimization problems, we take inspiration from belief-propagation-guided decimation, which has been used to solve problems on sparse random graphs (k-satisfiability). We propose the novel DeciMaxSum method, which is parameterized in terms of policies that decide when to trigger decimation, which variables to decimate, and which values to assign to decimated variables. In an empirical evaluation on classical constraint optimization benchmarks (graph coloring, random graphs, and the Ising model), several combinations of these policies (periodic decimation, cycle-detection-based decimation, parallel and non-parallel decimation, random or deterministic variable selection, and deterministic or random value sampling) outperform state-of-the-art competitors in many settings.
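
As a rough illustration of the decimation idea (not DeciMaxSum itself), the skeleton below repeatedly runs an inference step, fixes the variable whose estimated costs are most decisive, and re-runs inference on the reduced problem. The belief computation here is a toy stand-in for Max-Sum message passing, and the two policies shown mirror only some of those listed above.

```python
def decimate(variables, compute_beliefs):
    """Iteratively fix ('decimate') the most decided variable.

    compute_beliefs(fixed) -> {var: {value: estimated_cost}} for unfixed
    variables; it stands in for a round of Max-Sum message passing.
    """
    fixed = {}
    while len(fixed) < len(variables):
        beliefs = compute_beliefs(fixed)  # inference step (stubbed below)

        # Policy: decimate the unfixed variable whose best and second-best
        # estimated costs differ the most, i.e. the most confident one.
        def confidence(var):
            costs = sorted(beliefs[var].values())
            return costs[1] - costs[0] if len(costs) > 1 else float("inf")

        var = max((v for v in variables if v not in fixed), key=confidence)
        # Policy: assign its estimated-best value (random sampling is another
        # option, matching the "random sampling for value selection" policy).
        fixed[var] = min(beliefs[var], key=beliefs[var].get)
    return fixed


# Toy stand-in for message passing on a ring x0 - x1 - x2 - x0: a value's
# estimated cost is the number of already-fixed neighbours it disagrees with.
def toy_beliefs(fixed):
    ring = {"x0": ("x1", "x2"), "x1": ("x0", "x2"), "x2": ("x0", "x1")}
    return {v: {val: sum(1 for n in nbrs if fixed.get(n, val) != val)
                for val in (0, 1)}
            for v, nbrs in ring.items() if v not in fixed}

print(decimate(["x0", "x1", "x2"], toy_beliefs))
```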


2020 ◽  
Vol 34 (05) ◽  
pp. 7333-7340
Author(s):  
Roie Zivan ◽  
Omer Lev ◽  
Rotem Galiki

Belief propagation, an algorithm for solving problems represented by graphical models, has long been known to converge to the optimal solution when the graph is a tree. When the graph representing the problem includes a single cycle, the algorithm either converges to the optimal solution or performs periodic oscillations. While the conditions that trigger these two behaviors have been established, the question of the algorithm's convergence or divergence on graphs that include more than one cycle is still open. Focusing on Max-sum, the version of belief propagation for solving distributed constraint optimization problems (DCOPs), we extend the theory on the behavior of belief propagation in general, and Max-sum in particular, when solving problems represented by graphs with multiple cycles. This includes: 1) generalizing the results obtained for graphs with a single cycle to graphs with multiple cycles, using backtrack cost trees (BCTs); 2) proving that when the algorithm is applied to adjacent symmetric cycles, the use of a large enough damping factor guarantees convergence to the optimal solution.
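
Damping is a standard stabilization technique for Max-sum: each outgoing message is a blend of the previously sent message and the newly computed one. The snippet below shows that per-value blend; the specific damping threshold that guarantees convergence on adjacent symmetric cycles comes from the paper's analysis, and the numbers here are purely illustrative.

```python
def damp(old_message, new_message, damping):
    """Per-value blend: damping * old + (1 - damping) * new."""
    return {value: damping * old_message[value] + (1 - damping) * new_message[value]
            for value in new_message}

old = {0: 4.0, 1: 1.0}   # message sent in the previous iteration
new = {0: 0.0, 1: 5.0}   # message computed in the current iteration
print(damp(old, new, damping=0.9))  # with heavy damping, values change slowly
```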


Author(s):  
Tiep Le ◽  
Tran Cao Son ◽  
Enrico Pontelli

This paper proposes Multi-context System for Optimization Problems (MCS-OP) by introducing conditional cost-assignment bridge rules to Multi-context Systems (MCS). This novel feature facilitates the definition of a preorder among equilibria, based on the total incurred cost of the applied bridge rules. As an application of MCS-OP, the paper describes how MCS-OP can be used to model Distributed Constraint Optimization Problems (DCOP), a prominent class of distributed optimization problems that is frequently employed in multi-agent system (MAS) research. The paper shows, by means of an example, that MCS-OP is more expressive than DCOP and hence could potentially be useful for modeling distributed optimization problems that cannot easily be dealt with using DCOPs. It also contains a complexity analysis of MCS-OP.
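
As a toy illustration of the preorder idea only, the snippet below compares candidate equilibria by the total cost of the bridge rules applied to reach them. The rule names and costs are made up, and MCS-OP's semantics is defined logically rather than by code like this.

```python
from typing import Dict, List, Tuple

BridgeRule = Tuple[str, float]             # (rule name, cost if applied)
Equilibria = Dict[str, List[BridgeRule]]   # equilibrium name -> applied rules

def total_cost(applied: List[BridgeRule]) -> float:
    # The preorder compares equilibria by the summed cost of applied rules.
    return sum(cost for _, cost in applied)

def preferred(equilibria: Equilibria) -> List[str]:
    """Equilibria sorted from most to least preferred (lowest total cost first)."""
    return sorted(equilibria, key=lambda name: total_cost(equilibria[name]))

eqs = {
    "E1": [("r1", 2.0), ("r3", 1.0)],
    "E2": [("r2", 0.5)],
    "E3": [("r1", 2.0), ("r2", 0.5), ("r3", 1.0)],
}
print(preferred(eqs))  # ['E2', 'E1', 'E3']
```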


2017 ◽  
Vol 2017 ◽  
pp. 1-25 ◽  
Author(s):  
Ahmad Wedyan ◽  
Jacqueline Whalley ◽  
Ajit Narayanan

A new nature-inspired optimization algorithm called the Hydrological Cycle Algorithm (HCA) is proposed, based on the continuous movement of water in nature. In the HCA, a collection of water drops passes through the stages of the hydrological water cycle, such as flow, evaporation, condensation, and precipitation. Each stage plays an important role in generating solutions and avoiding premature convergence. The HCA shares information through direct and indirect communication among the water drops, which improves solution quality. Similarities and differences between the HCA and other water-based algorithms are identified, and the implications of these differences for overall performance are discussed. A new topological representation for problems with a continuous domain is proposed. In proof-of-concept experiments, the HCA is applied to a variety of benchmark continuous numerical functions. The results are competitive with those of a number of other algorithms and validate the effectiveness of the HCA. The ability of the HCA to escape local optima and converge to global solutions is also demonstrated. Thus, the HCA provides an alternative approach to tackling various types of multimodal continuous optimization problems, as well as an overall framework for water-based particle algorithms in general.
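
For orientation only, here is a generic population-based skeleton organized around the four stages named above (flow, evaporation, condensation, precipitation). The update rules are placeholders chosen purely for illustration and are not the HCA's actual equations, which are defined in the paper.

```python
import random

def water_cycle_skeleton(cost, dim, bounds, n_drops=20, iters=200):
    """Placeholder water-cycle-style loop; every update rule is an assumption."""
    low, high = bounds
    drops = [[random.uniform(low, high) for _ in range(dim)] for _ in range(n_drops)]
    best = min(drops, key=cost)[:]

    for _ in range(iters):
        # Flow: drops drift toward the current best solution (placeholder rule).
        for d in drops:
            for j in range(dim):
                d[j] += 0.1 * (best[j] - d[j]) + random.gauss(0.0, 0.1)
                d[j] = min(max(d[j], low), high)

        # Evaporation + condensation: the worst quarter of the drops is
        # re-created near good drops (placeholder for information sharing).
        drops.sort(key=cost)
        for i in range(n_drops - n_drops // 4, n_drops):
            source = random.choice(drops[: n_drops // 4])
            drops[i] = [min(max(x + random.gauss(0.0, 0.5), low), high) for x in source]

        # Precipitation: occasionally reintroduce a fully random drop to help
        # escape local optima.
        if random.random() < 0.1:
            drops[-1] = [random.uniform(low, high) for _ in range(dim)]

        candidate = min(drops, key=cost)
        if cost(candidate) < cost(best):
            best = candidate[:]
    return best, cost(best)

print(water_cycle_skeleton(lambda x: sum(v * v for v in x), dim=3, bounds=(-5.0, 5.0)))
```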

