Decomposing Constraint Satisfaction Problems by Means of Meta Constraint Satisfaction Optimization Problems

Author(s):  
Sven Löffler ◽  
Ke Liu ◽  
Petra Hofstedt
2021 ◽  
Vol 3 ◽  
Author(s):  
Jan Tönshoff ◽  
Martin Ritzert ◽  
Hinrikus Wolf ◽  
Martin Grohe

Many combinatorial optimization problems can be phrased in the language of constraint satisfaction problems. We introduce a graph neural network architecture for solving such optimization problems. The architecture is generic; it works for all binary constraint satisfaction problems. Training is unsupervised, and it is sufficient to train on relatively small instances; the resulting networks perform well on much larger instances (at least ten times larger). We experimentally evaluate our approach on a variety of problems, including Maximum Cut and Maximum Independent Set. Despite being generic, our approach matches or surpasses most greedy and semidefinite-programming-based algorithms, and sometimes even outperforms state-of-the-art heuristics for the specific problems.
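As a minimal illustration of the phrasing mentioned above (not code from the paper), Maximum Cut can be written as a binary CSP: each vertex becomes a 0/1 variable (its side of the cut) and each edge contributes one disequality constraint, so maximizing the number of satisfied constraints is exactly maximizing the cut.

```python
def cut_value(edges, assignment):
    """Number of satisfied disequality constraints (= size of the cut)."""
    return sum(1 for u, v in edges if assignment[u] != assignment[v])

# Triangle graph: with only two sides, at most 2 of the 3 edge
# constraints can be satisfied, so the optimum cut has size 2.
edges = [(0, 1), (1, 2), (0, 2)]
best = max(
    cut_value(edges, {0: a, 1: b, 2: c})
    for a in (0, 1) for b in (0, 1) for c in (0, 1)
)
print(best)  # 2
```

The brute-force loop stands in for the learned network here; the point is only that the objective of the binary CSP coincides with the Max-Cut objective.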


Author(s):  
Roman Barták

Constraints appear in many areas of human endeavour, from puzzles like crosswords (words can only overlap at a shared letter) and the recently popular Sudoku (no number appears twice in a row), through everyday problems such as planning a meeting (the meeting room must accommodate all participants), to hard optimization problems, for example in manufacturing scheduling (a job must finish before another job starts). Though these problems appear to come from completely different worlds, they share a common core: the task is to find values for decision variables, such as the start time of a job or the position of a number on a board, that respect the given constraints. This problem is called a Constraint Satisfaction Problem (CSP). Constraint processing emerged from AI research in the 1970s (Montanari, 1974), when problems such as scene labelling were studied (Waltz, 1975). The goal of scene labelling was to recognize the type of each line (and thereby the type of each object) in a 2D picture of a 3D scene. The possible types were convex, concave, and occluding lines, and the combinations of types were restricted at line junctions to those that are physically feasible. This scene labelling problem is probably the first problem formalised as a CSP, and some of the techniques developed for solving it, namely arc consistency, remain at the core of constraint processing. The systematic use of constraints in programming systems started in the 1980s, when researchers identified a similarity between unification in logic programming and constraint satisfaction (Gallaire, 1985; Jaffar & Lassez, 1987), and Constraint Logic Programming was born. Today, Constraint Programming is a separate subject independent of the underlying programming language, though constraint logic programming still plays a prominent role thanks to the natural integration of constraints into the logic programming framework. This article presents the mainstream techniques for solving constraint satisfaction problems. These techniques underlie the existing constraint solvers, and understanding them is important for fully exploiting the available technology.
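The CSP ingredients described above can be sketched on a toy problem (illustrative, not from the article): two variables X and Y, each with domain {1, 2, 3}, and the constraint X < Y. The arc-consistency "revise" step the text mentions removes every value of one variable that has no supporting value in the other's domain.

```python
def revise(domains, x, y, constraint):
    """Prune values of x that have no support in y's domain.

    constraint(a, b) is read as "value a for x is compatible with
    value b for y". Returns True if anything was pruned.
    """
    pruned = {a for a in domains[x]
              if not any(constraint(a, b) for b in domains[y])}
    domains[x] -= pruned
    return bool(pruned)

domains = {"X": {1, 2, 3}, "Y": {1, 2, 3}}
revise(domains, "X", "Y", lambda a, b: a < b)  # X=3 has no support, removed
revise(domains, "Y", "X", lambda b, a: a < b)  # Y=1 has no support, removed
print(domains)  # {'X': {1, 2}, 'Y': {2, 3}}
```

A full arc-consistency algorithm such as AC-3 simply calls this revision repeatedly, re-examining arcs whose neighbours were pruned, until nothing changes.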


AI Magazine ◽  
2012 ◽  
Vol 33 (3) ◽  
pp. 53 ◽  
Author(s):  
William Yeoh ◽  
Makoto Yokoo

Distributed problem solving is a subfield within multiagent systems, where agents are assumed to be part of a team and collaborate with each other to reach a common goal. In this article, we illustrate the motivations for distributed problem solving and provide an overview of two distributed problem solving models, namely distributed constraint satisfaction problems (DCSPs) and distributed constraint optimization problems (DCOPs), and some of their algorithms.


2002 ◽  
Vol 11 (03) ◽  
pp. 425-436 ◽  
Author(s):  
MOHAMED TOUNSI ◽  
PHILIPPE DAVID

In this paper we introduce a new method based on Russian Doll Search (RDS) for solving optimization problems expressed as Valued Constraint Satisfaction Problems (VCSPs). The RDS method solves a problem of size n (where n is the number of variables) by replacing one search with n successive searches on nested subproblems, using the result of each search to produce a better lower bound. The main idea of our method is to introduce the variables through the successive searches not one by one but in sets of k variables. We present two variants of our method: the first, denoted kfRDS, where the number k is fixed; the second, kvRDS, where k can vary. Finally, we show that our method improves on RDS for the daily management of an Earth observation satellite.
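The nested-search idea can be sketched as follows (an illustrative toy, not the authors' implementation): a brute-force solver computes the optimum of each suffix subproblem, from the last variable alone up to the full problem, yielding the nondecreasing sequence of lower bounds that RDS reuses during search.

```python
from itertools import product

def subproblem_opt(variables, domains, costs, start):
    """Brute-force optimum of the subproblem on variables[start:].

    costs is a list of (scope, cost_fn) pairs; only constraints whose
    scope lies entirely inside the suffix contribute.
    """
    suffix = variables[start:]
    best = float("inf")
    for values in product(*(domains[v] for v in suffix)):
        assign = dict(zip(suffix, values))
        total = sum(c(assign) for scope, c in costs
                    if all(v in assign for v in scope))
        best = min(best, total)
    return best

# Toy VCSP: 3 binary variables, soft disequality on every pair (an odd
# cycle, so at least one constraint must be violated in the full problem).
variables = ["a", "b", "c"]
domains = {v: (0, 1) for v in variables}
costs = [
    (("a", "b"), lambda s: int(s["a"] == s["b"])),  # prefer a != b
    (("b", "c"), lambda s: int(s["b"] == s["c"])),  # prefer b != c
    (("a", "c"), lambda s: int(s["a"] == s["c"])),  # prefer a != c
]
rds_bounds = [subproblem_opt(variables, domains, costs, i)
              for i in range(len(variables) - 1, -1, -1)]
print(rds_bounds)  # [0, 0, 1]
```

The paper's kfRDS/kvRDS variants would grow the suffix by k variables at a time instead of one, trading fewer searches against looser intermediate bounds.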


1999 ◽  
Vol 08 (04) ◽  
pp. 363-383 ◽  
Author(s):  
PETER STUCKEY ◽  
VINCENT TAM

Hard or large-scale constraint satisfaction and optimization problems occur widely in artificial intelligence and operations research. These problems are often difficult to solve with global search methods, but many of them can be solved efficiently by local search methods. Evolutionary algorithms are local search methods that have had considerable success in tackling difficult, or ill-defined, optimization problems. In contrast, they have not been as successful in tackling constraint satisfaction problems. Other local search methods, in particular GENET and EGENET, are designed specifically for constraint satisfaction problems and have demonstrated remarkable success in solving hard instances of these problems. In this paper we examine how the mechanisms that were so successful in (E)GENET can be transferred to evolutionary algorithms in order to tackle constraint satisfaction problems efficiently. An empirical comparison shows how our evolutionary algorithm, improved by mechanisms from EGENET, can markedly improve on the efficiency of EGENET in solving certain hard instances of constraint satisfaction problems.


2015 ◽  
Vol 2015 ◽  
pp. 1-10 ◽  
Author(s):  
N. Bouhmala

The constraint satisfaction problem (CSP) is a widely used paradigm for modeling a broad spectrum of optimization problems in artificial intelligence. This paper presents a fast metaheuristic for solving binary constraint satisfaction problems. The method can be classified as a variable depth search metaheuristic combining greedy local search with a self-adaptive weighting strategy on the constraint weights. Several metaheuristics using various penalty-weight mechanisms on the constraints have been developed in the past. What distinguishes the proposed metaheuristic from these is the update of k variables during each iteration when moving from one assignment of values to another. The benchmark is based on hard random constraint satisfaction problems with several features that make them of great theoretical and practical interest. The results show that the proposed metaheuristic is capable of solving hard unsolved problems that remain a challenge for both complete and incomplete methods. In addition, the proposed metaheuristic is remarkably faster than all existing solvers when tested on previously solved instances. Finally, in contrast to other metaheuristics, its distinctive feature is the absence of parameter tuning, making it highly suitable in practical scenarios.
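The general constraint-weighting idea mentioned above can be sketched as follows (function names and details are illustrative, not the paper's algorithm): a greedy local search minimizes the weighted count of violated constraints and, whenever it reaches a local minimum, raises the weight of every currently violated constraint so the penalty landscape changes and the search can escape.

```python
import random

def weighted_search(n, domains, constraints, max_steps=2000, seed=0):
    """Greedy local search with self-adaptive constraint weights."""
    rng = random.Random(seed)
    assign = [rng.choice(domains[v]) for v in range(n)]
    weights = [1] * len(constraints)

    def penalty(a):
        return sum(w for w, check in zip(weights, constraints) if not check(a))

    for _ in range(max_steps):
        current = penalty(assign)
        if current == 0:
            return assign            # all constraints satisfied
        best_p, best_move = current, None
        for v in range(n):           # best single-variable change
            for d in domains[v]:
                if d == assign[v]:
                    continue
                trial = list(assign)
                trial[v] = d
                p = penalty(trial)
                if p < best_p:
                    best_p, best_move = p, (v, d)
        if best_move is None:        # local minimum: reweight violations
            for i, check in enumerate(constraints):
                if not check(assign):
                    weights[i] += 1
        else:
            v, d = best_move
            assign[v] = d
    return None

# 2-coloring a 4-cycle: each edge demands different values at its ends.
constraints = [lambda a, i=i: a[i] != a[(i + 1) % 4] for i in range(4)]
solution = weighted_search(4, {v: [0, 1] for v in range(4)}, constraints)
print(solution)  # a proper 2-coloring, e.g. [0, 1, 0, 1]
```

The paper's method additionally changes k variables per iteration and needs no tuned parameters; this sketch shows only the shared weighting mechanism with one move per step.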


2016 ◽  
Vol 2016 ◽  
pp. 1-13 ◽  
Author(s):  
Paweł Sitek ◽  
Jarosław Wikarek

This paper proposes a hybrid programming framework for modeling and solving constraint satisfaction problems (CSPs) and constraint optimization problems (COPs). Two paradigms, CLP (constraint logic programming) and MP (mathematical programming), are integrated in the framework. The integration is supplemented with an original problem-transformation method, used in the framework as a presolving step; the transformation substantially reduces the feasible solution space. The framework automatically generates CSP and COP models based on the current values of data instances, the questions asked by a user, and the set of predicates and facts of the problem being modeled, which together constitute a knowledge base for the given problem. This dynamic generation of dedicated models from the knowledge base, together with externally changing parameters such as the user's questions, implements the autonomous search concept. The models are solved using internal or external solvers integrated with the framework. The architecture of the framework and an outline of its implementation are also included in the paper. The effectiveness of the framework for modeling and solution search is assessed through illustrative examples of scheduling problems with additional constrained resources.


Author(s):  
Chun-Yan Zhao ◽  
Yan-Rong Fu ◽  
Jin-Hua Zhao

Abstract: Message passing algorithms, whose iterative nature captures the complicated interactions among interconnected variables in complex systems and extracts information from the fixed point of the iterated messages, provide a powerful toolkit for tackling hard computational tasks in optimization, inference, and learning problems. In the context of constraint satisfaction problems (CSPs), when a control parameter (such as constraint density) is tuned, multiple threshold phenomena emerge, signaling fundamental structural transitions in the solution space. Finding solutions around these transition points is exceedingly challenging for algorithm design, and message passing algorithms suffer from large message fluctuations far from convergence. Here we introduce a residual-based updating step into message passing algorithms, in which messages that vary greatly between consecutive steps are given high priority in the updating process. For the specific example of model RB, a typical prototype of random CSPs with growing domains, we show that our algorithm improves the convergence of message updating and increases the success probability of finding solutions around the satisfiability threshold at low computational cost. Our approach should be of value for exploring the power of message passing algorithms in developing algorithms that find ground-state solutions and in understanding the detailed structure of the solution space of hard optimization problems.
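The residual-based schedule can be sketched generically (an illustrative toy on a two-message contraction, not the paper's algorithm for model RB): messages sit in a priority queue keyed by residual, and the message that changed most between consecutive steps is updated first, until the largest residual falls below tolerance.

```python
import heapq

def residual_solve(init, updates, tol=1e-9, max_updates=10000):
    """Residual-prioritized fixed-point iteration.

    updates[k](msgs) returns the candidate new value of message k;
    the residual of k is how much that candidate differs from msgs[k].
    """
    msgs = dict(init)
    heap = []
    for k in msgs:
        r = abs(updates[k](msgs) - msgs[k])
        heapq.heappush(heap, (-r, k))         # max-heap via negation
    for _ in range(max_updates):
        neg_r, k = heapq.heappop(heap)
        if -neg_r < tol:                      # largest residual tiny: done
            return msgs
        msgs[k] = updates[k](msgs)            # highest-priority update first
        for j in updates:                     # refresh residuals (naively: all)
            heapq.heappush(heap, (-abs(updates[j](msgs) - msgs[j]), j))
    return msgs

# Toy contracting system with fixed point x = 0.5, y = 0.25:
updates = {"x": lambda m: 0.5 * m["y"] + 0.375,
           "y": lambda m: 0.5 * m["x"]}
msgs = residual_solve({"x": 0.0, "y": 0.0}, updates)
print(round(msgs["x"], 6), round(msgs["y"], 6))  # 0.5 0.25
```

Stale heap entries are harmless here: re-applying an already-converged update is idempotent, and fresh residuals are re-pushed after every step, so the pop order still follows the true largest residual.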


Author(s):  
Lenka Zdeborová

Statistical physics of hard optimization problems

Optimization is fundamental in many areas of science, from computer science and information theory to engineering and statistical physics, as well as biology and the social sciences. It typically involves a large number of variables and a cost function depending on these variables. Optimization problems in the non-deterministic polynomial-time (NP)-complete class are particularly difficult: it is believed that, in the most difficult cases, the number of operations required to minimize the cost function is exponential in the system size. However, even for an NP-complete problem, the instances arising in practice may in fact be easy to solve. The principal question we address in this article is: how can we recognize whether an NP-complete constraint satisfaction problem is typically hard, and what are the main reasons for this? We adopt approaches from the statistical physics of disordered systems, in particular the cavity method, developed originally to describe glassy systems. We describe new properties of the space of solutions in two of the most studied constraint satisfaction problems: random satisfiability and random graph coloring. We suggest a relation between the existence of so-called frozen variables and the algorithmic hardness of a problem. Based on these insights, we introduce a new class of problems, which we name "locked" constraint satisfaction, where the statistical description is easily solvable but which, from the algorithmic point of view, are even more challenging than canonical satisfiability.
