Graph Neural Networks for Maximum Constraint Satisfaction

2021 ◽  
Vol 3 ◽  
Author(s):  
Jan Tönshoff ◽  
Martin Ritzert ◽  
Hinrikus Wolf ◽  
Martin Grohe

Many combinatorial optimization problems can be phrased in the language of constraint satisfaction problems. We introduce a graph neural network architecture for solving such optimization problems. The architecture is generic; it works for all binary constraint satisfaction problems. Training is unsupervised, and it is sufficient to train on relatively small instances; the resulting networks perform well on much larger instances (at least ten times larger). We experimentally evaluate our approach on a variety of problems, including Maximum Cut and Maximum Independent Set. Despite being generic, our approach matches or surpasses most greedy and semidefinite-programming-based algorithms and sometimes even outperforms state-of-the-art heuristics for the specific problems.
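For intuition, here is a minimal, hypothetical sketch of the kind of unsupervised objective the abstract describes: soft node assignments are optimized by gradient descent so that the expected number of violated constraints (here, uncut edges in Maximum Cut) is minimized. The network producing the assignments is elided; this is not the authors' implementation.

```python
# Hypothetical sketch of an unsupervised Max-CSP loss (Max-Cut as the
# example constraint type); a GNN would normally produce the logits.
import torch

def maxcut_loss(probs, edges):
    """probs: (n,) tensor with P(node = 1); edges: list of (u, v) pairs.
    Returns the expected number of *uncut* edges under independent
    rounding, so minimizing it maximizes the expected cut size."""
    u = probs[[e[0] for e in edges]]
    v = probs[[e[1] for e in edges]]
    agree = u * v + (1 - u) * (1 - v)   # P(endpoints land on the same side)
    return agree.sum()

edges = [(0, 1), (1, 2), (2, 0), (2, 3)]
logits = torch.randn(4, requires_grad=True)
opt = torch.optim.Adam([logits], lr=0.1)
for _ in range(200):
    opt.zero_grad()
    maxcut_loss(torch.sigmoid(logits), edges).backward()
    opt.step()
print(torch.sigmoid(logits).round())    # hard assignment after rounding
```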

2018 ◽  
Vol 30 (5) ◽  
pp. 1359-1393 ◽  
Author(s):  
Ueli Rutishauser ◽  
Jean-Jacques Slotine ◽  
Rodney J. Douglas

Finding actions that satisfy the constraints imposed by both external inputs and internal representations is central to decision making. We demonstrate that some important classes of constraint satisfaction problems (CSPs) can be solved by networks composed of homogeneous cooperative-competitive modules that have connectivity similar to motifs observed in the superficial layers of neocortex. The winner-take-all modules are sparsely coupled by programming neurons that embed the constraints onto the otherwise homogeneous modular computational substrate. We give rules that embed any instance of the CSPs planar four-color graph coloring, maximum independent set, and Sudoku on this substrate, and we provide mathematical proofs guaranteeing that the graph coloring problems converge to a solution. The network is composed of nonsaturating linear threshold neurons. Their lack of right saturation allows the overall network to explore the problem space, driven by the unstable dynamics generated by recurrent excitation. The direction of exploration is steered by the constraint neurons. While many problems can be solved using only linear inhibitory constraints, network performance on hard problems benefits significantly when these negative constraints are implemented by nonlinear multiplicative inhibition. Overall, our results demonstrate the importance of instability rather than stability in network computation and offer insight into the computational role of dual inhibitory mechanisms in neural circuits.
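As a rough illustration of the building block the abstract describes, the following toy simulation (an assumption-laden sketch, not the paper's circuit; the gain parameters alpha, beta and the step dt are invented) implements a winner-take-all module from non-saturating linear threshold units: recurrent self-excitation amplifies differences while shared inhibition silences all but the strongest unit.

```python
# Toy winner-take-all dynamics with non-saturating linear threshold
# (rectified linear) units; parameters are illustrative choices.
import numpy as np

def wta_step(x, inp, alpha=1.2, beta=1.5, dt=0.1):
    relu = lambda z: np.maximum(z, 0.0)
    inhib = beta * x.sum()                  # shared inhibitory feedback
    dx = -x + relu(inp + alpha * x - inhib) # self-excitation vs. inhibition
    return x + dt * dx

x = np.zeros(4)
inp = np.array([1.0, 1.2, 0.9, 1.1])        # unit 1 has the largest drive
for _ in range(300):
    x = wta_step(x, inp)
print(x.round(2))  # only the winning unit remains active
```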


Author(s):  
Vyacheslav Korolyov ◽  
Oleksandr Khodzinskyi

Introduction. Quantum computers solve some NP-hard combinatorial optimization problems several times faster than computing clusters. The yearly doubling of the number of qubits in quantum computers suggests an analog of Moore's law for quantum computing, which means that soon they may also deliver significant speedups on many applied large-scale problems. The purpose of the article is to review methods for creating algorithms of quantum computer mathematics for combinatorial optimization problems and to analyze how the coupling between qubits and the strength of their connections affect the performance of quantum data processing.

Results. The article offers approaches to classifying algorithms for solving these problems from the perspective of quantum computer mathematics. It is shown that the number and strength of connections between qubits affect the dimensionality of the problems solvable by algorithms of quantum computer mathematics. Two approaches to solving combinatorial optimization problems on quantum computers are considered: a universal one, using quantum gates, and a specialized one, based on the parameterization of physical processes. Examples are given of constructing a half-adder from two qubits of an IBM quantum processor and of solving the maximum independent set problem on the IBM and D-Wave quantum computers.

Conclusions. Today, quantum computers are available online through cloud services for research and commercial use. At present, quantum processors do not have enough qubits to replace semiconductor computers in universal computing. The search for a solution to a combinatorial optimization problem is performed by reaching the minimum energy of the system of coupled qubits onto which the task is mapped, with the data serving as the initial conditions. Approaches to solving combinatorial optimization problems on quantum computers are reviewed, and results are given for the maximum independent set problem on the IBM and D-Wave quantum computers.

Keywords: quantum computer, quantum computer mathematics, qubit, maximum independent set of a graph.
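To make the energy-minimization mapping concrete, here is a small illustrative sketch (not from the article) of the standard QUBO encoding of maximum independent set, the form of objective that annealers such as D-Wave minimize; the penalty constant and the brute-force solver stand in for the quantum hardware.

```python
# QUBO for MIS: H(x) = -sum_i x_i + P * sum_{(i,j) in E} x_i x_j,
# with penalty P > 1 so that no violating pair is ever worth keeping.
from itertools import product

def mis_qubo_energy(x, edges, penalty=2.0):
    return -sum(x) + penalty * sum(x[i] * x[j] for i, j in edges)

edges = [(0, 1), (1, 2), (2, 3), (3, 0)]   # a 4-cycle; MIS size is 2
best = min(product([0, 1], repeat=4),
           key=lambda x: mis_qubo_energy(x, edges))
print(best)  # (0, 1, 0, 1): one optimal pair of opposite corners
```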


2015 ◽  
Vol 2015 ◽  
pp. 1-10 ◽  
Author(s):  
N. Bouhmala

The constraint satisfaction problem (CSP) is a widely used paradigm for modeling a broad spectrum of optimization problems in artificial intelligence. This paper presents a fast metaheuristic for solving binary constraint satisfaction problems. The method can be classified as a variable depth search metaheuristic that combines greedy local search with a self-adaptive weighting strategy on the constraint weights. Several metaheuristics using various penalty-weight mechanisms on the constraints have been developed in the past. What distinguishes the proposed metaheuristic from those is the update of k variables during each iteration when moving from one assignment of values to another. The benchmark is based on hard random constraint satisfaction problems with several features that make them of great theoretical and practical interest. The results show that the proposed metaheuristic is capable of solving hard, previously unsolved problems that remain a challenge for both complete and incomplete methods. In addition, the proposed metaheuristic is remarkably faster than all existing solvers when tested on previously solved instances. Finally, in contrast to other metaheuristics, it requires no parameter tuning, making it highly suitable for practical scenarios.
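For orientation, here is a generic sketch of constraint-weighting local search, the family the abstract places the method in (a simpler single-variable variant, not the paper's k-variable variable depth search): when the search stalls, the weights of still-violated constraints grow, reshaping the cost surface so the search can escape the local minimum.

```python
# Generic weighted local search for a binary CSP; illustrative only.
import random

def weighted_local_search(variables, domains, constraints, steps=10000):
    """constraints: list of predicates over a full assignment dict."""
    assign = {v: random.choice(domains[v]) for v in variables}
    weights = [1] * len(constraints)

    def cost(a):
        return sum(w for w, c in zip(weights, constraints) if not c(a))

    for _ in range(steps):
        if cost(assign) == 0:
            return assign                       # all constraints satisfied
        v = random.choice(variables)
        best = min(domains[v], key=lambda d: cost({**assign, v: d}))
        if cost({**assign, v: best}) < cost(assign):
            assign[v] = best                    # greedy improving move
        else:                                   # stuck: adapt the weights
            for k, c in enumerate(constraints):
                if not c(assign):
                    weights[k] += 1
    return assign

# Toy CSP: three variables over {0, 1, 2}, all pairwise different.
vs = ["x", "y", "z"]
doms = {v: [0, 1, 2] for v in vs}
cons = [lambda a, p=p, q=q: a[p] != a[q]
        for p, q in [("x", "y"), ("y", "z"), ("x", "z")]]
print(weighted_local_search(vs, doms, cons))
```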


Author(s):  
Quentin Cappart ◽  
Emmanuel Goutierre ◽  
David Bergman ◽  
Louis-Martin Rousseau

Finding tight bounds on the optimal solution is a critical element of practical solution methods for discrete optimization problems. In the last decade, decision diagrams (DDs) have brought a new perspective on obtaining upper and lower bounds that can be significantly better than classical bounding mechanisms, such as linear relaxations. It is well known that the quality of the bounds achieved through this flexible bounding method is highly reliant on the ordering of variables chosen for building the diagram, and finding an ordering that optimizes standard metrics is an NP-hard problem. In this paper, we propose an innovative and generic approach based on deep reinforcement learning for obtaining an ordering that tightens the bounds obtained with relaxed and restricted DDs. We apply the approach to both the Maximum Independent Set Problem and the Maximum Cut Problem. Experimental results on synthetic instances show that the deep reinforcement learning approach, by achieving tighter objective function bounds, generally outperforms ordering methods commonly used in the literature when the distribution of instances is known. To the best of the authors' knowledge, this is the first paper to apply machine learning to directly improve relaxation bounds obtained by general-purpose bounding mechanisms for combinatorial optimization problems.
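To illustrate why the variable ordering matters, here is a toy relaxed decision diagram for maximum independent set (a sketch under assumed state and merging rules, not the paper's implementation): each layer fixes one vertex, and when a layer exceeds the width limit, states are merged by taking the union of their eligible sets, which relaxes the problem and yields an upper bound whose tightness depends on the ordering passed in.

```python
# Relaxed DD bound for MIS; state = set of still-eligible vertices.
def relaxed_dd_bound(n, edges, order, max_width=2):
    nbr = {v: set() for v in range(n)}
    for u, v in edges:
        nbr[u].add(v); nbr[v].add(u)

    layer = {frozenset(range(n)): 0}           # eligible set -> best value
    for v in order:
        nxt = {}
        for elig, val in layer.items():
            choices = [(elig - {v}, val)]      # exclude vertex v
            if v in elig:                      # include v, drop neighbors
                choices.append((elig - {v} - nbr[v], val + 1))
            for s, w in choices:
                nxt[s] = max(nxt.get(s, 0), w)
        while len(nxt) > max_width:            # relax: merge two states
            (s1, w1), (s2, w2) = sorted(nxt.items(), key=lambda kv: kv[1])[:2]
            del nxt[s1], nxt[s2]
            m = s1 | s2                        # union over-approximates both
            nxt[m] = max(nxt.get(m, 0), w1, w2)
        layer = nxt
    return max(layer.values())

edges = [(0, 1), (1, 2), (2, 3), (3, 0)]       # 4-cycle, optimum = 2
print(relaxed_dd_bound(4, edges, order=[0, 2, 1, 3]))
print(relaxed_dd_bound(4, edges, order=[0, 1, 2, 3]))
```

On larger graphs, different orderings can produce visibly different bounds; the paper's contribution is learning such an ordering with deep reinforcement learning rather than fixing it heuristically.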


2021 ◽  
Vol 8 (1) ◽  
Author(s):  
Yaoxin Li ◽  
Jing Liu ◽  
Guozheng Lin ◽  
Yueyuan Hou ◽  
Muyun Mou ◽  
...  

In computer science there exist a large number of optimization problems defined on graphs, where the goal is to find the best node-state configuration or network structure such that a designed objective function is optimized under some constraints. These problems are notorious for their hardness, as most of them are NP-hard or NP-complete. Although traditional general methods such as simulated annealing (SA) and genetic algorithms (GA) have been applied to these hard problems, their accuracy and running time are often unsatisfactory in practice. In this work, we propose a simple, fast, and general algorithmic framework based on automatic differentiation as provided by deep learning frameworks. By introducing the Gumbel-softmax technique, we can optimize the objective function directly by gradient descent despite the discrete nature of the variables. We also introduce an evolution strategy in a parallel version of our algorithm. We test our algorithm on four representative optimization problems on graphs, including modularity optimization from network science, the Sherrington–Kirkpatrick (SK) model from statistical physics, the maximum independent set (MIS) and minimum vertex cover (MVC) problems from combinatorial optimization, and the influence maximization problem from computational social science. High-quality solutions can be obtained in much less time than with the traditional approaches.
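Here is a sketch of the core trick the abstract describes: binary decisions are relaxed with Gumbel-softmax so a discrete objective (a penalized MIS objective is used here as an illustrative choice; the penalty, learning rate, and annealing schedule are assumptions) becomes differentiable and can be pushed through ordinary gradient descent.

```python
# Gumbel-softmax relaxation of a discrete graph objective (toy MIS).
import torch
import torch.nn.functional as F

edges = torch.tensor([(0, 1), (1, 2), (2, 3), (3, 0)])   # a 4-cycle
logits = torch.randn(4, 2, requires_grad=True)  # per-node {out, in} logits
opt = torch.optim.Adam([logits], lr=0.05)

for step in range(500):
    opt.zero_grad()
    # Differentiable near-one-hot samples; tau anneals toward discreteness.
    x = F.gumbel_softmax(logits, tau=max(0.1, 1 - step / 500))[:, 1]
    conflicts = (x[edges[:, 0]] * x[edges[:, 1]]).sum()
    loss = -x.sum() + 2.0 * conflicts   # maximize set size, penalize edges
    loss.backward()
    opt.step()

print(logits.argmax(dim=1))  # hard decode, e.g. tensor([1, 0, 1, 0])
```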

