Constraint Optimization Problems
Recently Published Documents


TOTAL DOCUMENTS: 93 (FIVE YEARS: 33)

H-INDEX: 13 (FIVE YEARS: 2)

Author(s): Yanchen Deng, Runsheng Yu, Xinrun Wang, Bo An

Distributed constraint optimization problems (DCOPs) are a powerful model for multi-agent coordination and optimization, where information and control are naturally distributed among multiple agents. Sampling-based algorithms are important incomplete techniques for solving medium-scale DCOPs. However, they use tables to exactly store all the information (e.g., costs, confidence bounds) needed for sampling, which limits their scalability. This paper tackles this limitation by incorporating deep neural networks into DCOP solving for the first time, presenting a neural-based sampling scheme built upon regret-matching. In the algorithm, each agent trains a neural network to approximate the regret associated with its local problem and samples according to the estimated regret. Furthermore, to ensure exploration, we propose a regret rounding scheme that rounds small regret values up to positive numbers. We theoretically establish a regret bound for our algorithm, and extensive evaluations indicate that it scales to large-scale DCOPs and significantly outperforms state-of-the-art methods.
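As a rough illustration of the sampling side of this idea, the following minimal Python sketch implements tabular regret matching with a rounding floor that keeps every value's probability positive; all names, constants, and the toy cost table are assumptions for illustration, and in the paper a neural network would replace the explicit regret table.

```python
import numpy as np

def regret_matching_probs(regrets, floor=1e-3):
    """Regret matching with a rounding floor: small or negative regrets
    are rounded up to `floor`, so every value keeps positive sampling
    probability (the exploration guarantee described in the abstract)."""
    r = np.maximum(regrets, floor)
    return r / r.sum()

# Hypothetical single-agent loop over a local cost table.
rng = np.random.default_rng(0)
costs = rng.uniform(size=5)          # local costs of 5 candidate values (toy)
regrets = np.zeros(5)
for t in range(1000):
    probs = regret_matching_probs(regrets)
    a = rng.choice(5, p=probs)       # sample a value from the regret scheme
    payoff = -costs                  # lower cost = higher payoff
    regrets += payoff - payoff[a]    # counterfactual regret update
print(probs.round(3))                # mass concentrates on the min-cost value
```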


Author(s): Tarun Gangwar, Dominik Schillinger

Abstract: We present a concurrent material and structure optimization framework for multiphase hierarchical systems that relies on homogenization estimates based on continuum micromechanics to account for material behavior across many different length scales. We show that the analytical nature of these estimates enables material optimization via a series of inexpensive "discretization-free" constraint optimization problems whose computational cost is independent of the number of hierarchical scales involved. To illustrate the strength of this unique property, we define new benchmark tests with several material scales that become computationally feasible for the first time via our framework. We also outline its potential in engineering applications by reproducing self-optimizing mechanisms in the natural hierarchical system of bamboo culm tissue.
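To make the "discretization-free" idea concrete, here is a minimal sketch of one scale-level subproblem as a cheap constrained program. A Reuss-type harmonic average stands in for the paper's continuum-micromechanics estimates, and all stiffness and cost numbers are invented; the point is only that the objective is a closed-form function of the design variables, so no field discretization is needed.

```python
import numpy as np
from scipy.optimize import minimize

# Assumed analytical homogenization estimate: Reuss (harmonic) average
# of phase stiffnesses E as a function of volume fractions f. Real
# micromechanics estimates are more elaborate but equally cheap.
E = np.array([1.0, 10.0, 50.0])           # phase stiffnesses (assumed)

def homogenized_stiffness(f):
    return 1.0 / max(np.sum(f / E), 1e-12)

# Maximize stiffness subject to f >= 0, sum(f) = 1, and a cost budget.
cost = np.array([1.0, 3.0, 9.0])          # assumed per-phase cost
budget = 4.0
res = minimize(
    lambda f: -homogenized_stiffness(f),
    x0=np.full(3, 1 / 3),
    bounds=[(0, 1)] * 3,
    constraints=[{"type": "eq", "fun": lambda f: f.sum() - 1.0},
                 {"type": "ineq", "fun": lambda f: budget - cost @ f}],
)
print(res.x.round(3), -res.fun)           # optimal fractions and stiffness
```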


Author(s): Muhammad Farhan Tabassum, Sana Akram, Saadia Mahmood-ul-Hassan, Rabia Karim, Parvaiz Ahmad Naik, ...

Optimization is important and widely applicable across disciplines, and it has played a key role in practical engineering problems. This paper presents a novel hybrid meta-heuristic optimization algorithm, Differential Gradient Evolution Plus (DGE+), based on Differential Evolution (DE), Gradient Evolution (GE), and a jumping technique. The proposed algorithm hybridizes the above-mentioned algorithms by means of an improvised dynamic probability distribution and additionally provides a new shake-off method to avoid premature convergence toward local minima. To evaluate the efficiency, robustness, and reliability of DGE+, it has been applied to seven benchmark constrained problems; the comparison results reveal that the proposed algorithm delivers very compact, competitive, and promising performance.
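The abstract does not spell out the update rules, so the following minimal sketch only illustrates the general shape of such a hybrid: each individual takes either a DE mutation step or a numerical-gradient step according to a dynamic probability, and a shake-off perturbation fires when the population stagnates. All constants, the schedule, and the sphere objective are assumptions, not the DGE+ specification.

```python
import numpy as np

def sphere(x):                        # toy benchmark objective (assumed)
    return float(np.sum(x ** 2))

def num_grad(f, x, h=1e-6):           # central-difference gradient
    e = np.eye(len(x))
    return np.array([(f(x + h * ei) - f(x - h * ei)) / (2 * h) for ei in e])

rng = np.random.default_rng(1)
dim, npop = 5, 20
pop = rng.uniform(-5, 5, (npop, dim))
fit = np.array([sphere(x) for x in pop])

for gen in range(200):
    p_de = 1.0 - gen / 200            # dynamic probability: DE early, GE late
    for i in range(npop):
        if rng.random() < p_de:       # DE/rand/1 mutation + binomial crossover
            a, b, c = pop[rng.choice(npop, 3, replace=False)]
            trial = np.where(rng.random(dim) < 0.9, a + 0.8 * (b - c), pop[i])
        else:                         # gradient-evolution style descent step
            trial = pop[i] - 0.1 * num_grad(sphere, pop[i])
        if sphere(trial) < fit[i]:    # greedy selection
            pop[i], fit[i] = trial, sphere(trial)
    if fit.std() < 1e-12:             # shake-off: perturb on stagnation
        pop += rng.normal(0, 0.5, pop.shape)
        fit = np.array([sphere(x) for x in pop])
print(fit.min())
```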


Mathematics, 2021, Vol. 9 (8), pp. 802
Author(s): Andreea Bejenaru

This paper begins with a geometric statement of constraint optimization problems, which include both equality- and inequality-type restrictions. The cost to optimize is a curvilinear functional defined by a given differential one-form, while the optimal state to be determined is a differential curve connecting two given points, among all curves satisfying the given primal feasibility conditions. The outcome is an invariant curvilinear Fritz-John maximum principle. Afterward, this result is approached by means of parametric equations, and the classical single-time Pontryagin maximum principle for curvilinear cost functionals is recovered as a consequence.
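In coordinates, the problem class can be sketched roughly as follows; the notation is assumed for illustration, while the paper states the problem invariantly.

```latex
% Coordinate sketch (notation assumed): minimize the curvilinear cost
% given by a one-form \omega = \omega_i(x)\,dx^i over curves joining two
% fixed points, under inequality (g) and equality (h) restrictions.
\begin{aligned}
  \min_{x(\cdot)}\quad & \int_{\Gamma}\omega
      \;=\; \int_{t_0}^{t_1} \omega_i\bigl(x(t)\bigr)\,\dot x^i(t)\,dt\\
  \text{s.t.}\quad & g^a\bigl(x(t)\bigr)\le 0,\qquad h^b\bigl(x(t)\bigr)=0,\\
  & x(t_0)=x_0,\qquad x(t_1)=x_1.
\end{aligned}
% Fritz--John form: there exist multipliers (\lambda_0, \mu_a(t), \nu_b(t)),
% not all zero, with \lambda_0 \ge 0, \mu_a \ge 0, and complementary
% slackness \mu_a\, g^a = 0 along the optimal curve.
```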


2020
Author(s): Jesús Cerquides, Juan Antonio Rodríguez-Aguilar, Rémi Emonet, Gauthier Picard

Abstract: In the context of solving large distributed constraint optimization problems, belief-propagation and incomplete inference algorithms are candidates of choice. In general, however, when the problem structure is very cyclic, these solution methods suffer from poor performance due to non-convergence and a large number of exchanged messages. To improve the performance of the Max-Sum inference algorithm when solving cyclic constraint optimization problems, we take inspiration from the belief-propagation-guided decimation used to solve sparse random graphs ($k$-satisfiability). We propose the novel DeciMaxSum method, which is parameterized in terms of policies that decide when to trigger decimation, which variables to decimate, and which values to assign to decimated variables. Based on an empirical evaluation on classical constraint optimization benchmarks (graph coloring, random graphs, and the Ising model), some of these policy combinations (periodic decimation, cycle-detection-based decimation, parallel and non-parallel decimation, random or deterministic variable selection, and deterministic or random sampling for value selection) outperform state-of-the-art competitors in many settings.
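A minimal sketch of the decimation idea on a toy cyclic factor graph follows. The min-sum messages, the periodic trigger, and the "most-decided variable" policy are illustrative assumptions, not the DeciMaxSum implementation.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 6
# Random pairwise costs on a cycle (a hard case for Max-Sum):
# theta[(i, j)][xi][xj] is the cost of assigning (xi, xj).
edges = [(i, (i + 1) % n) for i in range(n)]
theta = {e: rng.uniform(size=(2, 2)) for e in edges}

fixed = {}                                 # decimated variables -> value
msgs = {(i, j): np.zeros(2) for i, j in edges}
msgs.update({(j, i): np.zeros(2) for i, j in edges})

def belief(i):                             # sum of incoming messages
    return sum(m for (u, v), m in msgs.items() if v == i)

for it in range(60):
    # One asynchronous round of min-sum message updates.
    for (i, j) in list(msgs):
        t = theta[(i, j)] if (i, j) in theta else theta[(j, i)].T
        inc = belief(i) - msgs[(j, i)]     # messages into i, except from j
        if i in fixed:                     # decimated: clamp to its value
            inc = inc + np.where(np.arange(2) == fixed[i], 0.0, 1e9)
        m = np.min(t + inc[:, None], axis=0)
        msgs[(i, j)] = m - m.min()         # normalize to avoid drift
    # Periodic decimation policy (assumed): every 10 rounds, fix the
    # most-decided free variable to its currently best value.
    if it % 10 == 9 and len(fixed) < n:
        free = [i for i in range(n) if i not in fixed]
        gaps = {i: abs(belief(i)[0] - belief(i)[1]) for i in free}
        v = max(gaps, key=gaps.get)
        fixed[v] = int(np.argmin(belief(v)))
print(fixed)                               # a full assignment via decimation
```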


Author(s): Anuraganand Sharma

Single-objective bilevel optimization is a specialized form of constrained optimization in which one of the constraints is itself an optimization problem. These problems are typically non-convex and strongly NP-hard. Recently, there has been increased interest from the evolutionary computation community in modeling bilevel problems due to their applicability to real-world decision-making problems. In this work, a partial nested evolutionary approach with a local heuristic search is proposed to solve benchmark problems, with outstanding results. The approach relies on the concept of intermarriage-crossover, searching for feasible regions by exploiting information from the constraints. A new variant of the commonly used convergence approaches, i.e., optimistic and pessimistic, is also proposed: the extreme optimistic approach. The experimental results demonstrate that the algorithm converges differently to the known optimum solutions under the optimistic variants, and that the optimistic approach outperforms the pessimistic one. A comparative statistical analysis of our approach against other recently published partial-to-complete evolutionary approaches demonstrates very competitive results.
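To see why such problems are expensive, consider this minimal sketch of the nested principle on a toy bilevel problem: every upper-level fitness evaluation requires solving the lower-level problem first. The quadratic objectives and the plain evolution strategy are assumptions; the paper's intermarriage-crossover and local heuristic are not reproduced here.

```python
import numpy as np
from scipy.optimize import minimize

# Toy bilevel problem (assumed for illustration):
#   upper: min_x  F(x, y*(x)) = (x - 1)^2 + y*(x)^2
#   lower: y*(x) = argmin_y  (y - x)^2
def lower_opt(x):
    res = minimize(lambda y: (y[0] - x) ** 2, x0=[0.0])  # local search
    return res.x[0]

def upper_fitness(x):
    y = lower_opt(x)               # nested: solve the lower level first
    return (x - 1) ** 2 + y ** 2

rng = np.random.default_rng(3)
pop = rng.uniform(-5, 5, 20)
for gen in range(30):              # simple (mu+lambda) evolution upstairs
    kids = pop + rng.normal(0, 0.5, pop.shape)
    both = np.concatenate([pop, kids])
    pop = both[np.argsort([upper_fitness(x) for x in both])][:20]
print(pop[0], lower_opt(pop[0]))   # converges near x = y = 0.5
```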


Author(s): Xavier Gillard, Pierre Schaus, Vianney Coppé

This paper presents ddo, a generic and efficient library for solving constraint optimization problems with decision diagrams. To that end, our framework implements the branch-and-bound approach recently introduced by Bergman et al. (2016) to solve dynamic programs to optimality. Our library allowed us to successfully reproduce the results of Bergman et al. for MISP, MCP, and MAX2SAT while using a single generic library. As an additional benefit, ddo is able to exploit parallel computing without imposing any constraint on the user (apart from memory safety). Ddo is released as an open-source Rust library (crate) alongside companion example programs that solve the aforementioned problems. To the best of our knowledge, this is the first public implementation of a generic library for solving combinatorial optimization problems with branch-and-bound MDDs.
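For readers unfamiliar with the technique, here is a minimal generic sketch of the restricted/relaxed decision-diagram bounding that underlies such branch-and-bound solvers, on a toy 0/1 knapsack dynamic program. It is written in Python purely for illustration and is not ddo's Rust API.

```python
# Generic sketch of DD-based bounding on a knapsack DP (state = remaining
# capacity). Branch-and-bound with MDDs recurses on inexact nodes until
# the primal (restricted) and dual (relaxed) bounds meet.
W, VALS, WTS = 10, [6, 5, 4, 3], [5, 4, 3, 2]   # toy instance (assumed)

def compile_dd(cap, items, width, relaxed):
    """Compile layer by layer. A restricted DD (relaxed=False) drops the
    worst nodes, yielding a feasible solution (lower bound). A relaxed DD
    merges surplus nodes, keeping the max capacity/value (upper bound)."""
    layer = {cap: 0}                            # state -> best value so far
    for i in items:
        nxt = {}
        for c, v in layer.items():
            for take in (0, 1):
                if take and WTS[i] > c:
                    continue
                nc, nv = c - take * WTS[i], v + take * VALS[i]
                nxt[nc] = max(nxt.get(nc, -1), nv)
        if len(nxt) > width:                    # enforce the width limit
            ranked = sorted(nxt.items(), key=lambda kv: -kv[1])
            keep, surplus = ranked[:width - 1], ranked[width - 1:]
            nxt = dict(keep)
            if relaxed:                         # merge relaxes the state up
                mc = max(c for c, _ in surplus)
                mv = max(v for _, v in surplus)
                nxt[mc] = max(nxt.get(mc, -1), mv)
        layer = nxt
    return max(layer.values())

lb = compile_dd(W, range(4), width=2, relaxed=False)   # primal bound
ub = compile_dd(W, range(4), width=2, relaxed=True)    # dual bound
print(lb, ub)                                  # lb <= optimum (13) <= ub
```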


Author(s): Saaduddin Mahmud, Md. Mosaddek Khan, Moumita Choudhury, Long Tran-Thanh, Nicholas R. Jennings

Distributed Constraint Optimization Problems (DCOPs) are an important framework for modeling coordinated decision-making problems in multi-agent systems with a set of discrete variables. Later works have extended DCOPs to model problems with continuous variables, named Functional DCOPs (F-DCOPs). In this paper, we combine both frameworks into the Mixed Integer Functional DCOP (MIF-DCOP) framework, which can deal with problems regardless of their variables' type. We then propose a novel algorithm, Distributed Parallel Simulated Annealing (DPSA), in which agents cooperatively learn the optimal parameter configuration for the algorithm while also solving the given problem using the learned knowledge. Finally, we empirically evaluate our approach in DCOP, F-DCOP, and MIF-DCOP settings and show that DPSA produces solutions of significantly better quality than state-of-the-art non-exact algorithms in their corresponding settings.
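A minimal sketch of the core idea follows: simulated-annealing runs are launched under different parameter configurations, and a bandit-style rule steers future runs toward the cooling rate that has been producing the best solutions. The UCB selection, the candidate cooling rates, and the toy objective are assumptions standing in for the paper's cooperative learning scheme.

```python
import numpy as np

def cost(x):                                   # toy objective (assumed)
    return float(np.sum((x - 0.7) ** 2))

rng = np.random.default_rng(4)
alphas = [0.90, 0.99, 0.999]                   # candidate cooling rates
scores = np.zeros(len(alphas))                 # running reward per config
counts = np.ones(len(alphas))

best_x, best_c = None, np.inf
for episode in range(30):
    # UCB-style pick of a parameter configuration: learn which cooling
    # rate works best while actually solving the problem.
    k = int(np.argmax(scores / counts
                      + np.sqrt(2 * np.log(episode + 2) / counts)))
    x, T = rng.uniform(-2, 2, 5), 1.0
    c = cost(x)
    for step in range(500):                    # one SA chain with alphas[k]
        y = x + rng.normal(0, 0.3, 5)
        cy = cost(y)
        if cy < c or rng.random() < np.exp((c - cy) / T):
            x, c = y, cy                       # Metropolis acceptance
        T *= alphas[k]
    scores[k] += -c                            # reward: lower cost is better
    counts[k] += 1
    if c < best_c:
        best_x, best_c = x, c
print(best_c, alphas[int(np.argmax(scores / counts))])
```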

