Integer Programming, Constraint Programming, and Hybrid Decomposition Approaches to Discretizable Distance Geometry Problems

Author(s):  
Moira MacNeil ◽  
Merve Bodur

Given an integer dimension K and a simple, undirected graph G with positive edge weights, the Distance Geometry Problem (DGP) aims to find a realization function mapping each vertex to a coordinate in $\mathbb{R}^K$ such that the distance between the coordinates of each pair of adjacent vertices equals the corresponding edge weight in G. The so-called discretization assumptions reduce the search space of the realization to a finite discrete one, which can be explored via the branch-and-prune (BP) algorithm. Given a discretization vertex order in G, the BP algorithm constructs a binary tree where the nodes at a layer provide all possible coordinates of the vertex corresponding to that layer. The focus of this paper is on finding optimal BP trees for a class of discretizable DGPs. More specifically, we aim to find a discretization vertex order in G that yields a BP tree with the least number of branches. We propose an integer programming formulation and three constraint programming formulations that all significantly outperform the state-of-the-art cutting-plane algorithm for this problem. Moreover, motivated by the difficulty of solving instances with large, low-density input graphs, we develop two hybrid decomposition algorithms, strengthened by a set of valid inequalities, which further improve the solvability of the problem. Summary of Contribution: We present a new model to solve a combinatorial optimization problem on graphs, MIN DOUBLE, which comes from the highly active area of distance geometry and has applications in a wide variety of fields. We use integer programming (IP) and present the first constraint programming (CP) models and hybrid decomposition methods, implemented as a branch-and-cut procedure, for MIN DOUBLE. Through an extensive computational study, we show that our approaches advance the state of the art for MIN DOUBLE. We accomplish this not only by combining generic techniques from IP and CP but also by exploiting the structure of the problem in developing valid inequalities and variable-fixing rules. Our methods significantly improve the solvability of MIN DOUBLE, which we believe can also provide insights for tackling other problem classes and applications.
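To make the search space concrete, here is a minimal Python sketch (ours, with illustrative names; it assumes the common convention that the first K vertices form an initial clique) that brute-forces MIN DOUBLE on a tiny graph: a vertex placed with exactly K adjacent predecessors generically admits two candidate coordinates and doubles its BP-tree layer, while one with more than K adjacent predecessors is pinned down.

```python
from itertools import permutations

def is_valid_order(order, adj, K):
    """A discretization order needs each vertex to have at least
    min(position, K) adjacent predecessors (initial-clique convention)."""
    placed = set()
    for i, v in enumerate(order):
        if sum(1 for u in adj[v] if u in placed) < min(i, K):
            return False
        placed.add(v)
    return True

def doubling_count(order, adj, K):
    """Vertices with exactly K adjacent predecessors double the BP tree."""
    placed, count = set(), 0
    for i, v in enumerate(order):
        preds = sum(1 for u in adj[v] if u in placed)
        if i >= K and preds == K:
            count += 1
        placed.add(v)
    return count

def min_double_brute_force(vertices, adj, K):
    """Enumerate all discretization orders; feasible only for tiny graphs."""
    feasible = (o for o in permutations(vertices) if is_valid_order(o, adj, K))
    return min(((doubling_count(o, adj, K), o) for o in feasible), default=None)

# Hypothetical instance: adjacency sets of a small graph, K = 2.
adj = {0: {1, 2, 3}, 1: {0, 2, 3}, 2: {0, 1, 3, 4}, 3: {0, 1, 2, 4}, 4: {2, 3}}
print(min_double_brute_force(list(adj), adj, K=2))
```

The IP, CP, and decomposition approaches of the paper replace this exponential enumeration with exact optimization over orderings.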

2020 ◽  
Vol 8 (3-4) ◽  
pp. 205-240
Author(s):  
Patrick Gemander ◽  
Wei-Kun Chen ◽  
Dieter Weninger ◽  
Leona Gottwald ◽  
Ambros Gleixner ◽  
...  

In state-of-the-art mixed-integer programming solvers, a large array of reduction techniques is applied to simplify the problem and strengthen the model formulation before the actual branch-and-cut phase starts. Despite their mathematical simplicity, these methods can have a significant impact on the solvability of a given problem. However, a crucial property for employing presolve techniques successfully is their speed. Hence, most methods inspect constraints or variables individually in order to guarantee linear complexity. In this paper, we present new hashing-based pairing mechanisms that help to overcome known performance limitations of more powerful presolve techniques that consider pairs of rows or columns. Additionally, we develop an enhancement to one of these presolve techniques by exploiting the presence of set-packing structures on binary variables in order to strengthen the resulting reductions without increasing runtime. We analyze the impact of these methods on the MIPLIB 2017 benchmark set based on an implementation in the MIP solver SCIP.
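As a sketch of the pairing idea (ours, not SCIP's code), the snippet below hashes each row's support so that only rows landing in the same bucket are compared pairwise; the real implementation uses more refined signatures, but the near-linear bucketing step is the point.

```python
from collections import defaultdict

def candidate_row_pairs(rows):
    """Yield pairs of row indices that share a support signature.

    `rows` is a list of {column: coefficient} dicts. Hashing the support
    (the set of nonzero columns) buckets rows cheaply; only rows in the
    same bucket are compared pairwise, so near-linear work replaces a
    full quadratic scan. A sketch of the idea, not SCIP's implementation.
    """
    buckets = defaultdict(list)
    for idx, row in enumerate(rows):
        buckets[frozenset(row)].append(idx)
    for bucket in buckets.values():
        for i in range(len(bucket)):
            for j in range(i + 1, len(bucket)):
                yield bucket[i], bucket[j]

# Hypothetical rows: the first two share the support {x, y} and form a pair.
rows = [{"x": 1.0, "y": 2.0}, {"x": 2.0, "y": 4.0}, {"z": 1.0}]
print(list(candidate_row_pairs(rows)))   # [(0, 1)]
```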


Author(s):  
Edward Lam ◽  
Pierre Le Bodic ◽  
Daniel D. Harabor ◽  
Peter J. Stuckey

There are currently two broad strategies for optimal Multi-Agent Pathfinding (MAPF): (1) search-based methods, which model and solve MAPF directly, and (2) compilation-based solvers, which reduce MAPF to instances of well-known combinatorial problems and can thus benefit from advances in solver techniques. In this work, we present an optimal algorithm, BCP, that hybridizes both approaches using Branch-and-Cut-and-Price, a decomposition framework developed for mathematical optimization. We formalize BCP and compare it empirically against CBSH and CBSH-RM, two leading search-based solvers. Conclusive results on standard benchmarks indicate that BCP exceeds the state of the art: it solves more instances on smaller grids and scales reliably to 100 or more agents on larger game maps.
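To give a flavor of the pricing side of such a decomposition, here is a hedged Python sketch (all names and the reduced-cost convention are our assumptions, not the paper's code): each agent's candidate path is priced by a shortest-path search over a time-expanded graph whose arc costs are discounted by the duals of the master LP's conflict constraints.

```python
import heapq

def price_path(start, goal, neighbors, duals, horizon):
    """Pricing sketch for a branch-and-cut-and-price MAPF solver: search a
    time-expanded graph where occupying (vertex, time) costs one step minus
    the dual of that (vertex, time) conflict constraint. For a minimization
    master with <= 1 conflict rows the duals are nonpositive, so arc costs
    stay nonnegative and Dijkstra applies. Illustrative only.
    """
    pq = [(0.0, start, 0)]
    best = {(start, 0): 0.0}
    while pq:
        cost, v, t = heapq.heappop(pq)
        if cost > best.get((v, t), float("inf")):
            continue
        if v == goal:
            return cost, t  # reduced cost and arrival time of the column
        if t == horizon:
            continue
        for u in list(neighbors[v]) + [v]:  # move to a neighbor or wait
            c = cost + 1.0 - duals.get((u, t + 1), 0.0)
            if c < best.get((u, t + 1), float("inf")):
                best[(u, t + 1)] = c
                heapq.heappush(pq, (c, u, t + 1))
    return None

# Hypothetical 3-cell corridor with one dual value from the master LP.
nbrs = {0: [1], 1: [0, 2], 2: [1]}
print(price_path(0, 2, nbrs, {(1, 1): -0.5}, horizon=6))   # -> (2.5, 2)
```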


Author(s):  
Lars Kotthoff ◽  
Alexandre Fréchette ◽  
Tomasz Michalak ◽  
Talal Rahwan ◽  
Holger H. Hoos ◽  
...  

Assessing the progress made in AI and contributions to the state of the art is of major concern to the community. Recently, Fréchette et al. [2016] advocated performing such analysis via the Shapley value, a concept from coalitional game theory. In this paper, we argue that while this general idea is sound, it unfairly penalizes older algorithms that advanced the state of the art when introduced but were then outperformed by modern counterparts. Driven by this observation, we introduce the temporal Shapley value, a measure that addresses this problem while maintaining the desirable properties of the (classical) Shapley value. We use the temporal Shapley value to analyze the progress made in (i) the different versions of the Quicksort algorithm; (ii) the annual SAT competitions 2007–2014; and (iii) the MiniZinc challenge 2014–2016, an annual constraint programming competition. Our analysis reveals novel insights into the progress made in these important areas of research over time.
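The (classical) Shapley value underlying this analysis is easy to state in code. The sketch below enumerates all orderings, so it only scales to tiny algorithm sets; the temporal variant additionally accounts for when each algorithm became available, which we do not reproduce here. All names and the toy runtimes are ours.

```python
from itertools import permutations

def shapley(algorithms, value):
    """Exact (classical) Shapley values by enumerating every ordering.

    `value` maps a frozenset of algorithms to portfolio performance,
    e.g. the negated runtime of the best algorithm in the set.
    Exponential in len(algorithms); for illustration only.
    """
    phi = dict.fromkeys(algorithms, 0.0)
    orders = list(permutations(algorithms))
    for order in orders:
        coalition = frozenset()
        for a in order:
            # marginal contribution of a to the algorithms placed before it
            phi[a] += value(coalition | {a}) - value(coalition)
            coalition |= {a}
    return {a: s / len(orders) for a, s in phi.items()}

# Hypothetical per-instance runtimes; the portfolio takes the best solver.
runtimes = {"A": 10.0, "B": 4.0, "C": 6.0}
value = lambda S: 0.0 if not S else -min(runtimes[a] for a in S)
print(shapley(list(runtimes), value))
```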


2021 ◽  
Vol 17 (4) ◽  
pp. 1-26
Author(s):  
Dharanidhar Dang ◽  
Sai Vineel Reddy Chittamuru ◽  
Sudeep Pasricha ◽  
Rabi Mahapatra ◽  
Debashis Sahoo

Training deep learning networks involves continuous weight updates across the various layers of the deep network while using a backpropagation (BP) algorithm. This results in expensive computation overheads during training. Consequently, most deep learning accelerators today employ pretrained weights and focus only on improving the design of the inference phase. The recent trend is to build a complete deep learning accelerator by incorporating the training module. Such efforts require an ultra-fast chip architecture for executing the BP algorithm. In this article, we propose a novel photonics-based backpropagation accelerator for high-performance deep learning training. We present the design for a convolutional neural network (CNN), BPLight-CNN, which incorporates the silicon photonics-based backpropagation accelerator. BPLight-CNN is a first-of-its-kind photonic and memristor-based CNN architecture for end-to-end training and prediction. We evaluate BPLight-CNN using a photonic CAD framework (IPKISS) on deep learning benchmark models, including LeNet and VGG-Net. The proposed design achieves (i) at least 34× speedup, 34× improvement in computational efficiency, and 38.5× energy savings during training, and (ii) 29× speedup, 31× improvement in computational efficiency, and 38.7× energy savings during inference, compared with state-of-the-art designs. All of these comparisons are made at 16-bit resolution, and BPLight-CNN achieves these improvements at a cost of approximately 6% lower accuracy compared with the state of the art.
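For readers who want the baseline computation in software terms, here is a generic backpropagation step for a tiny dense network (a reference point only; it does not model the photonic or memristor hardware, and all names are ours):

```python
import numpy as np

def train_step(x, y, W1, W2, lr=0.1):
    """One backpropagation step for a tiny two-layer network, as a generic
    software reference for the per-layer weight updates that the hardware
    accelerates (this is not a model of BPLight-CNN itself).
    """
    # forward pass
    h = np.tanh(x @ W1)          # hidden activations
    y_hat = h @ W2               # linear output
    err = y_hat - y              # gradient of the loss 0.5 * ||y_hat - y||^2
    # backward pass: propagate the error layer by layer
    grad_W2 = np.outer(h, err)
    grad_h = W2 @ err
    grad_W1 = np.outer(x, grad_h * (1.0 - h ** 2))   # tanh'(z) = 1 - tanh^2(z)
    # continuous weight updates across layers
    return W1 - lr * grad_W1, W2 - lr * grad_W2

rng = np.random.default_rng(0)
W1, W2 = rng.normal(size=(3, 4)), rng.normal(size=(4, 2))
x, y = np.ones(3), np.zeros(2)
for _ in range(100):
    W1, W2 = train_step(x, y, W1, W2)
```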


OR Spectrum ◽  
2021 ◽  
Author(s):  
Markus Sinnl

In this paper, we study the recently introduced time-constrained maximal covering routing problem. In this problem, we are given a central depot, a set of facilities, and a set of customers. Each customer is associated with a subset of the facilities which can cover it. A feasible solution consists of k Hamiltonian cycles on subsets of the facilities and the central depot. Each cycle must contain the depot and must respect a given distance limit. The goal is to maximize the number of customers covered by facilities contained in the cycles. We develop two exact solution algorithms for the problem based on new mixed-integer programming models. One algorithm is based on a compact model, while the other model contains an exponential number of constraints, which are separated on the fly, i.e., we use branch-and-cut. We also describe preprocessing techniques, valid inequalities, and primal heuristics for both models. We evaluate our solution approaches on the instances from the literature, and our algorithms find the provably optimal solution for 267 out of 270 instances, including 123 instances for which the optimal solution was not previously known. Moreover, for most of the instances our algorithms take only a few seconds, making them up to five orders of magnitude faster than previous approaches. Finally, we also discuss some issues with the instances from the literature and present some new instances.
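As an illustration of what a primal heuristic for this problem can look like (our simplified sketch, not the paper's heuristic), the snippet below greedily grows each of the k cycles by the insertion that covers the most new customers while keeping the cycle within the distance limit:

```python
import math

def cycle_length(cycle, dist):
    """Total length of a closed tour through `cycle` (first node is the depot)."""
    return sum(dist(a, b) for a, b in zip(cycle, cycle[1:] + cycle[:1]))

def greedy_cycles(depot, facilities, covers, dist, k, limit):
    """Greedy primal-heuristic sketch for the time-constrained maximal
    covering routing problem: for each of the k cycles, repeatedly insert
    the unused facility whose best insertion position covers the most new
    customers while respecting the length limit."""
    covered, used, cycles = set(), set(), []
    for _ in range(k):
        cycle = [depot]
        while True:
            best = None
            for f in facilities:
                if f in used:
                    continue
                gain = len(covers[f] - covered)
                if gain == 0:
                    continue
                for pos in range(1, len(cycle) + 1):
                    cand = cycle[:pos] + [f] + cycle[pos:]
                    length = cycle_length(cand, dist)
                    key = (gain, -length)   # most coverage, then shortest
                    if length <= limit and (best is None or key > best[0]):
                        best = (key, f, cand)
            if best is None:
                break
            _, f, cycle = best
            used.add(f)
            covered |= covers[f]
        cycles.append(cycle)
    return cycles, covered

# Hypothetical toy instance: planar points, Euclidean travel distances.
pts = {"D": (0, 0), "F1": (1, 0), "F2": (0, 1), "F3": (5, 5)}
dist = lambda a, b: math.dist(pts[a], pts[b])
covers = {"F1": {"c1", "c2"}, "F2": {"c2", "c3"}, "F3": {"c4"}}
print(greedy_cycles("D", ["F1", "F2", "F3"], covers, dist, k=1, limit=6.0))
```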


2021 ◽  
pp. 1-31
Author(s):  
Junhao Huang ◽  
Weize Sun ◽  
Lei Huang

This work addresses the problem of network pruning and proposes a novel joint training method based on a multiobjective optimization model. Most state-of-the-art pruning methods rely on user experience for selecting the sparsity ratio of the weight matrices or tensors, and thus suffer from severe performance degradation when these user-defined parameters are chosen poorly. Moreover, the resulting networks can be inferior when the connecting architecture is searched inefficiently, especially at high sparsity. This work reveals that the network model can maintain sparse characteristics in the early stage of the backpropagation (BP) training process, and that evolutionary computation-based algorithms can accurately discover a connecting architecture with satisfactory network performance. In particular, we establish a multiobjective sparse model for network pruning and propose an efficient approach that combines BP training with two modified multiobjective evolutionary algorithms (MOEAs). The BP algorithm converges quickly, and the two MOEAs search for the optimal sparse structure and refine the weights, respectively. Experiments demonstrate the benefits of the proposed algorithm. We show that the proposed method obtains a desired Pareto front (PF), leading to better pruning results compared with state-of-the-art methods, especially when the network structure is highly sparse.
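The multiobjective selection step at the heart of such methods is easy to illustrate: given candidate pruned networks scored on two minimized objectives, keep the nondominated ones. A generic sketch (ours, not the paper's MOEAs; the data are hypothetical):

```python
def pareto_front(candidates):
    """Return the nondominated candidates under minimized objectives,
    e.g. (weight density, validation loss) for pruned networks."""
    front = []
    for c, objs in candidates:
        # c is dominated if another candidate is no worse everywhere
        # and differs somewhere
        dominated = any(
            all(o2 <= o1 for o1, o2 in zip(objs, objs2)) and objs2 != objs
            for _, objs2 in candidates
        )
        if not dominated:
            front.append((c, objs))
    return front

# Hypothetical (mask, (density, loss)) pairs from a pruning run.
cands = [("m1", (0.9, 0.10)), ("m2", (0.5, 0.12)), ("m3", (0.9, 0.15))]
print(pareto_front(cands))   # m3 is dominated by m1 and drops out
```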


Author(s):  
Daniel Rehfeldt ◽  
Thorsten Koch

The Steiner tree problem in graphs (SPG) is one of the most studied problems in combinatorial optimization. In the past 10 years, there have been significant advances concerning approximation and complexity of the SPG. However, the state of the art in (practical) exact solution of the SPG has remained largely unchallenged for almost 20 years. While the DIMACS Challenge 2014 and the PACE Challenge 2018 brought renewed interest into Steiner tree problems, even the best new SPG solvers cannot match the state of the art on the vast majority of benchmark instances. This article seeks to advance exact SPG solution once again. It is based on a combination of three concepts: implications, conflicts, and reductions. As a result, various new SPG techniques are conceived. Notably, several of the resulting techniques are provably stronger than well-known methods from the literature that are used in exact SPG algorithms. Finally, by integrating the new methods into a branch-and-cut framework, we obtain an exact SPG solver that is not only competitive with the current state of the art but even outperforms it on an extensive collection of benchmark sets. Furthermore, we solve several instances to optimality for the first time.
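Two of the oldest SPG reduction tests are easy to state, and the sketch below implements them (these are classical preprocessing steps from the literature, not the paper's new implication- and conflict-based techniques, which are much stronger): a non-terminal of degree one can be deleted, and a non-terminal of degree two can be contracted into a single edge.

```python
def basic_reductions(nodes, terminals, w):
    """Apply two classical SPG reduction tests until a fixpoint:
      (1) delete a non-terminal leaf;
      (2) replace a degree-2 non-terminal path a-v-b by edge a-b with the
          summed weight, keeping the cheaper edge if a-b already exists.
    `w` maps frozenset({u, v}) edges to weights; `nodes` is a mutable set.
    """
    def neighbors(v):
        return [u for e in w for u in e if v in e and u != v]

    changed = True
    while changed:
        changed = False
        for v in list(nodes):
            if v in terminals:
                continue
            nbr = neighbors(v)
            if len(nbr) == 1:                       # test (1): dangling leaf
                del w[frozenset((v, nbr[0]))]
                nodes.remove(v)
                changed = True
            elif len(nbr) == 2:                     # test (2): path contraction
                a, b = nbr
                through = w.pop(frozenset((v, a))) + w.pop(frozenset((v, b)))
                e = frozenset((a, b))
                w[e] = min(w.get(e, float("inf")), through)
                nodes.remove(v)
                changed = True
    return nodes, w

# Hypothetical instance: terminals t1, t2; u and leaf are non-terminals.
nodes, terminals = {"t1", "t2", "u", "leaf"}, {"t1", "t2"}
w = {frozenset(("t1", "u")): 1.0, frozenset(("u", "t2")): 2.0,
     frozenset(("u", "leaf")): 5.0}
print(basic_reductions(nodes, terminals, w))   # collapses to edge t1-t2 of weight 3
```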


2008 ◽  
Vol 31 ◽  
pp. 217-257 ◽  
Author(s):  
M. H. L. van den Briel ◽  
T. Vossen ◽  
S. Kambhampati

We represent planning as a set of loosely coupled network flow problems, where each network corresponds to one of the state variables in the planning domain. The network nodes correspond to the state variable values, and the network arcs correspond to the value transitions. The planning problem is to find a path (a sequence of actions) in each network such that, when merged, the paths constitute a feasible plan. In this paper we present a number of integer programming formulations that model these loosely coupled networks with varying degrees of flexibility. Since merging may introduce exponentially many ordering constraints, we implement a so-called branch-and-cut algorithm, in which these constraints are dynamically generated and added to the formulation when needed. Our results are very promising: they improve upon previous planning-as-integer-programming approaches and lay the foundation for integer programming approaches to cost-optimal planning.
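For intuition, a single state variable's network is its domain transition graph, and the per-variable subproblem is just a path search in it. The sketch below (our illustration with hypothetical names; the hard merging step is omitted) finds one such path:

```python
from collections import deque

def dtg_path(transitions, start, goal):
    """BFS for one state variable's network: nodes are the variable's
    values, arcs are the value transitions that actions induce. Finding
    one such path per state variable and then merging them consistently
    is the decomposition the formulations rest on; the merge (ordering
    constraints) is omitted here.
    """
    parent = {start: None}
    queue = deque([start])
    while queue:
        v = queue.popleft()
        if v == goal:
            path = []
            while v is not None:
                path.append(v)
                v = parent[v]
            return path[::-1]
        for u, _action in transitions.get(v, []):
            if u not in parent:
                parent[u] = v
                queue.append(u)
    return None

# Hypothetical domain transition graph for a robot's location variable.
dtg = {"hall": [("room_a", "move_ha"), ("room_b", "move_hb")],
       "room_a": [("hall", "move_ah")]}
print(dtg_path(dtg, "hall", "room_b"))   # ['hall', 'room_b']
```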


Author(s):  
T. A. Welton

Various authors have emphasized the spatial information resident in an electron micrograph taken with adequately coherent radiation. In view of the completion of at least one such instrument, this opportunity is taken to summarize the state of the art in processing such micrographs. We use the usual symbols for the aberration coefficients and supplement these with ℓ and δ for the transverse coherence length and the fractional energy spread, respectively. We also assume a weak, biologically interesting sample, with principal interest lying in the molecular skeleton remaining after obvious hydrogen loss and other radiation damage have occurred.

