The Complexity of Integer Bound Propagation

2011 ◽  
Vol 40 ◽  
pp. 657-676 ◽  
Author(s):  
L. Bordeaux ◽  
G. Katsirelos ◽  
N. Narodytska ◽  
M. Y. Vardi

Bound propagation is an important Artificial Intelligence technique used in Constraint Programming tools to deal with numerical constraints. It is typically embedded within a search procedure ("branch and prune") and used at every node of the search tree to narrow down the search space, so it is critical that it be fast. The procedure invokes constraint propagators until a common fixpoint is reached, but the known algorithms for this have a pseudo-polynomial worst-case time complexity: they are indeed fast when the variables have a small numerical range, but they have the well-known problem of being prohibitively slow when these ranges are large. An important question is therefore whether strongly polynomial algorithms exist that compute the common bound-consistent fixpoint of a set of constraints. This paper answers this question. In particular, we show that this fixpoint computation is in fact NP-complete, even when restricted to binary linear constraints.
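To make the pseudo-polynomial behaviour concrete, here is a minimal sketch (an illustration only, not the paper's construction) of the standard fixpoint loop over bound propagators for binary linear constraints of the form a·x + b·y ≤ c; the variable names and constraint encoding are assumptions.

```python
import math

def propagate(bounds, constraints):
    """bounds: var -> (lo, hi); constraints: list of (a, x, b, y, c) meaning a*x + b*y <= c."""
    changed = True
    while changed:                        # invoke propagators until a common fixpoint
        changed = False
        for (a, x, b, y, c) in constraints:
            for coef, v, ocoef, ov in ((a, x, b, y), (b, y, a, x)):
                if coef == 0:
                    continue
                lo, hi = bounds[v]
                olo, ohi = bounds[ov]
                other_min = min(ocoef * olo, ocoef * ohi)   # best case of the other term
                if coef > 0:              # only the upper bound of v can be tightened
                    new_hi = math.floor((c - other_min) / coef)
                    if new_hi < hi:
                        hi, changed = new_hi, True
                else:                     # coef < 0: only the lower bound can be tightened
                    new_lo = math.ceil((c - other_min) / coef)
                    if new_lo > lo:
                        lo, changed = new_lo, True
                bounds[v] = (lo, hi)      # a real propagator would also detect lo > hi
    return bounds

# x + y <= 5 and -x + y <= 0 with x, y in [0, 10]: both variables narrow to [0, 5]
print(propagate({"x": (0, 10), "y": (0, 10)},
                [(1, "x", 1, "y", 5), (-1, "x", 1, "y", 0)]))
```

The runtime of this loop depends on the width of the numerical ranges, which is exactly the pseudo-polynomial behaviour the abstract refers to.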

Symmetry ◽  
2019 ◽  
Vol 11 (10) ◽  
pp. 1300
Author(s):  
Uroš Čibej ◽  
Luka Fürst ◽  
Jurij Mihelič

We introduce a new equivalence on graphs, defined by its symmetry-breaking capability. We first present a framework for various backtracking search algorithms, in which the equivalence is used to prune the search tree. Subsequently, we define the equivalence and an optimization problem with the goal of finding an equivalence partition with the highest pruning potential. We also position the optimization problem within the computational-complexity hierarchy. In particular, we show that the verifier lies between P and NP-complete problems. Striving for practical usability of the approach, we devise a heuristic method for general graphs and optimal algorithms for trees and cycles.
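As a toy illustration of symmetry-breaking pruning in backtracking (a hypothetical example, not the authors' graph equivalence): at each node only one representative of each equivalence class of candidates is expanded; here candidates are considered equivalent when they are equal letters.

```python
def distinct_arrangements(letters):
    """Enumerate distinct arrangements of a multiset, pruning symmetric branches."""
    results = []

    def backtrack(prefix, remaining):
        if not remaining:
            results.append("".join(prefix))
            return
        expanded = set()                       # one representative per equivalence class
        for i, c in enumerate(remaining):
            if c in expanded:                  # prune: an equivalent branch was already explored
                continue
            expanded.add(c)
            backtrack(prefix + [c], remaining[:i] + remaining[i + 1:])

    backtrack([], list(letters))
    return results

print(distinct_arrangements("AAB"))            # ['AAB', 'ABA', 'BAA'] instead of 6 branches
```

The coarser the equivalence, the more branches collapse, which is the pruning potential the optimization problem above seeks to maximize.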


2018 ◽  
Vol 27 (04) ◽  
pp. 1860002 ◽  
Author(s):  
Minas Dasygenis ◽  
Kostas Stergiou

Constraint programming (CP) is a powerful paradigm for solving various types of hard combinatorial problems. Constraint propagation techniques, such as arc consistency (AC), are used within solvers to prune inconsistent values from the domains of the variables and narrow down the search space. Local consistencies stronger than AC have the potential to prune the search space even more, but they are not widely used because they incur a high run-time penalty in cases where they are unsuccessful. All constraint propagation techniques are sequential by nature, and thus they cannot be scaled up to modern multicore machines. For this reason, research on parallelizing constraint propagation is very limited. Contributing towards this direction, we exploit the parallelization possibilities of modern CPUs in tandem with strong local propagation methods in a novel way. Instead of trying to parallelize constraint propagation algorithms, we propose two search algorithms that apply different propagation methods in parallel. Both algorithms consist of a master search process, which is a typical CP solver, and a number of slave processes, each implementing a strong propagation method. The first algorithm runs the different propagators synchronously at each node of the search tree explored by the master process, while the second can run them asynchronously at different nodes of the search tree. Preliminary experimental results on well-established benchmarks are promising, showing that our algorithms are, in the worst case, as fast as serial solvers while being faster in most cases.
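A greatly simplified, hypothetical sketch of the synchronous variant described above: at each node, a master process submits the current domains to several propagators running in parallel and keeps only the values that every propagator retains. The propagator bodies are placeholders, not the strong consistencies used in the paper.

```python
from concurrent.futures import ProcessPoolExecutor

def weak_ac(domains):        # placeholder for the solver's ordinary propagation
    return domains

def strong_prop_1(domains):  # placeholder for a stronger (slower) local consistency
    return domains

def strong_prop_2(domains):  # another placeholder propagator
    return domains

def propagate_node(domains, executor, propagators=(weak_ac, strong_prop_1, strong_prop_2)):
    """Run all propagators on the same node in parallel and intersect their results."""
    futures = [executor.submit(p, domains) for p in propagators]
    merged = {v: set(vals) for v, vals in domains.items()}
    for f in futures:
        for v, vals in f.result().items():
            merged[v] &= set(vals)             # keep only values every propagator retained
    return merged

if __name__ == "__main__":
    with ProcessPoolExecutor() as ex:
        print(propagate_node({"x": {1, 2, 3}, "y": {1, 2}}, ex))
```

In the asynchronous variant, the slave results would instead be attached to whichever search-tree node they belong to when they arrive, rather than being awaited at every node.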


2019 ◽  
Author(s):  
Thérèse E. Malliavin ◽  
Antonio Mucherino ◽  
Carlile Lavor ◽  
Leo Liberti

The optimisation approaches classically used during the determination of protein structure encounter various difficulties, especially when the size of the conformational space is large. Indeed, in such cases, algorithmic convergence criteria are more difficult to set up. Moreover, the size of the search space makes it difficult to achieve a complete exploration. The interval Branch-and-Prune (iBP) approach, based on a reformulation of the Distance Geometry Problem (DGP), provides a theoretical framework for the generation of protein conformations by systematically sampling the conformational space. When an appropriate subset of inter-atomic distances is known exactly, this worst-case exponential-time algorithm is provably complete and fixed-parameter tractable. These guarantees, however, immediately disappear as distance measurement errors are introduced. Here we propose an improvement of this approach: the threading-augmented interval Branch-and-Prune (TAiBP), where the combinatorial explosion of the original iBP approach arising from its exponential complexity is alleviated by partitioning the input instances into consecutive peptide fragments and by using Self-Organizing Maps (SOMs) to obtain clusters of similar solutions. A validation of the TAiBP approach is presented here on a set of proteins of various sizes and structures. The calculation inputs are: a uniform covalent geometry extracted from force-field covalent terms, the backbone dihedral angles with error intervals, and a few long-range distances. For most of the proteins smaller than 50 residues and interval widths of 20°, the TAiBP approach yielded solutions with RMSD values smaller than 3 Å with respect to the initial protein conformation. Applying the TAiBP approach efficiently to proteins larger than 50 residues will require the use of non-uniform covalent geometry, and may benefit from the recent development of residue-specific force fields.
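The branch-and-prune mechanism itself is easy to illustrate. Below is a deliberately simplified one-dimensional toy (not the DGP/iBP formulation, which branches over three-dimensional torsion-angle intervals): each new atom position is obtained by branching over a small set of candidate offsets, and a branch is pruned as soon as it violates any known inter-atomic distance.

```python
def branch_and_prune(n_atoms, candidate_offsets, known_dists, tol=1e-6):
    """known_dists: (i, j) -> distance, with i < j; atoms are placed on a line."""
    solutions = []

    def extend(positions):
        i = len(positions)
        if i == n_atoms:
            solutions.append(list(positions))
            return
        for off in candidate_offsets:                       # branch over discretized choices
            x = positions[-1] + off
            ok = all(abs(abs(x - positions[j]) - d) <= tol  # prune on every known distance to atom i
                     for (j, k), d in known_dists.items() if k == i and j < i)
            if ok:
                extend(positions + [x])

    extend([0.0])                                           # fix atom 0 to break translational symmetry
    return solutions

# atoms 0..3 with consecutive distances 1 plus the long-range distance d(0, 3) = 3:
# only the two mirror-image solutions survive the pruning
print(branch_and_prune(4, [+1.0, -1.0],
                       {(0, 1): 1.0, (1, 2): 1.0, (2, 3): 1.0, (0, 3): 3.0}))
```

Exact distances prune aggressively; replacing them with intervals (as in the noisy case discussed above) widens the acceptance test and is precisely what causes the combinatorial explosion TAiBP mitigates by fragmenting the instance and clustering partial solutions.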


2017 ◽  
Vol 27 (2) ◽  
pp. 273-290 ◽  
Author(s):  
Maciej Przybylski ◽  
Barbara Putz

Searching for the shortest path in an unknown or changeable environment is a common problem in robotics and video games, in which agents need to update maps and re-plan in order to complete their missions. D* Lite is a popular incremental heuristic search algorithm (i.e., it utilizes knowledge from previous searches). Its efficiency lies in the fact that it re-expands only those parts of the search space that are relevant to registered changes and to the current state of the agent. In this paper, we propose a new algorithm, D* Extra Lite, that is close to a regular A*, with reinitialization of the affected search space achieved by search-tree branch cutting. The provided worst-case complexity analysis strongly suggests that D* Extra Lite’s method of reinitialization is faster than the focused approach to reinitialization used in D* Lite. In comprehensive tests on a large number of typical two-dimensional path-planning problems, D* Extra Lite was 1.08 to 1.94 times faster than the optimized version of D* Lite. Moreover, D* Extra Lite proved particularly suitable for difficult, dynamic problems: as problem complexity increased, its performance further surpassed that of D* Lite. The source code of the algorithm is available as open source.
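The search-tree branch-cutting idea can be sketched schematically as follows; this is only an illustration under assumed data structures (parent pointers, a g-value map, and an open set), not the actual D* Extra Lite procedure. When an edge used by the search tree becomes invalid, every node whose parent chain passes through it is forgotten, so only the affected subtree has to be re-expanded.

```python
def cut_branch(parents, g, open_set, cut_node):
    """parents: child -> parent; g: node -> cost-so-far; open_set: frontier nodes (a set)."""
    affected = {cut_node}
    changed = True
    while changed:                                  # collect every descendant of the cut node
        changed = False
        for child, par in parents.items():
            if par in affected and child not in affected:
                affected.add(child)
                changed = True
    for node in affected:                           # forget their search data; re-expansion rebuilds it
        parents.pop(node, None)
        g.pop(node, None)
        open_set.discard(node)
    return affected

parents = {"B": "A", "C": "B", "D": "B", "E": "A"}
print(cut_branch(parents, {"B": 1, "C": 2, "D": 2, "E": 1}, {"C", "D", "E"}, "B"))
# -> {'B', 'C', 'D'}; "E" keeps its data because it does not descend from the cut branch
```

Nodes outside the cut subtree keep their g-values, which is what makes the reinitialization cheaper than restarting the search from scratch.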


2021 ◽  
Vol 11 (8) ◽  
pp. 3627
Author(s):  
Michael B. Rahaim ◽  
Thomas D. C. Little ◽  
Mona Hella

To meet the growing demand for wireless capacity, communications in the Terahertz (THz) and optical bands are being broadly explored. Communications within these bands provide massive bandwidth potential along with highly directional beam-steering capabilities. While the available bandwidth offers incredible link capacity, the directionality of these technologies offers an even more significant potential for spatial capacity, or area spectral efficiency. However, this directionality also implies a challenge related to the network’s ability to quickly establish a connection. In this paper, we introduce a multi-tier heterogeneous (MTH) beam management strategy that utilizes various wireless technologies in order to quickly acquire a highly directional indoor free-space optical (FSO) communication link. The multi-tier design offers the high resolution of indoor FSO, while the millimeter-wave (mmWave) system narrows the FSO search space. By narrowing the search space, the system relaxes the requirements of the FSO network in order to ensure a practical search time. This paper introduces the necessary components of the proposed beam management strategy and provides a foundational analysis framework to demonstrate the relative impact of coverage, resolution, and steering velocity across tiers. Furthermore, an optimization analysis is used to define the top-tier resolution that minimizes worst-case search time as a function of lower-tier resolution and top-tier range.


Author(s):  
Marlene Arangú ◽  
Miguel Salido

A fine-grained arc-consistency algorithm for non-normalized constraint satisfaction problems

Constraint programming is a powerful software technology for solving numerous real-life problems. Many of these problems can be modeled as Constraint Satisfaction Problems (CSPs) and solved using constraint programming techniques. However, solving a CSP is NP-complete, so filtering techniques to reduce the search space are still necessary. Arc-consistency algorithms are widely used to prune the search space. The concept of arc-consistency is bidirectional, i.e., it must be ensured in both directions of the constraint (direct and inverse constraints). Two of the most well-known and frequently used arc-consistency algorithms for filtering CSPs are AC3 and AC4. These algorithms repeatedly carry out revisions and require support checks for identifying and deleting all unsupported values from the domains. Nevertheless, many revisions are ineffective, i.e., they cannot delete any value yet consume many checks and much time. In this paper, we present AC4-OP, an optimized version of AC4 that manages binary and non-normalized constraints in only one direction, storing the supports found in the inverse direction for later evaluation. Thus, it reduces the propagation phase, avoiding unnecessary or ineffective checking. The use of AC4-OP reduces the number of constraint checks by 50% while pruning the same search space as AC4. The evaluation section shows the improvement of AC4-OP over AC4, AC6 and AC7 on random and non-normalized instances.
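For context, here is a compact sketch of the classic revision-based arc-consistency loop (essentially textbook AC-3, not the AC4-OP algorithm proposed here): each revision deletes the values of x that have no support on the constraint between x and y, and arcs pointing at x are re-queued whenever its domain shrinks.

```python
from collections import deque

def revise(domains, x, y, check):
    """Remove values of x that have no support on the constraint check(x_value, y_value)."""
    removed = False
    for vx in set(domains[x]):
        if not any(check(vx, vy) for vy in domains[y]):    # no support for vx on arc (x, y)
            domains[x].discard(vx)
            removed = True
    return removed

def ac3(domains, constraints):
    """domains: var -> set of values; constraints: dict (x, y) -> check, both directions listed."""
    queue = deque(constraints)
    while queue:
        x, y = queue.popleft()
        if revise(domains, x, y, constraints[(x, y)]):
            if not domains[x]:
                return False                               # domain wipe-out: inconsistent
            queue.extend((z, x) for (z, w) in constraints if w == x and z != y)
    return True

doms = {"x": {1, 2, 3}, "y": {1, 2, 3}}
cons = {("x", "y"): lambda a, b: a < b, ("y", "x"): lambda a, b: a > b}
print(ac3(doms, cons), doms)   # x loses 3, y loses 1
```

Note how both directions of each constraint must be listed explicitly; AC4-OP's contribution, as described above, is to process only one direction while recording the inverse supports it discovers along the way.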


2002 ◽  
Vol 12 (2) ◽  
Author(s):  
B.Ya. Ryabko ◽  
A.A. Fedotov

We consider the problem of constructing a binary search tree for an arbitrary set of binary words, which has found wide use in informatics, biology, mineralogy, and other fields. It is known that the problem of constructing the tree of minimal cost is NP-complete; hence the problem arises of finding simple algorithms that construct trees close to the optimal ones. In this paper we demonstrate that even the simplest algorithm yields search trees that are close to the optimal ones on average, and prove that the mean number of nodes checked in the optimal tree differs from the natural lower bound, the binary logarithm of the number of words, by no more than 1.04.
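As a hedged illustration of the kind of simple construction the abstract alludes to (the exact "simplest algorithm" analysed in the paper may differ), the sketch below greedily tests, at every node, the bit position that splits the remaining words most evenly, and compares the resulting mean number of node checks with the log2(n) lower bound.

```python
import math

def build(words, positions=None):
    """Build a decision tree over a set of equal-length, distinct binary words."""
    if positions is None:
        positions = list(range(len(next(iter(words)))))
    if len(words) <= 1:
        return next(iter(words), None)                      # leaf: the identified word
    # choose the bit whose 0/1 split of the remaining words is most balanced
    best = min(positions, key=lambda p: abs(2 * sum(w[p] == "1" for w in words) - len(words)))
    zeros = {w for w in words if w[best] == "0"}
    ones = words - zeros
    rest = [p for p in positions if p != best]
    return (best, build(zeros, rest), build(ones, rest))

def leaf_depths(tree, depth=0):
    """Depths of all identified words, i.e. the number of nodes checked per word."""
    if not isinstance(tree, tuple):
        return [depth] if tree is not None else []
    return leaf_depths(tree[1], depth + 1) + leaf_depths(tree[2], depth + 1)

words = {"0010", "0111", "1000", "1011", "1100", "1111"}
depths = leaf_depths(build(words))
print(sum(depths) / len(depths), math.log2(len(words)))     # mean checks vs. the log2 lower bound
```

On this toy set the mean number of checks already sits close to log2(n), in the spirit of the 1.04 bound proved in the paper.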


2011 ◽  
Vol 23 (4) ◽  
pp. 567-581 ◽  
Author(s):  
Evgeni Magid ◽  
Takashi Tsubouchi ◽  
Eiji Koyanagi ◽  
Tomoaki Yoshida ◽  
...  

Rescue robotics applies search-and-rescue robots to expand rescue capabilities while increasing safety. Mobile robots working at a disaster site are monitored remotely by operators who may not be able to see the site well enough to select work paths appropriately. Our goal is to provide a “pilot system” that can propose options for traversing 3D debris environments. This requires a special debris path-search algorithm and an appropriately defined search tree ensuring smooth exploration. To make the path search feasible in the huge real state space, we discretize the search space and robot movement before searching. In this paper we present path quality estimation and a search-tree branching function F, which defines the search-tree building process online through node opening and branching. A well-defined function F removes unsuitable search directions from the search tree and enables dynamic path planning that accounts for debris. Exhaustive simulation was used to structure and analyze data. Experiments confirmed the feasibility of our approach.


2008 ◽  
Vol 17 (03) ◽  
pp. 349-371 ◽  
Author(s):  
TAO HUANG ◽  
LEI LI ◽  
JUN WEI

With the increasing number of Web Services with similar or identical functionality, the non-functional properties of a Web Service will become more and more important. Hence, a choice needs to be made to determine which services are to participate in a given composite service. In general, multi-QoS constrained Web Services composition, with or without optimization, is an NP-complete problem that cannot be solved exactly in polynomial time. Many heuristics and approximation algorithms with polynomial- and pseudo-polynomial-time complexities have been designed to deal with this problem. However, these approaches suffer from excessive computational cost and cannot be used for service composition at runtime. In this paper, we propose an efficient approach for multi-QoS constrained Web Services selection. First, a user preference model is proposed to capture the user's preferences. Then, a correlation model of candidate services is established in order to reduce the search space. Based on these two models, a heuristic algorithm is proposed to find a feasible solution for multi-QoS constrained Web Services selection with high performance and high precision. The experimental results show that the proposed approach achieves the expected goal.
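A greatly simplified, hypothetical sketch of greedy QoS-aware selection (not the authors' heuristic, whose preference and correlation models are not detailed above): for each abstract task, pick the candidate with the best preference-weighted utility while keeping the additive QoS aggregate within the user's constraints. The attribute names, weights, and additive aggregation are assumptions.

```python
def select_services(tasks, weights, limits):
    """tasks: list of candidate lists; each candidate is a dict of QoS attribute values (lower is better)."""
    totals = {attr: 0.0 for attr in limits}
    plan = []
    for candidates in tasks:
        feasible = [c for c in candidates
                    if all(totals[a] + c[a] <= limits[a] for a in limits)]
        if not feasible:
            return None                                    # heuristic dead end on this path
        # best preference-weighted utility (attributes negated because lower is better)
        best = max(feasible, key=lambda c: sum(-weights[a] * c[a] for a in weights))
        plan.append(best)
        for a in limits:
            totals[a] += best[a]
    return plan

tasks = [[{"latency": 30, "cost": 5}, {"latency": 10, "cost": 12}],
         [{"latency": 20, "cost": 4}, {"latency": 40, "cost": 2}]]
print(select_services(tasks, weights={"latency": 1.0, "cost": 0.5},
                      limits={"latency": 60, "cost": 20}))
```

A full heuristic would also backtrack or repair the plan when the greedy pass fails, and would use the correlation model to discard dominated candidates before selection.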

