Worst Case Bounds for Some NP-Complete Modified Horn-SAT Problems

Author(s):  
Stefan Porschen ◽  
Ewald Speckenmeyer
2011 ◽  
Vol 21 (01) ◽  
pp. 87-100
Author(s):  
GREG ALOUPIS ◽  
PROSENJIT BOSE ◽  
ERIK D. DEMAINE ◽  
STEFAN LANGERMAN ◽  
HENK MEIJER ◽  
...  

Given a planar polygon (or chain) with a list of edges {e1, e2, e3, …, en-1, en}, we examine the effect of several operations that permute this edge list, resulting in a new polygon. The main operations we consider are: reversals, which invert the order of a sublist; transpositions, which interchange subchains (sublists); and edge-swaps, a special case that interchanges two consecutive edges. When each edge of the given polygon has also been assigned a direction, we say that the polygon is signed; in this case, any edge involved in a reversal changes direction. We show that a star-shaped polygon can be convexified using O(n²) edge-swaps, while maintaining simplicity, and that this bound is tight in the worst case. We show that determining whether a signed polygon P can be transformed, using transpositions, to one that has rotational or mirror symmetry with P takes Θ(n log n) time. We prove that deciding whether transpositions can modify a polygon to fit inside a rectangle is weakly NP-complete. Finally, we give an O(n log n) time algorithm to compute the maximum endpoint distance for an oriented chain.
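To make these operations concrete, here is a minimal sketch in Python (my own illustration, not code from the paper): edges are (label, sign) pairs, and a reversal negates the sign of every edge it touches.

```python
# Illustrative sketch (not from the paper): edge-list operations on a signed
# polygon. Each edge is (label, sign); a reversal flips both order and signs.

def reversal(edges, i, j):
    """Invert the sublist edges[i:j]; reversed edges change direction."""
    middle = [(label, -sign) for (label, sign) in reversed(edges[i:j])]
    return edges[:i] + middle + edges[j:]

def transposition(edges, i, j, k):
    """Interchange the subchains edges[i:j] and edges[j:k]."""
    return edges[:i] + edges[j:k] + edges[i:j] + edges[k:]

def edge_swap(edges, i):
    """Special case of a transposition: swap two consecutive edges."""
    return transposition(edges, i, i + 1, i + 2)

if __name__ == "__main__":
    polygon = [("e1", 1), ("e2", 1), ("e3", 1), ("e4", 1)]
    print(reversal(polygon, 1, 3))   # e3 and e2 swapped in order, signs flipped
    print(edge_swap(polygon, 0))     # e2 now precedes e1
```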


Author(s):  
Wojciech Jamroga ◽  
Michał Knapik

Model checking strategic abilities in multi-agent systems is hard, especially for agents with partial observability of the system state. In that case, the complexity ranges from NP-complete to undecidable, depending on the precise syntax and the semantic variant. That, however, is the worst-case complexity, and the problem may well be easier when restricted to particular subclasses of inputs. In this paper, we look at the verification of models with "extreme" epistemic structure, and identify several special cases for which model checking is easier than in general. We also prove that, in the other cases, no gain is possible even if the agents have almost full (or almost nil) observability. To prove the latter kind of result, we develop generic techniques that may also be useful outside of this study.


2018 ◽  
Vol 28 (03) ◽  
pp. 289-307 ◽  
Author(s):  
Sándor P. Fekete ◽  
Phillip Keldenich

A conflict-free k-coloring of a graph G assigns one of k different colors to some of the vertices such that, for every vertex v, there is a color that is assigned to exactly one vertex among v and v's neighbors. Such colorings have applications in wireless networking, robotics, and geometry, and are well studied in graph theory. Here we study the conflict-free coloring of geometric intersection graphs. We demonstrate that the intersection graph of n geometric objects without fatness properties and size restrictions may have conflict-free chromatic number in Ω(log n/log log n), and in Ω(√(log n)) for disks or squares of different sizes; it is known for general graphs that the worst case is in Θ(log² n). For unit-disk intersection graphs, we prove that it is NP-complete to decide the existence of a conflict-free coloring with one color; we also show that six colors always suffice, using an algorithm that colors unit disk graphs of restricted height with two colors. We conjecture that four colors are sufficient, which we prove for unit squares instead of unit disks. For interval graphs, we establish a tight worst-case bound of two.
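As a concrete reading of the definition, the sketch below (an illustration under the definitions above, not code from the paper) checks whether a partial color assignment is conflict-free: every vertex must see some color exactly once in its closed neighborhood.

```python
# Illustrative checker (not from the paper): a partial coloring is
# conflict-free if every vertex sees some color exactly once in its
# closed neighborhood (itself plus its neighbors). Uncolored = None.

from collections import Counter

def is_conflict_free(adj, color):
    """adj: dict vertex -> set of neighbors; color: dict vertex -> color or None."""
    for v in adj:
        closed = {v} | adj[v]
        counts = Counter(color[u] for u in closed if color[u] is not None)
        if not any(c == 1 for c in counts.values()):
            return False
    return True

if __name__ == "__main__":
    path = {0: {1}, 1: {0, 2}, 2: {1}}  # a path on three vertices
    print(is_conflict_free(path, {0: None, 1: "red", 2: None}))   # True
    print(is_conflict_free(path, {0: "red", 1: None, 2: "red"}))  # False: vertex 1 sees "red" twice
```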


Author(s):  
Manuel Heusner ◽  
Thomas Keller ◽  
Malte Helmert

We study the impact of tie-breaking on the behavior of greedy best-first search with a fixed state space and fixed heuristic. We prove that it is NP-complete to determine the number of states that need to be expanded by greedy best-first search in the best case or in the worst case. However, the best- and worst-case behavior can be computed in polynomial time for undirected state spaces. We perform computational experiments on benchmark tasks from the International Planning Competitions that compare the best and worst cases of greedy best-first search to FIFO, LIFO and random tie-breaking. The experiments demonstrate the importance of tie-breaking in greedy best-first search.
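The role of tie-breaking is easy to see in code. The following sketch (an illustration, not the authors' implementation; the tiebreak parameter and names are mine) is a generic greedy best-first search in which a secondary key orders states of equal heuristic value: an increasing counter yields FIFO, its negation LIFO, and a random number random tie-breaking.

```python
# Illustrative sketch: greedy best-first search with a pluggable tie-breaking
# policy. States with equal heuristic value are ordered by the tiebreak key.

import heapq, itertools, random

def gbfs(start, goal, successors, h, tiebreak="fifo"):
    counter = itertools.count()
    def key():
        n = next(counter)  # insertion order; sign or randomness sets the policy
        return {"fifo": n, "lifo": -n, "random": random.random()}[tiebreak]
    frontier = [(h(start), key(), start)]
    parent, expanded = {start: None}, 0
    while frontier:
        _, _, s = heapq.heappop(frontier)
        expanded += 1
        if s == goal:
            return expanded
        for t in successors(s):
            if t not in parent:  # duplicate detection at generation time
                parent[t] = s
                heapq.heappush(frontier, (h(t), key(), t))
    return None

if __name__ == "__main__":
    succ = lambda s: [2 * s + 1, 2 * s + 2] if s < 7 else []  # small binary tree
    h = lambda s: 0 if s == 6 else 1                           # flat heuristic: many ties
    print(gbfs(0, 6, succ, h, "fifo"), gbfs(0, 6, succ, h, "lifo"))  # 4 3
```

On a fixed state space and heuristic, only the number of expanded states differs across policies, which is exactly the best-case/worst-case quantity the paper analyzes.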


2019 ◽  
Vol 19 (04) ◽  
pp. 1950006
Author(s):  
PUSPAL BHABAK ◽  
HOVHANNES A. HARUTYUNYAN

Broadcasting is an information dissemination problem in a connected network in which one node, called the originator, must distribute a message to all other nodes by placing a series of calls along the communication lines of the network. In every unit of time, the informed nodes aid the originator in distributing the message. Finding the broadcast time of any vertex in an arbitrary graph is NP-complete. Polynomial-time solvability has been shown only for certain graph classes, such as trees, unicyclic graphs, trees of cycles, necklace graphs, fully connected trees, and trees of cliques. In this paper we study the broadcast problem in k-path graphs. For any originator of a k-path graph we present a (4 − ϵ)-approximation algorithm; the algorithm achieves a better approximation ratio for some large classes of k-path graphs, and in some cases it produces the optimal broadcast time.
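For intuition on why trees are tractable, here is the classic recursion for the broadcast time of a tree (a standard textbook result, not the paper's k-path algorithm): an informed node should call its children in decreasing order of their own broadcast times.

```python
# Classic tree-broadcast sketch (standard result, not the paper's algorithm):
# an informed node calls one uninformed neighbor per time unit, so it should
# serve children in decreasing order of their own broadcast times.

def broadcast_time(tree, root):
    """tree: dict node -> list of children; returns minimum broadcast time."""
    times = sorted((broadcast_time(tree, c) for c in tree.get(root, [])),
                   reverse=True)
    # The i-th child called (1-based) finishes at i plus its own broadcast time.
    return max((i + t for i, t in enumerate(times, start=1)), default=0)

if __name__ == "__main__":
    # Originator "a" with children b, c, d; b has two children of its own.
    tree = {"a": ["b", "c", "d"], "b": ["e", "f"]}
    print(broadcast_time(tree, "a"))  # 3: call b first, since b needs 2 more units
```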


Author(s):  
Simona Cocco ◽  
Rémi Monasson

The computational effort needed to deal with large combinatorial structures varies considerably with the task to be performed and the resolution procedure used [425]. The worst-case complexity of a decision or optimization problem is defined as the time required by the best algorithm to treat any possible input to the problem. For instance, the worst-case complexity of the problem of sorting a list of n numbers scales as n log n: there exist several algorithms that can order any list in at most ~ n log n elementary operations, and none with asymptotically fewer operations. Unfortunately, the worst-case complexities of many important computational problems, called NP-complete, are not known. Partitioning a list of n numbers into two sets with equal partial sums is one among hundreds of known NP-complete problems. It is a fundamental conjecture of theoretical computer science that there exists no algorithm capable of partitioning any list of length n, or of solving any other NP-complete problem with inputs of size n, in a time bounded by a polynomial of n. Therefore, when trying to solve such a problem exactly, one necessarily uses algorithms that may take exponential time on some inputs. Quantifying how "frequent" these hard inputs are for a given algorithm is the question answered by the analysis of algorithms.

We will present an overview of recent work by physicists to address this point, and more precisely to characterize the average performance (hereafter simply called complexity) of a given algorithm over a distribution of inputs to a computational problem. The history of algorithm analysis by physical methods and ideas is at least as old as the use of computers by physicists. One well-established chapter in this history is the analysis of Monte Carlo sampling algorithms for statistical mechanics models. It is well known that phase transitions, that is, abrupt changes in the physical properties of the model, can imply a dramatic increase in the time necessary for the sampling procedure. This phenomenon is commonly known as critical slowing down. The physicist's insight comes from the analogy between the dynamics of algorithms and the physical dynamics of the system. That analogy is quite natural: in fact, many algorithms mimic the physical dynamics.
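The partitioning example can be made concrete. The subset-sum style sketch below (a standard dynamic program, included for illustration) decides the problem in time proportional to the total sum of the numbers: pseudo-polynomial, so fast for small numbers yet exponential in the bit-size of the input, consistent with the conjecture above.

```python
# Illustrative sketch: deciding whether a list of n numbers can be split into
# two sets with equal sums. This subset-sum dynamic program runs in O(n * S)
# time, where S is the total sum: pseudo-polynomial, hence fast for small
# numbers but exponential in the bit-size of the input.

def can_partition(nums):
    total = sum(nums)
    if total % 2:
        return False
    reachable = {0}  # all subset sums seen so far
    for x in nums:
        reachable |= {s + x for s in reachable}
    return total // 2 in reachable

if __name__ == "__main__":
    print(can_partition([3, 1, 1, 2, 2, 1]))  # True: 3+2 = 1+1+2+1 = 5
    print(can_partition([2, 2, 3]))           # False: the total sum is odd
```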


2021 ◽  
Vol 13 (1) ◽  
pp. 1-32
Author(s):  
Peter Jonsson ◽  
Victor Lagerkvist ◽  
Biman Roy

We study the constraint satisfaction problem (CSP) parameterized by a constraint language Γ (CSP(Γ)) and how the choice of Γ affects its worst-case time complexity. Under the exponential-time hypothesis (ETH), we rule out the existence of subexponential algorithms for finite-domain NP-complete CSP(Γ) problems. This extends to certain infinite-domain CSPs and structurally restricted problems. For CSPs with finite domain D where all unary relations are available, we identify a relation S_D such that the time complexity of the NP-complete problem CSP({S_D}) is a lower bound for all NP-complete CSPs of this kind. We also prove that the time complexity of CSP({S_D}) strictly decreases when |D| increases (unless the ETH is false) and provide stronger complexity results in the special case when |D| = 3.
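To fix notation, the sketch below (my illustration of the general setting, not the paper's machinery) spells out CSP(Γ): each constraint applies a relation from Γ to a scope of variables, and exhaustive search decides satisfiability in |D|^n time; the paper's lower bounds say that, under the ETH, no subexponential improvement is possible for the NP-complete cases.

```python
# Illustrative sketch of CSP(Gamma): variables range over a finite domain D,
# and each constraint pairs a relation from Gamma with a scope of variables.
# Exhaustive search takes |D|**n time; under the ETH, NP-complete CSP(Gamma)
# problems admit no subexponential algorithm.

from itertools import product

def solve_csp(variables, domain, constraints):
    """constraints: list of (relation, scope); a relation is a set of tuples."""
    for values in product(domain, repeat=len(variables)):
        assignment = dict(zip(variables, values))
        if all(tuple(assignment[v] for v in scope) in relation
               for relation, scope in constraints):
            return assignment
    return None

if __name__ == "__main__":
    neq = {(a, b) for a in range(3) for b in range(3) if a != b}
    # Graph 3-coloring of a triangle: a classic NP-complete CSP over D = {0,1,2}.
    print(solve_csp(["x", "y", "z"], range(3),
                    [(neq, ("x", "y")), (neq, ("y", "z")), (neq, ("x", "z"))]))
```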


2011 ◽  
Vol 11 (7&8) ◽  
pp. 638-648
Author(s):  
Vicky Choi

One of the most important questions in the study of quantum computation is whether a quantum computer can solve NP-complete problems more efficiently than a classical computer. In 2000, Farhi et al. (Science, 292(5516):472-476, 2001) proposed adiabatic quantum optimization (AQO), a paradigm that directly attacks NP-hard optimization problems. How powerful is AQO? Early on, van Dam and Vazirani claimed that AQO failed (i.e., would take exponential time) for a family of 3SAT instances they constructed. More recently, Altshuler et al. (Proc Natl Acad Sci USA, 107(28):12446-12450, 2010) claimed that AQO also failed for random instances of the NP-complete Exact Cover problem. In this paper, we make clear that all these negative results apply only to a specific AQO algorithm. We do so by demonstrating different AQO algorithms for the same problems for which their arguments no longer hold. Whether AQO fails or succeeds at solving NP-complete problems (in either the worst case or the average case) requires further investigation. Our AQO algorithms for Exact Cover and 3SAT are based on polynomial reductions to the NP-complete Maximum-weight Independent Set (MIS) problem.
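The reduction behind such algorithms is easy to state. The sketch below gives one standard Exact Cover to maximum-weight independent set encoding (an illustration of the general idea, not necessarily Choi's exact construction): subsets become vertices weighted by their size, intersecting subsets share an edge, and an exact cover exists iff some independent set reaches weight equal to the universe size.

```python
# Sketch of a standard Exact Cover -> maximum-weight independent set (MIS)
# encoding (an illustration of the reduction, not necessarily Choi's exact
# construction). Vertices are the subsets, weighted by size; intersecting
# subsets share an edge. Disjoint subsets form an independent set, so an
# exact cover exists iff some independent set reaches weight |universe|.

def exact_cover_to_mis(universe, subsets):
    weights = [len(s) for s in subsets]
    edges = [(i, j)
             for i in range(len(subsets)) for j in range(i + 1, len(subsets))
             if subsets[i] & subsets[j]]
    return weights, edges, len(universe)  # target MIS weight for an exact cover

if __name__ == "__main__":
    universe = {1, 2, 3, 4}
    subsets = [{1, 2}, {3, 4}, {2, 3}]
    print(exact_cover_to_mis(universe, subsets))
    # ([2, 2, 2], [(0, 2), (1, 2)], 4): {1,2} and {3,4} are independent and
    # reach the target weight 4, witnessing the exact cover.
```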


2018 ◽  
Vol 28 (02) ◽  
pp. 161-180
Author(s):  
Hugo A. Akitaya ◽  
Csaba D. Tóth

We address the problem of reconstructing a polygon from the multiset of its edges. Given n line segments in the plane, find a polygon with n vertices whose edges are these segments, or report that none exists. It is easy to solve the problem in O(n log n) time if we seek an arbitrary polygon or a simple polygon. We show that the problem is NP-complete for weakly simple polygons, that is, polygons whose vertices can be perturbed by at most ε, for any ε > 0, to obtain a simple polygon. We give O(n)-time algorithms for reconstructing weakly simple polygons in two special cases: when all segments are collinear, or when the segment endpoints are in general position. These results extend to the variant in which the segments are directed. We study related problems for the case that the union of the n input segments is connected. (i) If each segment can be subdivided into several segments, find the minimum number of subdivision points needed to form a weakly simple polygon. (ii) If new line segments can be added, find the minimum total length of new segments that creates a weakly simple polygon. We give worst-case upper and lower bounds for both problems.
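The easy "arbitrary polygon" case has a clean algorithmic reading (a folklore observation, not the paper's code): the segments can be chained into one closed polygon iff the multigraph on their endpoints is connected and every endpoint has even degree, i.e., iff it admits an Eulerian circuit. A sketch of the feasibility check:

```python
# Sketch of the easy "arbitrary polygon" case (folklore, not the paper's
# algorithm): chaining all segments into one closed polygon is an Eulerian
# circuit question on the multigraph of endpoints, so it suffices that the
# graph is connected and every endpoint has even degree.

from collections import defaultdict

def forms_closed_polygon(segments):
    degree, adj = defaultdict(int), defaultdict(set)
    for p, q in segments:
        degree[p] += 1; degree[q] += 1
        adj[p].add(q); adj[q].add(p)
    if any(d % 2 for d in degree.values()):
        return False
    # Connectivity check via depth-first search from any endpoint.
    seen, stack = set(), [next(iter(adj))]
    while stack:
        v = stack.pop()
        if v not in seen:
            seen.add(v)
            stack.extend(adj[v])
    return seen == set(adj)

if __name__ == "__main__":
    square = [((0, 0), (1, 0)), ((1, 0), (1, 1)),
              ((1, 1), (0, 1)), ((0, 1), (0, 0))]
    print(forms_closed_polygon(square))      # True
    print(forms_closed_polygon(square[:3]))  # False: two endpoints have odd degree
```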


2011 ◽  
Vol 40 ◽  
pp. 657-676 ◽  
Author(s):  
L. Bordeaux ◽  
G. Katsirelos ◽  
N. Narodytska ◽  
M. Y. Vardi

Bound propagation is an important Artificial Intelligence technique used in Constraint Programming tools to deal with numerical constraints. It is typically embedded within a search procedure ("branch and prune") and used at every node of the search tree to narrow down the search space, so it is critical that it be fast. The procedure invokes constraint propagators until a common fixpoint is reached, but the known algorithms for this have a pseudo-polynomial worst-case time complexity: they are indeed fast when the variables have a small numerical range, but they have the well-known problem of being prohibitively slow when these ranges are large. An important question is therefore whether strongly-polynomial algorithms exist that compute the common bound-consistent fixpoint of a set of constraints. This paper answers this question negatively: we show that this fixpoint computation is in fact NP-complete, even when restricted to binary linear constraints.
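A minimal sketch of the fixpoint computation in question (generic interval propagation over a hypothetical propagator, not the paper's construction): each propagator may tighten variable bounds, and propagators are re-run until nothing changes. The number of sweeps can grow with the width of the ranges, which is the pseudo-polynomial behavior described above.

```python
# Minimal sketch of bound propagation to a common fixpoint (generic interval
# reasoning, not the paper's construction). Each propagator may tighten the
# (lo, hi) bounds of its variables; we sweep until nothing changes or a
# domain becomes empty. The sweep count can grow with the range widths,
# which is the pseudo-polynomial behavior discussed above.

def propagate(bounds, propagators):
    changed = True
    while changed:
        changed = any([prop(bounds) for prop in propagators])
        if any(lo > hi for lo, hi in bounds.values()):
            return None  # a domain became empty: the constraints are infeasible
    return bounds

def le_plus(x, y, c):
    """Propagator enforcing bound consistency for the constraint x <= y + c."""
    def prop(b):
        old = (b[x], b[y])
        b[x] = (b[x][0], min(b[x][1], b[y][1] + c))  # tighten upper bound of x
        b[y] = (max(b[y][0], b[x][0] - c), b[y][1])  # tighten lower bound of y
        return (b[x], b[y]) != old
    return prop

if __name__ == "__main__":
    bounds = {"x": (0, 100), "y": (0, 100)}
    # x <= y - 1 and y <= x - 1 is infeasible; propagation walks the bounds
    # inward a few units per sweep, so the work scales with the range width.
    print(propagate(bounds, [le_plus("x", "y", -1), le_plus("y", "x", -1)]))  # None
```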

