On the Complexity of Deadlock Recovery

1986 ◽  
Vol 9 (3) ◽  
pp. 323-342
Author(s):  
Joseph Y.-T. Leung ◽  
Burkhard Monien

We consider the computational complexity of finding an optimal deadlock recovery. It is known that for an arbitrary number of resource types the problem is NP-hard even when the total cost of deadlocked jobs and the total number of resource units are “small” relative to the number of deadlocked jobs. It is also known that for one resource type the problem is NP-hard when the total cost of deadlocked jobs and the total number of resource units are “large” relative to the number of deadlocked jobs. In this paper we show that for one resource type the problem is solvable in polynomial time when the total cost of deadlocked jobs or the total number of resource units is “small” relative to the number of deadlocked jobs. For fixed m ⩾ 2 resource types, we show that the problem is solvable in polynomial time when the total number of resource units is “small” relative to the number of deadlocked jobs. On the other hand, when the total number of resource units is “large”, the problem becomes NP-hard even when the total cost of deadlocked jobs is “small” relative to the number of deadlocked jobs. These results, together with previously known ones, give a complete delineation of the complexity of this problem under various assumptions on the input parameters.
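As an illustration of the problem being classified (not taken from the paper), the sketch below models one resource type: each deadlocked job holds some units, needs more to finish, and has an abort cost; a recovery aborts a cheapest set of jobs so that the remaining ones can all run to completion. The brute force is exponential in the number of jobs; the paper's question is exactly when this can be done in polynomial time. The encoding and all names are ours.

```python
from itertools import combinations

def feasible(jobs, available):
    """Check that all jobs can finish one by one (single resource type):
    run the job with the smallest outstanding request first; a finished
    job releases every unit it holds."""
    for hold, need, _ in sorted(jobs, key=lambda j: j[1]):
        if need > available:
            return False
        available += hold  # job finishes and frees its units
    return True

def optimal_recovery(jobs, free_units):
    """jobs: list of (hold, need, cost). Return (minimum total abort cost,
    aborted job indices) by exhaustive search over abort sets."""
    n = len(jobs)
    best = (float("inf"), None)
    for r in range(n + 1):
        for s in combinations(range(n), r):
            freed = free_units + sum(jobs[i][0] for i in s)
            rest = [jobs[i] for i in range(n) if i not in s]
            if feasible(rest, freed):
                cost = sum(jobs[i][2] for i in s)
                if cost < best[0]:
                    best = (cost, s)
    return best
```

For example, with three deadlocked jobs `[(2, 1, 5), (1, 2, 1), (1, 2, 1)]` and no free units, aborting one cheap job (cost 1) already lets the other two finish.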

Axioms ◽  
2021 ◽  
Vol 10 (4) ◽  
pp. 304
Author(s):  
Florin Manea

In this paper we propose and analyse, from the computational complexity point of view, several new variants of nondeterministic Turing machines. In the first such variant, a machine accepts a given input word if and only if at least one of its shortest possible computations on that word is accepting; the machine rejects the input word when all the shortest computations performed by the machine on that word are rejecting. We show that the class of languages decided in polynomial time by such machines is P^NP[log]. When we consider machines that decide a word according to the decision taken by the lexicographically first shortest computation, we obtain a new characterization of P^NP. A series of other ways of deciding a language with respect to the shortest computations of a Turing machine are also discussed.
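The first acceptance mode can be mimicked on a toy nondeterministic transition system (our own encoding, not the paper's machine model): explore all computations breadth-first and let the verdict depend only on those that halt at the minimum depth.

```python
def decide_by_shortest_runs(start, step, halt):
    """Toy model of the acceptance mode from the abstract: explore all
    computations breadth-first; at the first depth where any computation
    halts, accept iff at least one of those shortest computations accepts.
    step(c) -> list of successor configs; halt(c) -> 'accept', 'reject',
    or None. Assumes computations die out or halt eventually."""
    level = [start]
    while level:
        verdicts = [halt(c) for c in level]
        finished = [v for v in verdicts if v is not None]
        if finished:  # the shortest computations end at this depth
            return 'accept' in finished
        level = [n for c in level for n in step(c)]
    return False
```

Note the asymmetry this semantics creates: a rejecting computation of length 1 silences an accepting computation of length 2, which is exactly why the resulting class sits at the P^NP[log] level rather than NP.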


2018 ◽  
Vol 29 (05) ◽  
pp. 893-909
Author(s):  
Florin Manea ◽  
Dirk Nowotka ◽  
Markus L. Schmid

We investigate the complexity of the solvability problem for restricted classes of word equations, with and without regular constraints. The solvability problem for unrestricted word equations remains NP-hard even if, on both sides, no other variable occurs between any two occurrences of the same variable; for word equations with regular constraints, the solvability problem remains NP-hard for equations whose two sides share no variables, or with two variables, only one of which is repeated. On the other hand, word equations with only one repeated variable (but an arbitrary number of variables) and at least one non-repeated variable on each side can be solved in polynomial time.
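For tiny instances, solvability can be checked by exhaustive substitution. The sketch below (encoding ours: uppercase letters are variables, lowercase letters are constants) illustrates the problem itself, not the paper's polynomial-time algorithm, and only searches variable values up to a fixed length bound.

```python
from itertools import product

def solve_word_equation(lhs, rhs, alphabet="ab", max_len=3):
    """Exhaustively search for a solution of the word equation lhs = rhs,
    trying every assignment of words of length <= max_len to the variables.
    Returns a satisfying substitution or None. Exponential; toy use only."""
    variables = sorted({c for c in lhs + rhs if c.isupper()})
    words = [''.join(w) for n in range(max_len + 1)
             for w in product(alphabet, repeat=n)]
    for values in product(words, repeat=len(variables)):
        sub = dict(zip(variables, values))
        expand = lambda side: ''.join(sub.get(c, c) for c in side)
        if expand(lhs) == expand(rhs):
            return sub
    return None
```

For instance, Xa = aX is solved by the empty word for X (and by any power of a), while Xa = Xb has no solution.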


2018 ◽  
Vol 61 (2) ◽  
pp. 252-271 ◽  
Author(s):  
Megan Dewar ◽  
David Pike ◽  
John Proos

In this paper we consider two natural notions of connectivity for hypergraphs: weak and strong. We prove that the strong vertex connectivity of a connected hypergraph is bounded by its weak edge connectivity, thereby extending a theorem of Whitney from graphs to hypergraphs. We find that, while determining a minimum weak vertex cut can be done in polynomial time and is equivalent to finding a minimum vertex cut in the 2-section of the hypergraph in question, determining a minimum strong vertex cut is NP-hard for general hypergraphs. Moreover, the problem of finding minimum strong vertex cuts remains NP-hard when restricted to hypergraphs with maximum edge size at most 3. We also discuss the relationship between strong vertex connectivity and the minimum transversal problem for hypergraphs, showing that there are classes of hypergraphs for which one of the problems is NP-hard, while the other can be solved in polynomial time.
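The polynomial-time case for weak vertex cuts reduces to the 2-section, the graph on the same vertices in which two vertices are adjacent iff some hyperedge contains both. A minimal sketch of that construction (encoding ours):

```python
from itertools import combinations

def two_section(hyperedges):
    """2-section of a hypergraph given as an iterable of vertex sets:
    returns the set of graph edges {u, v} such that some hyperedge
    contains both u and v. A minimum weak vertex cut of the hypergraph
    equals a minimum vertex cut of this graph."""
    edges = set()
    for e in hyperedges:
        edges.update(frozenset(p) for p in combinations(set(e), 2))
    return edges
```

Any off-the-shelf minimum vertex cut routine applied to the resulting graph then yields a minimum weak vertex cut of the hypergraph.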


Author(s):  
Petr Savický ◽  
Petr Kučera

A matched formula is a CNF formula whose incidence graph admits a matching which matches a distinct variable to every clause. Such a formula is always satisfiable. Matched formulas are used, for example, in the area of parameterized complexity. We prove that the problem of counting the number of models (satisfying assignments) of a matched formula is #P-complete. On the other hand, we define a class of formulas generalizing the matched formulas and prove that for a formula in this class one can choose in polynomial time a variable suitable for splitting the tree for the search of the models of the formula. As a consequence, the models of a formula from this class, in particular of any matched formula, can be generated sequentially with a delay polynomial in the size of the input. On the other hand, we prove that this task cannot be performed efficiently for linearly satisfiable formulas, a generalization of matched formulas that contains the class considered above.
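Whether a formula is matched is itself easy to test: it is a bipartite matching question on the clause-variable incidence graph. A sketch using Kuhn's augmenting-path algorithm (the DIMACS-style clause encoding is ours, not the paper's):

```python
def is_matched(clauses):
    """Check whether a CNF formula (list of clauses, each a list of nonzero
    integer literals) is matched: its incidence graph has a matching that
    saturates every clause. A matched formula is always satisfiable."""
    match = {}  # variable -> index of the clause it is matched to

    def augment(i, seen):
        """Try to match clause i, recursively re-matching along
        an augmenting path; 'seen' guards against revisiting variables."""
        for lit in clauses[i]:
            v = abs(lit)
            if v in seen:
                continue
            seen.add(v)
            if v not in match or augment(match[v], seen):
                match[v] = i
                return True
        return False

    return all(augment(i, set()) for i in range(len(clauses)))
```

For example, [[1, 2], [-1, 2]] is matched (clause 1 to variable 1, clause 2 to variable 2), while three clauses over a single variable cannot be.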


2021 ◽  
Vol 71 ◽  
pp. 993-1048
Author(s):  
Niclas Boehmer ◽  
Robert Bredereck ◽  
Klaus Heeger ◽  
Rolf Niedermeier

We initiate the study of external manipulations in Stable Marriage by considering several manipulative actions as well as several manipulation goals. For instance, one goal is to make sure that a given pair of agents is matched in a stable solution, and this may be achieved by the manipulative action of reordering some agents' preference lists. We present a comprehensive study of the computational complexity of all problems arising in this way. We find several polynomial-time solvable cases as well as NP-hard ones. For the NP-hard cases, focusing on the natural parameter "budget" (that is, the number of manipulative actions one is allowed to perform), we also conduct a parameterized complexity analysis and encounter mostly parameterized hardness results.
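For context (this is background, not part of the paper's contribution), the classic Gale-Shapley procedure computes a stable matching in polynomial time; the manipulation questions above ask how its set of stable outcomes can be steered by editing preference lists. A standard sketch, assuming equally many agents on each side:

```python
def gale_shapley(men_prefs, women_prefs):
    """Compute the man-optimal stable matching. Preferences are given as
    dicts mapping each agent to a list of the other side's agents,
    most preferred first."""
    free = list(men_prefs)
    nxt = {m: 0 for m in men_prefs}  # index of next woman to propose to
    rank = {w: {m: i for i, m in enumerate(p)}
            for w, p in women_prefs.items()}
    engaged = {}                      # woman -> current partner
    while free:
        m = free.pop()
        w = men_prefs[m][nxt[m]]
        nxt[m] += 1
        if w not in engaged:
            engaged[w] = m
        elif rank[w][m] < rank[w][engaged[w]]:
            free.append(engaged[w])   # w trades up; old partner is free
            engaged[w] = m
        else:
            free.append(m)            # w rejects m
    return {m: w for w, m in engaged.items()}
```

A manipulator with a budget of preference-list reorderings would, in effect, be choosing among the inputs to such a procedure so that a target pair ends up in some stable matching.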


Author(s):  
Michael Bernreiter ◽  
Jan Maly ◽  
Stefan Woltran

Qualitative Choice Logic (QCL) and Conjunctive Choice Logic (CCL) are formalisms for preference handling, with QCL in particular being well established in the field of AI. So far, analyses of these logics have been done on a case-by-case basis, even though they share several common features. This calls for a more general choice logic framework, with QCL and CCL as well as some of their derivatives being particular instantiations. We provide such a framework, which allows us, on the one hand, to easily define new choice logics and, on the other hand, to examine properties of different choice logics in a uniform setting. In particular, we investigate strong equivalence, a core concept in non-classical logics for understanding formula simplification, and computational complexity. Our analysis also yields new results for QCL and CCL. For example, we show that the main reasoning task regarding preferred models is Θ₂ᵖ-complete for QCL and CCL, while being Δ₂ᵖ-complete for a newly introduced choice logic.
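A toy evaluator for the degree-based semantics QCL builds on, following the usual definitions of Brewka et al.; the nested-tuple formula encoding is ours and covers only conjunction ('and') and ordered disjunction ('ox'):

```python
def opt(f):
    """Optionality of a formula: how many degrees it can distinguish.
    Atoms have optionality 1; ordered disjunction adds, conjunction takes max."""
    if isinstance(f, str):
        return 1
    op, a, b = f
    return opt(a) + opt(b) if op == 'ox' else max(opt(a), opt(b))

def deg(model, f):
    """Satisfaction degree of f in a model (a set of true atoms);
    None means not satisfied at all. Lower degrees are preferred."""
    if isinstance(f, str):
        return 1 if f in model else None
    op, a, b = f
    da, db = deg(model, a), deg(model, b)
    if op == 'and':
        return max(da, db) if da is not None and db is not None else None
    if da is not None:          # ordered disjunction: a is preferred
        return da
    return db + opt(a) if db is not None else None
```

Preferred models are the models of minimal degree; the Θ₂ᵖ-completeness result above concerns reasoning over exactly this minimisation.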


Author(s):  
Naser T Sardari

By assuming some widely believed arithmetic conjectures, we show that the task of accepting a number that is representable as a sum of d ⩾ 2 squares subject to given congruence conditions is NP-complete. On the other hand, we develop and implement a deterministic polynomial-time algorithm that represents a number as a sum of four squares with some restricted congruence conditions, by assuming a polynomial-time algorithm for factoring integers and Conjecture 1.1. As an application, we develop and implement a deterministic polynomial-time algorithm for navigating Lubotzky, Phillips, Sarnak (LPS) Ramanujan graphs, under the same assumptions.
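By Lagrange's four-square theorem a representation always exists; the naive search below finds one by enumeration, which is exponential in the bit length of n, unlike the deterministic polynomial-time algorithm of the paper. Illustration only, with no congruence restrictions:

```python
from math import isqrt

def four_squares(n):
    """Find (a, b, c, d) with a >= b >= c >= d >= 0 and
    a^2 + b^2 + c^2 + d^2 == n by brute-force enumeration."""
    for a in range(isqrt(n), -1, -1):
        for b in range(min(a, isqrt(n - a * a)), -1, -1):
            r = n - a * a - b * b
            for c in range(min(b, isqrt(r)), -1, -1):
                d2 = r - c * c
                d = isqrt(d2)
                if d * d == d2 and d <= c:
                    return (a, b, c, d)
```

For example, 7 = 4 + 1 + 1 + 1 is found as (2, 1, 1, 1).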


2020 ◽  
Vol 20 (1&2) ◽  
pp. 65-84
Author(s):  
Xuexuan Hao ◽  
Fengrong Zhang ◽  
Yongzhuang Wei ◽  
Yong Zhou

Quantum period-finding algorithms have been used to analyze symmetric cryptography. For instance, the 3-round Feistel construction and the Even-Mansour construction can be broken in polynomial time using quantum period-finding algorithms. In this paper, we first provide a new algorithm for finding the nonzero period of a vectorial function with O(n) quantum queries, which uses the Bernstein-Vazirani algorithm as one step of the subroutine. Afterwards, we compare our algorithm with Simon's algorithm. In some scenarios, such as the Even-Mansour construction and functions satisfying Simon's promise, our algorithm is more efficient than Simon's algorithm with respect to the tradeoff between quantum memory and time. On the other hand, we combine our algorithm with Grover's algorithm for a key-recovery attack on the FX construction. Compared with the Grover-meets-Simon algorithm proposed by Leander and May at Asiacrypt 2017, the new algorithm saves quantum memory.
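The object being recovered is a nonzero XOR-period s with f(x ⊕ s) = f(x) for all x. Classically one can only find it by exhaustive search over all 2^n candidates, the baseline that the O(n)-query quantum algorithm improves on exponentially. A sketch of that classical baseline (ours):

```python
def xor_period(f, n):
    """Brute-force search for a nonzero s with f(x ^ s) == f(x) for all
    n-bit x. Returns the smallest such s, or None if f has no period.
    Exponential in n; the quantum algorithms need only O(n) queries."""
    for s in range(1, 1 << n):
        if all(f(x ^ s) == f(x) for x in range(1 << n)):
            return s
    return None
```

For Even-Mansour-style attacks, the hidden period of a suitably constructed function encodes the secret key, so finding s is the whole attack.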


Phonology ◽  
2017 ◽  
Vol 34 (2) ◽  
pp. 385-405 ◽  
Author(s):  
Thomas Graf

Domains play an integral role in linguistic theories. This paper combines locality domains with current models of the computational complexity of phonology. The first result is that if a specific formalism – strictly piecewise grammars – is supplemented with a mechanism to enforce first-order definable domain restrictions, its power increases so much that it subsumes almost the full hierarchy of subregular languages. However, if domain restrictions are based on linguistically natural intervals, we instead obtain an empirically more adequate model. On the one hand, this model subsumes only those subregular classes that have been argued to be relevant for phonotactic generalisations. On the other hand, it excludes unnatural generalisations that involve counting or elaborate conditionals. It is also shown that strictly piecewise grammars with interval-based domains are theoretically learnable, unlike those with arbitrary, first-order domains.
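A strictly 2-piecewise grammar, the simplest member of the strictly piecewise formalism mentioned above, licenses a word iff none of its forbidden two-symbol subsequences occurs, regardless of distance. A minimal checker (encoding ours, with 'S' standing in for a second sibilant as in sibilant-harmony patterns):

```python
def sp2_licensed(word, forbidden_pairs):
    """Return True iff no forbidden pair (a, b) occurs in the word as a
    (not necessarily adjacent) subsequence: some a strictly before some b."""
    seen = set()
    for symbol in word:
        if any((a, symbol) in forbidden_pairs for a in seen):
            return False
        seen.add(symbol)
    return True
```

With the forbidden pair ('s', 'S') the grammar bans any s ... S sequence at unbounded distance, the kind of long-distance phonotactic generalisation the interval-based model is designed to capture.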


2014 ◽  
Vol 24 (03) ◽  
pp. 225-236 ◽  
Author(s):  
DAVID KIRKPATRICK ◽  
BOTING YANG ◽  
SANDRA ZILLES

Given an arrangement A of n sensors and two points s and t in the plane, the barrier resilience of A with respect to s and t is the minimum number of sensors whose removal permits a path from s to t such that the path does not intersect the coverage region of any sensor in A. When the surveillance domain is the entire plane and sensor coverage regions are unit line segments, even with restricted orientations, the problem of determining the barrier resilience is known to be NP-hard. On the other hand, if sensor coverage regions are arbitrary lines, the problem has a trivial linear time solution. In this paper, we study the case where each sensor coverage region is an arbitrary ray, and give an O(n2m) time algorithm for computing the barrier resilience when there are m ⩾ 1 sensor intersections.
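The trivial linear-time case for full lines mentioned above follows from a sign test: a line blocks every s-t path exactly when s and t lie strictly on opposite sides, so the resilience is just the count of separating lines. A sketch (line and point encodings ours):

```python
def line_resilience(lines, s, t):
    """Barrier resilience when every sensor coverage region is a full line.
    Lines are (a, b, c) meaning a*x + b*y + c = 0; s and t are (x, y).
    Counts lines on which s and t have strictly opposite signs."""
    side = lambda L, p: L[0] * p[0] + L[1] * p[1] + L[2]
    return sum(1 for L in lines if side(L, s) * side(L, t) < 0)
```

Rays break this one-sign-test argument, since a path can walk around a ray's endpoint, which is why that case needs the O(n²m) algorithm of the paper.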

