COMPUTATIONAL COMPLEXITY OF VARIOUS MAL'CEV CONDITIONS

2013 ◽  
Vol 23 (06) ◽  
pp. 1521-1531 ◽  
Author(s):  
JONAH HOROWITZ

This paper examines the computational complexity of determining whether or not an algebra satisfies a certain Mal'cev condition. First, we define a class of Mal'cev conditions (special cube terms satisfying the DCP) whose satisfaction can be determined in polynomial time when the algebra in question is idempotent, and we provide an algorithm through which this determination may be made. The aforementioned class notably includes near-unanimity terms and edge terms of fixed arity. Second, we define a different class of Mal'cev conditions (Mal'cev conditions satisfiable by CPB0 operations) whose satisfaction, in general, requires exponential time to determine.
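As a rough illustration of the identities involved (not the paper's polynomial-time DCP algorithm), the Python sketch below checks by brute force whether given operations on a small finite universe satisfy the Mal'cev and near-unanimity identities; the operations and universe are hypothetical examples.

from itertools import product

def is_malcev(op, universe):
    """Check the Mal'cev identities m(x, y, y) = x and m(y, y, x) = x
    for a given ternary operation on a finite universe (brute force)."""
    return all(op(x, y, y) == x and op(y, y, x) == x
               for x, y in product(universe, repeat=2))

def is_near_unanimity(op, universe, arity):
    """Check the near-unanimity identities: the value is x whenever
    all but at most one argument equal x."""
    for x, y in product(universe, repeat=2):
        for i in range(arity):
            args = [x] * arity
            args[i] = y
            if op(*args) != x:
                return False
    return True

# Example: on Z_5, m(x, y, z) = x - y + z (mod 5) is a Mal'cev operation.
universe = range(5)
malcev = lambda x, y, z: (x - y + z) % 5
print(is_malcev(malcev, universe))              # True

# The ternary majority operation on {0, 1} is a near-unanimity term.
majority = lambda x, y, z: (x + y + z) >= 2
print(is_near_unanimity(majority, [0, 1], 3))   # True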

1986 ◽  
Vol 9 (3) ◽  
pp. 323-342
Author(s):  
Joseph Y.-T. Leung ◽  
Burkhard Monien

We consider the computational complexity of finding an optimal deadlock recovery. It is known that for an arbitrary number of resource types the problem is NP-hard even when the total cost of deadlocked jobs and the total number of resource units are “small” relative to the number of deadlocked jobs. It is also known that for one resource type the problem is NP-hard when the total cost of deadlocked jobs and the total number of resource units are “large” relative to the number of deadlocked jobs. In this paper we show that for one resource type the problem is solvable in polynomial time when the total cost of deadlocked jobs or the total number of resource units is “small” relative to the number of deadlocked jobs. For fixed m ⩾ 2 resource types, we show that the problem is solvable in polynomial time when the total number of resource units is “small” relative to the number of deadlocked jobs. On the other hand, when the total number of resource units is “large”, the problem becomes NP-hard even when the total cost of deadlocked jobs is “small” relative to the number of deadlocked jobs. The results in the paper, together with previously known ones, give a complete delineation of the complexity of this problem under various assumptions on the input parameters.
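To make the problem concrete, here is a hedged Python sketch of a simplified single-resource-type formulation (the model details -- holdings, remaining needs, and abort costs -- are illustrative assumptions, not the paper's exact definitions): it finds a cheapest subset of deadlocked jobs to abort by exhaustive search, which is exponential and thus only workable for tiny instances.

from itertools import combinations

def feasible(jobs, free):
    """Greedy schedulability check for one resource type: run jobs in
    order of increasing remaining need; a finished job releases the
    units it holds (banker's-algorithm style)."""
    avail = free
    for hold, need in sorted(jobs, key=lambda j: j[1]):
        if need > avail:
            return False
        avail += hold          # job completes and releases its units
    return True

def optimal_recovery(jobs, costs, free):
    """Brute force: cheapest subset of jobs to abort so the rest can
    finish.  Exponential in the number of jobs -- illustration only."""
    best, best_cost = None, float('inf')
    n = len(jobs)
    for r in range(n + 1):
        for aborted in combinations(range(n), r):
            released = free + sum(jobs[i][0] for i in aborted)
            rest = [jobs[i] for i in range(n) if i not in aborted]
            cost = sum(costs[i] for i in aborted)
            if cost < best_cost and feasible(rest, released):
                best, best_cost = aborted, cost
    return best, best_cost

# jobs = (units held, further units needed); all quantities hypothetical.
jobs = [(3, 2), (2, 4), (4, 1)]
costs = [5, 1, 9]
print(optimal_recovery(jobs, costs, free=0))    # ((1,), 1)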


Quantum ◽  
2020 ◽  
Vol 4 ◽  
pp. 329
Author(s):  
Tomoyuki Morimae ◽  
Suguru Tamaki

It is known that several sub-universal quantum computing models, such as the IQP model, the Boson sampling model, the one-clean qubit model, and the random circuit model, cannot be classically simulated in polynomial time under certain conjectures in classical complexity theory. Recently, these results have been improved to "fine-grained" versions where even exponential-time classical simulations are excluded assuming certain classical fine-grained complexity conjectures. All these fine-grained results are, however, about the hardness of strong simulations or multiplicative-error sampling. It was open whether any fine-grained quantum supremacy result can be shown for a more realistic setup, namely, additive-error sampling. In this paper, we show additive-error fine-grained quantum supremacy (under certain complexity assumptions). As examples, we consider the IQP model, a mixture of the IQP model and log-depth Boolean circuits, and Clifford+T circuits. Similar results should hold for other sub-universal models.
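For intuition, the following Python sketch performs exactly such an exponential-time strong simulation of a tiny IQP circuit (Hadamard layers around a diagonal phase layer); the 3-qubit phase function is a hypothetical example, not one of the paper's instances.

import numpy as np

def iqp_output_probs(n, phase):
    """Brute-force strong simulation of an n-qubit IQP circuit
    H^(x n) . D . H^(x n) applied to |0...0>, where D multiplies basis
    state |x> by exp(i * phase(x)).  Time and memory are O(4^n) in this
    naive version -- the kind of cost the fine-grained conjectures make
    unavoidable at scale."""
    dim = 2 ** n
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    Hn = np.array([[1.0]])
    for _ in range(n):
        Hn = np.kron(Hn, H)
    state = Hn[:, 0].astype(complex)            # H^(x n)|0...0>
    state *= np.exp(1j * np.array([phase(x) for x in range(dim)]))
    state = Hn @ state                          # second Hadamard layer
    return np.abs(state) ** 2                   # exact output distribution

# Hypothetical 3-qubit instance: Z rotations on each qubit plus one
# ZZ coupling, i.e. a diagonal layer of commuting gates.
def phase(x):
    bits = [(x >> i) & 1 for i in range(3)]
    return (np.pi / 4) * sum(bits) + (np.pi / 2) * bits[0] * bits[1]

probs = iqp_output_probs(3, phase)
print(probs.round(4), probs.sum().round(6))     # distribution sums to 1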


1999 ◽  
Vol 09 (01) ◽  
pp. 113-128 ◽  
Author(s):  
CLIFFORD BERGMAN ◽  
DAVID JUEDES ◽  
GIORA SLUTZKI

Two algebraic structures with the same universe are called term-equivalent if they have the same clone of term operations. We show that the problem of determining whether two finite algebras of finite similarity type are term-equivalent is complete for deterministic exponential time.
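A brute-force Python sketch suggests where the blow-up comes from: closing the basic operations under composition can generate very large sets of term operations. For simplicity this sketch compares only the binary parts of the clones of algebras whose basic operations are binary (full term-equivalence compares operations of all arities); the example algebras are hypothetical.

from itertools import product

def binary_term_ops(universe, basic_ops):
    """All binary term operations of an algebra with binary basic
    operations, as value tables indexed by argument pairs.  The closure
    can be exponentially large in the size of the universe."""
    pairs = list(product(list(universe), repeat=2))
    proj1 = tuple(x for x, y in pairs)
    proj2 = tuple(y for x, y in pairs)
    ops = {proj1, proj2}
    changed = True
    while changed:                       # fixpoint of composition
        changed = False
        for g, h in product(list(ops), repeat=2):
            for f in basic_ops:
                t = tuple(f(g[i], h[i]) for i in range(len(pairs)))
                if t not in ops:
                    ops.add(t)
                    changed = True
    return ops

# Hypothetical examples on {0, 1}: ({0,1}; AND) and ({0,1}; OR) differ,
# while AND and AND-with-swapped-arguments generate the same operations.
U = [0, 1]
and_ops = binary_term_ops(U, [lambda x, y: x & y])
or_ops = binary_term_ops(U, [lambda x, y: x | y])
and_swap = binary_term_ops(U, [lambda x, y: y & x])
print(and_ops == or_ops)     # False: not term-equivalent (binary parts)
print(and_ops == and_swap)   # True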


2007 ◽  
Vol 18 (04) ◽  
pp. 715-725
Author(s):  
CÉDRIC BASTIEN ◽  
JUREK CZYZOWICZ ◽  
WOJCIECH FRACZAK ◽  
WOJCIECH RYTTER

Simple grammar reduction is an important component in the implementation of Concatenation State Machines (a hardware version of stateless push-down automata designed for wire-speed network packet classification). We present a comparison and experimental analysis of the best-known algorithms for grammar reduction. There are two approaches to this problem: one processes compressed strings without decompression, while the other processes strings explicitly. It turns out that the second approach is more efficient in the considered practical scenario despite having worst-case exponential time complexity (while the first one is polynomial). The study has been conducted in the context of network packet classification, where simple grammars are used for representing the classification policies.
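The trade-off can be illustrated with a toy Python sketch (the grammar is a hypothetical worst case, not one of the paper's benchmarks): expanding derived strings explicitly doubles their length at each level, while working with lengths only -- a stand-in for a compressed representation -- stays polynomial.

from functools import lru_cache

# A simple grammar in Greibach normal form: for each nonterminal X and
# terminal a there is at most one production X -> a X1 ... Xk.
# Hypothetical blow-up family: A_i -> a A_{i-1} A_{i-1}.
N = 20
grammar = {"A0": {"a": []}}
for i in range(1, N + 1):
    grammar[f"A{i}"] = {"a": [f"A{i-1}", f"A{i-1}"]}

def explicit_word(x):
    """Expand the unique word derivable from x explicitly -- its length
    doubles at each level, so this approach is worst-case exponential."""
    (a, rest), = grammar[x].items()
    return a + "".join(explicit_word(y) for y in rest)

@lru_cache(maxsize=None)
def word_length(x):
    """Work 'in compressed form': only one length per nonterminal is
    computed, so the cost is polynomial in the grammar size."""
    (a, rest), = grammar[x].items()
    return 1 + sum(word_length(y) for y in rest)

print(word_length(f"A{N}"))       # 2^21 - 1, computed in O(N) steps
print(len(explicit_word("A3")))   # 15 -- explicit expansion, small case only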


Complexity ◽  
2018 ◽  
Vol 2018 ◽  
pp. 1-21 ◽  
Author(s):  
Linqiang Pan ◽  
Bosheng Song ◽  
Luis Valencia-Cabrera ◽  
Mario J. Pérez-Jiménez

Tissue P systems with evolutional communication (symport/antiport) rules are computational models inspired by biochemical systems consisting of multiple individuals living and cooperating in a certain environment, where objects can be modified when moving from one region to another. In this work, cell separation, inspired by the membrane fission process, is introduced into the framework of tissue P systems with evolutional communication rules. The computational complexity of this kind of P system is investigated. It is proved that only problems in the class P can be efficiently solved by tissue P systems with cell separation whose evolutional communication rules have length at most (n,1), for each natural number n≥1. In the case where that length is upper bounded by (3,2), a polynomial-time solution to the SAT problem is provided; hence, assuming that P≠NP, a new boundary between tractability and NP-hardness, based on the length of evolutional communication rules, is established. Finally, a new simulator for tissue P systems with evolutional communication rules is designed and used to check the correctness of the solution to the SAT problem.
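As a toy illustration of evolutional communication rules (greatly simplified: sequential rule application rather than maximal parallelism, symport-style rules only, no cell separation, and all regions and rules hypothetical), the Python sketch below shows objects being rewritten as they move between regions.

from collections import Counter

def applicable(regions, rule):
    i, u, _, _ = rule
    return all(regions[i][obj] >= k for obj, k in u.items())

def apply_rule(regions, rule):
    i, u, j, v = rule
    regions[i] -= u   # consume multiset u in region i ...
    regions[j] += v   # ... and produce the evolved multiset v in region j

# Region 0 plays the role of the environment.
regions = {0: Counter(), 1: Counter({"a": 2, "b": 1}), 2: Counter()}
rules = [
    (1, Counter({"a": 1, "b": 1}), 2, Counter({"c": 1})),  # (a,b)_1 -> (c)_2
    (2, Counter({"c": 1}), 0, Counter({"yes": 1})),        # (c)_2 -> (yes)_0
]

changed = True
while changed:            # apply rules until none is applicable
    changed = False
    for rule in rules:
        while applicable(regions, rule):
            apply_rule(regions, rule)
            changed = True
print(dict(regions[0]))   # {'yes': 1}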


2011 ◽  
Vol 22 (02) ◽  
pp. 395-409 ◽  
Author(s):  
HOLGER PETERSEN

We investigate the efficiency of simulations of storages by several counters. A simulation of a pushdown store is described which is optimal in the sense that reducing the number of counters of a simulator leads to an increase in time complexity. The lower bound also establishes a tight counter hierarchy in exponential time. Then we turn to simulations of a set of counters by a different number of counters. We improve and generalize a known simulation in polynomial time. Greibach has shown that adding s + 1 counters increases the power of machines working in time n^s. Using a new family of languages we show here a tight hierarchy result for machines with the same polynomial time-bound. We also prove hierarchies for machines with a fixed number of counters and with growing polynomial time-bounds. For machines with one counter and an additional "store zero" instruction we establish the equivalence of real-time and linear time. If at least two counters are available, the classes of languages accepted in real-time and linear time can be separated.
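The flavour of such simulations can be seen in the classic textbook encoding of a pushdown store by two counters, sketched below in Python. This is not the paper's optimal construction, but it shows why each stack operation costs time proportional to the counter contents, which is where time-complexity trade-offs arise.

class TwoCounterStack:
    """Pushdown store over alphabet {1, ..., base-1} encoded in one
    counter as a base-`base` numeral; an auxiliary counter supports the
    arithmetic using only increment, decrement, and zero-test.  (The
    tuple swap below abbreviates a transfer loop of the same kind.)"""

    def __init__(self, base):
        self.base = base
        self.main = 0   # encodes the whole stack
        self.aux = 0    # scratch counter

    def push(self, sym):
        assert 1 <= sym < self.base
        while self.main > 0:           # main := main * base (into aux)
            self.main -= 1
            self.aux += self.base      # `base` increments of aux
        self.main, self.aux = self.aux, 0
        self.main += sym               # main := main * base + sym

    def pop(self):
        """main := main // base; the remainder is the top symbol
        (0 signals an empty stack)."""
        rem = None
        while rem is None:
            for j in range(self.base):
                if self.main == 0:     # block incomplete: j is the remainder
                    rem = j
                    break
                self.main -= 1
            else:
                self.aux += 1          # one full block of `base` removed
        self.main, self.aux = self.aux, 0
        return rem

s = TwoCounterStack(base=3)            # stack alphabet {1, 2}
for sym in [1, 2, 2, 1]:
    s.push(sym)
print([s.pop() for _ in range(4)])     # [1, 2, 2, 1] -- LIFO order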


1997 ◽  
Vol 08 (03) ◽  
pp. 237-252 ◽  
Author(s):  
H. K. Dai

Concentrators and generalized-concentrators are interconnection networks that provide, respectively, pairwise vertex-disjoint directed paths and trees to satisfy interconnection requests. An interconnection network is non-blocking in the strict sense if every compatible interconnection request can be satisfied by a path regardless of any existing interconnections. Using b-matching techniques, we show that the strictly non-blocking concentration and generalized-concentration properties can be decided in polynomial time for networks of small depth.
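For illustration, the vertex-disjoint paths underlying the concentration property can be computed with a standard max-flow computation after splitting each vertex; the paper works with b-matching instead, and the network below and the use of networkx are assumptions of this sketch.

import networkx as nx

def vertex_disjoint_paths(edges, sources, sinks):
    """Split every vertex v into (v,'in') -> (v,'out') with capacity 1,
    so each unit of S-T flow is a vertex-disjoint directed path."""
    G = nx.DiGraph()
    nodes = {v for e in edges for v in e}
    for v in nodes:
        G.add_edge((v, "in"), (v, "out"), capacity=1)
    for u, v in edges:
        G.add_edge((u, "out"), (v, "in"), capacity=1)
    for s in sources:
        G.add_edge("S", (s, "in"), capacity=1)
    for t in sinks:
        G.add_edge((t, "out"), "T", capacity=1)
    value, _ = nx.maximum_flow(G, "S", "T")
    return value   # == len(sources) iff the request can be concentrated

# Hypothetical depth-1 network with three requests.
edges = [("a", "x"), ("a", "y"), ("b", "x"), ("b", "z"), ("c", "y")]
print(vertex_disjoint_paths(edges, sources=["a", "b", "c"],
                            sinks=["x", "y", "z"]))   # 3: all requests met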


1976 ◽  
Vol 5 (68) ◽  
Author(s):  
Neil D. Jones ◽  
Steven S. Muchnick

In an earlier paper (JACM, 1976) we studied the computational complexity of a number of questions of both programming and theoretical interest (e.g. halting, looping, equivalence) concerning the behaviour of programs written in an extremely simple programming language. These finite memory programs, or fmps, model the behaviour of FORTRAN-like programs with a finite memory whose size can be determined by examination of the program itself.

The present paper is a continuation in which we extend the analysis to include ALGOL-like programs (called fmp^(rec)s) with the finite memory augmented by an implicit pushdown stack used to support recursion.

Our major results are the following. First, we show that at least deterministic exponential time is required to determine whether a program in the basic fmp^(rec) model accepts a nonempty set. Then we show that a model with a limited version of call-by-name requires exponential space to determine acceptance of a nonempty set, and that a more sophisticated model with rewritable conditional formal parameters has an undecidable halting problem. The same lower bounds apply to the equivalence problem, which, in contrast to the situation for the basic fmp model, is not known to be decidable (since it is not known whether equivalence of deterministic pushdown automata is decidable).
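As a hedged illustration of the model's extra power (written in ordinary Python, not the paper's fmp^(rec) notation): finite explicit memory plus an implicit call stack suffices to recognize the non-regular language { a^n b^n }, which no finite-memory program without recursion can accept.

def match(s, i):
    """Return the index just past the segment a^k b^k starting at i,
    or -1 if the segment is malformed.  Each frame uses O(1) explicit
    variables; the unbounded storage is the implicit call stack."""
    if i < len(s) and s[i] == "a":
        j = match(s, i + 1)            # recurse past the nested segment
        if j != -1 and j < len(s) and s[j] == "b":
            return j + 1
        return -1
    return i                           # k = 0: the empty segment

def accept(s):
    return match(s, 0) == len(s)

print(accept("aaabbb"), accept("aabbb"))   # True False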


2003 ◽  
Vol 10 (17) ◽  
Author(s):  
Luca Aceto ◽  
Jens Alsted Hansen ◽  
Anna Ingólfsdóttir ◽  
Jacob Johnsen ◽  
John Knudsen

Consistency checking is a fundamental computational problem in genetics. Given a pedigree and information on the genotypes (of some) of the individuals in it, the aim of consistency checking is to determine whether these data are consistent with the classic Mendelian laws of inheritance. This problem arose originally from the geneticists' need to filter erroneous information from their input data, and is well motivated from both a biological and a sociological viewpoint. This paper shows that consistency checking is NP-complete, even with focus on a single gene and in the presence of three alleles. Several other results on the computational complexity of problems from genetics that are related to consistency checking are also offered. In particular, it is shown that checking the consistency of pedigrees over two alleles, and of pedigrees without loops, can be done in polynomial time.
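The local rule at the heart of consistency checking is easy to state: a child's genotype must contain one allele from each parent. The Python sketch below checks this rule and decides consistency of a small pedigree by exhaustive search over untyped individuals (exponential in general, in line with the NP-completeness result; the pedigree and genotypes are hypothetical).

from itertools import product

def mendelian(child, mother, father):
    """A trio is consistent if the child's genotype consists of one
    allele from each parent (genotypes are unordered allele pairs)."""
    c1, c2 = child
    return ((c1 in mother and c2 in father) or
            (c2 in mother and c1 in father))

def consistent(pedigree, genotypes, alleles):
    """Brute force over all genotype assignments to untyped individuals;
    two-allele and loop-free pedigrees admit polynomial-time algorithms
    instead of this exponential search."""
    untyped = [p for trio in pedigree for p in trio if p not in genotypes]
    untyped = list(dict.fromkeys(untyped))           # dedupe, keep order
    pairs = [(a, b) for a in alleles for b in alleles if a <= b]
    for choice in product(pairs, repeat=len(untyped)):
        g = {**genotypes, **dict(zip(untyped, choice))}
        if all(mendelian(g[c], g[m], g[f]) for c, m, f in pedigree):
            return True
    return False

# Hypothetical pedigree: (child, mother, father) triples over alleles A, B.
pedigree = [("child", "mom", "dad")]
print(consistent(pedigree, {"child": ("A", "A"), "mom": ("B", "B")},
                 alleles="AB"))    # False: mom can only pass on B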


2022 ◽  
Vol 23 (1) ◽  
pp. 1-35
Author(s):  
Manuel Bodirsky ◽  
Marcello Mamino ◽  
Caterina Viola

Valued constraint satisfaction problems (VCSPs) are a large class of combinatorial optimisation problems. The computational complexity of VCSPs depends on the set of allowed cost functions in the input. Recently, the computational complexity of all VCSPs for finite sets of cost functions over finite domains has been classified. Many natural optimisation problems, however, cannot be formulated as VCSPs over a finite domain. We initiate the systematic investigation of the complexity of infinite-domain VCSPs with piecewise linear homogeneous (PLH) cost functions. Such VCSPs can be solved in polynomial time if the cost functions are improved by fully symmetric fractional operations of all arities. We show this by reducing the problem to a finite-domain VCSP which can be solved using the basic linear program relaxation. It follows that VCSPs for submodular PLH cost functions can be solved in polynomial time; in fact, we show that submodular PLH functions form a maximally tractable class of PLH cost functions.
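For intuition, submodularity here is the lattice inequality f(x ∧ y) + f(x ∨ y) ≤ f(x) + f(y), where ∧ and ∨ are componentwise minimum and maximum. The Python sketch below checks it by brute force on a finite grid (PLH cost functions actually live on the rationals, so a finite grid only illustrates the condition; the sample functions are hypothetical).

from itertools import product

def is_submodular(f, domain, arity):
    """Brute-force check of f(x ^ y) + f(x v y) <= f(x) + f(y) with
    componentwise min and max, over a finite grid of tuples."""
    for x in product(domain, repeat=arity):
        for y in product(domain, repeat=arity):
            lo = tuple(map(min, x, y))
            hi = tuple(map(max, x, y))
            if f(lo) + f(hi) > f(x) + f(y) + 1e-9:
                return False
    return True

# max(x1, x2) is submodular (and piecewise linear homogeneous).
print(is_submodular(lambda t: max(t), range(-3, 4), 2))        # True
# x1 * x2 is not submodular: x=(1,0), y=(0,1) violates the inequality.
print(is_submodular(lambda t: t[0] * t[1], range(-3, 4), 2))   # False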

