The Exponential-Time Complexity of Counting (Quantum) Graph Homomorphisms

Author(s):  
Hubie Chen ◽  
Radu Curticapean ◽  
Holger Dell


2007 ◽  
Vol 18 (04) ◽  
pp. 715-725 ◽  
Author(s):  
CÉDRIC BASTIEN ◽  
JUREK CZYZOWICZ ◽  
WOJCIECH FRACZAK ◽  
WOJCIECH RYTTER

Simple grammar reduction is an important component in the implementation of Concatenation State Machines (a hardware version of stateless push-down automata designed for wire-speed network packet classification). We present a comparison and experimental analysis of the best-known algorithms for grammar reduction. There are two approaches to this problem: one processes compressed strings without decompression, while the other processes strings explicitly. It turns out that the second approach is more efficient in the considered practical scenario, despite having worst-case exponential time complexity (whereas the first is polynomial). The study has been conducted in the context of network packet classification, where simple grammars are used to represent classification policies.
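As a minimal illustration of why the explicit approach is exponential in the worst case (a toy sketch of grammar-compressed strings in general, not the specific algorithms compared in the paper; all names here are ours): a grammar of size O(n) can derive a string of length 2^n, so any method that materializes derived strings may need exponential time, while methods working directly on the compressed representation avoid the blow-up.

```python
# Toy sketch (not the paper's algorithms): expanding a grammar-compressed
# string explicitly can cost exponential time, because a grammar of size
# O(n) may derive a string of length 2^n.

def expand(symbol, rules):
    """Fully expand `symbol`; `rules` maps each nonterminal to its one production."""
    if symbol not in rules:                      # terminal symbol
        return symbol
    return "".join(expand(s, rules) for s in rules[symbol])

# X_i -> X_{i-1} X_{i-1}, so X_n derives a^(2^n) from only n + 1 rules.
n = 10
rules = {f"X{i}": [f"X{i-1}", f"X{i-1}"] for i in range(1, n + 1)}
rules["X0"] = ["a"]

print(len(expand(f"X{n}", rules)))               # 1024 = 2**10
```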


2011 ◽  
Vol 22 (02) ◽  
pp. 395-409 ◽  
Author(s):  
HOLGER PETERSEN

We investigate the efficiency of simulations of storages by several counters. A simulation of a pushdown store is described which is optimal in the sense that reducing the number of counters of a simulator leads to an increase in time complexity. The lower bound also establishes a tight counter hierarchy in exponential time. Then we turn to simulations of a set of counters by a different number of counters. We improve and generalize a known simulation in polynomial time. Greibach has shown that adding s + 1 counters increases the power of machines working in time n^s. Using a new family of languages, we show here a tight hierarchy result for machines with the same polynomial time bound. We also prove hierarchies for machines with a fixed number of counters and with growing polynomial time bounds. For machines with one counter and an additional "store zero" instruction, we establish the equivalence of real-time and linear time. If at least two counters are available, the classes of languages accepted in real-time and in linear time can be separated.
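To make the flavor of such simulations concrete, here is a toy sketch of the classical base-encoding idea (an illustration under our own assumptions, not Petersen's optimal construction): a pushdown store over a k-letter alphabet can be held in a single counter as a base-(k+1) number, with push and pop carried out by unit increments and decrements, as a counter machine must.

```python
# Toy sketch of the classical encoding behind pushdown-by-counter
# simulations (not Petersen's optimal construction): the stack is a
# base-B number in one counter; push/pop run in unit steps, so each
# operation costs time proportional to the counter's current value.

B = 3  # stack alphabet {1, 2}; counter value 0 encodes the empty stack

def push(counter, symbol):
    """counter := counter * B + symbol, using only unit increments."""
    result = 0
    while counter > 0:      # a second counter would hold `result`
        counter -= 1
        result += B         # B unit increments
    return result + symbol

def pop(counter):
    """Return (counter // B, counter % B): the shrunken stack and the top symbol."""
    result = 0
    while counter >= B:
        counter -= B        # B unit decrements
        result += 1
    return result, counter

c = push(push(0, 1), 2)     # stack [1, 2] -> counter 1*3 + 2 = 5
print(pop(c))               # (1, 2): stack [1], popped symbol 2
```

The cost of each push or pop grows with the counter value, which is exponential in the stack height; the abstract's lower bound makes this kind of time/counter trade-off precise.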


10.37236/9216 ◽  
2021 ◽  
Vol 28 (3) ◽  
Author(s):  
Markus Hunziker ◽  
John A. Miller ◽  
Mark Sepanski

By the Pieri rule, the tensor product of an exterior power and a finite-dimensional irreducible representation of a general linear group has a multiplicity-free decomposition. The embeddings of the constituents are called Pieri inclusions and were first studied by Weyman in his thesis and described explicitly by Olver. More recently, these maps have appeared in the work of Eisenbud, Fløystad, and Weyman and of Sam and Weyman on computing pure free resolutions for classical groups. In this paper, we give a new closed-form, non-recursive description of Pieri inclusions. For partitions with a bounded number of distinct parts, the resulting algorithm has polynomial time complexity, whereas the previously known algorithm has exponential time complexity.
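For reference, the multiplicity-free decomposition in question is the standard Pieri rule for exterior powers (stated here for context, in our own notation):

```latex
% Pieri rule for GL(V): tensoring with an exterior power is multiplicity-free.
% S_\lambda V denotes the irreducible representation of highest weight \lambda;
% the sum runs over partitions \mu obtained from \lambda by adding k boxes,
% no two in the same row (a vertical strip).
\Lambda^k V \otimes S_\lambda V \;\cong\;
  \bigoplus_{\substack{\mu/\lambda \text{ vertical strip} \\ |\mu/\lambda| = k}} S_\mu V
```

The Pieri inclusions are then the embeddings S_\mu V → Λ^k V ⊗ S_\lambda V of these constituents, each determined up to scalar since every summand occurs exactly once.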


2014 ◽  
Vol 10 (4) ◽  
pp. 1-32 ◽  
Author(s):  
Holger Dell ◽  
Thore Husfeldt ◽  
Dániel Marx ◽  
Nina Taslaman ◽  
Martin Wahlén

2016 ◽  
Vol 497 ◽  
pp. 23-43 ◽  
Author(s):  
Carlos M. Ortiz ◽  
Vern I. Paulsen

Author(s):  
Konrad K. Dabrowski ◽  
Peter Jonsson ◽  
Sebastian Ordyniak ◽  
George Osipov

Expressive temporal reasoning formalisms are essential for AI. One family of such formalisms consists of disjunctive extensions of the simple temporal problem (STP). Such extensions are well studied in the literature and they have many important applications. It is known that deciding satisfiability of disjunctive STPs is NP-hard, while the fine-grained complexity of such problems is virtually unexplored. We present novel algorithms that exploit structural properties of the solution space and prove, assuming the Exponential-Time Hypothesis, that their worst-case time complexity is close to optimal. Among other things, we make progress towards resolving a long-open question concerning whether Allen's interval algebra can be solved in single-exponential time, by giving a 2^{O(n log log n)} algorithm for the special case of unit-length intervals.
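For background (this sketches the standard polynomial-time test for the underlying non-disjunctive STP, not the paper's algorithms; the helper name is ours): an STP instance is a conjunction of difference constraints x_j - x_i <= c and is satisfiable exactly when its weighted constraint graph has no negative cycle, which Floyd-Warshall detects in O(n^3) time. The disjunctive extensions are NP-hard precisely because each constraint offers a choice among several such bounds.

```python
# Baseline STP satisfiability via negative-cycle detection (standard
# technique, not the paper's contribution).
import math

def stp_satisfiable(n, constraints):
    """constraints: (i, j, c) triples meaning x_j - x_i <= c, variables 0..n-1."""
    d = [[0 if i == j else math.inf for j in range(n)] for i in range(n)]
    for i, j, c in constraints:
        d[i][j] = min(d[i][j], c)      # edge i -> j with weight c
    for k in range(n):                 # Floyd-Warshall shortest paths
        for a in range(n):
            for b in range(n):
                if d[a][k] + d[k][b] < d[a][b]:
                    d[a][b] = d[a][k] + d[k][b]
    return all(d[i][i] >= 0 for i in range(n))   # no negative cycle

# x1 - x0 <= 2 together with x0 - x1 <= -3 (i.e. x1 - x0 >= 3) is infeasible.
print(stp_satisfiable(2, [(0, 1, 2), (1, 0, -3)]))  # False
```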


Author(s):  
Pierre-Loïc Garoche

This chapter aims to provide the intuition behind convex optimization algorithms and to address their effective use in floating-point implementations. It first briefly presents the algorithms, assuming real-number semantics. As outlined in Chapter 4, convex conic programming is supported by different methods depending on the cone considered. The best-known approach for linear constraints is Dantzig's simplex method. Although its worst-case time complexity is exponential in the number of constraints, the simplex method performs well in practice. Another approach is the family of interior-point methods, initially proposed by Karmarkar and popularized by Nesterov and Nemirovski. These can be characterized as path-following methods, in which a sequence of local linear problems is solved, typically by Newton's method. After presenting these algorithms, the chapter discusses approaches to obtaining sound results.
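As a concrete, if trivial, illustration of the setting, the sketch below solves a tiny linear program with SciPy's linprog (the example and its numbers are ours, not the chapter's). The solver computes in floating point, which is exactly why the chapter then turns to obtaining sound results from such answers.

```python
# Illustrative only: a tiny LP solved with an off-the-shelf solver.
from scipy.optimize import linprog

# minimize  -x - y   subject to  x + 2y <= 4,  3x + y <= 6,  x, y >= 0
res = linprog(c=[-1, -1],
              A_ub=[[1, 2], [3, 1]],
              b_ub=[4, 6],
              bounds=[(0, None), (0, None)])

print(res.x, res.fun)  # approximately [1.6, 1.2], objective -2.8
```

The returned optimum is a floating-point approximation of the exact vertex (8/5, 6/5); certifying such answers is the subject of the chapter's final discussion.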


1980 ◽  
Vol 9 (107) ◽  
Author(s):  
Neil D. Jones

Jazayeri, Ogden, and Rounds have shown that the high time complexity of Knuth's algorithm for testing attribute grammars for circularity is no accident: there is a constant c > 0 such that any deterministic Turing machine which correctly tests for circularity must run for more than 2^(cn/log n) steps on infinitely many attribute grammars (AGs), where the size n of an AG is the number of symbols required to write it down. Their proof was rather complex; the purpose of this note is to provide a simpler one.
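To fix intuition about what is being decided (a toy illustration under our own assumptions; Knuth's actual test is far more involved because it must quantify over all derivation trees): within any single derivation tree, the attribute instances and their dependencies form a directed graph, and the grammar is circular iff some tree's graph contains a cycle.

```python
# Toy illustration only (not Knuth's algorithm): detect a cyclic
# dependency among attribute instances of one fixed derivation tree.

def has_cycle(deps):
    """deps: attribute instance -> list of instances it depends on."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {}

    def visit(v):
        color[v] = GRAY                 # v is on the current DFS path
        for w in deps.get(v, []):
            c = color.get(w, WHITE)
            if c == GRAY or (c == WHITE and visit(w)):
                return True             # back edge found: cycle
        color[v] = BLACK
        return False

    return any(color.get(v, WHITE) == WHITE and visit(v) for v in deps)

# S.s needs S.i, which needs S.s back: a circular attribute dependency.
print(has_cycle({"S.s": ["S.i"], "S.i": ["S.s"]}))  # True
```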

