The Relative Exponential Time Complexity of Approximate Counting Satisfying Assignments

Algorithmica ◽  
2016 ◽  
Vol 75 (2) ◽  
pp. 339-362
Author(s):  
Patrick Traxler
2007 ◽  
Vol 18 (04) ◽  
pp. 715-725
Author(s):  
Cédric Bastien ◽  
Jurek Czyzowicz ◽  
Wojciech Fraczak ◽  
Wojciech Rytter

Simple grammar reduction is an important component in the implementation of Concatenation State Machines (a hardware version of stateless push-down automata designed for wire-speed network packet classification). We present a comparison and experimental analysis of the best-known algorithms for grammar reduction. There are two approaches to this problem: one processes compressed strings without decompression, while the other processes strings explicitly. It turns out that the second approach is more efficient in the considered practical scenario despite having worst-case exponential time complexity (while the first one is polynomial). The study has been conducted in the context of network packet classification, where simple grammars are used for representing the classification policies.
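The worst-case blow-up of the explicit approach is easy to see in miniature: a straight-line grammar with O(n) rules can generate a string of length 2^n. Below is a minimal Python sketch of explicit expansion (illustrative only; the function name and grammar encoding are ours, not the paper's).

def expand(grammar, symbol, cache=None):
    """Return the explicit string generated by `symbol`.

    `grammar` maps each nonterminal to its single right-hand side, a list
    of symbols; any symbol not in `grammar` is a terminal.
    """
    if cache is None:
        cache = {}
    if symbol not in grammar:                       # terminal symbol
        return symbol
    if symbol not in cache:
        cache[symbol] = "".join(expand(grammar, s, cache)
                                for s in grammar[symbol])
    return cache[symbol]

# X_n generates a string of length 2**n from a grammar with n + 1 rules:
n = 20
g = {"X0": ["a"]}
for i in range(1, n + 1):
    g["X%d" % i] = ["X%d" % (i - 1), "X%d" % (i - 1)]
assert len(expand(g, "X%d" % n)) == 2 ** n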


2011 ◽  
Vol 22 (02) ◽  
pp. 395-409 ◽  
Author(s):  
Holger Petersen

We investigate the efficiency of simulations of storage devices by several counters. A simulation of a pushdown store is described which is optimal in the sense that reducing the number of counters of a simulator leads to an increase in time complexity. The lower bound also establishes a tight counter hierarchy in exponential time. Then we turn to simulations of a set of counters by a different number of counters. We improve and generalize a known simulation in polynomial time. Greibach has shown that adding s + 1 counters increases the power of machines working in time n^s. Using a new family of languages, we show here a tight hierarchy result for machines with the same polynomial time bound. We also prove hierarchies for machines with a fixed number of counters and with growing polynomial time bounds. For machines with one counter and an additional "store zero" instruction we establish the equivalence of real-time and linear time. If at least two counters are available, the classes of languages accepted in real-time and in linear time can be separated.
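For intuition, a classic way to simulate a pushdown store with counters encodes the stack contents as a single natural number in bijective base k, so that push and pop become multiplication and division by k; on an actual counter machine these arithmetic steps are themselves carried out by shuttling a value between counters, which is the source of the time overheads studied here. A minimal Python sketch of the encoding (ours, not the paper's construction):

K = 2  # stack alphabet {1, ..., K}

def push(counter, symbol):
    """Push a symbol: one multiply-and-add, i.e. one counter-shuttle pass."""
    assert 1 <= symbol <= K
    return counter * K + symbol

def pop(counter):
    """Pop the top symbol: extract the low bijective base-K digit."""
    symbol = counter % K or K
    return (counter - symbol) // K, symbol

c = 0
for s in [1, 2, 2, 1]:
    c = push(c, s)
stack = []
while c:
    c, s = pop(c)
    stack.append(s)
assert stack == [1, 2, 2, 1]           # popped in reverse push order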


10.37236/9216 ◽  
2021 ◽  
Vol 28 (3) ◽  
Author(s):  
Markus Hunziker ◽  
John A. Miller ◽  
Mark Sepanski

By the Pieri rule, the tensor product of an exterior power and a finite-dimensional irreducible representation of a general linear group has a multiplicity-free decomposition. The embeddings of the constituents are called Pieri inclusions and were first studied by Weyman in his thesis and described explicitly by Olver. More recently, these maps have appeared in the work of Eisenbud, Fløystad, and Weyman and of Sam and Weyman to compute pure free resolutions for classical groups. In this paper, we give a new closed-form, non-recursive description of Pieri inclusions. For partitions with a bounded number of distinct parts, the resulting algorithm has polynomial time complexity, whereas the previously known algorithm has exponential time complexity.
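For concreteness, the decomposition in question can be written as follows, in standard Schur-functor notation (ours, not quoted from the paper):

\[
  \bigwedge\nolimits^{k} V \otimes \mathbb{S}_{\lambda}(V)
  \;\cong\;
  \bigoplus_{\mu} \mathbb{S}_{\mu}(V),
\]

where $\mu$ runs over the partitions obtained from $\lambda$ by adding $k$ boxes, no two in the same row (a vertical strip of size $k$). Each $\mu$ occurs exactly once, which is the multiplicity-freeness, and the Pieri inclusions are the embeddings $\mathbb{S}_{\mu}(V) \hookrightarrow \bigwedge^{k} V \otimes \mathbb{S}_{\lambda}(V)$.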


2021 ◽  
Vol 13 (2) ◽  
pp. 1-24
Author(s):  
Holger Dell ◽  
John Lapinskas

In this article, we introduce a general framework for fine-grained reductions of approximate counting problems to their decision versions. (Thus, we use an oracle that decides whether any witness exists to multiplicatively approximate the number of witnesses with minimal overhead.) This mirrors a foundational result of Sipser (STOC 1983) and Stockmeyer (SICOMP 1985) in the polynomial-time setting, and a similar result of Müller (IWPEC 2006) in the FPT setting. Using our framework, we obtain such reductions for some of the most important problems in fine-grained complexity: the Orthogonal Vectors problem, 3SUM, and the Negative-Weight Triangle problem (which is closely related to All-Pairs Shortest Path). While all these problems have simple algorithms over which it is conjectured that no polynomial improvement is possible, our reductions would remain interesting even if these conjectures were proved; they have only polylogarithmic overhead and can therefore be applied to subpolynomial improvements such as the n^3/exp(Θ(√(log n)))-time algorithm for the Negative-Weight Triangle problem due to Williams (STOC 2014). Our framework is also general enough to apply to versions of the problems for which more efficient algorithms are known. For example, the Orthogonal Vectors problem over GF(m)^d for constant m can be solved in time n · poly(d) by a result of Williams and Yu (SODA 2014); our result implies that we can approximately count the number of orthogonal pairs with essentially the same running time. We also provide a fine-grained reduction from approximate #SAT to SAT. Suppose the Strong Exponential Time Hypothesis (SETH) is false, so that for some 1 < c < 2 and all k there is an O(c^n)-time algorithm for k-SAT. Then we prove that for all k, there is an O((c + o(1))^n)-time algorithm for approximate #k-SAT. In particular, our result implies that the Exponential Time Hypothesis (ETH) is equivalent to the seemingly weaker statement that there is no algorithm to approximate #3-SAT to within a factor of 1+ε in time 2^{o(n)}/ε^2 (taking ε > 0 as part of the input).
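For intuition, the Sipser-Stockmeyer idea that the article transfers to the fine-grained setting can be sketched as follows: add random pairwise-independent hash constraints one at a time and ask the decision oracle whether a witness satisfying them still exists; with roughly 2^j witnesses, satisfiability typically survives about j constraints. A toy Python sketch (ours; the brute-force decide merely stands in for the decision oracle):

import random

def decide(witnesses, constraints):
    """Stand-in decision oracle: is some witness consistent with every
    random parity constraint? Each constraint is (mask, bit), requiring
    popcount(w & mask) mod 2 == bit."""
    return any(
        all(bin(w & mask).count("1") % 2 == bit for mask, bit in constraints)
        for w in witnesses
    )

def approx_log2_count(witnesses, nbits, trials=30):
    """Median number of random parity constraints survivable before
    unsatisfiability; approximates log2(len(witnesses))."""
    results = []
    for _ in range(trials):
        constraints = []
        while decide(witnesses, constraints) and len(constraints) <= nbits:
            constraints.append((random.randrange(1, 2 ** nbits),
                                random.randrange(2)))
        results.append(len(constraints) - 1)
    return sorted(results)[trials // 2]

# toy check: 2^8 = 256 witnesses out of a 16-bit universe
ws = random.sample(range(2 ** 16), 256)
print(approx_log2_count(ws, 16))   # typically prints a value near 8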


2014 ◽  
Vol 10 (4) ◽  
pp. 1-32 ◽  
Author(s):  
Holger Dell ◽  
Thore Husfeldt ◽  
Dániel Marx ◽  
Nina Taslaman ◽  
Martin Wahlén

Author(s):  
Konrad K. Dabrowski ◽  
Peter Jonsson ◽  
Sebastian Ordyniak ◽  
George Osipov

Expressive temporal reasoning formalisms are essential for AI. One family of such formalisms consists of disjunctive extensions of the simple temporal problem (STP). Such extensions are well studied in the literature and have many important applications. It is known that deciding satisfiability of disjunctive STPs is NP-hard, while the fine-grained complexity of such problems is virtually unexplored. We present novel algorithms that exploit structural properties of the solution space and prove, assuming the Exponential-Time Hypothesis, that their worst-case time complexity is close to optimal. Among other things, we make progress towards resolving a long-standing open question concerning whether Allen's interval algebra can be solved in single-exponential time, by giving a 2^{O(n log log n)} algorithm for the special case of unit-length intervals.
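For background, the non-disjunctive core is polynomial-time: an STP is a conjunction of difference constraints x_j - x_i <= c, satisfiable iff the associated weighted graph has no negative cycle, which Bellman-Ford detects in O(nm) time; it is precisely the disjunctions that destroy this picture. A standard textbook Python sketch (not one of the paper's algorithms):

def stp_satisfiable(n, constraints):
    """constraints: list of (i, j, c) meaning x_j - x_i <= c, variables 0..n-1."""
    dist = [0] * n                      # virtual source connected to every variable
    for _ in range(n + 1):
        changed = False
        for i, j, c in constraints:     # relax edge i -> j of weight c
            if dist[i] + c < dist[j]:
                dist[j] = dist[i] + c
                changed = True
        if not changed:
            return True                 # converged: no negative cycle, satisfiable
    return False                        # still relaxing: negative cycle, unsatisfiable

assert stp_satisfiable(2, [(0, 1, 5), (1, 0, -3)])       # 3 <= x1 - x0 <= 5
assert not stp_satisfiable(2, [(0, 1, 2), (1, 0, -3)])   # x1 - x0 <= 2 and >= 3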


Author(s):  
Pierre-Loïc Garoche

This chapter aims at providing the intuition behind convex optimization algorithms and addresses their effective use with floating-point implementations. It first briefly presents the algorithms, assuming a real semantics. As outlined in Chapter 4, convex conic programming is supported by different methods depending on the cone considered. The best-known approach for linear constraints is the simplex method of Dantzig. While it has exponential worst-case time complexity in the number of constraints, the simplex method performs well in practice. Another family consists of the interior-point methods, initially proposed by Karmarkar and made popular by Nesterov and Nemirovski. They can be characterized as path-following methods in which a sequence of local linear problems is solved, typically by Newton's method. After presenting these algorithms, the chapter discusses approaches to obtaining sound results.
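To make the path-following idea concrete, here is a toy log-barrier interior-point iteration for the LP min c^T x subject to Ax <= b (illustrative only; not the chapter's code, and deliberately naive about the floating-point issues the chapter addresses, which surface exactly when a slack b - Ax approaches 0):

import numpy as np

def barrier_lp(c, A, b, x, t=1.0, mu=10.0, outer=8, inner=50):
    """Minimize t*c@x - sum(log(b - A@x)) by damped Newton, then grow t."""
    for _ in range(outer):
        for _ in range(inner):
            s = b - A @ x                       # slacks, must stay > 0
            grad = t * c + A.T @ (1.0 / s)
            hess = A.T @ np.diag(1.0 / s**2) @ A
            step = np.linalg.solve(hess, -grad) # Newton direction
            alpha = 1.0
            while np.any(b - A @ (x + alpha * step) <= 0):
                alpha *= 0.5                    # backtrack to stay strictly feasible
            x = x + alpha * step
            if np.linalg.norm(grad) < 1e-8:
                break                           # centered for current t
        t *= mu                                 # follow the central path
    return x

# toy LP: minimize x + y over the box 0 <= x, y <= 1 (optimum at (0, 0))
A = np.array([[1., 0.], [0., 1.], [-1., 0.], [0., -1.]])
b = np.array([1., 1., 0., 0.])
print(barrier_lp(np.array([1., 1.]), A, b, x=np.array([0.5, 0.5])))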

