Fine-Grained Complexity of Temporal Problems

Author(s):
Konrad K. Dabrowski
Peter Jonsson
Sebastian Ordyniak
George Osipov

Expressive temporal reasoning formalisms are essential for AI. One family of such formalisms consists of disjunctive extensions of the simple temporal problem (STP). Such extensions are well studied in the literature and have many important applications. It is known that deciding satisfiability of disjunctive STPs is NP-hard, while the fine-grained complexity of such problems is virtually unexplored. We present novel algorithms that exploit structural properties of the solution space and prove, assuming the Exponential-Time Hypothesis, that their worst-case time complexity is close to optimal. Among other things, we make progress towards resolving a long-open question concerning whether Allen's interval algebra can be solved in single-exponential time, by giving a 2^{O(n log log n)} algorithm for the special case of unit-length intervals.
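For background, the non-disjunctive core of these formalisms, the STP itself, is decidable in polynomial time: an STP instance is consistent exactly when its distance graph has no negative cycle. The sketch below (not taken from the paper; the function name stp_consistent is illustrative) shows this baseline via Bellman-Ford; the disjunctive extensions studied by the authors are NP-hard and require the structural arguments described above.

def stp_consistent(num_vars, constraints):
    """Decide consistency of a plain (non-disjunctive) STP.

    constraints: list of triples (i, j, c) encoding x_j - x_i <= c.
    Each constraint becomes an edge i -> j of weight c in the distance
    graph; the STP is consistent iff that graph has no negative cycle.
    """
    dist = [0] * num_vars  # implicit source at distance 0 to every variable
    for _ in range(num_vars):            # num_vars relaxation rounds suffice
        for (i, j, c) in constraints:
            if dist[i] + c < dist[j]:
                dist[j] = dist[i] + c
    # One extra pass: any further improvement witnesses a negative cycle.
    for (i, j, c) in constraints:
        if dist[i] + c < dist[j]:
            return False
    return True

# Example: x1 - x0 <= 5, x2 - x1 <= -2, x0 - x2 <= -4 (cycle of weight -1).
print(stp_consistent(3, [(0, 1, 5), (1, 2, -2), (2, 0, -4)]))  # False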

2020
Vol 34 (09)
pp. 13700-13703
Author(s):
Nikhil Vyas
Ryan Williams

All known SAT-solving paradigms (backtracking, local search, and the polynomial method) only yield a 2^{n(1−1/O(k))}-time algorithm for solving k-SAT in the worst case, where the big-O constant is independent of k. For this reason, it has been hypothesized that k-SAT cannot be solved in worst-case 2^{n(1−f(k)/k)} time for any unbounded f : ℕ → ℕ. This hypothesis has been called the "Super-Strong Exponential Time Hypothesis" (Super Strong ETH), modeled after the ETH and the Strong ETH. We prove two results concerning the Super-Strong ETH:

1. It has also been hypothesized that k-SAT is hard to solve for randomly chosen instances near the "critical threshold", where the clause-to-variable ratio is 2^k ln 2 − Θ(1). We give a randomized algorithm which refutes the Super-Strong ETH for the case of random k-SAT and planted k-SAT for any clause-to-variable ratio. In particular, given any random k-SAT instance F with n variables and m clauses, our algorithm decides satisfiability for F in 2^{n(1−Ω(log k)/k)} time, with high probability (over the choice of the formula and the randomness of the algorithm). It turns out that a well-known algorithm from the literature on SAT algorithms does the job: the PPZ algorithm of Paturi, Pudlák, and Zane (1998).

2. The Unique k-SAT problem is the special case where there is at most one satisfying assignment. It is natural to hypothesize that the worst-case (exponential-time) complexity of Unique k-SAT is substantially less than that of k-SAT. Improving prior reductions, we show that the time complexities of Unique k-SAT and k-SAT are very tightly related: if Unique k-SAT is in 2^{n(1−f(k)/k)} time for an unbounded f, then k-SAT is in 2^{n(1−f(k)(1−ε)/k)} time for every ε > 0. Thus, refuting Super Strong ETH in the unique-solution case would refute Super Strong ETH in general.
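The PPZ procedure referenced above is simple to state: repeatedly process the variables in a random order, forcing a variable whenever it is the sole remaining literal of some clause under the current partial assignment, and guessing it uniformly at random otherwise. The following is a rough sketch of that procedure only (the running-time guarantee comes from the paper's analysis, not from this code; function names and the clause encoding are illustrative).

import random

def ppz_round(clauses, n):
    """One randomized PPZ pass; returns a satisfying assignment or None.

    A clause is a list of non-zero ints: literal v means "variable |v| is
    True", -v means "variable |v| is False".
    """
    order = list(range(1, n + 1))
    random.shuffle(order)                       # random variable ordering
    assign = {}
    for var in order:
        forced = None
        for clause in clauses:
            if any(assign.get(abs(lit)) == (lit > 0) for lit in clause):
                continue                        # clause already satisfied
            unassigned = [lit for lit in clause if abs(lit) not in assign]
            # Unit clause whose only remaining literal is on `var`: force it.
            if len(unassigned) == 1 and abs(unassigned[0]) == var:
                forced = unassigned[0] > 0
                break
        assign[var] = forced if forced is not None else (random.random() < 0.5)
    ok = all(any(assign[abs(lit)] == (lit > 0) for lit in clause)
             for clause in clauses)
    return assign if ok else None

def ppz(clauses, n, tries=10000):
    """Repeat independent PPZ rounds until one succeeds (or give up)."""
    for _ in range(tries):
        result = ppz_round(clauses, n)
        if result is not None:
            return result
    return None  # no satisfying assignment found within `tries` rounds

# Example: (x1 or x2) and (not x1 or x3) and (not x2 or not x3)
print(ppz([[1, 2], [-1, 3], [-2, -3]], 3))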


2007
Vol 18 (04)
pp. 715-725
Author(s):
CÉDRIC BASTIEN
JUREK CZYZOWICZ
WOJCIECH FRACZAK
WOJCIECH RYTTER

Simple grammar reduction is an important component in the implementation of Concatenation State Machines (a hardware version of stateless push-down automata designed for wire-speed network packet classification). We present a comparison and experimental analysis of the best-known algorithms for grammar reduction. There are two approaches to this problem: one that processes compressed strings without decompression, and another that processes strings explicitly. It turns out that the second approach is more efficient in the considered practical scenario, despite having worst-case exponential time complexity (while the first is polynomial). The study has been conducted in the context of network packet classification, where simple grammars are used for representing the classification policies.


2017
Vol 27 (01n02)
pp. 85-119
Author(s):
Karl Bringmann
Marvin Künnemann

The Fréchet distance is a well-studied and very popular measure of similarity of two curves. The best known algorithms have quadratic time complexity, which has recently been shown to be optimal assuming the Strong Exponential Time Hypothesis (SETH) [Bringmann, FOCS'14]. To overcome the worst-case quadratic time barrier, restricted classes of curves have been studied that attempt to capture realistic input curves. The most popular such class is c-packed curves, for which the Fréchet distance has a (1+ε)-approximation in time Õ(cn/ε) [Driemel et al., DCG'12]. In dimension d ≥ 5 this cannot be improved to O((cn/√ε)^{1−δ}) for any δ > 0 unless SETH fails [Bringmann, FOCS'14]. In this paper, exploiting properties that prevent stronger lower bounds, we present an improved algorithm with time complexity Õ(cn/√ε). This improves upon the algorithm by Driemel et al. for any ε ≤ 1. Moreover, our algorithm's dependence on c, n and ε is optimal in high dimensions apart from lower order factors, unless SETH fails. Our main new ingredients are as follows: for filling the classical free-space diagram we project short subcurves onto a line, which yields one-dimensional separated curves with roughly the same pairwise distances between vertices. Then we tackle this special case in near-linear time by carefully extending a greedy algorithm for the Fréchet distance of one-dimensional separated curves.
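For contrast with the c-packed results above, here is a minimal sketch of the classical quadratic-time dynamic program, shown for the discrete Fréchet distance (the continuous variant is usually computed via the free-space diagram but has the same quadratic flavour). This is not the paper's algorithm, only an illustration of the general-case O(n^2) barrier that SETH makes essentially unavoidable.

from math import dist as euclid  # Python 3.8+: Euclidean distance

def discrete_frechet(P, Q):
    """Discrete Fréchet distance of polygonal curves P and Q (point lists)."""
    n, m = len(P), len(Q)
    INF = float("inf")
    dp = [[INF] * m for _ in range(n)]   # dp[i][j]: best coupling of prefixes
    for i in range(n):
        for j in range(m):
            d = euclid(P[i], Q[j])
            if i == 0 and j == 0:
                best = 0.0
            else:
                best = min(dp[i - 1][j] if i > 0 else INF,
                           dp[i][j - 1] if j > 0 else INF,
                           dp[i - 1][j - 1] if i > 0 and j > 0 else INF)
            dp[i][j] = max(d, best)      # pay the larger of step cost / history
    return dp[n - 1][m - 1]

# Example: two parallel horizontal curves at vertical distance 1.
print(discrete_frechet([(0, 0), (1, 0), (2, 0)], [(0, 1), (1, 1), (2, 1)]))  # 1.0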


2012
Vol 23 (02)
pp. 357-374
Author(s):
PÉTER BURCSI
FERDINANDO CICALESE
GABRIELE FICI
ZSUZSANNA LIPTÁK

The Parikh vector p(s) of a string s over a finite ordered alphabet Σ = {a_1, …, a_σ} is defined as the vector of multiplicities of the characters, p(s) = (p_1, …, p_σ), where p_i = |{j | s_j = a_i}|. A Parikh vector q occurs in s if s has a substring t with p(t) = q. The problem of searching for a query q in a text s of length n can be solved simply and worst-case optimally with a sliding window approach in O(n) time. We present two novel algorithms for the case where the text is fixed and many queries arrive over time. The first algorithm only decides whether a given Parikh vector appears in a binary text. It uses a linear size data structure and decides each query in O(1) time. The preprocessing can be done trivially in Θ(n^2) time. The second algorithm finds all occurrences of a given Parikh vector in a text over an arbitrary alphabet of size σ ≥ 2 and has sub-linear expected time complexity. More precisely, we present two variants of the algorithm, both using an O(n) size data structure, each of which can be constructed in O(n) time. The first solution is very simple and easy to implement and leads to an expected query time of [Formula: see text], where m = ∑_i q_i is the length of a string with Parikh vector q. The second uses wavelet trees and improves the expected runtime to [Formula: see text], i.e., by a factor of log m.
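The O(n)-time sliding-window baseline mentioned above is easy to make concrete: slide a window of length m = ∑_i q_i over s and maintain how many characters currently mismatch the query vector. A rough sketch follows (function name and input encoding are illustrative, not from the paper).

from collections import Counter

def parikh_occurrences(s, q):
    """Return all start positions i where p(s[i:i+m]) equals q.

    q is a dict mapping characters to multiplicities; m = sum(q.values()).
    Runs in O(n + σ) time by tracking the number of mismatching characters.
    """
    m = sum(q.values())
    if m == 0 or m > len(s):
        return []
    window = Counter(s[:m])
    alphabet = set(q) | set(s)
    mismatched = sum(1 for a in alphabet if window[a] != q.get(a, 0))
    hits = [0] if mismatched == 0 else []
    for i in range(1, len(s) - m + 1):
        out_c, in_c = s[i - 1], s[i + m - 1]   # character leaving / entering
        for c, delta in ((out_c, -1), (in_c, +1)):
            before_ok = window[c] == q.get(c, 0)
            window[c] += delta
            after_ok = window[c] == q.get(c, 0)
            mismatched += (before_ok and not after_ok) - (not before_ok and after_ok)
        if mismatched == 0:
            hits.append(i)
    return hits

# Example over a binary text: occurrences of the Parikh vector (2 a's, 1 b).
print(parikh_occurrences("abaab", {"a": 2, "b": 1}))  # [0, 1, 2]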


Author(s):
Nadia Labai
Magdalena Ortiz
Mantas Šimkus

Concrete domains, especially those that allow features to be compared with numeric values, have long been recognized as a very desirable extension of description logics (DLs), and significant efforts have been invested into adding them to usual DLs while keeping the complexity of reasoning in check. For expressive DLs and in the presence of general TBoxes, for standard reasoning tasks like consistency, the most general decidability results are for the so-called ω-admissible domains, which are required to be dense. Supporting non-dense domains for features that range over the integers or natural numbers remained largely open, despite often being singled out as a highly desirable extension. The decidability of some extensions of ALC with non-dense domains has been shown, but the existing results rely on powerful machinery that does not yield any elementary bounds on the complexity of the problem. In this paper, we study an extension of ALC with a rich integer domain that allows for comparisons (between features, and between features and constants coded in unary), and prove that consistency can be solved using automata-theoretic techniques in single exponential time, and thus has no higher worst-case complexity than standard ALC. Our upper bounds apply to some extensions of DLs with concrete domains known from the literature, support general TBoxes, and allow for comparing values along paths of ordinary (not necessarily functional) roles.


2021
Vol 13 (1)
pp. 1-32
Author(s):
Peter Jonsson
Victor Lagerkvist
Biman Roy

We study the constraint satisfaction problem (CSP) parameterized by a constraint language Γ (CSP(Γ)) and how the choice of Γ affects its worst-case time complexity. Under the Exponential-Time Hypothesis (ETH), we rule out the existence of subexponential algorithms for finite-domain NP-complete CSP(Γ) problems. This extends to certain infinite-domain CSPs and structurally restricted problems. For CSPs with finite domain D where all unary relations are available, we identify a relation S_D such that the time complexity of the NP-complete problem CSP({S_D}) is a lower bound for all NP-complete CSPs of this kind. We also prove that the time complexity of CSP({S_D}) strictly decreases when |D| increases (unless the ETH is false) and provide stronger complexity results in the special case when |D| = 3.
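As context for the time bounds discussed above, a finite-domain CSP(Γ) instance can always be solved by exhaustive search over all |D|^n assignments (times a polynomial factor); the question studied in the paper is how far below this trivial bound NP-complete languages can be pushed. A minimal illustrative sketch, not taken from the paper, with an invented encoding of instances:

from itertools import product

def csp_brute_force(num_vars, domain, constraints):
    """Exhaustive search for a finite-domain CSP instance.

    constraints: list of pairs (relation, scope), where relation is a set of
    tuples over `domain` (a relation of the language Γ) and scope is a tuple
    of variable indices it is applied to. Runs in O(|D|^n * poly) time.
    """
    for assignment in product(domain, repeat=num_vars):   # |D|^n candidates
        if all(tuple(assignment[v] for v in scope) in relation
               for (relation, scope) in constraints):
            return assignment
    return None

# Example: the binary "not equal" relation over D = {0, 1, 2}, i.e. the
# language whose CSP is graph 3-colouring, applied to a triangle.
neq = {(a, b) for a in range(3) for b in range(3) if a != b}
triangle = [(neq, (0, 1)), (neq, (1, 2)), (neq, (0, 2))]
print(csp_brute_force(3, range(3), triangle))  # e.g. (0, 1, 2)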


Quantum
2020
Vol 4
pp. 329
Author(s):
Tomoyuki Morimae
Suguru Tamaki

It is known that several sub-universal quantum computing models, such as the IQP model, the Boson sampling model, the one-clean-qubit model, and the random circuit model, cannot be classically simulated in polynomial time under certain conjectures in classical complexity theory. Recently, these results have been improved to "fine-grained" versions where even exponential-time classical simulations are excluded, assuming certain classical fine-grained complexity conjectures. All these fine-grained results are, however, about the hardness of strong simulations or multiplicative-error sampling. It was open whether any fine-grained quantum supremacy result can be shown for a more realistic setup, namely additive-error sampling. In this paper, we show additive-error fine-grained quantum supremacy (under certain complexity assumptions). As examples, we consider the IQP model, a mixture of the IQP model and log-depth Boolean circuits, and Clifford+T circuits. Similar results should hold for other sub-universal models.


Author(s):
William C. Regli
Satyandra K. Gupta
Dana S. Nau

While automated recognition of features has been attempted for a wide range of applications, no single existing approach possesses the functionality required to perform manufacturability analysis. In this paper, we present a methodology for taking a CAD model of a part and extracting a set of machinable features that contains the complete set of alternative interpretations of the part as collections of MRSEVs (Material Removal Shape Element Volumes, a STEP-based library of machining features). The approach handles a variety of features, including those describing holes, pockets, slots, and chamfering and filleting operations. In addition, the approach considers accessibility constraints for these features, has a worst-case time complexity quadratic in the number of solid modeling operations, and modifies the recognized features to account for available tooling and to produce more realistic volumes for manufacturability analysis.

