On Super Strong ETH

2021 ◽  
Vol 70 ◽  
pp. 473-495
Author(s):  
Nikhil Vyas ◽  
Ryan Williams

Multiple known algorithmic paradigms (backtracking, local search and the polynomial method) only yield a $2^{n(1-1/O(k))}$ time algorithm for k-SAT in the worst case. For this reason, it has been hypothesized that the worst-case k-SAT problem cannot be solved in $2^{n(1-f(k)/k)}$ time for any unbounded function f. This hypothesis has been called the "Super-Strong ETH", modelled after the ETH and the Strong ETH. It has also been hypothesized that k-SAT is hard to solve for randomly chosen instances near the "critical threshold", where the clause-to-variable ratio is such that randomly chosen instances are satisfiable with probability 1/2. We give a randomized algorithm which refutes the Super-Strong ETH for the case of random k-SAT and planted k-SAT for any clause-to-variable ratio. For example, given any random k-SAT instance F with n variables and m clauses, our algorithm decides satisfiability for F in $2^{n(1-c\log(k)/k)}$ time with high probability (over the choice of the formula and the randomness of the algorithm). It turns out that a well-known algorithm from the literature on SAT algorithms does the job: the PPZ algorithm of Paturi, Pudlak, and Zane (1999). The Unique k-SAT problem is the special case where there is at most one satisfying assignment. Improving prior reductions, we show that the Super-Strong ETHs for Unique k-SAT and k-SAT are equivalent. More precisely, we show the time complexities of Unique k-SAT and k-SAT are very tightly correlated: if Unique k-SAT is in $2^{n(1-f(k)/k)}$ time for an unbounded f, then k-SAT is in $2^{n(1-f(k)/(2k))}$ time.
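
The PPZ algorithm mentioned above is simple to state: process the variables in a uniformly random order, and set each variable to satisfy a unit clause if one forces it, otherwise to a uniformly random value; repeat many independent trials. A minimal, unoptimized sketch (assuming a DIMACS-style encoding of clauses as lists of signed integers; the trial count needed for the stated guarantees is on the order of $2^{n(1-1/k)}$ in the worst case):

```python
import random

def ppz_iteration(clauses, n):
    """One PPZ trial: process variables in random order, forcing any
    variable that appears in a unit clause, guessing the rest uniformly.
    Literal v means x_v = True, -v means x_v = False."""
    clauses = [list(c) for c in clauses]
    assignment = {}
    for v in random.sample(range(1, n + 1), n):
        forced = None
        for c in clauses:
            if len(c) == 1 and abs(c[0]) == v:   # unit clause fixes v
                forced = c[0] > 0
                break
        value = forced if forced is not None else random.random() < 0.5
        assignment[v] = value
        lit = v if value else -v
        # simplify: drop satisfied clauses, shrink falsified literals
        new_clauses = []
        for c in clauses:
            if lit in c:
                continue
            c = [l for l in c if l != -lit]
            if not c:
                return None  # empty clause: this trial failed
            new_clauses.append(c)
        clauses = new_clauses
    return assignment

def ppz_solve(clauses, n, trials):
    """One-sided error: a returned assignment is always correct;
    None after all trials means 'probably unsatisfiable'."""
    for _ in range(trials):
        a = ppz_iteration(clauses, n)
        if a is not None:
            return a
    return None
```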

2020 ◽  
Vol 34 (09) ◽  
pp. 13700-13703
Author(s):  
Nikhil Vyas ◽  
Ryan Williams

All known SAT-solving paradigms (backtracking, local search, and the polynomial method) only yield a $2^{n(1-1/O(k))}$ time algorithm for solving k-SAT in the worst case, where the big-O constant is independent of k. For this reason, it has been hypothesized that k-SAT cannot be solved in worst-case $2^{n(1-f(k)/k)}$ time, for any unbounded $f : \mathbb{N} \rightarrow \mathbb{N}$. This hypothesis has been called the “Super-Strong Exponential Time Hypothesis” (Super Strong ETH), modeled after the ETH and the Strong ETH. We prove two results concerning the Super-Strong ETH:

1. It has also been hypothesized that k-SAT is hard to solve for randomly chosen instances near the “critical threshold”, where the clause-to-variable ratio is $2^k \ln 2 - \Theta(1)$. We give a randomized algorithm which refutes the Super-Strong ETH for the case of random k-SAT and planted k-SAT for any clause-to-variable ratio. In particular, given any random k-SAT instance F with n variables and m clauses, our algorithm decides satisfiability for F in $2^{n(1-\Omega(\log k)/k)}$ time, with high probability (over the choice of the formula and the randomness of the algorithm). It turns out that a well-known algorithm from the literature on SAT algorithms does the job: the PPZ algorithm of Paturi, Pudlak, and Zane (1998).

2. The Unique k-SAT problem is the special case where there is at most one satisfying assignment. It is natural to hypothesize that the worst-case (exponential-time) complexity of Unique k-SAT is substantially less than that of k-SAT. Improving prior reductions, we show the time complexities of Unique k-SAT and k-SAT are very tightly related: if Unique k-SAT is in $2^{n(1-f(k)/k)}$ time for an unbounded f, then k-SAT is in $2^{n(1-f(k)(1-\varepsilon)/k)}$ time for every $\varepsilon > 0$. Thus, refuting Super Strong ETH in the unique solution case would refute Super Strong ETH in general.
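
Where the critical density $2^k \ln 2 - \Theta(1)$ comes from can be seen with a standard first-moment calculation (a textbook heuristic, not a claim from the abstract):

```latex
% A fixed assignment satisfies a random k-clause with probability
% 1 - 2^{-k}, so the expected number of satisfying assignments of a
% random formula with n variables and m clauses is
%   E[#solutions] = 2^n (1 - 2^{-k})^m.
% Setting this to 1 and solving for the density m/n gives
\[
  \frac{m}{n} = \frac{\ln 2}{-\ln\!\left(1 - 2^{-k}\right)}
              \approx 2^k \ln 2 ,
\]
% so above roughly 2^k ln 2 clauses per variable random formulas are
% unsatisfiable w.h.p.; the true threshold sits at 2^k ln 2 - Theta(1),
% the ratio quoted in the abstract.
```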


2011 ◽  
Vol 21 (01) ◽  
pp. 87-100
Author(s):  
GREG ALOUPIS ◽  
PROSENJIT BOSE ◽  
ERIK D. DEMAINE ◽  
STEFAN LANGERMAN ◽  
HENK MEIJER ◽  
...  

Given a planar polygon (or chain) with a list of edges $\{e_1, e_2, e_3, \ldots, e_{n-1}, e_n\}$, we examine the effect of several operations that permute this edge list, resulting in the formation of a new polygon. The main operations that we consider are: reversals, which invert the order of a sublist; transpositions, which interchange subchains (sublists); and edge-swaps, a special case that interchanges two consecutive edges. When each edge of the given polygon has also been assigned a direction, we say that the polygon is signed. In this case any edge involved in a reversal changes direction. We show that a star-shaped polygon can be convexified using $O(n^2)$ edge-swaps, while maintaining simplicity, and that this is tight in the worst case. We show that determining whether a signed polygon P can be transformed to one that has rotational or mirror symmetry with P, using transpositions, takes $\Theta(n \log n)$ time. We prove that the problem of deciding whether transpositions can modify a polygon to fit inside a rectangle is weakly NP-complete. Finally we give an $O(n \log n)$ time algorithm to compute the maximum endpoint distance for an oriented chain.
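
A quick illustration of why permuting the edge list always yields a (possibly self-intersecting) closed polygon: the edges of a closed polygon, viewed as vectors, sum to zero, and that sum is invariant under any permutation. A small sketch with hypothetical helper names, not taken from the paper:

```python
import random

def edges_from_polygon(vertices):
    """Edge vectors of a closed polygon given its vertices in order."""
    n = len(vertices)
    return [(vertices[(i + 1) % n][0] - vertices[i][0],
             vertices[(i + 1) % n][1] - vertices[i][1]) for i in range(n)]

def polygon_from_edges(edges, start=(0.0, 0.0)):
    """Chain the edge vectors from a start point; closure needs sum == 0."""
    pts, (x, y) = [start], start
    for dx, dy in edges[:-1]:       # the last edge returns to the start
        x, y = x + dx, y + dy
        pts.append((x, y))
    return pts

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
edges = edges_from_polygon(square)
random.shuffle(edges)               # any reversal/transposition of the list
assert sum(dx for dx, _ in edges) == 0 and sum(dy for _, dy in edges) == 0
print(polygon_from_edges(edges))    # still closed, though maybe non-simple
```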


2007 ◽  
Vol DMTCS Proceedings vol. AH,... (Proceedings) ◽  
Author(s):  
Amin Coja-Oghlan ◽  
Michael Krivelevich ◽  
Dan Vilenchik

Finding a satisfying assignment for a $k$-CNF formula $(k \geq 3)$, assuming such exists, is a notoriously hard problem. In this work we consider the uniform distribution over satisfiable $k$-CNF formulas with a linear number of clauses (clause-variable ratio greater than some constant). We rigorously analyze the structure of the space of satisfying assignments of a random formula in that distribution, showing that basically all satisfying assignments are clustered in one cluster, and agree on all but a small, though linear, number of variables. This observation enables us to describe a polynomial time algorithm that finds $\textit{whp}$ a satisfying assignment for such formulas, thus asserting that most satisfiable $k$-CNF formulas are easy (whenever the clause-variable ratio is greater than some constant). This should be contrasted with the setting of very sparse $k$-CNF formulas (which are satisfiable $\textit{whp}$), where experimental results show some regime of clause density to be difficult for many SAT heuristics. One explanation for this phenomenon, backed up by partially non-rigorous analytical tools from statistical physics, is the complicated clustering of the solution space at that regime, unlike the more "regular" structure that denser formulas possess. Thus in some sense, our result rigorously supports this explanation.


2021 ◽  
Vol 1 (1) ◽  
pp. 59-77
Author(s):  
Russell Lee ◽  
Jessica Maghakian ◽  
Mohammad Hajiesmaili ◽  
Jian Li ◽  
Ramesh Sitaraman ◽  
...  

This paper studies the online energy scheduling problem in a hybrid model where the cost of energy is proportional to both the volume and the peak usage, and where energy can be either locally generated or drawn from the grid. Inspired by recent advances in online algorithms with Machine Learned (ML) advice, we develop parameterized deterministic and randomized algorithms for this problem such that the level of reliance on the advice can be adjusted by a trust parameter. We then analyze the performance of the proposed algorithms using two metrics: robustness, the competitive ratio as a function of the trust parameter when the advice is inaccurate, and consistency, the competitive ratio when the advice is accurate. Since the competitive ratio is analyzed in two different regimes, we further investigate the Pareto optimality of the proposed algorithms. Our results show that the proposed deterministic algorithm is Pareto-optimal, in the sense that no other online deterministic algorithm can dominate the robustness and consistency of our algorithm. Furthermore, we show that the proposed randomized algorithm dominates the Pareto-optimal deterministic algorithm. Our large-scale empirical evaluations using real traces of energy demand, energy prices, and renewable energy generation highlight that the proposed algorithms outperform worst-case optimized algorithms and fully data-driven algorithms.
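
How robustness and consistency are measured can be shown on a self-contained toy (this is not the paper's model or algorithm): a peak-aware threshold policy in which a trust parameter scales an advised threshold, evaluated once with perfect advice and once with adversarially wrong advice. All names and prices below are illustrative assumptions.

```python
import random

C_V, C_P, C_G = 1.0, 10.0, 3.0   # grid volume, grid peak, local-generation unit prices

def cost(demands, theta):
    """Threshold policy: serve up to theta from the grid, the rest locally."""
    grid = [min(d, theta) for d in demands]
    gen = [max(d - theta, 0.0) for d in demands]
    return C_V * sum(grid) + C_P * max(grid) + C_G * sum(gen)

def opt_threshold(demands):
    """Offline optimum: for this piecewise-linear toy cost, the best
    threshold is attained at 0 or at one of the demand values."""
    return min([0.0] + list(demands), key=lambda t: cost(demands, t))

def advised_cost(demands, advice, trust):
    """trust in [0,1]: 0 = ignore advice (all-local, peak-proof),
    1 = follow the advised threshold fully."""
    return cost(demands, trust * advice)

demands = [random.uniform(0, 10) for _ in range(48)]
opt = cost(demands, opt_threshold(demands))
accurate = opt_threshold(demands)          # perfect advice
bad = 10 * max(demands)                    # badly wrong advice
for trust in (0.0, 0.5, 1.0):
    consistency = advised_cost(demands, accurate, trust) / opt
    robustness = advised_cost(demands, bad, trust) / opt
    print(f"trust={trust}: consistency={consistency:.2f} robustness={robustness:.2f}")
```

At trust 0 the two ratios coincide (the advice is ignored, so the algorithm is robust but not consistent); at trust 1 consistency reaches 1.0 while bad advice can be costly, which is exactly the tradeoff the trust parameter navigates.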


2020 ◽  
Vol 92 (1) ◽  
pp. 107-132 ◽  
Author(s):  
Britta Schulze ◽  
Michael Stiglmayr ◽  
Luís Paquete ◽  
Carlos M. Fonseca ◽  
David Willems ◽  
...  

Abstract In this article, we introduce the rectangular knapsack problem as a special case of the quadratic knapsack problem, consisting in the maximization of the product of two separate knapsack profits subject to a cardinality constraint. We propose a polynomial time algorithm for this problem that provides a constant approximation ratio of 4.5. Our experimental results on a large number of artificially generated problem instances show that the average ratio is far from the theoretical guarantee. In addition, we suggest refined versions of this approximation algorithm with the same time complexity and approximation ratio that lead to even better experimental results.
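
To pin down the objective being approximated, here is a brute-force reference solver for small instances (illustrative only; the paper's polynomial-time 4.5-approximation is not reproduced here):

```python
from itertools import combinations

def rectangular_knapsack_bruteforce(a, b, cardinality):
    """Maximize (sum of a over S) * (sum of b over S) over item sets S
    with |S| <= cardinality. Exponential-time, for tiny n only."""
    n = len(a)
    best, best_set = 0, ()
    for r in range(1, cardinality + 1):
        for S in combinations(range(n), r):
            value = sum(a[i] for i in S) * sum(b[i] for i in S)
            if value > best:
                best, best_set = value, S
    return best, best_set

a = [4, 1, 3, 2]
b = [1, 5, 2, 2]
print(rectangular_knapsack_bruteforce(a, b, cardinality=2))  # (30, (0, 1))
```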


2013 ◽  
Vol 24 (07) ◽  
pp. 1067-1082 ◽  
Author(s):  
YO-SUB HAN ◽  
SANG-KI KO ◽  
KAI SALOMAA

The edit-distance between two strings is the smallest number of operations required to transform one string into the other. The distance between languages $L_1$ and $L_2$ is the smallest edit-distance between strings $w_i \in L_i$, $i = 1, 2$. We consider the problem of computing the edit-distance of a given regular language and a given context-free language. First, we present an algorithm that finds for the languages an optimal alignment, that is, a sequence of edit operations that transforms a string in one language to a string in the other. The length of the optimal alignment, in the worst case, is exponential in the size of the given grammar and finite automaton. Then, we investigate the problem of computing only the edit-distance of the languages without explicitly producing an optimal alignment. We design a polynomial time algorithm that calculates the edit-distance based on unary homomorphisms.
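
For reference, the string-level edit-distance that the language-level distance generalizes is the classic dynamic program (standard textbook code, not the paper's automaton/grammar construction):

```python
def edit_distance(s, t):
    """Levenshtein distance: minimum insertions, deletions, and
    substitutions turning s into t, via the O(|s|*|t|) dynamic program."""
    m, n = len(s), len(t)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i                      # delete all of s[:i]
    for j in range(n + 1):
        dp[0][j] = j                      # insert all of t[:j]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            sub = 0 if s[i - 1] == t[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # delete s[i-1]
                           dp[i][j - 1] + 1,        # insert t[j-1]
                           dp[i - 1][j - 1] + sub)  # match or substitute
    return dp[m][n]

assert edit_distance("kitten", "sitting") == 3
```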


Algorithmica ◽  
2020 ◽  
Vol 82 (11) ◽  
pp. 3306-3337
Author(s):  
Matti Karppa ◽  
Petteri Kaski ◽  
Jukka Kohonen ◽  
Padraig Ó Catháin

Abstract We derandomize Valiant’s (J ACM 62, Article 13, 2015) subquadratic-time algorithm for finding outlier correlations in binary data. This demonstrates that it is possible to perform a deterministic subquadratic-time similarity join of high dimensionality. Our derandomized algorithm gives deterministic subquadratic scaling essentially for the same parameter range as Valiant’s randomized algorithm, but the precise constants we save over quadratic scaling are more modest. Our main technical tool for derandomization is an explicit family of correlation amplifiers built via a family of zigzag-product expanders by Reingold et al. (Ann Math 155(1):157–187, 2002). We say that a function $f:\{-1,1\}^d\rightarrow \{-1,1\}^D$ is a correlation amplifier with threshold $0\le \tau \le 1$, error $\gamma \ge 1$, and strength $p$, an even positive integer, if for all pairs of vectors $x,y\in \{-1,1\}^d$ it holds that (i) $|\langle x,y\rangle |<\tau d$ implies $|\langle f(x),f(y)\rangle |\le (\tau \gamma )^p D$; and (ii) $|\langle x,y\rangle |\ge \tau d$ implies $\left(\frac{\langle x,y\rangle }{\gamma d}\right)^p D \le \langle f(x),f(y)\rangle \le \left(\frac{\gamma \langle x,y\rangle }{d}\right)^p D$.
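
A degenerate but instructive instance of this definition (not the paper's expander-based construction): the $p$-fold tensor power $f(x) = x^{\otimes p}$ satisfies $\langle f(x),f(y)\rangle = \langle x,y\rangle^p$ exactly, so it is a correlation amplifier with error $\gamma = 1$, at the cost of an exponentially large output dimension $D = d^p$; the point of the explicit expander family is to get comparable amplification with much smaller $D$. A quick numerical check:

```python
import numpy as np

def tensor_power(x, p):
    """p-fold tensor (Kronecker) power of a +/-1 vector, flattened to
    length d**p. Satisfies <f(x), f(y)> = <x, y>**p, i.e. gamma = 1."""
    f = x
    for _ in range(p - 1):
        f = np.outer(f, x).ravel()
    return f

rng = np.random.default_rng(0)
d, p = 16, 4                      # p even, as the definition requires
x = rng.choice([-1, 1], size=d)
y = rng.choice([-1, 1], size=d)
fx, fy = tensor_power(x, p), tensor_power(y, p)
assert fx @ fy == (x @ y) ** p    # exact amplification, but D = d**p
```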


2009 ◽  
Vol 18 (5) ◽  
pp. 775-801 ◽  
Author(s):  
MICHAEL KRIVELEVICH ◽  
BENNY SUDAKOV ◽  
DAN VILENCHIK

In this work we suggest a new model for generating random satisfiable k-CNF formulas. To generate such formulas, randomly permute all $2^k\binom{n}{k}$ possible clauses over the variables $x_1,\ldots,x_n$, and starting from the empty formula, go over the clauses one by one, including each new clause as you go along if, after its addition, the formula remains satisfiable. We study the evolution of this process, namely the distribution over formulas obtained after scanning through the first m clauses (in the random permutation's order).

Random processes with conditioning on a certain property being respected are widely studied in the context of graph properties. This study was pioneered by Ruciński and Wormald in 1992 for graphs with a fixed degree sequence, and also by Erdős, Suen and Winkler in 1995 for triangle-free and bipartite graphs. Since then many other graph properties have been studied, such as planarity and H-freeness. Thus our model is a natural extension of this approach to the satisfiability setting.

Our main contribution is as follows. For m ≥ cn, c = c(k) a sufficiently large constant, we are able to characterize the structure of the solution space of a typical formula in this distribution. Specifically, we show that typically all satisfying assignments are essentially clustered in one cluster, and all but $e^{-\Omega(m/n)}n$ of the variables take the same value in all satisfying assignments. We also describe a polynomial-time algorithm that finds w.h.p. a satisfying assignment for such formulas.
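
The generative process itself is easy to state in code. A direct rendering with a brute-force satisfiability check (exponential time, so tiny n only; for illustration, not for experiments at scale):

```python
import random
from itertools import combinations, product

def satisfiable(clauses, n):
    """Brute force: does any of the 2**n assignments satisfy all clauses?"""
    return any(all(any(a[abs(l) - 1] == (l > 0) for l in c) for c in clauses)
               for a in product([False, True], repeat=n))

def filtered_formula(n, k, m, seed=0):
    """Scan a random permutation of all 2^k * C(n,k) clauses, keeping a
    clause iff the formula stays satisfiable; stop after the first m."""
    rng = random.Random(seed)
    all_clauses = [tuple(v if s else -v for v, s in zip(vs, signs))
                   for vs in combinations(range(1, n + 1), k)
                   for signs in product([True, False], repeat=k)]
    rng.shuffle(all_clauses)
    formula = []
    for clause in all_clauses[:m]:
        if satisfiable(formula + [clause], n):
            formula.append(clause)
    return formula

print(len(filtered_formula(n=8, k=3, m=200)))
```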


2017 ◽  
Vol 59 ◽  
pp. 59-101 ◽  
Author(s):  
Tim Roughgarden ◽  
Vasilis Syrgkanis ◽  
Eva Tardos

This survey outlines a general and modular theory for proving approximation guarantees for equilibria of auctions in complex settings. This theory complements traditional economic techniques, which generally focus on exact and optimal solutions and are accordingly limited to relatively stylized settings. We highlight three user-friendly analytical tools: smoothness-type inequalities, which immediately yield approximation guarantees for many auction formats of interest in the special case of complete information and deterministic strategies; extension theorems, which extend such guarantees to randomized strategies, no-regret learning outcomes, and incomplete-information settings; and composition theorems, which extend such guarantees from simpler to more complex auctions. Combining these tools yields tight worst-case approximation guarantees for the equilibria of many widely-used auction formats.
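
For orientation, one standard formulation of the smoothness inequalities at the heart of this framework, stated here from the broader literature rather than quoted from the survey (see the survey for the precise variants and attributions):

```latex
% A mechanism is (\lambda, \mu)-smooth if for every valuation profile v
% there are deviations s_i^*(v) such that, for every action profile s,
\[
  \sum_i u_i\big(s_i^*(v), s_{-i}; v_i\big)
    \;\ge\; \lambda \cdot \mathrm{OPT}(v) \;-\; \mu \sum_i p_i(s),
\]
% where p_i(s) are the payments. The extension theorems then yield, at
% every (Bayes-)Nash equilibrium or no-regret learning outcome,
\[
  \mathrm{Welfare} \;\ge\; \frac{\lambda}{\max(1,\mu)} \cdot \mathrm{OPT}(v);
\]
% e.g. the first-price single-item auction is (1 - 1/e, 1)-smooth,
% giving the familiar 1 - 1/e welfare guarantee.
```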


Author(s):  
Cheng He ◽  
Hao Lin ◽  
Li Li

This paper studies a hierarchical optimization problem of scheduling $n$ jobs on a serial-batching machine, in which both objective functions are maximum costs. By a hierarchical optimization problem, we mean the problem of optimizing the secondary criterion under the constraint that the primary criterion is optimized. A serial-batching machine is a machine that can handle up to $b$ jobs in a batch; all jobs in a batch start at the same time and complete at the same time, and the processing time of a batch is equal to the sum of the processing times of the jobs in the batch. When a new batch starts, a constant setup time $s$ occurs. We confine ourselves to the bounded model, where $b<n$. We present an $O(n^4)$-time algorithm for this hierarchical optimization problem. For the special case where both objective functions are maximum lateness, we give an $O(n^3\log n)$-time algorithm.
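
To make the machine model concrete, a small sketch of how batch and job completion times compose under serial batching (model illustration only, not the paper's $O(n^4)$ algorithm):

```python
def serial_batch_completions(batches, setup):
    """Completion times on a serial-batching machine: each batch incurs a
    setup, runs for the sum of its jobs' processing times, and all jobs
    in the batch complete together when the batch does."""
    t, completions = 0.0, []
    for batch in batches:            # batch = list of job processing times
        t += setup + sum(batch)      # setup s, then serial processing
        completions.extend([t] * len(batch))
    return completions

# three batches respecting b = 3, with setup s = 1
print(serial_batch_completions([[2, 3], [1, 1, 4], [5]], setup=1))
# -> [6.0, 6.0, 13.0, 13.0, 13.0, 19.0]
```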

