Lower Bounds for Dynamic Algebraic Problems

1998 ◽  
Vol 5 (11) ◽  
Author(s):  
Gudmund Skovbjerg Frandsen ◽  
Johan P. Hansen ◽  
Peter Bro Miltersen

We consider dynamic evaluation of algebraic functions (matrix multiplication, determinant, convolution, Fourier transform, etc.) in the model of Reif and Tate; i.e., if f(x1, . . . , xn) = (y1, . . . , ym) is an algebraic problem, we consider serving on-line requests of the form "change input xi to value v" or "what is the value of output yi?". We present techniques for showing lower bounds on the worst-case time complexity per operation for such problems. The first gives lower bounds in a wide range of rather powerful models (for instance, history-dependent algebraic computation trees over any infinite subset of a field, the integer RAM, and the generalized real RAM model of Ben-Amram and Galil). Using this technique, we show optimal Omega(n) bounds for dynamic matrix-vector product, dynamic matrix multiplication and dynamic discriminant, and an Omega(sqrt(n)) lower bound for dynamic polynomial multiplication (convolution), providing a good match with Reif and Tate's O(sqrt(n log n)) upper bound. We also show linear lower bounds for dynamic determinant, matrix adjoint and matrix inverse, and an Omega(sqrt(n)) lower bound for the elementary symmetric functions. The second technique is the communication complexity technique of Miltersen, Nisan, Safra, and Wigderson, which we apply to the setting of dynamic algebraic problems, obtaining similar lower bounds in the word RAM model. The third technique gives lower bounds in the weaker straight-line program model. Using this technique, we show an Omega((log n)^2 / log log n) lower bound for dynamic discrete Fourier transform. Technical ingredients of our techniques are the incompressibility technique of Ben-Amram and Galil and the lower bound for depth-two superconcentrators of Radhakrishnan and Ta-Shma. The incompressibility technique is extended to arithmetic computation in arbitrary fields.
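
A minimal, illustrative sketch (ours, not the paper's) of the Reif-Tate on-line request model for one of the problems above, dynamic matrix-vector product y = Ax: updates change a single input coordinate, and queries ask for a single output coordinate. Naively recomputing one output on demand costs O(n) per query, and the Omega(n) lower bound stated above says that, in the worst case, no data structure can do asymptotically better per operation.

    # Hypothetical Python sketch of the on-line request model; class and method
    # names are ours, chosen only for illustration.
    class DynamicMatrixVectorProduct:
        def __init__(self, A):
            self.A = A                    # fixed n x n matrix
            self.x = [0] * len(A)         # current input vector

        def change_input(self, i, v):     # request: "change input x_i to value v"
            self.x[i] = v                 # O(1)

        def query_output(self, i):        # request: "what is the value of y_i?"
            # Naive recomputation of one coordinate: O(n) per query, matching
            # the Omega(n) per-operation lower bound shown in the paper.
            return sum(a * xv for a, xv in zip(self.A[i], self.x))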

1996 ◽  
Vol 3 (9) ◽  
Author(s):  
Thore Husfeldt ◽  
Theis Rauhe ◽  
Søren Skyum

We give a number of new lower bounds in the cell probe model with logarithmic cell size, which entails the same bounds on the random access computer with logarithmic word size and unit cost operations. We study the signed prefix sum problem: given a string of length n of zeroes and signed ones, compute the sum of its ith prefix during updates. We show a lower bound of Omega(log n / log log n) time per operation, even if the prefix sums are bounded by log n / log log n during all updates. We also show that if the update time is bounded by the product of the worst-case update time and the answer to the query, then the update time must be Omega(sqrt(log n / log log n)). These results allow us to prove lower bounds for a variety of seemingly unrelated dynamic problems. We give a lower bound for dynamic planar point location in monotone subdivisions of Omega(log n / log log n) per operation. We give a lower bound for the dynamic transitive closure problem on upward planar graphs with one source and one sink of Omega(log n / (log log n)^2) per operation. We give a lower bound of Omega(sqrt(log n / log log n)) for the dynamic membership problem of any Dyck language with two or more letters. This implies the same lower bound for the dynamic word problem for the free group with k generators. We also give lower bounds for the dynamic prefix majority and prefix equality problems.
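
For contrast (an illustrative upper bound of ours, not from the paper), the signed prefix sum problem can be maintained in O(log n) time per operation with a standard Fenwick (binary indexed) tree on a word RAM, so the Omega(log n / log log n) bound above leaves only a small gap.

    # Standard Fenwick tree over positions 1..n; update adds a signed one at a
    # position, prefix_sum returns the sum of the first i entries.
    class Fenwick:
        def __init__(self, n):
            self.tree = [0] * (n + 1)

        def update(self, i, delta):       # delta is +1 or -1; O(log n)
            while i < len(self.tree):
                self.tree[i] += delta
                i += i & (-i)

        def prefix_sum(self, i):          # sum of positions 1..i; O(log n)
            total = 0
            while i > 0:
                total += self.tree[i]
                i -= i & (-i)
            return total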


1997 ◽  
Vol 62 (3) ◽  
pp. 708-728 ◽  
Author(s):  
Maria Bonet ◽  
Toniann Pitassi ◽  
Ran Raz

We consider small-weight Cutting Planes (CP*) proofs; that is, Cutting Planes (CP) proofs with coefficients up to Poly(n). We use the well known lower bounds for monotone complexity to prove an exponential lower bound for the length of CP* proofs, for a family of tautologies based on the clique function. Because Resolution is a special case of small-weight CP, our method also gives a new and simpler exponential lower bound for Resolution. We also prove the following two theorems: (1) Tree-like CP* proofs cannot polynomially simulate non-tree-like CP* proofs. (2) Tree-like CP* proofs and Bounded-depth-Frege proofs cannot polynomially simulate each other. Our proofs also work for some generalizations of the CP* proof system. In particular, they work for CP* with a deduction rule, and also for any proof system that allows any formula with small communication complexity, and any set of sound rules of inference.
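
For readers unfamiliar with the system, Cutting Planes manipulates integer linear inequalities over 0-1 variables, and its characteristic inference is division with rounding. A standard textbook example (not taken from the paper): from 2x + 2y >= 1 one derives x + y >= 1, since dividing by 2 gives x + y >= 1/2 and the left-hand side is an integer, so the right-hand side can be rounded up.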


2021 ◽  
Vol 22 (4) ◽  
pp. 1-30
Author(s):  
Sam Buss ◽  
Dmitry Itsykson ◽  
Alexander Knop ◽  
Artur Riazanov ◽  
Dmitry Sokolov

This article is motivated by seeking lower bounds on OBDD(∧, w, r) refutations, namely, OBDD refutations that allow weakening and arbitrary reorderings. We first work with 1-NBP(∧) refutations based on read-once nondeterministic branching programs. These generalize OBDD(∧, r) refutations. There are polynomial-size 1-NBP(∧) refutations of the pigeonhole principle, hence 1-NBP(∧) is strictly stronger than OBDD(∧, r). There are also formulas that have polynomial-size tree-like resolution refutations but require exponential-size 1-NBP(∧) refutations. As a corollary, OBDD(∧, r) does not simulate tree-like resolution, answering a previously open question. The system 1-NBP(∧, ∃) uses projection inferences instead of weakening. 1-NBP(∧, ∃_k) is the system restricted to projection on at most k distinct variables. We construct explicit constant-degree graphs G_n on n vertices and an ε > 0, such that 1-NBP(∧, ∃_{εn}) refutations of the Tseitin formula for G_n require exponential size. Second, we study the proof system OBDD(∧, w, r_ℓ), which allows ℓ different variable orders in a refutation. We prove an exponential lower bound on the complexity of tree-like OBDD(∧, w, r_ℓ) refutations for ℓ = ε log n, where n is the number of variables and ε > 0 is a constant. The lower bound is based on multiparty communication complexity.


2001 ◽  
Vol 11 (04) ◽  
pp. 401-421 ◽  
Author(s):  
ALEJANDRO LÓPEZ-ORTIZ ◽  
SVEN SCHUIERER

We present lower bounds for on-line searching problems in two special classes of simple polygons called streets and generalized streets. In streets we assume that the location of the target is known to the robot in advance and prove a lower bound of [Formula: see text] on the competitive ratio of any deterministic search strategy—which can be shown to be tight. For generalized streets we show that if the location of the target is not known, then there is a class of orthogonal generalized streets for which the competitive ratio of any search strategy is at least [Formula: see text] in the L2-metric—again matching the competitive ratio of the best known algorithm. We also show that if the location of the target is known, then the competitive ratio for searching in generalized streets in the L1-metric is at least 9, which is tight as well. The former result is based on a lower bound on the average competitive ratio of searching on the real line when an upper bound D on the distance to the target is given. We show that in this case the average competitive ratio is at least 9 - O(1/log D).
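
The real-line result refers to the classical setting of searching for a target at unknown distance (at least 1) on a line. As an illustration of the matching upper bound (our sketch, not the paper's), the standard doubling strategy, which turns around at distances 1, 2, 4, ..., walks at most 9 times the distance to the target in the worst case.

    # Simulate the doubling strategy; `target` is the signed position of the
    # goal (unknown to the searcher, used here only to drive the simulation).
    def doubling_search(target):
        assert abs(target) >= 1           # standard assumption for ratio 9
        walked, step, direction = 0.0, 1.0, 1
        while True:
            if direction * target > 0 and abs(target) <= step:
                return walked + abs(target)   # total distance walked
            walked += 2 * step            # out to the turning point and back
            step *= 2
            direction = -direction

    # e.g. doubling_search(5.0) / 5.0 == 7.0, and the ratio never exceeds 9.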


Author(s):  
Elvira Albert ◽  
Samir Genaim ◽  
Enrique Martin-Martin ◽  
Alicia Merayo ◽  
Albert Rubio

This paper presents a new framework to synthesize lower bounds on the worst-case cost of non-deterministic integer loops. As in previous approaches, the analysis searches for a metering function that under-approximates the number of loop iterations. The key novelty of our framework is the specialization of loops, which is achieved by restricting their enabled transitions to a subset of the inputs combined with the narrowing of their transition scopes. Specialization allows us to find metering functions for complex loops that could not be handled before, or to be more precise than previous approaches. Technically, it is performed (1) by using quasi-invariants while searching for the metering function, (2) by strengthening the loop guards, and (3) by narrowing the space of non-deterministic choices. We also propose a Max-SMT encoding that takes advantage of the use of soft constraints to force the solver to look for more accurate solutions. We show our accuracy gains on benchmarks extracted from the 2020 Termination and Complexity Competition by comparing our results to those obtained by the "Image missing" system.
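
A toy illustration of a metering function (ours, not one of the paper's benchmarks): for the loop below, f(x, y) = x - y under-approximates (in fact exactly equals, when x > y) the number of iterations, and therefore certifies a linear lower bound on the loop's worst-case cost.

    # The loop decrements x until it reaches y; counting iterations shows that
    # the metering function x - y is exact on inputs with x > y.
    def loop(x, y):
        iterations = 0
        while x > y:
            x -= 1
            iterations += 1
        return iterations

    assert loop(10, 3) == 10 - 3          # metering function value = iteration count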


1992 ◽  
Vol 21 (396) ◽  
Author(s):  
Peter Bro Miltersen

The bit probe complexity of a static data structure problem within a given size bound was defined by Elias and Flower. It is the number of bits one needs to probe in the data structure for worst-case data and query, with an optimal encoding of the data within the space bound. We make some further investigations into the properties of the bit probe complexity measure. We determine the complexity of the full problem, which is the problem where every possible query is allowed, within an additive constant. We show a trade-off between structure size and the number of bit probes for all problems. We show that the complexity of almost every problem, even with small query sets, equals that of the full problem. We show how communication complexity can be used to give small, but occasionally tight, lower bounds for natural functions. We define the class of access feasible static structure problems and conjecture that not every polynomial time computable problem is access feasible. We show a link to dynamic problems by showing that if polynomial time computable functions without feasible static structures exist, then there are problems in P which cannot be reevaluated efficiently on-line.


2018 ◽  
Vol 19 (3) ◽  
pp. 275-292
Author(s):  
Daniel Langr ◽  
Ivan Šimeček

The presented study analyses memory footprints of 563 representative benchmark sparse matrices with respect to their partitioning into uniformly-sized blocks. Different block sizes and different ways of storing blocks in memory are considered and statistically evaluated. Memory footprints of partitioned matrices are then compared with their lower bounds and CSR, index-compressed CSR, and EBF storage formats. The results show that blocking-based storage formats may significantly reduce memory footprints of sparse matrices arising from a wide range of application domains. Additionally, measured consistency of results is presented and discussed, benefits of individual formats for storing blocks are evaluated, and an analysis of best-case and worst-case matrices is provided for in-depth understanding of causes of memory savings of blocking-based formats.
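
For reference, a minimal sketch (under our own assumptions about index and value widths, not necessarily those of the study) of the usual memory-footprint accounting for the CSR format: one value and one column index per stored nonzero, plus one row pointer per row and one extra pointer.

    # Approximate CSR memory footprint in bytes for a matrix with `num_rows`
    # rows and `num_nonzeros` stored nonzeros.
    def csr_footprint_bytes(num_rows, num_nonzeros,
                            value_bytes=8,    # assume double-precision values
                            index_bytes=4):   # assume 32-bit indices/pointers
        values = num_nonzeros * value_bytes
        column_indices = num_nonzeros * index_bytes
        row_pointers = (num_rows + 1) * index_bytes
        return values + column_indices + row_pointers

    # Example: a matrix with 10^6 rows and 10^7 nonzeros needs about 124 MB.
    print(csr_footprint_bytes(10**6, 10**7))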


2008 ◽  
Vol 8 (1&2) ◽  
pp. 82-95
Author(s):  
D. Gavinsky

Despite the apparent similarity between shared randomness and shared entanglement in the context of communication complexity, our understanding of the latter is not as good as of the former. In particular, there is no known "entanglement analogue" of the famous theorem by Newman, saying that the number of shared random bits required for solving any communication problem can be at most logarithmic in the input length (i.e., using more than O(log n) shared random bits would not reduce the complexity of an optimal solution). In this paper we prove that the same is not true for entanglement. We establish a wide range of tight (up to a polylogarithmic factor) entanglement vs. communication trade-offs for relational problems. The low end is: for any t > 2, reducing shared entanglement from log^t n to o(log^{t-2} n) qubits can increase the communication required for solving a problem almost exponentially, from O(log^t n) to Omega(sqrt(n)). The high end is: for any ε > 0, reducing shared entanglement from n^{1-ε} log n to o(n^{1-ε}/log n) qubits can increase the required communication from O(n^{1-ε} log n) to Omega(n^{1-ε/2}/log n). The upper bounds are demonstrated via protocols which are exact and work in the simultaneous message passing model, while the lower bounds hold for bounded-error protocols, even in the more powerful model of one-way communication. Our protocols use shared EPR pairs, while the lower bounds apply to any sort of prior entanglement. We base the lower bounds on a strong direct product theorem for communication complexity of a certain class of relational problems. We believe that the theorem might have applications outside the scope of this work.


2021 ◽  
Vol 8 (2) ◽  
pp. 1-28
Author(s):  
Gopal Pandurangan ◽  
Peter Robinson ◽  
Michele Scquizzato

Motivated by the increasing need to understand the distributed algorithmic foundations of large-scale graph computations, we study some fundamental graph problems in a message-passing model for distributed computing where k ≥ 2 machines jointly perform computations on graphs with n nodes (typically, n >> k). The input graph is assumed to be initially randomly partitioned among the k machines, a common implementation in many real-world systems. Communication is point-to-point, and the goal is to minimize the number of communication rounds of the computation. Our main contribution is the General Lower Bound Theorem, a theorem that can be used to show non-trivial lower bounds on the round complexity of distributed large-scale data computations. This result is established via an information-theoretic approach that relates the round complexity to the minimal amount of information required by machines to solve the problem. Our approach is generic, and this theorem can be used in a "cookbook" fashion to show distributed lower bounds for several problems, including non-graph problems. We present two applications by showing (almost) tight lower bounds on the round complexity of two fundamental graph problems, namely, PageRank computation and triangle enumeration. These applications show that our approach can yield lower bounds for problems where the application of communication complexity techniques seems not obvious or gives weak bounds, including and especially under a stochastic partition of the input. We then present distributed algorithms for PageRank and triangle enumeration with a round complexity that (almost) matches the respective lower bounds; these algorithms exhibit a round complexity that scales superlinearly in k, improving significantly over previous results [Klauck et al., SODA 2015]. Specifically, we show the following results: PageRank: We show a lower bound of Ω(n/k^2) rounds and present a distributed algorithm that computes an approximation of the PageRank of all the nodes of a graph in Õ(n/k^2) rounds. Triangle enumeration: We show that there exist graphs with m edges where any distributed algorithm requires Ω(m/k^{5/3}) rounds. This result also implies the first non-trivial lower bound of Ω(n^{1/3}) rounds for the congested clique model, which is tight up to logarithmic factors. We then present a distributed algorithm that enumerates all the triangles of a graph in Õ(m/k^{5/3} + n/k^{4/3}) rounds.
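
For context, the quantity being approximated can be sketched centrally as follows (a plain power-iteration sketch of ours; it is not the paper's k-machine distributed algorithm).

    # Power iteration for PageRank with damping factor 0.85 on an adjacency
    # list; dangling nodes spread their rank uniformly.
    def pagerank(adjacency, damping=0.85, iterations=50):
        n = len(adjacency)
        rank = [1.0 / n] * n
        for _ in range(iterations):
            new_rank = [(1.0 - damping) / n] * n
            for u, neighbors in enumerate(adjacency):
                if neighbors:
                    share = damping * rank[u] / len(neighbors)
                    for v in neighbors:
                        new_rank[v] += share
                else:
                    for v in range(n):
                        new_rank[v] += damping * rank[u] / n
            rank = new_rank
        return rank

    # Example: a directed 3-node cycle has uniform PageRank 1/3.
    print(pagerank([[1], [2], [0]]))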


2016 ◽  
Vol 26 (02) ◽  
pp. 89-110 ◽  
Author(s):  
Adrian Dumitrescu ◽  
Anirban Ghosh

(I) We exhibit a set of 23 points in the plane that has dilation at least [Formula: see text], improving the previous best lower bound of [Formula: see text] for the worst-case dilation of plane spanners. (II) For every [Formula: see text], there exists an [Formula: see text]-element point set [Formula: see text] such that the degree [Formula: see text] dilation of [Formula: see text] equals [Formula: see text] in the domain of plane geometric spanners. In the same domain, we show that for every [Formula: see text], there exists an [Formula: see text]-element point set [Formula: see text] such that the degree [Formula: see text] dilation of [Formula: see text] equals [Formula: see text]. The previous best lower bound of [Formula: see text] holds for any degree. (III) For every [Formula: see text], there exists an [Formula: see text]-element point set [Formula: see text] such that the stretch factor of the greedy triangulation of [Formula: see text] is at least [Formula: see text].
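
For readers unfamiliar with the terminology, the dilation (stretch factor) of a plane geometric graph is the maximum, over all pairs of points, of the ratio between their shortest-path distance in the graph and their Euclidean distance. The sketch below (ours, not the paper's) computes it by brute force with Floyd-Warshall.

    import math
    from itertools import combinations

    # points: list of (x, y); edges: list of index pairs forming the spanner.
    def dilation(points, edges):
        n = len(points)
        dist = [[math.inf] * n for _ in range(n)]
        for i in range(n):
            dist[i][i] = 0.0
        for u, v in edges:
            d = math.dist(points[u], points[v])
            dist[u][v] = dist[v][u] = d
        for k in range(n):                # all-pairs shortest paths
            for i in range(n):
                for j in range(n):
                    if dist[i][k] + dist[k][j] < dist[i][j]:
                        dist[i][j] = dist[i][k] + dist[k][j]
        return max(dist[i][j] / math.dist(points[i], points[j])
                   for i, j in combinations(range(n), 2))

    # Example: the boundary of a unit square has dilation sqrt(2) ~ 1.414.
    print(dilation([(0, 0), (1, 0), (1, 1), (0, 1)],
                   [(0, 1), (1, 2), (2, 3), (3, 0)]))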

