Tight Bounds on the Round Complexity of Distributed 1-Solvable Tasks

1991
Vol 20 (377)
Author(s):  
Ofer Biran ◽  
Shlomo Moran ◽  
Shmuel Zaks

A distributed task T is 1-solvable if there exists a protocol that solves it in the presence of (at most) one crash failure. A precise characterization of the 1-solvable tasks was given by the authors in 1990. In this paper we determine the number of rounds of communication that are required, in the worst case, by a protocol that 1-solves a given 1-solvable task T for n processors. We define the radius R(T) of T and show that if R(T) is finite, then this number is Θ(log_n R(T)); more precisely, we give a lower bound of log_{n-1} R(T) and an upper bound of 2 + ⌈log_{n-1} R(T)⌉. The upper bound implies, for example, that each of the following tasks can be solved in the presence of one fault in three rounds of communication: renaming, order-preserving renaming, and binary monotone consensus. All previous protocols that 1-solved these tasks required Ω(n) rounds. The result is also generalized to tasks whose radii are not bounded, e.g., the approximate consensus and its variants.
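As a quick numerical illustration of these bounds (a minimal sketch assuming the formulas above; the radius value is hypothetical and not taken from any specific task):

```python
import math

def ceil_log(x, base):
    """Smallest integer k with base**k >= x (for integers x >= 1, base >= 2)."""
    k, p = 0, 1
    while p < x:
        p *= base
        k += 1
    return k

def round_bounds(radius, n):
    """Lower and upper round bounds quoted above for a 1-solvable task
    with radius R(T) = radius and n processors:
    log_{n-1} R(T) and 2 + ceil(log_{n-1} R(T))."""
    lower = math.log(radius, n - 1)
    upper = 2 + ceil_log(radius, n - 1)
    return lower, upper

# Illustrative values only.
n = 10
lower, upper = round_bounds((n - 1) ** 3, n)
print(f"lower bound ≈ {lower:.2f} rounds, upper bound = {upper} rounds")
```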

2008
Vol. 10 no. 3
Author(s):  
Cyril Gavoille ◽  
Nicolas Hanusse

In this paper we show an information-theoretic lower bound of kn − o(kn) on the minimum number of bits needed to represent an unlabeled simple connected n-node graph of pagenumber k. This should be compared with the efficient encoding scheme of Munro and Raman of 2kn + 2m + o(kn + m) bits (where m is the number of edges), that is, 4kn + 2n + o(kn) bits in the worst case. For m-edge graphs of pagenumber k (with multi-edges and loops), we propose a 2m·log₂ k + O(m) bit encoding, improving the best previous upper bound of Munro and Raman whenever m ≤ kn/(2 log₂ k). In fact, our scheme applies to any k-page embedding containing multi-edges and loops. Moreover, with an auxiliary table of o(m log k) bits, our coding supports (1) computing the degree of a node in constant time, (2) adjacency queries with O(log k) queries of type rank, select and match, that is, in O(log k · min{log k/log log m, log log k}) time, and (3) access to the δ neighbors of a node in O(δ) runs of select, rank or match.
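A small sketch of the space comparison implied by these bounds (dominant terms only: the o(·) and O(·) terms are dropped, and the numbers are purely illustrative):

```python
import math

def munro_raman_bits(n, k, m):
    # Dominant terms of the Munro-Raman scheme quoted above:
    # 2kn + 2m bits (the o(kn + m) term is dropped).
    return 2 * k * n + 2 * m

def proposed_bits(k, m):
    # Dominant term of the proposed encoding: 2m * log2(k) bits
    # (the O(m) term is dropped).
    return 2 * m * math.log2(k)

def improves(n, k, m):
    # Condition quoted above under which the new bound is the better one:
    # m <= kn / (2 * log2(k)).
    return m <= k * n / (2 * math.log2(k))

# Illustrative numbers only.
n, k, m = 10_000, 4, 8_000
print(improves(n, k, m))            # True
print(munro_raman_bits(n, k, m))    # 96000
print(round(proposed_bits(k, m)))   # 32000
```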


2014
Vol 25 (07)
pp. 823-835
Author(s):  
DANIEL GOČ ◽  
ALEXANDROS PALIOUDAKIS ◽  
KAI SALOMAA

The language ½(L) consists of the first halves of strings in L. Many other variants of a proportional removal operation have been considered in the literature, and a characterization of the removal operations that preserve regularity is known. We consider the nondeterministic state complexity of the operation ½(L) and, more generally, of polynomial removals as defined by Domaratzki (J. Automata, Languages and Combinatorics 7(4), 2002). We give an O(n²) upper bound for the nondeterministic state complexity of polynomial removals and a matching lower bound in cases where the polynomial is a sum of a monomial and a constant, or where the polynomial has rational roots.
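A minimal sketch of the operation on a finite sample language, assuming the standard reading of "first halves" (x is in ½(L) iff xy is in L for some y with |y| = |x|); the sample set is hypothetical:

```python
def first_halves(language):
    """First halves of the strings in a finite language: x is kept iff
    x is the first half of some even-length word in the language
    (odd-length words contribute nothing under this reading)."""
    return {w[: len(w) // 2] for w in language if len(w) % 2 == 0}

# Toy example.
L = {"ab", "abab", "abba", "a"}       # "a" has odd length, contributes nothing
print(sorted(first_halves(L)))        # ['a', 'ab']
```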


1991
Vol 20 (364)
Author(s):  
O. Gerstel ◽  
Shmuel Zaks

A new characterization of tree medians is presented: we show that a vertex m is a median of a tree T with n vertices iff there exists a partition of the vertex set into ⌊n/2⌋ disjoint pairs (excluding m when n is odd) such that all the paths connecting the two vertices in each of the pairs pass through m. We show that for such a partition the sum of the distances between the paired vertices is the largest possible among all partitions of the vertex set into ⌊n/2⌋ pairs, and we use this fact to discuss lower bounds on the message complexity of the distributed sorting problem. This lower bound implies that, given a network with a tree topology, choosing a median and then routing all the information through it is the best possible strategy, in terms of the worst-case number of messages sent during any execution of any distributed sorting algorithm. We also discuss the implications for networks of general topology and for the distributed ranking problem.
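A small sketch of the condition in the characterization: every pair must straddle two different components of T with m removed, which is exactly when the path between the paired vertices passes through m (the adjacency-list input and the example tree are hypothetical):

```python
from collections import deque

def component_labels(adj, removed):
    """Label every vertex other than `removed` with the component of the
    tree it lies in once `removed` is deleted."""
    label, comp = {}, 0
    for start in adj:
        if start == removed or start in label:
            continue
        comp += 1
        label[start] = comp
        queue = deque([start])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v != removed and v not in label:
                    label[v] = comp
                    queue.append(v)
    return label

def pairing_passes_through(adj, m, pairs):
    """True iff the path between the two vertices of every pair passes
    through m, i.e. each pair spans two different components of T - m
    (a pair containing m itself passes trivially)."""
    label = component_labels(adj, m)
    return all(m in (u, v) or label[u] != label[v] for u, v in pairs)

# Toy tree on 5 vertices: a star centred at 1, whose median is 1.
adj = {1: [2, 3, 4, 5], 2: [1], 3: [1], 4: [1], 5: [1]}
print(pairing_passes_through(adj, 1, [(2, 3), (4, 5)]))  # True (n odd, m excluded)
```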


10.37236/1521
2000
Vol 7 (1)
Author(s):  
Paul J. Tanenbaum

Bound polysemy is the property of any pair $(G_1, G_2)$ of graphs on a shared vertex set $V$ for which there exists a partial order on $V$ such that any pair of vertices has an upper bound precisely when the pair is an edge in $G_1$ and a lower bound precisely when it is an edge in $G_2$. We examine several special cases and prove a characterization of the bound polysemic pairs that illuminates a connection with the squared graphs.
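A minimal sketch of the definition: given a finite poset, build the edge set G1 of pairs with a common upper bound and G2 of pairs with a common lower bound (the three-element poset used here is purely illustrative):

```python
from itertools import combinations

def bound_graphs(V, leq):
    """From a finite poset (V, <=) given as a predicate `leq`, return the
    edge sets of G1 (pairs with a common upper bound) and G2 (pairs with a
    common lower bound), following the definition above."""
    def has_upper(u, v):
        return any(leq(u, w) and leq(v, w) for w in V)
    def has_lower(u, v):
        return any(leq(w, u) and leq(w, v) for w in V)
    pairs = list(combinations(V, 2))
    G1 = {(u, v) for u, v in pairs if has_upper(u, v)}
    G2 = {(u, v) for u, v in pairs if has_lower(u, v)}
    return G1, G2

# Toy poset: 0 <= a and 0 <= b, with a and b incomparable.
V = ["0", "a", "b"]
order = {("0", "0"), ("a", "a"), ("b", "b"), ("0", "a"), ("0", "b")}
G1, G2 = bound_graphs(V, lambda x, y: (x, y) in order)
print(sorted(G1))  # [('0', 'a'), ('0', 'b')]: a and b have no common upper bound
print(sorted(G2))  # all three pairs: 0 is a lower bound for every pair
```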


1994
Vol 1 (35)
Author(s):  
Gerth Stølting Brodal

The problem of making bounded in-degree and out-degree data structures partially persistent is considered. The node copying method of Driscoll et al. is extended so that updates can be performed in worst-case constant time on the pointer machine model. Previously this was only known to be possible in amortised constant time [Driscoll89]. The result is presented in terms of a new strategy for Dietz and Raman's dynamic two-player pebble game on graphs. It is shown how to implement the strategy, and the upper bound on the required number of pebbles is improved from 2b + 2d + O(√b) to d + 2b, where b is the bound on the in-degree and d the bound on the out-degree. We also give a lower bound showing that the number of pebbles depends on the out-degree d.
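A tiny numeric illustration of the improvement in the pebble bound (the O(√b) lower-order term of the earlier bound is dropped, and the (b, d) values are arbitrary):

```python
def old_pebble_bound(b, d):
    # Dominant part of the earlier bound quoted above, 2b + 2d + O(sqrt(b)),
    # with the O(sqrt(b)) term dropped.
    return 2 * b + 2 * d

def new_pebble_bound(b, d):
    # Improved bound from the abstract: d + 2b pebbles, where b bounds the
    # in-degree and d bounds the out-degree.
    return d + 2 * b

for b, d in [(1, 2), (3, 3), (8, 4)]:
    print(f"b={b}, d={d}: old ≥ {old_pebble_bound(b, d)}, new = {new_pebble_bound(b, d)}")
```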


Algorithms
2021
Vol 14 (2)
pp. 65
Author(s):  
Danny Hucke ◽  
Carl Philipp Reh

A grammar-based compressor is an algorithm that receives a word and outputs a context-free grammar that only produces this word. The approximation ratio for a single input word is the size of the grammar produced for this word divided by the size of a smallest grammar for this word. The worst-case approximation ratio of a grammar-based compressor for a given word length is the largest approximation ratio over all input words of that length. In this work, we study the worst-case approximation ratio of the algorithms Greedy, RePair and LongestMatch on unary strings, i.e., strings that only make use of a single symbol. Our main contribution is to show the improved upper bound of O((log n)⁸ · (log log n)³) for the worst-case approximation ratio of Greedy. In addition, we show the lower bound of 1.34847194⋯ for the worst-case approximation ratio of Greedy, and that RePair and LongestMatch have a worst-case approximation ratio of log₂ 3.
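As a concrete illustration of the object being measured, here is a minimal sketch that builds a context-free grammar (a straight-line program) whose only derivable word is the unary string aⁿ, using repeated squaring; this illustrates grammar size only and is not an implementation of Greedy, RePair or LongestMatch:

```python
def unary_slp(n):
    """Build a context-free grammar (a straight-line program) whose only
    word is 'a' * n, via repeated squaring.  Rules map a nonterminal to the
    list of symbols on its right-hand side."""
    assert n >= 1
    rules = {"X0": ["a"]}            # X0 derives a
    powers = ["X0"]                  # powers[i] derives a ** (2 ** i)
    i = 0
    while 2 ** (i + 1) <= n:
        i += 1
        rules[f"X{i}"] = [powers[-1], powers[-1]]   # X_i -> X_{i-1} X_{i-1}
        powers.append(f"X{i}")
    # The start rule concatenates the powers of two in the binary expansion of n.
    rules["S"] = [powers[j] for j in range(i + 1) if (n >> j) & 1]
    return rules

def grammar_size(rules):
    # One common convention: the size of a grammar is the total length of
    # all right-hand sides.
    return sum(len(rhs) for rhs in rules.values())

g = unary_slp(45)                    # 45 = 32 + 8 + 4 + 1
print(grammar_size(g), g["S"])       # 15 ['X0', 'X2', 'X3', 'X5']
```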


2021
Author(s):  
Nisha Chopra

Consider a unit disk with two objects at unidentified locations. We examine the problem of two or more robots searching for both objects in the wireless communication model. We begin with two robots, both of which are needed to carry an object, and design several algorithms that describe the robots' trajectories in search of the objects. We achieve a worst-case search time of 6.7518 and a lower bound of 3 + π/2. Additionally, we define two general cases and bound the worst-case search time for both. The first case has n ≥ 3 robots, where an object can be moved by a single robot; the second case has n ≥ 3 robots, where two robots are needed to carry an object. We achieve an upper bound of 1 + 2π/n + sin(⌊n/2⌋·π/n) for the first case and an upper bound of 3 + 2π/n + sin(π/n) for the second case, with lower bounds of 2 + π/n and 3 + π/n, respectively.
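A small sketch that evaluates the stated bounds numerically for a few values of n (purely an arithmetic illustration of the formulas quoted above):

```python
import math

def case1_upper(n):   # n >= 3 robots, one robot can move an object
    return 1 + 2 * math.pi / n + math.sin(math.floor(n / 2) * math.pi / n)

def case1_lower(n):
    return 2 + math.pi / n

def case2_upper(n):   # n >= 3 robots, two robots needed to carry an object
    return 3 + 2 * math.pi / n + math.sin(math.pi / n)

def case2_lower(n):
    return 3 + math.pi / n

for n in (3, 4, 8):
    print(f"n={n}: case 1 in [{case1_lower(n):.4f}, {case1_upper(n):.4f}], "
          f"case 2 in [{case2_lower(n):.4f}, {case2_upper(n):.4f}]")
```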


2020
Vol 64 (7)
pp. 1197-1224
Author(s):  
Florian Stober ◽  
Armin Weiß

MergeInsertion, also known as the Ford-Johnson algorithm, is a sorting algorithm which, to date, achieves the best known upper bound on the number of comparisons for many input sizes. Indeed, it gets extremely close to the information-theoretic lower bound. While the worst-case behavior is well understood, only little is known about the average case. This work takes a closer look at the average-case behavior. In particular, we establish an upper bound of n log n − 1.4005n + o(n) comparisons. We also give an exact description of the probability distribution of the length of the chain a given element is inserted into and use it to approximate the average number of comparisons numerically. Moreover, we compute the exact average number of comparisons for n up to 148. Furthermore, we experimentally explore the impact of different decision trees for binary insertion. To conclude, we conduct experiments showing that a slightly different insertion order leads to a better average case, and we compare the algorithm to Manacher's combination of merging and MergeInsertion as well as to the recent combined algorithm with (1,2)-Insertionsort by Iwama and Teruyama.
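A quick numeric check of how close the stated average-case bound is to the information-theoretic lower bound log₂(n!) (a sketch: the o(n) term is dropped, and the abstract's log is read as log₂, the usual convention for comparison counts):

```python
import math

def average_case_upper(n):
    # Leading terms of the bound above: n*log2(n) - 1.4005*n
    # (the o(n) term is dropped, so this is only meaningful for large n).
    return n * math.log2(n) - 1.4005 * n

def info_theoretic_lower(n):
    # log2(n!) comparisons are necessary, on average and in the worst case,
    # for any comparison-based sorting algorithm.
    return math.lgamma(n + 1) / math.log(2)

for n in (10_000, 100_000, 1_000_000):
    up, low = average_case_upper(n), info_theoretic_lower(n)
    print(f"n={n}: upper ≈ {up:,.0f}, lower ≈ {low:,.0f}, gap/n ≈ {(up - low) / n:.4f}")
```

Since log₂(n!) = n log₂ n − n/ln 2 + O(log n) and 1/ln 2 ≈ 1.4427, the per-element gap in this comparison approaches roughly 1.4427 − 1.4005 ≈ 0.042 comparisons.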

