Distributed Computing
Latest Publications


TOTAL DOCUMENTS: 763 (last five years: 65)

H-INDEX: 54 (last five years: 2)

Published by Springer-Verlag

ISSN: 0178-2770 (print), 1432-0452 (electronic)

Author(s): Armando Castañeda, Yannai A. Gonczarowski, Yoram Moses

Author(s): Emin Karayel, Edgar Gonzàlez

Abstract: Commutative Replicated Data Types (CRDTs) are a promising class of data structures for large-scale shared mutable content in applications that require only eventual consistency. The WithOut Operational Transforms (WOOT) framework was the first CRDT for collaborative text editing, introduced by Oster et al. (In: Conference on Computer Supported Cooperative Work (CSCW), ACM, New York, pp 259–268, 2006a). To date, its eventual consistency property had been verified only for a bounded model. While the consistency of many other CRDTs was established in the publications that introduced them, the question remained open for WOOT for 14 years. We prove its consistency using a novel approach: we identify a previously unknown sort-key-based protocol that simulates the WOOT framework. We formalize the proof in the Isabelle/HOL proof assistant to machine-check its correctness.
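The sort-key idea behind the proof can be illustrated with a toy sequence CRDT. This sketch is not the paper's protocol or its Isabelle formalization; all names (`Key`, `SortKeyDoc`) and the fractional-position scheme are illustrative assumptions. Characters carry totally ordered keys (position tuple, then replica id as tiebreaker), deletes only tombstone, so replicas that receive the same operations in any order converge:

```python
import bisect
from dataclasses import dataclass

@dataclass(frozen=True, order=True)
class Key:
    pos: tuple   # dense fractional position, e.g. (1,) or (1, 5)
    site: int    # replica id; breaks ties between concurrent inserts

@dataclass
class Char:
    key: Key
    ch: str
    visible: bool = True

class SortKeyDoc:
    """Replica state: characters kept sorted by their globally ordered
    keys; deletes only tombstone, so applying the same operation set
    in any order yields the same visible text."""

    def __init__(self):
        self.chars = []

    def apply_insert(self, char):
        keys = [c.key for c in self.chars]
        i = bisect.bisect_left(keys, char.key)
        if i < len(self.chars) and self.chars[i].key == char.key:
            return  # duplicate delivery is idempotent
        self.chars.insert(i, char)

    def apply_delete(self, key):
        for c in self.chars:  # tombstone: hide, never remove
            if c.key == key:
                c.visible = False

    def text(self):
        return "".join(c.ch for c in self.chars if c.visible)

# Two replicas receive the same three inserts in opposite orders.
ops = [Char(Key((1,), 0), "H"),
       Char(Key((2,), 0), "i"),
       Char(Key((1, 5), 1), "e")]  # key lands between (1,) and (2,)
a, b = SortKeyDoc(), SortKeyDoc()
for op in ops:
    a.apply_insert(op)
for op in reversed(ops):
    b.apply_insert(op)
```

Both replicas end up with the text "Hei": convergence follows from the total order on keys alone, with no operational transforms, which is the intuition the paper's simulation argument makes precise for WOOT.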


Author(s): Othon Michail, George Skretas, Paul G. Spirakis

Abstract: We study systems of distributed entities that can actively modify their communication network. This gives rise to distributed algorithms that, apart from communicating, can also exploit network reconfiguration to carry out a given task. Moreover, the distributed task itself may now require a global reconfiguration from a given initial network $G_s$ to a target network $G_f$ from a desirable family of networks. To formally capture the costs of creating and maintaining connections, we define three edge-complexity measures: the total edge activations, the maximum activated edges per round, and the maximum activated degree of a node. We give polylog(n)-time algorithms for the task of transforming any $G_s$ into a $G_f$ of diameter polylog(n), while minimizing the edge complexity. Our main lower bound shows that $\varOmega(n)$ total edge activations and $\varOmega(n/\log n)$ activations per round must be paid by any algorithm (even a centralized one) that achieves an optimum of $\varTheta(\log n)$ rounds. We give three distributed algorithms for our general task. The first runs in $O(\log n)$ time, with at most $2n$ active edges per round, a total of $O(n\log n)$ edge activations, a maximum degree of $n-1$, and a target network of diameter 2. The second achieves bounded degree by paying an additional logarithmic factor in time and in total edge activations; it gives a target network of diameter $O(\log n)$ and uses $O(n)$ active edges per round. Our third algorithm shows that if we slightly increase the maximum degree to polylog(n), then we can achieve $o(\log^2 n)$ running time.
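To get a feel for the edge-complexity bookkeeping, here is a minimal sketch, not any of the paper's three algorithms: the classic 2-hop "graph squaring" round, in which every node activates edges to all of its neighbours' neighbours. On a path, this halves the diameter each round, reaching diameter 2 in $O(\log n)$ rounds, at the cost of an unbounded maximum degree (up to $n-1$), which is exactly the trade-off the paper's bounded-degree algorithms address:

```python
from collections import deque

def diameter(adj):
    """Exact diameter via BFS from every node (assumes a connected graph)."""
    def ecc(s):
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        return max(dist.values())
    return max(ecc(v) for v in range(len(adj)))

def square_round(adj):
    """One reconfiguration round: every node activates edges to all of
    its 2-hop neighbours.  Returns the new graph and the number of
    edges activated this round (each undirected edge counted once)."""
    new = [set(s) for s in adj]
    activated = 0
    for u in range(len(adj)):
        for v in adj[u]:
            for w in adj[v]:
                if w != u and w not in new[u]:
                    new[u].add(w)
                    new[w].add(u)
                    activated += 1
    return new, activated

# Path on 16 nodes: diameter 15 shrinks to <= 2 in O(log n) rounds.
n = 16
adj = [set() for _ in range(n)]
for i in range(n - 1):
    adj[i].add(i + 1)
    adj[i + 1].add(i)

rounds, total = 0, 0
while diameter(adj) > 2:
    adj, act = square_round(adj)
    rounds += 1
    total += act
```

On this path the loop stops after 3 rounds (diameter 15 → 8 → 4 → 2); `total` records the total edge activations paid, the first of the paper's three cost measures.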


Author(s): Talya Eden, Nimrod Fiat, Orr Fischer, Fabian Kuhn, Rotem Oshman

Author(s): Bogdan S. Chlebus, Dariusz R. Kowalski, Shailesh Vaya

Author(s): Costas Busch, Maurice Herlihy, Miroslav Popovic, Gokarna Sharma

Author(s): Matthias Volk, Borzoo Bonakdarpour, Joost-Pieter Katoen, Saba Aflaki

Abstract: Randomization is a key concept in distributed computing for tackling impossibility results. This also holds for self-stabilization in anonymous networks, where coin flips are often used to break symmetry. Although randomization is common in self-stabilizing algorithms, it is unclear what coin bias minimizes the expected convergence time. This paper proposes a technique to automatically synthesize this optimal coin bias. Our algorithm is based on a parameter-synthesis approach from the field of probabilistic model checking. It over- and under-approximates a given parameter region and iteratively refines the regions with minimal convergence time up to the desired accuracy. We describe the technique in detail and present a simple parallelization that gives an almost linear speed-up. We demonstrate the applicability of our technique by determining the optimal bias for Herman's well-known self-stabilizing token ring algorithm. Our synthesis shows that a fair coin is optimal for small rings, whereas for larger rings a biased coin is optimal, with the bias growing with the ring size. We also analyze a variant of Herman's algorithm that coincides with the original for fair coins but deviates for biased coins. Finally, we show how speed reducers in Herman's protocol improve the expected convergence time.
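The paper synthesizes the optimal bias analytically via parametric probabilistic model checking; as a hedged illustration of the object being optimized, the following Monte-Carlo sketch (all function names are mine) simulates Herman's token ring with coin bias `p` and estimates the expected convergence time empirically. In Herman's protocol (for an odd ring size), a process holds a token when its bit equals its left neighbour's; token holders draw a fresh random bit, everyone else copies the neighbour, and tokens annihilate pairwise until one remains:

```python
import random

def herman_step(bits, p):
    """One synchronous round of Herman's self-stabilizing token ring.
    Process i holds a token iff bits[i] == bits[i-1].  A token holder
    draws a fresh random bit (1 with probability p); all other
    processes copy their left neighbour's bit."""
    n = len(bits)
    return [(1 if random.random() < p else 0)
            if bits[i] == bits[(i - 1) % n]
            else bits[(i - 1) % n]
            for i in range(n)]

def tokens(bits):
    """Number of tokens in the current configuration."""
    n = len(bits)
    return sum(bits[i] == bits[(i - 1) % n] for i in range(n))

def mean_convergence(n, p, trials=200, seed=1):
    """Average number of rounds until a single token remains, starting
    from the all-zeros configuration (every process holds a token).
    The ring size n must be odd for Herman's algorithm."""
    random.seed(seed)
    total = 0
    for _ in range(trials):
        bits = [0] * n
        t = 0
        while tokens(bits) > 1:
            bits = herman_step(bits, p)
            t += 1
        total += t
    return total / trials

avg_fair = mean_convergence(7, 0.5, trials=50)
```

Comparing, say, `mean_convergence(7, 0.5)` against `mean_convergence(7, 0.7)` gives a rough empirical view of how the bias affects convergence; the paper's contribution is to pin down the optimum with guaranteed accuracy rather than by sampling.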


2021, Vol. 34 (5), pp. 319–348
Author(s): Duong Nguyen, Sorrachai Yingchareonthawornchai, Vidhya Tekken Valapil, Sandeep S. Kulkarni, Murat Demirbas
