Graph Sparsification for Derandomizing Massively Parallel Computation with Low Space

2021 ◽  
Vol 17 (2) ◽  
pp. 1-27
Author(s):  
Artur Czumaj ◽  
Peter Davies ◽  
Merav Parter

The Massively Parallel Computation (MPC) model is an emerging model that distills core aspects of distributed and parallel computation, developed as a tool to solve combinatorial (typically graph) problems in systems of many machines with limited space. Recent work has focused on the regime in which machines have sublinear (in n, the number of nodes in the input graph) space, with randomized algorithms presented for the fundamental problems of Maximal Matching and Maximal Independent Set. However, there have been no prior corresponding deterministic algorithms. A major challenge underlying the sublinear space setting is that the local space of each machine might be too small to store all edges incident to a single node. This poses a considerable obstacle compared to classical models in which each node is assumed to know and have easy access to its incident edges. To overcome this barrier, we introduce a new graph sparsification technique that deterministically computes a low-degree subgraph, with the additional property that solving the problem on this subgraph provides significant progress towards solving the problem for the original input graph. Using this framework to derandomize the well-known algorithm of Luby [SICOMP'86], we obtain O(log Δ + log log n)-round deterministic MPC algorithms for solving the problems of Maximal Matching and Maximal Independent Set with O(n^ε) space on each machine for any constant ε > 0. These algorithms also run in O(log Δ) rounds in the closely related Congested Clique model, improving upon the state-of-the-art bound of O(log² Δ) rounds by Censor-Hillel et al. [DISC'17].
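The randomized baseline being derandomized here is Luby's classical algorithm. As a rough reference point only, a minimal centralized simulation of a Luby-style MIS round structure (not the paper's deterministic, sublinear-space MPC algorithm) might look like the sketch below; the function name luby_mis, the marking probability 1/(2·deg), and the degree-based tie-breaking are illustrative choices.

```python
import random

def luby_mis(adj):
    """Randomized Luby-style maximal independent set (centralized simulation).

    adj: dict mapping each vertex to a set of its neighbours.
    Returns a maximal independent set as a Python set.
    """
    adj = {v: set(nbrs) for v, nbrs in adj.items()}  # local working copy
    mis = set()
    while adj:
        # Each surviving vertex marks itself with probability 1/(2*deg);
        # isolated vertices are always marked.
        marked = {v for v, nbrs in adj.items()
                  if not nbrs or random.random() < 1.0 / (2 * len(nbrs))}
        # Resolve conflicts on edges with both endpoints marked:
        # keep the endpoint of higher degree (ties broken by vertex id).
        for v in list(marked):
            for u in adj[v]:
                if u in marked and (len(adj[u]), u) > (len(adj[v]), v):
                    marked.discard(v)
                    break
        # Surviving marked vertices join the MIS; remove them and their neighbours.
        removed = set(marked)
        for v in marked:
            removed |= adj[v]
        mis |= marked
        adj = {v: nbrs - removed for v, nbrs in adj.items() if v not in removed}
    return mis

# Example: a 5-cycle.
g = {0: {1, 4}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {3, 0}}
print(luby_mis(g))
```

Each iteration of the while loop corresponds to one synchronous round; the difficulty addressed in the paper is to replace the random marking step deterministically when no single machine can even hold all edges incident to a high-degree node.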

2021 ◽  
Vol 32 (01) ◽  
pp. 93-114
Author(s):  
Vadim E. Levit ◽  
David Tankus

A graph G is well-covered if all its maximal independent sets are of the same cardinality. Assume that a weight function w is defined on its vertices. Then G is w-well-covered if all maximal independent sets are of the same weight. For every graph G, the set of weight functions w such that G is w-well-covered is a vector space, denoted as WCW(G). Deciding whether an input graph G is well-covered is co-NP-complete. Therefore, finding WCW(G) is co-NP-hard. A generating subgraph of a graph G is an induced complete bipartite subgraph B of G on vertex sets of bipartition B_X and B_Y, such that each of S ∪ B_X and S ∪ B_Y is a maximal independent set of G, for some independent set S. If B is generating, then w(B_X) = w(B_Y) for every weight function w in WCW(G). Therefore, generating subgraphs play an important role in finding WCW(G). The decision problem whether a subgraph of an input graph is generating is known to be NP-complete. In this article we prove NP-completeness of the problem for graphs without cycles of length 3 and 5, and for bipartite graphs with girth at least 6. On the other hand, we supply polynomial algorithms for recognizing generating subgraphs and finding WCW(G) when the input graph is bipartite without cycles of length 6. We also present a polynomial algorithm which finds WCW(G) when G does not contain cycles of lengths 3, 4, 5, and 7.
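For readers new to the terminology, well-coveredness is easy to state operationally. The brute-force sketch below (illustrative only, exponential-time, and unrelated to the article's polynomial algorithms) checks the property on small graphs; all helper names are hypothetical.

```python
from itertools import combinations

def maximal_independent_sets(adj):
    """Enumerate all maximal independent sets of a small graph by brute force.

    adj: dict mapping each vertex to a set of its neighbours.
    Exponential time; intended only to illustrate the definitions.
    """
    vertices = list(adj)
    independent = []
    for r in range(len(vertices) + 1):
        for subset in combinations(vertices, r):
            s = set(subset)
            # Independent: no vertex of s has a neighbour inside s.
            if all(adj[u].isdisjoint(s - {u}) for u in s):
                independent.append(s)
    # Keep only the inclusion-maximal independent sets.
    return [s for s in independent if not any(s < t for t in independent)]

def is_well_covered(adj):
    """A graph is well-covered iff all maximal independent sets have equal size."""
    sizes = {len(s) for s in maximal_independent_sets(adj)}
    return len(sizes) == 1

# Example: the 4-cycle is well-covered (both maximal independent sets have size 2),
# while the path on 3 vertices is not (its maximal independent sets have sizes 1 and 2).
c4 = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
p3 = {0: {1}, 1: {0, 2}, 2: {1}}
print(is_well_covered(c4), is_well_covered(p3))
```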


2021 ◽  
Vol 68 (5) ◽  
pp. 1-30
Author(s):  
Alkida Balliu ◽  
Sebastian Brandt ◽  
Juho Hirvonen ◽  
Dennis Olivetti ◽  
Mikaël Rabie ◽  
...  

There are distributed graph algorithms for finding maximal matchings and maximal independent sets in O(Δ + log* n) communication rounds; here, n is the number of nodes and Δ is the maximum degree. The lower bound by Linial (1987, 1992) shows that the dependency on n is optimal: these problems cannot be solved in o(log* n) rounds even if Δ = 2. However, the dependency on Δ is a long-standing open question, and there is currently an exponential gap between the upper and lower bounds. We prove that the upper bounds are tight. We show that any algorithm that finds a maximal matching or maximal independent set with probability at least 1 - 1/n requires Ω(min{Δ, log log n / log log log n}) rounds in the LOCAL model of distributed computing. As a corollary, it follows that any deterministic algorithm that finds a maximal matching or maximal independent set requires Ω(min{Δ, log n / log log n}) rounds; this is an improvement over prior lower bounds also as a function of n.


2008 ◽  
Vol 18 (01) ◽  
pp. 189-199 ◽  
Author(s):  
WAYNE GODDARD ◽  
STEPHEN T. HEDETNIEMI ◽  
DAVID P. JACOBS ◽  
PRADIP K. SRIMANI ◽  
ZHENYU XU

We provide self-stabilizing algorithms to obtain and maintain a maximal matching, maximal independent set, or minimal dominating set in a given system graph. They converge in a linear number of rounds under a distributed or synchronous daemon. They can be implemented in an ad hoc network by piggybacking on the beacon messages that nodes already use.
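To give a flavour of such protocols, the sketch below simulates a self-stabilizing independent-set protocol under a serial daemon, using an ENTER/LEAVE rule pair in the spirit of classic self-stabilizing MIS algorithms; the rules and names are illustrative and not necessarily the exact rules of this paper.

```python
def stabilize_mis(adj, ids, in_set, max_moves=10_000):
    """Simulate a simple self-stabilizing maximal-independent-set protocol
    under a serial (central) daemon.

    adj:    dict vertex -> set of neighbours
    ids:    dict vertex -> unique comparable identifier
    in_set: dict vertex -> bool, an arbitrary (possibly corrupted) initial state

    Rules (illustrative):
      ENTER: a node outside the set with no neighbour in the set joins it.
      LEAVE: a node inside the set with a higher-id neighbour also inside leaves.
    """
    for _ in range(max_moves):
        moved = False
        for v in adj:
            nbrs_in = {u for u in adj[v] if in_set[u]}
            if not in_set[v] and not nbrs_in:
                in_set[v] = True          # ENTER move
                moved = True
                break                     # serial daemon: one move at a time
            if in_set[v] and any(ids[u] > ids[v] for u in nbrs_in):
                in_set[v] = False         # LEAVE move
                moved = True
                break
        if not moved:
            # Stable: no rule applies, so the marked nodes form a maximal independent set.
            return {v for v in adj if in_set[v]}
    raise RuntimeError("did not stabilize within max_moves")

# Example: path 0-1-2, starting from a corrupted state where 0 and 1 are both "in".
g = {0: {1}, 1: {0, 2}, 2: {1}}
print(stabilize_mis(g, ids={v: v for v in g}, in_set={0: True, 1: True, 2: False}))
# -> {1}
```

When no node can move, maximality follows from the ENTER rule and independence from the LEAVE rule, which is exactly the "stable configuration is a correct solution" property that self-stabilization demands.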


2021 ◽  
Vol 8 (3) ◽  
pp. 1-25
Author(s):  
Soheil Behnezhad ◽  
Laxman Dhulipala ◽  
Hossein Esfandiari ◽  
Jakub Łącki ◽  
Vahab Mirrokni ◽  
...  

We introduce the Adaptive Massively Parallel Computation (AMPC) model, which is an extension of the Massively Parallel Computation (MPC) model. At a high level, the AMPC model strengthens the MPC model by storing all messages sent within a round in a distributed data store. In the following round, all machines are provided with random read access to the data store, subject to the same constraints on the total amount of communication as in the MPC model. Our model is inspired by previous empirical studies of distributed graph algorithms [8, 30] using MapReduce and a distributed hash table service [17]. This extension allows us to give new graph algorithms with much lower round complexities compared to the best-known solutions in the MPC model. In particular, in the AMPC model we show how to solve maximal independent set in O(1) rounds and connectivity/minimum spanning tree in O(log log_{m/n} n) rounds, both using O(n^δ) space per machine for constant δ < 1. In the same memory regime for MPC, the best-known algorithms for these problems require poly(log n) rounds. Our results imply that the 2-Cycle conjecture, which is widely believed to hold in the MPC model, does not hold in the AMPC model.
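The distinguishing feature of AMPC is that, within a single round, a machine may issue a sequence of reads to the shared store where each address depends on the previous answer. The toy single-process sketch below (not the paper's algorithm; all names are illustrative) simulates one such adaptive primitive, walking parent pointers to a root, which in plain MPC would cost one communication round per hop.

```python
def find_roots_adaptively(parent, queries):
    """Toy single-process simulation of an AMPC-style primitive: follow parent
    pointers stored in a shared key-value store until a root is reached.

    parent:  dict simulating the read-only store written in the previous round
             (parent[v] == v marks a root).
    queries: the vertices a single machine is responsible for in this round.
    """
    roots = {}
    for v in queries:
        x = v
        while parent[x] != x:   # each step models one adaptive read from the store
            x = parent[x]
        roots[v] = x
    return roots

# Example: a forest with two trees rooted at 0 and 4.
parent = {0: 0, 1: 0, 2: 1, 3: 2, 4: 4, 5: 4}
print(find_roots_adaptively(parent, queries=[3, 5]))   # {3: 0, 5: 4}
```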


Author(s):  
Theresa Csar ◽  
Martin Lackner ◽  
Reinhard Pichler

The Schulze method is a voting rule widely used in practice and enjoys many positive axiomatic properties. While it is computable in polynomial time, its straightforward implementation does not scale well for large elections. In this paper, we develop a highly optimised algorithm for computing the Schulze method with Pregel, a framework for massively parallel computation of graph problems, and demonstrate its applicability for large preference data sets. In addition, our theoretical analysis shows that the Schulze method is indeed particularly well-suited for parallel computation, in stark contrast to the related ranked pairs method. More precisely, we show that winner determination subject to the Schulze method is NL-complete, whereas this problem is P-complete for the ranked pairs method.
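As background for the parallelization, the sequential Schulze computation is itself a widest-path problem over the pairwise comparison graph. A minimal sequential reference implementation of the winning-votes variant (not the paper's Pregel algorithm; the input format is an assumption) is sketched below.

```python
def schulze_winners(d):
    """Sequential reference implementation of the Schulze method.

    d: dict of dicts, d[a][b] = number of voters preferring candidate a to b.
    Returns the set of Schulze winners.
    """
    candidates = list(d)
    # Strength of the direct link a -> b: its support, if a beats b head-to-head.
    p = {a: {b: (d[a][b] if a != b and d[a][b] > d[b][a] else 0)
             for b in candidates} for a in candidates}
    # Floyd-Warshall-style widest-path relaxation over all intermediate candidates.
    for k in candidates:
        for a in candidates:
            if a == k:
                continue
            for b in candidates:
                if b in (a, k):
                    continue
                p[a][b] = max(p[a][b], min(p[a][k], p[k][b]))
    # a wins if its strongest path to every b is at least as strong as b's path back.
    return {a for a in candidates
            if all(p[a][b] >= p[b][a] for b in candidates if b != a)}

# Example: 45 voters, three candidates, pairwise tallies.
d = {
    "A": {"A": 0,  "B": 20, "C": 26},
    "B": {"A": 25, "B": 0,  "C": 16},
    "C": {"A": 19, "B": 29, "C": 0},
}
print(schulze_winners(d))   # expected: {'A'}
```

The cubic relaxation loop is what the Pregel formulation distributes across machines, and the NL-completeness result cited above is a statement about this widest-path structure rather than about any particular implementation.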

