Rapid mixing of the switch Markov chain for strongly stable degree sequences

2020 ◽  
Vol 57 (3) ◽  
pp. 637-657
Author(s):  
Georgios Amanatidis ◽  
Pieter Kleer
2021 ◽  
Vol 291 ◽  
pp. 143-162
Author(s):  
Pu Gao ◽  
Catherine Greenhill

10.37236/9652 ◽  
2021 ◽  
Vol 28 (3) ◽  
Author(s):  
Péter L. Erdős ◽  
Ervin Győri ◽  
Tamás Róbert Mezei ◽  
István Miklós ◽  
Dániel Soltész

One of the simplest methods of generating a random graph with a given degree sequence is the Markov chain Monte Carlo method using switches. The switch Markov chain converges to the uniform distribution, but in general its rate of convergence is not known. After a number of results concerning various degree sequences, rapid mixing was established for so-called P-stable degree sequences (including those of directed graphs), a class that covers every previously known rapidly mixing region of degree sequences. In this paper we give a non-trivial family of degree sequences that are not P-stable, yet on which the switch Markov chain is still rapidly mixing. This family has an intimate connection to Tyshkevich decompositions and strong stability as well.
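As a point of reference, the following is a minimal sketch of a single switch (swap) move on a simple undirected graph stored as a set of edges; the representation and the function name `switch_step` are illustrative choices of ours, not taken from the paper.

```python
import random

def switch_step(edges, rng=random):
    """One move of the switch (swap) Markov chain on simple undirected graphs.

    `edges` is a set of frozenset({u, v}) pairs.  Pick two edges {a, b} and
    {c, d} with four distinct endpoints and try to replace them with
    {a, c}, {b, d} (or {a, d}, {b, c}).  Every vertex degree is preserved;
    invalid proposals are rejected and the chain stays where it is.
    """
    e1, e2 = rng.sample(list(edges), 2)
    a, b = tuple(e1)
    c, d = tuple(e2)
    if len({a, b, c, d}) < 4:
        return edges                      # shared endpoint: reject
    if rng.random() < 0.5:
        f1, f2 = frozenset((a, c)), frozenset((b, d))
    else:
        f1, f2 = frozenset((a, d)), frozenset((b, c))
    if f1 in edges or f2 in edges:
        return edges                      # would create a parallel edge: reject
    return (edges - {e1, e2}) | {f1, f2}
```

Iterating this step gives the switch chain whose mixing time the paper studies; the rejection rules keep the chain inside the set of simple graphs with the prescribed degree sequence.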


2013 ◽  
Vol 22 (3) ◽  
pp. 366-383 ◽  
Author(s):  
PÉTER L. ERDŐS ◽  
ZOLTÁN KIRÁLY ◽  
ISTVÁN MIKLÓS

One of the first graph-theoretical problems to be given serious attention (in the 1950s) was the decision whether a given integer sequence is equal to the degree sequence of a simple graph (or graphical, for short). One method to solve this problem is the greedy algorithm of Havel and Hakimi, which is based on the swap operation. Another, closely related question is to find a sequence of swap operations to transform one graphical realization into another of the same degree sequence. This latter problem has received particular attention in the context of rapidly mixing Markov chain approaches to uniform sampling of all possible realizations of a given degree sequence. (This becomes a matter of interest in the context of the study of large social networks, for example.) Previously there were only crude upper bounds on the shortest possible length of such swap sequences between two realizations. In this paper we develop formulae (Gallai-type identities) for the swap-distances of any two realizations of simple undirected or directed degree sequences. These identities considerably improve the known upper bounds on the swap-distances.
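The Havel–Hakimi test mentioned above can be stated compactly in code. The sketch below is a straightforward textbook implementation; the function name `is_graphical` is ours, not the paper's.

```python
def is_graphical(sequence):
    """Havel-Hakimi test: is the integer sequence the degree sequence of a
    simple graph?  Repeatedly remove the largest degree d and subtract 1
    from the next d largest degrees; the sequence is graphical iff this
    process terminates with all zeros.
    """
    seq = sorted(sequence, reverse=True)
    while seq and seq[0] > 0:
        d = seq.pop(0)
        if d > len(seq):
            return False          # not enough remaining vertices to connect to
        for i in range(d):
            seq[i] -= 1
            if seq[i] < 0:
                return False      # some degree was driven below zero
        seq.sort(reverse=True)
    return all(x == 0 for x in seq)

# Example: is_graphical([3, 3, 2, 2, 2]) is True, is_graphical([3, 3, 3, 1]) is False.
```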


2017 ◽  
Vol 27 (2) ◽  
pp. 186-207
Author(s):  
PÉTER L. ERDŐS ◽  
ISTVÁN MIKLÓS ◽  
ZOLTÁN TOROCZKAI

In network modelling of complex systems one is often required to sample random realizations of networks that obey a given set of constraints, usually in the form of graph measures. A much-studied class of problems targets uniform sampling of simple graphs with a given degree sequence, or additionally with given degree correlations expressed in the form of a Joint Degree Matrix. One approach is to use Markov chains based on edge switches (swaps) that preserve the constraints, are irreducible (ergodic) and mix rapidly. In 1999, Kannan, Tetali and Vempala (KTV) proposed a simple swap Markov chain for sampling graphs with a given degree sequence, and conjectured that it mixes rapidly (in polynomial time) for arbitrary degree sequences. Although the conjecture is still open, it has been proved for special degree sequences, in particular for those of undirected and directed regular simple graphs, half-regular bipartite graphs, and graphs with certain bounded maximum degrees. Here we prove the fast-mixing KTV conjecture for novel, exponentially large classes of irregular degree sequences. Our method is based on a canonical decomposition of degree sequences into split graph degree sequences, on a structural theorem for the space of graph realizations, and on a factorization theorem for Markov chains. After introducing bipartite 'splitted' degree sequences, we also generalize the canonical split graph decomposition to bipartite and directed graphs.
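The decomposition above relies on recognising split graph degree sequences. A standard criterion for this (due to Hammer and Simeone) can be checked in a few lines; the sketch below assumes the input sequence is already known to be graphical, and the function name is illustrative rather than taken from the paper.

```python
def is_split_sequence(degrees):
    """Hammer-Simeone test: a graphical sequence d1 >= ... >= dn is the
    degree sequence of a split graph iff
        sum_{i <= m} d_i == m*(m-1) + sum_{i > m} d_i,
    where m = |{ i : d_i >= i-1 }| (1-indexed).
    Assumes `degrees` is already known to be graphical.
    """
    d = sorted(degrees, reverse=True)
    m = sum(1 for i, di in enumerate(d, start=1) if di >= i - 1)
    return sum(d[:m]) == m * (m - 1) + sum(d[m:])

# Example: [2, 2, 2, 0] (a triangle plus an isolated vertex) is a split
# sequence; [2, 2, 2, 2] (a 4-cycle) is not.
```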


2022 ◽  
Vol 36 (1) ◽  
pp. 118-146
Author(s):  
Georgios Amanatidis ◽  
Pieter Kleer

2008 ◽  
Vol 19 (06) ◽  
pp. 1461-1477 ◽  
Author(s):  
MARKUS JALSENIUS ◽  
KASPER PEDERSEN

We study the mixing time of a systematic scan Markov chain for sampling from the uniform distribution on proper 7-colourings of a finite rectangular sub-grid of the infinite square lattice, the grid. A systematic scan Markov chain cycles through finite-size subsets of vertices in a deterministic order and updates the colours assigned to the vertices of each subset. The systematic scan Markov chain that we present cycles through subsets consisting of 2×2 sub-grids and updates the colours assigned to the vertices using a procedure known as heat-bath. We give a computer-assisted proof that this systematic scan Markov chain mixes in O(log n) scans, where n is the size of the rectangular sub-grid. We make use of a heuristic to compute the required couplings of colourings of 2×2 sub-grids. This is the first time a systematic scan Markov chain on the grid has been shown to mix rapidly with fewer than 8 colours. We also give partial results that underline the challenges of proving rapid mixing of a systematic scan Markov chain for sampling 6-colourings of the grid, by considering 2×3 and 3×3 sub-grids.
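To make the update rule concrete, here is a rough sketch of a heat-bath resampling of one 2×2 block with the boundary colours held fixed. The data layout (`colouring`, `neighbours`) is an assumption of ours; the paper's computer-assisted coupling analysis does not take this form.

```python
import itertools
import random

def heat_bath_block(colouring, block, neighbours, q, rng=random):
    """Heat-bath update of one 2x2 block of a proper q-colouring of the grid.

    `colouring` maps vertex -> colour, `block` lists the four block vertices,
    and `neighbours` maps each vertex to its grid neighbours.  The block is
    resampled uniformly from all joint assignments that are proper both
    inside the block and against the fixed colours on its boundary.
    """
    inside = set(block)
    valid = []
    for cols in itertools.product(range(q), repeat=len(block)):
        trial = dict(zip(block, cols))
        proper = all(
            trial[v] != (trial[u] if u in inside else colouring[u])
            for v in block
            for u in neighbours[v]
        )
        if proper:
            valid.append(trial)
    colouring.update(rng.choice(valid))

# A systematic scan visits all 2x2 blocks in a fixed deterministic order;
# one pass over every block counts as a single "scan".
```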


10.37236/721 ◽  
2011 ◽  
Vol 18 (1) ◽  
Author(s):  
Catherine Greenhill

The switch chain is a well-known Markov chain for sampling directed graphs with a given degree sequence. While the chain is not ergodic in general, we show that it is ergodic for regular degree sequences. We then prove that the switch chain is rapidly mixing for regular directed graphs of degree $d$, where $d$ is any positive integer-valued function of the number of vertices. We bound the mixing time by bounding the eigenvalues of the chain. A new result is presented and applied to bound the smallest (most negative) eigenvalue. This result is a modification of a lemma by Diaconis and Stroock [Annals of Applied Probability 1991], and by using it we avoid working with a lazy chain. A multicommodity flow argument is used to bound the second-largest eigenvalue of the chain. This argument is based on the analysis of a related Markov chain for undirected regular graphs by Cooper, Dyer and Greenhill [Combinatorics, Probability and Computing 2007], but significant extensions are required.
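For comparison with the undirected case sketched earlier, a single directed switch move can be written as follows; the arc-set representation and the rejection rules are illustrative, not taken from the paper.

```python
import random

def directed_switch_step(arcs, rng=random):
    """One move of the switch chain for simple directed graphs (no loops,
    no parallel arcs).  `arcs` is a set of (tail, head) pairs.  Pick two
    arcs (a, b) and (c, d) and try to replace them with (a, d) and (c, b);
    this preserves every in- and out-degree.  Invalid proposals are
    rejected and the chain stays put.
    """
    (a, b), (c, d) = rng.sample(list(arcs), 2)
    if a == d or c == b:
        return arcs                        # swap would create a loop: reject
    if (a, d) in arcs or (c, b) in arcs:
        return arcs                        # swap would duplicate an arc: reject
    return (arcs - {(a, b), (c, d)}) | {(a, d), (c, b)}
```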


2019 ◽  
Vol 62 (3) ◽  
pp. 577-586 ◽  
Author(s):  
Garnett P. McMillan ◽  
John B. Cannon

Purpose: This article presents a basic exploration of Bayesian inference to inform researchers unfamiliar with this type of analysis of the many advantages this readily available approach provides. Method: First, we demonstrate the development of Bayes' theorem, the cornerstone of Bayesian statistics, into an iterative process of updating priors. Working with a few assumptions, including normality and conjugacy of the prior distribution, we express how one would calculate the posterior distribution using the prior distribution and the likelihood of the parameter. Next, we move to an example in auditory research by considering the effect of sound therapy for reducing the perceived loudness of tinnitus. In this case, as in most real-world settings, we turn to Markov chain simulations because the assumptions allowing for easy calculations no longer hold. Using Markov chain Monte Carlo methods, we illustrate several analysis solutions given by a straightforward Bayesian approach. Conclusion: Bayesian methods are widely applicable and can help scientists overcome analysis problems, including how to include existing information, run interim analyses, achieve consensus through measurement, and, most importantly, interpret results correctly. Supplemental Material: https://doi.org/10.23641/asha.7822592
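As a small illustration of the conjugate-normal updating step described in the Method (not code from the article), the posterior under a normal prior and a normal likelihood with known variance combines the two precisions; the function name and example values are ours.

```python
def normal_posterior(prior_mean, prior_var, data, likelihood_var):
    """Conjugate normal-normal update with known likelihood variance.

    Prior:      theta ~ N(prior_mean, prior_var)
    Likelihood: each observation y_i ~ N(theta, likelihood_var)
    Posterior:  theta | data ~ N(post_mean, post_var), where the posterior
    precision is the sum of the prior precision and the data precision.
    The posterior can serve as the prior for the next batch of data,
    mirroring the iterative updating described in the abstract.
    """
    n = len(data)
    prior_prec = 1.0 / prior_var
    data_prec = n / likelihood_var
    post_var = 1.0 / (prior_prec + data_prec)
    post_mean = post_var * (prior_prec * prior_mean +
                            data_prec * (sum(data) / n))
    return post_mean, post_var

# Illustrative call with made-up numbers:
# normal_posterior(0.0, 10.0, [1.2, 0.8, 1.5], 1.0)
```

When conjugacy or normality fails, as in the tinnitus example, this closed-form update is replaced by Markov chain Monte Carlo sampling from the posterior.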

