The Mixing of Markov Chains on Linear Extensions in Practice

Author(s):  
Topi Talvitie ◽  
Teppo Niinimäki ◽  
Mikko Koivisto

We investigate almost uniform sampling from the set of linear extensions of a given partial order. The most efficient schemes stem from Markov chains whose mixing time bounds are polynomial, yet impractically large. We show that, on instances one encounters in practice, the actual mixing times can be much smaller than the worst-case bounds, and particularly so for a novel Markov chain we put forward. We circumvent the inherent hardness of estimating standard mixing times by introducing a refined notion, which admits estimation for moderate-size partial orders. Our empirical results suggest that the Markov chain approach to sampling linear extensions can be made to scale well in practice, provided that the actual mixing times can be realized by instance-sensitive upper bounds or termination rules. Examples of the latter include existing perfect simulation algorithms, whose running times in our experiments follow the actual mixing times of certain chains, albeit with significant overhead.
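As a concrete illustration of the kind of chain discussed above, here is a minimal sketch of the classical lazy adjacent-transposition chain on linear extensions (not the authors' novel chain); the poset is assumed to be given as a set of (a, b) pairs containing at least its cover relations, and the function names are illustrative.

```python
import random
from collections import defaultdict, deque

def topological_sort(elements, precedences):
    """Build some linear extension of the poset to use as a starting state."""
    indeg = {x: 0 for x in elements}
    succ = defaultdict(list)
    for a, b in precedences:
        succ[a].append(b)
        indeg[b] += 1
    queue = deque(x for x in elements if indeg[x] == 0)
    order = []
    while queue:
        x = queue.popleft()
        order.append(x)
        for y in succ[x]:
            indeg[y] -= 1
            if indeg[y] == 0:
                queue.append(y)
    return order

def sample_linear_extension(elements, precedences, n_steps):
    """Lazy adjacent-transposition chain: each step picks a random adjacent
    pair and swaps it (with probability 1/2) unless the swap would violate a
    listed precedence. If n_steps exceeds the mixing time, the returned
    extension is close to uniformly distributed."""
    order = topological_sort(elements, precedences)
    for _ in range(n_steps):
        if random.random() < 0.5:              # laziness ensures aperiodicity
            continue
        i = random.randrange(len(order) - 1)
        if (order[i], order[i + 1]) not in precedences:
            order[i], order[i + 1] = order[i + 1], order[i]
    return order

# Poset on {a, b, c, d} with a < b, a < c, c < d (cover relations only).
poset = {("a", "b"), ("a", "c"), ("c", "d")}
print(sample_linear_extension(["a", "b", "c", "d"], poset, n_steps=10_000))
```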

2013 ◽  
Vol 30 (01) ◽  
pp. 1250045 ◽  
Author(s):  
Jeffrey J. Hunter

The distribution of the "mixing time" or the "time to stationarity" in a discrete time irreducible Markov chain, starting in state i, can be defined as the number of trials to reach a state sampled from the stationary distribution of the Markov chain. Expressions for the probability generating function, and hence the probability distribution of the mixing time, starting in state i, are derived and special cases explored. This extends the results of the author regarding the expected time to mixing [Hunter, JJ (2006). Mixing times with applications to perturbed Markov chains. Linear Algebra and Its Applications, 417, 108–123] and the variance of the times to mixing, [Hunter, JJ (2008). Variances of first passage times in a Markov chain with applications to mixing times. Linear Algebra and Its Applications, 429, 1135–1162]. Some new results for the distribution of the recurrence and the first passage times in a general irreducible three-state Markov chain are also presented.
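The following small simulation illustrates one reading of this notion of mixing time for an arbitrary 3-state chain (the transition matrix is illustrative, not taken from the paper): a target state is drawn from the stationary distribution and the chain is run from state i until it first reaches that target.

```python
import numpy as np

rng = np.random.default_rng(0)

# An illustrative irreducible 3-state transition matrix (not from the paper).
P = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.2, 0.2, 0.6]])

def stationary(P):
    """Stationary distribution: the left eigenvector of P for eigenvalue 1."""
    w, v = np.linalg.eig(P.T)
    pi = np.real(v[:, np.argmin(np.abs(w - 1))])
    return pi / pi.sum()

def time_to_mixing(P, start, pi):
    """One realization of the 'time to mixing' from `start`: draw a target
    state from pi, then count steps until the chain first reaches it
    (a first return if the target happens to equal the start)."""
    target = rng.choice(len(pi), p=pi)
    state, steps = start, 0
    while True:
        state = rng.choice(len(pi), p=P[state])
        steps += 1
        if state == target:
            return steps

pi = stationary(P)
samples = [time_to_mixing(P, start=0, pi=pi) for _ in range(20_000)]
print("estimated expected time to mixing from state 0:", np.mean(samples))
```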


2011 ◽  
Vol DMTCS Proceedings vol. AO,... (Proceedings) ◽  
Author(s):  
Christine E. Heitsch ◽  
Prasad Tetali

We consider a Markov chain Monte Carlo approach to the uniform sampling of meanders. Combinatorially, a meander $M = [A:B]$ is formed by two noncrossing perfect matchings $A$ and $B$ on the same endpoints, with $A$ drawn above and $B$ below, which together form a single closed loop. We prove that meanders are connected under appropriate pairs of balanced local moves, one operating on $A$ and the other on $B$. We also prove that the subset of meanders with a fixed $B$ is connected under a suitable local move operating on an appropriately defined meandric triple in $A$. We provide diameter bounds under such moves, tight up to a (worst-case) factor of two. The mixing times of the Markov chains remain open.
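A small sketch of the underlying combinatorial object: assuming meanders are encoded as two perfect matchings on the endpoints 0..2n-1, the single-closed-loop condition can be checked by alternately following upper and lower arcs; the function name is illustrative.

```python
def is_meander(upper, lower):
    """Return True if the noncrossing perfect matchings `upper` (drawn above)
    and `lower` (drawn below), given as pairs of endpoint indices on the same
    2n points, trace out a single closed loop."""
    up, down = {}, {}
    for a, b in upper:
        up[a], up[b] = b, a
    for a, b in lower:
        down[a], down[b] = b, a
    seen, point, use_upper = set(), 0, True
    while point not in seen:        # alternate upper/lower arcs until the loop closes
        seen.add(point)
        point = up[point] if use_upper else down[point]
        use_upper = not use_upper
    return len(seen) == 2 * len(upper)

# Order-2 example: A = {(0,1), (2,3)} above and B = {(0,3), (1,2)} below form one loop.
print(is_meander([(0, 1), (2, 3)], [(0, 3), (1, 2)]))   # True
```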


1996 ◽  
Vol 33 (02) ◽  
pp. 357-367 ◽  
Author(s):  
M. V. Koutras

In this paper we consider a class of reliability structures which can be efficiently described through (imbedded in) finite Markov chains. Some general results are provided for the reliability evaluation and generating functions of such systems. Finally, it is shown that a great variety of well known reliability structures can be accommodated in this general framework, and certain properties of those structures are obtained on using their Markov chain imbedding description.
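As a hedged illustration of the Markov chain imbedding idea (using standard techniques rather than the paper's notation), the reliability of a consecutive-k-out-of-n:F system can be computed by tracking the current run of failed components as the state of a finite chain with an absorbing failure state.

```python
import numpy as np

def consecutive_k_out_of_n_F_reliability(n, k, p):
    """Reliability of a consecutive-k-out-of-n:F system via Markov chain
    imbedding: the state is the current run length of failed components
    (0..k-1), with an absorbing state once a run of k failures occurs.
    The system reliability is the probability of avoiding absorption
    after processing all n components."""
    q = 1.0 - p                     # failure probability of a single component
    P = np.zeros((k + 1, k + 1))
    for r in range(k):
        P[r, 0] = p                 # component works: run of failures resets
        P[r, r + 1] = q             # component fails: run grows by one
    P[k, k] = 1.0                   # system failure is absorbing
    dist = np.zeros(k + 1)
    dist[0] = 1.0                   # start with an empty failure run
    for _ in range(n):
        dist = dist @ P
    return dist[:k].sum()

# 10 components, each working with probability 0.9; two consecutive failures
# fail the system.
print(consecutive_k_out_of_n_F_reliability(10, 2, 0.9))
```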


10.37236/3028 ◽  
2013 ◽  
Vol 20 (1) ◽  
Author(s):  
István Miklós ◽  
Péter L Erdős ◽  
Lajos Soukup

In this paper we consider a simple Markov chain for bipartite graphs with a given degree sequence on $n$ vertices. We show that the mixing time of this Markov chain is bounded above by a polynomial in $n$ in the case of half-regular degree sequences. The novelty of our approach lies in the construction of the multicommodity flow in Sinclair's method.
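For illustration, here is a minimal sketch of a switch (swap) move of the kind such chains use, assuming the bipartite graph is stored as a set of (left, right) edge pairs; this is a generic sketch, not the paper's exact chain.

```python
import random

def switch_step(edges):
    """One move of a switch (swap) chain: pick two edges (a, b) and (c, d)
    uniformly at random and replace them with (a, d) and (c, b) when this
    creates no parallel edge; otherwise stay put. Degrees on both sides are
    preserved, and the symmetric proposal keeps the uniform distribution
    stationary."""
    (a, b), (c, d) = random.sample(list(edges), 2)
    if a != c and b != d and (a, d) not in edges and (c, b) not in edges:
        edges.remove((a, b)); edges.remove((c, d))
        edges.add((a, d)); edges.add((c, b))
    return edges

# Toy bipartite graph stored as (left, right) pairs; it is 2-regular on both sides.
G = {(0, "x"), (0, "y"), (1, "y"), (1, "z"), (2, "x"), (2, "z")}
for _ in range(1000):
    switch_step(G)
print(sorted(G, key=str))           # degrees are unchanged after 1000 moves
```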


2020 ◽  
Vol 24 ◽  
pp. 138-147 ◽  
Author(s):  
Andressa Cerqueira ◽  
Aurélien Garivier ◽  
Florencia Leonardi

In this paper, we propose a perfect simulation algorithm for the Exponential Random Graph Model, based on the coupling-from-the-past method of Propp and Wilson (1996). We use Glauber dynamics to construct the Markov chain and we prove the monotonicity of the ERGM for a subset of the parametric space. We also obtain an upper bound on the running time of the algorithm that depends on the mixing time of the Markov chain.
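The following is a generic monotone coupling-from-the-past skeleton in the spirit of Propp and Wilson's method that the paper builds on; it is not the authors' ERGM sampler, and the toy monotone chain at the end is purely illustrative.

```python
import random

def monotone_cftp(update, bottom, top, rng=None):
    """Generic Propp-Wilson coupling from the past for a monotone chain:
    run the two extreme states forward from time -T with shared randomness,
    doubling T until they coalesce; the common value is an exact draw from
    the stationary distribution. `update(state, u)` must be monotone in
    `state` for every fixed random value `u`."""
    rng = rng or random.Random(2024)
    us = {}                               # randomness indexed by (negative) time, reused across restarts
    T = 1
    while True:
        for t in range(-T, 0):
            if t not in us:
                us[t] = rng.random()
        lo, hi = bottom, top
        for t in range(-T, 0):            # evolve both extremes from -T up to time 0
            lo, hi = update(lo, us[t]), update(hi, us[t])
        if lo == hi:                      # coalescence: exact stationary sample
            return lo
        T *= 2

# Toy monotone chain: a biased walk on {0, ..., 10}; for each u the update
# is order-preserving in the state, so the sandwiching argument applies.
N, p = 10, 0.6
def walk_update(x, u):
    return min(x + 1, N) if u < p else max(x - 1, 0)

print(monotone_cftp(walk_update, bottom=0, top=N))
```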


10.37236/9503 ◽  
2020 ◽  
Vol 27 (4) ◽  
Author(s):  
Pieter Kleer ◽  
Viresh Patel ◽  
Fabian Stroh

We consider the irreducibility of switch-based Markov chains for the approximate uniform sampling of Hamiltonian cycles in a given undirected dense graph on $n$ vertices. As our main result, we show that every pair of Hamiltonian cycles in a graph with minimum degree at least $n/2+7$ can be transformed into each other by switch operations of size at most 10, implying that the switch Markov chain using switches of size at most 10 is irreducible. As a proof of concept, we also show that this Markov chain is rapidly mixing on dense monotone graphs.
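As a toy illustration of a switch operation, the sketch below performs the smallest case (exchanging two cycle edges for two chords, a 2-opt-style move) on a Hamiltonian cycle stored as a vertex sequence; the paper's chain allows switches of size up to 10, and the function name is illustrative.

```python
import random

def two_switch_step(cycle, adj):
    """The smallest switch on a Hamiltonian cycle (a 2-opt-style move):
    replace cycle edges (a, b) and (c, d) by the chords (a, c) and (b, d)
    when both chords are graph edges, reversing the segment between them;
    the result is again a Hamiltonian cycle."""
    n = len(cycle)
    i, j = sorted(random.sample(range(n), 2))
    a, b = cycle[i], cycle[(i + 1) % n]
    c, d = cycle[j], cycle[(j + 1) % n]
    if len({a, b, c, d}) == 4 and c in adj[a] and d in adj[b]:
        cycle[i + 1:j + 1] = reversed(cycle[i + 1:j + 1])
    return cycle

# Dense example: the complete graph on 8 vertices, where every switch is allowed.
n = 8
adj = {v: set(range(n)) - {v} for v in range(n)}
cycle = list(range(n))
for _ in range(100):
    two_switch_step(cycle, adj)
print(cycle)
```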


2006 ◽  
Vol. 8 ◽
Author(s):  
R. Balasubramanian ◽  
C.R. Subramanian

We study the problem of efficiently sampling $k$-colorings of bipartite graphs. We show that a class of Markov chains cannot be used as efficient samplers. Precisely, we show that, for any $k$ with $6 \le k \le n^{1/3-\epsilon}$, $\epsilon > 0$ fixed, almost every bipartite graph on $n+n$ vertices is such that the mixing time of any Markov chain asymptotically uniform on its $k$-colorings is exponential in $n/k^2$ (if the chain is allowed to change the colors of only $O(n/k)$ vertices in a single transition step). This kind of exponential-time mixing is called torpid mixing. As a corollary, we show that there are (for every $n$) bipartite graphs on $2n$ vertices with $\Delta(G) = \Omega(\ln n)$ such that for every $k$ with $6 \le k \le \Delta/(6 \ln \Delta)$, each member of a large class of chains mixes torpidly. While, for fixed $k$, such negative results are implied by the work of CDF, our results are more general in that they allow $k$ to grow with $n$. We also show that these negative results hold for $H$-colorings of bipartite graphs provided $H$ contains a spanning complete bipartite subgraph. We also present explicit examples of colorings ($k$-colorings or $H$-colorings) which admit 1-cautious chains that are ergodic and are shown to have exponential mixing time. While, for fixed $k$ or fixed $H$, such negative results are implied by the work of CDF, our results are more general in that they allow $k$ or $H$ to vary with $n$.
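For context, the simplest member of the class of chains considered here is single-site Glauber dynamics on proper $k$-colorings, sketched below on a toy bipartite graph; the names and the example graph are illustrative.

```python
import random

def glauber_step(coloring, adj, k):
    """One step of single-site Glauber dynamics on proper k-colorings:
    pick a vertex and a color uniformly at random and recolor the vertex
    if no neighbor already uses that color."""
    v = random.choice(list(coloring))
    c = random.randrange(k)
    if all(coloring[u] != c for u in adj[v]):
        coloring[v] = c
    return coloring

# Toy bipartite example: a 6-cycle with k = 3 colors, started from a proper 2-coloring.
n, k = 6, 3
adj = {v: [(v - 1) % n, (v + 1) % n] for v in range(n)}
coloring = {v: v % 2 for v in range(n)}
for _ in range(1000):
    glauber_step(coloring, adj, k)
print(coloring)
```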


2017 ◽  
Vol. 18 no. 3 (Graph Theory) ◽
Author(s):  
Stefan Felsner ◽  
Daniel Heldt

We study Markov chains for $\alpha$-orientations of plane graphs; these are orientations where the outdegree of each vertex is prescribed by the value of a given function $\alpha$. The set of $\alpha$-orientations of a plane graph has a natural distributive lattice structure. The moves of the up-down Markov chain on this distributive lattice correspond to reversals of directed facial cycles in the $\alpha$-orientation. We have a positive and several negative results regarding the mixing time of such Markov chains. A 2-orientation of a plane quadrangulation is an orientation where every inner vertex has outdegree 2. We show that there is a class of plane quadrangulations such that the up-down Markov chain on the 2-orientations of these quadrangulations is slowly mixing. On the other hand, the chain is rapidly mixing on 2-orientations of quadrangulations with maximum degree at most 4. Regarding examples of slow mixing, we also revisit the case of 3-orientations of triangulations, which has been studied before by Miracle et al.; our examples for slow mixing are simpler and have a smaller maximum degree. Finally, we present the first example of a function $\alpha$ and a class of plane triangulations of constant maximum degree such that the up-down Markov chain on the $\alpha$-orientations of these graphs is slowly mixing.
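A minimal sketch of the up-down move, assuming the orientation is stored as a set of directed arcs and the facial cycle is supplied by the caller; reversing a directed cycle preserves all out-degrees, which is why the move stays inside the set of $\alpha$-orientations.

```python
def reverse_if_directed_cycle(arcs, cycle):
    """Up-down style move on an orientation stored as a set of directed arcs:
    if `cycle` (vertices read around a face) is consistently directed, reverse
    all of its arcs. Reversing a directed cycle changes no out-degree, so an
    alpha-orientation remains an alpha-orientation."""
    edges = list(zip(cycle, cycle[1:] + cycle[:1]))
    if all(e in arcs for e in edges):
        for u, v in edges:
            arcs.remove((u, v))
            arcs.add((v, u))
        return True
    return False

# Toy orientation: a directed triangle plus a pendant arc; out-degrees are preserved.
arcs = {(0, 1), (1, 2), (2, 0), (2, 3)}
outdeg_before = {v: sum(u == v for u, _ in arcs) for v in range(4)}
reverse_if_directed_cycle(arcs, [0, 1, 2])
outdeg_after = {v: sum(u == v for u, _ in arcs) for v in range(4)}
print(outdeg_before == outdeg_after)    # True
```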


2012 ◽  
Vol DMTCS Proceedings vol. AQ,... (Proceedings) ◽  
Author(s):  
Sarah Miracle ◽  
Dana Randall ◽  
Amanda Pascoe Streib ◽  
Prasad Tetali

Given a planar triangulation, a 3-orientation is an orientation of the internal edges so that all internal vertices have out-degree three. Each 3-orientation gives rise to a unique edge coloring known as a $\textit{Schnyder wood}$ that has proven useful for various computing and combinatorics applications. We consider natural Markov chains for sampling uniformly from the set of 3-orientations. First, we study a "triangle-reversing'' chain on the space of 3-orientations of a fixed triangulation that reverses the orientation of the edges around a triangle in each move. We show that (i) when restricted to planar triangulations of maximum degree six, the Markov chain is rapidly mixing, and (ii) there exists a triangulation with high degree on which this Markov chain mixes slowly. Next, we consider an "edge-flipping'' chain on the larger state space consisting of 3-orientations of all planar triangulations on a fixed number of vertices. It was shown previously that this chain connects the state space, and we prove that the chain is always rapidly mixing.
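To make the triangle-reversing move concrete, the sketch below checks the 3-orientation property and reverses a directed inner triangle on the octahedron (outer face 0, 1, 2; internal vertices 3, 4, 5), whose internal edges admit exactly such an orientation; the function names are illustrative.

```python
def is_3_orientation(arcs, internal_vertices):
    """Check the defining property: every internal vertex has out-degree
    exactly three among the oriented internal edges (outer edges stay
    unoriented and are not included in `arcs`)."""
    outdeg = {v: 0 for v in internal_vertices}
    for u, _ in arcs:
        if u in outdeg:
            outdeg[u] += 1
    return all(d == 3 for d in outdeg.values())

def reverse_triangle(arcs, triangle):
    """One move of the triangle-reversing chain: if the three edges of
    `triangle` form a directed cycle, flip them all; out-degrees, and hence
    the 3-orientation property, are preserved."""
    edges = list(zip(triangle, triangle[1:] + triangle[:1]))
    if all(e in arcs for e in edges):
        for u, v in edges:
            arcs.remove((u, v))
            arcs.add((v, u))
        return True
    return False

# Octahedron: outer face 0, 1, 2; internal vertices 3, 4, 5. The inner triangle
# 3 -> 4 -> 5 -> 3 is directed, so the chain may reverse it.
arcs = {(3, 4), (4, 5), (5, 3),
        (3, 0), (3, 1), (4, 1), (4, 2), (5, 2), (5, 0)}
print(is_3_orientation(arcs, {3, 4, 5}))    # True
print(reverse_triangle(arcs, [3, 4, 5]))    # True: the move was applied
print(is_3_orientation(arcs, {3, 4, 5}))    # still True
```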

