Switch-Based Markov Chains for Sampling Hamiltonian Cycles in Dense Graphs

10.37236/9503
2020
Vol 27 (4)
Author(s): Pieter Kleer, Viresh Patel, Fabian Stroh

We consider the irreducibility of switch-based Markov chains for the approximate uniform sampling of Hamiltonian cycles in a given undirected dense graph on $n$ vertices. As our main result, we show that every pair of Hamiltonian cycles in a graph with minimum degree at least $n/2+7$ can be transformed into each other by switch operations of size at most 10, implying that the switch Markov chain using switches of size at most 10 is irreducible. As a proof of concept, we also show that this Markov chain is rapidly mixing on dense monotone graphs.
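To make the switch operation concrete, here is a minimal Python sketch of the smallest such move, a size-2 switch (a segment reversal exchanging two cycle edges for two others); the paper's chain allows switches of size up to 10, and the names below (`try_switch`, `adj`) are illustrative, not taken from the paper.

```python
import random

def try_switch(cycle, adj):
    """One attempted size-2 switch on a Hamiltonian cycle.

    cycle: list of vertices; consecutive entries (cyclically) are adjacent
    in the graph. adj: dict mapping each vertex to its set of neighbours.
    Returns a new Hamiltonian cycle if the switch is valid, otherwise the
    unchanged cycle (the chain stays put).
    """
    n = len(cycle)
    i, j = sorted(random.sample(range(n), 2))
    if j - i < 2 or (i == 0 and j == n - 1):
        return cycle  # chosen positions share an edge; nothing to switch
    a, b = cycle[i], cycle[i + 1]        # first edge (a, b) to remove
    c, d = cycle[j], cycle[(j + 1) % n]  # second edge (c, d) to remove
    # Reversing the segment cycle[i+1 .. j] replaces edges (a, b) and
    # (c, d) with (a, c) and (b, d); accept only if both exist in the graph.
    if c in adj[a] and d in adj[b]:
        return cycle[:i + 1] + cycle[i + 1:j + 1][::-1] + cycle[j + 1:]
    return cycle
```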

2011
Vol 48 (4)
pp. 901-910
Author(s): Vladimir Ejov, Nelly Litvak, Giang T. Nguyen, Peter G. Taylor

We prove the conjecture formulated in Litvak and Ejov (2009), namely, that the trace of the fundamental matrix of a singularly perturbed Markov chain that corresponds to a stochastic policy feasible for a given graph is minimised at policies corresponding to Hamiltonian cycles.
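For orientation, the fundamental matrix of an ergodic chain with transition matrix $P$ and stationary distribution $\pi$ is the standard object below (the paper works with a singularly perturbed family of such chains, whose construction is not reproduced here):

$$Z = \bigl(I - P + \mathbf{1}\pi^{\top}\bigr)^{-1}, \qquad \operatorname{tr}(Z) = 1 + \sum_{i=2}^{n} \frac{1}{1-\lambda_i},$$

where $\lambda_2, \dots, \lambda_n$ are the eigenvalues of $P$ other than 1. Minimising the trace over feasible policies is thus a spectral criterion, which the result shows is optimised exactly at Hamiltonian cycles.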


Author(s): Topi Talvitie, Teppo Niinimäki, Mikko Koivisto

We investigate almost uniform sampling from the set of linear extensions of a given partial order. The most efficient schemes stem from Markov chains whose mixing-time bounds are polynomial, yet impractically large. We show that, on instances one encounters in practice, the actual mixing times can be much smaller than the worst-case bounds, particularly so for a novel Markov chain we put forward. We circumvent the inherent hardness of estimating standard mixing times by introducing a refined notion, which admits estimation for moderate-size partial orders. Our empirical results suggest that the Markov chain approach to sampling linear extensions can be made to scale well in practice, provided that the actual mixing times can be realized by instance-sensitive upper bounds or termination rules. Examples of the latter include existing perfect simulation algorithms, whose running times in our experiments track the actual mixing times of certain chains, albeit with significant overhead.
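As a concrete baseline, here is a minimal Python sketch of the classical adjacent-transposition chain on linear extensions; the paper's novel chain and its refined mixing-time notion are not reproduced here, and `less_than` is an illustrative stand-in for the partial-order relation.

```python
import random

def step(ext, less_than):
    """One step of the adjacent-transposition chain on linear extensions.

    ext: a list of the poset's elements in an order consistent with the
    partial order. less_than(x, y): True iff x < y in the partial order.
    Picks a random adjacent pair and swaps it when the result is still a
    linear extension, i.e. when the pair is incomparable.
    """
    i = random.randrange(len(ext) - 1)
    x, y = ext[i], ext[i + 1]
    # Since ext is a linear extension, y < x cannot hold for an adjacent
    # pair, so "not x < y" means x and y are incomparable and may swap.
    if not less_than(x, y):
        ext[i], ext[i + 1] = y, x
    return ext
```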


1990
Vol 27 (3)
pp. 545-556
Author(s): S. Kalpazidou

The asymptotic behaviour of the sequence $(\mathcal{C}_n(\omega), w_{c,n}(\omega)/n)$ is studied, where $\mathcal{C}_n(\omega)$ is the class of all cycles $c$ occurring along the trajectory $\omega$ of a recurrent strictly stationary Markov chain $(\xi_n)$ until time $n$, and $w_{c,n}(\omega)$ is the number of occurrences of the cycle $c$ until time $n$. This sequence of sample weighted classes converges almost surely to a class of directed weighted cycles $(\mathcal{C}_\infty, w_c)$ which represents the chain $(\xi_n)$ uniquely as a circuit chain, and $w_c$ is given a probabilistic interpretation.
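To illustrate the objects $\mathcal{C}_n(\omega)$ and $w_{c,n}(\omega)$, here is a small Python sketch tracing the cycles completed along a finite trajectory using the usual loop-erasure convention; the paper's exact bookkeeping may differ, e.g. in how rotations of a cycle are identified.

```python
from collections import Counter

def cycle_counts(trajectory):
    """Count the directed cycles completed along a finite trajectory.

    Maintains a stack of distinct states; when the next state closes a
    loop, the cycle is recorded and erased from the stack. Each cycle is
    keyed by the rotation starting at its earliest stacked state.
    """
    stack, counts = [], Counter()
    for state in trajectory:
        if state in stack:
            i = stack.index(state)
            counts[tuple(stack[i:])] += 1  # completed cycle stack[i..end]
            del stack[i + 1:]              # erase the loop, keep its base point
        else:
            stack.append(state)
    return counts
```

For a long trajectory of length $n$, the normalised count of a cycle $c$ plays the role of $w_{c,n}(\omega)/n$, which by the theorem converges almost surely to the limiting weight $w_c$.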


2021
Vol 0 (0)
Author(s): Nikolaos Halidias

In this note we study the probability of and the mean time to absorption for discrete-time Markov chains. In particular, we are interested in estimating the mean time to absorption when absorption is not certain, and in connecting it with other known results. By computing a suitable probability generating function, we are able to estimate the mean time to absorption when absorption is not certain, giving some applications concerning the random walk. Furthermore, we investigate the probability that a Markov chain reaches a set $A$ before a set $B$, and generalize this result to a sequence of sets $A_1, A_2, \dots, A_k$.
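For contrast with the note's setting, the finite-state case in which absorption is certain admits textbook closed forms; the numerical sketch below uses illustrative numbers only, while the note's generating-function estimates target chains, such as the random walk, where these formulas do not directly apply.

```python
import numpy as np

# Transition matrix in canonical block form: Q (transient -> transient),
# R (transient -> absorbing); the entries are illustrative only.
Q = np.array([[0.4, 0.3],
              [0.2, 0.5]])
R = np.array([[0.3],
              [0.3]])

N = np.linalg.inv(np.eye(2) - Q)  # fundamental matrix, sum of Q^k
absorption_prob = N @ R           # probability of eventual absorption
mean_time = N @ np.ones(2)        # expected steps to absorption per state

print(absorption_prob.ravel(), mean_time)
```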


2021
Author(s): Andrea Marin, Carla Piazza, Sabina Rossi

In this paper, we deal with the lumpability approach to coping with the state-space explosion problem inherent in computing the stationary performance indices of large stochastic models. The lumpability method is based on a state-aggregation technique and applies to Markov chains exhibiting some structural regularity. Moreover, it allows one to compute the exact values of the stationary performance indices efficiently when the model is actually lumpable. The notion of quasi-lumpability is based on the idea that a Markov chain can be altered by relatively small perturbations of the transition rates so that the resulting Markov chain is lumpable. In this case, only upper and lower bounds on the performance indices can be derived. Here, we introduce a novel notion of quasi-lumpability, named proportional lumpability, which extends the original definition of lumpability but, unlike general quasi-lumpability, allows one to derive exact stationary performance indices for the original process. We then introduce the notion of proportional bisimilarity for the terms of the performance process algebra PEPA. Proportional bisimilarity induces a proportional lumpability on the underlying continuous-time Markov chains. Finally, we prove some compositionality results and show the applicability of our theory through examples.
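As a point of reference, the classical (strong) lumpability condition that proportional lumpability relaxes can be checked directly; the Python sketch below tests it for a CTMC generator. The proportional variant defined in the paper would compare aggregate rates up to a per-state constant and is not reproduced here.

```python
import numpy as np

def is_strongly_lumpable(Q, partition, tol=1e-9):
    """Check strong lumpability of a CTMC generator Q w.r.t. a partition.

    partition: list of lists of state indices. Q is lumpable iff, for all
    distinct blocks B and C, the aggregate rate sum(Q[i, j] for j in C)
    is the same for every state i in B.
    """
    for B in partition:
        for C in partition:
            if B is C:
                continue
            rates = [Q[i, C].sum() for i in B]
            if max(rates) - min(rates) > tol:
                return False
    return True
```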


2004
Vol 2004 (8)
pp. 421-429
Author(s): Souad Assoudou, Belkheir Essebbar

This note is concerned with Bayesian estimation of the transition probabilities of a binary Markov chain observed from heterogeneous individuals. The model is based on Jeffreys' prior, which allows the transition probabilities to be correlated. The Bayesian estimator is approximated by means of Markov chain Monte Carlo (MCMC) techniques. The performance of the Bayesian estimates is illustrated by analyzing a small simulated data set.
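To fix ideas, here is a simplified Python sketch under independent Jeffreys Beta(1/2, 1/2) priors on each row, for which the posterior is conjugate and can be sampled directly; the paper's Jeffreys prior couples the transition probabilities, which is what necessitates MCMC there.

```python
import numpy as np

rng = np.random.default_rng(0)

def posterior_draws(chain, n_draws=1000):
    """Posterior draws for a binary chain's transition probabilities.

    Simplification: independent Beta(1/2, 1/2) (Jeffreys) priors on
    p01 = P(0 -> 1) and p10 = P(1 -> 0) give conjugate Beta posteriors,
    so no MCMC is needed in this reduced setting.
    """
    counts = np.zeros((2, 2))
    for a, b in zip(chain, chain[1:]):
        counts[a, b] += 1  # tally observed transitions a -> b
    p01 = rng.beta(0.5 + counts[0, 1], 0.5 + counts[0, 0], n_draws)
    p10 = rng.beta(0.5 + counts[1, 0], 0.5 + counts[1, 1], n_draws)
    return p01, p10
```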


Author(s): Peter L. Chesson

Random transition probability matrices with stationary independent factors define "white noise" environment processes for Markov chains. Two examples are considered in detail. Such environment processes can be used to construct several Markov chains which are dependent, have the same transition probabilities and are jointly a Markov chain. Transition rates for such processes are evaluated. These results have application to the study of animal movements.
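A minimal Python sketch of such an environment process, redrawing an independent random transition matrix at every step; the Dirichlet distribution below is an illustrative choice, not the paper's construction.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate(n_steps, n_states=2, start=0):
    """Simulate a chain in a "white noise" environment: an independent
    random transition matrix (here with Dirichlet(1, ..., 1) rows) is
    drawn afresh at every step and used for that one transition.
    """
    state, path = start, [start]
    for _ in range(n_steps):
        P = rng.dirichlet(np.ones(n_states), size=n_states)
        state = rng.choice(n_states, p=P[state])
        path.append(state)
    return path
```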


1981
Vol 13 (2)
pp. 369-387
Author(s): Richard D. Bourgin, Robert Cogburn

The general framework of a Markov chain in a random environment is presented and the problem of determining extinction probabilities is discussed. An efficient method for determining absorption probabilities, and criteria for certain absorption, are presented in the case that the environmental process is a two-state Markov chain. These results are then applied to birth-and-death, queueing, and branching chains in random environments.
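As an illustration of the setting (not of the paper's analytic method), here is a Monte Carlo sketch estimating the extinction probability of a birth-and-death chain driven by a two-state Markov environment; all rates and the finite horizon are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)

def extinction_estimate(n_runs=10_000, horizon=500, start=5):
    """Estimate P(extinction) for a birth-and-death chain whose step
    probabilities depend on a two-state Markov environment. The finite
    horizon truncates each run, so this slightly underestimates.
    """
    env_P = np.array([[0.9, 0.1],   # environment transition matrix
                      [0.2, 0.8]])
    p_up = [0.45, 0.55]             # birth probability in each environment
    extinct = 0
    for _ in range(n_runs):
        x, env = start, 0
        for _ in range(horizon):
            env = rng.choice(2, p=env_P[env])
            x += 1 if rng.random() < p_up[env] else -1
            if x == 0:
                extinct += 1
                break
    return extinct / n_runs
```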

