Mixing times of Markov chains on 3-Orientations of Planar Triangulations

2012 ◽  
DMTCS Proceedings vol. AQ,... (Proceedings) ◽  
Author(s):  
Sarah Miracle ◽  
Dana Randall ◽  
Amanda Pascoe Streib ◽  
Prasad Tetali

Given a planar triangulation, a 3-orientation is an orientation of the internal edges so that all internal vertices have out-degree three. Each 3-orientation gives rise to a unique edge coloring known as a $\textit{Schnyder wood}$ that has proven useful for various computing and combinatorics applications. We consider natural Markov chains for sampling uniformly from the set of 3-orientations. First, we study a "triangle-reversing" chain on the space of 3-orientations of a fixed triangulation that reverses the orientation of the edges around a triangle in each move. We show that (i) when restricted to planar triangulations of maximum degree six, the Markov chain is rapidly mixing, and (ii) there exists a triangulation with high degree on which this Markov chain mixes slowly. Next, we consider an "edge-flipping" chain on the larger state space consisting of 3-orientations of all planar triangulations on a fixed number of vertices. It was shown previously that this chain connects the state space, and we prove that it is always rapidly mixing.
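The triangle-reversing move is simple to state in code. The sketch below is our own minimal illustration (not the authors' implementation), storing an orientation as a set of directed edge pairs; a move flips a cyclically oriented triangle and otherwise rejects, which preserves every out-degree:

```python
def cyclic_orientation(edges, tri):
    """Return the directed 3-cycle on triangle `tri` if its three
    edges form one in `edges`, else None."""
    a, b, c = tri
    for cycle in ({(a, b), (b, c), (c, a)}, {(a, c), (c, b), (b, a)}):
        if cycle <= edges:
            return cycle
    return None

def reverse_triangle(edges, tri):
    """One move of a triangle-reversing chain: if the triangle is
    cyclically oriented, flip all three of its edges; otherwise stay.
    Each vertex of the triangle loses one outgoing edge and gains one,
    so all out-degrees (the 3-orientation property) are preserved."""
    cycle = cyclic_orientation(edges, tri)
    if cycle is None:
        return edges  # move rejected; chain stays at current state
    return (edges - cycle) | {(v, u) for (u, v) in cycle}

def out_degrees(edges):
    deg = {}
    for u, _ in edges:
        deg[u] = deg.get(u, 0) + 1
    return deg
```

Because a reversed triangle trades one outgoing edge for another at each of its three vertices, the out-degree-three condition defining a 3-orientation is invariant under the move.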

2010 ◽  
DMTCS Proceedings vol. AM,... (Proceedings) ◽  
Author(s):  
Lucas Gerin

We build and analyze in this paper Markov chains for the random sampling of one-dimensional lattice paths satisfying various constraints. These chains are easy to implement, and sample an "almost" uniform path of length $n$ in $n^{3+\epsilon}$ steps. This bound makes use of a certain $\textit{contraction property}$ of the Markov chain, and is proved with an approach inspired by optimal transport.
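A minimal sketch of this style of local-move chain (our own illustration, not necessarily Gerin's exact chain): sample Dyck-type paths of ±1 steps with fixed endpoints and non-negative prefix sums, using adjacent-transposition moves that are rejected when they break the constraint.

```python
import random

def valid(path):
    """Non-negative prefix sums: the path stays weakly above the axis."""
    h = 0
    for s in path:
        h += s
        if h < 0:
            return False
    return True

def step(path, rng):
    """One move: pick a random adjacent pair of steps and try to swap
    them (this preserves the endpoint); reject if the swapped path
    violates the non-negativity constraint."""
    i = rng.randrange(len(path) - 1)
    if path[i] == path[i + 1]:
        return path  # swapping equal steps does nothing
    cand = path[:i] + [path[i + 1], path[i]] + path[i + 2:]
    return cand if valid(cand) else path

def sample(n, iters, seed=0):
    """Run the chain `iters` steps from the staircase path."""
    rng = random.Random(seed)
    path = [1] * n + [-1] * n
    for _ in range(iters):
        path = step(path, rng)
    return path
```

Every move preserves the path length, the endpoint, and the constraint, so the chain stays inside the state space by construction.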


1976 ◽  
Vol 8 (04) ◽  
pp. 737-771 ◽  
Author(s):  
R. L. Tweedie

The aim of this paper is to present a comprehensive set of criteria for classifying as recurrent, transient, null or positive the sets visited by a general state space Markov chain. When the chain is irreducible in some sense, these then provide criteria for classifying the chain itself, provided the sets considered actually reflect the status of the chain as a whole. The first part of the paper is concerned with the connections between various definitions of recurrence, transience, nullity and positivity for sets and for irreducible chains; here we also elaborate the idea of status sets for irreducible chains. In the second part we give our criteria for classifying sets. When the state space is countable, our results for recurrence, transience and positivity reduce to the classical work of Foster (1953); for continuous-valued chains they extend results of Lamperti (1960), (1963); for general spaces the positivity and recurrence criteria strengthen those of Tweedie (1975b).


1985 ◽  
Vol 22 (01) ◽  
pp. 138-147 ◽  
Author(s):  
Wojciech Szpankowski

Some sufficient conditions for non-ergodicity are given for a Markov chain with denumerable state space. These conditions generalize Foster's results, in that unbounded Lyapunov functions are considered. Our criteria directly extend the conditions obtained in Kaplan (1979), in the sense that a class of Lyapunov functions is studied. Applications are presented through some examples; in particular, sufficient conditions for non-ergodicity of a multidimensional Markov chain are given.
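A concrete instance of the drift idea behind such non-ergodicity criteria (our illustrative numbers, not the paper's examples): for a reflected random walk with upward bias, the unbounded Lyapunov function $V(x) = x$ has drift bounded away from zero from below on the whole of $\{1, 2, \ldots\}$, the kind of condition that rules out ergodicity.

```python
def drift(x, p=0.6, q=0.3):
    """Mean one-step change of V(x) = x for a walk on the non-negative
    integers that moves up with probability p, down with probability q
    (no down-move at 0), and stays put otherwise.  With p > q the
    drift is >= p - q > 0 for every x >= 1."""
    down = q if x > 0 else 0.0
    return p * 1 + down * (-1)
```

Here the drift equals $p - q = 0.3$ at every state $x \ge 1$, so no finite exceptional set can absorb the positive drift.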


1990 ◽  
Vol 4 (1) ◽  
pp. 89-116 ◽  
Author(s):  
Ushio Sumita ◽  
Maria Rieders

A novel algorithm is developed which computes the ergodic probability vector for large Markov chains. Decomposing the state space into lumps, the algorithm generates a replacement process on each lump, where any exit from a lump is instantaneously replaced at some state in that lump. The replacement distributions are constructed recursively in such a way that, in the limit, the ergodic probability vector for a replacement process on one lump will be proportional to the ergodic probability vector of the original Markov chain restricted to that lump. Inverse matrices computed in the algorithm are of size (M – 1), where M is the number of lumps, thereby providing a substantial rank reduction. When a special structure is present, the procedure for generating the replacement distributions can be simplified. The relevance of the new algorithm to the aggregation-disaggregation algorithm of Takahashi [29] is also discussed.
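The proportionality property that drives this kind of algorithm can be checked directly on a toy chain (our own numbers, and a textbook censored-chain construction rather than the authors' replacement processes): the chain watched only on a lump $A$, with transition matrix $P_{AA} + P_{AB}(I - P_{BB})^{-1}P_{BA}$, has a stationary vector proportional to the original ergodic vector restricted to $A$.

```python
def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def inv2(M):
    """Explicit inverse of a 2x2 matrix."""
    (a, b), (c, d) = M
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def stationary(P, iters=2000):
    """Stationary row vector by power iteration (P assumed ergodic)."""
    n = len(P)
    v = [1.0 / n] * n
    for _ in range(iters):
        v = [sum(v[i] * P[i][j] for i in range(n)) for j in range(n)]
    return v

# A hand-built ergodic 4-state chain, lumped into A = {0,1}, B = {2,3}.
P = [[0.5, 0.2, 0.2, 0.1],
     [0.1, 0.6, 0.1, 0.2],
     [0.3, 0.1, 0.4, 0.2],
     [0.2, 0.2, 0.3, 0.3]]
A, B = [0, 1], [2, 3]
PAA = [[P[i][j] for j in A] for i in A]
PAB = [[P[i][j] for j in B] for i in A]
PBA = [[P[i][j] for j in A] for i in B]
PBB = [[P[i][j] for j in B] for i in B]
N = inv2([[(1.0 if i == j else 0.0) - PBB[i][j] for j in range(2)]
          for i in range(2)])
corr = mat_mul(mat_mul(PAB, N), PBA)
S = [[PAA[i][j] + corr[i][j] for j in range(2)] for i in range(2)]
```

The stationary vector of the censored chain `S` and the full stationary vector restricted to `A` agree up to normalization, which is the limit behaviour the replacement-process recursion is built to reach.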



1984 ◽  
Vol 21 (03) ◽  
pp. 567-574 ◽  
Author(s):  
Atef M. Abdel-Moneim ◽  
Frederick W. Leysieffer

Conditions under which a function of a finite, discrete-time Markov chain, X(t), is again Markov are given when X(t) is not irreducible. These conditions are stated in terms of an interrelationship between two partitions of the state space of X(t): the partition induced by the minimal essential classes of X(t), and the partition with respect to which lumping is to be considered.
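The classical (Kemeny–Snell) strong-lumpability condition underlying such questions is that, for every pair of blocks, the one-step probability of jumping into a block is the same from every state of the other block. A minimal checker (partition and matrices below are our own toy data, not the paper's):

```python
def is_lumpable(P, partition, tol=1e-12):
    """Check strong lumpability: for each pair of blocks (B, C), the
    row sum of P into C must be identical for all states i in B."""
    for block in partition:
        for target in partition:
            sums = [sum(P[i][j] for j in target) for i in block]
            if max(sums) - min(sums) > tol:
                return False
    return True
```

When the condition holds, the lumped process on the blocks is itself a Markov chain; the paper's contribution concerns how this interacts with the essential-class structure when X(t) is not irreducible.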


1982 ◽  
Vol 19 (02) ◽  
pp. 272-288 ◽  
Author(s):  
P. J. Brockwell ◽  
S. I. Resnick ◽  
N. Pacheco-Santiago

A study is made of the maximum, minimum and range on $[0, t]$ of the integral process $\int_0^t f(S(u))\,\mathrm{d}u$, where $S$ is a finite state-space Markov chain. Approximate results are derived by establishing weak convergence of a sequence of such processes to a Wiener process. For a particular family of two-state stationary Markov chains we show that the corresponding centered integral processes exhibit the Hurst phenomenon to a remarkable degree in their pre-asymptotic behaviour.
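A discretized simulation makes the objects concrete (our own sketch and parameter choices, not the paper's setup): accumulate the centered sum of $f(S_k)$ for a two-state chain and track its running maximum, minimum, and range.

```python
import random

def integral_range(t_steps, p01=0.3, p10=0.2, seed=1):
    """Simulate a two-state chain S in {0, 1}, accumulate the centered
    sum of f(S_k) with f(0) = -1, f(1) = 1 (a discrete stand-in for
    the centered integral process), and track its running extremes."""
    rng = random.Random(seed)
    pi1 = p01 / (p01 + p10)          # stationary mass of state 1
    mean_f = 2.0 * pi1 - 1.0         # stationary mean of f
    s, y, lo, hi = 0, 0.0, 0.0, 0.0
    for _ in range(t_steps):
        f = 1.0 if s == 1 else -1.0
        y += f - mean_f
        lo, hi = min(lo, y), max(hi, y)
        flip = p01 if s == 0 else p10
        s = (1 - s) if rng.random() < flip else s
    return lo, hi, hi - lo
```

The range `hi - lo` over windows of increasing length is the quantity whose pre-asymptotic scaling exhibits the Hurst phenomenon discussed in the abstract.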


2000 ◽  
Vol 37 (03) ◽  
pp. 795-806 ◽  
Author(s):  
Laurent Truffet

We propose in this paper two methods to compute Markovian bounds for monotone functions of a discrete-time homogeneous Markov chain evolving in a totally ordered state space. The main interest of such methods is to provide algorithms that simplify the analysis of transient characteristics such as the output process of a queue, or the sojourn time in a subset of states. Construction of the bounds is based on two kinds of results: well-known results on stochastic comparison between Markov chains with the same state space; and the fact that in some cases a function of a Markov chain is again a homogeneous Markov chain but with a smaller state space. Indeed, computation of the bounds uses knowledge of the whole initial model. However, only part of this data is necessary at each step of the algorithms.
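One standard construction for such Markovian bounds (the st-monotone upper-bounding scheme often attributed to Abu-Amsha and Vincent; a sketch of that general technique, not necessarily the paper's exact algorithm) forces the tail sums of each row to dominate both the corresponding row of $P$ and the previous row of the bound:

```python
def st_monotone_upper(P):
    """Build Q such that every row of Q stochastically (<=_st)
    dominates the same row of P, and the rows of Q are themselves
    stochastically monotone (tail sums non-decreasing in the row)."""
    n = len(P)
    Q = []
    prev_tail = [0.0] * (n + 1)
    for i in range(n):
        tail = [0.0] * (n + 1)
        acc = 0.0
        for j in range(n - 1, -1, -1):
            acc += P[i][j]
            # dominate both this row of P and the previous row of Q
            tail[j] = min(1.0, max(acc, prev_tail[j]))
        Q.append([tail[j] - tail[j + 1] for j in range(n)])
        prev_tail = tail
    return Q
```

Because the resulting matrix is st-monotone and row-wise dominates $P$, the bounding chain stochastically dominates the original at every time step, which is what makes transient bounds computable from it.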


1974 ◽  
Vol 11 (4) ◽  
pp. 726-741 ◽  
Author(s):  
Richard L. Tweedie

The quasi-stationary behaviour of a Markov chain which is φ-irreducible when restricted to a subspace of a general state space is investigated. It is shown that previous work on the case where the subspace is finite or countably infinite can be extended to general chains, and the existence of certain quasi-stationary limits as honest distributions is equivalent to the restricted chain being R-positive with the unique R-invariant measure satisfying a certain finiteness condition.
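Numerically, a quasi-stationary distribution on a finite subspace is the normalized left Perron eigenvector of the substochastic restriction of the kernel (the chain killed on exit). A minimal sketch via renormalized power iteration (the matrix below is our own toy example):

```python
def qsd(Q, iters=2000):
    """Quasi-stationary distribution of a substochastic matrix Q
    (restriction of a killed chain to the surviving subspace), via
    renormalized power iteration: v <- vQ / ||vQ||_1 converges to the
    normalized left Perron eigenvector."""
    n = len(Q)
    v = [1.0 / n] * n
    for _ in range(iters):
        w = [sum(v[i] * Q[i][j] for i in range(n)) for j in range(n)]
        s = sum(w)
        v = [x / s for x in w]
    return v
```

The renormalization at each step plays the role of conditioning on survival; the fixed point is the honest limiting distribution whose existence the paper ties to R-positivity of the restricted chain.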


1981 ◽  
Vol 18 (1) ◽  
pp. 112-121 ◽  
Author(s):  
Zvi Rosberg

For an aperiodic, irreducible Markov chain with the non-negative integers as state space, a criterion for ergodicity is given. This criterion generalizes the criteria appearing in Foster (1953), Pakes (1969) and Marlin (1973), in the sense that any test function (Liapunov function) which satisfies their conditions also satisfies ours. Applications are presented through some examples.
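In its simplest form, a Foster-type criterion asks for a test function whose drift is uniformly negative outside a finite set. A small checker on a finite truncation of a birth-death chain (our own illustration, not the paper's criterion verbatim):

```python
def foster_holds(P, V, finite_set, eps):
    """Check the Foster condition on a finite matrix P (standing in
    for a truncated countable chain): the drift of V must be <= -eps
    at every state outside `finite_set`."""
    n = len(P)
    for x in range(n):
        if x in finite_set:
            continue
        d = sum(P[x][y] * (V(y) - V(x)) for y in range(n))
        if d > -eps:
            return False
    return True

# Downward-biased birth-death chain: up 0.3, down 0.6, stay 0.1.
P = [[0.7, 0.3, 0.0, 0.0, 0.0, 0.0],
     [0.6, 0.1, 0.3, 0.0, 0.0, 0.0],
     [0.0, 0.6, 0.1, 0.3, 0.0, 0.0],
     [0.0, 0.0, 0.6, 0.1, 0.3, 0.0],
     [0.0, 0.0, 0.0, 0.6, 0.1, 0.3],
     [0.0, 0.0, 0.0, 0.0, 0.6, 0.4]]
```

With $V(x) = x$ the drift is $-0.3$ at every interior state, so the condition holds once the single boundary state is excluded, which is exactly the shape of hypothesis the cited criteria generalize.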

