Geometric Approaches to the Estimation of the Spectral Gap of Reversible Markov Chains

1993 · Vol. 2 (3) · pp. 301–323
Author(s): Salvatore Ingrassia

In this paper we consider the problem of estimating the spectral gap of a reversible Markov chain in terms of geometric quantities associated with the underlying graph. The spectral gap bounds the rate of convergence of the Markov chain towards its stationary distribution. We give a critical and systematic treatment of this subject, summarizing and comparing the results of the two main approaches in the literature, the algebraic and the functional. The usefulness and drawbacks of these bounds are also discussed.
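
As a small numerical illustration of the quantity in question (a sketch on an assumed four-state birth-death chain, not the paper's method), the spectral gap of a reversible transition matrix can be computed by symmetrizing the matrix with its stationary distribution:

```python
import numpy as np

# Example: a lazy random walk on a path with 4 states; birth-death chains
# like this one are reversible with respect to their stationary distribution.
P = np.array([
    [0.50, 0.50, 0.00, 0.00],
    [0.25, 0.50, 0.25, 0.00],
    [0.00, 0.25, 0.50, 0.25],
    [0.00, 0.00, 0.50, 0.50],
])

# Stationary distribution: the left eigenvector of P for eigenvalue 1.
w, v = np.linalg.eig(P.T)
pi = np.real(v[:, np.argmin(np.abs(w - 1))])
pi /= pi.sum()

# For a reversible chain, D^{1/2} P D^{-1/2} (D = diag(pi)) is symmetric
# and shares P's eigenvalues, so a symmetric eigensolver applies.
d = np.sqrt(pi)
S = d[:, None] * P / d[None, :]
eigs = np.sort(np.linalg.eigvalsh((S + S.T) / 2))[::-1]

gap = 1.0 - eigs[1]  # spectral gap = 1 minus the second-largest eigenvalue
print("spectral gap:", gap)
# Ignoring possible negative eigenvalues, the distance to stationarity decays
# roughly like (1 - gap)^t, so about log(1/eps) / gap steps reach accuracy eps.
```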

1979 · Vol. 16 (1) · pp. 226–229
Author(s): P. Suomela

An explicit formula for an invariant measure of a time-reversible Markov chain is presented. It is based on a characterization of time reversibility in terms of the transition probabilities alone.
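
For intuition, here is a minimal sketch in the spirit of that result (the detailed-balance route on an assumed birth-death chain, not necessarily the paper's exact formula): an invariant measure is built from the transition probabilities alone by multiplying the ratios p(i, j)/p(j, i) along a path from a reference state.

```python
import numpy as np

# Same birth-death example as above; detailed balance pi_i P_ij = pi_j P_ji
# determines an invariant measure, up to a constant, from the transition
# probabilities alone.
P = np.array([
    [0.50, 0.50, 0.00, 0.00],
    [0.25, 0.50, 0.25, 0.00],
    [0.00, 0.25, 0.50, 0.25],
    [0.00, 0.00, 0.50, 0.50],
])

n = len(P)
mu = np.ones(n)
# Multiply ratios P(j-1, j) / P(j, j-1) along the path 0 -> 1 -> ... -> n-1.
for j in range(1, n):
    mu[j] = mu[j - 1] * P[j - 1, j] / P[j, j - 1]

pi = mu / mu.sum()  # normalize to a probability distribution
print("invariant measure:", pi)
print("pi P == pi:", np.allclose(pi @ P, pi))
```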


Author(s): Florence Merlevède, Magda Peligrad, Sergey Utev

This chapter is dedicated to the Gaussian approximation of a reversible Markov chain. For this problem, the coefficients of dependence of a reversible Markov chain are simply the covariances between the variables. We present the traditional form of the martingale approximation, including forward and backward martingale approximations. Special attention is given to maximal inequalities, which are the building blocks of the functional limit theorems. When the covariances are summable, we present the functional central limit theorem under the standard normalization √n. When the variance of the partial sums is regularly varying in n, we present the functional CLT normalized by the standard deviation of the partial sums. Applications are given to the Metropolis–Hastings algorithm.
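
As a concrete instance of the kind of reversible chain these results apply to (an illustrative sketch, not the chapter's construction), a Metropolis–Hastings sampler with a symmetric proposal is reversible with respect to its target distribution:

```python
import numpy as np

rng = np.random.default_rng(0)

def log_target(x):
    # Log of an unnormalized N(0, 1) density, a stand-in target.
    return -0.5 * x * x

def metropolis(n_steps, step=1.0, x0=0.0):
    x, out = x0, np.empty(n_steps)
    for t in range(n_steps):
        y = x + step * rng.normal()  # symmetric random-walk proposal
        # Accept with probability min(1, target(y)/target(x)); this rule
        # makes the chain reversible with respect to the target.
        if np.log(rng.uniform()) < log_target(y) - log_target(x):
            x = y
        out[t] = x
    return out

chain = metropolis(100_000)
print("sample mean and variance:", chain.mean(), chain.var())  # ~0 and ~1
# Partial sums of functionals of this chain obey the CLT behaviour described
# above when the autocovariances are summable.
```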


1995 · Vol. 9 (2) · pp. 227–237
Author(s): Taizhong Hu, Harry Joe

Let (X1, X2) and (Y1, Y2) be bivariate random vectors with a common marginal distribution. (X1, X2) is said to be more positively dependent than (Y1, Y2) if E[h(X1)h(X2)] ≥ E[h(Y1)h(Y2)] for all functions h for which the expectations exist. The purpose of this paper is to study the monotonicity of positive dependence with time for a stationary reversible Markov chain {Xt}; that is, (Xs, Xt+s) becomes less positively dependent as t increases. Both discrete and continuous time are considered, and the state space may be a denumerable set or a subset of the real line. Some examples are given to show that the assertions established for reversible Markov chains are not true for nonreversible chains.
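
A quick numerical check of the asserted monotonicity (an illustration on an assumed birth-death chain, not the paper's proof) computes E[h(X0)h(Xt)] exactly for increasing t:

```python
import numpy as np

# Reversible birth-death chain as before; under stationarity,
# E[h(X_0) h(X_t)] = sum_i pi_i h(i) (P^t h)(i), computed exactly below.
P = np.array([
    [0.50, 0.50, 0.00, 0.00],
    [0.25, 0.50, 0.25, 0.00],
    [0.00, 0.25, 0.50, 0.25],
    [0.00, 0.00, 0.50, 0.50],
])
pi = np.array([1.0, 2.0, 2.0, 1.0]) / 6.0  # stationary distribution of P
h = np.array([0.0, 1.0, 2.0, 3.0])         # an arbitrary function on states

v = h.copy()
for t in range(8):
    # For this chain all eigenvalues of P are nonnegative, so the sequence
    # printed below is non-increasing in t, as the result asserts.
    print(t, float(pi @ (h * v)))  # E[h(X_0) h(X_t)]
    v = P @ v                      # advance: v = P^t h
```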


1968 · Vol. 5 (2) · pp. 401–413
Author(s): Paul J. Schweitzer

A perturbation formalism is presented which shows how the stationary distribution and fundamental matrix of a Markov chain containing a single irreducible set of states change as the transition probabilities vary. Expressions are given for the partial derivatives of the stationary distribution and the fundamental matrix with respect to the transition probabilities. Semi-group properties of the generators of transformations from one Markov chain to another are investigated. It is shown that a perturbation formalism exists in the multiple-subchain case if and only if the change in the transition probabilities neither alters the number of subchains nor intermixes them. The formalism is presented for the case where this condition is satisfied.
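
A minimal sketch of the first-order part of such a formalism, using the standard fundamental-matrix identity dπ = πEZ with Z = (I - P + 1π)^(-1) (the example matrices below are assumptions):

```python
import numpy as np

# First-order sensitivity of the stationary distribution: with fundamental
# matrix Z = (I - P + 1 pi)^{-1}, a perturbation P -> P + E (each row of E
# summing to zero) shifts pi by approximately pi E Z.
P = np.array([
    [0.50, 0.50, 0.00],
    [0.25, 0.50, 0.25],
    [0.00, 0.50, 0.50],
])
n = len(P)

def stationary(M):
    w, v = np.linalg.eig(M.T)
    p = np.real(v[:, np.argmin(np.abs(w - 1))])
    return p / p.sum()

pi = stationary(P)
Z = np.linalg.inv(np.eye(n) - P + np.outer(np.ones(n), pi))  # fundamental matrix

E = 1e-3 * np.array([  # a small perturbation; each row sums to zero
    [-1.0, 1.0, 0.0],
    [0.0, 0.0, 0.0],
    [0.0, 1.0, -1.0],
])

print("first-order prediction:", pi + pi @ E @ Z)
print("exact perturbed pi:    ", stationary(P + E))
```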


1994 · Vol. 26 (3) · pp. 756–774
Author(s): Dimitris N. Politis

A generalization of the notion of a stationary Markov chain in more than one dimension is proposed, and is found to be a special class of homogeneous Markov random fields. Stationary Markov chains in many dimensions are shown to possess a maximum entropy property, analogous to the corresponding property for Markov chains in one dimension. In addition, a representation of Markov chains in many dimensions is provided, together with a method for their generation that converges to their stationary distribution.


1983 · Vol. 20 (1) · pp. 191–196
Author(s): R. L. Tweedie

We give conditions under which the stationary distribution π of a Markov chain admits moments of the general form ∫ f(x)π(dx), where f is a general function; specific examples include f(x) = x^r and f(x) = e^(sx). In general the time-dependent moments of the chain then converge to the stationary moments. We show that in special cases this convergence of moments occurs at a geometric rate. The results are applied to random walk on [0, ∞).
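
As a numerical illustration (an assumed reflected random walk on a finite range, not the paper's general setting), the time-dependent moment E[X_t^2] approaches its stationary value at a visibly geometric rate:

```python
import numpy as np

# Reflected random walk on {0, ..., N} with down-bias (negative drift), so a
# stationary distribution exists; track the error in the moment E[X_t^2].
N, p = 30, 0.3
P = np.zeros((N + 1, N + 1))
for i in range(N + 1):
    P[i, min(i + 1, N)] += p       # step up, reflected at N
    P[i, max(i - 1, 0)] += 1 - p   # step down, reflected at 0

def stationary(M):
    w, v = np.linalg.eig(M.T)
    q = np.real(v[:, np.argmin(np.abs(w - 1))])
    return q / q.sum()

pi = stationary(P)
x = np.arange(N + 1.0)
f = x ** 2                         # f(x) = x^r with r = 2
target = pi @ f

mu = np.zeros(N + 1)
mu[N] = 1.0                        # worst-case start: X_0 = N
P40 = np.linalg.matrix_power(P, 40)
for t in range(0, 201, 40):
    print(t, abs(mu @ f - target))  # error shrinks geometrically in t
    mu = mu @ P40
```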


2003 · Vol. 40 (4) · pp. 970–979
Author(s): A. Yu. Mitrophanov

For finite, homogeneous, continuous-time Markov chains having a unique stationary distribution, we derive perturbation bounds which demonstrate the connection between the sensitivity to perturbations and the rate of exponential convergence to stationarity. Our perturbation bounds substantially improve upon the known results. We also discuss convergence bounds for chains with diagonalizable generators and investigate the relationship between the rate of convergence and the sensitivity of the eigenvalues of the generator; special attention is given to reversible chains.
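
A small continuous-time illustration of this sensitivity (the generator and perturbation below are assumptions, demonstrating the phenomenon rather than the paper's bounds):

```python
import numpy as np

# A three-state continuous-time chain: perturb its generator Q and compare
# the shift in the stationary distribution with the perturbation size.
def stationary(Q):
    # Solve pi Q = 0 with sum(pi) = 1 as a least-squares system.
    n = len(Q)
    A = np.vstack([Q.T, np.ones(n)])
    b = np.zeros(n + 1)
    b[-1] = 1.0
    return np.linalg.lstsq(A, b, rcond=None)[0]

Q = np.array([
    [-1.0, 1.0, 0.0],
    [0.5, -1.0, 0.5],
    [0.0, 1.0, -1.0],
])
E = 1e-2 * np.array([  # a generator perturbation; rows still sum to zero
    [-1.0, 1.0, 0.0],
    [0.0, 0.0, 0.0],
    [1.0, 0.0, -1.0],
])

pi, pi_tilde = stationary(Q), stationary(Q + E)
print("||pi - pi_tilde||_1:", np.abs(pi - pi_tilde).sum())
print("perturbation size  :", np.abs(E).sum(axis=1).max())
# Chains that converge faster (larger spectral gap of Q) give a smaller ratio
# of distribution shift to perturbation size.
```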


2017 · Vol. 114 (11) · pp. 2860–2864
Author(s): Maria Chikina, Alan Frieze, Wesley Pegden

We present a statistical test to detect that a presented state of a reversible Markov chain was not chosen from a stationary distribution. In particular, given a value function for the states of the Markov chain, we would like to show rigorously that the presented state is an outlier with respect to the values, by establishing a p-value under the null hypothesis that it was chosen from a stationary distribution of the chain. A simple heuristic used in practice is to sample ranks of states from long random trajectories on the Markov chain and compare these with the rank of the presented state; if the presented state is a 0.1% outlier compared with the sampled ranks (its rank is in the bottom 0.1% of sampled ranks), then this observation should correspond to a p-value of 0.001. This significance is not rigorous, however, without good bounds on the mixing time of the Markov chain. Our test is the following: given the presented state in the Markov chain, take a random walk from the presented state for any number of steps. We prove that observing that the presented state is an ε-outlier on the walk is significant at p = √(2ε) under the null hypothesis that the state was chosen from a stationary distribution. We assume nothing about the Markov chain beyond reversibility and show that significance at p ≈ √ε is best possible in general. We illustrate the use of our test with a potential application to the rigorous detection of gerrymandering in Congressional districting.
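
A sketch of how such a test might look in code (the chain, value function, and helper names here are hypothetical placeholders, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical reversible chain and value function; a lower value is treated
# as more anomalous. Both are placeholders for illustration.
P = np.array([
    [0.50, 0.50, 0.00, 0.00],
    [0.25, 0.50, 0.25, 0.00],
    [0.00, 0.25, 0.50, 0.25],
    [0.00, 0.00, 0.50, 0.50],
])
value = np.array([3.0, 1.0, 0.5, 2.0])

def outlier_test(state0, k, eps):
    # Walk k steps from the presented state and record the observed values.
    vals = [value[state0]]
    s = state0
    for _ in range(k):
        s = rng.choice(len(P), p=P[s])
        vals.append(value[s])
    # The presented state is an eps-outlier if its value ranks within the
    # lowest eps-fraction of the k + 1 observed values.
    rank = sorted(vals).index(vals[0]) + 1
    if rank <= eps * (k + 1):
        return np.sqrt(2 * eps)  # significance under the stationary null
    return None                  # not an outlier; no significance claimed

print(outlier_test(state0=2, k=10_000, eps=0.001))
```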

