Monotonicity of Positive Dependence with Time for Stationary Reversible Markov Chains

1995 ◽  
Vol 9 (2) ◽  
pp. 227-237 ◽  
Author(s):  
Taizhong Hu ◽  
Harry Joe

Let (X1, X2) and (Y1, Y2) be bivariate random vectors with a common marginal distribution. (X1, X2) is said to be more positively dependent than (Y1, Y2) if E[h(X1)h(X2)] ≥ E[h(Y1)h(Y2)] for all functions h for which the expectations exist. The purpose of this paper is to study the monotonicity of positive dependence with time for a stationary reversible Markov chain {Xt}; that is, (Xs, Xt+s) becomes less positively dependent as t increases. Both discrete and continuous time and both a denumerable set and a subset of the real line for the state space are considered. Some examples are given to show that the assertions established for reversible Markov chains are not true for nonreversible chains.
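The ordering in this abstract can be checked numerically. The sketch below is an illustration, not taken from the paper: it builds a small reversible (birth-death, hence detailed-balance) chain whose spectrum is non-negative, so that E[h(X0)h(Xt)] is non-increasing in t; the transition matrix and the function h are arbitrary choices.

```python
import numpy as np

# A small reversible chain: lazy random walk on {0, 1, 2} (illustrative).
P = np.array([[0.7, 0.3, 0.0],
              [0.2, 0.6, 0.2],
              [0.0, 0.3, 0.7]])
# stationary distribution = left Perron eigenvector of P
evals, evecs = np.linalg.eig(P.T)
pi = np.real(evecs[:, np.argmax(np.real(evals))])
pi = pi / pi.sum()
# reversibility: detailed balance pi_i P_ij = pi_j P_ji
assert np.allclose(pi[:, None] * P, (pi[:, None] * P).T)

h = np.array([1.0, -0.5, 2.0])          # any function h with finite moments
vals = []
Pt = np.eye(3)
for t in range(10):
    vals.append(pi @ (h * (Pt @ h)))    # E[h(X_0) h(X_t)]
    Pt = Pt @ P
# positive dependence decays monotonically in t (lazy chain: spectrum >= 0)
assert all(vals[t] >= vals[t + 1] - 1e-12 for t in range(len(vals) - 1))
```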

1989 ◽  
Vol 26 (3) ◽  
pp. 643-648 ◽  
Author(s):  
A. I. Zeifman

We consider a non-homogeneous continuous-time Markov chain X(t) with countable state space. Definitions of uniform and strong quasi-ergodicity are introduced. The forward Kolmogorov system for X(t) is considered as a differential equation in the space of sequences l1. Sufficient conditions for uniform quasi-ergodicity are deduced from this equation. We consider conditions of uniform and strong ergodicity in the case of proportional intensities.
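The forward Kolmogorov system can be illustrated on a finite truncation. In the sketch below the generator Q0, the intensity factor a(t), and the Euler discretization are all illustrative assumptions: with proportional intensities Q(t) = a(t)·Q0, solutions started from different initial laws merge in l1, which is the ergodicity phenomenon the abstract refers to.

```python
import numpy as np

# Forward Kolmogorov system p'(t) = p(t) Q(t), truncated to 3 states.
# "Proportional intensities": Q(t) = a(t) * Q0 with a(t) > 0.
Q0 = np.array([[-1.0, 1.0, 0.0],
               [0.5, -1.5, 1.0],
               [0.0, 1.0, -1.0]])       # rows sum to zero
a = lambda t: 1.0 + 0.5 * np.sin(t)     # positive, bounded intensity factor

def evolve(p, T=20.0, dt=1e-3):
    for step in range(int(T / dt)):
        t = step * dt
        p = p + dt * a(t) * (p @ Q0)    # Euler step of p' = a(t) p Q0
    return p

p1 = evolve(np.array([1.0, 0.0, 0.0]))
p2 = evolve(np.array([0.0, 0.0, 1.0]))
# solutions from different initial laws merge in l1
assert np.abs(p1 - p2).sum() < 1e-6
```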


1983 ◽  
Vol 20 (3) ◽  
pp. 505-512 ◽  
Author(s):  
Russell Gerrard

The classical condition for regularity of a Markov chain is extended to include semi-Markov chains. In addition, for any given semi-Markov chain, we find Markov chains which exhibit identical regularity properties. This is done either (i) by transforming the state space or, alternatively, (ii) by imposing conditions on the holding-time distributions. Brief consideration is given to the problem of extending the results to processes other than semi-Markov chains.
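The classical regularity condition is easiest to see for a pure-birth chain (an illustration with assumed rates, not the paper's semi-Markov construction): the chain is regular, i.e. non-explosive, iff the expected holding times 1/q_n sum to infinity.

```python
import math

# Regularity (non-explosion) of a pure-birth chain: regular iff
# sum of mean holding times 1/q_n diverges.  Rates below are assumptions.
q_linear = lambda n: 2.0 * n           # rates ~ n   : regular
q_quad = lambda n: float(n * n)        # rates ~ n^2 : explosive

def mean_time_to_reach(N, q):
    # expected time for the pure-birth chain to climb from 1 to N
    return sum(1.0 / q(n) for n in range(1, N))

# linear rates: expected time to reach level N grows like (log N)/2
assert mean_time_to_reach(10**6, q_linear) > 6.0
# quadratic rates: expected explosion time is finite (tends to pi^2/6)
assert abs(mean_time_to_reach(10**6, q_quad) - math.pi**2 / 6) < 1e-5
```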


2021 ◽  
Vol 9 ◽  
Author(s):  
Werner Krauth

This review treats the mathematical and algorithmic foundations of non-reversible Markov chains in the context of event-chain Monte Carlo (ECMC), a continuous-time lifted Markov chain that employs the factorized Metropolis algorithm. It analyzes a number of model applications and then reviews the formulation as well as the performance of ECMC in key models in statistical physics. Finally, the review reports on an ongoing initiative to apply ECMC to the sampling problem in molecular simulation, i.e., to real-world models of peptides, proteins, and polymers in aqueous solution.
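Of the machinery above, the factorized Metropolis acceptance rule is easy to show in a few lines. The sketch below is a reversible toy version with made-up quadratic factors, not the lifted continuous-time ECMC itself: a move is accepted only if every factor accepts it independently, each with probability min(1, exp(−ΔU_f)).

```python
import random, math

# Factorized Metropolis filter (the acceptance rule underlying ECMC):
# accept a move iff every factor accepts independently with probability
# min(1, exp(-dU_f)).  Two illustrative factors U1(x) = U2(x) = x**2 / 2,
# so the target density is proportional to exp(-x**2).
random.seed(1)
factors = [lambda x: 0.5 * x * x, lambda x: 0.5 * x * x]

x, samples = 0.0, []
for _ in range(200_000):
    y = x + random.uniform(-1.0, 1.0)
    if all(random.random() < min(1.0, math.exp(f(x) - f(y))) for f in factors):
        x = y                           # all factors accepted -> move
    samples.append(x)

mean = sum(samples) / len(samples)
var = sum(s * s for s in samples) / len(samples) - mean * mean
# target exp(-x^2) has mean 0 and variance 1/2
assert abs(mean) < 0.05 and 0.4 < var < 0.6
```

The product of per-factor acceptance probabilities reproduces the full Metropolis ratio, so detailed balance with respect to the product of the factor weights still holds.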


2022 ◽  
pp. 1-47 ◽  
Author(s):  
Amarjit Budhiraja ◽  
Nicolas Fraiman ◽  
Adam Waterbury

We consider a collection of Markov chains that model the evolution of multitype biological populations. The state space of the chains is the positive orthant, and the boundary of the orthant is the absorbing state for the Markov chain and represents the extinction states of different population types. We are interested in the long-term behavior of the Markov chain away from extinction, under a small noise scaling. Under this scaling, the trajectory of the Markov process over any compact interval converges in distribution to the solution of an ordinary differential equation (ODE) evolving in the positive orthant. We study the asymptotic behavior of the quasi-stationary distributions (QSD) in this scaling regime. Our main result shows that, under suitable conditions, the limit points of the QSD are supported on the union of interior attractors of the flow determined by the ODE. We also give lower bounds on expected extinction times which scale exponentially with the system size. Results of this type when the deterministic dynamical system obtained under the scaling limit is given by a discrete-time evolution equation and the dynamics are essentially in a compact space (namely, the one-step map is a bounded function) have been studied by Faure and Schreiber (2014). Our results extend these to a setting of an unbounded state space and continuous-time dynamics. The proofs rely on uniform large deviation results for small noise stochastic dynamical systems and methods from the theory of continuous-time dynamical systems. In general, QSD for Markov chains with absorbing states and unbounded state spaces may not exist. We study one basic family of binomial-Poisson models in the positive orthant where one can use Lyapunov function methods to establish existence of QSD and also to argue the tightness of the QSD of the scaled sequence of Markov chains. The results from the first part are then used to characterize the support of limit points of this sequence of QSD.
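In the finite-state case the QSD can be computed directly, which may help fix ideas. The sketch below uses made-up logistic-type transition probabilities, not the authors' model: the QSD is the normalized left Perron eigenvector of the transition matrix restricted to the transient states.

```python
import numpy as np

# QSD of a finite birth-death chain absorbed at 0 (extinction): the QSD
# is the normalized left Perron eigenvector of the sub-stochastic matrix
# restricted to the transient states {1, ..., n}.  Rates are illustrative.
n = 30
P = np.zeros((n + 1, n + 1))
P[0, 0] = 1.0                           # 0 is absorbing (extinction)
for i in range(1, n + 1):
    up = 0.3 * (1 - i / n)              # logistic-type birth probability
    down = 0.4 * i / n                  # death probability grows with i
    if i < n:
        P[i, i + 1] = up
    else:
        up = 0.0
    P[i, i - 1] = down
    P[i, i] = 1.0 - up - down

Q = P[1:, 1:]                           # restriction to transient states
evals, evecs = np.linalg.eig(Q.T)
k = np.argmax(evals.real)               # Perron root of the restriction
qsd = np.abs(evecs[:, k].real)
qsd /= qsd.sum()
# the QSD is invariant for the dynamics conditioned on non-absorption
out = qsd @ Q
assert np.allclose(out / out.sum(), qsd, atol=1e-8)
```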


1988 ◽  
Vol 25 (2) ◽  
pp. 279-290 ◽  
Author(s):  
Masaaki Kijima

Let X(t) be a temporally homogeneous irreducible Markov chain in continuous time defined on the non-negative integers. For k < i < j, let H = {k + 1, ···, j − 1} and let kTij (jTik) be the upward (downward) conditional first-passage time of X(t) from i to j (k) given no visit to k (j). These conditional passage times are studied through first-passage times of a modified chain HX(t) constructed by making the states outside H absorbing. It will be shown that the densities of kTij and jTik for any birth-death process are unimodal and the modes kmij (jmik) of the unimodal densities are non-increasing (non-decreasing) with respect to i. Some distribution properties of kTij and jTik for a time-reversible Markov chain are presented. Symmetry among kTij, jTik and the corresponding conditional passage times of the reversed process of X(t) is also discussed.
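The modified-chain construction can be sketched in discrete time (the chain and probabilities below are illustrative, not from the paper): to obtain the conditional first-passage time kTij of a birth-death chain, make the taboo state k and the target j absorbing and condition on absorption at j. For a lazy chain the resulting conditional density is unimodal, as the abstract asserts.

```python
import numpy as np

# Conditional upward first-passage time kTij of a lazy discrete-time
# birth-death chain, via the modified chain with k and j absorbing.
n, k, i, j = 8, 0, 3, 7
P = np.zeros((n, n))
for s in range(1, n - 1):
    P[s, s + 1], P[s, s - 1], P[s, s] = 0.3, 0.2, 0.5
P[k, k] = P[j, j] = 1.0                 # absorb at taboo state and target

p = np.zeros(n); p[i] = 1.0
mass_prev, density = 0.0, []
for t in range(1, 400):
    p = p @ P
    density.append(p[j] - mass_prev)    # P(absorbed at j exactly at time t)
    mass_prev = p[j]
density = np.array(density) / p[j]      # condition on reaching j before k

# the conditional passage-time density is unimodal
mode = int(np.argmax(density))
assert all(density[t] <= density[t + 1] + 1e-12 for t in range(mode))
assert all(density[t] >= density[t + 1] - 1e-12
           for t in range(mode, len(density) - 1))
```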


1979 ◽  
Vol 16 (1) ◽  
pp. 226-229 ◽  
Author(s):  
P. Suomela

An explicit formula for an invariant measure of a time-reversible Markov chain is presented. It is based on a characterization of time reversibility in terms of the transition probabilities alone.
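The point that an invariant measure is available from the transition probabilities alone can be illustrated on a birth-death chain, which is always time-reversible; the matrix below is an arbitrary example, and the product formula along the path 0, 1, ..., j is the usual detailed-balance telescoping.

```python
import numpy as np

# Invariant measure of a reversible chain from transitions alone:
# along the path 0, 1, ..., j,  pi_j  ∝  prod p(m, m+1) / p(m+1, m).
P = np.array([[0.5, 0.5, 0.0, 0.0],
              [0.1, 0.6, 0.3, 0.0],
              [0.0, 0.2, 0.4, 0.4],
              [0.0, 0.0, 0.6, 0.4]])    # birth-death, hence reversible

n = P.shape[0]
pi = np.ones(n)
for j in range(1, n):
    pi[j] = pi[j - 1] * P[j - 1, j] / P[j, j - 1]
pi /= pi.sum()

assert np.allclose(pi @ P, pi)          # invariance
assert np.allclose(pi[:, None] * P, (pi[:, None] * P).T)  # detailed balance
```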


Author(s):  
Florence Merlevède ◽  
Magda Peligrad ◽  
Sergey Utev

This chapter is dedicated to the Gaussian approximation of a reversible Markov chain. Regarding this problem, the coefficients of dependence for reversible Markov chains are actually the covariances between the variables. We present here the traditional form of the martingale approximation, including forward and backward martingale approximations. Special attention is given to maximal inequalities, which are building blocks for the functional limit theorems. When the covariances are summable, we present the functional central limit theorem under the standard normalization √n. When the variances of the partial sums are regularly varying with n, we present the functional CLT using as normalization the standard deviation of the partial sums. Applications are given to the Metropolis–Hastings algorithm.
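The covariance coefficients mentioned above can be computed explicitly for a finite reversible chain. In the sketch below the chain, the function h, and the truncation at 200 lags are all illustrative choices: when the covariances are summable, the limiting variance of S_n/√n is cov(0) + 2·Σ_{t≥1} cov(t).

```python
import numpy as np

# Dependence coefficients of a reversible chain = covariances
# Cov(h(X_0), h(X_t)); summable covariances give the CLT variance
# under the sqrt(n) normalization.
P = np.array([[0.6, 0.4, 0.0],
              [0.3, 0.4, 0.3],
              [0.0, 0.4, 0.6]])         # birth-death, reversible
evals, evecs = np.linalg.eig(P.T)
pi = np.real(evecs[:, np.argmax(np.real(evals))])
pi /= pi.sum()                          # stationary law

h = np.array([0.0, 1.0, 3.0])
h0 = h - pi @ h                         # center h under pi
cov = []
Pt = np.eye(3)
for t in range(200):
    cov.append(pi @ (h0 * (Pt @ h0)))   # Cov(h(X_0), h(X_t))
    Pt = Pt @ P
sigma2 = cov[0] + 2.0 * sum(cov[1:])    # limiting variance of S_n / sqrt(n)
assert sigma2 > 0 and abs(cov[-1]) < 1e-12  # covariances are summable
```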

