Empirical convergence rates for continuous-time Markov chains

2001 ◽  
Vol 38 (1) ◽  
pp. 262-269 ◽  
Author(s):  
Geoffrey Pritchard ◽  
David J. Scott

We consider the problem of estimating the rate of convergence to stationarity of a continuous-time, finite-state Markov chain. This is done via an estimator of the second-largest eigenvalue of the transition matrix, which in turn is based on conventional inference in a parametric model. We obtain a limiting distribution for the eigenvalue estimator. As an example we treat an M/M/c/c queue, and show that the method allows us to estimate the time to stationarity τ within a time comparable to τ.
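
As a toy illustration of the approach (not the authors' estimator; the chain, data and rates below are invented), consider a two-state chain with generator Q = [[-a, a], [b, -b]], whose eigenvalues are 0 and -(a+b). Maximum-likelihood rate estimates from observed holding times then give a plug-in estimate of the second-largest eigenvalue and of the relaxation time:

```python
# Toy data for a two-state chain (invented): total time spent in each state
# and the number of jumps out of it -- the sufficient statistics for the rates.
time_in_state = [12.0, 8.0]   # total sojourn time in states 0 and 1
jumps_out = [30, 29]          # observed transitions out of states 0 and 1

# Maximum-likelihood estimates of the exponential jump rates.
a_hat = jumps_out[0] / time_in_state[0]   # rate 0 -> 1
b_hat = jumps_out[1] / time_in_state[1]   # rate 1 -> 0

# Eigenvalues of Q = [[-a, a], [b, -b]] are 0 and -(a + b), so:
lambda2_hat = -(a_hat + b_hat)            # estimated second-largest eigenvalue
tau_hat = 1.0 / (a_hat + b_hat)           # estimated relaxation time
print(lambda2_hat, tau_hat)
```

Because the plug-in quantities are ordinary parametric estimates, standard asymptotics yield a limiting distribution for the eigenvalue estimator, which is the abstract's point.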


1968 ◽  
Vol 5 (3) ◽  
pp. 669-678 ◽  
Author(s):  
Jozef L. Teugels

A general proposition is proved stating that the exponential ergodicity of a stationary Markov chain is preserved for derived Markov chains as defined by Cohen [2], [3]. An application to a certain type of continuous time Markov chains is included.


1993 ◽  
Vol 30 (3) ◽  
pp. 518-528 ◽  
Author(s):  
Frank Ball ◽  
Geoffrey F. Yeo

We consider lumpability for continuous-time Markov chains and provide a simple probabilistic proof of necessary and sufficient conditions for strong lumpability, valid in circumstances not covered by known theory. We also consider the following marginalisability problem. Let {X(t)} = {(X1(t), X2(t), ···, Xm(t))} be a continuous-time Markov chain. Under what conditions are the marginal processes {X1(t)}, {X2(t)}, ···, {Xm(t)} also continuous-time Markov chains? We show that this is related to lumpability and, if no two of the marginal processes can jump simultaneously, then they are continuous-time Markov chains if and only if they are mutually independent. Applications to ion channel modelling and birth–death processes are discussed briefly.
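
The classical necessary and sufficient condition for strong lumpability (aggregated rates into every other block must be constant across each block) can be checked mechanically; a minimal sketch, with an invented generator and partition:

```python
# Strong-lumpability test for a CTMC generator: within each block of the
# partition, every state must send the same total rate into each other block.
# Generator and candidate partition below are invented for illustration.
Q = [
    [-3.0,  2.0,  1.0],
    [ 1.0, -2.0,  1.0],
    [ 0.5,  0.5, -1.0],
]
partition = [[0, 1], [2]]   # candidate lumping: {0, 1} and {2}

def strongly_lumpable(Q, partition):
    for block in partition:
        for other in partition:
            if other is block:
                continue
            # Aggregated rate from each state of `block` into `other`.
            rates = [sum(Q[i][j] for j in other) for i in block]
            if max(rates) - min(rates) > 1e-12:
                return False
    return True

print(strongly_lumpable(Q, partition))  # states 0 and 1 both send rate 1.0 to {2}
```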


2003 ◽  
Vol 40 (4) ◽  
pp. 970-979 ◽  
Author(s):  
A. Yu. Mitrophanov

For finite, homogeneous, continuous-time Markov chains having a unique stationary distribution, we derive perturbation bounds which demonstrate the connection between the sensitivity to perturbations and the rate of exponential convergence to stationarity. Our perturbation bounds substantially improve upon the known results. We also discuss convergence bounds for chains with diagonalizable generators and investigate the relationship between the rate of convergence and the sensitivity of the eigenvalues of the generator; special attention is given to reversible chains.
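
The qualitative link between convergence rate and perturbation sensitivity can be seen by hand on a two-state chain (an illustration with invented rates, not the paper's bound): the stationary distribution is π = (b, a)/(a + b) and the spectral gap is a + b, so a rate perturbation of size ε moves π by an amount of order ε/gap:

```python
# Two-state chain with rates a (0 -> 1) and b (1 -> 0); rates are invented.
def stationary(a, b):
    return (b / (a + b), a / (a + b))

a, b = 2.0, 3.0
eps = 0.01                          # size of the rate perturbation
pi = stationary(a, b)
pi_pert = stationary(a + eps, b)    # stationary distribution of perturbed chain

# Total-variation distance between the two stationary distributions.
tv = 0.5 * (abs(pi[0] - pi_pert[0]) + abs(pi[1] - pi_pert[1]))
gap = a + b                         # spectral gap = rate of exponential convergence
print(tv, eps / gap)                # tv is of order eps / gap
```

A larger gap (faster convergence) shrinks the shift in π for the same perturbation, which is the connection the abstract describes.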


1988 ◽  
Vol 25 (1) ◽  
pp. 34-42 ◽  
Author(s):  
Jean Johnson ◽  
Dean Isaacson

Sufficient conditions for strong ergodicity of discrete-time non-homogeneous Markov chains have been given in several papers. Conditions have been given using the left eigenvectors ψn of Pn (ψnPn = ψn) and also using the limiting behavior of Pn. In this paper we consider the analogous results in the case of continuous-time Markov chains, where one uses the intensity matrices Q(t) instead of P(s, t). A bound on the rate of convergence of certain strongly ergodic chains is also given.
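
The left eigenvector ψ of a stochastic matrix P (ψP = ψ) that appears in these conditions can be computed by power iteration; a small sketch with an invented two-state P:

```python
# Power iteration for the left eigenvector psi of a stochastic matrix P
# (psi P = psi); the matrix is invented for illustration.
P = [
    [0.9, 0.1],
    [0.2, 0.8],
]

psi = [0.5, 0.5]                    # arbitrary starting probability vector
for _ in range(200):
    # One step psi <- psi P, then renormalise to a probability vector.
    nxt = [sum(psi[i] * P[i][j] for i in range(2)) for j in range(2)]
    s = sum(nxt)
    psi = [x / s for x in nxt]

print(psi)  # converges to (2/3, 1/3) for this P
```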


1989 ◽  
Vol 26 (3) ◽  
pp. 643-648 ◽  
Author(s):  
A. I. Zeifman

We consider a non-homogeneous continuous-time Markov chain X(t) with countable state space. Definitions of uniform and strong quasi-ergodicity are introduced. The forward Kolmogorov system for X(t) is considered as a differential equation in the space of sequences l1. Sufficient conditions for uniform quasi-ergodicity are deduced from this equation. We consider conditions of uniform and strong ergodicity in the case of proportional intensities.
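
The forward Kolmogorov system p′(t) = p(t)Q(t) can be integrated directly; a crude Euler sketch for an invented two-state chain with proportional intensities Q(t) = f(t)Q0:

```python
# Euler integration of the forward Kolmogorov system p'(t) = p(t) Q(t) for a
# two-state chain with proportional intensities Q(t) = f(t) * Q0.
# Both Q0 and f are invented for illustration.
Q0 = [
    [-1.0,  1.0],
    [ 2.0, -2.0],
]
f = lambda t: 1.0 + 0.5 * t          # intensity scaling function

p = [1.0, 0.0]                       # start in state 0
t, dt = 0.0, 1e-3
while t < 5.0:
    qt = f(t)
    # dp_j = sum_i p_i * (f(t) * Q0)_{ij}  (one row-vector ODE step).
    dp = [sum(p[i] * qt * Q0[i][j] for i in range(2)) for j in range(2)]
    p = [p[j] + dt * dp[j] for j in range(2)]
    t += dt

print(p)  # approaches the stationary distribution (2/3, 1/3) of Q0
```

Since Q(t) is proportional to a fixed Q0, the target distribution is that of Q0, and growing intensities only speed up the convergence.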


1988 ◽  
Vol 2 (2) ◽  
pp. 267-268 ◽  
Author(s):  
Sheldon M. Ross

In [1] an approach to approximate the transition probabilities and mean occupation times of a continuous-time Markov chain is presented. For the chain under consideration, let Pij(t) and Tij(t) denote respectively the probability that it is in state j at time t, and the total time spent in j by time t, in both cases conditional on the chain starting in state i. Also, let Y1,…, Yn be independent exponential random variables each with rate λ = n/t, which are also independent of the Markov chain.
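
The construction suggests the standard Erlang-randomization approximation (our sketch of the idea, not necessarily the paper's exact formulae): since E[exp(QY)] = (I - Q/lam)^(-1) for an exponential Y with rate lam, evaluating the chain at the random time Y1 + … + Yn (which has mean t) gives P(t) ≈ (I - (t/n)Q)^(-n). A two-state check with invented rates:

```python
import math

# Erlang-randomization approximation P(t) ~= (I - (t/n) Q)^(-n), checked
# against the exact two-state transition probability.  Rates are invented.
a, b = 1.0, 2.0
Q = [[-a, a], [b, -b]]
t, n = 0.7, 64

def inv2(M):
    # Inverse of a 2x2 matrix.
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [[ M[1][1] / det, -M[0][1] / det],
            [-M[1][0] / det,  M[0][0] / det]]

def mul2(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# R = (I - (t/n) Q)^(-1); the approximation is R raised to the n-th power.
M = [[(1.0 if i == j else 0.0) - (t / n) * Q[i][j] for j in range(2)]
     for i in range(2)]
R = inv2(M)
approx = [[1.0, 0.0], [0.0, 1.0]]
for _ in range(n):
    approx = mul2(approx, R)

# Exact P00(t) for the two-state chain, for comparison.
exact = b / (a + b) + a / (a + b) * math.exp(-(a + b) * t)
print(approx[0][0], exact)
```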


2009 ◽  
Vol 46 (2) ◽  
pp. 497-506 ◽  
Author(s):  
V. B. Yap

In a homogeneous continuous-time Markov chain on a finite state space, two states that jump to every other state with the same rate are called similar. By partitioning states into similarity classes, the algebraic derivation of the transition matrix can be simplified, using hidden holding times and lumped Markov chains. When the rate matrix is reversible, the transition matrix is explicitly related in an intuitive way to that of the lumped chain. The theory provides a unified derivation for a whole range of useful DNA base substitution models, and a number of amino acid substitution models.
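
The simplest instance of a single similarity class is the Jukes–Cantor DNA model, in which every base jumps to every other at the same rate alpha, so the transition matrix is explicit (our illustration, not the paper's derivation):

```python
import math

# Jukes-Cantor model: all four bases jump to each other at rate alpha, so all
# states form one similarity class and the transition probabilities are
#   P_same(t) = 1/4 + 3/4 * exp(-4 * alpha * t)
#   P_diff(t) = 1/4 - 1/4 * exp(-4 * alpha * t)
# Parameter values are invented for illustration.
alpha, t = 0.25, 1.0

p_same = 0.25 + 0.75 * math.exp(-4 * alpha * t)   # stay at the same base
p_diff = 0.25 - 0.25 * math.exp(-4 * alpha * t)   # move to any given other base

# Each row of the 4x4 transition matrix sums to 1.
row_sum = p_same + 3 * p_diff
print(p_same, p_diff, row_sum)
```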

