Similar States in Continuous-Time Markov Chains

2009 ◽  
Vol 46 (2) ◽  
pp. 497-506 ◽  
Author(s):  
V. B. Yap

In a homogeneous continuous-time Markov chain on a finite state space, two states that jump to every other state with the same rate are called similar. By partitioning states into similarity classes, the algebraic derivation of the transition matrix can be simplified, using hidden holding times and lumped Markov chains. When the rate matrix is reversible, the transition matrix is explicitly related in an intuitive way to that of the lumped chain. The theory provides a unified derivation for a whole range of useful DNA base substitution models, and a number of amino acid substitution models.
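The similarity idea is easiest to see in the Jukes-Cantor DNA model (one of the base substitution models the abstract refers to): every base substitutes to each of the other three at the same rate α, so all four states form one similarity class and the transition matrix has only two distinct entries. A minimal numerical sketch (not the paper's algebraic derivation) checks the lumped closed form against a brute-force matrix exponential:

```python
import math

def mat_mult(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_exp(M, terms=40):
    # Truncated Taylor series expm(M) = sum_k M^k / k!,
    # adequate for small matrices with moderate entries.
    n = len(M)
    result = [[float(i == j) for j in range(n)] for i in range(n)]
    term = [row[:] for row in result]
    for k in range(1, terms):
        term = [[x / k for x in row] for row in mat_mult(term, M)]
        result = [[result[i][j] + term[i][j] for j in range(n)]
                  for i in range(n)]
    return result

alpha, t = 1.0, 0.3
# Jukes-Cantor rate matrix: every base mutates to each of the other
# three bases at the same rate alpha, so all four states are similar.
Q = [[(-3 * alpha if i == j else alpha) for j in range(4)] for i in range(4)]
P = mat_exp([[q * t for q in row] for row in Q])

# Closed form obtained by lumping the three "other" bases into one class.
p_same = 0.25 + 0.75 * math.exp(-4 * alpha * t)
p_diff = 0.25 - 0.25 * math.exp(-4 * alpha * t)
```

The two-value structure of P is exactly what partitioning into similarity classes predicts: the lumped chain only needs to track "same base" versus "different base".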


1967 ◽  
Vol 4 (1) ◽  
pp. 192-196 ◽  
Author(s):  
J. N. Darroch ◽  
E. Seneta

In a recent paper, the authors have discussed the concept of quasi-stationary distributions for absorbing Markov chains having a finite state space, with the further restriction of discrete time. The purpose of the present note is to summarize the analogous results when the time parameter is continuous.
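In the continuous-time setting, the quasi-stationary distribution is the dominant left eigenvector of the generator restricted to the transient states. A rough sketch with an invented three-state example (state 0 absorbing), using power iteration with renormalisation after each substep:

```python
# Transient block of an invented generator on states {1, 2}
# (state 0 absorbing): from 1 -> 0 at rate 1 and 1 -> 2 at rate 1;
# from 2 -> 0 at rate 2 and 2 -> 1 at rate 1.
QT = [[-2.0, 1.0], [1.0, -3.0]]

h = 0.01                 # small step so I + h*QT is substochastic
v = [0.5, 0.5]           # initial distribution over transient states
for _ in range(5000):
    w = [v[0] * (1 + h * QT[0][0]) + v[1] * h * QT[1][0],
         v[0] * h * QT[0][1] + v[1] * (1 + h * QT[1][1])]
    s = w[0] + w[1]      # mass lost to absorption is renormalised away
    v = [w[0] / s, w[1] / s]

# v is now the quasi-stationary distribution: the chain's law at large
# times, conditioned on not yet having been absorbed.
```

For this particular QT the limit can be checked by hand: the dominant eigenvalue is (-5 + √5)/2, and the quasi-stationary distribution is ((√5 - 1)/2, (3 - √5)/2).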


2016 ◽  
Vol 53 (3) ◽  
pp. 953-956 ◽  
Author(s):  
Martin Möhle ◽  
Morihiro Notohara

An extension of a convergence theorem for sequences of Markov chains is derived. For every positive integer N let (X_r^(N))_r be a Markov chain with the same finite state space S and transition matrix Π_N = I + d_N Q + c_N B_N, where I is the unit matrix, Q a generator matrix and (B_N)_N a sequence of matrices with lim_{N→∞} c_N = lim_{N→∞} d_N = 0 and lim_{N→∞} c_N∕d_N = 0. Suppose that the limits P := lim_{m→∞} (I + d_N Q)^m and G := lim_{N→∞} P B_N P exist. If the sequence of initial distributions P_{X_0^(N)} converges weakly to some probability measure μ, then the finite-dimensional distributions of (X_{[t∕c_N]}^(N))_{t≥0} converge to those of the Markov process (X_t)_{t≥0} with initial distribution μ and transition matrix P e^{tG} = lim_{N→∞} (I + d_N Q + c_N B_N)^{[t∕c_N]}.
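The theorem's separation of timescales can be checked numerically with an invented 3×3 example (not from the paper): Q mixes states 1 and 2 quickly, B slowly couples state 0 to the pair, and the chain observed on the 1/c_N timescale approaches P e^{tG}:

```python
def mat_mult(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_pow(A, p):
    # Matrix power by binary exponentiation.
    n = len(A)
    result = [[float(i == j) for j in range(n)] for i in range(n)]
    while p:
        if p & 1:
            result = mat_mult(result, A)
        A = mat_mult(A, A)
        p >>= 1
    return result

def mat_exp(M, terms=30):
    # Truncated Taylor series; fine for small matrices.
    n = len(M)
    result = [[float(i == j) for j in range(n)] for i in range(n)]
    term = [row[:] for row in result]
    for k in range(1, terms):
        term = [[x / k for x in row] for row in mat_mult(term, M)]
        result = [[result[i][j] + term[i][j] for j in range(n)]
                  for i in range(n)]
    return result

N, t = 10**6, 1.0
dN, cN = N ** -0.5, 1.0 / N                   # c_N, d_N -> 0, c_N/d_N -> 0
Q = [[0, 0, 0], [0, -1, 1], [0, 1, -1]]       # fast: mixes states 1 and 2
B = [[-1, 1, 0], [0.5, -0.5, 0], [0, 1, -1]]  # slow: couples 0 and {1, 2}
P = [[1, 0, 0], [0, 0.5, 0.5], [0, 0.5, 0.5]] # P = lim_m (I + d_N Q)^m

Pi = [[(1.0 if i == j else 0.0) + dN * Q[i][j] + cN * B[i][j]
       for j in range(3)] for i in range(3)]
lhs = mat_pow(Pi, int(t / cN))                # Pi_N^[t/c_N]

G = mat_mult(mat_mult(P, B), P)               # G = P B P
rhs = mat_mult(P, mat_exp([[t * g for g in row] for row in G]))  # P e^{tG}
```

Here lhs and rhs agree to within a few multiples of d_N + c_N/d_N, and in the limit states 1 and 2 behave as a single lumped state.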


1990 ◽  
Vol 22 (4) ◽  
pp. 802-830 ◽  
Author(s):  
Frank Ball

We consider a time reversible, continuous-time Markov chain on a finite state space. The state space is partitioned into two sets, termed open and closed, and it is only possible to observe whether the process is in an open or a closed state. Further, short sojourns in either the open or the closed states fail to be detected. We consider the situation when the length of minimal detectable sojourns follows a negative exponential distribution with mean μ^{-1}. We derive the probability density function f_O(t) of observed open sojourns, whose form involves n, the size of the state space. We present a thorough asymptotic analysis of f_O(t) as μ tends to infinity. We discuss the relevance of our results to the modelling of single-channel records. We illustrate the theory with a numerical example.


1972 ◽  
Vol 9 (1) ◽  
pp. 129-139 ◽  
Author(s):  
P. J. Brockwell

The distribution of the times to first emptiness and first overflow, together with the limiting distribution of content are determined for a dam of finite capacity. It is assumed that the rate of change of the level of the dam is a continuous-time Markov chain with finite state-space (suitably modified when the dam is full or empty).


2001 ◽  
Vol 38 (1) ◽  
pp. 262-269 ◽  
Author(s):  
Geoffrey Pritchard ◽  
David J. Scott

We consider the problem of estimating the rate of convergence to stationarity of a continuous-time, finite-state Markov chain. This is done via an estimator of the second-largest eigenvalue of the transition matrix, which in turn is based on conventional inference in a parametric model. We obtain a limiting distribution for the eigenvalue estimator. As an example we treat an M/M/c/c queue, and show that the method allows us to estimate the time to stationarity τ within a time comparable to τ.
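The spectral fact underlying the estimator (though not the authors' inference procedure itself) can be illustrated with a small birth-death generator, an M/M/1/2-style queue with arrival and service rate 1: its eigenvalues are 0, -1 and -3, so the distance to stationarity eventually decays like e^{-t}, governed by the second-largest eigenvalue:

```python
import math

def mat_mult(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_exp(M, squarings=8, terms=20):
    # Scaling and squaring: expm(M) = expm(M / 2^s)^(2^s).
    n = len(M)
    S = [[x / 2 ** squarings for x in row] for row in M]
    result = [[float(i == j) for j in range(n)] for i in range(n)]
    term = [row[:] for row in result]
    for k in range(1, terms):
        term = [[x / k for x in row] for row in mat_mult(term, S)]
        result = [[result[i][j] + term[i][j] for j in range(n)]
                  for i in range(n)]
    for _ in range(squarings):
        result = mat_mult(result, result)
    return result

# Generator of an M/M/1/2-style queue (arrival rate 1, service rate 1,
# room for at most 2 customers); its eigenvalues are 0, -1 and -3.
Q = [[-1, 1, 0], [1, -2, 1], [0, 1, -1]]

def dist_to_stationarity(t):
    # The stationary distribution is uniform here (Q is symmetric).
    Pt = mat_exp([[q * t for q in row] for row in Q])
    return max(abs(Pt[i][j] - 1 / 3) for i in range(3) for j in range(3))

ratio = dist_to_stationarity(6.0) / dist_to_stationarity(5.0)
# ratio is close to e^{-1}: the second eigenvalue sets the rate of
# convergence, which is why estimating it estimates the time to stationarity.
```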


2018 ◽  
Vol 55 (4) ◽  
pp. 1025-1036 ◽  
Author(s):  
Dario Bini ◽  
Jeffrey J. Hunter ◽  
Guy Latouche ◽  
Beatrice Meini ◽  
Peter Taylor

In their 1960 book on finite Markov chains, Kemeny and Snell established that a certain sum is invariant. The value of this sum has become known as Kemeny’s constant. Various proofs have been given over time, some more technical than others. We give here a very simple physical justification, which extends without a hitch to continuous-time Markov chains on a finite state space. For Markov chains with denumerably infinite state space, the constant may be infinite and even if it is finite, there is no guarantee that the physical argument will hold. We show that the physical interpretation does go through for the special case of a birth-and-death process with a finite value of Kemeny’s constant.
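The invariance in question: with mean first passage times m_ij and stationary distribution π, the sum K_i = Σ_{j≠i} π_j m_ij takes the same value K for every starting state i. A sketch that verifies this on an invented 3-state discrete-time chain:

```python
P = [[0.5, 0.3, 0.2],
     [0.2, 0.6, 0.2],
     [0.1, 0.4, 0.5]]
n = 3

# Stationary distribution by power iteration.
pi = [1 / 3] * n
for _ in range(10000):
    pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]

# Mean first passage times m[i][j] (i != j): for each target j, solve
# the 2x2 linear system m_i = 1 + sum_{k != j} P[i][k] * m_k by Cramer.
m = [[0.0] * n for _ in range(n)]
for j in range(n):
    a, b = [i for i in range(n) if i != j]
    det = (1 - P[a][a]) * (1 - P[b][b]) - P[a][b] * P[b][a]
    m[a][j] = ((1 - P[b][b]) + P[a][b]) / det
    m[b][j] = ((1 - P[a][a]) + P[b][a]) / det

# Kemeny's constant: identical from every starting state.
K = [sum(pi[j] * m[i][j] for j in range(n) if j != i) for i in range(n)]
```

The three entries of K agree to machine precision, which is exactly Kemeny and Snell's invariance.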


2003 ◽  
Vol 40 (1) ◽  
pp. 107-122 ◽  
Author(s):  
Eilon Solan ◽  
Nicolas Vieille

We study irreducible time-homogeneous Markov chains with finite state space in discrete time. We obtain results on the sensitivity of the stationary distribution and other statistical quantities with respect to perturbations of the transition matrix. We define a new closeness relation between transition matrices, and use graph-theoretic techniques, in contrast with the matrix analysis techniques previously used.
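A small numerical illustration of the kind of sensitivity in question (an invented 3-state chain, not the paper's graph-theoretic method): perturb the transition matrix by ε in a direction with zero row sums, so it stays stochastic, and the stationary distribution moves by O(ε):

```python
def stationary(P, iters=20000):
    # Stationary distribution by power iteration.
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi

P = [[0.5, 0.3, 0.2],
     [0.2, 0.6, 0.2],
     [0.1, 0.4, 0.5]]
# Perturbation direction with zero row sums, so P + eps*E stays stochastic.
E = [[-1.0, 1.0, 0.0],
     [0.0, 0.0, 0.0],
     [0.0, 0.0, 0.0]]

def perturbed(eps):
    return [[P[i][j] + eps * E[i][j] for j in range(3)] for i in range(3)]

pi0 = stationary(P)
d1 = max(abs(a - b) for a, b in zip(stationary(perturbed(1e-3)), pi0))
d2 = max(abs(a - b) for a, b in zip(stationary(perturbed(1e-4)), pi0))
# d1/d2 is close to 10: the response is linear in the perturbation size,
# with a constant that measures the chain's sensitivity.
```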

