Truncation approximations of invariant measures for Markov chains

1998 ◽  
Vol 35 (3) ◽  
pp. 517-536 ◽  
Author(s):  
R. L. Tweedie

Let P be the transition matrix of a positive recurrent Markov chain on the integers, with invariant distribution π. If (n)P denotes the n × n ‘northwest truncation’ of P, it is known that approximations to π(j)/π(0) can be constructed from (n)P, but these are known to converge to the probability distribution itself only in special cases. We show that such convergence always occurs for three further general classes of chains: geometrically ergodic chains, stochastically monotone chains, and those dominated by stochastically monotone chains. We show that all ‘finite’ perturbations of stochastically monotone chains can be considered to be dominated by such chains, and thus the results hold for a much wider class than is at first apparent. In the cases of uniformly ergodic chains, and chains dominated by irreducible stochastically monotone chains, we find practical bounds on the accuracy of the approximations.
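The truncation scheme can be checked concretely on a chain covered by these results. Below is a minimal numerical sketch (our own illustrative construction, not the paper's): a reflecting random walk on the nonnegative integers with downward drift, which is both stochastically monotone and geometrically ergodic, so its exact stationary law π(j) = (1 − ρ)ρ^j is known and the truncation approximations of π(j)/π(0) must converge. The diagonal-augmentation step and all names are our assumptions.

```python
import numpy as np

# Reflecting random walk on {0, 1, 2, ...}: up with prob p, down with prob q.
# Exact stationary law: pi(j) = (1 - rho) rho^j with rho = p / q < 1.
p, q = 0.3, 0.7
rho = p / q

def northwest_truncation(n):
    """n x n northwest corner of the transition matrix P (substochastic)."""
    Pn = np.zeros((n, n))
    Pn[0, 0], Pn[0, 1] = q, p
    for i in range(1, n):
        Pn[i, i - 1] = q
        if i + 1 < n:
            Pn[i, i + 1] = p
    return Pn

def approx_ratios(n):
    """Approximate pi(j)/pi(0) from the truncation; here the lost row mass
    is returned to the diagonal and the stationary vector is solved for."""
    Pn = northwest_truncation(n)
    Pn[np.arange(n), np.arange(n)] += 1.0 - Pn.sum(axis=1)
    # stationary vector: solve x (Pn - I) = 0 together with sum(x) = 1
    A = np.vstack([(Pn - np.eye(n)).T, np.ones(n)])
    b = np.zeros(n + 1); b[-1] = 1.0
    pi_n, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi_n / pi_n[0]

ratios = approx_ratios(30)
exact = rho ** np.arange(30)   # pi(j)/pi(0) = rho^j
print(np.max(np.abs(ratios - exact)))  # essentially machine precision here
```

For this birth–death chain the diagonal augmentation preserves detailed balance, so the approximate ratios agree with ρ^j up to rounding; for general monotone chains the abstract's point is that convergence still holds as n → ∞.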


1987 ◽  
Vol 24 (3) ◽  
pp. 600-608 ◽  
Author(s):  
Diana Gibson ◽  
E. Seneta

We consider the problem of approximating the stationary distribution of a positive-recurrent Markov chain with infinite transition matrix P, by stationary distributions computed from (n × n) stochastic matrices formed by augmenting the entries of the (n × n) northwest corner truncations of P, as n → ∞.
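A quick numerical illustration of this augmentation idea, on a toy birth–death chain with known geometric stationary law (our example, not the paper's): two different augmentation choices, returning the lost row mass to either the first or the last column, both drive the total error to zero as the truncation size grows.

```python
import numpy as np

# Reflecting random walk on the nonnegative integers with downward drift;
# exact stationary law pi(j) = (1 - rho) rho^j, rho = p / q.
p, q = 0.3, 0.7
rho = p / q

def truncation(n):
    """n x n northwest corner truncation (substochastic)."""
    Pn = np.zeros((n, n))
    Pn[0, 0], Pn[0, 1] = q, p
    for i in range(1, n):
        Pn[i, i - 1] = q
        if i + 1 < n:
            Pn[i, i + 1] = p
    return Pn

def stationary(Pn):
    n = len(Pn)
    A = np.vstack([(Pn - np.eye(n)).T, np.ones(n)])
    b = np.zeros(n + 1); b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi

def augmented_error(n, col):
    """Return the lost row mass to column `col`, making the truncation
    stochastic, then compare its stationary vector with the exact law."""
    Pn = truncation(n)
    Pn[:, col] += 1.0 - Pn.sum(axis=1)
    exact = (1 - rho) * rho ** np.arange(n)
    return np.abs(stationary(Pn) - exact).sum()

for n in (10, 20, 40):
    print(n, augmented_error(n, 0), augmented_error(n, n - 1))
```

Both columns of errors shrink as n grows; which augmentation converges, and how fast, is exactly the kind of question the paper addresses.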



1987 ◽  
Vol 19 (3) ◽ 
pp. 739-742 ◽  
Author(s):  
J. D. Biggins

If (non-overlapping) repeats of specified sequences of states in a Markov chain are considered, the result is a Markov renewal process. Formulae somewhat simpler than those given in Biggins and Cannings (1987) are derived which can be used to obtain the transition matrix and conditional mean sojourn times in this process.


1977 ◽  
Vol 14 (2) ◽ 
pp. 298-308 ◽  
Author(s):  
Peter R. Nelson

In a single-shelf library having infinitely many books B₁, B₂, …, the probability of selecting each book is assumed known. Books are removed one at a time and replaced in position k prior to the next removal. Books are moved either to the right or the left as is necessary to vacate position k. Those arrangements of books where after some finite position all the books are in natural order (book i occupies position i) are considered as states in an infinite Markov chain. When k > 1, we show that the chain can never be positive recurrent. When k = 1, we find the limits of ratios of one-step transition probabilities; and when k = 1 and the chain is transient, we find the Martin exit boundary.
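For finitely many books, the k = 1 case is the classical move-to-front rule, whose stationary distribution has a known product form; a small numerical check recovers it. The three-book example and its selection probabilities below are our own illustration of the dynamics, not the infinite-chain setting of the paper.

```python
import numpy as np
from itertools import permutations

# Move-to-front (k = 1) with 3 books; selection probabilities are assumed.
probs = [0.5, 0.3, 0.2]
states = list(permutations(range(3)))
index = {s: i for i, s in enumerate(states)}

# Selecting `book` moves it to the front, shifting the others right.
P = np.zeros((6, 6))
for s in states:
    for book in range(3):
        new = (book,) + tuple(x for x in s if x != book)
        P[index[s], index[new]] += probs[book]

# stationary vector: solve pi (P - I) = 0 with sum(pi) = 1
A = np.vstack([(P - np.eye(6)).T, np.ones(6)])
b = np.zeros(7); b[-1] = 1.0
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

def product_form(s):
    """Classical move-to-front stationary law:
    pi(s) = prod_j p_{s_j} / (1 - p_{s_1} - ... - p_{s_{j-1}})."""
    val, used = 1.0, 0.0
    for book in s:
        val *= probs[book] / (1.0 - used)
        used += probs[book]
    return val

print(max(abs(pi[index[s]] - product_form(s)) for s in states))
```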


2018 ◽  
Vol 50 (2) ◽  
pp. 645-669 ◽  
Author(s):  
Yuanyuan Liu ◽  
Wendi Li

Let P be the transition matrix of a positive recurrent Markov chain on the integers with invariant probability vector πᵀ, and let (n)P̃ be a stochastic matrix formed by augmenting the entries of the (n + 1) × (n + 1) northwest corner truncation of P arbitrarily, with invariant probability vector (n)πᵀ. We derive computable V-norm bounds on the error between πᵀ and (n)πᵀ in terms of the perturbation method from three different aspects: the Poisson equation, the residual matrix, and the norm ergodicity coefficient, which we prove to be effective by showing that they converge to 0 as n tends to ∞ under suitable conditions. We illustrate our results through several examples. Comparing our error bounds with those of Tweedie (1998), we see that our bounds are more applicable and accurate. Moreover, we also consider possible extensions of our results to continuous-time Markov chains.


2004 ◽  
Vol 41 (3) ◽ 
pp. 778-790
Author(s):  
Zhenting Hou ◽  
Yuanyuan Liu

This paper investigates the rate of convergence to the stationary distribution of the embedded M/G/1 and GI/M/n queues. We introduce several types of ergodicity including l-ergodicity, geometric ergodicity, uniformly polynomial ergodicity and strong ergodicity. The usual method to prove ergodicity of a Markov chain is to check the existence of a Foster–Lyapunov function or a drift condition, while here we analyse the generating function of the first return probability directly and obtain practical criteria. Moreover, the method can be extended to M/G/1- and GI/M/1-type Markov chains.
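As a hedged illustration of what geometric ergodicity of an embedded queueing chain looks like numerically (our example; the paper's generating-function criteria are not implemented here), one can build the embedded M/M/1 chain, whose stationary queue-length law is geometric with ratio ρ = λ/μ, and watch the total-variation distance to it shrink with the number of steps.

```python
import numpy as np

lam, mu = 1.0, 2.0           # arrival and service rates; rho = 0.5
b = lam / (lam + mu)         # P(an arrival precedes the service completion)
n = 80                       # working truncation; tail mass is negligible

# a_k = P(k Poisson arrivals during one Exp(mu) service) = (1 - b) b^k
a = (1 - b) * b ** np.arange(n)

# embedded chain at departures: P[0, j] = a_j, P[i, j] = a_{j-i+1} for i >= 1
P = np.zeros((n, n))
P[0, :] = a
for i in range(1, n):
    P[i, i - 1:] = a[: n - i + 1]
P /= P.sum(axis=1, keepdims=True)     # renormalise the truncated rows

rho = lam / mu
pi = (1 - rho) * rho ** np.arange(n)  # stationary queue-length law of M/M/1

mu_t = np.zeros(n); mu_t[0] = 1.0     # start with an empty system
tv = []
for t in range(1, 201):
    mu_t = mu_t @ P
    if t % 50 == 0:
        tv.append(0.5 * np.abs(mu_t - pi).sum())
print(tv)   # total-variation distances at t = 50, 100, 150, 200
```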


1983 ◽  
Vol 20 (1) ◽ 
pp. 191-196 ◽  
Author(s):  
R. L. Tweedie

We give conditions under which the stationary distribution π of a Markov chain admits moments of the general form ∫ f(x)π(dx), where f is a general function; specific examples include f(x) = x^r and f(x) = e^(sx). In general the time-dependent moments of the chain then converge to the stationary moments. We show that in special cases this convergence of moments occurs at a geometric rate. The results are applied to random walk on [0, ∞).
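For a chain whose stationary law is known explicitly, the convergence of time-dependent moments can be observed directly. The sketch below (our illustrative example, not taken from the paper) iterates the distribution of a reflecting random walk and compares the first moment, i.e. f(x) = x^r with r = 1, against the stationary mean ρ/(1 − ρ).

```python
import numpy as np

# Reflecting random walk with downward drift; stationary law is geometric:
# pi(j) = (1 - rho) rho^j, so the stationary mean is rho / (1 - rho).
p, q = 0.3, 0.7
rho = p / q
n = 60                        # working truncation; tail mass is negligible

P = np.zeros((n, n))
P[0, 0], P[0, 1] = q, p
for i in range(1, n):
    P[i, i - 1] = q
    if i + 1 < n:
        P[i, i + 1] = p
P[n - 1, n - 1] += p          # keep the last row stochastic

states = np.arange(n)
stationary_mean = rho / (1 - rho)   # = 0.75 for these parameters

mu = np.zeros(n); mu[0] = 1.0       # start in state 0
for t in range(300):
    mu = mu @ P
print(mu @ states)                   # time-dependent first moment, near 0.75
```

The gap |E[X_t] − ρ/(1 − ρ)| shrinks geometrically in t for this chain, matching the special-case rate result of the abstract.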


1998 ◽  
Vol 30 (2) ◽  
pp. 365-384 ◽  
Author(s):  
Yiqiang Q. Zhao ◽  
Wei Li ◽  
W. John Braun

In this paper, we study Markov chains with infinite state block-structured transition matrices, whose states are partitioned into levels according to the block structure, and various associated measures. Roughly speaking, these measures involve first passage times or expected numbers of visits to certain levels without hitting other levels. They are very important and often play a key role in the study of a Markov chain. Necessary and/or sufficient conditions are obtained for a Markov chain to be positive recurrent, recurrent, or transient in terms of these measures. Results are obtained for general irreducible Markov chains as well as those with transition matrices possessing some block structure. We also discuss the decomposition or the factorization of the characteristic equations of these measures. In the scalar case, we locate the zeros of these characteristic functions and therefore use these zeros to characterize a Markov chain. Examples and various remarks are given to illustrate some of the results.
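The "expected number of visits to certain levels without hitting other levels" has a standard finite-state counterpart via the fundamental matrix (I − Q)^(−1) of the taboo (substochastic) part of the transition matrix. A minimal sketch on a symmetric random walk with absorbing barriers, where the expected absorption time from interior state i is the classical i(N − i); the example is ours, not the paper's block-structured setting.

```python
import numpy as np

# Symmetric simple random walk on {0, ..., N}, absorbing at 0 and N.
N = 10
Q = np.zeros((N - 1, N - 1))          # transitions among interior states 1..N-1
for k in range(N - 1):
    if k > 0:
        Q[k, k - 1] = 0.5
    if k < N - 2:
        Q[k, k + 1] = 0.5

# Fundamental matrix: M[i, j] = expected visits to interior state j+1
# before absorption, starting from interior state i+1.
M = np.linalg.inv(np.eye(N - 1) - Q)

# Row sums are expected absorption times; from state i they equal i * (N - i).
times = M.sum(axis=1)
print(times[2])   # interior index 2 is state 3: 3 * 7 = 21
```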


1998 ◽  
Vol 30 (3) ◽ 
pp. 711-722 ◽  
Author(s):  
Krishna B. Athreya ◽  
Hye-Jeong Kang

In this paper we consider a Galton-Watson process in which particles move according to a positive recurrent Markov chain on a general state space. We prove a law of large numbers for the empirical position distribution and also discuss the rate of this convergence.

