Explicit criteria for several types of ergodicity of the embedded M/G/1 and GI/M/n queues

2004 ◽  
Vol 41 (3) ◽  
pp. 778-790 ◽
Author(s):  
Zhenting Hou ◽  
Yuanyuan Liu

This paper investigates the rate of convergence to the stationary distribution of the embedded M/G/1 and GI/M/n queues. We introduce several types of ergodicity, including ℓ-ergodicity, geometric ergodicity, uniformly polynomial ergodicity, and strong ergodicity. The usual way to prove ergodicity of a Markov chain is to verify the existence of a Foster–Lyapunov function or a drift condition; here, instead, we analyse the generating function of the first return probability directly and obtain practical criteria. Moreover, the method extends to M/G/1- and GI/M/1-type Markov chains.
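The embedded chain in question admits a very short simulation sketch (an illustration only; the exponential service law and the parameter values below are my assumptions, not from the paper). The queue length just after the nth departure satisfies X_{n+1} = max(X_n − 1, 0) + A_{n+1}, where A is the number of Poisson arrivals during one service, and in the stable case ρ < 1 the chain is positive recurrent:

```python
import numpy as np

# Embedded M/G/1 chain at departure epochs:
#   X_{n+1} = max(X_n - 1, 0) + A_{n+1},
# where A ~ Poisson(lambda * S) given the service time S.
# Exponential service is assumed here purely for illustration.
rng = np.random.default_rng(1)
lam, mean_service = 0.5, 1.0          # traffic intensity rho = 0.5 < 1
n, x = 100_000, 0
levels = np.empty(n, dtype=int)
for t in range(n):
    service = rng.exponential(mean_service)
    arrivals = rng.poisson(lam * service)
    x = max(x - 1, 0) + arrivals
    levels[t] = x
print(levels.mean())   # a finite, stable sample mean reflects positive recurrence
```

With exponential service this is the embedded M/M/1 chain, whose stationary level distribution is geometric with mean ρ/(1 − ρ) = 1, so the sample mean settles near 1.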


1998 ◽  
Vol 35 (3) ◽  
pp. 517-536 ◽  
Author(s):  
R. L. Tweedie

Let P be the transition matrix of a positive recurrent Markov chain on the integers, with invariant distribution π. If (n)P denotes the n × n ‘northwest truncation’ of P, it is known that approximations to π(j)/π(0) can be constructed from (n)P, but these are known to converge to π(j)/π(0) itself in special cases only. We show that such convergence always occurs for three further general classes of chains: geometrically ergodic chains, stochastically monotone chains, and those dominated by stochastically monotone chains. We show that all ‘finite’ perturbations of stochastically monotone chains can be considered to be dominated by such chains, and thus the results hold for a much wider class than is at first apparent. In the cases of uniformly ergodic chains, and chains dominated by irreducible stochastically monotone chains, we find practical bounds on the accuracy of the approximations.
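The truncation scheme can be tried numerically. The chain below (a reflecting birth-death walk, my own choice for illustration, not Tweedie's construction) is stochastically monotone, so the truncated approximations converge; the lost transition mass in the last row is returned to the diagonal and the stationary vector of the augmented n × n truncation is solved directly:

```python
import numpy as np

def truncated_stationary(n, p=0.3, q=0.5):
    """Stationary distribution of an augmented n x n 'northwest truncation'
    of a birth-death chain on {0, 1, 2, ...}: up with prob p, down with
    prob q, and the mass lost by truncation kept on the diagonal."""
    P = np.zeros((n, n))
    for i in range(n):
        if i + 1 < n:
            P[i, i + 1] = p
        if i - 1 >= 0:
            P[i, i - 1] = q
        P[i, i] = 1.0 - P[i].sum()   # remaining mass stays put
    # solve pi P = pi together with sum(pi) = 1
    A = np.vstack([P.T - np.eye(n), np.ones(n)])
    b = np.zeros(n + 1)
    b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi

pi = truncated_stationary(50)
print(pi[1] / pi[0])   # approximates pi(1)/pi(0) = p/q = 0.6 for the full chain
```

For a birth-death chain, detailed balance makes the truncated ratios π(j)/π(0) = (p/q)^j exact; for general monotone chains one only gets the convergence guaranteed by the paper.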


1998 ◽  
Vol 35 (1) ◽  
pp. 1-11 ◽  
Author(s):  
Gareth O. Roberts ◽  
Jeffrey S. Rosenthal ◽  
Peter O. Schwartz

In this paper, we consider the question of which convergence properties of Markov chains are preserved under small perturbations. Properties considered include geometric ergodicity and rates of convergence. Perturbations considered include roundoff error from computer simulation. We are motivated primarily by interest in Markov chain Monte Carlo algorithms.
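The roundoff question has a quick numerical counterpart (a toy check of my own, not the paper's analysis): perturb a transition matrix by a tiny row-sum-preserving amount, as floating-point simulation might, and compare the two stationary distributions.

```python
import numpy as np

# Two-state chain and a small perturbation with zero row sums,
# mimicking roundoff error in a simulated kernel.
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])
E = 1e-6 * np.array([[ 1.0, -1.0],
                     [-1.0,  1.0]])

def stationary(P):
    """Stationary distribution via the Perron left eigenvector."""
    w, v = np.linalg.eig(P.T)
    pi = np.real(v[:, np.argmax(np.real(w))])
    return pi / pi.sum()

d = np.abs(stationary(P) - stationary(P + E)).sum()
print(d)   # the stationary distribution moves only by O(epsilon)
```

For well-behaved (e.g. geometrically ergodic) chains such stability is what the paper's perturbation bounds formalize.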


1989 ◽  
Vol 21 (3) ◽  
pp. 702-704 ◽  
Author(s):  
K. S. Chan

It is known that if an irreducible and aperiodic Markov chain satisfies a ‘drift’ condition in terms of a non-negative measurable function g(x), it is geometrically ergodic. See, e.g., Nummelin (1984), p. 90. We extend the analysis to show that the distance between the nth-step transition probability and the invariant probability measure is bounded above by ρ^n(a + bg(x)) for some constants a, b > 0 and ρ < 1. The result is then applied to obtain convergence rates to the invariant probability measures for an autoregressive process and a random walk on a half-line.


2004 ◽  
Vol 36 (1) ◽  
pp. 227-242 ◽  
Author(s):  
A. A. Borovkov ◽  
A. Hordijk

Normed ergodicity is a type of strong ergodicity for which convergence of the nth-step transition operator to the stationary operator holds in the operator norm. We derive a new characterization of normed ergodicity and clarify its relation to exponential ergodicity. The existence of a Lyapunov function, together with two conditions on the uniform integrability of the increments of the Markov chain, is shown to be sufficient for normed ergodicity. Conversely, these sufficient conditions are also almost necessary.


1965 ◽  
Vol 2 (1) ◽  
pp. 88-100 ◽  
Author(s):  
J. N. Darroch ◽  
E. Seneta

The time to absorption from the set T of transient states of a Markov chain may be sufficiently long for the probability distribution over T to settle down in some sense to a “quasi-stationary” distribution. Various analogues of the stationary distribution of an irreducible chain are suggested and compared. The reverse process of an absorbing chain is found to be relevant.
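In the finite-state case the quasi-stationary distribution described above is concrete: it is the normalized left eigenvector, for the largest eigenvalue, of the substochastic matrix Q of transitions within the transient set T. A small sketch (the matrix below is a hypothetical example, not from the paper):

```python
import numpy as np

# Transitions among three transient states; each row sums to < 1,
# with the missing mass going to an absorbing state.
Q = np.array([[0.5, 0.3, 0.1],
              [0.2, 0.5, 0.2],
              [0.1, 0.3, 0.4]])

# Quasi-stationary distribution: normalized left Perron eigenvector of Q.
w, v = np.linalg.eig(Q.T)
k = np.argmax(np.real(w))
qsd = np.real(v[:, k])
qsd /= qsd.sum()
print(qsd)   # limiting distribution over T, conditional on non-absorption
```

The corresponding eigenvalue is the asymptotic per-step survival probability, which is why this vector describes the distribution "settling down" before absorption.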


2012 ◽  
Vol 28 (6) ◽  
pp. 1165-1185 ◽  
Author(s):  
Brendan K. Beare

We study the dependence properties of stationary Markov chains generated by Archimedean copulas. Under some simple regularity conditions, we show that regular variation of the Archimedean generator at zero and one implies geometric ergodicity of the associated Markov chain. We verify our assumptions for a range of Archimedean copulas used in applications.
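Such a chain is easy to simulate for a concrete Archimedean family. The sketch below uses the Clayton copula (one family covered by regular-variation conditions of this kind; the parameter value is my choice), drawing each step by inverting the conditional copula C(v | u):

```python
import numpy as np

def clayton_chain(n, theta=2.0, seed=0):
    """Stationary Markov chain with Uniform(0,1) marginals whose joint law
    of (U_t, U_{t+1}) is the Clayton copula with parameter theta > 0.
    Each step inverts the conditional distribution C(v | u)."""
    rng = np.random.default_rng(seed)
    u = np.empty(n)
    u[0] = rng.uniform()
    for t in range(1, n):
        w = rng.uniform()
        u[t] = ((w ** (-theta / (theta + 1.0)) - 1.0)
                * u[t - 1] ** (-theta) + 1.0) ** (-1.0 / theta)
    return u

u = clayton_chain(20_000)
print(u.mean())   # marginals are Uniform(0,1), so the mean sits near 0.5
```

Geometric ergodicity of chains like this one is what the paper's regular-variation conditions on the Archimedean generator deliver.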


1994 ◽  
Vol 31 (3) ◽  
pp. 829-833 ◽  
Author(s):  
Jean B. Lasserre

We give formulas for updating both the steady-state probability distribution and the fundamental matrices of a singularly perturbed Markov chain. These formulas generalize Schweitzer's regular-perturbation formulas to the case of singular perturbations.

