Perturbation bounds for the stationary probabilities of a finite Markov chain

1984 ◽  
Vol 16 (4) ◽ 
pp. 804-818 ◽  
Author(s):  
Moshe Haviv ◽  
Ludo Van Der Heyden

This paper discusses perturbation bounds for the stationary distribution of a finite indecomposable Markov chain. Existing bounds are reviewed. New bounds are presented which more completely exploit the stochastic features of the perturbation and which also are easily computable. Examples illustrate the tightness of the bounds and their application to bounding the error in the Simon–Ando aggregation technique for approximating the stationary distribution of a nearly completely decomposable Markov chain.
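
A concrete way to see where such bounds come from (an illustration in the spirit of group-inverse bounds, not a reproduction of the paper's specific results) is the identity pi~ - pi = pi~ E A#, valid whenever both P and the perturbed matrix P + E are stochastic, where A# is the group inverse of I - P. The Python/NumPy sketch below checks the identity numerically and evaluates two componentwise bounds that follow from it; the chain and the perturbation E are made-up examples.

    import numpy as np

    def stationary(P):
        # solve pi P = pi together with sum(pi) = 1 (least squares on the stacked system)
        n = P.shape[0]
        A = np.vstack([P.T - np.eye(n), np.ones(n)])
        b = np.r_[np.zeros(n), 1.0]
        return np.linalg.lstsq(A, b, rcond=None)[0]

    def group_inverse(P, pi):
        # A# = (I - P + 1*pi)^{-1} - 1*pi, the group inverse of A = I - P
        W = np.outer(np.ones(len(pi)), pi)
        return np.linalg.inv(np.eye(len(pi)) - P + W) - W

    P = np.array([[0.5, 0.3, 0.2],
                  [0.1, 0.8, 0.1],
                  [0.2, 0.2, 0.6]])
    E = 0.02 * np.array([[-1.0,  0.5,  0.5],     # zero row sums, so P + E is
                         [ 0.5, -1.0,  0.5],     # again a stochastic matrix
                         [ 0.5,  0.5, -1.0]])
    pi, pi_t = stationary(P), stationary(P + E)
    Ash = group_inverse(P, pi)

    # exact identity for stochastic P and P + E
    assert np.allclose(pi_t - pi, pi_t @ E @ Ash)

    # two componentwise bounds on |pi_t - pi|: a crude one, and a sharper one
    # that exploits the zero row sums of E (both follow from the identity above)
    norm_E = np.abs(E).sum(axis=1).max()                       # ||E||_inf
    crude  = norm_E * np.abs(Ash).max(axis=0)
    sharp  = 0.5 * norm_E * (Ash.max(axis=0) - Ash.min(axis=0))
    print(np.abs(pi_t - pi), crude, sharp)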


2019 ◽  
Vol 29 (8) ◽ 
pp. 1431-1449 ◽ 
Author(s):  
John Rhodes ◽  
Anne Schilling

We show that the stationary distribution of a finite Markov chain can be expressed as the sum of certain normal distributions. These normal distributions are associated to planar graphs consisting of a straight line with attached loops. The loops touch only at one vertex either of the straight line or of another attached loop. Our analysis is based on our previous work, which derives the stationary distribution of a finite Markov chain using semaphore codes on the Karnofsky–Rhodes and McCammond expansion of the right Cayley graph of the finite semigroup underlying the Markov chain.


1991 ◽  
Vol 5 (1) ◽  
pp. 43-52 ◽  
Author(s):  
Masakiyo Miyazawa ◽  
J. George Shanthikumar

The loss probabilities of customers in the M^X/GI/1/k and GI/M^X/1/k queues and related models, such as queues with server vacations, are compared with respect to the convex order of several characteristics (for example, the batch size) of the arrival or service process. In the proof, we give a characterization of a truncation expression for the stationary distribution of a finite Markov chain, which is of interest in itself.
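
One standard "truncation" relation for stationary distributions, offered here only as an illustration (whether it matches the characterization used in the paper is an assumption), is the censored-chain identity: the chain watched only on a subset of states has transition matrix P_AA + P_AB (I - P_BB)^{-1} P_BA, and its stationary distribution equals the original one restricted to that subset and renormalized. A short Python/NumPy check on a made-up chain:

    import numpy as np

    def stationary(P):
        # replace one balance equation by the normalization sum(pi) = 1
        n = P.shape[0]
        A = np.eye(n) - P.T
        A[-1, :] = 1.0
        return np.linalg.solve(A, np.eye(n)[-1])

    def censored_chain(P, keep):
        # transition matrix of the chain watched only on the states in `keep`
        drop = [i for i in range(P.shape[0]) if i not in keep]
        PAA, PAB = P[np.ix_(keep, keep)], P[np.ix_(keep, drop)]
        PBA, PBB = P[np.ix_(drop, keep)], P[np.ix_(drop, drop)]
        return PAA + PAB @ np.linalg.inv(np.eye(len(drop)) - PBB) @ PBA

    P = np.array([[0.2, 0.5, 0.3, 0.0],
                  [0.1, 0.4, 0.3, 0.2],
                  [0.4, 0.2, 0.1, 0.3],
                  [0.0, 0.3, 0.5, 0.2]])
    pi = stationary(P)
    keep = [0, 1]
    pi_watched = stationary(censored_chain(P, keep))
    print(np.allclose(pi_watched, pi[keep] / pi[keep].sum()))   # True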


2007 ◽  
Vol 21 (3) ◽  
pp. 381-400 ◽  
Author(s):  
Bernd Heidergott ◽  
Arie Hordijk ◽  
Miranda van Uitert

This article provides series expansions of the stationary distribution of a finite Markov chain, which lead to an efficient numerical algorithm for computing that distribution. Numerical examples illustrate the performance of the algorithm.
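
A well-known expansion of this type, sketched below under the assumption that it reflects the general idea rather than the article's exact algorithm, perturbs P to P + Delta (Delta with zero row sums) and writes the perturbed stationary vector as pi multiplied by the geometric series in Delta D, where D is the deviation matrix of P (the same matrix as the group inverse A# in the first sketch above); truncating the series gives the numerical approximation.

    import numpy as np

    def stationary(P):
        n = P.shape[0]
        A = np.eye(n) - P.T
        A[-1, :] = 1.0                      # one equation replaced by sum(pi) = 1
        return np.linalg.solve(A, np.eye(n)[-1])

    def deviation_matrix(P, pi):
        # D = (I - P + 1*pi)^{-1} - 1*pi
        W = np.outer(np.ones(len(pi)), pi)
        return np.linalg.inv(np.eye(len(pi)) - P + W) - W

    def series_stationary(P, Delta, n_terms):
        # truncated series: pi_Delta ~ pi * sum_{k < n_terms} (Delta D)^k
        pi = stationary(P)
        M = Delta @ deviation_matrix(P, pi)
        term, total = pi.copy(), pi.copy()
        for _ in range(1, n_terms):
            term = term @ M
            total = total + term
        return total

    P = np.array([[0.6, 0.4, 0.0],
                  [0.3, 0.4, 0.3],
                  [0.0, 0.5, 0.5]])
    Delta = 0.05 * np.array([[-1.0,  0.5,  0.5],
                             [ 0.5, -1.0,  0.5],
                             [ 0.5,  0.5, -1.0]])
    approx = series_stationary(P, Delta, n_terms=10)
    exact  = stationary(P + Delta)
    print(np.abs(approx - exact).max())     # decreases as n_terms grows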


2013 ◽  
Vol 3 (1) ◽  
pp. 1-17 ◽  
Author(s):  
Wen Li ◽  
Lin Jiang ◽  
Wai-Ki Ching ◽  
Lu-Bin Cui

Multivariate Markov chain models have previously been proposed in the literature for studying multiple dependent categorical data sequences. For a given multivariate Markov chain model, an important problem is to study its joint stationary distribution. In this paper, we use two techniques to present some perturbation bounds for the joint stationary distribution vector of a multivariate Markov chain with s categorical sequences. Numerical examples demonstrate the stability of the model and the effectiveness of our perturbation bounds.
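
For context, a commonly used multivariate Markov chain model for s categorical sequences (assumed here; the paper's exact formulation may differ in details) couples the sequences through x^(j)_{t+1} = sum_k lambda_{jk} P^(jk) x^(k)_t, and the joint stationary vectors solve the corresponding fixed-point equations. The Python/NumPy sketch below computes them by simple iteration; all matrices and weights are illustrative.

    import numpy as np

    def mmc_stationary(Lam, P, n_iter=1000, tol=1e-12):
        # fixed-point iteration for x[j] = sum_k Lam[j, k] * P[j][k] @ x[k];
        # Lam has nonnegative rows summing to 1, each P[j][k] is column-stochastic
        s, m = Lam.shape[0], P[0][0].shape[0]
        x = [np.full(m, 1.0 / m) for _ in range(s)]
        for _ in range(n_iter):
            x_new = [sum(Lam[j, k] * (P[j][k] @ x[k]) for k in range(s))
                     for j in range(s)]
            if max(np.abs(x_new[j] - x[j]).max() for j in range(s)) < tol:
                break
            x = x_new
        return x_new

    # two sequences (s = 2) over two categories (m = 2)
    P = [[np.array([[0.7, 0.2], [0.3, 0.8]]), np.array([[0.5, 0.4], [0.5, 0.6]])],
         [np.array([[0.6, 0.1], [0.4, 0.9]]), np.array([[0.9, 0.3], [0.1, 0.7]])]]
    Lam = np.array([[0.8, 0.2],
                    [0.3, 0.7]])
    for xj in mmc_stationary(Lam, P):
        print(xj, xj.sum())                 # each joint stationary vector sums to 1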


2021 ◽  
Author(s):  
Andrea Marin ◽  
Carla Piazza ◽  
Sabina Rossi

In this paper, we deal with the lumpability approach to the state space explosion problem inherent in computing the stationary performance indices of large stochastic models. The lumpability method is based on a state aggregation technique and applies to Markov chains exhibiting some structural regularity. Moreover, it allows one to compute the exact values of the stationary performance indices efficiently when the model is actually lumpable. The notion of quasi-lumpability is based on the idea that a Markov chain can be altered by relatively small perturbations of the transition rates so that the resulting Markov chain is lumpable; in this case, only upper and lower bounds on the performance indices can be derived. Here we introduce a novel notion of quasi-lumpability, named proportional lumpability, which extends the original definition of lumpability but, unlike general quasi-lumpability, allows one to derive exact stationary performance indices for the original process. We then introduce the notion of proportional bisimilarity for terms of the performance process algebra PEPA. Proportional bisimilarity induces proportional lumpability on the underlying continuous-time Markov chains. Finally, we prove some compositionality results and show the applicability of our theory through examples.
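
For illustration only, the sketch below checks ordinary (strong) lumpability of a CTMC generator with respect to a partition: every state of a block must have the same aggregated rate into every other block, and the lumped generator is then well defined. The generator and partition are made-up examples; proportional lumpability, as described above, relaxes this equality up to state-dependent proportionality factors and is not implemented here.

    import numpy as np

    def is_lumpable(Q, partition, tol=1e-9):
        # ordinary lumpability: within each block, all states must have equal
        # aggregated rates into every other block
        for I, BI in enumerate(partition):
            for J, BJ in enumerate(partition):
                if I == J:
                    continue
                rates = [Q[i, BJ].sum() for i in BI]
                if max(rates) - min(rates) > tol:
                    return False
        return True

    def lumped_generator(Q, partition):
        # valid when is_lumpable(Q, partition) holds
        m = len(partition)
        Qh = np.zeros((m, m))
        for I, BI in enumerate(partition):
            for J, BJ in enumerate(partition):
                if I != J:
                    Qh[I, J] = Q[BI[0], BJ].sum()   # same for every state in BI
            Qh[I, I] = -Qh[I].sum()
        return Qh

    # four-state CTMC that lumps into the two macro-states {0, 1} and {2, 3}
    Q = np.array([[-3.0,  1.0,  1.0,  1.0],
                  [ 2.0, -4.0,  0.5,  1.5],
                  [ 1.0,  0.0, -2.0,  1.0],
                  [ 0.5,  0.5,  3.0, -4.0]])
    blocks = [[0, 1], [2, 3]]
    print(is_lumpable(Q, blocks))          # True: aggregated rates match
    print(lumped_generator(Q, blocks))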


1991 ◽  
Vol 28 (1) ◽  
pp. 96-103 ◽  
Author(s):  
Daniel P. Heyman

We are given a Markov chain with states 0, 1, 2, ···. We want to compute a numerical approximation to the solution of the steady-state balance equations. To do this, we truncate the chain, keeping the first n states, make the resulting matrix stochastic in some convenient way, and solve the finite system. The purpose of this paper is to provide sufficient conditions implying that, as n tends to infinity, the stationary distributions of the truncated chains converge to the stationary distribution of the given chain. Our approach is completely probabilistic, and our conditions are stated in probabilistic terms. We illustrate how to verify these conditions with five examples.
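
A minimal sketch of this truncate-and-augment procedure on a chain whose stationary distribution is known in closed form: a reflecting random walk with up-probability p < 1/2, whose stationary law is geometric. The example chain and the augmentation choices are illustrative assumptions, not taken from the paper.

    import numpy as np

    def stationary(P):
        n = P.shape[0]
        A = np.eye(n) - P.T
        A[-1, :] = 1.0                      # one equation replaced by sum(pi) = 1
        return np.linalg.solve(A, np.eye(n)[-1])

    def truncated_rw(p, n, augment="last"):
        # first n + 1 states of a reflecting random walk on {0, 1, 2, ...} with
        # up-probability p and down-probability 1 - p; the mass that would leave
        # the truncated set is returned to the last state ('last') or to state 0
        q = 1.0 - p
        P = np.zeros((n + 1, n + 1))
        P[0, 0], P[0, 1] = q, p
        for i in range(1, n):
            P[i, i - 1], P[i, i + 1] = q, p
        P[n, n - 1] = q
        P[n, n if augment == "last" else 0] += p
        return P

    p, n = 0.3, 25
    r = p / (1.0 - p)
    exact  = (1.0 - r) * r ** np.arange(n + 1)   # geometric stationary law, restricted
    approx = stationary(truncated_rw(p, n))
    print(np.abs(approx - exact).max())          # small, and shrinking as n grows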

