FUNDAMENTAL MATRIX OF TRANSIENT QBD GENERATOR WITH FINITE STATES AND LEVEL DEPENDENT TRANSITIONS

2009 ◽  
Vol 26 (05) ◽  
pp. 697-714 ◽  
Author(s):  
YANG WOO SHIN

The fundamental matrix plays an important role in finite-state Markov chains: it yields many characteristic quantities such as the stationary distribution, the expected amount of time spent in the transient states, and absorption probabilities. In this paper, the fundamental matrix of a finite-state quasi-birth-and-death (QBD) process with an absorbing state and level-dependent transitions is considered. We show that each block component of the fundamental matrix can be expressed in matrix product form and present an algorithm for computing the fundamental matrix. Some applications with numerical results are also presented.
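As a concrete illustration of the role the fundamental matrix plays (toy numbers for a generic discrete-time absorbing chain, not the paper's QBD setting):

```python
import numpy as np

# Toy absorbing chain, not the paper's QBD model: Q holds transient-to-transient
# transition probabilities, R the transient-to-absorbing probabilities.
Q = np.array([[0.0, 0.5, 0.2],
              [0.3, 0.0, 0.4],
              [0.1, 0.2, 0.0]])
R = np.array([[0.3], [0.3], [0.7]])   # single absorbing state

# Fundamental matrix N = (I - Q)^{-1}; N[i, j] is the expected number of
# visits to transient state j starting from transient state i.
N = np.linalg.inv(np.eye(3) - Q)

expected_steps = N @ np.ones(3)       # expected time to absorption from each state
absorption_probs = N @ R              # all ones here: absorption is certain
```

The same quantities are what the paper's block-structured algorithm computes when Q itself carries the level-dependent QBD structure.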

2003 ◽  
Vol 17 (4) ◽  
pp. 487-501 ◽  
Author(s):  
Yang Woo Shin ◽  
Bong Dae Choi

We consider a single-server queue with exponential service times and two types of arrivals: positive and negative. Positive customers are regular ones who form a queue, and a negative arrival has the effect of removing a positive customer from the system. In many applications it may be more appropriate to assume dependence between positive and negative arrivals. To reflect this dependence, we assume that the positive and negative arrivals are governed by a finite-state Markov chain with two absorbing states, say 0 and 0′. Absorption into state 0 or 0′ corresponds to the arrival of a positive or a negative customer, respectively. The Markov chain is then instantly restarted in a transient state, where the selection of the new state is allowed to depend on the state from which absorption occurred. The Laplace–Stieltjes transforms (LSTs) of the sojourn time distribution of a customer, jointly with the probability that the customer completes his service without being removed, are derived under combinations of the service disciplines FCFS and LCFS and the removal strategies RCE and RCH. Phase-type service distributions are also considered.
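The absorbing-chain arrival mechanism can be sketched numerically (illustrative transition probabilities, not taken from the paper): the absorption probabilities into 0 versus 0′ determine the type of the next arrival as a function of the restart state.

```python
import numpy as np

# Illustrative numbers only: a Markov chain on transient states {1, 2} with
# two absorbing states, 0 (next arrival positive) and 0' (next arrival negative).
Q = np.array([[0.4, 0.3],
              [0.2, 0.5]])            # transient-to-transient probabilities
R = np.array([[0.2, 0.1],            # columns: absorb at 0, absorb at 0'
              [0.1, 0.2]])

N = np.linalg.inv(np.eye(2) - Q)     # fundamental matrix of the arrival chain
B = N @ R                            # B[i, k] = P(absorbed at k | restart in i)
# B[i, 0]: probability the next arrival is positive given restart state i.
# Choosing the restart state based on where the last absorption occurred is
# what induces the dependence between positive and negative arrivals.
```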


1968 ◽  
Vol 5 (2) ◽  
pp. 401-413 ◽  
Author(s):  
Paul J. Schweitzer

A perturbation formalism is presented which shows how the stationary distribution and fundamental matrix of a Markov chain containing a single irreducible set of states change as the transition probabilities vary. Expressions are given for the partial derivatives of the stationary distribution and fundamental matrix with respect to the transition probabilities. Semi-group properties of the generators of transformations from one Markov chain to another are investigated. It is shown that a perturbation formalism exists in the multiple subchain case if and only if the change in the transition probabilities does not alter the number of, or intermix, the various subchains. The formalism is presented when this condition is satisfied.
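A finite-difference sanity check of this kind of sensitivity result (a generic 3-state chain with illustrative numbers) compares the classical first-order term π Δ Z, with Z the fundamental matrix of the unperturbed chain, against a directly recomputed stationary distribution:

```python
import numpy as np

def stationary(P):
    # left eigenvector of P for eigenvalue 1, normalized to a distribution
    w, v = np.linalg.eig(P.T)
    pi = np.real(v[:, np.argmin(np.abs(w - 1))])
    return pi / pi.sum()

# Generic irreducible 3-state chain (illustrative numbers).
P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.3, 0.3, 0.4]])
pi = stationary(P)

# Fundamental matrix Z = (I - P + 1 pi)^{-1}; the first-order change of the
# stationary distribution in direction Delta (zero row sums) is pi @ Delta @ Z.
Z = np.linalg.inv(np.eye(3) - P + np.outer(np.ones(3), pi))
Delta = np.array([[0.1, -0.1, 0.0],
                  [0.0,  0.0, 0.0],
                  [0.0,  0.0, 0.0]])
eps = 1e-6
predicted = pi + eps * (pi @ Delta @ Z)
actual = stationary(P + eps * Delta)
```

The agreement is to second order in eps, as the perturbation formalism predicts.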


2007 ◽  
Vol 21 (3) ◽  
pp. 381-400 ◽  
Author(s):  
Bernd Heidergott ◽  
Arie Hordijk ◽  
Miranda van Uitert

This article provides series expansions of the stationary distribution of a finite Markov chain, which lead to an efficient numerical algorithm for computing it. Numerical examples are given to illustrate the performance of the algorithm.
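As a baseline against which such expansion algorithms are measured (this is plain power iteration, not the authors' series expansion): the iterates π₀Pⁿ converge geometrically for an ergodic finite chain.

```python
import numpy as np

# Baseline only, not the series expansion of the article: power iteration
# pi_n = pi_0 @ P^n converges geometrically for an ergodic finite chain.
P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.3, 0.3, 0.4]])
pi = np.array([1.0, 0.0, 0.0])
for _ in range(200):
    pi = pi @ P                      # one step of the iteration
# pi now satisfies pi = pi @ P to floating-point accuracy
```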


2021 ◽  
Vol 0 (0) ◽  
Author(s):  
Nikolaos Halidias

In this note we study the probability of and the mean time to absorption for discrete-time Markov chains. In particular, we are interested in estimating the mean time to absorption when absorption is not certain, and we connect it with some other known results. By computing a suitable probability generating function, we are able to estimate the mean time to absorption when absorption is not certain, giving some applications concerning the random walk. Furthermore, we investigate the probability that a Markov chain reaches a set A before a set B, and we generalize this result to a sequence of sets A_1, A_2, …, A_k.
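The "reach A before B" question has a simple finite-state analogue (a gambler's-ruin walk, chosen here as an illustration of the random-walk applications mentioned): the hitting probabilities solve a linear system.

```python
import numpy as np

# Gambler's-ruin illustration of "reach A before B": states 0..N with A = {0}
# and B = {N} absorbing; h[i] = P(reach 0 before N | start at i + 1) solves
# (I - Q) h = r, where r collects one-step jumps from the interior into A.
N, p = 5, 0.4                        # walk goes up with prob p, down with 1 - p
Q = np.zeros((N - 1, N - 1))         # interior states 1..N-1
for i in range(N - 1):
    if i > 0:
        Q[i, i - 1] = 1 - p
    if i < N - 2:
        Q[i, i + 1] = p
r = np.zeros(N - 1)
r[0] = 1 - p                         # from state 1, a down-step lands in A
h = np.linalg.solve(np.eye(N - 1) - Q, r)
```

For p ≠ 1/2 the solution matches the closed form ((q/p)^i − (q/p)^N) / (1 − (q/p)^N) with q = 1 − p.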


1993 ◽  
Vol 25 (01) ◽  
pp. 82-102
Author(s):  
M. G. Nair ◽  
P. K. Pollett

In a recent paper, van Doorn (1991) explained how quasi-stationary distributions for an absorbing birth-death process could be determined from the transition rates of the process, thus generalizing earlier work of Cavender (1978). In this paper we shall show that many of van Doorn's results can be extended to deal with an arbitrary continuous-time Markov chain over a countable state space, consisting of an irreducible class, C, and an absorbing state, 0, which is accessible from C. Some of our results are extensions of theorems proved for honest chains in Pollett and Vere-Jones (1992). In Section 3 we prove that a probability distribution on C is a quasi-stationary distribution if and only if it is a µ-invariant measure for the transition function, P. We shall also show that if m is a quasi-stationary distribution for P, then a necessary and sufficient condition for m to be µ-invariant for Q is that P satisfies the Kolmogorov forward equations over C. When the remaining forward equations hold, the quasi-stationary distribution must satisfy a set of ‘residual equations' involving the transition rates into the absorbing state. The residual equations allow us to determine the value of µ for which the quasi-stationary distribution is µ-invariant for P. We also prove some more general results giving bounds on the values of µ for which a convergent measure can be a µ-subinvariant and then µ-invariant measure for P. The remainder of the paper is devoted to the question of when a convergent µ-subinvariant measure, m, for Q is a quasi-stationary distribution. Section 4 establishes a necessary and sufficient condition for m to be a quasi-stationary distribution for the minimal chain. In Section 5 we consider ‘single-exit' chains. We derive a necessary and sufficient condition for there to exist a process for which m is a quasi-stationary distribution. Under this condition all such processes can be specified explicitly through their resolvents. 
The results proved here allow us to conclude that the bounds for µ obtained in Section 3 are, in fact, tight. Finally, in Section 6, we illustrate our results by way of two examples: regular birth-death processes and a pure-birth process with absorption.
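For a small numerical sketch of the central object (toy rates and finitely many states, rather than the countable case treated here): a quasi-stationary distribution arises as the normalized left eigenvector of the generator restricted to C, for the eigenvalue with largest real part.

```python
import numpy as np

# Toy finite birth-death process with absorption at 0: restrict the generator
# to the irreducible class C = {1, ..., 4}; the normalized left Perron
# eigenvector of this sub-generator is the quasi-stationary distribution.
lam, mu = 1.0, 1.5                   # illustrative birth and death rates
n = 4
QC = np.zeros((n, n))
for i in range(n):                   # index i represents state i + 1 of C
    if i < n - 1:
        QC[i, i + 1] = lam           # birth
    if i > 0:
        QC[i, i - 1] = mu            # death within C
    QC[i, i] = -((lam if i < n - 1 else 0.0) + mu)
# state 1 loses rate mu to the absorbing state 0, so row 0 is strictly deficient

w, v = np.linalg.eig(QC.T)
k = np.argmax(w.real)
m = np.real(v[:, k])
m = m / m.sum()                      # quasi-stationary distribution on C
mu_star = -w[k].real                 # decay parameter: m @ QC = -mu_star * m
```

Here mu_star plays the role of the value µ for which m is a µ-invariant measure for Q.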


1991 ◽  
Vol 28 (1) ◽  
pp. 96-103 ◽  
Author(s):  
Daniel P. Heyman

We are given a Markov chain with states 0, 1, 2, ···. We want to get a numerical approximation of the steady-state balance equations. To do this, we truncate the chain, keeping the first n states, make the resulting matrix stochastic in some convenient way, and solve the finite system. The purpose of this paper is to provide some sufficient conditions that imply that as n tends to infinity, the stationary distributions of the truncated chains converge to the stationary distribution of the given chain. Our approach is completely probabilistic, and our conditions are given in probabilistic terms. We illustrate how to verify these conditions with five examples.
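A sketch of the truncation procedure on a birth-death chain (one convenient augmentation; the paper's conditions govern when such truncations converge in general):

```python
import numpy as np

# Truncation sketch: a birth-death chain on {0, 1, 2, ...} with up-probability
# p < 1/2 has the geometric stationary distribution (1 - rho) * rho^i with
# rho = p / (1 - p).  Keep the first n states and fold the lost mass onto the
# last diagonal entry (one convenient way to make the matrix stochastic).
p = 0.3

def truncated_stationary(n):
    P = np.zeros((n, n))
    P[0, 0], P[0, 1] = 1 - p, p
    for i in range(1, n):
        P[i, i - 1] = 1 - p
        if i < n - 1:
            P[i, i + 1] = p
        else:
            P[i, i] = p              # augmentation at the truncation boundary
    w, v = np.linalg.eig(P.T)
    pi = np.real(v[:, np.argmin(np.abs(w - 1))])
    return pi / pi.sum()

pi40 = truncated_stationary(40)
rho = p / (1 - p)
exact = (1 - rho) * rho ** np.arange(40)   # stationary law of the full chain
```

With n = 40 the truncated stationary distribution already agrees with the geometric law to machine precision, since rho^40 is negligible.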

