Stationary Distributions for Discrete Time Markov Chains

Author(s):  
Rinaldo B. Schinazi


1967 ◽  
Vol 4 (1) ◽  
pp. 192-196 ◽  
Author(s):  
J. N. Darroch ◽  
E. Seneta

In a recent paper, the authors have discussed the concept of quasi-stationary distributions for absorbing Markov chains having a finite state space, with the further restriction of discrete time. The purpose of the present note is to summarize the analogous results when the time parameter is continuous.


1966 ◽  
Vol 3 (2) ◽  
pp. 403-434 ◽  
Author(s):  
E. Seneta ◽  
D. Vere-Jones

Distributions appropriate to the description of long-term behaviour within an irreducible class of discrete-time denumerably infinite Markov chains are considered. The first four sections are concerned with general results, extending recent work on this subject. In Section 5 these are applied to the branching process, and give refinements of several well-known results. The last section deals with the semi-infinite random walk with an absorbing barrier at the origin.


2016 ◽  
Vol 53 (1) ◽  
pp. 231-243 ◽  
Author(s):  
S. McKinlay ◽  
K. Borovkov

We consider a class of discrete-time Markov chains with state space [0, 1] and the following dynamics. At each time step, first the direction of the next transition is chosen at random with probability depending on the current location. Then the length of the jump is chosen independently as a random proportion of the distance to the respective end point of the unit interval, the distributions of the proportions being fixed for each of the two directions. Chains of that kind have been the subject of a number of studies and are of interest for some applications. Under simple broad conditions, we establish the ergodicity of such Markov chains and then derive closed-form expressions for the stationary densities of the chains when the proportions are beta distributed with the first parameter equal to 1. Examples demonstrating the range of stationary distributions for processes described by this model are given, and an application to a robot coverage algorithm is discussed.
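The dynamics described in this abstract are simple enough to simulate directly. The following is a minimal Python sketch; the direction probability p(x) = 1 − x and the symmetric Beta(1, θ) proportions are illustrative assumptions for this sketch, not choices taken from the paper.

```python
import random

# Hypothetical instance of the chain on [0, 1] described above:
# from state x, jump right with probability p(x) = 1 - x (an assumed
# choice), covering a Beta(1, theta)-distributed proportion of the
# distance to the relevant endpoint of the unit interval.

def beta_1_theta(theta, rng):
    # If U ~ Uniform(0, 1), then 1 - U**(1/theta) ~ Beta(1, theta).
    return 1.0 - rng.random() ** (1.0 / theta)

def step(x, theta_up, theta_down, rng):
    if rng.random() < 1.0 - x:    # assumed direction probability p(x) = 1 - x
        return x + beta_1_theta(theta_up, rng) * (1.0 - x)   # jump right
    return x - beta_1_theta(theta_down, rng) * x             # jump left

def simulate(n_steps, x0=0.5, theta_up=2.0, theta_down=2.0, seed=0):
    rng = random.Random(seed)
    xs, x = [], x0
    for _ in range(n_steps):
        x = step(x, theta_up, theta_down, rng)
        xs.append(x)
    return xs
```

A long run of `simulate` gives an empirical sample whose histogram can be compared against the closed-form stationary densities the paper derives for the Beta(1, ·) case.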


2015 ◽  
Vol 47 (1) ◽  
pp. 83-105 ◽  
Author(s):  
Hiroyuki Masuyama

In this paper we study the augmented truncation of discrete-time block-monotone Markov chains under geometric drift conditions. We first present a bound for the total variation distance between the stationary distributions of an original Markov chain and its augmented truncation. We also obtain such error bounds for more general cases, where the original Markov chain itself is not necessarily block monotone but is blockwise dominated by a block-monotone Markov chain. Finally, we discuss the application of our results to GI/G/1-type Markov chains.
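The augmented-truncation construction this abstract refers to can be sketched on a toy example: truncate a countable-state chain to its first n states and return the probability mass that would leave the truncation to a chosen state, so the truncated matrix stays stochastic. The geometric random walk and the boundary augmentation below are hypothetical illustrations, not the paper's GI/G/1-type setting or its bounds.

```python
import numpy as np

# Toy chain on {0, 1, 2, ...}: up with probability p, down with
# probability 1 - p (reflecting at 0). For p < 1/2 its stationary
# distribution is geometric with ratio r = p / (1 - p).

def truncated_augmented(n, p=0.3):
    """First n states of the walk, with leaked mass returned to state n-1."""
    P = np.zeros((n, n))
    for i in range(n):
        if i + 1 < n:
            P[i, i + 1] = p
        else:
            P[i, i] += p          # augmentation: mass that would leave the
                                  # truncation is placed back on the boundary
        P[i, max(i - 1, 0)] += 1 - p
    return P

def stationary(P):
    # Left eigenvector of a stochastic matrix for eigenvalue 1, sum 1.
    w, v = np.linalg.eig(P.T)
    k = np.argmin(np.abs(w - 1.0))
    pi = np.real(v[:, k])
    return pi / pi.sum()
```

For this birth-death example the truncation's stationary law converges to the geometric law of the original chain as n grows, which is the kind of error the paper's total-variation bounds quantify.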


Author(s):  
Richard J. Boucherie

This note introduces quasi-local-balance for discrete-time Markov chains with absorbing states. From quasi-local-balance, product-form quasi-stationary distributions are derived by analogy with product-form stationary distributions for Markov chains that satisfy local balance.


1965 ◽  
Vol 2 (1) ◽  
pp. 88-100 ◽  
Author(s):  
J. N. Darroch ◽  
E. Seneta

The time to absorption from the set T of transient states of a Markov chain may be sufficiently long for the probability distribution over T to settle down in some sense to a “quasi-stationary” distribution. Various analogues of the stationary distribution of an irreducible chain are suggested and compared. The reverse process of an absorbing chain is found to be relevant.
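For a finite chain, the quasi-stationary distribution discussed in this abstract can be computed as the normalized left Perron eigenvector of the transition matrix restricted to the transient states T. The small absorbing chain below is a hypothetical example for illustration, not one taken from the paper.

```python
import numpy as np

# Assumed example: a walk on {0, 1, 2, 3} absorbed at 0, with
# T = {1, 2, 3}. Q holds transitions among transient states only;
# the mass missing from a row leaks to the absorbing state.
Q = np.array([
    [0.0, 0.5, 0.0],   # from 1: to 2 w.p. 0.5 (0.5 leaks to absorption)
    [0.5, 0.0, 0.5],   # from 2: to 1 or 3
    [0.0, 0.5, 0.5],   # from 3: to 2, or stay at 3
])

# Left Perron eigenvector of Q: Q is substochastic and irreducible on T,
# so its Perron root rho is strictly less than 1.
w, v = np.linalg.eig(Q.T)
k = np.argmax(np.real(w))
rho, u = np.real(w[k]), np.real(v[:, k])
qsd = u / u.sum()      # quasi-stationary distribution over T
```

Conditioned on non-absorption, the distribution over T settles down to `qsd`, and `rho` is the per-step survival probability in that regime; this is the finite-state picture the paper's analogues generalize.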
