On explicit form of the stationary distributions for a class of bounded Markov chains

2016, Vol. 53 (1), pp. 231-243
Author(s): S. McKinlay, K. Borovkov

Abstract: We consider a class of discrete-time Markov chains with state space [0, 1] and the following dynamics. At each time step, first the direction of the next transition is chosen at random with probability depending on the current location. Then the length of the jump is chosen independently as a random proportion of the distance to the respective end point of the unit interval, the distributions of the proportions being fixed for each of the two directions. Chains of this kind have been the subject of a number of studies and are of interest for some applications. Under simple broad conditions, we establish the ergodicity of such Markov chains and then derive closed-form expressions for the stationary densities of the chains when the proportions are beta distributed with the first parameter equal to 1. Examples demonstrating the range of stationary distributions for processes described by this model are given, and an application to a robot coverage algorithm is discussed.
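The dynamics described in this abstract are straightforward to simulate. The sketch below uses Beta(1, b) jump proportions, matching the case solved in the paper, but the direction probability p_right(x) = 1 - x and the shape parameters are illustrative choices, not taken from the paper:

```python
import random

def step(x, p_right, b_right, b_left):
    """One transition of the chain on [0, 1]: pick a direction, then jump a
    Beta(1, b)-distributed proportion of the distance to the chosen endpoint."""
    if random.random() < p_right(x):
        v = random.betavariate(1.0, b_right)   # proportion of the distance to 1
        return x + v * (1.0 - x)
    w = random.betavariate(1.0, b_left)        # proportion of the distance to 0
    return x - w * x

def simulate(n, x0=0.5, p_right=lambda x: 1.0 - x, b_right=2.0, b_left=2.0):
    """Long trajectory; its empirical distribution approximates the stationary law."""
    xs = [x0]
    for _ in range(n - 1):
        xs.append(step(xs[-1], p_right, b_right, b_left))
    return xs

traj = simulate(100_000)
mean = sum(traj) / len(traj)   # close to 0.5 for this symmetric parameter choice
```

With these symmetric parameters the dynamics are invariant under x -> 1 - x, so the sample mean of a long run should hover near 0.5.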

1967, Vol. 4 (1), pp. 192-196
Author(s): J. N. Darroch, E. Seneta

In a recent paper, the authors have discussed the concept of quasi-stationary distributions for absorbing Markov chains having a finite state space, with the further restriction of discrete time. The purpose of the present note is to summarize the analogous results when the time parameter is continuous.
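In the finite discrete-time setting this note builds on, the quasi-stationary distribution is the normalized left Perron eigenvector of the substochastic matrix of transitions among the transient states, and power iteration recovers it. A minimal sketch with a hypothetical 3-state transient block:

```python
# Hypothetical substochastic block Q: transitions among the transient states of
# an absorbing chain (row sums < 1; the deficit is the one-step absorption mass).
Q = [[0.5, 0.3, 0.1],
     [0.2, 0.5, 0.2],
     [0.1, 0.3, 0.4]]

def quasi_stationary(Q, iters=500):
    """Power iteration for the left Perron eigenvector of Q, renormalized to a
    probability vector at every step; the limit is the quasi-stationary law."""
    n = len(Q)
    v = [1.0 / n] * n
    for _ in range(iters):
        w = [sum(v[i] * Q[i][j] for i in range(n)) for j in range(n)]
        s = sum(w)
        v = [x / s for x in w]
    return v

qsd = quasi_stationary(Q)
```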


1966, Vol. 3 (2), pp. 403-434
Author(s): E. Seneta, D. Vere-Jones

Distributions appropriate to the description of long-term behaviour within an irreducible class of discrete-time denumerably infinite Markov chains are considered. The first four sections are concerned with general results, extending recent work on this subject. In Section 5 these are applied to the branching process, and give refinements of several well-known results. The last section deals with the semi-infinite random walk with an absorbing barrier at the origin.


1985, Vol. 22 (1), pp. 123-137
Author(s): Hideo Ōsawa

This paper studies the reversibility conditions of stationary Markov chains (discrete-time Markov processes) with general state space. In particular, we investigate the Markov chains having atomic points in the state space. Such processes are often seen in storage models, for example waiting time in a queue, insurance risk reserve, dam content and so on. The necessary and sufficient conditions for reversibility of these processes are obtained. Further, we apply these conditions to some storage models and present some interesting results for single-server queues and a finite insurance risk model.
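For a finite chain the reversibility of the stationary version reduces to the detailed-balance equations pi_i P[i][j] = pi_j P[j][i]. A minimal check, illustrated on a hypothetical birth-death chain (such chains are always reversible) and a cyclic chain (which never is):

```python
def is_reversible(P, pi, tol=1e-12):
    """Detailed-balance test: pi_i * P[i][j] == pi_j * P[j][i] for all i, j."""
    n = len(P)
    return all(abs(pi[i] * P[i][j] - pi[j] * P[j][i]) <= tol
               for i in range(n) for j in range(n))

# Hypothetical birth-death chain on {0, 1, 2} with its stationary distribution.
P = [[0.5, 0.5, 0.0],
     [0.25, 0.5, 0.25],
     [0.0, 0.5, 0.5]]
pi = [0.25, 0.5, 0.25]
```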


1984, Vol. 21 (3), pp. 567-574
Author(s): Atef M. Abdel-Moneim, Frederick W. Leysieffer

Conditions under which a function of a finite, discrete-time Markov chain, X(t), is again Markov are given, when X(t) is not irreducible. These conditions are given in terms of an interrelationship between two partitions of the state space of X(t), the partition induced by the minimal essential classes of X(t) and the partition with respect to which lumping is to be considered.
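The classical sufficient condition in this area is strong lumpability: within each block of the candidate partition, every state must have the same aggregate transition probability into every block. A sketch of that test (the 4-state chain below is hypothetical, not from the paper):

```python
def is_lumpable(P, partition, tol=1e-12):
    """Strong-lumpability test: within each block of the partition, every state
    must have the same aggregate transition probability into every block."""
    for block in partition:
        for target in partition:
            sums = [sum(P[i][j] for j in target) for i in block]
            if max(sums) - min(sums) > tol:
                return False
    return True

# Hypothetical 4-state chain, lumpable with respect to {{0, 1}, {2, 3}}.
P = [[0.1, 0.3, 0.4, 0.2],
     [0.2, 0.2, 0.1, 0.5],
     [0.3, 0.3, 0.2, 0.2],
     [0.5, 0.1, 0.3, 0.1]]
```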


2000, Vol. 37 (3), pp. 795-806
Author(s): Laurent Truffet

We propose in this paper two methods to compute Markovian bounds for monotone functions of a discrete-time homogeneous Markov chain evolving in a totally ordered state space. The main interest of such methods is to provide algorithms that simplify the analysis of transient characteristics such as the output process of a queue, or the sojourn time in a subset of states. The construction of bounds is based on two kinds of results: well-known results on stochastic comparison between Markov chains with the same state space; and the fact that in some cases a function of a Markov chain is again a homogeneous Markov chain but with a smaller state space. Computation of the bounds in principle requires knowledge of the whole initial model; however, only part of this data is needed at each step of the algorithms.
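The stochastic comparisons underlying such bounds can be tested row by row: on a totally ordered state space, one kernel is dominated by another in the usual stochastic order when every tail sum of each row is bounded by the matching tail sum. A minimal sketch with hypothetical 2-state kernels:

```python
def st_dominated(P, Q, tol=1e-12):
    """Row-wise test for P <=_st Q on a totally ordered state space: every tail
    sum of each row of P must be bounded by the matching tail sum of Q."""
    n = len(P)
    return all(sum(P[i][k:]) <= sum(Q[i][k:]) + tol
               for i in range(n) for k in range(n))

# Hypothetical kernels: Q shifts mass toward higher states in every row.
P = [[0.6, 0.4], [0.5, 0.5]]
Q = [[0.4, 0.6], [0.3, 0.7]]
```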


1974, Vol. 11 (4), pp. 726-741
Author(s): Richard L. Tweedie

The quasi-stationary behaviour of a Markov chain which is φ-irreducible when restricted to a subspace of a general state space is investigated. It is shown that previous work on the case where the subspace is finite or countably infinite can be extended to general chains, and that the existence of certain quasi-stationary limits as honest distributions is equivalent to the restricted chain being R-positive with the unique R-invariant measure satisfying a certain finiteness condition.


2015, Vol. 47 (1), pp. 83-105
Author(s): Hiroyuki Masuyama

In this paper we study the augmented truncation of discrete-time block-monotone Markov chains under geometric drift conditions. We first present a bound for the total variation distance between the stationary distributions of an original Markov chain and its augmented truncation. We also obtain such error bounds for more general cases, where an original Markov chain itself is not necessarily block monotone but is blockwise dominated by a block-monotone Markov chain. Finally, we discuss the application of our results to GI/G/1-type Markov chains.
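Augmented truncation itself is simple to state: cut the transition kernel to a finite north-west corner and return the deleted mass of each row to some retained state. A sketch of last-column augmentation, using a hypothetical reflected random walk with downward drift as the original chain (an easy stand-in with geometric drift, not the GI/G/1-type setting of the paper):

```python
def augmented_truncation(p, N):
    """(N+1)x(N+1) last-column augmentation of a kernel p(i, j) on {0, 1, ...}:
    the tail mass deleted from each row is returned to the retained state N."""
    P = [[p(i, j) for j in range(N + 1)] for i in range(N + 1)]
    for i in range(N + 1):
        P[i][N] += 1.0 - sum(P[i])
    return P

def stationary(P, iters=2000):
    """Power iteration for the stationary row vector of a finite stochastic matrix."""
    n = len(P)
    v = [1.0 / n] * n
    for _ in range(iters):
        v = [sum(v[i] * P[i][j] for i in range(n)) for j in range(n)]
    return v

# Hypothetical original chain: reflected walk, down w.p. 0.6, up w.p. 0.4,
# so the stationary tail decays geometrically and truncation error is small.
def p(i, j):
    if i == 0:
        return 0.6 if j == 0 else (0.4 if j == 1 else 0.0)
    return 0.6 if j == i - 1 else (0.4 if j == i + 1 else 0.0)

P8 = augmented_truncation(p, 8)
```

For this walk the full chain has stationary probabilities proportional to (2/3)^i, so the truncated approximation of the mass at 0 should be close to 1/3.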


2011, Vol. 2011, pp. 1-15
Author(s): Mario Lefebvre, Moussa Kounta

We consider a discrete-time Markov chain with state space {1, 1+Δx, …, 1+kΔx = N}. We compute explicitly the probability p_j that the chain, starting from 1+jΔx, will hit N before 1, as well as the expected number d_j of transitions needed to end the game. In the limit when Δx and the time Δt between transitions decrease to zero appropriately, the Markov chain tends to a geometric Brownian motion. We show that p_j and d_jΔt tend to the corresponding quantities for the geometric Brownian motion.
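For the nearest-neighbour special case (a gambler's-ruin chain; this transition structure is an illustrative assumption, not the discretization used in the paper), the hitting probabilities p_j solve a first-step recursion with the classical closed form:

```python
def hitting_probability(k, up):
    """p_j = P(hit k before 0 | start at j) for a chain on {0, ..., k} that moves
    one step up w.p. `up` and one step down w.p. 1 - up, via the standard
    gambler's-ruin solution of the first-step equations."""
    down = 1.0 - up
    if up == down:
        return [j / k for j in range(k + 1)]       # symmetric case: linear in j
    r = down / up
    return [(r**j - 1.0) / (r**k - 1.0) for j in range(k + 1)]
```

For example, with k = 2 and upward probability 0.6, starting from the middle state the game is decided in one step, so the hitting probability is just 0.6.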

