Quasistationarity in continuous-time Markov chains where absorption is not certain

2000 ◽  
Vol 37 (2) ◽  
pp. 598-600
Author(s):  
S. J. Darlington ◽  
P. K. Pollett

In a recent paper [4] it was shown that, for an absorbing Markov chain where absorption is not guaranteed, the state probabilities at time t conditional on non-absorption by t generally depend on t. Conditions were derived under which there can be no initial distribution such that the conditional state probabilities are stationary. The purpose of this note is to show that these conditions can be relaxed completely: we prove, once and for all, that there are no circumstances under which a quasistationary distribution can admit a stationary conditional interpretation.
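The phenomenon can be seen numerically in a toy finite chain where a second absorbing state stands in for the event that absorption never occurs. The generator below is invented for illustration (it is not from the paper); conditioning on non-absorption by t gives state probabilities that visibly drift with t.

```python
import numpy as np
from scipy.linalg import expm

# Toy chain: state 0 is the absorbing state of interest, states 1 and 2 are
# transient, and state 3 is a second absorbing state standing in for
# "absorption in 0 never happens".  All rates are invented for illustration.
Q = np.array([
    [0.0,  0.0,  0.0,  0.0],   # state 0: absorbing
    [1.0, -3.0,  1.5,  0.5],   # state 1
    [0.5,  1.0, -2.5,  1.0],   # state 2
    [0.0,  0.0,  0.0,  0.0],   # state 3: absorbing ("escape")
])
p0 = np.array([0.0, 1.0, 0.0, 0.0])    # start in state 1

cond = {}
for t in (0.5, 2.0, 10.0):
    p = p0 @ expm(Q * t)
    cond[t] = p[1:] / (1.0 - p[0])     # distribution given non-absorption in 0
    print(t, np.round(cond[t], 4))
```

As t grows the conditional distribution puts ever more mass on the escape state, so it cannot be stationary, in line with the note's conclusion.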


1999 ◽  
Vol 36 (1) ◽  
pp. 268-272 ◽  
Author(s):  
P. K. Pollett

Recently, Elmes et al. (see [2]) proposed a definition of a quasistationary distribution to accommodate absorbing Markov chains for which absorption occurs with probability less than 1. We will show that the probabilistic interpretation pertaining to cases where absorption is certain (see [13]) does not hold in the present context. We prove that the state probabilities at time t conditional on absorption taking place after t, generally depend on t. Conditions are derived under which there is no initial distribution such that the conditional state probabilities are stationary.


1968 ◽  
Vol 5 (3) ◽  
pp. 669-678 ◽  
Author(s):  
Jozef L. Teugels

A general proposition is proved stating that the exponential ergodicity of a stationary Markov chain is preserved for derived Markov chains as defined by Cohen [2], [3]. An application to a certain type of continuous time Markov chains is included.


1993 ◽  
Vol 30 (3) ◽  
pp. 518-528 ◽  
Author(s):  
Frank Ball ◽  
Geoffrey F. Yeo

We consider lumpability for continuous-time Markov chains and provide a simple probabilistic proof of necessary and sufficient conditions for strong lumpability, valid in circumstances not covered by known theory. We also consider the following marginalisability problem. Let {X(t)} = {(X1(t), X2(t), …, Xm(t))} be a continuous-time Markov chain. Under what conditions are the marginal processes {X1(t)}, {X2(t)}, …, {Xm(t)} also continuous-time Markov chains? We show that this is related to lumpability and, if no two of the marginal processes can jump simultaneously, then they are continuous-time Markov chains if and only if they are mutually independent. Applications to ion channel modelling and birth–death processes are discussed briefly.
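The strong-lumpability condition itself is easy to state and check: within each block of the partition, every state must have the same aggregate jump rate into every other block. A small sketch (the generator and partition are invented, not taken from the paper):

```python
import numpy as np

def is_strongly_lumpable(Q, blocks, tol=1e-9):
    """Check strong lumpability of generator Q w.r.t. a partition: within
    each block, every state must have the same aggregate rate into every
    other block.  Returns (ok, lumped_Q), with lumped_Q the quotient
    generator when ok is True."""
    k = len(blocks)
    lumped = np.zeros((k, k))
    for a, A in enumerate(blocks):
        for b, B in enumerate(blocks):
            if a == b:
                continue
            rates = [Q[i, B].sum() for i in A]   # aggregate rate A-state -> block B
            if max(rates) - min(rates) > tol:
                return False, None
            lumped[a, b] = rates[0]
    np.fill_diagonal(lumped, -lumped.sum(axis=1))
    return True, lumped

Q = np.array([[-2.0, 1.0, 1.0],
              [0.5, -1.5, 1.0],
              [2.0, 1.0, -3.0]])
ok, LQ = is_strongly_lumpable(Q, [[0, 1], [2]])
print(ok)   # states 0 and 1 both jump to state 2 at rate 1, so this lumps
print(LQ)
```

The lumped process is again a continuous-time Markov chain with generator `LQ`.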


1989 ◽  
Vol 26 (3) ◽  
pp. 643-648 ◽  
Author(s):  
A. I. Zeifman

We consider a non-homogeneous continuous-time Markov chain X(t) with countable state space. Definitions of uniform and strong quasi-ergodicity are introduced. The forward Kolmogorov system for X(t) is considered as a differential equation in the space of sequences l1. Sufficient conditions for uniform quasi-ergodicity are deduced from this equation. We consider conditions of uniform and strong ergodicity in the case of proportional intensities.
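The forward Kolmogorov system for such a chain can be integrated directly on a truncated state space. The sketch below (rates, modulation and truncation level are all illustrative choices) treats the "proportional intensities" case, where every intensity is scaled by one positive time factor f(t):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Truncated birth-death generator; all intensities are scaled by a common
# positive factor f(t) -- the "proportional intensities" case.
n = 20
Q0 = np.zeros((n, n))
for i in range(n):
    if i + 1 < n:
        Q0[i, i + 1] = 1.0           # birth rate
    if i > 0:
        Q0[i, i - 1] = 2.0           # death rate
    Q0[i, i] = -Q0[i].sum()

f = lambda t: 1.0 + 0.5 * np.sin(t)  # time modulation, bounded away from 0

def forward(t, p):
    # forward Kolmogorov system: p'(t) = p(t) Q(t), with Q(t) = f(t) Q0
    return f(t) * (p @ Q0)

p0 = np.zeros(n); p0[5] = 1.0        # start in state 5
sol = solve_ivp(forward, (0.0, 10.0), p0, rtol=1e-8, atol=1e-10)
p_end = sol.y[:, -1]
print(np.round(p_end[:4], 4), round(p_end.sum(), 6))
```

With deaths dominating births, the solution concentrates near state 0, regardless of the modulation, which is the kind of forgetting of the initial condition that ergodicity conditions formalise.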


1988 ◽  
Vol 2 (2) ◽  
pp. 267-268
Author(s):  
Sheldon M. Ross

In [1] an approach to approximate the transition probabilities and mean occupation times of a continuous-time Markov chain is presented. For the chain under consideration, let Pij(t) and Tij(t) denote respectively the probability that it is in state j at time t, and the total time spent in j by time t, in both cases conditional on the chain starting in state i. Also, let Y1,…, Yn be independent exponential random variables each with rate λ = n/t, which are also independent of the Markov chain.
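The identity behind this randomisation can be checked numerically: averaging the transition matrix over the random time Y1 + · · · + Yn gives the resolvent power (I − (t/n)Q)^(−n), which converges to the true transition matrix e^{tQ} as n grows. The 3-state generator below is an arbitrary example, not from [1]:

```python
import numpy as np
from scipy.linalg import expm

# Averaging e^{QS} over S = Y1 + ... + Yn with Yi ~ Exp(n/t) gives
# (I - (t/n) Q)^{-n}; compare it with the exact expm(t Q).
Q = np.array([[-1.0, 1.0, 0.0],
              [0.5, -1.5, 1.0],
              [0.0, 2.0, -2.0]])
t = 1.5
exact = expm(t * Q)
I = np.eye(3)

errs = {}
for n in (4, 16, 64):
    approx = np.linalg.matrix_power(np.linalg.inv(I - (t / n) * Q), n)
    errs[n] = np.abs(approx - exact).max()
    print(n, round(errs[n], 6))
```

The maximal entrywise error shrinks roughly like 1/n, and each n requires only a matrix inversion rather than a matrix exponential.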


1983 ◽  
Vol 20 (3) ◽  
pp. 505-512
Author(s):  
Russell Gerrard

The classical condition for regularity of a Markov chain is extended to include semi-Markov chains. In addition, for any given semi-Markov chain, we find Markov chains which exhibit identical regularity properties. This is done either (i) by transforming the state space or, alternatively, (ii) by imposing conditions on the holding-time distributions. Brief consideration is given to the problem of extending the results to processes other than semi-Markov chains.
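In the simplest Markov case the classical regularity criterion is concrete: a pure birth chain with rates λn is regular (non-explosive) precisely when Σ 1/λn diverges, since Σ 1/λn is the expected time to run through all states. A numerical sketch (rates chosen for illustration only):

```python
import numpy as np

# Expected time for a pure birth chain to pass through states 1..N-1 is the
# sum of mean holding times 1/lam_n.  Rates lam_n = n give a divergent
# (harmonic) sum: regular.  Rates lam_n = n^2 give a convergent sum: the
# chain explodes in finite expected time.
N = 10**6
n = np.arange(1, N, dtype=float)
t_linear = np.sum(1.0 / n)        # grows like log N: no explosion
t_quadratic = np.sum(1.0 / n**2)  # approaches pi^2 / 6: explosion
print(round(t_linear, 3), round(t_quadratic, 6))
```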


2015 ◽  
Vol 47 (2) ◽  
pp. 378-401 ◽  
Author(s):  
B. Eriksson ◽  
M. R. Pistorius

This paper is concerned with the solution of the optimal stopping problem associated to the value of American options driven by continuous-time Markov chains. The value-function of an American option in this setting is characterised as the unique solution (in a distributional sense) of a system of variational inequalities. Furthermore, with continuous and smooth fit principles not applicable in this discrete state-space setting, a novel explicit characterisation is provided of the optimal stopping boundary in terms of the generator of the underlying Markov chain. Subsequently, an algorithm is presented for the valuation of American options under Markov chain models. By application to a suitably chosen sequence of Markov chains, the algorithm provides an approximate valuation of an American option under a class of Markov models that includes diffusion models, exponential Lévy models, and stochastic differential equations driven by Lévy processes. Numerical experiments for a range of different models suggest that the approximation algorithm is flexible and accurate. A proof of convergence is also provided.
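A minimal sketch of the general setting (not the authors' algorithm): with the underlying modelled as a finite-state CTMC with generator Q, an American put can be valued by implicit Euler time stepping on the discounted backward equation, enforcing the early-exercise constraint at each step. The grid, jump rates, strike and maturity below are all invented:

```python
import numpy as np

# Price an American put on a CTMC underlying by implicit Euler stepping on
# the variational inequality, projecting onto the payoff after each step.
S = np.linspace(20.0, 180.0, 81)         # price grid = CTMC states
n = len(S)
K, r, T = 100.0, 0.05, 1.0               # strike, rate, maturity (invented)

# Birth-death generator, loosely mimicking diffusive moves on the grid.
Q = np.zeros((n, n))
for i in range(1, n - 1):
    up, down = 30.0, 30.0                # illustrative jump rates
    Q[i, i + 1], Q[i, i - 1] = up, down
    Q[i, i] = -(up + down)

payoff = np.maximum(K - S, 0.0)
steps = 200
dt = T / steps
A = np.eye(n) - dt * (Q - r * np.eye(n)) # implicit Euler operator

V = payoff.copy()
for _ in range(steps):
    V = np.linalg.solve(A, V)            # continuation value over one step
    V = np.maximum(V, payoff)            # early-exercise constraint
print(round(V[n // 2], 4))               # value at S = 100
```

The projection step is where the variational inequality enters: the value function is forced to dominate the payoff everywhere, and the exercise boundary is simply the set of states where the constraint binds.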

