Approximating Transition Probabilities and Mean Occupation Times in Continuous-Time Markov Chains

1987 ◽  
Vol 1 (3) ◽  
pp. 251-264 ◽  
Author(s):  
Sheldon M. Ross

In this paper we propose a new approach for estimating the transition probabilities and mean occupation times of continuous-time Markov chains. Our approach is to approximate the probability of being in a state (or the mean time already spent in a state) at time t by the probability of being in that state (or the mean time already spent in that state) at a random time that is gamma distributed with mean t.

1988 ◽  
Vol 2 (2) ◽  
pp. 267-268 ◽ 
Author(s):  
Sheldon M. Ross

In [1] an approach for approximating the transition probabilities and mean occupation times of a continuous-time Markov chain is presented. For the chain under consideration, let Pij(t) and Tij(t) denote, respectively, the probability that it is in state j at time t and the total time spent in j by time t, in both cases conditional on the chain starting in state i. Also, let Y1,…, Yn be independent exponential random variables, each with rate λ = n/t, which are also independent of the Markov chain.
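Since Y1 + … + Yn is gamma (Erlang) distributed with mean t, inspecting the chain at these exponential times amounts to applying the standard resolvent identity n times: for a chain with generator matrix Q, the transition matrix at an Exp(n/t) time is (I − (t/n)Q)⁻¹, so the approximation to exp(tQ) is ((I − (t/n)Q)⁻¹)ⁿ. A minimal numerical sketch (the function name and the two-state generator are illustrative, not from the paper):

```python
import numpy as np

def erlang_time_approx(Q, t, n):
    """Approximate exp(t*Q) by ((I - (t/n) Q)^{-1})^n, i.e. the
    transition matrix evaluated at an Erlang(n, n/t) random time
    with mean t."""
    I = np.eye(Q.shape[0])
    R = np.linalg.inv(I - (t / n) * Q)  # one Exp(n/t) resolvent stage
    return np.linalg.matrix_power(R, n)

# Illustrative 2-state chain: rate a for 0 -> 1, rate b for 1 -> 0.
a, b = 1.0, 2.0
Q = np.array([[-a, a],
              [b, -b]])
t = 1.0
P = erlang_time_approx(Q, t, n=100)

# Closed form for comparison: P00(t) = b/(a+b) + a/(a+b) * exp(-(a+b)t)
exact = b / (a + b) + a / (a + b) * np.exp(-(a + b) * t)
```

Because the rows of Q sum to zero, each resolvent stage is a stochastic matrix, so the approximation is itself a proper transition matrix for every n, and it converges to exp(tQ) as n grows.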


1988 ◽  
Vol 2 (4) ◽  
pp. 471-474 ◽  
Author(s):  
Nico M. van Dijk

Recently, Ross [1] proposed an elegant method of approximating transition probabilities and mean occupation times in continuous-time Markov chains based upon recursively inspecting the process at exponential times. The method turned out to be remarkably efficient for the examples investigated. However, no formal error bound was provided. Any error bound, however rough, is of practical interest in engineering (e.g., for determining truncation criteria or setting up an experiment). This note primarily aims to show that a simple and standard comparison relation secures a rough error bound for the method. Some alternative approximations are also inspected.


2014 ◽  
Vol 2014 ◽  
pp. 1-5 ◽ 
Author(s):  
Mokaedi V. Lekgari

We investigate random-time state-dependent Foster-Lyapunov drift conditions for subgeometric rate ergodicity of continuous-time Markov chains (CTMCs). Our main concern is to take the available results on deterministic state-dependent drift conditions for CTMCs, and on random-time state-dependent drift conditions for discrete-time Markov chains, and transfer them to CTMCs.


1990 ◽  
Vol 22 (1) ◽  
pp. 111-128 ◽  
Author(s):  
P. K. Pollett ◽  
A. J. Roberts

We use the notion of an invariant manifold to describe the long-term behaviour of absorbing continuous-time Markov processes with a denumerable infinity of states. We show that there exists an invariant manifold for the forward differential equations and we are able to describe the evolution of the state probabilities on this manifold. Our approach gives rise to a new method for calculating conditional limiting distributions, one which is also appropriate for dealing with processes whose transition probabilities satisfy a system of non-linear differential equations.


2021 ◽  
Vol 0 (0) ◽  
Author(s):  
Nikolaos Halidias

Abstract. In this note we study the probability of and the mean time to absorption for discrete-time Markov chains. In particular, we are interested in estimating the mean time to absorption when absorption is not certain, and in connecting it with other known results. By computing a suitable probability generating function, we are able to estimate the mean time to absorption when absorption is not certain, giving some applications concerning the random walk. Furthermore, we investigate the probability that a Markov chain reaches a set A before a set B, generalizing this result to a sequence of sets A₁, A₂, …, Aₖ.
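For a finite chain, the probability of reaching A before B satisfies the standard harmonic system h = Ph on the states outside A ∪ B, with h = 1 on A and h = 0 on B. A minimal sketch for the classical gambler's-ruin random walk (the function name and the fair-walk example are illustrative, not the paper's construction, which works via generating functions):

```python
import numpy as np

def hit_A_before_B(P, A, B):
    """Solve h(i) = P(reach set A before set B | start in i) for a
    discrete-time chain with transition matrix P, assuming absorption
    in A or B is certain from every other state."""
    n = P.shape[0]
    h = np.zeros(n)
    free = [i for i in range(n) if i not in A and i not in B]
    # Linear system (I - P_FF) h_F = P_FA * 1 on the free states.
    PFF = P[np.ix_(free, free)]
    rhs = P[np.ix_(free, list(A))].sum(axis=1)
    h[list(A)] = 1.0
    h[free] = np.linalg.solve(np.eye(len(free)) - PFF, rhs)
    return h

# Fair random walk on {0, ..., 10} with 0 and 10 absorbing.
N = 10
P = np.zeros((N + 1, N + 1))
P[0, 0] = P[N, N] = 1.0
for i in range(1, N):
    P[i, i - 1] = P[i, i + 1] = 0.5

h = hit_A_before_B(P, A={N}, B={0})
# Classical result for the fair walk: h(i) = i/N.
```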

