On the absorption probabilities and mean time for absorption for discrete Markov chains

2021 ◽  
Vol 0 (0) ◽  
Author(s):  
Nikolaos Halidias

Abstract In this note we study the probability of absorption and the mean time to absorption for discrete-time Markov chains. In particular, we are interested in estimating the mean time to absorption when absorption is not certain, and in connecting it with some other known results. By computing a suitable probability generating function, we are able to estimate the mean time to absorption when absorption is not certain, giving some applications concerning the random walk. Furthermore, we investigate the probability that a Markov chain reaches a set A before a set B, generalizing this result to a sequence of sets A_1, A_2, …, A_k.
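For the classical finite-chain case where absorption is certain, both quantities discussed in this abstract follow from the fundamental matrix N = (I − Q)^{−1}: N·R gives the absorption probabilities and N·1 the mean absorption times. A minimal Python sketch, using a gambler's-ruin random walk chosen only for illustration (not an example from the paper):

```python
import numpy as np

# Gambler's-ruin random walk on {0, 1, 2, 3} with absorbing barriers 0 and 3.
# Transient states: 1, 2.  Q holds transitions among transient states,
# R holds transitions from transient states into the absorbing ones.
Q = np.array([[0.0, 0.5],
              [0.5, 0.0]])
R = np.array([[0.5, 0.0],   # columns: absorbed at 0, absorbed at 3
              [0.0, 0.5]])

N = np.linalg.inv(np.eye(2) - Q)   # fundamental matrix
B = N @ R                          # absorption probabilities
t = N @ np.ones(2)                 # mean times to absorption

print(B)   # from state 1: P(absorb at 0) = 2/3, P(absorb at 3) = 1/3
print(t)   # mean absorption time from either interior state is 2
```

This covers only the standard setting where absorption happens almost surely; the paper's contribution concerns the harder case where it does not.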

2008 ◽  
Vol 45 (2) ◽  
pp. 472-480
Author(s):  
Daniel Tokarev

The mean time to extinction of a critical Galton-Watson process with initial population size k is shown to be asymptotically equivalent to two integral transforms: one involving the kth iterate of the probability generating function and one involving the generating function itself. Relating the growth of these transforms to the regular variation of their arguments immediately connects statements involving the regular variation of the probability generating function, its iterates at 0, the quasistationary measures, their partial sums, and the limiting distribution of the time to extinction. In the critical case of finite variance we also give the growth of the mean time to extinction, conditioned on extinction occurring by time n.
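A quick numerical check of the classical asymptotic underlying these transforms, assuming Poisson(1) offspring (an example chosen here for illustration, not taken from the paper): the pgf is f(s) = exp(s − 1), the process is critical with variance 1, the n-th iterate of f at 0 is P(extinction by generation n), and Kolmogorov's estimate gives 1 − f_n(0) ~ 2/(σ²n).

```python
import math

# Critical Galton-Watson process with Poisson(1) offspring:
# pgf f(s) = exp(s - 1), offspring mean 1, variance sigma^2 = 1.
def f(s):
    return math.exp(s - 1.0)

# Iterate the pgf at 0: after n steps, s = f_n(0) = P(extinct by generation n).
s = 0.0
for _ in range(10000):
    s = f(s)

survival = 1.0 - s             # P(population survives 10000 generations)
print(survival, 2.0 / 10000)   # Kolmogorov's estimate: same order of magnitude
```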


Author(s):  
Mohammad Hossein Poursaeed

Suppose that a system is subject to a sequence of shocks, each occurring with probability p in any period of time [Formula: see text], and suppose that [Formula: see text] and [Formula: see text] are two critical levels ([Formula: see text]). The system fails when the time interval between two consecutive shocks is less than [Formula: see text], while a time interval bigger than [Formula: see text] has no effect on the system's activity. In addition, the system fails with a probability of, say, [Formula: see text] when the time interval lies between [Formula: see text] and [Formula: see text]. This model can therefore be regarded as an extension of the discrete-time version of the [Formula: see text]-shock model, and the same idea can be applied to extend other shock models. The present study obtains the reliability function and the probability generating function of the system's lifetime under this model, offers some properties of the system, and points to a generalization of the new model. In addition, the mean time to the system's failure is obtained under reduced efficiency, which arises when the time between two consecutive shocks first falls between [Formula: see text] and [Formula: see text].
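Since the abstract's symbols were lost to extraction, the model can be sketched by Monte Carlo with placeholder names: delta1 < delta2 for the two critical levels and theta for the intermediate failure probability (all assumed here, not the paper's notation, and the parameter values are arbitrary):

```python
import random

# Monte Carlo sketch of the discrete-time shock model described above.
# delta1 < delta2 are the assumed critical levels; theta is the assumed
# failure probability for gaps in the intermediate range.
def lifetime(p=0.3, delta1=2, delta2=5, theta=0.4, rng=random):
    t, last_shock = 0, None
    while True:
        t += 1
        if rng.random() < p:          # a shock arrives this period
            if last_shock is not None:
                gap = t - last_shock
                if gap < delta1:
                    return t          # short gap: certain failure
                if gap <= delta2 and rng.random() < theta:
                    return t          # intermediate gap: failure w.p. theta
            last_shock = t            # gaps longer than delta2 have no effect

random.seed(0)
mean_life = sum(lifetime() for _ in range(20000)) / 20000
print(round(mean_life, 2))
```

The paper derives this mean lifetime (and the full probability generating function) analytically; the simulation is only a way to make the model's mechanics concrete.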


1981 ◽  
Vol 13 (2) ◽  
pp. 369-387 ◽  
Author(s):  
Richard D. Bourgin ◽  
Robert Cogburn

The general framework of a Markov chain in a random environment is presented and the problem of determining extinction probabilities is discussed. An efficient method for determining absorption probabilities and criteria for certain absorption are presented in the case that the environmental process is a two-state Markov chain. These results are then applied to birth and death, queueing and branching chains in random environments.


1966 ◽  
Vol 3 (02) ◽  
pp. 403-434 ◽  
Author(s):  
E. Seneta ◽  
D. Vere-Jones

Distributions appropriate to the description of long-term behaviour within an irreducible class of discrete-time denumerably infinite Markov chains are considered. The first four sections are concerned with general results, extending recent work on this subject. In Section 5 these are applied to the branching process, giving refinements of several well-known results. The last section deals with the semi-infinite random walk with an absorbing barrier at the origin.


1991 ◽  
Vol 28 (01) ◽  
pp. 1-8 ◽  
Author(s):  
J. Gani ◽  
Gy. Michaletzky

This paper considers a carrier-borne epidemic in continuous time with m + 1 > 2 stages of infection. The carriers U(t) follow a pure death process, mixing homogeneously with susceptibles X_0(t) and infectives X_i(t) in stages 1 ≦ i ≦ m of infection. The infectives progress through consecutive stages of infection after each contact with the carriers. It is shown that under certain conditions {X_0(t), X_1(t), ···, X_m(t), U(t); t ≧ 0} is an (m + 2)-variate Markov chain, and the partial differential equation for its probability generating function is derived. This can be solved after a transformation of variables, and the probability of survivors at the end of the epidemic found.


1987 ◽  
Vol 1 (3) ◽  
pp. 251-264 ◽  
Author(s):  
Sheldon M. Ross

In this paper we propose a new approach for estimating the transition probabilities and mean occupation times of continuous-time Markov chains. Our approach is to approximate the probability of being in a state (or the mean time already spent in a state) at time t by the probability of being in that state (or the mean time already spent in that state) at a random time that is gamma distributed with mean t.
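The gamma-randomization idea can be made concrete: with an n-stage gamma (Erlang) time of mean t, the expected transition matrix works out to (I − tQ/n)^{−n}, which converges to exp(tQ) as n grows. A two-state sketch (generator parameters assumed for demonstration, not from the paper), compared against the exact two-state answer:

```python
import numpy as np

# Approximate P(t) = exp(tQ) for a CTMC by evaluating the chain at a
# gamma-distributed time with mean t and n stages: (I - tQ/n)^(-n).
a, b, t, n = 1.0, 2.0, 0.7, 64
Q = np.array([[-a,  a],
              [ b, -b]])

approx = np.linalg.matrix_power(np.linalg.inv(np.eye(2) - t * Q / n), n)

# Exact two-state transition matrix for comparison.
decay = np.exp(-(a + b) * t)
exact = (np.array([[b, a], [b, a]]) +
         decay * np.array([[a, -a], [-b, b]])) / (a + b)

print(np.abs(approx - exact).max())  # error shrinks as n grows
```

Note that (I − tQ/n)^{−1} is itself a stochastic matrix, so the approximation stays a proper transition matrix at every n.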


1983 ◽  
Vol 20 (01) ◽  
pp. 191-196 ◽  
Author(s):  
R. L. Tweedie

We give conditions under which the stationary distribution π of a Markov chain admits moments of the general form ∫ f(x)π(dx), where f is a general function; specific examples include f(x) = x^r and f(x) = e^{sx}. In general the time-dependent moments of the chain then converge to the stationary moments. We show that in special cases this convergence of moments occurs at a geometric rate. The results are applied to random walk on [0, ∞).


1984 ◽  
Vol 21 (03) ◽  
pp. 567-574 ◽  
Author(s):  
Atef M. Abdel-Moneim ◽  
Frederick W. Leysieffer

Conditions under which a function of a finite, discrete-time Markov chain, X(t), is again Markov are given, when X(t) is not irreducible. These conditions are given in terms of an interrelationship between two partitions of the state space of X(t), the partition induced by the minimal essential classes of X(t) and the partition with respect to which lumping is to be considered.


1982 ◽  
Vol 19 (03) ◽  
pp. 518-531 ◽  
Author(s):  
Gunnar Blom ◽  
Daniel Thorburn

Random digits are collected one at a time until a given k-digit sequence is obtained or, more generally, until one of several k-digit sequences is obtained. In the former case, a recursive formula is given which determines the distribution of the waiting time until the sequence is obtained and leads to an expression for the probability generating function. In the latter case, the mean waiting time is given until one of the given sequences is obtained or, more generally, until a fixed number of sequences have been obtained, whether different sequences or not necessarily different ones. Several of these results were known before, but the methods of proof seem to be new.
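For a single pattern of decimal digits, the mean waiting time in question is given by the well-known "leading numbers" (correlation) formula: sum 10^j over every length j at which the pattern's prefix equals its suffix, including j = k. A short Python sketch of that formula (an illustration of the classical result, not the paper's recursive method):

```python
def mean_wait(pattern, alphabet_size=10):
    # Sum alphabet_size**j over every overlap length j at which the
    # pattern's prefix of length j equals its suffix of length j.
    k = len(pattern)
    return sum(alphabet_size ** j
               for j in range(1, k + 1)
               if pattern[:j] == pattern[k - j:])

print(mean_wait("11"))   # 110: self-overlap makes "11" slower to appear
print(mean_wait("12"))   # 100: no self-overlap
```

The contrast between "11" and "12" shows why self-overlapping sequences take longer on average, even though every k-digit sequence is equally likely in any given window.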

