On the age distribution of a Markov chain

1978 ◽  
Vol 15 (1) ◽  
pp. 65-77 ◽  
Author(s):  
Anthony G. Pakes

This paper develops the notion of the limiting age of an absorbing Markov chain, conditional on the present state. Chains with a single absorbing state {0} are considered, and with such a chain can be associated a return chain, obtained by restarting the original chain at a fixed state after each absorption. The limiting age, A(j), is the weak limit of the age (the time elapsed since the last restart) given Xn = j (n → ∞). A criterion for the existence of this limit is given, and this is shown to be fulfilled in the case of the return chains constructed from the Galton–Watson process and the left-continuous random walk. Limit theorems for A(j) (j → ∞) are given for these examples.
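
The construction can be made concrete by simulation. The sketch below is illustrative only and not taken from the paper: it assumes a subcritical Galton–Watson process with Poisson(0.8) offspring (so absorption at 0 is certain) and a restart state of 3, builds the associated return chain by restarting at that state after each absorption, and tabulates the empirical distribution of the age, i.e. the time elapsed since the last restart, given the current state.

```python
import numpy as np
from collections import defaultdict

rng = np.random.default_rng(0)

def gw_step(z, lam=0.8):
    """One generation of a Galton-Watson process with Poisson(lam) offspring."""
    return int(rng.poisson(lam, size=z).sum()) if z > 0 else 0

def return_chain_ages(restart_state=3, steps=200_000):
    """Simulate the return chain: after each absorption at 0 the process is
    restarted at restart_state.  Record (state, age), where the age is the
    number of steps since the last restart."""
    counts = defaultdict(lambda: defaultdict(int))
    x, age = restart_state, 0
    for _ in range(steps):
        counts[x][age] += 1
        x, age = gw_step(x), age + 1
        if x == 0:                          # absorbed: restart immediately
            x, age = restart_state, 0
    return counts

counts = return_chain_ages()
for j in (1, 2, 3):
    total = sum(counts[j].values())
    head = {a: round(c / total, 3) for a, c in sorted(counts[j].items())[:6]}
    print(f"empirical age distribution given X_n = {j} (first few ages): {head}")
```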


1973 ◽  
Vol 10 (1) ◽  
pp. 39-53 ◽  
Author(s):  
A. G. Pakes

The present work considers a left-continuous random walk moving on the positive integers and having an absorbing state at the origin. Limit theorems are derived for the position of the walk at time n given: (a) absorption does not occur until after n, or (b) absorption does not occur until after m + n where m is very large, or (c) absorption occurs at m + n. A limit theorem is given for an R-positive recurrent Markov chain on the non-negative integers with an absorbing origin and subject to condition (c) above.
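
As a rough illustration of case (a), the sketch below simulates a left-continuous walk (increments -1, 0, +1, so the origin cannot be skipped) and estimates the law of the position at time n among the paths that have not been absorbed by then. The increment probabilities, the starting point and the horizon are assumptions made for the example, not values from the paper.

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(1)

def position_given_survival(x0=5, n=30, trials=100_000, probs=(0.4, 0.25, 0.35)):
    """Empirical law of the position at time n, conditional on absorption at 0
    not having occurred by time n (case (a)).  Increments are -1, 0 or +1,
    so the walk is left-continuous (skip-free to the left)."""
    surviving = Counter()
    for _ in range(trials):
        steps = rng.choice([-1, 0, 1], size=n, p=probs)
        path = x0 + np.cumsum(steps)
        if path.min() > 0:               # the origin was never reached
            surviving[int(path[-1])] += 1
    total = sum(surviving.values())
    return {j: round(c / total, 4) for j, c in sorted(surviving.items())}

print(position_given_survival())
```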


2011 ◽  
Vol 43 (3) ◽  
pp. 782-813 ◽  
Author(s):  
M. Jara ◽  
T. Komorowski

In this paper we consider the scaled limit of a continuous-time random walk (CTRW) based on a Markov chain {Xn, n ≥ 0} and two observables, τ(∙) and V(∙), corresponding to the renewal times and jump sizes. Assuming that these observables belong to the domains of attraction of some stable laws, we give sufficient conditions on the chain that guarantee the existence of the scaled limits for CTRWs. An application of the results to a process that arises in quantum transport theory is provided. The results obtained in this paper generalize earlier results contained in Becker-Kern, Meerschaert and Scheffler (2004) and Meerschaert and Scheffler (2008), and the recent results of Henry and Straka (2011) and Jurlewicz, Kern, Meerschaert and Scheffler (2010), where {Xn, n ≥ 0} is a sequence of independent and identically distributed random variables.
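
For readers unfamiliar with the object being rescaled, the sketch below simulates such a CTRW: a driving Markov chain {Xn} determines both the waiting time τ(Xn) until the next jump and the jump size V(Xn). The two-state chain, the Pareto waiting times and the Gaussian jump sizes are illustrative assumptions only; they are not the setting of the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative two-state driving chain {X_n}.
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])

def tau(state, alpha=1.5):
    """Waiting (renewal) time: heavy-tailed Pareto with index alpha,
    with a state-dependent scale."""
    scale = 1.0 if state == 0 else 2.0
    return scale * (rng.pareto(alpha) + 1.0)

def V(state):
    """Jump size: state-dependent Gaussian step (illustrative)."""
    return rng.normal(0.0, 1.0 if state == 0 else 0.5)

def ctrw(t_max=1000.0):
    """Value of the CTRW at physical time t_max: the sum of the jumps whose
    renewal epochs occur before t_max."""
    state, clock, position = 0, 0.0, 0.0
    while True:
        clock += tau(state)
        if clock > t_max:
            return position
        position += V(state)
        state = int(rng.choice(2, p=P[state]))

samples = [ctrw() for _ in range(1000)]
print(f"CTRW value at t = 1000: mean ≈ {np.mean(samples):.3f}, std ≈ {np.std(samples):.3f}")
```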


2019 ◽  
Vol 23 ◽  
pp. 739-769
Author(s):  
Paweł Lorek

For a given absorbing Markov chain X* on a finite state space, a chain X is a sharp antidual of X* if the fastest strong stationary time (FSST) of X is equal, in distribution, to the absorption time of X*. In this paper, we show a systematic way of finding such an antidual based on some partial ordering of the state space. We use a theory of strong stationary duality developed recently for Möbius monotone Markov chains. We give several sharp antidual chains for the Markov chain corresponding to a generalized coupon collector problem. As a consequence, utilizing known results on the limiting distribution of the absorption time, we indicate separation cutoffs (with their window sizes) in several chains. We also present a chain which (under some conditions) has a prescribed stationary distribution and whose FSST is distributed as a prescribed mixture of sums of geometric random variables.
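
A familiar special case of the absorption times in question is the classical coupon collector chain, whose absorption time is a sum of independent geometric random variables; its mean is n·Hn, where Hn is the n-th harmonic number. The sketch below is an illustration of that fact only, not of the paper's duality construction: it simulates the chain directly and via the geometric decomposition.

```python
import random

def collect_all(n):
    """Number of uniform draws needed to see all n coupon types
    (absorption time of the coupon collector chain)."""
    seen, draws = set(), 0
    while len(seen) < n:
        seen.add(random.randrange(n))
        draws += 1
    return draws

def geometric_sum(n):
    """The same time built as a sum of independent geometrics: collecting the
    k-th new coupon takes a Geometric((n - k + 1)/n) number of draws."""
    total = 0
    for k in range(1, n + 1):
        p, draws = (n - k + 1) / n, 1
        while random.random() > p:
            draws += 1
        total += draws
    return total

n, trials = 20, 20_000
exact = n * sum(1 / i for i in range(1, n + 1))          # n * H_n
sim = sum(collect_all(n) for _ in range(trials)) / trials
geo = sum(geometric_sum(n) for _ in range(trials)) / trials
print(f"exact mean {exact:.2f}, direct simulation {sim:.2f}, geometric sum {geo:.2f}")
```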


1978 ◽  
Vol 15 (2) ◽  
pp. 292-299 ◽  
Author(s):  
Anthony G. Pakes

In a recent paper Green (1976) obtained some conditional limit theorems for the absorption time of a left-continuous random walk. His methods require that in the driftless case the increment distribution has exponentially decreasing tails and that the same is true for a transformed distribution in the case of negative drift. Here we take a different approach which will produce Green's results under minimal conditions. Limit theorems are given for the maximum as the initial position of the random walk tends to infinity.
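
To visualise the last statement, the sketch below estimates, by simulation, the mean of the maximum level reached by a left-continuous walk with negative drift before it is absorbed at 0, for increasing initial positions. The step law (-1, 0, +1 with the probabilities shown) is an assumption chosen for the example.

```python
import numpy as np

rng = np.random.default_rng(3)

def max_before_absorption(x0, probs=(0.55, 0.25, 0.20), cap=1_000_000):
    """Run a left-continuous walk (steps -1, 0, +1, negative drift) started at
    x0 until it hits 0; return the maximum level reached on the way."""
    x, best = x0, x0
    for _ in range(cap):                 # cap only guards against very long runs
        x += int(rng.choice([-1, 0, 1], p=probs))
        best = max(best, x)
        if x == 0:
            return best
    return best

for x0 in (5, 20, 80):
    maxima = [max_before_absorption(x0) for _ in range(2000)]
    print(f"x0 = {x0:3d}: mean maximum before absorption ≈ {np.mean(maxima):.1f}")
```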


1998 ◽  
Vol 30 (3) ◽  
pp. 711-722 ◽  
Author(s):  
Krishna B. Athreya ◽  
Hye-Jeong Kang

In this paper we consider a Galton-Watson process in which particles move according to a positive recurrent Markov chain on a general state space. We prove a law of large numbers for the empirical position distribution and also discuss the rate of this convergence.
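
The sketch below illustrates the law of large numbers in a small finite-state example (the three-state motion chain, the Poisson(1.4) offspring law and the 50 ancestors are assumptions made for the illustration): the empirical distribution of the positions of the particles in a late generation is compared with the stationary distribution of the motion chain.

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(4)

# Illustrative irreducible (hence positive recurrent) motion chain on {0, 1, 2}.
P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.3, 0.3, 0.4]])

def stationary(P):
    """Stationary distribution: the left eigenvector of P for eigenvalue 1."""
    vals, vecs = np.linalg.eig(P.T)
    v = vecs[:, np.argmax(vals.real)].real
    return v / v.sum()

def generation_positions(n=20, offspring_mean=1.4, ancestors=50, cap=100_000):
    """Positions of the particles in generation n of a Galton-Watson process in
    which every child takes one step of the chain P from its parent's state."""
    positions = [0] * ancestors
    for _ in range(n):
        children = []
        for x in positions:
            for _ in range(rng.poisson(offspring_mean)):
                children.append(int(rng.choice(3, p=P[x])))
            if len(children) > cap:      # truncate huge generations, for speed
                break
        if not children:                 # whole population died out (unlikely here)
            break
        positions = children
    return positions

pos = generation_positions()
emp = Counter(pos)
print("empirical position distribution:", {s: round(emp[s] / len(pos), 3) for s in range(3)})
print("stationary distribution of P:   ", np.round(stationary(P), 3))
```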


1992 ◽  
Vol 29 (1) ◽  
pp. 21-36 ◽  
Author(s):  
Masaaki Kijima

Let {Xn, n = 0, 1, 2, ···} be a transient Markov chain which, when restricted to the state space 𝒩+ = {1, 2, ···}, is governed by an irreducible, aperiodic and strictly substochastic matrix 𝐏 = (pij), and let pij(n) = P[Xn = j, Xk ∈ 𝒩+ for k = 0, 1, ···, n | X0 = i], i, j ∈ 𝒩+. The prime concern of this paper is conditions for the existence of the limits, qij say, of qij(n) = pij(n)/Σk pik(n) as n → ∞. If Σj qij = 1, the distribution (qij) is called the quasi-stationary distribution of {Xn} and has considerable practical importance. It will be shown that, under some conditions, if a non-negative non-trivial vector x = (xi) satisfying rxT = xT𝐏 and Σi xi = 1 exists, where r is the convergence norm of 𝐏, i.e. r = 1/R with R the convergence radius of 𝐏, and T denotes transpose, then it is unique, positive elementwise, and the qij(n) necessarily converge to xj as n → ∞. Unlike existing results in the literature, our results can be applied even to the R-null and R-transient cases. Finally, an application to a left-continuous random walk whose governing substochastic matrix is R-transient is discussed to demonstrate the usefulness of our results.
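
In a finite irreducible example the limits qij can be computed two ways: by row-normalising the powers of 𝐏, or from the left Perron eigenvector of 𝐏, whose eigenvalue equals the convergence norm r in the finite case. The small substochastic matrix below is an illustrative assumption (it is not the R-transient situation treated in the paper); the sketch checks that the two computations agree.

```python
import numpy as np

# Illustrative strictly substochastic matrix on {1, 2, 3}: each row sums to
# less than 1, the deficit being the one-step absorption probability.
P = np.array([[0.40, 0.30, 0.20],
              [0.25, 0.45, 0.20],
              [0.10, 0.30, 0.50]])

# Approach 1: q_ij(n) = p_ij(n) / sum_k p_ik(n) for a large n.
Pn = np.linalg.matrix_power(P, 100)
q_iterated = Pn / Pn.sum(axis=1, keepdims=True)

# Approach 2: left eigenvector x with r x^T = x^T P, where r is the largest
# eigenvalue of P (its convergence norm in this finite irreducible case),
# normalised so that its entries sum to 1.
vals, vecs = np.linalg.eig(P.T)
x = vecs[:, np.argmax(vals.real)].real
x = x / x.sum()

print("rows of the normalised power P^100 (each row -> quasi-stationary law):")
print(np.round(q_iterated, 4))
print("normalised left eigenvector x:", np.round(x, 4))
```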


1998 ◽  
Vol 30 (3) ◽  
pp. 693-710 ◽  
Author(s):  
Krishna B. Athreya ◽  
Hye-Jeong Kang

In this paper we consider a Galton-Watson process whose particles move according to a Markov chain with discrete state space. The Markov chain is assumed to be positive recurrent. We prove a law of large numbers for the empirical position distribution and also discuss the large deviation aspects of this convergence.

