Some limit theorems for positive recurrent branching Markov chains: II

1998 ◽  
Vol 30 (3) ◽  
pp. 711-722 ◽  
Author(s):  
Krishna B. Athreya ◽  
Hye-Jeong Kang

In this paper we consider a Galton-Watson process in which particles move according to a positive recurrent Markov chain on a general state space. We prove a law of large numbers for the empirical position distribution and also discuss the rate of this convergence.
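
To make the statement concrete, the following sketch simulates a toy instance (not taken from the paper): particles branch as a supercritical Galton-Watson process and each child takes one step of a two-state positive recurrent chain from its parent's state. In a late generation the empirical distribution of positions should be close to the chain's stationary distribution pi, which is the flavour of the law of large numbers proved here. The transition matrix, offspring law and generation count below are illustrative choices only.

    import numpy as np

    rng = np.random.default_rng(0)

    # Illustrative two-state positive recurrent chain and supercritical offspring law.
    P = np.array([[0.7, 0.3],
                  [0.4, 0.6]])
    pi = np.array([4/7, 3/7])          # stationary distribution: pi P = pi

    def offspring():
        # offspring counts 0..3 with mean 1.5 (supercritical)
        return rng.choice([0, 1, 2, 3], p=[0.2, 0.3, 0.3, 0.2])

    def generation(positions):
        """Every particle reproduces; each child takes one chain step from its parent's state."""
        children = []
        for x in positions:
            for _ in range(offspring()):
                children.append(rng.choice(2, p=P[x]))
        return np.array(children, dtype=int)

    positions = np.array([0])           # a single ancestor at state 0
    for n in range(15):
        positions = generation(positions)
        if positions.size == 0:         # the line may die out; restart the experiment
            positions = np.array([0])

    empirical = np.bincount(positions, minlength=2) / positions.size
    print("empirical position distribution:", empirical)
    print("stationary distribution pi:     ", pi)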


1998 ◽  
Vol 30 (3) ◽  
pp. 693-710 ◽  
Author(s):  
Krishna B. Athreya ◽  
Hye-Jeong Kang

In this paper we consider a Galton-Watson process whose particles move according to a Markov chain with discrete state space. The Markov chain is assumed to be positive recurrent. We prove a law of large numbers for the empirical position distribution and also discuss the large deviation aspects of this convergence.


1974 ◽  
Vol 11 (3) ◽  
pp. 582-587 ◽  
Author(s):  
G. L. O'Brien

Chain-dependent processes, also called sequences of random variables defined on a Markov chain, are shown to satisfy the strong law of large numbers. A central limit theorem and a law of the iterated logarithm are given for the case in which the underlying Markov chain satisfies Doeblin's hypothesis. The proofs proceed by showing that the limit results do not depend on the initial distribution of the chain and then restricting attention to the stationary case.
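
The strong law in question can be illustrated with a minimal sketch (an assumed two-state chain with Gaussian increments, not an example from the paper): the running average of the chain-dependent variables converges almost surely to the mean of an increment under the stationary distribution of the underlying chain.

    import numpy as np

    rng = np.random.default_rng(1)

    # Assumed two-state chain; given X_n = i, the increment Y_n is Normal(means[i], 1).
    P = np.array([[0.9, 0.1],
                  [0.2, 0.8]])
    pi = np.array([2/3, 1/3])          # stationary distribution of P
    means = np.array([1.0, 5.0])       # E[Y_n | X_n = i]

    N = 200_000
    x, total = 0, 0.0
    for _ in range(N):
        total += rng.normal(loc=means[x], scale=1.0)   # Y_n drawn given X_n
        x = rng.choice(2, p=P[x])                      # next state of the chain

    print("running average of Y:", total / N)
    print("SLLN limit sum_i pi_i E[Y | X = i]:", pi @ means)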


Author(s):  
E. Arjas ◽  
E. Nummelin ◽  
R. L. Tweedie

By amalgamating the approaches of Tweedie (1974) and Nummelin (1977), an α-theory is developed for general semi-Markov processes. It is shown that α-transient, α-recurrent and α-positive recurrent processes can be defined, with properties analogous to those for transient, recurrent and positive recurrent processes. Limit theorems for α-positive recurrent processes follow by transforming to the probabilistic case, as in the above references: these then give results on the existence and form of quasistationary distributions, extending those of Tweedie (1975) and Nummelin (1976).
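
A much simpler finite-state analogue (the classical Darroch-Seneta setting, sketched here only to show what a quasistationary distribution is, not the paper's semi-Markov construction): for a chain absorbed at 0, the quasistationary distribution is the normalised left Perron eigenvector of the substochastic matrix Q on the surviving states, and the Perron root is the per-step survival rate.

    import numpy as np

    # Substochastic matrix on the surviving states {1, 2}; the missing row mass
    # is the one-step probability of absorption into 0.
    Q = np.array([[0.5, 0.3],
                  [0.2, 0.6]])

    eigvals, eigvecs = np.linalg.eig(Q.T)     # left eigenvectors of Q
    k = np.argmax(eigvals.real)
    rho = eigvals[k].real                     # Perron root: per-step survival rate
    qsd = np.abs(eigvecs[:, k].real)
    qsd /= qsd.sum()

    # The law of X_n conditioned on survival converges to the quasistationary law.
    mu = np.array([1.0, 0.0])                 # start in state 1
    for _ in range(200):
        mu = mu @ Q
    print("Perron root:", rho)
    print("quasistationary distribution:", qsd)
    print("conditional law given survival at n = 200:", mu / mu.sum())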


2020 ◽  
Vol 52 (4) ◽  
pp. 1127-1163 ◽  
Author(s):  
Jie Yen Fan ◽  
Kais Hamza ◽  
Peter Jagers ◽  
Fima C. Klebaner

A general multi-type population model is considered, where individuals live and reproduce according to their age and type, but also under the influence of the size and composition of the entire population. We describe the dynamics of the population as a measure-valued process and obtain its asymptotics as the population grows with the environmental carrying capacity. Thus, a deterministic approximation is given, in the form of a law of large numbers, as well as a central limit theorem. This general framework is then adapted to model sexual reproduction, with a special section on serial monogamic mating systems.
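
The deterministic approximation can be caricatured with a toy density-dependent model (no age or type structure, purely illustrative and not the paper's framework): each individual leaves Binomial(2, p) children with p depending on the current density Z/K, and as the carrying capacity K grows the rescaled population path tracks the limiting deterministic recursion ever more closely.

    import numpy as np

    rng = np.random.default_rng(3)

    # Toy density-dependent model: each individual leaves Binomial(2, p) children,
    # with p = 1 / (1 + Z/K) depending on the density relative to the carrying capacity K.
    def simulate(K, steps=30, z0=0.1):
        Z = int(z0 * K)
        path = [Z / K]
        for _ in range(steps):
            p = 1.0 / (1.0 + Z / K)
            Z = rng.binomial(2, p, size=Z).sum() if Z > 0 else 0
            path.append(Z / K)
        return np.array(path)

    def deterministic(steps=30, z0=0.1):
        z, path = z0, [z0]
        for _ in range(steps):
            z = 2.0 * z / (1.0 + z)           # limiting recursion z' = 2 p(z) z
            path.append(z)
        return np.array(path)

    det = deterministic()
    for K in (100, 10_000):
        err = np.max(np.abs(simulate(K) - det))
        print(f"K = {K:>6}: max deviation from the deterministic path = {err:.3f}")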


1978 ◽  
Vol 15 (1) ◽  
pp. 65-77 ◽  
Author(s):  
Anthony G. Pakes

This paper develops the notion of the limiting age of an absorbing Markov chain, conditional on the present state. Chains with a single absorbing state {0} are considered, and with such a chain one can associate a return chain, obtained by restarting the original chain at a fixed state after each absorption. The limiting age, A(j), is the weak limit, as n → ∞, of the age (the time elapsed since the last restart) given Xn = j. A criterion for the existence of this limit is given, and it is shown to be fulfilled in the case of the return chains constructed from the Galton–Watson process and the left-continuous random walk. Limit theorems for A(j) as j → ∞ are given for these examples.
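
As an illustration of the construction (an assumed toy chain, not one of the paper's examples), the sketch below builds the return chain of a small birth-death chain absorbed at 0 and estimates the conditional distribution of the age, the time since the last restart, given the present state; these empirical conditional laws stabilise as the run gets long, and their weak limit is the quantity A(j) studied in the paper.

    import numpy as np
    from collections import defaultdict

    rng = np.random.default_rng(2)

    # Assumed toy chain: a birth-death chain on {0, 1, 2, 3} with absorbing state 0.
    # From 1 or 2 it moves up with probability 0.4 and down with probability 0.6;
    # from 3 it moves down. The return chain restarts at state 2 after each absorption.
    RESTART = 2

    def next_state(i):
        if i == 0:
            return RESTART                    # restart after absorption
        if i == 3:
            return 2
        return i + 1 if rng.random() < 0.4 else i - 1

    N = 500_000
    ages = defaultdict(list)                  # ages[j]: observed ages while the chain sits at j
    x, age = 0, 0                             # age = number of steps since the last visit to 0
    for _ in range(N):
        x = next_state(x)
        age = 0 if x == 0 else age + 1
        ages[x].append(age)

    for j in (1, 2, 3):
        a = np.array(ages[j])
        print(f"state {j}: mean age {a.mean():.2f}, P(age <= 3 | state {j}) = {(a <= 3).mean():.2f}")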

