On the fluctuation of stochastically monotone Markov chains and some applications

1983 ◽  
Vol 20 (1) ◽  
pp. 178-184 ◽  
Author(s):  
Harry Cohn

A Borel–Cantelli-type property in terms of one-step transition probabilities is given for events like {|Xn+1| > a + ε, |Xn|≦a}, a and ε being two positive numbers. Applications to normed sums of i.i.d. random variables with infinite mean and branching processes in varying environment with or without immigration are derived.
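
The crossing events in question are easy to visualise empirically. Below is a minimal Python sketch (illustrative only, not from the paper) that estimates, along one trajectory of a toy chain with bounded increments, how often the event {|Xn+1| > a + ε, |Xn| ≦ a} occurs; the chain, its increment law, and the function name are all assumptions made for this example.

```python
import random

def crossing_frequency(a, eps, steps=20000, seed=1):
    """Along one trajectory of a toy chain with i.i.d. uniform(-1, 1)
    increments, estimate the empirical frequency of the crossing event
    {|X_{n+1}| > a + eps, |X_n| <= a}."""
    rng = random.Random(seed)
    x, crossings = 0.0, 0
    for _ in range(steps):
        x_next = x + rng.uniform(-1.0, 1.0)
        if abs(x_next) > a + eps and abs(x) <= a:
            crossings += 1
        x = x_next
    return crossings / steps

freq = crossing_frequency(a=2.0, eps=0.5)
```

Note that two crossings can never occur on consecutive steps (after a crossing the chain sits above a + ε, so the event's second condition fails), which caps the empirical frequency at 1/2.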


1977 ◽  
Vol 14 (2) ◽  
pp. 298-308 ◽  
Author(s):  
Peter R. Nelson

In a single-shelf library having infinitely many books B1, B2, …, the probability of selecting each book is assumed known. Books are removed one at a time and replaced in position k prior to the next removal. Books are moved either to the right or the left as is necessary to vacate position k. Those arrangements of books where after some finite position all the books are in natural order (book i occupies position i) are considered as states in an infinite Markov chain. When k > 1, we show that the chain can never be positive recurrent. When k = 1, we find the limits of ratios of one-step transition probabilities; and when k = 1 and the chain is transient, we find the Martin exit boundary.
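
The shelf dynamics are easy to prototype. The following Python sketch (a hypothetical illustration, not from the paper) simulates the scheme on a finite truncation of the shelf; the selection weights proportional to 1/i and all function names are choices made for this example.

```python
import random

def move_to_position_k(shelf, book, k):
    """Remove `book` from the shelf and reinsert it at position k
    (1-indexed), shifting the intervening books toward the vacated slot."""
    shelf = list(shelf)
    shelf.remove(book)
    shelf.insert(k - 1, book)
    return shelf

def simulate(n_books=6, k=1, steps=1000, seed=0):
    """Run the library chain on a finite shelf; books are selected with
    probability proportional to 1/i (an arbitrary illustrative choice)."""
    rng = random.Random(seed)
    books = list(range(1, n_books + 1))
    weights = [1.0 / i for i in books]
    shelf = books[:]  # start in natural order: book i in position i
    for _ in range(steps):
        book = rng.choices(books, weights=weights)[0]
        shelf = move_to_position_k(shelf, book, k)
    return shelf

final = simulate()
```

With k = 1 this is the classical move-to-front rule; the simulation only rearranges books, so the final shelf is always a permutation of the initial one.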



Author(s):  
J. G. Mauldon

Consider a Markov chain with an enumerable infinity of states, labelled 0, 1, 2, …, whose one-step transition probabilities pij are independent of time. Then I write

and, departing slightly from the usual convention,

Then it is known ((1), pp. 324–34, or (6)) that the limits πij always exist, and that


Entropy ◽  
2021 ◽  
Vol 23 (6) ◽  
pp. 729
Author(s):  
Miquel Montero

Random walks with invariant loop probabilities comprise a wide family of Markov processes with site-dependent, one-step transition probabilities. The whole family, which includes the simple random walk, emerges from geometric considerations related to the stereographic projection of an underlying geometry into a line. After a general introduction, we focus our attention on the elliptic case: random walks on a circle with built-in reflecting boundaries.
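
To make the setting concrete, here is a small Python sketch (an illustration under assumed parameters, not the paper's stereographic construction) of a walk on a finite lattice with site-dependent loop probabilities and reflecting ends; the specific loop-probability profile is an arbitrary choice.

```python
import random

def step(x, n_sites, p_loop, rng):
    """One step of a walk on sites 0..n_sites-1 with a site-dependent
    loop (stay-put) probability p_loop[x]; the remaining mass is split
    evenly between the two neighbours, with reflecting ends."""
    u = rng.random()
    if u < p_loop[x]:
        return x
    go_right = u < p_loop[x] + (1 - p_loop[x]) / 2
    if go_right:
        return x + 1 if x < n_sites - 1 else x - 1  # reflect at right end
    return x - 1 if x > 0 else x + 1                # reflect at left end

def walk(n_sites=11, steps=5000, seed=3):
    rng = random.Random(seed)
    # illustrative site-dependent loop probabilities (not the paper's choice)
    p_loop = [0.5 * i / (n_sites - 1) for i in range(n_sites)]
    x = n_sites // 2
    visits = [0] * n_sites
    for _ in range(steps):
        x = step(x, n_sites, p_loop, rng)
        visits[x] += 1
    return visits

visits = walk()
```

Setting every p_loop entry to zero recovers the simple reflected random walk as a special case.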


1988 ◽  
Vol 20 (1) ◽  
pp. 99-111 ◽  
Author(s):  
Nico M. Van Dijk

Consider a perturbation in the one-step transition probabilities and rewards of a discrete-time Markov reward process with an unbounded one-step reward function. A perturbation estimate is derived for the finite horizon and average reward function. Results from [3] are hereby extended to the unbounded case. The analysis is illustrated for one- and two-dimensional queueing processes by an M/M/1-queue and an overflow queueing model with an error bound in the arrival rate.
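
The flavour of such perturbation estimates can be illustrated numerically. The sketch below (stdlib Python; the truncation level, the rates, and the mean-queue-length reward are all choices made for this example, not the paper's bound) compares the stationary mean queue length of a truncated M/M/1 queue before and after a small change in the arrival rate.

```python
def mm1_mean_queue(lam, mu, cap=200):
    """Mean queue length of an M/M/1 queue truncated at `cap` customers,
    using the birth-death stationary distribution pi_i proportional to
    (lam/mu)**i."""
    rho = lam / mu
    weights = [rho ** i for i in range(cap + 1)]
    z = sum(weights)
    return sum(i * w for i, w in enumerate(weights)) / z

base = mm1_mean_queue(0.5, 1.0)        # exact value rho/(1-rho) = 1.0
perturbed = mm1_mean_queue(0.51, 1.0)  # arrival rate perturbed by 0.01
gap = abs(perturbed - base)
```

The gap here is finite even though the reward function (queue length) is unbounded on the untruncated state space, which is precisely the regime the perturbation estimate addresses.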


1973 ◽  
Vol 10 (3) ◽  
pp. 659-665 ◽  
Author(s):  
Donald C. Raffety

R-positivity theory for Markov chains is used to obtain results for random environment branching processes whose environment random variables are independent and identically distributed and whose environmental extinction probabilities are equal. For certain processes whose eventual extinction is almost sure, it is shown that the distribution of population size conditioned by non-extinction at time n tends to a left eigenvector of the transition matrix. Limiting values of other conditional probabilities are given in terms of this left eigenvector and it is shown that the probability of non-extinction at time n approaches zero geometrically as n approaches ∞. Analogous results are obtained for processes whose extinction is not almost sure.
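
The conditional-limit statement can be checked numerically for a toy case. The sketch below is illustrative only: the offspring law, the truncation level, and the function names are assumptions, and a fixed i.i.d. offspring law stands in for the random environment. Conditioning a subcritical branching process on non-extinction and iterating, the normalised distribution stabilises at the leading left eigenvector of the transition matrix restricted to the non-extinct states.

```python
def offspring_convolve(pmf, i, cap):
    """Distribution of the sum of i i.i.d. offspring counts, truncated at cap."""
    dist = [1.0] + [0.0] * cap  # i = 0 individuals: surely 0 offspring
    for _ in range(i):
        new = [0.0] * (cap + 1)
        for a, pa in enumerate(dist):
            if pa == 0.0:
                continue
            for b, pb in enumerate(pmf):
                if a + b <= cap:
                    new[a + b] += pa * pb
        dist = new
    return dist

def branching_yaglom(pmf, cap=30, steps=200):
    """Iterate the branching chain restricted to the non-extinct states
    1..cap, renormalising each generation (i.e. conditioning on
    non-extinction); the result approximates the left Perron eigenvector."""
    rows = [offspring_convolve(pmf, i, cap) for i in range(1, cap + 1)]
    q = [[rows[i][j] for j in range(1, cap + 1)] for i in range(cap)]
    v = [1.0] + [0.0] * (cap - 1)  # start from a single ancestor
    for _ in range(steps):
        w = [sum(v[i] * q[i][j] for i in range(cap)) for j in range(cap)]
        s = sum(w)
        v = [x / s for x in w]  # condition on non-extinction
    return v

pmf = [0.5, 0.3, 0.2]  # subcritical: mean offspring 0.7
v = branching_yaglom(pmf)
```

The normalising constant s converges to the Perron eigenvalue of the restricted matrix, consistent with the geometric decay of the non-extinction probability stated in the abstract.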



1983 ◽  
Vol 20 (3) ◽  
pp. 482-504 ◽  
Author(s):  
C. Cocozza-Thivent ◽  
C. Kipnis ◽  
M. Roussignol

We investigate how the property of null-recurrence is preserved for Markov chains under a perturbation of the transition probability. After recalling some useful criteria in terms of the one-step transition kernel we present two methods to determine barrier functions, one in terms of taboo potentials for the unperturbed Markov chain, and the other based on Taylor's formula.
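
Barrier (Lyapunov-type) functions are the common thread of both methods. As a generic illustration (not the paper's construction), the sketch below computes the one-step drift of the candidate barrier V(x) = x for a nearest-neighbour chain: the drift vanishes for the null-recurrent symmetric walk, and turns strictly negative under a hypothetical downward perturbation of the transition probabilities.

```python
def drift(V, p_up, x):
    """E[V(X_{n+1}) - V(X_n) | X_n = x] for a nearest-neighbour chain on
    the positive integers that jumps up with probability p_up(x), else down."""
    p = p_up(x)
    return p * V(x + 1) + (1 - p) * V(x - 1) - V(x)

# unperturbed symmetric walk: the barrier V(x) = x has zero drift
d_symmetric = drift(lambda x: x, lambda x: 0.5, x=10)

# hypothetical perturbation p(x) = 1/2 - 1/(4x): strictly negative drift
d_perturbed = drift(lambda x: x, lambda x: 0.5 - 0.25 / x, x=10)
```

The sign of such drifts is what Foster-type criteria inspect; the paper's contribution is how to build suitable barrier functions for the perturbed chain from data of the unperturbed one.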


1988 ◽  
Vol 2 (2) ◽  
pp. 267-268
Author(s):  
Sheldon M. Ross

In [1] an approach to approximate the transition probabilities and mean occupation times of a continuous-time Markov chain is presented. For the chain under consideration, let Pij(t) and Tij(t) denote respectively the probability that it is in state j at time t, and the total time spent in j by time t, in both cases conditional on the chain starting in state i. Also, let Y1,…, Yn be independent exponential random variables each with rate λ = n/t, which are also independent of the Markov chain.
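
The construction amounts to observing the chain at the sum of the n exponential stages: a single Exp(λ) observation of a chain with generator Q has transition matrix λ(λI − Q)⁻¹, so P(t) is approximated by the n-th power of that matrix with λ = n/t. A minimal Python sketch for an assumed two-state chain (the rates a, b and the function name are choices made for this example):

```python
import math

def erlang_approx_2state(Q, t, n):
    """Approximate P(t) = exp(Qt) for a 2-state generator Q by R**n,
    where R = lam*(lam*I - Q)^{-1} with lam = n/t, i.e. the chain
    observed after n independent Exp(n/t) stages."""
    lam = n / t
    m00, m01 = lam - Q[0][0], -Q[0][1]
    m10, m11 = -Q[1][0], lam - Q[1][1]
    det = m00 * m11 - m01 * m10
    # R = lam * inverse of (lam*I - Q): a stochastic matrix
    R = [[lam * m11 / det, -lam * m01 / det],
         [-lam * m10 / det, lam * m00 / det]]
    P = [[1.0, 0.0], [0.0, 1.0]]
    for _ in range(n):  # raise R to the n-th power
        P = [[P[0][0] * R[0][0] + P[0][1] * R[1][0],
              P[0][0] * R[0][1] + P[0][1] * R[1][1]],
             [P[1][0] * R[0][0] + P[1][1] * R[1][0],
              P[1][0] * R[0][1] + P[1][1] * R[1][1]]]
    return P

a, b, t = 1.0, 2.0, 0.7
Q = [[-a, a], [b, -b]]
P = erlang_approx_2state(Q, t, n=200)
# exact two-state solution, for comparison
exact_p00 = b / (a + b) + a / (a + b) * math.exp(-(a + b) * t)
err = abs(P[0][0] - exact_p00)
```

Since the Erlang(n, n/t) observation time has mean t and variance t²/n, the approximation error shrinks at rate 1/n.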

