A limit theorem for Markov chains with continuous state space

1963 ◽  
Vol 3 (3) ◽  
pp. 351-358 ◽  
Author(s):  
P. D. Finch

Let R denote the set of real numbers, B the σ-field of all Borel subsets of R. A homogeneous Markov chain with state space a Borel subset Ω of R is a sequence {an}, n ≧ 0, of random variables, taking values in Ω, with one-step transition probabilities P(1)(ξ, A) defined by

P(1)(ξ, A) = Pr{an+1 ∈ A | a0 = ξ0, …, an−1 = ξn−1, an = ξ}   (1.1)

for each choice of ξ, ξ0, …, ξn−1 in Ω and all Borel subsets A of Ω. The fact that the right-hand side of (1.1) does not depend on the ξi, 0 ≦ i < n, is of course the Markovian property; the non-dependence on n is the homogeneity of the chain.
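As an illustration of the definition above, a minimal sketch (not from the paper): the AR(1) recursion gives a homogeneous Markov chain on the continuous state space Ω = R, since the law of the next state depends only on the current state and not on n.

```python
import random

def ar1_path(n, phi=0.5, sigma=1.0, x0=0.0, seed=42):
    """A homogeneous Markov chain on a continuous state space (the real
    line): the AR(1) recursion x_{n+1} = phi * x_n + Gaussian noise.
    The next state depends only on the current one (Markov property),
    and the transition law does not vary with n (homogeneity).
    Illustrative example only."""
    rng = random.Random(seed)
    x, path = x0, [x0]
    for _ in range(n):
        x = phi * x + rng.gauss(0.0, sigma)
        path.append(x)
    return path

path = ar1_path(1000)
print(len(path))  # 1001
```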

1970 ◽  
Vol 7 (3) ◽  
pp. 771-775
Author(s):  
I. V. Basawa

Let {Xk}, k = 1, 2, ··· be a sequence of random variables forming a homogeneous Markov chain on a finite state-space, S = {1, 2, ···, s}. Xk could be thought of as the state at time k of some physical system for which pij = P(Xk+1 = j | Xk = i), i, j ∈ S, are the (one-step) transition probabilities. It is assumed that all the states are inter-communicating, so that the transition matrix P = ((pij)) is irreducible.
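The irreducibility assumption can be checked mechanically; a sketch (not from the paper) using the standard reachability criterion that an s-state chain is irreducible iff (I + A)^(s−1) has no zero entries, where A marks the positive one-step transitions:

```python
import numpy as np

def is_irreducible(P):
    """Check whether the stochastic matrix P is irreducible, i.e. every
    state communicates with every other state."""
    s = P.shape[0]
    # A marks which one-step transitions have positive probability.
    A = (P > 0).astype(int)
    # (I + A)^(s-1) is everywhere positive iff all states inter-communicate.
    R = np.linalg.matrix_power(np.eye(s, dtype=int) + A, s - 1)
    return bool((R > 0).all())

# A 3-state cyclic chain: irreducible.
P1 = np.array([[0.0, 1.0, 0.0],
               [0.0, 0.0, 1.0],
               [1.0, 0.0, 0.0]])
# A chain where state 0 is absorbing: not irreducible.
P2 = np.array([[1.0, 0.0],
               [0.5, 0.5]])
print(is_irreducible(P1))  # True
print(is_irreducible(P2))  # False
```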


1977 ◽  
Vol 14 (2) ◽  
pp. 298-308 ◽  
Author(s):  
Peter R. Nelson

In a single-shelf library having infinitely many books B1, B2, …, the probability of selecting each book is assumed known. Books are removed one at a time and replaced in position k prior to the next removal. Books are moved either to the right or the left as is necessary to vacate position k. Those arrangements of books where after some finite position all the books are in natural order (book i occupies position i) are considered as states in an infinite Markov chain. When k > 1, we show that the chain can never be positive recurrent. When k = 1, we find the limits of ratios of one-step transition probabilities; and when k = 1 and the chain is transient, we find the Martin exit boundary.
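A rough simulation of the shelf process, assuming a finite truncation of the infinite shelf and illustrative geometric selection probabilities (both assumptions, not from the paper):

```python
import random

def step(shelf, probs, k):
    """One move of the library chain: pick a book according to its
    selection probability, remove it, and reinsert it at position k
    (1-indexed), shifting the intervening books to vacate the slot.
    Finite truncation of the infinite shelf, for illustration only."""
    i = random.choices(range(len(shelf)), weights=[probs[b] for b in shelf])[0]
    book = shelf.pop(i)
    shelf.insert(k - 1, book)
    return shelf

random.seed(0)
shelf = list(range(10))                        # books 0..9 in natural order
probs = [2.0 ** -(b + 1) for b in range(10)]   # illustrative geometric probabilities
for _ in range(1000):
    step(shelf, probs, 1)                      # k = 1: the move-to-front rule
print(shelf)
```

With k = 1 this is the familiar move-to-front rule; frequently selected books drift toward the front of the shelf.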


1968 ◽  
Vol 5 (2) ◽  
pp. 350-356 ◽  
Author(s):  
R. G. Khazanie

Consider a finite Markov process {Xn} described by the one-step transition probabilities

pij = P(Xn+1 = j | Xn = i) = C(M, j)(i/M)^j(1 − i/M)^(M−j),   i, j = 0, 1, ···, M.

In describing the transition probabilities in the above manner we are adopting the convention that (0)^0 = 1, so that the states 0 and M are absorbing, and the states 1, 2, ···, M−1 are transient.
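A binomial (Wright–Fisher-type) transition law, pij = C(M, j)(i/M)^j(1 − i/M)^(M−j), is consistent with the stated convention (0)^0 = 1 making states 0 and M absorbing; a sketch under that assumption (the exact formula is a reconstruction, not quoted from the paper):

```python
from math import comb

def binomial_chain(M):
    """Transition matrix of a binomial chain on {0, 1, ..., M}:
    p_ij = C(M, j) (i/M)^j (1 - i/M)^(M-j), with 0^0 = 1, so that
    states 0 and M are absorbing and 1, ..., M-1 are transient.
    Reconstructed Wright-Fisher-type law, an assumption."""
    return [[comb(M, j) * (i / M) ** j * (1 - i / M) ** (M - j)
             for j in range(M + 1)] for i in range(M + 1)]

P = binomial_chain(4)
print(P[0][0], P[4][4])  # both 1.0: states 0 and M are absorbing
```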


Author(s):  
Lajos Takács

A short solution is given for the urn problem proposed by Paul and Tatiana Ehrenfest in 1907. In 1907 P. and T. Ehrenfest (3) proposed an urn model for the resolution of the apparent discrepancy between irreversibility and recurrence in Boltzmann's theory of gases (2). In this model it is assumed that m balls numbered 1, 2, …, m are distributed in two boxes. We perform a series of trials. In each trial we choose a number at random among 1, 2, …, m in such a way that each number has probability 1/m. If we choose j, then we transfer the ball numbered j from one box to the other. Denote by ξn the number of balls in the first box at the end of the nth trial. Initially there are ξ0 balls in the first box. If the trials are independent, then the sequence {ξn; n = 0, 1, 2, …} forms a homogeneous Markov chain with state space I = {0, 1, 2, …, m} and transition probabilities pi,i+1 = (m − i)/m for i = 0, 1, …, m − 1, pi,i−1 = i/m for i = 1, 2, …, m, and pi,k = 0 otherwise. The problem is to determine the transition probabilities pi,k(n) = P{ξn = k | ξ0 = i} for i ∈ I, k ∈ I and n = 0, 1, 2, ….
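The chain described above is fully specified, so its transition matrix can be written down directly; the sketch below also checks the classical fact that the binomial distribution πi = C(m, i)/2^m is stationary for the Ehrenfest chain:

```python
from math import comb

def ehrenfest_matrix(m):
    """Transition matrix of the Ehrenfest urn on I = {0, 1, ..., m}:
    p_{i,i+1} = (m - i)/m, p_{i,i-1} = i/m, all other entries 0."""
    P = [[0.0] * (m + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        if i < m:
            P[i][i + 1] = (m - i) / m
        if i > 0:
            P[i][i - 1] = i / m
    return P

m = 6
P = ehrenfest_matrix(m)
# The binomial distribution pi_i = C(m, i)/2^m satisfies pi P = pi.
pi = [comb(m, i) / 2 ** m for i in range(m + 1)]
piP = [sum(pi[i] * P[i][j] for i in range(m + 1)) for j in range(m + 1)]
print(all(abs(a - b) < 1e-12 for a, b in zip(pi, piP)))  # True
```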


1965 ◽  
Vol 5 (3) ◽  
pp. 299-314 ◽  
Author(s):  
B. D. Craven

Consider a Markov process defined in discrete time t = 1, 2, 3, … on a state space S. The state of the process at time t will be specified by a random variable Vt, taking values in S. This paper presents some results concerning the behaviour of the sequence V1, V2, V3, …, considered as a time series. In general, S will be assumed to be a Borel subset of an h-dimensional Euclidean space, where h is finite. The results apply, in particular, to a continuous state space, taking S to be an interval of the real line, or to discrete processes having finitely or enumerably many states. Certain results, which are indicated in what follows, apply also to more general (infinite-dimensional) state spaces.


Author(s):  
J. G. Mauldon

Consider a Markov chain with an enumerable infinity of states, labelled 0, 1, 2, …, whose one-step transition probabilities pij are independent of time. Then I write pij(n) for the n-step transition probabilities and, departing slightly from the usual convention, πij = lim N→∞ N^(−1) Σ(n=1 to N) pij(n). Then it is known ((1), pp. 324–34, or (6)) that the limits πij always exist, and that Σk πik ≦ 1.
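Reading πij as the Cesàro (time-average) limit — which exists even when pij(n) itself oscillates — a small numerical check on the two-state flip chain, where the ordinary limit of p00(n) fails but the averages converge:

```python
import numpy as np

# For the 2-state flip chain, p_00^(n) alternates between 1 and 0, so
# lim p_00^(n) does not exist; the Cesaro averages N^-1 sum_n P^n do converge.
P = np.array([[0.0, 1.0],
              [1.0, 0.0]])
N = 10000
S = np.zeros_like(P)
Pn = np.eye(2)
for _ in range(N):
    Pn = Pn @ P      # accumulate the n-step transition matrices
    S += Pn
avg = S / N
print(avg)           # every entry is 0.5
```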


Entropy ◽  
2021 ◽  
Vol 23 (6) ◽  
pp. 729
Author(s):  
Miquel Montero

Random walks with invariant loop probabilities comprise a wide family of Markov processes with site-dependent, one-step transition probabilities. The whole family, which includes the simple random walk, emerges from geometric considerations related to the stereographic projection of an underlying geometry into a line. After a general introduction, we focus our attention on the elliptic case: random walks on a circle with built-in reflecting boundaries.
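A generic one-dimensional sketch (not the paper's stereographic construction): a walker with site-dependent loop (stay-put) probabilities on sites 0, …, L − 1, bouncing back at the ends; the loop probabilities used here are purely illustrative:

```python
import random

def walk(n_steps, L, loop, seed=1):
    """Random walk with site-dependent one-step probabilities on sites
    0..L-1 with reflecting ends: at site i the walker stays with
    probability loop[i], otherwise moves to a uniformly chosen neighbour,
    bouncing back at the boundaries. Illustrative sketch only."""
    rng = random.Random(seed)
    x = L // 2
    path = [x]
    for _ in range(n_steps):
        if rng.random() >= loop[x]:          # move with probability 1 - loop[x]
            x += 1 if rng.random() < 0.5 else -1
            if x < 0:
                x = 1                        # reflect at the left end
            elif x >= L:
                x = L - 2                    # reflect at the right end
        path.append(x)
    return path

L = 11
loop = [0.1 * (i % 3) for i in range(L)]     # site-dependent loop probabilities
path = walk(2000, L, loop)
print(min(path), max(path))
```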

