Some explicit results for correlated random walks

1989 ◽  
Vol 26 (4) ◽  
pp. 757-766 ◽  
Author(s):  
Ram Lal ◽  
U. Narayan Bhat

In a correlated random walk (CRW) the probabilities of movement in the positive and negative direction are given by the transition probabilities of a Markov chain. The walk can be represented as a Markov chain if we use a bivariate state space, with the location of the particle and the direction of movement as the two variables. In this paper we derive explicit results for the following characteristics of the walk directly from its transition probability matrix: (i) n-step transition probabilities for the unrestricted CRW, (ii) equilibrium distribution for the CRW restricted on one side, and (iii) equilibrium distribution and first-passage characteristics for the CRW restricted on both sides (i.e., with finite state space).
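A minimal numerical sketch of the bivariate representation described above (illustrative only, not the authors' explicit formulas): the state is the pair (location, last direction), the persistence probabilities a and b below are assumed values, and the walk is truncated to {0, ..., N} with reflecting barriers so that n-step probabilities and the equilibrium distribution can be read off the matrix directly.

```python
import numpy as np

# Correlated random walk on {0, ..., N} with reflecting barriers, represented
# as a bivariate Markov chain on states (location, last direction).
# a = P(repeat a +1 step), b = P(repeat a -1 step); both are assumed values.
N, a, b = 5, 0.7, 0.6
states = [(i, d) for i in range(N + 1) for d in (+1, -1)]
idx = {s: k for k, s in enumerate(states)}

P = np.zeros((len(states), len(states)))
for (i, d), k in idx.items():
    p_up = a if d == +1 else 1.0 - b          # prob. that the next step is +1
    for step, prob in ((+1, p_up), (-1, 1.0 - p_up)):
        j = min(max(i + step, 0), N)          # reflect at the barriers
        P[k, idx[(j, step)]] += prob          # record the attempted direction

# n-step transition probabilities: the n-th matrix power
P10 = np.linalg.matrix_power(P, 10)

# equilibrium distribution: left eigenvector of P for eigenvalue 1
w, v = np.linalg.eig(P.T)
pi = np.real(v[:, np.argmax(np.real(w))])
pi /= pi.sum()
print(pi.round(3))
```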


1992 ◽  
Vol 22 (2) ◽  
pp. 217-223 ◽  
Author(s):  
Heikki Bonsdorff

Under certain conditions, a Bonus-Malus system can be interpreted as a Markov chain whose n-step transition probabilities converge to a limit probability distribution. In this paper, the rate of convergence is studied by means of the eigenvalues of the transition probability matrix of the Markov chain.
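As a rough illustration of that eigenvalue argument, with an assumed toy transition matrix rather than one taken from the paper: the second-largest eigenvalue modulus of P governs the geometric rate at which the n-step probabilities approach the limit distribution.

```python
import numpy as np

# Toy 3-class bonus-malus style chain (assumed numbers, purely illustrative):
# the class moves down after a claim-free year and up after a claim.
P = np.array([[0.1, 0.9, 0.0],
              [0.1, 0.0, 0.9],
              [0.0, 0.1, 0.9]])

eigvals = np.linalg.eigvals(P)
# Sort by modulus: the largest is 1; the second-largest modulus (SLEM)
# controls the geometric rate at which P^n approaches the limit distribution.
mods = np.sort(np.abs(eigvals))[::-1]
print("second-largest eigenvalue modulus:", mods[1])
# Heuristically, |P^n(i, j) - pi(j)| decays roughly like mods[1] ** n.
```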


1988 ◽  
Vol 1 (3) ◽  
pp. 197-222
Author(s):  
Ram Lal ◽  
U. Narayan Bhat

A random walk describes the movement of a particle in discrete time, with the direction and the distance traversed in one step being governed by a probability distribution. In a correlated random walk (CRW) the movement follows a Markov chain and induces correlation in the state of the walk at various epochs. The walk can then be modelled as a bivariate Markov chain with the location of the particle and the direction of movement as the two variables. In such random walks, normally, the particle is not allowed to stay at one location from one step to the next. In this paper we derive explicit results for the following characteristics of the CRW when it is allowed to stay at the same location, directly from its transition probability matrix: (i) equilibrium solution and first-passage probabilities for the CRW restricted on one side, and (ii) equilibrium solution and first-passage characteristics for the CRW restricted on both sides (i.e., with finite state space).
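For a concrete, if generic, counterpart to the first-passage quantities mentioned above, the standard absorbing-chain calculation below computes mean first-passage times directly from a transition probability matrix. It is a textbook sketch, not the explicit formulas derived in the paper, and the 4-state matrix is an assumed example that lets the particle stay in place.

```python
import numpy as np

# Generic first-passage computation on a finite chain via the fundamental
# matrix of the absorbing chain obtained by making the target states absorbing.
def mean_first_passage(P, target):
    n = P.shape[0]
    rest = [i for i in range(n) if i not in set(target)]
    Q = P[np.ix_(rest, rest)]                    # sub-matrix among non-target states
    N = np.linalg.inv(np.eye(len(rest)) - Q)     # fundamental matrix (I - Q)^{-1}
    return dict(zip(rest, N.sum(axis=1)))        # expected steps to reach the target set

# Assumed 4-state transition matrix that allows staying at the same location:
P = np.array([[0.2, 0.8, 0.0, 0.0],
              [0.3, 0.2, 0.5, 0.0],
              [0.0, 0.3, 0.2, 0.5],
              [0.0, 0.0, 0.8, 0.2]])
print(mean_first_passage(P, target=[3]))
```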


2018 ◽  
Vol 55 (3) ◽  
pp. 862-886 ◽  
Author(s):  
F. Alberto Grünbaum ◽  
Manuel D. de la Iglesia

We consider upper-lower (UL) (and lower-upper (LU)) factorizations of the one-step transition probability matrix of a random walk on the nonnegative integers, with the condition that both the upper and the lower triangular matrices in the factorization are also stochastic matrices. We provide conditions on the free parameter of the UL factorization, in terms of certain continued fractions, such that this stochastic factorization is possible. By inverting the order of the factors (also known as a Darboux transformation) we obtain a new family of random walks whose spectral measures can be stated in terms of a Geronimus transformation. We repeat this for the LU factorization, but without a free parameter. Finally, we apply our results to two examples: the random walk with constant transition probabilities, and the random walk generated by the Jacobi orthogonal polynomials. In both situations we obtain urn models associated with all the random walks in question.
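A small numerical illustration of the factor-reversal (Darboux) step on a finite truncation: the bidiagonal stochastic factors U and L below are assumed directly rather than obtained from the paper's continued-fraction conditions, but they show that both UL and its reversal LU are again stochastic tridiagonal walks.

```python
import numpy as np

# Assumed bidiagonal stochastic factors on a finite truncation of the walk.
n = 5
alpha = np.full(n, 0.6)          # U: probability of moving "up"   (assumed values)
beta = np.full(n, 0.3)           # L: probability of moving "down" (assumed values)

U = np.zeros((n, n))             # upper bidiagonal, rows sum to 1
L = np.zeros((n, n))             # lower bidiagonal, rows sum to 1
for i in range(n):
    U[i, i] = 1 - alpha[i] if i < n - 1 else 1.0
    if i < n - 1:
        U[i, i + 1] = alpha[i]
    L[i, i] = 1 - beta[i] if i > 0 else 1.0
    if i > 0:
        L[i, i - 1] = beta[i]

P = U @ L                        # tridiagonal random walk with P = UL
P_tilde = L @ U                  # Darboux transformation: reverse the factors

for name, M in (("P", P), ("P~", P_tilde)):
    print(name, "row sums:", M.sum(axis=1).round(12))
```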


2019 ◽  
Vol 29 (1) ◽  
pp. 59-68
Author(s):  
Artem V. Volgin

We consider the classical model of embeddings in a simple binary Markov chain with unknown transition probability matrix. We obtain conditions on the asymptotic growth of the lengths of the original and embedded sequences sufficient for the consistency of the proposed statistical embedding detection test.
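As a side note (this is not the paper's detection statistic), any such test with an unknown transition probability matrix starts from transition counts; the sketch below simulates a binary chain with an assumed matrix and recovers the maximum-likelihood estimate of its transition probabilities.

```python
import numpy as np

# Simulate a binary Markov chain with an assumed transition matrix, then
# estimate the matrix from the observed run via transition counts (MLE).
rng = np.random.default_rng(0)
true_P = np.array([[0.8, 0.2],
                   [0.4, 0.6]])
x = [0]
for _ in range(10_000):
    x.append(rng.choice(2, p=true_P[x[-1]]))

counts = np.zeros((2, 2))
for a, b in zip(x[:-1], x[1:]):
    counts[a, b] += 1
P_hat = counts / counts.sum(axis=1, keepdims=True)
print(P_hat.round(3))
```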


2004 ◽  
Vol 36 (4) ◽  
pp. 1198-1211 ◽  
Author(s):  
James Ledoux

Let (φ(X_n))_n be a function of a finite-state Markov chain (X_n)_n. In this article, we investigate the conditions under which the random variables φ(X_n) have the same distribution as Y_n (for every n), where (Y_n)_n is a Markov chain with fixed transition probability matrix. In other words, for a deterministic function φ, we investigate the conditions under which (X_n)_n is weakly lumpable for the state vector. We show that the set of all probability distributions of X_0, such that (X_n)_n is weakly lumpable for the state vector, can be finitely generated. The connections between our definition of lumpability and the usual one (i.e. as the proportional dynamics property) are discussed.
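Weak lumpability, as studied here, depends on the initial distribution of X_0 and is strictly more general than the classical notion; the sketch below checks only the much stronger classical (strong) lumpability condition for a given partition, as a simple point of reference.

```python
import numpy as np

# Strong lumpability check (NOT the weak lumpability analysed in the paper):
# within each block of the partition, every state must have identical total
# transition probability into each block.
def is_strongly_lumpable(P, partition, tol=1e-12):
    for block in partition:
        rows = np.array([[P[i, list(other)].sum() for other in partition]
                         for i in block])
        if not np.allclose(rows, rows[0], atol=tol):
            return False
    return True

P = np.array([[0.5, 0.2, 0.3],
              [0.1, 0.6, 0.3],
              [0.2, 0.2, 0.6]])
# True: both states of the first block send 0.7 into block {0, 1} and 0.3 into {2}.
print(is_strongly_lumpable(P, partition=[[0, 1], [2]]))
```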


1968 ◽  
Vol 5 (1) ◽  
pp. 220-223 ◽  
Author(s):  
Barry C. Arnold

Let {X(n): n = 0, 1, 2, …} be a Markov chain with state space {0, 1, 2, …, N} and transition probability matrix P = (p_ij).


1982 ◽  
Vol 19 (3) ◽  
pp. 685-691 ◽  
Author(s):  
Atef M. Abdel-moneim ◽  
Frederick W. Leysieffer

Criteria are given to determine whether a given finite Markov chain can be lumped weakly with respect to a given partition of its state space. These conditions are given in terms of solution classes of systems of linear equations associated with the transition probability matrix of the Markov chain and the given partition.
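Purely as an illustration of the linear-system viewpoint (the paper's weak-lumpability criteria are more involved), the sketch below forms the block-membership matrix V for a partition, solves the system P V = V P_hat in the least-squares sense, and uses the residual to decide whether the partition lumps the chain exactly.

```python
import numpy as np

# Set up and solve the simplest linear system tying P to a partition:
# P V = V P_hat, where V is the block-membership (indicator) matrix.
# A zero residual means the partition lumps the chain exactly (strong sense).
def lumped_matrix(P, partition):
    n, m = P.shape[0], len(partition)
    V = np.zeros((n, m))
    for j, block in enumerate(partition):
        V[list(block), j] = 1.0
    P_hat, *_ = np.linalg.lstsq(V, P @ V, rcond=None)   # least-squares solve
    residual = np.linalg.norm(V @ P_hat - P @ V)
    return P_hat, residual

P = np.array([[0.5, 0.2, 0.3],
              [0.1, 0.6, 0.3],
              [0.2, 0.2, 0.6]])
P_hat, res = lumped_matrix(P, [[0, 1], [2]])
print(P_hat.round(3), "residual:", res)
```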

