On the Embedding Problem for Discrete-Time Markov Chains

2013 ◽  
Vol 50 (4) ◽ 
pp. 918-930 ◽  
Author(s):  
Marie-Anne Guerry

When a discrete-time homogeneous Markov chain is observed at time intervals that correspond to its time unit, the transition probabilities of the chain can be estimated using known maximum likelihood estimators. In this paper we consider the situation in which a Markov chain is observed on time intervals whose length equals twice the time unit of the chain. The issue then arises of characterizing probability matrices whose square root(s) are also probability matrices. This characterization is referred to in the literature as the embedding problem for discrete-time Markov chains. A probability matrix that has probability root(s) is called embeddable. In this paper, for two-state Markov chains, necessary and sufficient conditions for embeddability are formulated and the probability square roots of the transition matrix are presented in analytic form. In finding conditions for the existence of probability square roots for (k × k) transition matrices, properties of row-normalized matrices are examined. Besides the existence of probability square roots, the uniqueness of these solutions is discussed: in the case of nonuniqueness, a procedure is introduced to identify a transition matrix that takes the specificity of the concrete context into account. In the case of nonexistence of a probability root, the concept of an approximate probability root is introduced as a solution of an optimization problem related to approximate nonnegative matrix factorization.
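For the two-state case the abstract says the probability square roots have an analytic form. A minimal sketch of that idea, writing P = [[1−a, a], [b, 1−b]] with second eigenvalue λ = 1 − a − b: when λ ≥ 0, dividing the off-diagonal entries by 1 + √λ yields a stochastic square root. The function names and the tolerance handling are ours, not the paper's.

```python
import math

def stochastic_sqrt_2x2(a, b, tol=1e-12):
    """Return a stochastic square root Q of P = [[1-a, a], [b, 1-b]],
    or None when no probability square root exists (second eigenvalue < 0)."""
    lam = 1.0 - a - b              # second eigenvalue of P (the first is 1)
    if lam < -tol:
        return None                # negative eigenvalue: no probability root
    mu = math.sqrt(max(lam, 0.0))  # positive square root of the eigenvalue
    ap = a / (1.0 + mu)            # off-diagonal entries of the root
    bp = b / (1.0 + mu)
    return [[1.0 - ap, ap], [bp, 1.0 - bp]]

def matmul2(X, Y):
    """Multiply two 2x2 matrices, to check Q @ Q == P."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]
```

For example, with a = 0.2 and b = 0.3, squaring the returned Q recovers P up to rounding; with a = b = 0.7 the eigenvalue is negative and no probability root exists.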


Author(s):  
Peter L. Chesson

Random transition probability matrices with stationary independent factors define “white noise” environment processes for Markov chains. Two examples are considered in detail. Such environment processes can be used to construct several Markov chains that are dependent, have the same transition probabilities, and are jointly a Markov chain. Transition rates for such processes are evaluated. These results have applications to the study of animal movements.
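The construction can be illustrated by driving two chains through the same i.i.d. sequence of random transition matrices: sharing the environment makes the chains dependent even though each is marginally a Markov chain. This is an illustrative sketch only, not Chesson's specific examples; all names are ours.

```python
import random

def random_stochastic_2x2(rng):
    """One draw of the i.i.d. ("white noise") environment: a random
    two-state transition probability matrix."""
    a, b = rng.random(), rng.random()
    return [[1 - a, a], [b, 1 - b]]

def step(state, P, rng):
    """Advance one chain a single step under transition matrix P."""
    return 0 if rng.random() < P[state][0] else 1

def run_coupled(n_steps, seed=7):
    """Run two chains through the SAME environment draw at each step;
    the pair (x, y) is jointly a Markov chain."""
    rng = random.Random(seed)
    x, y = 0, 1
    for _ in range(n_steps):
        P = random_stochastic_2x2(rng)  # shared environment draw
        x, y = step(x, P, rng), step(y, P, rng)
    return x, y
```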


Author(s):  
Marcel F. Neuts

We consider a stationary discrete-time Markov chain with a finite number m of possible states, which we designate by 1, …, m. We assume that at time t = 0 the process is in an initial state i with probability p_i (i = 1, …, m), where p_i ≥ 0 and p_1 + ⋯ + p_m = 1.
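Given such an initial distribution p = (p_1, …, p_m), the marginal distribution at time t is p P^t. A minimal sketch by repeated vector-matrix multiplication (the function name is ours):

```python
def propagate(p, P, t):
    """Marginal distribution after t steps: p P^t, computed by
    repeated vector-matrix multiplication."""
    m = len(p)
    for _ in range(t):
        p = [sum(p[i] * P[i][j] for i in range(m)) for j in range(m)]
    return p
```

Starting from a point mass at state 0 under P = [[0.9, 0.1], [0.2, 0.8]], one step gives (0.9, 0.1), and the entries always sum to 1.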


1973 ◽  
Vol 10 (4) ◽ 
pp. 891-894 ◽ 
Author(s):  
H. P. Wynn

The set of transient states of a Markov chain is considered as a system. If the numbers of arrivals to the system at discrete time points have constant mean and covariance matrix, then there is a limiting distribution of the numbers in the states. Necessary and sufficient conditions are given for this distribution to yield zero correlations between states.


1971 ◽  
Vol 8 (2) ◽ 
pp. 381-390 ◽  
Author(s):  
P. J. Pedler

Consider first a Markov chain with two ergodic states E1 and E2, and discrete time parameter set {0, 1, 2, ···, n}. Define the random variables Z0, Z1, Z2, ···, Zn; then the conditional probabilities, for k = 1, 2, ···, n, are independent of k. Thus the matrix of transition probabilities is
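In this two-state setting the standard maximum likelihood estimator of the transition matrix from one observed path is the matrix of empirical transition frequencies. A minimal sketch, with states coded 0/1 (function name ours):

```python
def mle_transition_2x2(z):
    """Maximum likelihood estimate of the transition matrix from an
    observed path z (states coded 0/1): p_hat[i][j] = n_ij / n_i,
    where n_ij counts i -> j transitions and n_i = n_i0 + n_i1."""
    counts = [[0, 0], [0, 0]]
    for s, t in zip(z, z[1:]):   # consecutive pairs are the transitions
        counts[s][t] += 1
    est = []
    for row in counts:
        n_i = sum(row)
        est.append([c / n_i if n_i else 0.0 for c in row])
    return est
```

For the path 0, 0, 1, 0, 1, 1 the transitions from state 0 are one 0→0 and two 0→1, giving an estimated first row of (1/3, 2/3).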



2016 ◽  
Vol 53 (1) ◽  
pp. 216-230 ◽  
Author(s):  
Walter A. F. de Carvalho ◽  
Sandro Gallo ◽  
Nancy L. Garcia

Starting from a Markov chain with a finite or a countably infinite alphabet, we consider the chain obtained when all but one symbol are indistinguishable to the practitioner. We study conditions on the transition matrix of the Markov chain ensuring that the image chain has continuous or discontinuous transition probabilities with respect to the past.
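The image chain is simply the original path pushed through a projection that keeps one symbol visible and collapses everything else; whether the resulting process has continuous transition probabilities with respect to the past is the question the paper answers in terms of the original transition matrix. A minimal sketch of the projection (names ours):

```python
def observe(path, visible):
    """Project a state path to what the practitioner sees: every state
    other than `visible` collapses into one indistinguishable symbol '*'."""
    return [s if s == visible else '*' for s in path]
```

For instance, observe([1, 2, 3, 1, 2], 1) yields [1, '*', '*', 1, '*'].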


2001 ◽  
Vol 33 (2) ◽  
pp. 505-519 ◽  
Author(s):  
James Ledoux ◽  
Laurent Truffet

In this paper we obtain Markovian bounds on a function of a homogeneous discrete-time Markov chain. To derive such bounds, we use well-known results on stochastic majorization of Markov chains and the Rogers–Pitman lumpability criterion. The proposed method of comparison between functions of Markov chains is not equivalent to the generalized coupling method of Markov chains, although we obtain the same kind of majorization. We derive necessary and sufficient conditions for the existence of our Markovian bounds. We also discuss the choice of the geometric invariant related to the lumpability condition that we use.
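Lumpability is the key ingredient here: a function of a Markov chain is again Markov only when the transition matrix respects the induced partition of the state space. The paper uses the Rogers–Pitman criterion; as a simpler, related illustration, the classical strong-lumpability test checks that within each block every state has the same total transition probability into each target block (function name and example ours):

```python
def lump(P, blocks, tol=1e-12):
    """Classical strong-lumpability test: return the lumped transition
    matrix over `blocks` (a partition of the states of P), or None when
    block row sums disagree within some block."""
    k = len(blocks)
    Q = [[0.0] * k for _ in range(k)]
    for bi, B in enumerate(blocks):
        for cj, C in enumerate(blocks):
            # total mass sent into block C, one value per state of block B
            sums = [sum(P[i][j] for j in C) for i in B]
            if max(sums) - min(sums) > tol:
                return None          # states of B disagree: not lumpable
            Q[bi][cj] = sums[0]
    return Q
```

For P = [[0.5, 0.25, 0.25], [0.3, 0.35, 0.35], [0.3, 0.1, 0.6]] and the partition {0}, {1, 2}, both states 1 and 2 send mass 0.3 to {0} and 0.7 to {1, 2}, so the lumped matrix is [[0.5, 0.5], [0.3, 0.7]].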


2021 ◽  
Vol 0 (0) ◽  
Author(s):  
Nikolaos Halidias

In this note we study the probability of and the mean time to absorption for discrete-time Markov chains. In particular, we are interested in estimating the mean time to absorption when absorption is not certain, and we connect it with some other known results. By computing a suitable probability generating function, we are able to estimate the mean time to absorption when absorption is not certain, giving some applications concerning the random walk. Furthermore, we investigate the probability that a Markov chain reaches a set A before a set B, generalizing this result to a sequence of sets A1, A2, …, Ak.
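The paper's generating-function machinery handles the harder case where absorption is not certain; as a baseline, when absorption is certain the hitting probability h solves h = Ph with boundary values fixed at the absorbing states, and the mean absorption time u solves u = 1 + Pu on the transient states. A minimal sketch by fixed-point iteration, assuming certain absorption (names and the iteration count are ours):

```python
def hit_before(P, target, absorbing, iters=500):
    """h_i = P(reach `target` before any other absorbing state),
    by iterating h <- P h with boundary values held fixed."""
    n = len(P)
    h = [1.0 if i == target else 0.0 for i in range(n)]
    for _ in range(iters):
        h = [h[i] if i in absorbing
             else sum(P[i][j] * h[j] for j in range(n)) for i in range(n)]
    return h

def mean_absorption_time(P, absorbing, iters=500):
    """u_i = expected steps to absorption, by iterating u <- 1 + P u
    on the transient states (valid only when absorption is certain)."""
    n = len(P)
    u = [0.0] * n
    for _ in range(iters):
        u = [0.0 if i in absorbing
             else 1.0 + sum(P[i][j] * u[j] for j in range(n)) for i in range(n)]
    return u
```

For the fair random walk on {0, 1, 2, 3} with absorbing endpoints, the probability of reaching 3 before 0 from state i is i/3, and the mean absorption time from either interior state is 2 steps.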

