Markov chains in small time intervals

1981 ◽  
Vol 18 (3) ◽  
pp. 747-751
Author(s):  
Stig I. Rosenlund

For a time-homogeneous continuous-parameter Markov chain we show that as t → 0 the transition probability p_{n,j}(t) is at least of order t^{r(n,j)}, where r(n, j) is the minimum number of jumps needed for the chain to pass from n to j. If the intensities of passage are bounded over the set of states which can be reached from n via fewer than r(n, j) jumps, this is the exact order.
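The order claim can be checked numerically on a toy chain. The sketch below (generator values are illustrative, not from the paper) builds a three-state pure-birth chain in which r(0, 2) = 2 and verifies that p_{0,2}(t)/t² tends to q_{01}·q_{12}/2 as t → 0, using a truncated Taylor series for the matrix exponential:

```python
def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def expm(Q, t, terms=30):
    """Truncated Taylor series for P(t) = exp(tQ)."""
    n = len(Q)
    P = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]  # identity
    term = [row[:] for row in P]
    for k in range(1, terms):
        term = mat_mul(term, [[t * q / k for q in row] for row in Q])  # (tQ)^k / k!
        P = [[P[i][j] + term[i][j] for j in range(n)] for i in range(n)]
    return P

# Chain 0 -> 1 -> 2 with jump intensities 2.0 and 3.0; state 2 absorbing.
Q = [[-2.0, 2.0, 0.0],
     [0.0, -3.0, 3.0],
     [0.0, 0.0, 0.0]]

# r(0, 2) = 2, so p_{0,2}(t) / t^2 should tend to 2.0 * 3.0 / 2 = 3.0 as t -> 0.
for t in (0.1, 0.01, 0.001):
    print(t, expm(Q, t)[0][2] / t**2)
```

The leading Taylor coefficient of p_{0,2}(t) is (Q²)_{0,2}/2! = q_{01}·q_{12}/2, which is exactly the r(n, j)-jump term in the series.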



Author(s):  
Peter L. Chesson

Random transition probability matrices with stationary independent factors define “white noise” environment processes for Markov chains. Two examples are considered in detail. Such environment processes can be used to construct several Markov chains which are dependent, have the same transition probabilities and are jointly a Markov chain. Transition rates for such processes are evaluated. These results have application to the study of animal movements.


1996 ◽  
Vol 33 (3) ◽  
pp. 640-653 ◽  
Author(s):  
Tobias Rydén

An aggregated Markov chain is a Markov chain for which some states cannot be distinguished from each other by the observer. In this paper we consider the identifiability problem for such processes in continuous time, i.e. the problem of determining whether two parameters induce identical laws for the observable process or not. We also study the order of a continuous-time aggregated Markov chain, which is the minimum number of states needed to represent it. In particular, we give a lower bound on the order. As a by-product, we obtain results of this kind also for Markov-modulated Poisson processes, i.e. doubly stochastic Poisson processes whose intensities are directed by continuous-time Markov chains, and phase-type distributions, which are hitting times in finite-state Markov chains.


1991 ◽  
Vol 4 (4) ◽  
pp. 293-303
Author(s):  
P. Todorovic

Let {ξn} be a non-decreasing stochastically monotone Markov chain whose transition probability Q(·,·) has Q(x, {x}) = β(x) > 0 for some function β(·) that is non-decreasing with β(x) ↑ 1 as x → +∞, and each Q(x,·) is non-atomic otherwise. A typical realization of {ξn} is a Markov renewal process {(Xn, Tn)}, where ξj = Xn for Tn consecutive values of j, with Tn geometric on {1, 2, …} with parameter β(Xn). Conditions are given for Xn to be relatively stable and for Tn to be weakly convergent.
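The geometric holding times in this representation are easy to see in simulation. The sketch below (illustrative, not from the paper) draws the number of consecutive steps spent at a state with self-transition probability β; for a geometric variable on {1, 2, …} the mean should be 1/(1 − β):

```python
import random

def holding_time(beta, rng):
    """Consecutive steps at a state the chain leaves with probability 1 - beta."""
    t = 1
    while rng.random() < beta:
        t += 1
    return t

rng = random.Random(0)
beta = 0.75
n = 200_000
mean = sum(holding_time(beta, rng) for _ in range(n)) / n
print(mean)  # should be close to 1 / (1 - 0.75) = 4
```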


1983 ◽  
Vol 20 (3) ◽  
pp. 482-504 ◽  
Author(s):  
C. Cocozza-Thivent ◽  
C. Kipnis ◽  
M. Roussignol

We investigate how the property of null-recurrence is preserved for Markov chains under a perturbation of the transition probability. After recalling some useful criteria in terms of the one-step transition nucleus we present two methods to determine barrier functions, one in terms of taboo potentials for the unperturbed Markov chain, and the other based on Taylor's formula.


2013 ◽  
Vol 50 (4) ◽  
pp. 918-930 ◽  
Author(s):  
Marie-Anne Guerry

When a discrete-time homogeneous Markov chain is observed at time intervals that correspond to its time unit, the transition probabilities of the chain can be estimated using known maximum likelihood estimators. In this paper we consider a situation where a Markov chain is observed on time intervals with length equal to twice the time unit of the Markov chain. The issue then arises of characterizing probability matrices whose square root(s) are also probability matrices. This characterization is referred to in the literature as the embedding problem for discrete-time Markov chains. A probability matrix which has probability root(s) is called embeddable. In this paper, necessary and sufficient conditions for embeddability are formulated for two-state Markov chains, and the probability square roots of the transition matrix are presented in analytic form. In finding conditions for the existence of probability square roots for (k × k) transition matrices, properties of row-normalized matrices are examined. Besides the existence of probability square roots, the uniqueness of these solutions is discussed: in the case of nonuniqueness, a procedure is introduced to identify a transition matrix that takes into account the specificity of the concrete context. In the case of nonexistence of a probability root, the concept of an approximate probability root is introduced as a solution of an optimization problem related to approximate nonnegative matrix factorization.
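For the two-state case the square root can be written down from the spectral decomposition of P = [[1 − a, a], [b, 1 − b]], whose second eigenvalue is λ = 1 − a − b. The sketch below (values are illustrative, and it assumes λ ≥ 0 so a real principal root exists; it is not the paper's full characterization) computes the principal root and checks that it is again a probability matrix:

```python
import math

def sqrt_two_state(a, b):
    """Principal square root of P = [[1-a, a], [b, 1-b]] via f(P) = A + f(lam)*B,
    where A and B are the spectral projectors onto the eigenvalues 1 and
    lam = 1 - a - b (assumed nonnegative here)."""
    lam = 1.0 - a - b
    s = math.sqrt(lam)
    d = a + b
    A = [[b / d, a / d], [b / d, a / d]]       # projector for eigenvalue 1
    B = [[a / d, -a / d], [-b / d, b / d]]     # projector for eigenvalue lam
    return [[A[i][j] + s * B[i][j] for j in range(2)] for i in range(2)]

P = [[0.9, 0.1], [0.2, 0.8]]
R = sqrt_two_state(0.1, 0.2)

# R has nonnegative entries and unit row sums, so this P is embeddable at half
# the time step; multiplying R by itself should reproduce P.
RR = [[sum(R[i][k] * R[k][j] for k in range(2)) for j in range(2)] for i in range(2)]
print(R)
print(RR)
```

Since the rows of A sum to 1 and the rows of B sum to 0, any matrix of the form A + sB automatically has unit row sums; only nonnegativity of the entries needs to be checked.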


2019 ◽  
Vol 29 (1) ◽  
pp. 59-68
Author(s):  
Artem V. Volgin

We consider the classical model of embeddings in a simple binary Markov chain with unknown transition probability matrix. We obtain conditions on the asymptotic growth of lengths of the original and embedded sequences sufficient for the consistency of the proposed statistical embedding detection test.


2001 ◽  
Vol 162 ◽  
pp. 169-185
Author(s):  
Tokuzo Shiga ◽  
Akinobu Shimizu ◽  
Takahiro Soshi

Fractional moments of the passage-times are considered for positively recurrent Markov chains with countable state spaces. A criterion of the finiteness of the fractional moments is obtained in terms of the convergence rate of the transition probability to the stationary distribution. As an application it is proved that the passage time of a direct product process of Markov chains has the same order of the fractional moments as that of the single Markov chain.


1977 ◽  
Vol 14 (3)
pp. 621-625
Author(s):  
A. O. Pittenger

Suppose a physical process is modelled by a Markov chain with transition probability on S1 ∪ S2, S1 denoting the transient states and S2 a set of absorbing states. If v denotes the output distribution on S2, the question arises as to what input distributions (of raw materials) on S1 produce v. In this note we give an alternative to the formulation of Ray and Margo [2] and reduce the problem to one system of linear inequalities. An application to random walk is given and the equiprobability case examined in detail.
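The input–output relation behind this question can be illustrated with concrete numbers (hypothetical, not from the note): with transient-to-transient block T and transient-to-absorbing block R of the transition matrix, the absorption probabilities are B = (I − T)⁻¹R, and an input distribution μ on S1 produces output μB on S2:

```python
# Transient states S1 = {0, 1}, absorbing states S2 = {2, 3}; numbers illustrative.
T = [[0.2, 0.3],      # transient-to-transient block
     [0.4, 0.1]]
R = [[0.5, 0.0],      # transient-to-absorbing block
     [0.1, 0.4]]

# Invert I - T directly in the 2x2 case.
a, b = 1 - T[0][0], -T[0][1]
c, d = -T[1][0], 1 - T[1][1]
det = a * d - b * c
N = [[d / det, -b / det], [-c / det, a / det]]   # fundamental matrix (I - T)^{-1}

# Absorption probabilities B = N @ R; each row of B is a distribution on S2.
B = [[sum(N[i][k] * R[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

mu = [0.5, 0.5]       # a candidate input distribution on S1
v = [sum(mu[i] * B[i][j] for i in range(2)) for j in range(2)]
print(B)
print(v)
```

Asking which inputs μ produce a given v then amounts to solving μB = v subject to μ ≥ 0 and Σμᵢ = 1, which is the kind of linear-inequality system the note reduces the problem to.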


1997 ◽  
Vol 34 (4) ◽  
pp. 847-858 ◽  
Author(s):  
James Ledoux

We consider weak lumpability of finite homogeneous Markov chains: the property that the lumped chain induced by a partition of the initial state space is again a homogeneous Markov chain. We show that weak lumpability is equivalent to the existence of a direct sum of polyhedral cones that is positively invariant under the transition probability matrix of the original chain. This allows us, in a unified way, to derive new results on lumpability of reducible Markov chains and to obtain spectral properties associated with lumpability.
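A simpler special case is ordinary (strong) lumpability, which suffices for weak lumpability under every initial distribution: it holds when the probability of jumping into each block is the same from every state of any given block. A minimal sketch of that check (the example matrix is illustrative, not from the paper):

```python
def is_strongly_lumpable(P, partition):
    """P: row-stochastic matrix as a list of rows; partition: list of state lists.
    Returns True iff, for every pair of blocks, the total probability of jumping
    into the target block is identical from each state of the source block."""
    for block in partition:
        for target in partition:
            sums = {round(sum(P[i][j] for j in target), 12) for i in block}
            if len(sums) > 1:
                return False
    return True

P = [[0.5, 0.3, 0.2],
     [0.4, 0.4, 0.2],
     [0.2, 0.4, 0.4]]

print(is_strongly_lumpable(P, [[0, 1], [2]]))  # states 0 and 1 can be merged
print(is_strongly_lumpable(P, [[0, 2], [1]]))  # this partition fails the test
```

Weak lumpability is strictly more permissive: it only requires the lumped process to be Markov for some initial distribution, which is why the paper's cone-invariance characterization is needed.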

