Regularity conditions for semi-Markov and Markov chains in continuous time

1983
Vol 20 (3)
pp. 505-512
Author(s):  
Russell Gerrard

The classical condition for regularity of a Markov chain is extended to include semi-Markov chains. In addition, for any given semi-Markov chain, we find Markov chains which exhibit identical regularity properties. This is done either (i) by transforming the state space or, alternatively, (ii) by imposing conditions on the holding-time distributions. Brief consideration is given to the problem of extending the results to processes other than semi-Markov chains.
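
For orientation, the classical condition in question can be stated in its standard textbook form (for a chain with a stable, conservative intensity matrix; this formulation is not quoted from the paper):

```latex
% Classical regularity (non-explosion) criterion: writing (Y_n) for the
% embedded jump chain and q_i for the rate of leaving state i, the chain
% is regular if and only if
\[
  \sum_{n=0}^{\infty} \frac{1}{q_{Y_n}} = \infty \quad \text{almost surely.}
\]
```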


1976
Vol 8 (4)
pp. 737-771
Author(s):  
R. L. Tweedie

The aim of this paper is to present a comprehensive set of criteria for classifying as recurrent, transient, null or positive the sets visited by a general state space Markov chain. When the chain is irreducible in some sense, these then provide criteria for classifying the chain itself, provided the sets considered actually reflect the status of the chain as a whole. The first part of the paper is concerned with the connections between various definitions of recurrence, transience, nullity and positivity for sets and for irreducible chains; here we also elaborate the idea of status sets for irreducible chains. In the second part we give our criteria for classifying sets. When the state space is countable, our results for recurrence, transience and positivity reduce to the classical work of Foster (1953); for continuous-valued chains they extend results of Lamperti (1960), (1963); for general spaces the positivity and recurrence criteria strengthen those of Tweedie (1975b).
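
For reference, the classical result of Foster (1953) to which these criteria reduce is usually stated in drift form (a standard formulation, not quoted from the paper):

```latex
% Foster's criterion (countable, irreducible case): the chain is positive
% recurrent iff there exist V >= 0, a finite set C, and constants
% eps > 0, b < infinity such that
\[
  \sum_{y} P(x,y)\, V(y) \;\le\; V(x) - \varepsilon + b\,\mathbf{1}_{C}(x)
  \qquad \text{for all states } x .
\]
```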


1990
Vol 4 (1)
pp. 89-116
Author(s):  
Ushio Sumita
Maria Rieders

A novel algorithm is developed which computes the ergodic probability vector for large Markov chains. The algorithm decomposes the state space into lumps and generates a replacement process on each lump, where any exit from the lump is instantaneously replaced at some state in that lump. The replacement distributions are constructed recursively in such a way that, in the limit, the ergodic probability vector for a replacement process on one lump will be proportional to the ergodic probability vector of the original Markov chain restricted to that lump. The matrices inverted in the algorithm are of size M − 1, where M is the number of lumps, thereby providing a substantial rank reduction. When a special structure is present, the procedure for generating the replacement distributions can be simplified. The relevance of the new algorithm to the aggregation-disaggregation algorithm of Takahashi [29] is also discussed.
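
The proportionality property at the heart of the algorithm can be illustrated with the closely related stochastic complement construction (Meyer's, not the authors' recursive scheme): routing every exit from a lump back through the rest of the chain yields a process on the lump whose ergodic vector is proportional to the original stationary vector restricted to that lump. A minimal NumPy sketch with a made-up 4-state chain:

```python
import numpy as np

# An ergodic 4-state transition matrix, partitioned into lumps {0,1} and {2,3}.
P = np.array([[0.5, 0.2, 0.2, 0.1],
              [0.1, 0.6, 0.1, 0.2],
              [0.3, 0.1, 0.4, 0.2],
              [0.2, 0.2, 0.1, 0.5]])
A, B = [0, 1], [2, 3]            # lump under study and its complement

P11, P12 = P[np.ix_(A, A)], P[np.ix_(A, B)]
P21, P22 = P[np.ix_(B, A)], P[np.ix_(B, B)]

# Stochastic complement: an exit from the lump is replaced at the state
# through which the original chain would re-enter the lump.
S = P11 + P12 @ np.linalg.inv(np.eye(len(B)) - P22) @ P21

def stationary(M):
    """Left Perron vector of a stochastic matrix, normalised to sum to 1."""
    w, v = np.linalg.eig(M.T)
    pi = np.real(v[:, np.argmax(np.real(w))])
    return pi / pi.sum()

pi = stationary(P)
print(stationary(S))         # ergodic vector of the lump-level process ...
print(pi[A] / pi[A].sum())   # ... equals pi restricted to the lump, renormalised
```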


1984
Vol 21 (3)
pp. 567-574
Author(s):  
Atef M. Abdel-Moneim
Frederick W. Leysieffer

Conditions are given under which a function of a finite, discrete-time Markov chain X(t) is again Markov when X(t) is not irreducible. These conditions are given in terms of an interrelationship between two partitions of the state space of X(t): the partition induced by the minimal essential classes of X(t) and the partition with respect to which lumping is to be considered.
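
For contrast, the classical strong-lumpability test for the irreducible case (Kemeny and Snell) is purely mechanical: the probability of jumping from a state into a block must depend only on the block containing the starting state. A small NumPy sketch of that classical test, not of the paper's weaker conditions for the non-irreducible case:

```python
import numpy as np

def strongly_lumpable(P, blocks, tol=1e-12):
    """Kemeny-Snell test: for every ordered pair of blocks (Bi, Bj), the
    row sums of P from states of Bi into Bj must be constant over Bi."""
    for Bi in blocks:
        for Bj in blocks:
            row_sums = P[np.ix_(Bi, Bj)].sum(axis=1)
            if np.ptp(row_sums) > tol:   # max - min of the row sums
                return False
    return True

# A 4-state chain that is lumpable with respect to {{0,1}, {2,3}}.
P = np.array([[0.10, 0.30, 0.40, 0.20],
              [0.20, 0.20, 0.10, 0.50],
              [0.25, 0.25, 0.30, 0.20],
              [0.40, 0.10, 0.20, 0.30]])
print(strongly_lumpable(P, [[0, 1], [2, 3]]))   # True
```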


1989
Vol 26 (3)
pp. 643-648
Author(s):  
A. I. Zeifman

We consider a non-homogeneous continuous-time Markov chain X(t) with countable state space. Definitions of uniform and strong quasi-ergodicity are introduced. The forward Kolmogorov system for X(t) is considered as a differential equation in the space ℓ1 of summable sequences. Sufficient conditions for uniform quasi-ergodicity are deduced from this equation. We also consider conditions for uniform and strong ergodicity in the case of proportional intensities.
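
The forward Kolmogorov system referred to here takes the standard form below, read as an ordinary differential equation in the sequence space ℓ1:

```latex
% p(t) is the row vector of state probabilities and Q(t) the intensity
% matrix of the non-homogeneous chain; the forward system is
\[
  \frac{d\,p(t)}{dt} = p(t)\,Q(t), \qquad p(t) \in \ell_1 .
\]
```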


2009
Vol 46 (3)
pp. 812-826
Author(s):  
Saul Jacka

Motivated by Feller's coin-tossing problem, we consider the problem of conditioning an irreducible Markov chain never to wait too long at 0. Denoting by τ the first time that the chain, X, waits for at least one unit of time at the origin, we consider conditioning the chain on the event (τ > T). We show that there is a weak limit as T → ∞ in the cases where either the state space is finite or X is transient. We give sufficient conditions for the existence of a weak limit in other cases and show that we have vague convergence to a defective limit if the time to hit zero has a lighter tail than τ and τ is subexponential.
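
One natural formalisation of the stopping time in question (an illustration; the paper's exact definition may differ in detail):

```latex
% tau = first time the chain has sat at the origin for a full unit of time:
\[
  \tau = \inf\{\, t \ge 1 : X_s = 0 \ \text{for all } s \in [t-1, t] \,\},
\]
% and the object studied is the weak limit of Law(X | tau > T) as T -> infinity.
```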


1989
Vol 26 (3)
pp. 446-457
Author(s):  
Gerardo Rubino

We analyse the conditions under which the aggregated process constructed from a homogeneous Markov chain over a given partition of the state space is itself a homogeneous Markov chain. Past work on the subject is reviewed and new properties are obtained.


2015
Vol 32 (3-4)
pp. 159-176
Author(s):  
Nicole Bäuerle
Igor Gilitschenski
Uwe Hanebeck

We consider a Hidden Markov Model (HMM) where the integrated continuous-time Markov chain can be observed at discrete time points, perturbed by a Brownian motion. The aim is to derive a filter for the underlying continuous-time Markov chain. The recursion formula for the discrete-time filter is easy to derive; however, it involves densities which are very hard to obtain. In this paper we derive exact formulas for the necessary densities in the case where the state space of the HMM consists of only two elements. This is done by relating the underlying integrated continuous-time Markov chain to the so-called asymmetric telegraph process and by using recent results on this process. In the case where the state space consists of more than two elements, we present three different ways to approximate the densities for the filter. The first approach is based on the continuous filter problem. The second approach is to derive a PDE for the densities and solve it numerically. The third approach is a crude discrete-time approximation of the Markov chain. All three approaches are compared in a numerical study.
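
Structurally, the discrete-time filter recursion mentioned here is the usual Bayes update: propagate the current posterior through the chain's transition probabilities over one observation interval, then reweight by the observation densities. A generic sketch in NumPy (the two-state numbers and the density values are placeholders; computing the true densities is exactly the hard part the paper addresses):

```python
import numpy as np

def filter_step(rho, Phi, dens):
    """One filter update for a hidden chain observed in noise.
    rho:  current posterior over hidden states (row vector)
    Phi:  transition probabilities of the hidden chain over one
          observation interval
    dens: density of the new observation given each hidden state
    """
    unnorm = (rho @ Phi) * dens      # predict, then correct
    return unnorm / unnorm.sum()

# Toy two-state example with made-up ingredients.
Phi = np.array([[0.9, 0.1],
                [0.2, 0.8]])
rho = np.array([0.5, 0.5])
dens = np.array([0.7, 0.1])          # f_0(y), f_1(y) at the observed y
print(filter_step(rho, Phi, dens))   # updated posterior, sums to 1
```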


2022
pp. 1-47
Author(s):  
Amarjit Budhiraja
Nicolas Fraiman
Adam Waterbury

We consider a collection of Markov chains that model the evolution of multitype biological populations. The state space of the chains is the positive orthant, and the boundary of the orthant is absorbing for the Markov chain and represents the extinction states of the different population types. We are interested in the long-term behavior of the Markov chain away from extinction, under a small noise scaling. Under this scaling, the trajectory of the Markov process over any compact interval converges in distribution to the solution of an ordinary differential equation (ODE) evolving in the positive orthant. We study the asymptotic behavior of the quasi-stationary distributions (QSD) in this scaling regime. Our main result shows that, under suitable conditions, the limit points of the QSD are supported on the union of interior attractors of the flow determined by the ODE. We also give lower bounds on expected extinction times which scale exponentially with the system size. Results of this type have been studied by Faure and Schreiber (2014) in the case where the deterministic dynamical system obtained in the scaling limit is given by a discrete-time evolution equation and the dynamics are essentially confined to a compact space (namely, the one-step map is a bounded function). Our results extend these to a setting of an unbounded state space and continuous-time dynamics. The proofs rely on uniform large deviation results for small noise stochastic dynamical systems and on methods from the theory of continuous-time dynamical systems. In general, QSD for Markov chains with absorbing states and unbounded state spaces may not exist. We study one basic family of binomial-Poisson models in the positive orthant where Lyapunov function methods can be used to establish the existence of QSD and to argue the tightness of the QSD of the scaled sequence of Markov chains. The results from the first part are then used to characterize the support of limit points of this sequence of QSD.
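
The standard definition of a quasi-stationary distribution in this setting (a textbook formulation; the paper adds the small-noise scaling on top of it): a probability measure ν on the non-absorbing states is a QSD if conditioning on survival preserves it,

```latex
\[
  \mathbb{P}_{\nu}\!\left( X_t \in A \,\middle|\, \tau_{\partial} > t \right)
  = \nu(A) \qquad \text{for all } t \ge 0 \text{ and measurable } A,
\]
% where tau_partial is the absorption (extinction) time at the boundary.
```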


2000
Vol 37 (2)
pp. 598-600
Author(s):  
S. J. Darlington
P. K. Pollett

In a recent paper [4] it was shown that, for an absorbing Markov chain where absorption is not guaranteed, the state probabilities at time t conditional on non-absorption by t generally depend on t. Conditions were derived under which there can be no initial distribution such that the conditional state probabilities are stationary. The purpose of this note is to show that these conditions can be relaxed completely: we prove, once and for all, that there are no circumstances under which a quasi-stationary distribution can admit a stationary conditional interpretation.
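
In symbols, the question settled here is whether some initial distribution can make the conditioned state probabilities time-invariant (notation illustrative, not taken from the note):

```latex
\[
  \mathbb{P}\bigl( X(t) = j \,\big|\, \text{no absorption by time } t \bigr)
  \equiv m_j \qquad \text{for all } t \ge 0,
\]
% shown here to be impossible when absorption is not guaranteed.
```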

