Asymptotics of quasi-stationary distributions of small noise stochastic dynamical systems in unbounded domains

2022, pp. 1-47. Author(s): Amarjit Budhiraja, Nicolas Fraiman, Adam Waterbury

Abstract: We consider a collection of Markov chains that model the evolution of multitype biological populations. The state space of the chains is the positive orthant, and the boundary of the orthant is absorbing for the Markov chain, representing the extinction states of the different population types. We are interested in the long-term behavior of the Markov chain away from extinction, under a small noise scaling. Under this scaling, the trajectory of the Markov process over any compact interval converges in distribution to the solution of an ordinary differential equation (ODE) evolving in the positive orthant. We study the asymptotic behavior of the quasi-stationary distributions (QSD) in this scaling regime. Our main result shows that, under suitable conditions, the limit points of the QSD are supported on the union of interior attractors of the flow determined by the ODE. We also give lower bounds on expected extinction times which scale exponentially with the system size. Faure and Schreiber (2014) studied results of this type in the case where the deterministic dynamical system obtained in the scaling limit is given by a discrete-time evolution equation and the dynamics are essentially confined to a compact space (namely, the one-step map is a bounded function). Our results extend theirs to a setting with an unbounded state space and continuous-time dynamics. The proofs rely on uniform large deviation results for small noise stochastic dynamical systems and on methods from the theory of continuous-time dynamical systems. In general, QSD for Markov chains with absorbing states and unbounded state spaces may not exist. We study one basic family of binomial-Poisson models in the positive orthant where Lyapunov function methods can be used to establish the existence of QSD and to argue the tightness of the QSD of the scaled sequence of Markov chains. The results from the first part are then used to characterize the support of the limit points of this sequence of QSD.

1974, Vol 11 (4), pp. 726-741. Author(s): Richard L. Tweedie

The quasi-stationary behaviour of a Markov chain which is φ-irreducible when restricted to a subspace of a general state space is investigated. It is shown that previous work on the case where the subspace is finite or countably infinite can be extended to general chains, and the existence of certain quasi-stationary limits as honest distributions is equivalent to the restricted chain being R-positive with the unique R-invariant measure satisfying a certain finiteness condition.


1989, Vol 26 (3), pp. 643-648. Author(s): A. I. Zeifman

We consider a non-homogeneous continuous-time Markov chain X(t) with countable state space. Definitions of uniform and strong quasi-ergodicity are introduced. The forward Kolmogorov system for X(t) is considered as a differential equation in the space of sequences l1. Sufficient conditions for uniform quasi-ergodicity are deduced from this equation. We consider conditions of uniform and strong ergodicity in the case of proportional intensities.
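A minimal numerical sketch of this viewpoint (the truncation level, intensities, and time-dependent factor below are invented for illustration): the forward Kolmogorov system p'(t) = p(t)Q(t) for a non-homogeneous birth-death chain, truncated to {0, ..., N}, is just an ODE in a finite slice of l1, and with proportional intensities Q(t) = a(t)Q for a scalar function a(t):

```python
import numpy as np

def forward_solve(p0, Q_of_t, t_end, dt=1e-3):
    """Euler scheme for the forward Kolmogorov system p'(t) = p(t) Q(t),
    viewed as an ODE in a finite (truncated) slice of l1."""
    p = np.asarray(p0, dtype=float)
    t = 0.0
    while t < t_end:
        p = p + dt * (p @ Q_of_t(t))
        t += dt
    return p

# Non-homogeneous birth-death generator on {0, ..., N} with proportional
# intensities: Q(t) = a(t) * Q for a common time-dependent factor a(t).
N = 20
lam = np.ones(N)            # birth rates k -> k+1
mu = np.arange(1.0, N + 1)  # death rates k -> k-1

def Q_of_t(t):
    a = 1.0 + 0.5 * np.sin(t)   # the common time-dependent factor
    Q = np.zeros((N + 1, N + 1))
    for k in range(N):
        Q[k, k + 1] = a * lam[k]
    for k in range(1, N + 1):
        Q[k, k - 1] = a * mu[k - 1]
    np.fill_diagonal(Q, -Q.sum(axis=1))  # rows of a generator sum to 0
    return Q

p = forward_solve(np.eye(N + 1)[5], Q_of_t, t_end=10.0)  # start in state 5
```

Because each Q(t) has zero row sums, the Euler iterates conserve total mass exactly, so p remains a probability vector throughout the integration.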


1983, Vol 20 (3), pp. 505-512. Author(s): Russell Gerrard

The classical condition for regularity of a Markov chain is extended to include semi-Markov chains. In addition, for any given semi-Markov chain, we find Markov chains which exhibit identical regularity properties. This is done either (i) by transforming the state space or, alternatively, (ii) by imposing conditions on the holding-time distributions. Brief consideration is given to the problem of extending the results to processes other than semi-Markov chains.


2009, Vol 9 (2), pp. 187-204. Author(s): Thomas R. Boucher, Daren B. H. Cline

The state-space representations of certain nonlinear autoregressive time series are general state Markov chains. The transitions of a general state Markov chain among regions in its state-space can be modeled with the transitions among states of a finite state Markov chain. Stability of the time series is then informed by the stationary distributions of the finite state Markov chain. This approach generalizes some previous results.


1995, Vol 9 (2), pp. 227-237. Author(s): Taizhong Hu, Harry Joe

Let (X1, X2) and (Y1, Y2) be bivariate random vectors with a common marginal distribution; (X1, X2) is said to be more positively dependent than (Y1, Y2) if E[h(X1)h(X2)] ≥ E[h(Y1)h(Y2)] for all functions h for which the expectations exist. The purpose of this paper is to study the monotonicity of positive dependence with time for a stationary reversible Markov chain {Xt}; that is, (Xs, Xt+s) becomes less positively dependent as t increases. Both discrete and continuous time are considered, and the state space may be either a denumerable set or a subset of the real line. Some examples are given to show that the assertions established for reversible Markov chains are not true for nonreversible chains.
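For a reversible chain the monotonicity can be checked spectrally. A small sketch (the weighted-graph chain and the function h are invented for illustration): a lazy random walk on a weighted triangle is reversible, and laziness makes every eigenvalue of its transition matrix nonnegative, so E[h(Xs)h(Xt+s)] is nonincreasing in t:

```python
import numpy as np

# Lazy random walk on a weighted triangle: reversible, and the laziness
# (the 0.5 * I term) makes all eigenvalues of P nonnegative.
w = np.array([[0.0, 2.0, 1.0],
              [2.0, 0.0, 3.0],
              [1.0, 3.0, 0.0]])
P = 0.5 * (np.eye(3) + w / w.sum(axis=1, keepdims=True))
pi = w.sum(axis=1) / w.sum()  # stationary law; pi_i * P_ij is symmetric

def dependence(h, t):
    """E[h(X_s) h(X_{t+s})] for the stationary chain, computed exactly."""
    return (pi * h) @ np.linalg.matrix_power(P, t) @ h

h = np.array([-1.0, 0.5, 2.0])
vals = [dependence(h, t) for t in range(6)]
# For this reversible chain, vals is nonincreasing in t.
```

The spectral decomposition makes the monotonicity transparent: E[h(X0)h(Xt)] is a nonnegative combination of t-th powers of the eigenvalues, each term nonincreasing when the eigenvalues lie in [0, 1]; with negative eigenvalues (a nonreversible-style counterexample in spirit) the quantity can oscillate.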


1967, Vol 4 (1), pp. 192-196. Author(s): J. N. Darroch, E. Seneta

In a recent paper, the authors have discussed the concept of quasi-stationary distributions for absorbing Markov chains having a finite state space, with the further restriction of discrete time. The purpose of the present note is to summarize the analogous results when the time parameter is continuous.
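In the finite-state discrete-time setting of the authors' earlier paper, the quasi-stationary distribution can be computed directly as the normalized left Perron eigenvector of the transition matrix restricted to the non-absorbing states. A minimal sketch (the particular substochastic matrix below is invented for illustration):

```python
import numpy as np

def qsd(Q):
    """QSD of a finite absorbing chain: the normalized left eigenvector
    of the substochastic matrix Q (transitions restricted to the
    non-absorbing states) for its Perron eigenvalue."""
    vals, vecs = np.linalg.eig(Q.T)  # left eigenvectors of Q
    k = np.argmax(vals.real)         # the Perron root has the largest real part
    v = np.abs(vecs[:, k].real)      # Perron vector has one sign; normalize
    return v / v.sum()

# A chain on {1, 2, 3} absorbed at 0; the row deficit (here only in the
# first row) is the one-step absorption probability from that state.
Q = np.array([[0.5, 0.3, 0.0],
              [0.2, 0.5, 0.3],
              [0.0, 0.4, 0.6]])
nu = qsd(Q)  # satisfies nu @ Q = rho * nu, with rho the Perron eigenvalue
```

Conditioned on non-absorption, the law of the chain started from nu stays equal to nu, and absorption occurs at rate 1 - rho per step.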


1982, Vol 19 (3), pp. 692-694. Author(s): Mark Scott, Barry C. Arnold, Dean L. Isaacson

Characterizations of strong ergodicity for Markov chains using mean visit times have been found by several authors (Huang and Isaacson (1977), Isaacson and Arnold (1978)). In this paper a characterization of uniform strong ergodicity for a continuous-time non-homogeneous Markov chain is given. This extends the characterization, using mean visit times, that was given by Isaacson and Arnold.

