The behavior of the ratio of a small-noise Markov chain to its deterministic approximation

1985 ◽  
Vol 17 (4) ◽  
pp. 731-747
Author(s):  
Norman Kaplan ◽  
Thomas Darden

For each $N \geq 1$, let $\{X_N(t, x), t \geq 0\}$ be a discrete-time stochastic process with $X_N(0) = x$. Let $F_N(y) = E(X_N(t + 1) \mid X_N(t) = y)$, and define $Y_N(t, x) = F_N(Y_N(t - 1, x))$ for $t \geq 1$, with $Y_N(0, x) = x$. Assume that in a neighborhood of the origin $F_N(y) = m_N y(1 + O(y))$, where $m_N > 1$, and define, for $\delta > 0$ and $x > 0$, $\nu_N(\delta, x) = \inf\{t : x m_N^t > \delta\}$. Conditions are given under which, for $\theta > 0$ and $\varepsilon > 0$, there exist constants $\delta > 0$ and $L < \infty$, depending on $\varepsilon$ and $\theta$, such that the ratio $X_N(t, x)/Y_N(t, x)$ stays close to 1 up to time $\nu_N(\delta, x)$ [the displayed inequality is omitted in this abstract]. This result, together with a result of Kurtz (1970), (1971), shows that, under appropriate conditions, the time needed for the stochastic process $\{X_N(t, 1/N), t \geq 0\}$ to escape a $\delta$-neighborhood of the origin is of order $\log(N\delta)/\log m_N$. To illustrate the results, the Wright-Fisher model with selection is considered.
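
As a concrete illustration, here is a small simulation sketch (my own construction, not the authors' code) of the Wright-Fisher model with selection, for which the conditional-mean map is $F_N(y) = (1+s)y/(1+sy) = m_N y(1 + O(y))$ with $m_N = 1 + s$; the parameters N, s, and δ below are arbitrary choices. Conditioned on the favored allele not being lost, the observed escape times should cluster around the predicted order $\log(N\delta)/\log m_N$.

```python
import numpy as np

# Hypothetical illustration (not from the paper): a Wright-Fisher model with
# selection, X_N(t) = fraction of the favored allele among N individuals.
# Near 0 the conditional-mean map satisfies F_N(y) ~ m_N * y with m_N = 1 + s.

rng = np.random.default_rng(0)

def escape_time(N, s, delta, t_max=10**6):
    """Steps until the allele fraction first exceeds delta (None if lost)."""
    x = 1.0 / N
    for t in range(1, t_max + 1):
        p = (1 + s) * x / (1 + s * x)      # selection-biased sampling probability
        x = rng.binomial(N, p) / N         # next-generation allele fraction
        if x == 0.0:
            return None                    # allele lost before escaping
        if x > delta:
            return t
    return None

N, s, delta = 10**5, 0.05, 0.1
m_N = 1 + s
prediction = np.log(N * delta) / np.log(m_N)

times = [escape_time(N, s, delta) for _ in range(200)]
times = [t for t in times if t is not None]    # condition on escape
print(f"predicted order: {prediction:.1f}")
print(f"mean observed escape time: {np.mean(times):.1f}")
```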


2020 ◽  
Vol 24 ◽  
pp. 718-738
Author(s):  
Thi Phuong Thuy Vo

The discovery of the "hidden population", whose size and membership are unknown, is made possible by assuming that its members are connected in a social network by their relationships. We explore these groups by a chain-referral sampling (CRS) method, where participants recommend the people they know. This leads to the study of a Markov chain on a random graph where vertices represent individuals and edges describe the relationships between the corresponding people. We are interested in the CRS process on the stochastic block model (SBM), which extends the well-known Erdős-Rényi graph to populations partitioned into communities. The SBM considered here is characterized by the number of vertices $N$, the number of communities (blocks) $m$, the community proportions $\pi = (\pi_1, \ldots, \pi_m)$, and a pattern of connection between blocks $P = (\lambda_{kl}/N)_{(k,l) \in \{1, \ldots, m\}^2}$. In this paper, we give a precise description of the dynamics of the CRS process in discrete time on an SBM. The difficulty lies in handling the heterogeneity of the graph. We prove that when the population size is large, the normalized stochastic process of the referral chain behaves like a deterministic curve which is the unique solution of a system of ODEs.
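
To make the setting concrete, here is a minimal simulation sketch (assumptions mine: the referral rule, the block parameters, and the number of interviews per step are illustrative, not the paper's):

```python
import numpy as np

# A minimal sketch: generate a stochastic block model with m blocks, then run
# a chain-referral exploration in which each interviewed vertex refers its
# not-yet-seen neighbors.

rng = np.random.default_rng(1)

def sample_sbm(N, pi, lam):
    """Adjacency matrix of an SBM; edge prob between blocks k,l is lam[k,l]/N."""
    blocks = rng.choice(len(pi), size=N, p=pi)
    probs = lam[np.ix_(blocks, blocks)] / N
    upper = np.triu(rng.random((N, N)) < probs, k=1)
    return (upper | upper.T), blocks

def chain_referral(adj, seed, max_interviews_per_step=2):
    """Discrete-time referral: interview a few queued people per time step."""
    N = adj.shape[0]
    seen = np.zeros(N, dtype=bool)
    seen[seed] = True
    queue = [seed]
    discovered = [1]
    while queue:
        interviews, queue = queue[:max_interviews_per_step], queue[max_interviews_per_step:]
        for v in interviews:
            for u in np.flatnonzero(adj[v]):
                if not seen[u]:
                    seen[u] = True
                    queue.append(u)
        discovered.append(int(seen.sum()))
    return discovered

pi = np.array([0.6, 0.4])
lam = np.array([[8.0, 2.0], [2.0, 6.0]])
adj, blocks = sample_sbm(2000, pi, lam)
print(chain_referral(adj, seed=0)[:10])   # early growth of the referral chain
```

For large N, the normalized version of a discovery curve like the one printed above is what the paper shows converges to the solution of a system of ODEs.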


1987 ◽  
Vol 24 (2) ◽  
pp. 347-354 ◽  
Author(s):  
Guy Fayolle ◽  
Rudolph Iasnogorodski

In this paper, we present some simple new criteria for the non-ergodicity of a stochastic process $(Y_n)$, $n \geq 0$, in discrete time, when either the upward or downward jumps are majorized by i.i.d. random variables. This situation is encountered in many practical problems, where the $(Y_n)$ are functionals of some Markov chain with countable state space. An application to the exponential back-off protocol is described.
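
For intuition, a toy sketch of the setting (my construction, not the paper's criteria): a reflected walk whose downward jumps are bounded by 1, so they are trivially majorized by i.i.d. variables; when the mean drift is positive the chain is non-ergodic and escapes to infinity rather than settling into a stationary distribution.

```python
import numpy as np

# Hypothetical illustration: Y_{n+1} = Y_n + 1 w.p. drift_up, else Y_n - 1,
# reflected at 0.  Downward jumps are bounded by 1; positive mean drift
# (drift_up > 1/2) makes the chain transient, hence non-ergodic.

rng = np.random.default_rng(2)

def simulate(drift_up=0.6, steps=20_000):
    y, path = 0, []
    for _ in range(steps):
        y = max(0, y + (1 if rng.random() < drift_up else -1))
        path.append(y)
    return path

path = simulate()
print("final value:", path[-1], "(grows linearly when the mean drift is positive)")
```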


1960 ◽  
Vol 12 ◽  
pp. 278-288 ◽  
Author(s):  
John Lamperti

Throughout this paper, the symbol $P = [P_{ij}]$ will represent the transition probability matrix of an irreducible, null-recurrent Markov process in discrete time. Explanation of this terminology and basic facts about such chains may be found in (6, ch. 15). It is known (3) that for each such matrix $P$ there is a unique (except for a positive scalar multiple) positive vector $Q = \{q_i\}$ such that $QP = Q$, or

(1) $\sum_i q_i P_{ij} = q_j$;

this vector is often called the "invariant measure" of the Markov chain. The first problem to be considered in this paper is that of determining for which vectors $U^{(0)} = \{\mu_i^{(0)}\}$ the vectors $U^{(n)}$ converge, or are summable, to the invariant measure $Q$, where $U^{(n)} = U^{(0)} P^n$ has components

(2) $\mu_j^{(n)} = \sum_i \mu_i^{(0)} P_{ij}^{(n)}$.

In § 2, this problem is attacked for general $P$. The main result is a negative one, and shows how to form $U^{(0)}$ for which $U^{(n)}$ will not be (termwise) Abel summable.
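
A quick numerical check of the invariant-measure equation (1) on a concrete null-recurrent chain (my example, not Lamperti's): the simple symmetric random walk on $\{0, 1, 2, \ldots\}$ reflected at 0 is irreducible and null-recurrent, and its invariant measure, unique up to a positive multiple, is $q_0 = 1$, $q_i = 2$ for $i \geq 1$.

```python
import numpy as np

# Verify Q P = Q on a finite truncation of the reflected symmetric walk,
# away from the truncation boundary.

n = 200
P = np.zeros((n, n))
P[0, 1] = 1.0                       # reflection at the origin
for i in range(1, n - 1):
    P[i, i - 1] = P[i, i + 1] = 0.5
P[n - 1, n - 2] = 0.5               # truncation artifact at the right edge

Q = np.full(n, 2.0)
Q[0] = 1.0
residual = Q @ P - Q
print(np.max(np.abs(residual[: n - 2])))   # ~0 except near the truncation
```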


1971 ◽  
Vol 8 (2) ◽  
pp. 381-390 ◽  
Author(s):  
P. J. Pedler

Consider first a Markov chain with two ergodic states $E_1$ and $E_2$, and discrete time parameter set $\{0, 1, 2, \ldots, n\}$. Define the random variables $Z_0, Z_1, Z_2, \ldots, Z_n$ to record the state of the chain at times $0, 1, \ldots, n$; then the conditional probabilities $p_{ij} = P(Z_k = E_j \mid Z_{k-1} = E_i)$, for $k = 1, 2, \ldots, n$, are independent of $k$. Thus the matrix of transition probabilities is $(p_{ij})_{i,j \in \{1, 2\}}$.
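
A minimal sketch of such a two-state chain (the parameterization is mine, not necessarily Pedler's): simulate $Z_0, \ldots, Z_n$ and recover the transition matrix from observed transition counts.

```python
import numpy as np

# Two-state chain with P(E1 -> E2) = a and P(E2 -> E1) = b; the empirical
# transition frequencies should approach the true matrix P for large n.

rng = np.random.default_rng(3)

a, b, n = 0.3, 0.2, 100_000
P = np.array([[1 - a, a],
              [b, 1 - b]])

z = np.empty(n + 1, dtype=int)
z[0] = 0
for k in range(1, n + 1):
    z[k] = rng.choice(2, p=P[z[k - 1]])

counts = np.zeros((2, 2))
for i, j in zip(z[:-1], z[1:]):
    counts[i, j] += 1
print(counts / counts.sum(axis=1, keepdims=True))   # ~ P for large n
```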


Author(s):  
J. Keilson ◽  
D. M. G. Wishart

We shall be concerned in this paper with a class of temporally homogeneous Markov processes, $\{R(t), X(t)\}$, in discrete or continuous time, taking values in the space

(0.1) $\{(r, x) : r = 1, 2, \ldots, R;\ -\infty < x < \infty\}$.

The marginal process $\{X(t)\}$ in discrete time is, in the terminology of Miller (10), a sequence of random variables defined on a finite Markov chain. Probability measures associated with these processes are vectors of the form

(0.2) $F(x) = \{F_r(x)\}_{r=1}^{R}$, where $F_r(x) = P\{R = r,\ X \leq x\}$.

We shall call a vector of the form of (0.2) a vector distribution.
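
A hypothetical sketch of the vector-distribution idea (my construction; the increment law and parameters are illustrative, not the paper's): a process $(R(t), X(t))$ where $R(t)$ is a finite Markov chain and $X(t)$ accumulates increments whose law depends on the current state, with $F_r(x) = P(R(t) = r,\ X(t) \leq x)$ estimated empirically.

```python
import numpy as np

# Empirical vector distribution for a Markov-modulated additive process.

rng = np.random.default_rng(4)

P = np.array([[0.9, 0.1],
              [0.2, 0.8]])          # modulating chain on states {0, 1}
means = np.array([1.0, -0.5])       # state-dependent increment means

def sample_path(t_end):
    r, x = 0, 0.0
    for _ in range(t_end):
        x += rng.normal(means[r], 1.0)   # increment law depends on R(t)
        r = rng.choice(2, p=P[r])
    return r, x

samples = [sample_path(50) for _ in range(5000)]
x_grid = 20.0
F = [np.mean([(r == j) and (x <= x_grid) for r, x in samples]) for j in range(2)]
print(F)   # empirical (F_1(x), F_2(x)) at x = 20, t = 50
```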


Author(s):  
M. Saburov

A linear Markov chain is a discrete-time stochastic process whose transitions depend only on the current state of the process. A nonlinear Markov chain is a discrete-time stochastic process whose transitions may depend on both the current state and the current distribution of the process. These processes arise naturally in the study of the limit behavior of a large number of weakly interacting Markov processes. Nonlinear Markov processes were introduced by McKean and have been extensively studied in the context of nonlinear Chapman-Kolmogorov equations as well as nonlinear Fokker-Planck equations. A nonlinear Markov chain over a finite state space can be identified with a continuous mapping (a nonlinear Markov operator) defined on the set of all probability distributions over the finite state space (which is a simplex), together with a family of transition matrices depending on the occupation probability distributions of the states. In particular, a linear Markov operator is a linear operator associated with a square stochastic matrix. It is well known that a linear Markov operator is a surjection of the simplex if and only if it is a bijection. The analogous problem was open for nonlinear Markov operators associated with stochastic hyper-matrices; we solve it in this paper. Namely, we show that a nonlinear Markov operator associated with a stochastic hyper-matrix is a surjection of the simplex if and only if it is a permutation of the Lotka-Volterra operator.
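
To make the objects concrete, a short sketch (standard definitions of quadratic stochastic operators; the skew-symmetric matrix a is an arbitrary example of mine): it builds a stochastic hyper-matrix realizing a Lotka-Volterra operator and checks that the two forms agree on a point of the simplex.

```python
import itertools
import numpy as np

# A quadratic stochastic operator V(x)_k = sum_{i,j} p[i,j,k] x_i x_j from a
# stochastic hyper-matrix (p >= 0, each p[i,j,:] sums to 1), and the
# Lotka-Volterra special case V(x)_k = x_k * (1 + sum_i a[k,i] x_i) with a
# skew-symmetric matrix a, |a[k,i]| <= 1.

def apply_qso(p, x):
    """One step of the nonlinear Markov operator on the simplex."""
    return np.einsum("ijk,i,j->k", p, x, x)

def lotka_volterra(a, x):
    return x * (1.0 + a @ x)

# Lotka-Volterra as a hyper-matrix: p[i,j,k] is nonzero only for k in {i, j}.
a = np.array([[0.0, 0.5, -0.3],
              [-0.5, 0.0, 0.4],
              [0.3, -0.4, 0.0]])
n = 3
p = np.zeros((n, n, n))
for i, j in itertools.product(range(n), repeat=2):
    p[i, j, i] += 0.5 * (1.0 + a[i, j])
    p[i, j, j] += 0.5 * (1.0 + a[j, i])

x = np.array([0.2, 0.5, 0.3])
print(apply_qso(p, x))        # both give the same point of the simplex
print(lotka_volterra(a, x))
```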

