Criteria for the non-ergodicity of stochastic processes: application to the exponential back-off protocol

1987 ◽  
Vol 24 (2) ◽  
pp. 347-354 ◽  
Author(s):  
Guy Fayolle ◽  
Rudolph Iasnogorodski

In this paper, we present some simple new criteria for the non-ergodicity of a stochastic process (Yn), n ≧ 0, in discrete time, when either the upward or downward jumps are majorized by i.i.d. random variables. This situation arises in many practical settings, where the (Yn) are functionals of some Markov chain with countable state space. An application to the exponential back-off protocol is described.
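The flavour of such a criterion can be seen on a toy chain (a minimal sketch, not the authors' model of the back-off protocol): when the downward jumps are bounded by 1, hence majorized by i.i.d. variables, a positive mean drift forces the process away from every finite set, so it cannot be ergodic.

```python
import random

def simulate_chain(steps=10_000, seed=0):
    """Toy chain on the non-negative integers: downward jumps are
    bounded by 1 (so majorized by i.i.d. variables) and the mean drift
    is positive.  Such a chain escapes to infinity -> non-ergodic."""
    rng = random.Random(seed)
    y = 0
    for _ in range(steps):
        # up 2 with prob. 1/2, down 1 with prob. 1/2 -> mean drift +1/2
        jump = 2 if rng.random() < 0.5 else -1
        y = max(0, y + jump)
    return y

final = simulate_chain()  # typically of the order of steps/2
```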


1969 ◽  
Vol 1 (2) ◽  
pp. 123-187 ◽  
Author(s):  
Erhan Çinlar

Consider a stochastic process X(t) (t ≧ 0) taking values in a countable state space, say, {1, 2, 3, …}. To be picturesque we think of X(t) as the state which a particle is in at epoch t. Suppose the particle moves from state to state in such a way that the successive states visited form a Markov chain, and that the particle stays in a given state a random amount of time depending on the state it is in as well as on the state to be visited next. Below is a possible realization of such a process.
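The particle description above is easy to simulate directly (a sketch; the transition matrix and the exponential holding-time law below are illustrative assumptions, not taken from the paper):

```python
import random

def simulate_particle(P, holding, t_max, start=0, seed=0):
    """Simulate the particle of the abstract: the successive states
    visited form a Markov chain with transition matrix P, and the
    sojourn in state i, given that state j is visited next, is drawn
    from holding(i, j).  Returns a list of (jump epoch, new state)."""
    rng = random.Random(seed)
    t, state = 0.0, start
    path = [(0.0, start)]
    while t < t_max:
        nxt = rng.choices(range(len(P)), weights=P[state])[0]
        t += holding(state, nxt, rng)   # sojourn depends on i AND j
        path.append((t, nxt))
        state = nxt
    return path

# Hypothetical two-state example: exponential sojourns whose rate
# depends on both the current state and the next one.
P = [[0.0, 1.0], [0.5, 0.5]]
holding = lambda i, j, rng: rng.expovariate(1.0 + i + j)
path = simulate_particle(P, holding, t_max=10.0)
```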


1973 ◽  
Vol 73 (1) ◽  
pp. 119-138 ◽  
Author(s):  
Gerald S. Goodman ◽  
S. Johansen

1. Summary. We shall consider a non-stationary Markov chain on a countable state space E. The transition probabilities {P(s, t), 0 ≤ s ≤ t < t0 ≤ ∞} are assumed to be continuous in (s, t), uniformly in the state i ∈ E.


1991 ◽  
Vol 5 (4) ◽  
pp. 463-475 ◽  
Author(s):  
Linn I. Sennott

A Markov decision chain with countable state space incurs two types of costs: an operating cost and a holding cost. The objective is to minimize the expected discounted operating cost, subject to a constraint on the expected discounted holding cost. The existence of an optimal randomized simple policy is proved. This is a policy that randomizes between two stationary policies, that differ in at most one state. Several examples from the control of discrete time queueing systems are discussed.
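In symbols (notation assumed here, not taken from the paper), with discount factor α ∈ (0, 1), operating cost c, holding cost d, and constraint level K, the problem is:

```latex
\min_{\pi} \; \mathbb{E}^{\pi}\Big[\sum_{n=0}^{\infty} \alpha^{n} c(X_n, A_n)\Big]
\quad \text{subject to} \quad
\mathbb{E}^{\pi}\Big[\sum_{n=0}^{\infty} \alpha^{n} d(X_n, A_n)\Big] \le K .
```

The optimal randomized simple policy then mixes two stationary policies that prescribe different actions in at most one state.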


1989 ◽  
Vol 26 (3) ◽  
pp. 643-648 ◽  
Author(s):  
A. I. Zeifman

We consider a non-homogeneous continuous-time Markov chain X(t) with countable state space. Definitions of uniform and strong quasi-ergodicity are introduced. The forward Kolmogorov system for X(t) is considered as a differential equation in the space of sequences l1. Sufficient conditions for uniform quasi-ergodicity are deduced from this equation. We consider conditions of uniform and strong ergodicity in the case of proportional intensities.
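Written out (in assumed notation: p(t) the row vector of state probabilities and Q(t) the intensity matrix of X(t)), the forward Kolmogorov system treated as a differential equation in l1 reads:

```latex
\frac{dp(t)}{dt} = p(t)\,Q(t), \qquad p(t) \in \ell_{1}, \quad \sum_{i} p_{i}(t) = 1 .
```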


1998 ◽  
Vol 12 (3) ◽  
pp. 387-391
Author(s):  
Jean B. Lasserre

Given a Markov chain on a countable state space, we present a Lyapunov (sufficient) condition for existence of an invariant probability with a geometric tail.
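One familiar shape for such a Lyapunov condition (a sketch in assumed notation, not necessarily the paper's exact statement): if there exist a geometrically growing function V, constants 0 < λ < 1 and b < ∞, and a finite set C with

```latex
\sum_{j} P(i, j)\, V(j) \;\le\; \lambda\, V(i) + b\, \mathbf{1}_{C}(i),
\qquad V(i) \ge \beta^{i} \ \text{for some } \beta > 1,
```

then the chain admits an invariant probability whose tail decays geometrically.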


1978 ◽  
Vol 10 (4) ◽  
pp. 764-787
Author(s):  
J. N. McDonald ◽  
N. A. Weiss

At times n = 0, 1, 2, · · · a Poisson number of particles enter each state of a countable state space. The particles then move independently according to the transition law of a Markov chain, until their death which occurs at a random time. Several limit theorems are then proved for various functionals of this infinite particle system. In particular, laws of large numbers and central limit theorems are proved.
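One epoch of this dynamic can be sketched directly (assumptions here: a common Poisson arrival rate per state, and i.i.d. geometric lifetimes modeled by a per-step death probability; the paper's lifetime law may differ):

```python
import math
import random

def poisson(lam, rng):
    """Poisson sample via Knuth's product method."""
    L = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def step(particles, P, death_p, lam, rng):
    """One epoch of the particle system in the abstract: a Poisson(lam)
    number of particles enters each state, then every particle either
    dies (assumed per-step death probability) or moves according to the
    transition matrix P, independently of the others."""
    n = len(P)
    entered = list(particles)
    for i in range(n):
        entered.extend([i] * poisson(lam, rng))
    moved = []
    for s in entered:
        if rng.random() >= death_p:          # survives this step ...
            moved.append(rng.choices(range(n), weights=P[s])[0])
    return moved
```

Iterating `step` and counting particles per state gives the functionals (occupation numbers) for which the laws of large numbers and central limit theorems are stated.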


1973 ◽  
Vol 73 (2) ◽  
pp. 355-359 ◽  
Author(s):  
E. Arjas ◽  
T. P. Speed

Consider a real-valued random walk which is defined on a Markov chain {Xn: n ≥ 0} with countable state space I. We assume that a matrix Q(.) = (qij(.)) is given such that

