Invariant measures of interacting particle systems: Algebraic aspects

2020 ◽  
Vol 24 ◽  
pp. 526-580
Author(s):  
Luis Fredes ◽  
Jean-François Marckert

Consider a continuous-time particle system ηt = (ηt(k), k ∈ 𝕃), indexed by a lattice 𝕃 which will be either ℤ, ℤ∕nℤ, a segment {1, ⋯ , n}, or ℤd, and taking its values in the set Eκ𝕃, where Eκ = {0, ⋯ , κ − 1} for some fixed κ ∈ {∞, 2, 3, ⋯ }. Assume that the Markovian evolution of the particle system (PS) is driven by some translation-invariant local dynamics with bounded range, encoded by a jump rate matrix T. These are standard settings, satisfied by the TASEP, the voter models, and the contact processes. The aim of this paper is to provide sufficient and/or necessary conditions on the matrix T for this Markov process to admit a simple invariant distribution: a product measure (if 𝕃 is any of the spaces mentioned above), the law of a Markov process indexed by ℤ or [1, n] ∩ ℤ (if 𝕃 = ℤ or {1, …, n}), or a Gibbs measure if 𝕃 = ℤ/nℤ. Multiple applications follow: efficient ways to find invariant Markov laws for a given jump rate matrix, or to prove that none exists. The voter models and the contact processes are shown not to possess any Markov law as invariant distribution (for any memory m). (As usual, a random process X indexed by ℤ or ℕ is said to be a Markov chain with memory m ∈ {0, 1, 2, ⋯ } if ℙ(Xk ∈ A | Xk−i, i ≥ 1) = ℙ(Xk ∈ A | Xk−i, 1 ≤ i ≤ m) for any k.) We also prove that some models close to these do possess such invariant laws. We exhibit PS admitting hidden Markov chains as invariant distributions, and design many PS on ℤ2, with jump rates indexed by 2 × 2 squares, admitting product invariant measures.
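One of the simplest invariant laws in this family can be checked directly on a computer. The sketch below is our own illustration, not the paper's algebraic criteria: it builds the full generator Q of the TASEP on the ring ℤ/4ℤ (a particle jumps one site clockwise at rate 1 when the target site is empty) and verifies numerically that the uniform measure on a fixed-particle-number sector satisfies πQ = 0, i.e. is invariant.

```python
import itertools
import numpy as np

def tasep_generator(n):
    """Generator matrix Q of the TASEP on the ring Z/nZ: a particle at
    site k jumps to site k+1 (mod n) at rate 1 when that site is empty."""
    states = list(itertools.product([0, 1], repeat=n))
    index = {s: i for i, s in enumerate(states)}
    Q = np.zeros((len(states), len(states)))
    for s in states:
        for k in range(n):
            if s[k] == 1 and s[(k + 1) % n] == 0:
                t = list(s)
                t[k], t[(k + 1) % n] = 0, 1
                Q[index[s], index[tuple(t)]] += 1.0  # off-diagonal rate
                Q[index[s], index[s]] -= 1.0         # diagonal compensation
    return states, Q

states, Q = tasep_generator(4)
# uniform measure on the two-particle sector of the ring
pi = np.array([1.0 if sum(s) == 2 else 0.0 for s in states])
pi /= pi.sum()
residual = np.abs(pi @ Q).max()  # vanishes (up to rounding) iff pi is invariant
```

The invariance here is a known fact about the TASEP on a ring (within each sector the number of "10" patterns equals the number of "01" patterns, so inflow and outflow balance); the code merely confirms it on a small instance.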


1993 ◽  
Vol 6 (4) ◽  
pp. 385-406 ◽  
Author(s):  
N. U. Ahmed ◽  
Xinhong Ding

We consider a nonlinear (in the sense of McKean) Markov process described by a stochastic differential equation in Rd. We prove the existence and uniqueness of invariant measures of such a process.



1998 ◽  
Vol 35 (03) ◽  
pp. 517-536 ◽  
Author(s):  
R. L. Tweedie

Let P be the transition matrix of a positive recurrent Markov chain on the integers, with invariant distribution π. If (n)P denotes the n × n ‘northwest truncation’ of P, it is known that approximations to π(j)/π(0) can be constructed from (n)P, but these are known to converge to the probability distribution itself in special cases only. We show that such convergence always occurs for three further general classes of chains: geometrically ergodic chains, stochastically monotone chains, and those dominated by stochastically monotone chains. We show that all ‘finite’ perturbations of stochastically monotone chains can be considered to be dominated by such chains, and thus the results hold for a much wider class than is at first apparent. In the cases of uniformly ergodic chains, and chains dominated by irreducible stochastically monotone chains, we find practical bounds on the accuracy of the approximations.
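The truncation scheme is easy to try on a concrete chain. The sketch below is an assumed example (a reflected random walk on the non-negative integers, which is stochastically monotone, so it falls in one of the classes above): it forms the n × n northwest truncation, returns the mass lost at the boundary to the diagonal, and computes the stationary vector of the resulting finite chain. For p = 0.3 the true invariant law is geometric with ratio p/(1 − p), and the truncation approximation reproduces that ratio.

```python
import numpy as np

def truncation_approximation(p, n):
    """Northwest truncation of a reflected random walk on {0, 1, 2, ...}
    with up-probability p < 1/2, augmented so rows sum to 1; returns the
    stationary vector of the truncated chain."""
    P = np.zeros((n, n))
    for i in range(n):
        if i == 0:
            P[0, 0] = 1.0 - p          # reflection at the origin
            P[0, 1] = p
        else:
            P[i, i - 1] = 1.0 - p
            if i + 1 < n:
                P[i, i + 1] = p
            else:
                P[i, i] = p            # mass lost by truncation -> diagonal
    # stationary vector = left eigenvector of P for eigenvalue 1
    w, v = np.linalg.eig(P.T)
    pi = np.real(v[:, np.argmax(np.real(w))])
    return pi / pi.sum()

pi = truncation_approximation(0.3, 50)  # ratio pi[i+1]/pi[i] should be 3/7
```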



1989 ◽  
Vol 21 (01) ◽  
pp. 159-180 ◽  
Author(s):  
Bhaskar Sengupta

This paper is concerned with a bivariate Markov process {Xt, Nt; t ≥ 0} with a special structure. The process Xt may either increase linearly or have downward jump discontinuities. It takes values in [0, ∞), while Nt takes finitely many values. Under these and additional assumptions, we show that the steady-state joint probability distribution of {Xt, Nt; t ≥ 0} has a matrix-exponential form. A rate matrix T (which is crucial in determining the joint distribution) is the solution of a nonlinear matrix integral equation. The work in this paper is a continuous analogue of matrix-geometric methods, which have gained widespread use of late. Using this theory, we present a new and considerably simplified characterization of the waiting time and queue length distributions in a GI/PH/1 queue. Finally, we show that the Markov process can be used to study an inventory system subject to seasonal fluctuations in supply and demand.



1979 ◽  
Vol 11 (02) ◽  
pp. 355-383 ◽  
Author(s):  
Richard Durrett

The models under consideration are a class of infinite particle systems which can be written as a superposition of branching random walks. This paper gives some results about the limiting behavior of the number of particles in a compact set as t → ∞ and also gives both sufficient and necessary conditions for the existence of a non-trivial translation-invariant stationary distribution.



1989 ◽  
Vol 26 (03) ◽  
pp. 524-531 ◽  
Author(s):  
Barry C. Arnold ◽  
C. A. Robertson

A stochastic model is presented which yields a stationary Markov process whose invariant distribution is logistic. The model is autoregressive in character and is closely related to the autoregressive Pareto processes introduced earlier by Yeh et al. (1988). The model may be constructed to have absolutely continuous joint distributions. Analogous higher-order autoregressive and moving average processes may be constructed.



2013 ◽  
Vol 45 (04) ◽  
pp. 1083-1110 ◽  
Author(s):  
Sergey Foss ◽  
Stan Zachary

Many regenerative arguments in stochastic processes use random times which are akin to stopping times, but which are determined by the future as well as the past behaviour of the process of interest. Such arguments based on ‘conditioning on the future’ are usually developed in an ad hoc way in the context of the application under consideration, thereby obscuring the underlying structure. In this paper we give a simple, unified, and more general treatment of such conditioning theory. We further give a number of novel applications to various particle system models, in particular to various flavours of contact processes and to infinite-bin models. We give a number of new results for existing and new models. We further make connections with the theory of Harris ergodicity.



1998 ◽  
Vol 35 (3) ◽  
pp. 633-641 ◽  
Author(s):  
Yoshiaki Itoh ◽  
Colin Mallows ◽  
Larry Shepp

We introduce a new class of interacting particle systems on a graph G. Suppose initially there are Ni(0) particles at each vertex i of G, and that the particles interact to form a Markov chain: at each instant two particles are chosen at random, and if these are at adjacent vertices of G, one particle jumps to the other particle's vertex, each with probability 1/2. The process N enters a death state after a finite time when all the particles are in some independent subset of the vertices of G, i.e. a set of vertices with no edges between any two of them. The problem is to find the distribution of the death state, ηi = Ni(∞), as a function of Ni(0). We are able to obtain, for some special graphs, the limiting distribution of Ni if the total number of particles S → ∞ in such a way that the fraction Ni(0)/S = ξi at each vertex is held fixed. In particular we can obtain the limit law for the graph S2, the two-leaf star, which has three vertices and two edges.
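The death-state dynamics can be simulated directly. The sketch below is an illustrative implementation with helper names of our own choosing: it repeatedly draws two distinct particles uniformly at random, lets one jump when they sit at adjacent vertices, and stops once the occupied vertices form an independent set. The example run uses the two-leaf star S2 (vertices 0, 2 are leaves, vertex 1 is the centre), where the death state must have either the centre empty or both leaves empty.

```python
import random

def vertex_of(counts, idx):
    """Map a particle index in [0, sum(counts)) to its vertex."""
    for v, c in enumerate(counts):
        if idx < c:
            return v
        idx -= c

def run_to_death_state(counts, edges, rng=random):
    """Run the jump process until the occupied vertices are independent."""
    counts = list(counts)
    adjacent = set(edges) | {(b, a) for a, b in edges}
    while True:
        occupied = [v for v, c in enumerate(counts) if c > 0]
        if not any((u, v) in adjacent
                   for u in occupied for v in occupied if u < v):
            return counts  # death state reached
        n = sum(counts)
        i, j = rng.sample(range(n), 2)      # two distinct particles
        u, v = vertex_of(counts, i), vertex_of(counts, j)
        if (u, v) in adjacent:              # jump only between neighbours
            if rng.random() < 0.5:
                counts[u] -= 1; counts[v] += 1
            else:
                counts[v] -= 1; counts[u] += 1

final = run_to_death_state((2, 2, 2), [(0, 1), (1, 2)])  # S2 example
```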



1999 ◽  
Vol 31 (3) ◽  
pp. 819-838 ◽  
Author(s):  
D. Crişan ◽  
P. Del Moral ◽  
T. J. Lyons

In this paper we consider the continuous-time filtering problem and estimate the order of convergence of an interacting particle system scheme presented by the authors in previous works. We discuss how the discrete-time approximating model of the Kushner-Stratonovitch equation and the genetic-type interacting particle system approximation combine. We present quenched error bounds as well as mean order convergence results.
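As background, the genetic-type "mutation/selection" mechanism referred to above can be sketched as a plain discrete-time bootstrap particle filter. This is a generic illustration, not the authors' scheme; the linear-Gaussian signal/observation model and every parameter value below are assumptions chosen for simplicity.

```python
import numpy as np

def particle_filter(ys, n_particles, a=0.9, sx=1.0, sy=0.5, rng=None):
    """Bootstrap particle filter for the assumed toy model
    X_k = a*X_{k-1} + N(0, sx^2),  Y_k = X_k + N(0, sy^2).
    Returns the filtered posterior mean at each step."""
    rng = rng or np.random.default_rng(0)
    x = rng.normal(0.0, 1.0, n_particles)   # initial particle cloud
    means = []
    for y in ys:
        # mutation: propagate particles through the signal dynamics
        x = a * x + rng.normal(0.0, sx, n_particles)
        # selection: weight by observation likelihood, then resample
        w = np.exp(-0.5 * ((y - x) / sy) ** 2) + 1e-300  # underflow guard
        w /= w.sum()
        x = rng.choice(x, size=n_particles, p=w)
        means.append(float(np.mean(x)))
    return means
```

The convergence rates studied in the paper concern how fast such particle approximations approach the true filter as n_particles grows; the sketch only shows the mechanism.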



2000 ◽  
Vol 37 (01) ◽  
pp. 118-125
Author(s):  
Raúl Gouet ◽  
F. Javier López ◽  
Gerardo Sanz

The estimation of critical values is one of the most interesting problems in the study of interacting particle systems. The bounds obtained analytically are usually not very tight, and computer simulation has therefore proved very useful in estimating these values. In this paper we present a new method for the estimation of critical values in any interacting particle system with an absorbing state. The method, based on the asymptotic behaviour of the absorption time of the process, is very easy to implement and provides good estimates. It can also be applied to processes other than particle systems.
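The signal such a method exploits is easy to see in simulation: below the critical value absorption is fast, while above it the absorption time blows up with system size. The sketch below is a discrete-time caricature of a contact process on a ring, with every detail an assumption of ours rather than the authors' method; it measures the time to reach the all-healthy absorbing state for a given infection parameter lam.

```python
import random

def absorption_time(lam, L, max_steps=100000, rng=random):
    """Steps until the all-healthy absorbing state is reached, starting
    from the fully infected ring Z/LZ. At each step an infected site is
    chosen; it infects a random neighbour with probability lam/(lam+1),
    otherwise it recovers."""
    infected = set(range(L))
    for t in range(max_steps):
        if not infected:
            return t
        site = rng.choice(sorted(infected))
        if rng.random() < lam / (lam + 1.0):
            infected.add((site + rng.choice((-1, 1))) % L)  # infection
        else:
            infected.discard(site)                          # recovery
    return max_steps  # censored: treated as "did not absorb"
```

Plotting the empirical mean of absorption_time against lam for several ring sizes L, a rough estimate of the critical value is where the curves begin to diverge as L grows.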


