A Note on Surjective Polynomial Operators

Author(s):  
M. Saburov

A linear Markov chain is a discrete-time stochastic process whose transitions depend only on the current state of the process. A nonlinear Markov chain is a discrete-time stochastic process whose transitions may depend on both the current state and the current distribution of the process. These processes arise naturally in the study of the limit behaviour of a large number of weakly interacting Markov processes. Nonlinear Markov processes were introduced by McKean and have been extensively studied in the context of nonlinear Chapman-Kolmogorov equations as well as nonlinear Fokker-Planck equations. A nonlinear Markov chain over a finite state space can be identified with a continuous mapping (a nonlinear Markov operator) defined on the set of all probability distributions of the finite state space (which is a simplex) and with a family of transition matrices depending on the occupation probability distributions of the states. In particular, a linear Markov operator is a linear operator associated with a square stochastic matrix. It is well known that a linear Markov operator is a surjection of the simplex if and only if it is a bijection. The analogous problem was open for nonlinear Markov operators associated with stochastic hyper-matrices; we solve it in this paper. Namely, we show that a nonlinear Markov operator associated with a stochastic hyper-matrix is a surjection of the simplex if and only if it is a permutation of a Lotka-Volterra operator.
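The operator in question can be made concrete with a short sketch. Assuming a cubic stochastic matrix (P_{ij,k}) with P_{ij,k} ≥ 0 and Σ_k P_{ij,k} = 1, the associated nonlinear Markov operator V(x)_k = Σ_{i,j} P_{ij,k} x_i x_j maps the simplex into itself; the matrix below is a hypothetical illustration, not one from the paper:

```python
import numpy as np

def quadratic_operator(P, x):
    """Apply V(x)_k = sum_{i,j} P[i,j,k] * x_i * x_j for a cubic
    stochastic matrix P (P >= 0, each slice sums to 1 over k)."""
    return np.einsum("ijk,i,j->k", P, x, x)

# Illustrative cubic stochastic matrix on 3 states (hypothetical values).
rng = np.random.default_rng(0)
P = rng.random((3, 3, 3))
P = P / P.sum(axis=2, keepdims=True)   # normalise: sum over k equals 1
P = (P + P.transpose(1, 0, 2)) / 2     # symmetrise in (i, j)

x = np.array([0.2, 0.3, 0.5])          # a point of the simplex
y = quadratic_operator(P, x)
print(y, y.sum())                      # image stays in the simplex
```

Since Σ_k V(x)_k = Σ_{i,j} x_i x_j (Σ_k P_{ij,k}) = (Σ_i x_i)² = 1, the image is again a probability distribution.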

1989 ◽  
Vol 26 (4) ◽  
pp. 744-756 ◽  
Author(s):  
Gerardo Rubino ◽  
Bruno Sericola

Sojourn times of Markov processes in subsets of a finite state space are considered. We give a closed form for the distribution of the nth sojourn time in a given subset of states. The asymptotic behaviour of this distribution as time goes to infinity is analyzed, in both the discrete-time and continuous-time cases. We consider the pseudo-aggregated Markov process canonically constructed from the original one by collapsing the states of each subset of a given partition. The relation between limits of moments of the sojourn time distributions in the original Markov process and the moments of the corresponding holding times of the pseudo-aggregated one is also studied.
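As a rough illustration (not the authors' closed-form result), the sojourn-time distribution in a subset can be estimated by simulation; the transition matrix and subset below are hypothetical:

```python
import numpy as np

def sojourn_times(P, subset, start, steps, rng):
    """Simulate a discrete-time chain with transition matrix P and return
    the lengths of its completed sojourns in `subset`."""
    state, length, out = start, 0, []
    for _ in range(steps):
        if state in subset:
            length += 1
        elif length:                 # just left the subset: record the sojourn
            out.append(length)
            length = 0
        state = rng.choice(len(P), p=P[state])
    return out

# Hypothetical 3-state chain; we record sojourns in the subset {0, 1}.
P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.5, 0.3],
              [0.3, 0.3, 0.4]])
rng = np.random.default_rng(1)
times = sojourn_times(P, {0, 1}, 0, 10_000, rng)
print(len(times), np.mean(times))    # number of sojourns, mean length
```

Histogramming `times` approximates the distribution whose exact form the paper derives.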


1967 ◽  
Vol 4 (1) ◽  
pp. 192-196 ◽  
Author(s):  
J. N. Darroch ◽  
E. Seneta

In a recent paper, the authors have discussed the concept of quasi-stationary distributions for absorbing Markov chains having a finite state space, with the further restriction of discrete time. The purpose of the present note is to summarize the analogous results when the time parameter is continuous.
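In the discrete-time finite setting, the quasi-stationary distribution can be computed as the normalised left Perron eigenvector of the sub-stochastic matrix restricted to the transient states; a minimal sketch, with a hypothetical two-state matrix:

```python
import numpy as np

# Sub-stochastic matrix Q over the transient states of an absorbing chain
# (row sums < 1; the deficit is the one-step absorption probability).
Q = np.array([[0.5, 0.4],
              [0.3, 0.5]])

# The quasi-stationary distribution is the normalised left eigenvector of Q
# for its largest (Perron) eigenvalue.
eigvals, left = np.linalg.eig(Q.T)
lam = eigvals.real.max()
qsd = np.abs(left[:, np.argmax(eigvals.real)].real)
qsd /= qsd.sum()
print(qsd)   # limiting conditional distribution given non-absorption
```

The continuous-time analogue summarised in the note replaces Q by the generator restricted to the transient states.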


2014 ◽  
Vol 51 (4) ◽  
pp. 1114-1132 ◽  
Author(s):  
Bernhard C. Geiger ◽  
Christoph Temmel

A lumping of a Markov chain is a coordinatewise projection of the chain. We characterise the entropy rate preservation of a lumping of an aperiodic and irreducible Markov chain on a finite state space by the random growth rate of the cardinality of the realisable preimage of a finite-length trajectory of the lumped chain and by the information needed to reconstruct original trajectories from their lumped images. Both are purely combinatorial criteria, depending only on the transition graph of the Markov chain and the lumping function. A lumping is strongly k-lumpable if and only if the lumped process is a kth-order Markov chain for each starting distribution of the original Markov chain. We characterise strong k-lumpability via tightness of stationary entropic bounds. In the sparse setting, we give sufficient conditions on the lumping to both preserve the entropy rate and be strongly k-lumpable.
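The entropy rate itself is straightforward to compute for a finite irreducible chain, H = -Σ_i π_i Σ_j P_ij log₂ P_ij; whether a lumping preserves it is what the paper's combinatorial criteria decide. A sketch with a hypothetical chain:

```python
import numpy as np

def entropy_rate(P):
    """Entropy rate H = -sum_i pi_i sum_j P_ij log2 P_ij (bits per step)
    of an irreducible chain with transition matrix P."""
    eigvals, vecs = np.linalg.eig(P.T)
    pi = np.abs(vecs[:, np.argmax(eigvals.real)].real)
    pi /= pi.sum()                        # stationary distribution: pi P = pi
    logP = np.log2(P, out=np.zeros_like(P), where=P > 0)
    return float(-pi @ (P * logP).sum(axis=1))

# Hypothetical irreducible, aperiodic 3-state chain.
P = np.array([[0.1, 0.6, 0.3],
              [0.4, 0.2, 0.4],
              [0.5, 0.3, 0.2]])
print(entropy_rate(P))
```

A lumped process (e.g. merging states 0 and 1) is in general not Markov, which is why its entropy rate needs the trajectory-reconstruction arguments of the paper rather than this formula.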


2005 ◽  
Vol 37 (4) ◽  
pp. 1015-1034 ◽  
Author(s):  
Saul D. Jacka ◽  
Zorana Lazic ◽  
Jon Warren

Let (X_t)_{t≥0} be a continuous-time irreducible Markov chain on a finite state space E, let v: E → ℝ\{0} be a map, and let (φ_t)_{t≥0} be the additive functional defined by φ_t = ∫_0^t v(X_s) ds. We consider the case in which the process (φ_t)_{t≥0} is oscillating and that in which (φ_t)_{t≥0} has a negative drift. In each of these cases, we condition the process (X_t, φ_t)_{t≥0} on the event that (φ_t)_{t≥0} is nonnegative until time T and prove weak convergence of the conditioned process as T → ∞.
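The conditioning can be illustrated by Monte Carlo (the paper's results are analytic; the generator Q and map v below are hypothetical). Since φ is piecewise linear between jumps, nonnegativity on [0, T] can be checked at jump times alone:

```python
import numpy as np

def survives(Q, v, x0, T, rng):
    """Simulate a chain with generator Q and phi_t = integral of v(X_s) ds
    up to time T; return the final state and whether phi stayed >= 0.
    phi is piecewise linear, so checking it at jump times suffices."""
    t, x, phi, ok = 0.0, x0, 0.0, True
    while t < T:
        hold = min(rng.exponential(-1.0 / Q[x, x]), T - t)
        phi += v[x] * hold
        ok = ok and phi >= 0.0
        t += hold
        if t < T:
            p = np.maximum(Q[x], 0.0)      # off-diagonal jump rates
            x = rng.choice(len(Q), p=p / p.sum())
    return x, ok

Q = np.array([[-1.0,  0.6,  0.4],
              [ 0.5, -1.2,  0.7],
              [ 0.3,  0.9, -1.2]])
v = np.array([1.0, -0.5, -0.8])            # hypothetical map v: E -> R \ {0}
rng = np.random.default_rng(2)
kept = [x for x, ok in (survives(Q, v, 0, 5.0, rng) for _ in range(2000)) if ok]
print(np.bincount(kept, minlength=3) / len(kept))  # conditioned law of X_T
```

The empirical distribution of the surviving paths' final states approximates the conditioned law whose T → ∞ limit the paper establishes.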


2020 ◽  
Vol 24 ◽  
pp. 718-738
Author(s):  
Thi Phuong Thuy Vo

The discovery of a “hidden population”, whose size and membership are unknown, is made possible by assuming that its members are connected in a social network by their relationships. We explore these groups by a chain-referral sampling (CRS) method, in which participants recommend the people they know. This leads to the study of a Markov chain on a random graph, where vertices represent individuals and edges describe the relationships between the corresponding people. We are interested in the CRS process on the stochastic block model (SBM), which extends the well-known Erdős–Rényi graphs to populations partitioned into communities. The SBM considered here is characterized by a number of vertices N, a number of communities (blocks) m, the proportions of the communities π = (π_1, …, π_m), and a pattern of connection between blocks P = (λ_{kl}/N), (k, l) ∈ {1, …, m}². In this paper, we give a precise description of the dynamics of the CRS process in discrete time on an SBM. The difficulty lies in handling the heterogeneity of the graph. We prove that when the population's size is large, the normalized stochastic process of the referral chain behaves like a deterministic curve which is the unique solution of a system of ODEs.
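A minimal sketch of sampling from the SBM described above (the block rates λ and proportions π are hypothetical; the chain-referral dynamics themselves are the subject of the paper):

```python
import numpy as np

def sample_sbm(N, pi, lam, rng):
    """Sample an SBM: N vertices, community proportions pi, and edge
    probability lam[k, l] / N between a block-k and a block-l vertex."""
    blocks = rng.choice(len(pi), size=N, p=pi)
    probs = lam[np.ix_(blocks, blocks)] / N        # pairwise edge probabilities
    upper = np.triu(rng.random((N, N)) < probs, k=1)
    return blocks, upper | upper.T                 # symmetric, no self-loops

# Hypothetical parameters: two communities, within-block rate 8, across 2.
N = 500
pi = np.array([0.6, 0.4])
lam = np.array([[8.0, 2.0],
                [2.0, 8.0]])
rng = np.random.default_rng(3)
blocks, adj = sample_sbm(N, pi, lam, rng)
print(adj.sum() // 2)    # number of edges
```

On such a graph, a CRS exploration repeatedly interviews a sampled vertex and enqueues its yet-unseen neighbours; the paper shows the normalized version of this process converges to an ODE limit as N → ∞.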

