Stochastic matrix metapopulation models with fast migration: Re-scaling survival to the fast scale

2020 ◽  
Vol 418 ◽  
pp. 108829
Author(s):  
Luis Sanz ◽  
Rafael Bravo de la Parra

1991 ◽  
Vol 28 (1) ◽  
pp. 96-103 ◽  
Author(s):  
Daniel P. Heyman

We are given a Markov chain with states 0, 1, 2, ···. We want to compute a numerical approximation to the solution of the steady-state balance equations. To do this, we truncate the chain, keeping the first n states, make the resulting matrix stochastic in some convenient way, and solve the finite system. The purpose of this paper is to provide sufficient conditions implying that, as n tends to infinity, the stationary distributions of the truncated chains converge to the stationary distribution of the given chain. Our approach is completely probabilistic, and our conditions are given in probabilistic terms. We illustrate how to verify these conditions with five examples.
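As a concrete illustration of the truncation scheme described in the abstract (the specific chain and augmentation choice below are our own, not taken from the paper), consider a discrete-time birth-death chain with known geometric stationary distribution. One "convenient" way to make the truncated matrix stochastic is to return the lost mass to the diagonal:

```python
import numpy as np

def truncated_stationary(p, q, n):
    """Stationary distribution of a birth-death chain (up-prob p, down-prob q)
    truncated to states 0..n-1, with the truncated mass returned to the
    diagonal so each row still sums to 1 (one "convenient" choice)."""
    P = np.zeros((n, n))
    for i in range(n):
        if i > 0:
            P[i, i - 1] = q
        if i + 1 < n:
            P[i, i + 1] = p
        P[i, i] = 1.0 - P[i].sum()   # absorb the truncated mass on the diagonal
    # Solve pi P = pi, sum(pi) = 1, via the eigenvector of P^T for eigenvalue 1
    w, v = np.linalg.eig(P.T)
    pi = np.real(v[:, np.argmin(np.abs(w - 1.0))])
    return pi / pi.sum()

p, q = 0.3, 0.5                      # birth/death probabilities, rho = p/q < 1
rho = p / q
exact = lambda n: (1 - rho) * rho ** np.arange(n)   # geometric stationary law
for n in (5, 20, 80):
    print(n, np.abs(truncated_stationary(p, q, n) - exact(n)).max())
```

For this chain the error shrinks like rho**n, illustrating the convergence of truncated stationary distributions that the paper's conditions guarantee in general.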


2017 ◽  
Vol 100 ◽  
pp. 1-7 ◽  
Author(s):  
Éder A. Gubiani ◽  
Sidinei M. Thomaz ◽  
Luis M. Bini ◽  
Pitágoras A. Piana

2002 ◽  
Vol 218 (3) ◽  
pp. 273-288 ◽  
Author(s):  
ABDUL-AZIZ YAKUBU ◽  
CARLOS CASTILLO-CHAVEZ

2018 ◽  
Vol 14 (1) ◽  
pp. 7540-7559
Author(s):  
Miłosława Sokoł

Virtually every biological model utilising a random number generator is a Markov stochastic process. Numerical simulations of such processes are performed using stochastic or intensity matrices or kernels. Biologists, however, define stochastic processes slightly differently from how mathematicians typically do. A discrete-time, discrete-value stochastic process may be defined by a function p : X₀ × X → {f : Υ → [0, 1]}, where X is a set of states, X₀ is a bounded subset of X, and Υ is a subset of the integers (here associated with discrete time), and where p satisfies 0 ≤ p(x, y)(t) ≤ 1 and Σ_{y ∈ X} p(x, y)(t) = 1. This definition generalizes a stochastic matrix. Although X₀ is bounded, X may include every possible state and is often infinite. By interrupting the process whenever the state transitions into the set X \ X₀, Markov stochastic processes defined this way may have non-square stochastic matrices. A similar principle applies to the intensity matrices and the stochastic and intensity kernels that result from treating many biological models as Markov stochastic processes. The class of such processes has important properties from the point of view of theoretical mathematics. In particular, every process in this class can be simulated (hence they all exist in a physical sense) and has a well-defined probability space associated with it.
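A minimal sketch of the construction described above, with state set, subset X₀, and transition probabilities chosen purely for illustration: rows of the transition matrix are indexed by X₀, columns by all of X, so the matrix is rectangular (non-square) yet each row still sums to 1, and the simulation is interrupted as soon as the state leaves X₀.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative choice: X = {0,...,4} is the full state set,
# X0 = {0, 1, 2} the bounded subset the process is simulated on.
X0 = [0, 1, 2]
P = np.array([
    [0.6, 0.3, 0.1, 0.0, 0.0],   # rows: states in X0
    [0.2, 0.5, 0.2, 0.1, 0.0],   # columns: all states in X
    [0.0, 0.2, 0.5, 0.2, 0.1],
])
assert np.allclose(P.sum(axis=1), 1.0)   # rectangular, but still stochastic

def simulate(x0, t_max):
    """Run the chain from x0, interrupting as soon as the
    state enters X \\ X0 (here, states 3 or 4)."""
    path = [x0]
    for _ in range(t_max):
        x = path[-1]
        if x not in X0:          # interrupted: state escaped X0
            break
        path.append(rng.choice(P.shape[1], p=P[x]))
    return path

path = simulate(0, 50)
print(path)
```

Every state on the path except possibly the last lies in X₀, which is exactly the interruption rule the abstract describes.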


2007 ◽  
Vol 3 (4) ◽  
pp. 276-282 ◽  
Author(s):  
Vittoria Colizza ◽  
Romualdo Pastor-Satorras ◽  
Alessandro Vespignani

2005 ◽  
Vol 862 ◽  
Author(s):  
C. Main ◽  
J. M. Marshall ◽  
S. Reynolds ◽  
M.J. Rose ◽  
R. Brüggemann

In this paper we demonstrate a simple computational procedure for the simulation of transport in a disordered semiconductor in which both multi-trapping and hopping processes occur simultaneously. We base the simulation on earlier work on hopping transport, which used a Monte-Carlo method. Using the same model concepts, we now employ a stochastic matrix approach to speed computation, and also include multi-trapping transitions between localised and extended states. We use the simulation to study the relative contributions of extended-state conduction (with multi-trapping) and hopping conduction (via localised states) to transient photocurrents, for various distributions of localised gap states, and as a function of temperature. The implications of our findings for the interpretation of transient photocurrents are examined.

