stationary distributions: Recently Published Documents

Total documents: 474 (last five years: 59)
H-index: 37 (last five years: 2)

2022, pp. 1-47
Author(s): Amarjit Budhiraja, Nicolas Fraiman, Adam Waterbury

Abstract: We consider a collection of Markov chains that model the evolution of multitype biological populations. The state space of the chains is the positive orthant; the boundary of the orthant is absorbing for the Markov chain and represents the extinction states of the different population types. We are interested in the long-term behavior of the Markov chain away from extinction, under a small noise scaling. Under this scaling, the trajectory of the Markov process over any compact time interval converges in distribution to the solution of an ordinary differential equation (ODE) evolving in the positive orthant. We study the asymptotic behavior of the quasi-stationary distributions (QSD) in this scaling regime. Our main result shows that, under suitable conditions, the limit points of the QSD are supported on the union of interior attractors of the flow determined by the ODE. We also give lower bounds on expected extinction times that scale exponentially with the system size. Results of this type have been obtained by Faure and Schreiber (2014) for the case where the deterministic dynamical system arising in the scaling limit is given by a discrete-time evolution equation and the dynamics are essentially confined to a compact space (namely, the one-step map is a bounded function). Our results extend these to a setting with an unbounded state space and continuous-time dynamics. The proofs rely on uniform large deviation results for small noise stochastic dynamical systems and on methods from the theory of continuous-time dynamical systems. In general, QSD for Markov chains with absorbing states and unbounded state spaces may not exist. We study a basic family of binomial-Poisson models in the positive orthant for which Lyapunov function methods can be used to establish the existence of QSD and to argue the tightness of the QSD of the scaled sequence of Markov chains. The results from the first part are then used to characterize the support of the limit points of this sequence of QSD.
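As a rough, self-contained illustration of the objects above (a hypothetical density-dependent birth-death chain, not the paper's binomial-Poisson family or its large deviation machinery), the following sketch estimates a QSD by evolving many copies of a chain with absorbing state 0 and renormalizing over the copies that survive to a fixed large time; the carrying capacity K plays the role of the system size.

```python
import numpy as np

# Toy model (an assumption for illustration, not taken from the paper):
# a density-dependent birth-death chain on the nonnegative integers with
# absorbing state 0.  The QSD is approximated by the empirical law of many
# independent copies, conditioned on not having been absorbed by time n.
rng = np.random.default_rng(0)
K = 50                        # carrying capacity, playing the role of system size
birth, death = 1.0, 0.8

def step(x):
    if x == 0:
        return 0              # 0 is absorbing (extinction)
    b = birth * x
    d = death * x * (1.0 + x / K)        # density-dependent death rate
    return x + 1 if rng.random() < b / (b + d) else x - 1

n_copies, n_steps = 5000, 1000
state = np.full(n_copies, 5)
for _ in range(n_steps):
    state = np.array([step(x) for x in state])

alive = state[state > 0]
qsd_estimate = np.bincount(alive) / alive.size   # empirical QSD over survivors
print("survivors:", alive.size, "mean population conditioned on survival:", alive.mean())
```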


Author(s): Amarjit Budhiraja, Nicolas Fraiman, Adam Waterbury

We propose two numerical schemes for approximating quasi-stationary distributions (QSD) of finite-state Markov chains with absorbing states. Both schemes are described in terms of interacting chains, where the interaction is given by the total time occupation measure of all particles in the system and has the effect of reinforcing transitions, in an appropriate fashion, toward states where the collection of particles has spent more time. The schemes can be viewed as combining the key features of the two basic simulation-based methods for approximating QSD originating from the works of Fleming and Viot (1979) and Aldous, Flannery and Palacios (1998), respectively. The key difference between the two schemes studied here is that in the first method one starts with $a(n)$ particles at time $0$ and the number of particles stays constant over time, whereas in the second method one starts with one particle and at most one particle is added at each time instant, in such a manner that there are $a(n)$ particles at time $n$. We prove almost sure convergence to the unique QSD and establish central limit theorems for the two schemes under the key assumption that $a(n)=o(n)$. Exploratory numerical results are presented to illustrate the performance of the schemes.
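For orientation, here is a minimal sketch in the spirit of the Aldous-Flannery-Palacios idea mentioned above (a single self-interacting chain, not the $a(n)$-particle schemes analyzed in the paper): whenever the chain hits the absorbing state, it is restarted from a state drawn from its own past occupation measure, and the normalized occupation measure over non-absorbed states approximates the QSD. The transition matrix is an arbitrary illustrative choice.

```python
import numpy as np

# Illustrative 4-state chain; state 0 is absorbing.  (The matrix is an
# arbitrary example, not one from the paper.)
rng = np.random.default_rng(1)
P = np.array([
    [1.0, 0.0, 0.0, 0.0],
    [0.1, 0.5, 0.4, 0.0],
    [0.0, 0.3, 0.4, 0.3],
    [0.0, 0.0, 0.6, 0.4],
])

occupation = np.zeros(4)
occupation[1] = 1.0                      # seed the occupation measure
x = 1
for _ in range(100_000):
    x = rng.choice(4, p=P[x])
    if x == 0:                           # absorbed: restart from the chain's
        w = occupation[1:]               # own past occupation measure
        x = 1 + rng.choice(3, p=w / w.sum())
    occupation[x] += 1

qsd_estimate = occupation[1:] / occupation[1:].sum()
print("occupation-measure estimate:", qsd_estimate)

# Sanity check: the QSD is the normalized left Perron eigenvector of the
# sub-stochastic block on the non-absorbed states.
Q = P[1:, 1:]
vals, vecs = np.linalg.eig(Q.T)
v = np.abs(np.real(vecs[:, np.argmax(np.real(vals))]))
print("eigenvector benchmark:    ", v / v.sum())
```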


Author(s): Виталий Николаевич Соболев, Александр Евгеньевич Кондратенко

The paper considers the stationary distributions of the number of customers in the queueing systems $M_{\lambda}|G|n|\infty$ and $GI_{\lambda}^{\nu}|M_{\mu}|1|\infty$ and shows that introducing auxiliary distributions with a clear probabilistic meaning, together with their generating functions, simplifies both the proofs and their comprehension, and also leads to a new form of the resulting expressions. For the first system, a truncation of the sought stationary distribution of the embedded Markov chain is considered; the truncation is tied to the number of channels $n$ and describes the probability weights of the states of the system in which at least one channel is idle. For the second system, the results are described through a distribution connected to the distribution of the number of requests in an arriving batch: the tail probabilities of that distribution are determined, and the auxiliary probability distribution is obtained by normalizing them relative to one another. In summary, the paper deals with the two queueing systems $M_{\lambda}|G|n|\infty$ and $GI_{\lambda}^{\nu}|M_{\mu}|1|\infty$, and the purpose is to obtain the steady-state results in terms of probability-generating functions.
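As a much simpler illustration of steady-state results expressed through generating functions (the classical M/M/1 queue, not the multi-channel or batch-arrival systems treated in the paper), the sketch below checks the geometric stationary queue-length distribution against its closed-form probability-generating function.

```python
import numpy as np

# For M/M/1 with arrival rate lam, service rate mu and rho = lam/mu < 1,
# the stationary queue length is geometric, P(N = k) = (1 - rho) * rho**k,
# with probability-generating function G(z) = (1 - rho) / (1 - rho * z).
lam, mu = 0.6, 1.0
rho = lam / mu

k = np.arange(200)                       # truncation; rho**200 is negligible
p = (1 - rho) * rho**k

z = 0.7
pgf_from_series = np.sum(p * z**k)       # truncated series for E[z^N]
pgf_closed_form = (1 - rho) / (1 - rho * z)
print(pgf_from_series, pgf_closed_form)  # agree to many digits
```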


2021, Vol. 82 (7)
Author(s): Linard Hoessly

Abstract: We examine reaction networks (CRNs) through their associated continuous-time Markov processes. Studying the dynamics of such networks is in general hard, both analytically and by simulation; in particular, the stationary distributions of stochastic reaction networks are known only in some cases. We analyze class properties of the underlying continuous-time Markov chains of CRNs under the operation of join, and we examine conditions under which the form of the stationary distribution of a CRN can be derived from those of the decomposed parts. The conditions can be easily checked in examples and allow recursive application. The theory developed enables sequential decomposition of the Markov processes and calculation of their stationary distributions. Since the class of processes expressible through such networks is large and only a few assumptions are made, the principle also applies to other stochastic models. We give examples of interest from CRN theory to highlight the decomposition.
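As a toy example of the kind of closed-form stationary distribution such decomposition results build on (a single birth-death network chosen for illustration, not one of the paper's examples), a Gillespie simulation of the network 0 -> A at rate k1 and A -> 0 at rate k2*a can be compared with its known Poisson(k1/k2) stationary distribution.

```python
import numpy as np
from scipy.stats import poisson

# Gillespie simulation of the birth-death CRN:  0 -> A  (rate k1)  and
# A -> 0  (rate k2 * a).  Its stationary distribution is Poisson(k1 / k2).
rng = np.random.default_rng(2)
k1, k2 = 10.0, 1.0
a, t, t_end = 0, 0.0, 5_000.0
time_in_state = {}

while t < t_end:
    rates = np.array([k1, k2 * a])
    total = rates.sum()
    dt = rng.exponential(1.0 / total)
    time_in_state[a] = time_in_state.get(a, 0.0) + dt   # time-weighted counts
    t += dt
    a += 1 if rng.random() < rates[0] / total else -1

states = np.array(sorted(time_in_state))
empirical = np.array([time_in_state[s] for s in states]) / t
print(np.c_[states, empirical, poisson.pmf(states, k1 / k2)][:8])
```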


2021, Vol. 4 (1)
Author(s): Hyukpyo Hong, Jinsu Kim, M. Ali Al-Radhawi, Eduardo D. Sontag, Jae Kyoung Kim

Abstract: Long-term behaviors of biochemical reaction networks (BRNs) are described by steady states in deterministic models and stationary distributions in stochastic models. Unlike deterministic steady states, stationary distributions capturing inherent fluctuations of reactions are extremely difficult to derive analytically due to the curse of dimensionality. Here, we develop a method to derive analytic stationary distributions from deterministic steady states by transforming BRNs to have a special dynamic property, called complex balancing. Specifically, we merge nodes and edges of BRNs to match in- and out-flows of each node. This allows us to derive the stationary distributions of a large class of BRNs, including autophosphorylation networks of EGFR, PAK1, and Aurora B kinase and a genetic toggle switch. This reveals the unique properties of their stochastic dynamics such as robustness, sensitivity, and multi-modality. Importantly, we provide a user-friendly computational package, CASTANET, that automatically derives symbolic expressions of the stationary distributions of BRNs to understand their long-term stochasticity.
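For background, the classical product-form result that complex balancing makes available (due to Anderson, Craciun and Kurtz, 2010), stated here as a known reference point rather than as this paper's own derivation: if the deterministic mass-action model admits a complex-balanced equilibrium $c = (c_1, \ldots, c_d)$, the stochastic model has a product-form Poisson stationary distribution on each irreducible component $\Gamma$ of the state space.

```latex
% Product-form stationary distribution under complex balancing:
\[
  \pi(x) \;=\; M_{\Gamma} \prod_{i=1}^{d} \frac{c_i^{\,x_i}}{x_i!},
  \qquad x \in \Gamma \subseteq \mathbb{Z}_{\ge 0}^{d},
\]
% where M_Gamma is a normalizing constant; on the full lattice it equals
% exp(-(c_1 + ... + c_d)), giving independent Poisson(c_i) marginals.
```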

