Markov population processes

1969 ◽  
Vol 6 (1) ◽  
pp. 1-18 ◽  
Author(s):  
J.F.C. Kingman

Summary The processes of the title have frequently been used to represent situations involving numbers of individuals in different categories or colonies. In such processes the state at any time is represented by the vector n = (n1, n2, …, nk), where ni is the number of individuals in the ith colony, and the random evolution of n is supposed to be that of a continuous-time Markov chain. The jumps of the chain may be of three types, corresponding to the arrival of a new individual, the departure of an existing one, or the transfer of an individual from one colony to another.
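
As a rough illustration of the kind of chain described here, the sketch below simulates a small Markov population process with Gillespie-style jumps of the three types (arrival, departure, transfer). The number of colonies and all rate functions are hypothetical placeholders, not taken from the paper.

```python
# Minimal Gillespie-style sketch of a Markov population process: the state
# is a vector n of colony counts, and each jump is an arrival, a departure,
# or a transfer between colonies.  Rates below are illustrative placeholders.
import numpy as np

rng = np.random.default_rng(0)

k = 3                                     # number of colonies
alpha = np.array([1.0, 0.5, 0.8])         # arrival rate into colony i
beta = np.array([0.2, 0.3, 0.25])         # per-individual departure rate
gamma = 0.05 * np.ones((k, k))            # per-individual transfer rate i -> j
np.fill_diagonal(gamma, 0.0)

def simulate(n0, t_end):
    """Simulate the chain from state n0 up to time t_end."""
    n = np.array(n0, dtype=int)
    t = 0.0
    path = [(t, n.copy())]
    while True:
        arr = alpha                       # arrival events
        dep = beta * n                    # departure events
        tra = gamma * n[:, None]          # transfer events i -> j
        total = arr.sum() + dep.sum() + tra.sum()
        t += rng.exponential(1.0 / total)
        if t > t_end:
            break
        # pick one event with probability proportional to its rate
        rates = np.concatenate([arr, dep, tra.ravel()])
        e = rng.choice(rates.size, p=rates / total)
        if e < k:                         # arrival into colony e
            n[e] += 1
        elif e < 2 * k:                   # departure from colony e - k
            n[e - k] -= 1
        else:                             # transfer i -> j
            i, j = divmod(e - 2 * k, k)
            n[i] -= 1
            n[j] += 1
        path.append((t, n.copy()))
    return path

print(simulate([5, 5, 5], t_end=10.0)[-1])
```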


2015 ◽  
Vol 2015 ◽  
pp. 1-12 ◽  
Author(s):  
Alexander N. Dudin ◽  
Olga S. Dudina

A multiserver queueing system, the dynamics of which depends on the state of some external continuous-time Markov chain (random environment, RE), is considered. A change of the state of the RE may cause variation of the parameters of the arrival process, the service process, the number of available servers, and the available buffer capacity, as well as the behavior of customers. The evolution of the system states is described by a multidimensional continuous-time Markov chain. The generator of this Markov chain is derived. The ergodicity condition is presented. Expressions for the key performance measures are given. Numerical results illustrating the behavior of the system and showing the possibility of formulating and solving optimization problems are provided. The importance of accounting for correlation in the arrival process is numerically illustrated.
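
The generator construction can be illustrated on a heavily simplified version of such a system: a single queue with a finite buffer whose arrival rate, service rate, and number of servers are switched by a two-state random environment. The sketch below builds the generator on states (environment, number in system) and solves for the stationary distribution; all numerical values are illustrative and the model is far simpler than the one analysed in the paper.

```python
# Simplified sketch: a finite Markov chain on (environment state, queue length),
# where the environment modulates arrival rate, service rate and server count.
import numpy as np

env_rates = np.array([[-0.1, 0.1],        # generator of the 2-state environment
                      [ 0.2, -0.2]])
lam = [2.0, 5.0]                          # arrival rate in environment 0 / 1
mu = [1.0, 1.2]                           # per-server service rate
servers = [3, 2]                          # servers available in each env state
N = 20                                    # at most N customers in the system

def idx(e, n):                            # flatten (environment, queue length)
    return e * (N + 1) + n

dim = 2 * (N + 1)
Q = np.zeros((dim, dim))
for e in range(2):
    for n in range(N + 1):
        i = idx(e, n)
        if n < N:                                     # arrival
            Q[i, idx(e, n + 1)] += lam[e]
        if n > 0:                                     # service completion
            Q[i, idx(e, n - 1)] += mu[e] * min(n, servers[e])
        for f in range(2):                            # environment switch
            if f != e:
                Q[i, idx(f, n)] += env_rates[e, f]
Q -= np.diag(Q.sum(axis=1))                           # generator rows sum to 0

# stationary distribution: solve pi Q = 0 with sum(pi) = 1
A = np.vstack([Q.T, np.ones(dim)])
b = np.zeros(dim + 1); b[-1] = 1.0
pi, *_ = np.linalg.lstsq(A, b, rcond=None)
print("mean number in system:",
      sum(pi[idx(e, n)] * n for e in range(2) for n in range(N + 1)))
```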


1988 ◽  
Vol 25 (4) ◽  
pp. 808-814 ◽  
Author(s):  
Keith N. Crank

This paper presents a method of approximating the state probabilities for a continuous-time Markov chain. This is done by constructing a right-shift process and then solving the Kolmogorov system of differential equations recursively. By solving a finite number of the differential equations, it is possible to obtain the state probabilities to any degree of accuracy over any finite time interval.
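
The general idea of approximating state probabilities by solving only finitely many Kolmogorov equations can be sketched with plain truncation (the paper's right-shift construction is a more refined device). Assuming a simple birth-death chain as the example:

```python
# Generic illustration: truncate the state space of a CTMC (here an M/M/1
# queue) and integrate the finitely many Kolmogorov forward equations.
import numpy as np
from scipy.integrate import solve_ivp

lam, mu, N = 1.0, 1.5, 50            # birth rate, death rate, truncation level

Q = np.zeros((N + 1, N + 1))
for n in range(N + 1):
    if n < N:
        Q[n, n + 1] = lam            # birth
    if n > 0:
        Q[n, n - 1] = mu             # death
Q -= np.diag(Q.sum(axis=1))          # diagonal makes rows sum to 0

p0 = np.zeros(N + 1); p0[0] = 1.0    # start in state 0

# Kolmogorov forward equations: dp/dt = p Q
sol = solve_ivp(lambda t, p: p @ Q, (0.0, 10.0), p0, t_eval=[10.0])
p_T = sol.y[:, -1]
print("P(state 0 at t=10) ≈", p_T[0])
```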


2020 ◽  
Vol 21 (2) ◽  
pp. 181-216 ◽  
Author(s):  
Jean Roch Donsimoni ◽  
René Glawion ◽  
Bodo Plachter ◽  
Klaus Wälde

Abstract We model the evolution of the number of individuals reported sick with COVID-19 in Germany. Our theoretical framework builds on a continuous-time Markov chain with four states: healthy without infection, sick, healthy after recovery or despite infection but without symptoms, and deceased. Our quantitative solution matches the number of sick individuals up to the most recent observation and ends with a share of sick individuals following from infection rates and sickness probabilities. We employ this framework to study, inter alia, the expected peak of the number of sick individuals in Germany in a scenario without public regulation of social contacts. We also study the effects of public regulations. For all scenarios we report the expected end date of the CoV-2 epidemic.
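
A minimal numerical sketch of a four-state chain of this kind is given below. It uses constant, purely illustrative transition rates, whereas the paper calibrates time-varying infection rates and sickness probabilities to German data.

```python
# Sketch of a four-state continuous-time Markov chain of the kind described:
# states are (healthy, sick, recovered, deceased).  Rates are arbitrary
# placeholders, not the calibrated values of the paper.
import numpy as np
from scipy.integrate import solve_ivp

infection, recovery, death = 0.15, 0.10, 0.002    # hypothetical daily rates

Q = np.array([
    [-infection,           infection, 0.0,      0.0  ],   # healthy
    [0.0,        -(recovery + death), recovery, death],   # sick
    [0.0,                  0.0,       0.0,      0.0  ],   # recovered (absorbing)
    [0.0,                  0.0,       0.0,      0.0  ],   # deceased  (absorbing)
])

p0 = np.array([0.999, 0.001, 0.0, 0.0])           # initial shares

# forward equations for the share of the population in each state
sol = solve_ivp(lambda t, p: p @ Q, (0.0, 365.0), p0,
                t_eval=np.linspace(0.0, 365.0, 366))
share_sick = sol.y[1]
print("peak share sick:", share_sick.max(),
      "on day", int(share_sick.argmax()))
```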


2015 ◽  
Vol 32 (3-4) ◽  
pp. 159-176 ◽
Author(s):  
Nicole Bäuerle ◽  
Igor Gilitschenski ◽  
Uwe Hanebeck

Abstract We consider a Hidden Markov Model (HMM) where the integrated continuous-time Markov chain can be observed at discrete time points perturbed by a Brownian motion. The aim is to derive a filter for the underlying continuous-time Markov chain. The recursion formula for the discrete-time filter is easy to derive; however, it involves densities which are very hard to obtain. In this paper we derive exact formulas for the necessary densities in the case where the state space of the HMM consists of only two elements. This is done by relating the underlying integrated continuous-time Markov chain to the so-called asymmetric telegraph process and by using recent results on this process. In case the state space consists of more than two elements, we present three different ways to approximate the densities for the filter. The first approach is based on the continuous filter problem. The second approach is to derive a PDE for the densities and solve it numerically. The third approach is a crude discrete-time approximation of the Markov chain. All three approaches are compared in a numerical study.
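
The third approach, a crude discrete-time approximation of the chain, is the easiest to sketch: treat the hidden state as constant on each observation interval, so every observed increment of the integrated chain is the state's value times the interval length plus Gaussian noise, and run a standard HMM forward filter. All parameters below are illustrative.

```python
# Crude discrete-time approximation: hidden 2-state chain assumed constant on
# each observation interval; observed increments get a standard forward filter.
import numpy as np
from scipy.linalg import expm
from scipy.stats import norm

rng = np.random.default_rng(1)

G = np.array([[-1.0, 1.0],
              [ 2.0, -2.0]])              # generator of the hidden 2-state chain
values = np.array([0.0, 1.0])             # value integrated in each state
sigma, dt, T = 0.3, 0.1, 200              # noise level, step size, number of steps

P = expm(G * dt)                          # one-step transition matrix

# --- simulate a path of the chain and noisy increments of its integral ---
x = np.zeros(T, dtype=int)
for t in range(1, T):
    x[t] = rng.choice(2, p=P[x[t - 1]])
y = values[x] * dt + sigma * np.sqrt(dt) * rng.standard_normal(T)

# --- forward filter: pi_t(i) ∝ sum_j pi_{t-1}(j) P[j, i] * f(y_t | state i) ---
pi = np.array([0.5, 0.5])
filtered = np.empty((T, 2))
for t in range(T):
    lik = norm.pdf(y[t], loc=values * dt, scale=sigma * np.sqrt(dt))
    pi = (pi @ P) * lik
    pi /= pi.sum()
    filtered[t] = pi

print("fraction of steps where the filter picks the true state:",
      np.mean(filtered.argmax(axis=1) == x))
```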


1994 ◽  
Vol 26 (04) ◽  
pp. 919-946 ◽  
Author(s):  
Frank Ball ◽  
Robin K. Milne ◽  
Geoffrey F. Yeo

We study a bivariate stochastic process {X(t)} = {(XE(t), Z(t))}, where {XE(t)} is a continuous-time Markov chain describing the environment and {Z(t)} is the process of interest. In the context which motivated this study, {Z(t)} models the gating behaviour of a single ion channel. It is assumed that given {XE(t)}, the channel process {Z(t)} is a continuous-time Markov chain with infinitesimal generator at time t dependent on XE(t), and that the environment process {XE(t)} is not dependent on {Z(t)}. We derive necessary and sufficient conditions for {X(t)} to be time reversible, showing that then its equilibrium distribution has a product form which reflects independence of the state of the environment and the state of the channel. In the special case when the environment controls the speed of the channel process, we derive transition probabilities and sojourn time distributions for {Z(t)} by exploiting connections with Markov reward processes. Some of these results are extended to a stationary environment. Applications to problems arising in modelling multiple ion channel systems are discussed. In particular, we present ways in which a multichannel model in a random environment does and does not exhibit behaviour identical to a corresponding model based on independent and identically distributed channels.
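
The special case in which the environment only rescales the speed of the channel process can be checked numerically: build the generator of the joint chain (environment, channel) from the two marginal generators and a vector of speed factors, compute its stationary distribution, and compare it with the product of the marginal stationary distributions. All rates below are illustrative.

```python
# Sketch of the speed-modulation special case: joint generator of
# (environment, channel), its stationary law, and the product-form check.
import numpy as np

QE = np.array([[-0.5, 0.5],
               [ 1.0, -1.0]])                   # environment generator
QC = np.array([[-2.0, 2.0],
               [ 3.0, -3.0]])                   # channel generator (base speed)
speed = np.array([1.0, 4.0])                    # speed factor in each env state

nE, nC = QE.shape[0], QC.shape[0]

# joint generator on states (e, c), ordered e-major:
#   environment moves: QE ⊗ I,   channel moves: diag(speed) ⊗ QC
Q = np.kron(QE, np.eye(nC)) + np.kron(np.diag(speed), QC)

def stationary(G):
    """Stationary distribution of a generator G via pi G = 0, sum(pi) = 1."""
    A = np.vstack([G.T, np.ones(G.shape[0])])
    b = np.zeros(G.shape[0] + 1); b[-1] = 1.0
    return np.linalg.lstsq(A, b, rcond=None)[0]

pi_joint = stationary(Q)
pi_env = stationary(QE)
pi_chan = stationary(QC)                        # unchanged by speed scaling

product = np.kron(pi_env, pi_chan)
print("joint stationary:     ", np.round(pi_joint, 4))
print("product of marginals: ", np.round(product, 4))
print("product form holds:", np.allclose(pi_joint, product))
```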

