Continuous-time Markov chains in a random environment, with applications to ion channel modelling

1994 ◽  
Vol 26 (4) ◽  
pp. 919-946 ◽  
Author(s):  
Frank Ball ◽  
Robin K. Milne ◽  
Geoffrey F. Yeo

We study a bivariate stochastic process {X(t)} = {(XE(t), Z(t))}, where {XE(t)} is a continuous-time Markov chain describing the environment and {Z(t)} is the process of interest. In the context which motivated this study, {Z(t)} models the gating behaviour of a single ion channel. It is assumed that given {XE(t)}, the channel process {Z(t)} is a continuous-time Markov chain with infinitesimal generator at time t dependent on XE(t), and that the environment process {XE(t)} is not dependent on {Z(t)}. We derive necessary and sufficient conditions for {X(t)} to be time reversible, showing that then its equilibrium distribution has a product form which reflects independence of the state of the environment and the state of the channel. In the special case when the environment controls the speed of the channel process, we derive transition probabilities and sojourn time distributions for {Z(t)} by exploiting connections with Markov reward processes. Some of these results are extended to a stationary environment. Applications to problems arising in modelling multiple ion channel systems are discussed. In particular, we present ways in which a multichannel model in a random environment does and does not exhibit behaviour identical to a corresponding model based on independent and identically distributed channels.
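
As a rough illustration of the product-form result, the following sketch (with an invented two-state environment, a two-state channel, and the 'speed' mechanism mentioned above) builds the generator of the bivariate chain {(XE(t), Z(t))}, computes its equilibrium distribution, and compares it with the product of the environment and channel marginals. All rate values are hypothetical.

```python
import numpy as np

# --- Hypothetical example (all rates invented for illustration) ---
# Environment: 2-state CTMC with generator A.
A = np.array([[-1.0,  1.0],
              [ 2.0, -2.0]])
# Channel: 2 states (closed = 0, open = 1); the environment rescales a base
# generator, i.e. it controls the "speed" of the channel process.
Q_base = np.array([[-3.0,  3.0],
                   [ 5.0, -5.0]])
speeds = [1.0, 4.0]                       # channel speed in each environment state
Q_env = [s * Q_base for s in speeds]

n_env, n_ch = A.shape[0], Q_base.shape[0]

# Generator of the bivariate chain X(t) = (XE(t), Z(t)) on the product space,
# with state (e, z) mapped to index e * n_ch + z.
G = np.zeros((n_env * n_ch, n_env * n_ch))
for e in range(n_env):
    for z in range(n_ch):
        i = e * n_ch + z
        for e2 in range(n_env):           # environment jumps, channel unchanged
            if e2 != e:
                G[i, e2 * n_ch + z] += A[e, e2]
        for z2 in range(n_ch):            # channel jumps at environment-dependent rates
            if z2 != z:
                G[i, e * n_ch + z2] += Q_env[e][z, z2]
        G[i, i] = -G[i].sum()

def stationary(gen):
    """Solve pi @ gen = 0 with sum(pi) = 1."""
    n = gen.shape[0]
    M = np.vstack([gen.T, np.ones(n)])
    b = np.zeros(n + 1); b[-1] = 1.0
    return np.linalg.lstsq(M, b, rcond=None)[0]

pi = stationary(G).reshape(n_env, n_ch)
pi_env, pi_ch = pi.sum(axis=1), pi.sum(axis=0)
print("joint stationary distribution:\n", pi)
print("product of marginals:\n", np.outer(pi_env, pi_ch))
# When the reversibility conditions of the paper hold (as in this speed case),
# the two arrays agree, reflecting independence of environment and channel.
```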


1993 ◽  
Vol 30 (3) ◽  
pp. 518-528 ◽  
Author(s):  
Frank Ball ◽  
Geoffrey F. Yeo

We consider lumpability for continuous-time Markov chains and provide a simple probabilistic proof of necessary and sufficient conditions for strong lumpability, valid in circumstances not covered by known theory. We also consider the following marginalisability problem. Let {X(t)} = {(X1(t), X2(t), …, Xm(t))} be a continuous-time Markov chain. Under what conditions are the marginal processes {X1(t)}, {X2(t)}, …, {Xm(t)} also continuous-time Markov chains? We show that this is related to lumpability and, if no two of the marginal processes can jump simultaneously, then they are continuous-time Markov chains if and only if they are mutually independent. Applications to ion channel modelling and birth–death processes are discussed briefly.
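
The strong-lumpability condition can be checked directly on a generator: for every block of the partition and every other block, the total rate from a state into that other block must be the same for all states of the originating block. A minimal sketch, with an invented four-state generator and an assumed two-block partition:

```python
import numpy as np

def is_strongly_lumpable(Q, partition, tol=1e-9):
    """Check the usual strong-lumpability condition for a CTMC generator Q:
    for each block B and each other block C, the total rate from a state of B
    into C must not depend on which state of B it is."""
    for B in partition:
        for C in partition:
            if C is B:
                continue
            rates = [Q[i, list(C)].sum() for i in B]
            if max(rates) - min(rates) > tol:
                return False
    return True

# Hypothetical 4-state generator; candidate blocks are {0, 1} and {2, 3}.
Q = np.array([[-3.0,  1.0,  1.0,  1.0],
              [ 2.0, -4.0,  1.5,  0.5],
              [ 0.5,  1.5, -3.0,  1.0],
              [ 1.0,  1.0,  2.0, -4.0]])
print(is_strongly_lumpable(Q, [[0, 1], [2, 3]]))   # True for this example
```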


1997 ◽  
Vol 29 (1) ◽  
pp. 92-113 ◽  
Author(s):  
Frank Ball ◽  
Sue Davies

The gating mechanism of a single ion channel is usually modelled by a continuous-time Markov chain with a finite state space. The state space is partitioned into two classes, termed ‘open’ and ‘closed’, and it is possible to observe only which class the process is in. In many experiments channel openings occur in bursts. This can be modelled by partitioning the closed states further into ‘short-lived’ and ‘long-lived’ closed states, and defining a burst of openings to be a succession of open sojourns separated by closed sojourns that are entirely within the short-lived closed states. There is also evidence that bursts of openings are themselves grouped together into clusters. This clustering of bursts can be described by the ratio of the variance Var(N(t)) to the mean E[N(t)] of the number of bursts of openings commencing in (0, t]. In this paper two methods of determining Var(N(t))/E[N(t)] and lim t→∞ Var(N(t))/E[N(t)] are developed, the first via an embedded Markov renewal process and the second via an augmented continuous-time Markov chain. The theory is illustrated by a numerical study of a molecular stochastic model of the nicotinic acetylcholine receptor. Extensions to semi-Markov models of ion channel gating and the incorporation of time interval omission are briefly discussed.
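
Neither of the two analytical methods is reproduced here, but the ratio they compute can be illustrated by simulation. The sketch below uses an invented three-state gating generator (one open state, one short-lived and one long-lived closed state), counts bursts of openings commencing in (0, t] as defined above, and estimates Var(N(t))/E[N(t)] by Monte Carlo.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical gating generator: state 0 = open, 1 = short-lived closed,
# 2 = long-lived closed (all rates invented for illustration).
Q = np.array([[-100.0,  100.0,   0.0],
              [ 300.0, -350.0,  50.0],
              [   0.0,    2.0,  -2.0]])

OPEN, LONG = {0}, {2}

def count_bursts(Q, t_end):
    """Simulate the chain on (0, t_end] and count bursts of openings: a new
    burst commences at an entry into the open class that follows a visit to a
    long-lived closed state (or the first opening)."""
    state, t = 2, 0.0                 # start in the long-lived closed class
    seen_long, bursts = True, 0
    while True:
        t += rng.exponential(1.0 / -Q[state, state])
        if t > t_end:
            return bursts
        probs = np.clip(Q[state], 0.0, None)     # off-diagonal jump rates
        probs[state] = 0.0
        state = rng.choice(len(probs), p=probs / probs.sum())
        if state in LONG:
            seen_long = True
        elif state in OPEN and seen_long:
            bursts += 1               # a burst of openings commences here
            seen_long = False

t, n_rep = 5.0, 2000
counts = np.array([count_bursts(Q, t) for _ in range(n_rep)])
ratio = counts.var(ddof=1) / counts.mean()
print(f"estimated Var(N(t))/E[N(t)] at t = {t}: {ratio:.3f}")
```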


2000 ◽  
Vol 37 (1) ◽  
pp. 45-63 ◽  
Author(s):  
N. M. van Dijk ◽  
H. Korezlioglu

This work presents an estimate of the error in a cumulative reward function up to the entrance time of a continuous-time Markov chain into a set, when the infinitesimal generator of the chain is perturbed. The derivation of an error bound constitutes the first part of the paper, while the second part deals with an application in which the time until saturation is considered for a circuit-switched network which starts from an empty state and is also subject to possible failures.
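
The quantity such a bound controls can be computed directly for a small example. The sketch below, with a made-up five-state birth–death chain and ‘saturation’ taken as the top state, evaluates the expected cumulative reward up to the entrance time under the original and a perturbed generator, by solving the linear system Q_BB v = −r_B on the complement B of the target set; the analytical error bound itself is not reproduced.

```python
import numpy as np

def reward_until_entrance(Q, reward, target):
    """Expected cumulative reward E_i[ integral_0^T r(X(s)) ds ], where T is the
    entrance time into `target`; solves Q_BB v = -r_B on B = complement of target."""
    n = Q.shape[0]
    B = [i for i in range(n) if i not in target]
    v = np.zeros(n)
    v[B] = np.linalg.solve(Q[np.ix_(B, B)], -np.asarray(reward, float)[B])
    return v

def bd_generator(birth, death):
    """Birth-death generator with birth rates birth[i] (i -> i+1) and
    death rates death[i] (i+1 -> i)."""
    n = len(birth) + 1
    Q = np.zeros((n, n))
    for i in range(n - 1):
        Q[i, i + 1] = birth[i]
        Q[i + 1, i] = death[i]
    np.fill_diagonal(Q, -Q.sum(axis=1))
    return Q

# Hypothetical 5-state chain; target set = {4} ("saturation").
Q  = bd_generator(birth=[3.0, 3.0, 3.0, 3.0],  death=[1.0, 1.0, 1.0, 1.0])
Qp = bd_generator(birth=[3.1, 2.9, 3.0, 3.05], death=[1.0, 1.1, 0.95, 1.05])  # perturbed

reward = np.ones(5)          # reward rate 1 everywhere => expected time to saturation
v, vp = reward_until_entrance(Q, reward, {4}), reward_until_entrance(Qp, reward, {4})
print("original :", v)
print("perturbed:", vp)
print("max abs difference:", np.abs(v - vp).max())
```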


1989 ◽  
Vol 26 (3) ◽  
pp. 643-648 ◽  
Author(s):  
A. I. Zeifman

We consider a non-homogeneous continuous-time Markov chain X(t) with countable state space. Definitions of uniform and strong quasi-ergodicity are introduced. The forward Kolmogorov system for X(t) is considered as a differential equation in the sequence space l1. Sufficient conditions for uniform quasi-ergodicity are deduced from this equation. We also consider conditions for uniform and strong ergodicity in the case of proportional intensities.
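
For a numerical illustration (not the paper's analysis), the forward Kolmogorov system p'(t) = p(t)Q(t) can be integrated after truncating the countable state space. The sketch below uses an invented non-homogeneous birth–death chain whose intensities share a common time-dependent factor, in the spirit of the proportional-intensities case.

```python
import numpy as np
from scipy.integrate import solve_ivp

N = 30                      # truncation level of the countable state space

def Q(t):
    """Hypothetical non-homogeneous birth-death generator with 'proportional'
    intensities: all rates share the common time factor a(t) = 1 + 0.5*sin(t)."""
    a = 1.0 + 0.5 * np.sin(t)
    G = np.zeros((N + 1, N + 1))
    for i in range(N):
        G[i, i + 1] = a * 2.0              # birth rate from state i
        G[i + 1, i] = a * 1.0 * (i + 1)    # death rate from state i + 1
    np.fill_diagonal(G, -G.sum(axis=1))
    return G

def forward(t, p):
    return p @ Q(t)          # forward Kolmogorov system  p'(t) = p(t) Q(t)

p0 = np.zeros(N + 1); p0[0] = 1.0
sol = solve_ivp(forward, (0.0, 20.0), p0, method="LSODA", rtol=1e-8, atol=1e-10)
p_final = sol.y[:, -1]
print("probability mass kept after truncation:", p_final.sum())
print("mean state at t = 20:", (np.arange(N + 1) * p_final).sum())
```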


2015 ◽  
Vol 2015 ◽  
pp. 1-12 ◽  
Author(s):  
Alexander N. Dudin ◽  
Olga S. Dudina

A multiserver queueing system, the dynamics of which depends on the state of an external continuous-time Markov chain (random environment, RE), is considered. A change of state of the RE may cause variation of the parameters of the arrival process, the service process, the number of available servers, and the available buffer capacity, as well as the behavior of customers. Evolution of the system states is described by a multidimensional continuous-time Markov chain. The generator of this Markov chain is derived. The ergodicity condition is presented. Expressions for the key performance measures are given. Numerical results illustrating the behavior of the system and showing the possibility of formulating and solving optimization problems are provided. The importance of accounting for correlation in the arrival process is numerically illustrated.
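
A heavily simplified stand-in for such a model is a Markov-modulated M/M/c/K queue, in which the random environment switches the arrival rate, the per-server service rate, and the number of available servers. The sketch below (all parameters invented, and far cruder than the paper's model) assembles the generator on the product space and computes the stationary distribution, the mean number in the system, and the loss probability.

```python
import numpy as np

# Hypothetical Markov-modulated M/M/c/K queue.
H   = np.array([[-0.5,  0.5],
                [ 1.0, -1.0]])     # random-environment (RE) generator
lam = [4.0, 10.0]                  # arrival rate in each RE state
mu  = [2.0,  2.5]                  # per-server service rate in each RE state
c   = [3, 2]                       # number of available servers in each RE state
K   = 20                           # total capacity (in service + buffer)

n_env, n_lvl = H.shape[0], K + 1
dim = n_env * n_lvl

def idx(e, n):
    return e * n_lvl + n

G = np.zeros((dim, dim))
for e in range(n_env):
    for n in range(n_lvl):
        i = idx(e, n)
        for e2 in range(n_env):                      # environment switches
            if e2 != e:
                G[i, idx(e2, n)] += H[e, e2]
        if n < K:                                    # arrival
            G[i, idx(e, n + 1)] += lam[e]
        if n > 0:                                    # service completion
            G[i, idx(e, n - 1)] += min(n, c[e]) * mu[e]
        G[i, i] = -G[i].sum()

# Stationary distribution: pi @ G = 0, sum(pi) = 1.
M = np.vstack([G.T, np.ones(dim)])
b = np.zeros(dim + 1); b[-1] = 1.0
pi = np.linalg.lstsq(M, b, rcond=None)[0].reshape(n_env, n_lvl)

mean_in_system = (pi.sum(axis=0) * np.arange(n_lvl)).sum()
arrival_rate = sum(pi[e].sum() * lam[e] for e in range(n_env))
loss = sum(pi[e, K] * lam[e] for e in range(n_env)) / arrival_rate
print("mean number in system:", mean_in_system)
print("loss probability     :", loss)
```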


1988 ◽  
Vol 2 (2) ◽  
pp. 267-268
Author(s):  
Sheldon M. Ross

In [1] an approach to approximate the transition probabilities and mean occupation times of a continuous-time Markov chain is presented. For the chain under consideration, let Pij(t) and Tij(t) denote respectively the probability that it is in state j at time t, and the total time spent in j by time t, in both cases conditional on the chain starting in state i. Also, let Y1,…, Yn be independent exponential random variables each with rate λ = n/t, which are also independent of the Markov chain.
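
Evaluating the chain at the Erlang time Y1 + … + Yn gives the transition matrix ((I − (t/n)Q)^−1)^n, which converges to e^{tQ} as n → ∞; this appears to be the approximation at issue. A small sketch, with a hypothetical three-state generator, compares it with the exact matrix exponential.

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical generator of a 3-state continuous-time Markov chain.
Q = np.array([[-2.0,  1.5,  0.5],
              [ 1.0, -3.0,  2.0],
              [ 0.5,  0.5, -1.0]])

def approx_P(Q, t, n):
    """Transition matrix of the chain evaluated at Y1 + ... + Yn, where the Yi
    are i.i.d. exponential with rate n/t (the setup in the abstract):
    ((I - (t/n) Q)^{-1})^n, which tends to expm(t Q) as n grows."""
    R = np.linalg.inv(np.eye(Q.shape[0]) - (t / n) * Q)
    return np.linalg.matrix_power(R, n)

t = 1.5
exact = expm(t * Q)
for n in (1, 10, 100, 1000):
    err = np.abs(approx_P(Q, t, n) - exact).max()
    print(f"n = {n:4d}   max abs error = {err:.2e}")
```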


1988 ◽  
Vol 25 (4) ◽  
pp. 808-814 ◽  
Author(s):  
Keith N. Crank

This paper presents a method of approximating the state probabilities for a continuous-time Markov chain. This is done by constructing a right-shift process and then solving the Kolmogorov system of differential equations recursively. By solving a finite number of the differential equations, it is possible to obtain the state probabilities to any degree of accuracy over any finite time interval.
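
The recursive idea is easiest to see when the generator is upper triangular: for a right-shift process the equation for p0 involves only p0, the equation for p1 only p0 and p1, and so on, so the states can be solved one at a time. The sketch below (a hypothetical pure-birth chain, not the construction in the paper) solves the forward equations in this sequential fashion.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical pure-birth chain (a simple right-shift process): from state j the
# only transition is to j + 1, at rate lam[j].  The forward equations are then
# triangular and can be solved one state at a time:
#   p_0'(t) = -lam[0] p_0(t)
#   p_j'(t) = -lam[j] p_j(t) + lam[j-1] p_{j-1}(t),   j = 1, 2, ...
lam = [1.0, 2.0, 1.5, 3.0, 2.5]           # rates out of states 0..4 (invented)
t_end, n_states = 2.0, len(lam) + 1        # keep states 0..5; state 5 collects the rest

solutions = []                             # dense solutions p_0, p_1, ... in turn
for j in range(n_states - 1):
    if j == 0:
        rhs = lambda t, p: -lam[0] * p
        p0 = [1.0]                         # chain starts in state 0
    else:
        prev = solutions[j - 1]
        rhs = lambda t, p, j=j, prev=prev: -lam[j] * p + lam[j - 1] * prev.sol(t)
        p0 = [0.0]
    solutions.append(solve_ivp(rhs, (0.0, t_end), p0, dense_output=True,
                               rtol=1e-9, atol=1e-12))

p = np.array([s.sol(t_end)[0] for s in solutions])
print("p_0..p_4 at t = 2 :", np.round(p, 5))
print("p_5 (remainder)   :", 1.0 - p.sum())
```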

