A method for approximating the probability functions of a Markov chain

1988 ◽  
Vol 25 (4) ◽  
pp. 808-814 ◽  
Author(s):  
Keith N. Crank

This paper presents a method of approximating the state probabilities for a continuous-time Markov chain. This is done by constructing a right-shift process and then solving the Kolmogorov system of differential equations recursively. By solving a finite number of the differential equations, it is possible to obtain the state probabilities to any degree of accuracy over any finite time interval.
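As a rough illustration of the kind of computation the abstract describes (this is not Crank's right-shift construction), the sketch below numerically integrates the Kolmogorov forward equations p'(t) = p(t)Q for a continuous-time Markov chain truncated to finitely many states; the birth-death generator, rates, truncation level, and time horizon are made-up example values.

```python
# Minimal sketch: solving the truncated Kolmogorov forward equations
# dp/dt = p(t) Q for a continuous-time Markov chain (row-vector convention).
import numpy as np
from scipy.integrate import solve_ivp

def birth_death_generator(n_states, lam=1.0, mu=1.5):
    """Generator of a birth-death chain truncated to n_states states."""
    Q = np.zeros((n_states, n_states))
    for i in range(n_states):
        if i + 1 < n_states:
            Q[i, i + 1] = lam          # birth rate
        if i > 0:
            Q[i, i - 1] = mu * i       # death rate (linear in the state)
        Q[i, i] = -Q[i].sum()
    return Q

Q = birth_death_generator(50)
p0 = np.zeros(50)
p0[0] = 1.0                            # start in state 0

sol = solve_ivp(lambda t, p: p @ Q, (0.0, 5.0), p0, rtol=1e-8, atol=1e-10)
print(sol.y[:5, -1])                   # approximate state probabilities at t = 5
```

Truncating to a finite number of states and integrating a finite system of equations mirrors the abstract's point that the state probabilities can be approximated to any accuracy over a finite time interval.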


2020 ◽  
Vol 12 (2) ◽  
pp. 504-521
Author(s):  
T.V. Koval'chuk ◽  
V.V. Mogylova ◽  
O.M. Stanzhytskyi ◽  
T.V. Shovkoplyas

The problem of optimal control on a finite time interval is considered for a system of differential equations with impulse action at fixed moments of time, together with the corresponding averaged system of ordinary differential equations. The existence of optimal controls for both the exact and the averaged problems is proved. It is also established that the optimal control of the averaged problem provides an approximate optimal synthesis for the exact problem. The main result of the article is a theorem showing that the optimal control of the averaged problem is almost optimal for the exact problem. The proximity of the solutions of the exact and averaged problems is also substantiated.
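As a hedged illustration of the averaging idea underlying the result (with the control component omitted), the sketch below compares a scalar impulsive system with a rapidly oscillating right-hand side to its averaged ODE over a finite interval of length O(1/ε); the vector field, impulse sizes, and ε are made-up example values.

```python
# Averaging principle for an impulsive ODE: the exact system has a small,
# oscillating drift plus small impulses at integer times; the averaged ODE
# replaces both by their mean effect.
import numpy as np

eps = 0.01
T = 1.0 / eps                      # finite interval of length O(1/eps)

def exact_trajectory(x0, dt=1e-3):
    t, x = 0.0, x0
    next_impulse = 1.0
    while t < T:
        x += dt * eps * (np.sin(t) ** 2) * x   # dx/dt = eps * sin^2(t) * x
        t += dt
        if t >= next_impulse:                  # impulse: x -> x + eps*0.1*x
            x += eps * 0.1 * x
            next_impulse += 1.0
    return x

def averaged_trajectory(x0, dt=1e-3):
    # mean of sin^2 is 1/2; impulses contribute 0.1 per unit time on average
    t, y = 0.0, x0
    while t < T:
        y += dt * eps * (0.5 + 0.1) * y
        t += dt
    return y

print(exact_trajectory(1.0), averaged_trajectory(1.0))   # close for small eps
```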


2014 ◽  
Vol 2014 ◽  
pp. 1-8
Author(s):  
Li Liang

This paper is concerned with the problem of finite-time boundedness for a class of delayed Markovian jumping neural networks with partly unknown transition probabilities. By introducing an appropriate stochastic Lyapunov-Krasovskii functional and the concept of stochastic finite-time boundedness for Markovian jumping neural networks, a new method is proposed to guarantee that the state trajectory remains in a bounded region of the state space over a prespecified finite time interval. Finally, numerical examples are given to illustrate the effectiveness and reduced conservativeness of the proposed results.
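The sketch below is only an empirical Monte Carlo check of the finite-time boundedness property itself (not the paper's Lyapunov-Krasovskii/LMI conditions): it simulates a two-mode Markovian jump linear system with a discrete delay and tests whether x(t)'x(t) stays below c2 on [0, T] when x(0)'x(0) = c1. The mode matrices, jump rates, delay, and bounds are illustrative assumptions.

```python
# Euler simulation of x'(t) = A[r(t)] x(t) + Ad[r(t)] x(t - tau), where r(t)
# is a two-state Markov jump process; checks the finite-time bound with R = I.
import numpy as np

rng = np.random.default_rng(0)
A  = [np.array([[-2.0, 0.3], [0.1, -1.5]]), np.array([[-1.0, 0.5], [0.2, -2.5]])]
Ad = [np.array([[0.1, 0.0], [0.0, 0.1]]),   np.array([[0.2, 0.1], [0.0, 0.1]])]
rates = np.array([[-3.0, 3.0], [2.0, -2.0]])   # generator of the jump process
tau, dt, T = 0.1, 1e-3, 2.0
c1, c2 = 1.0, 5.0
delay_steps = int(tau / dt)

def trajectory_bounded():
    x0 = rng.normal(size=2)
    x0 = x0 / np.linalg.norm(x0) * np.sqrt(c1)          # x0' x0 = c1
    hist = [x0.copy() for _ in range(delay_steps + 1)]  # constant initial history
    mode = 0
    for _ in range(int(T / dt)):
        x, xd = hist[-1], hist[-1 - delay_steps]
        hist.append(x + dt * (A[mode] @ x + Ad[mode] @ xd))
        if rng.random() < -rates[mode, mode] * dt:      # mode switch
            mode = 1 - mode
        if hist[-1] @ hist[-1] >= c2:
            return False
    return True

print(np.mean([trajectory_bounded() for _ in range(200)]))  # fraction bounded
```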


1997 ◽  
Vol 29 (01) ◽  
pp. 92-113 ◽  
Author(s):  
Frank Ball ◽  
Sue Davies

The gating mechanism of a single ion channel is usually modelled by a continuous-time Markov chain with a finite state space. The state space is partitioned into two classes, termed ‘open’ and ‘closed’, and it is possible to observe only which class the process is in. In many experiments channel openings occur in bursts. This can be modelled by partitioning the closed states further into ‘short-lived’ and ‘long-lived’ closed states, and defining a burst of openings to be a succession of open sojourns separated by closed sojourns that are entirely within the short-lived closed states. There is also evidence that bursts of openings are themselves grouped together into clusters. This clustering of bursts can be described by the ratio of the variance Var(N(t)) to the mean E[N(t)] of the number of bursts of openings commencing in (0, t]. In this paper two methods of determining Var(N(t))/E[N(t)] and lim_{t→∞} Var(N(t))/E[N(t)] are developed, the first via an embedded Markov renewal process and the second via an augmented continuous-time Markov chain. The theory is illustrated by a numerical study of a molecular stochastic model of the nicotinic acetylcholine receptor. Extensions to semi-Markov models of ion channel gating and the incorporation of time interval omission are briefly discussed.
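A brute-force Monte Carlo estimate of the same ratio (not the paper's Markov renewal or augmented-chain methods) can be obtained by simulating a toy gating model with one open state (O), one short-lived closed state (Cs), and one long-lived closed state (Cl), where a new burst commences at each Cl → O transition. All rates below are illustrative assumptions.

```python
# Gillespie simulation of a three-state gating model; N(t) counts the bursts
# of openings commencing in (0, t], and the variance-to-mean ratio is
# estimated over independent replicates.
import numpy as np

rng = np.random.default_rng(1)
rates = {("O", "Cs"): 5.0, ("O", "Cl"): 1.0,     # made-up transition rates
         ("Cs", "O"): 50.0, ("Cl", "O"): 0.5}

def count_bursts(t_end):
    state, t, bursts = "Cl", 0.0, 0
    while True:
        moves = [(dest, r) for (src, dest), r in rates.items() if src == state]
        total = sum(r for _, r in moves)
        t += rng.exponential(1.0 / total)
        if t > t_end:
            return bursts
        dests, probs = zip(*moves)
        new_state = rng.choice(dests, p=np.array(probs) / total)
        if state == "Cl" and new_state == "O":
            bursts += 1                          # a new burst of openings starts
        state = new_state

samples = np.array([count_bursts(100.0) for _ in range(2000)])
print(samples.var() / samples.mean())            # estimate of Var(N(t)) / E[N(t)]
```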


Complexity ◽  
2020 ◽  
Vol 2020 ◽  
pp. 1-12
Author(s):  
Yan Qi ◽  
Shiyu Zhong ◽  
Zhiguo Yan

In this paper, the design of finite-time H2/H∞ controllers for linear Itô stochastic Poisson systems is considered. First, the definition of finite-time H2/H∞ control is proposed, which considers the transient performance, the H2 index, and the H∞ index simultaneously over a predetermined finite time interval. Then, the state feedback and observer-based finite-time H2/H∞ controllers are presented and some new sufficient conditions are obtained. Moreover, an algorithm is given to optimize the H2 and H∞ indices simultaneously. Finally, a simulation example indicates the effectiveness of the results.
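The sketch below does not reproduce the paper's controller synthesis; it is only a hedged Euler-Maruyama simulation of a linear Itô system with Poisson jumps under an assumed state-feedback gain K, checking empirically whether sample paths remain finite-time bounded on [0, T]. All matrices, rates, and bounds are illustrative assumptions.

```python
# Euler-Maruyama scheme for dx = (A + B K) x dt + C x dw(t) + D x dN(t),
# where w is a scalar Wiener process and N a Poisson process with rate lam.
import numpy as np

rng = np.random.default_rng(2)
A = np.array([[0.2, 1.0], [0.0, -0.5]])
B = np.array([[0.0], [1.0]])
C = 0.1 * np.eye(2)           # diffusion coefficient
D = 0.05 * np.eye(2)          # jump coefficient
K = np.array([[-2.0, -3.0]])  # assumed stabilising feedback gain
lam, dt, T, c2 = 1.0, 1e-3, 1.0, 10.0

def bounded_run(x0):
    x = x0.copy()
    for _ in range(int(T / dt)):
        dw = rng.normal(scale=np.sqrt(dt))     # scalar Brownian increment
        dN = rng.poisson(lam * dt)             # Poisson increment (0 or 1 mostly)
        x = x + dt * (A + B @ K) @ x + dw * (C @ x) + dN * (D @ x)
        if x @ x > c2:                         # finite-time bound with R = I
            return False
    return True

runs = [bounded_run(rng.normal(scale=0.3, size=2)) for _ in range(200)]
print(np.mean(runs))           # fraction of sample paths staying in the bound
```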


2014 ◽  
Vol 51 (1) ◽  
pp. 262-281
Author(s):  
Samuel N. Cohen

We consider backward stochastic differential equations in a setting where noise is generated by a countable state, continuous time Markov chain, and the terminal value is prescribed at a stopping time. We show that, given sufficient integrability of the stopping time and a growth bound on the terminal value and BSDE driver, these equations admit unique solutions satisfying the same growth bound (up to multiplication by a constant). This holds without assuming that the driver is monotone in y, that is, our results do not require that the terminal value be discounted at some uniform rate. We show that the conditions are satisfied for hitting times of states of the chain, and hence present some novel applications of the theory of these BSDEs.
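In a much simpler setting than the paper's (finite state space, deterministic horizon T, driver independent of Z), a chain-driven BSDE admits the Markovian reduction Y_t = u(t, X_t), where u solves a coupled terminal-value ODE system. The sketch below integrates that system backwards; the generator Q, driver f, and terminal value g are made-up examples, not from the paper.

```python
# Backward Euler integration of  du_i/dt + sum_j Q_ij u_j(t) + f(t, i, u_i) = 0,
# u_i(T) = g(i), the Markovian reduction of a BSDE driven by a finite-state
# continuous-time Markov chain with generator Q.
import numpy as np

Q = np.array([[-1.0, 1.0, 0.0],
              [0.5, -1.5, 1.0],
              [0.0, 2.0, -2.0]])          # generator of a 3-state chain
g = np.array([0.0, 1.0, 2.0])             # terminal value g(i)

def f(t, i, y):
    return -0.1 * y + np.sin(t)           # example driver, Lipschitz in y

T, n_steps = 1.0, 10_000
dt = T / n_steps
u = g.copy()
for k in range(n_steps):                  # integrate backwards from T to 0
    t = T - k * dt
    u = u + dt * (Q @ u + np.array([f(t, i, u[i]) for i in range(3)]))
print(u)                                  # Y_0 = u(0, X_0) for each start state
```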


2015 ◽  
Vol 2015 ◽  
pp. 1-12 ◽  
Author(s):  
Alexander N. Dudin ◽  
Olga S. Dudina

A multiserver queueing system, the dynamics of which depends on the state of some external continuous-time Markov chain (random environment, RE), is considered. A change of the state of the RE may cause variation of the parameters of the arrival process, the service process, the number of available servers, and the available buffer capacity, as well as the behavior of customers. The evolution of the system states is described by a multidimensional continuous-time Markov chain. The generator of this Markov chain is derived. The ergodicity condition is presented. Expressions for the key performance measures are given. Numerical results illustrating the behavior of the system and demonstrating the possibility of formulating and solving optimization problems are provided. The importance of accounting for correlation in the arrival process is numerically illustrated.
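A stripped-down version of this construction (far simpler than the paper's model) is sketched below: the generator of a Markov chain for an M/M/c queue with a finite buffer whose arrival rate, service rate, and server count depend on a two-state random environment, followed by a direct computation of the stationary distribution. The rates, server counts, and buffer size are illustrative assumptions.

```python
# Build the generator of the joint (environment, queue length) chain and
# solve pi Q = 0 with pi summing to 1 for the stationary distribution.
import numpy as np

lam = [1.0, 4.0]            # arrival rate in RE state 0 / 1
mu = [1.5, 1.5]             # per-server service rate
servers = [2, 3]            # available servers in each RE state
N = 20                      # buffer size (max customers in system)
env_rates = np.array([[-0.1, 0.1], [0.2, -0.2]])   # RE generator

def index(env, n):
    return env * (N + 1) + n

dim = 2 * (N + 1)
Q = np.zeros((dim, dim))
for env in range(2):
    for n in range(N + 1):
        i = index(env, n)
        if n < N:                                   # arrival
            Q[i, index(env, n + 1)] += lam[env]
        if n > 0:                                   # service completion
            Q[i, index(env, n - 1)] += mu[env] * min(n, servers[env])
        Q[i, index(1 - env, n)] += env_rates[env, 1 - env]   # RE switch
        Q[i, i] = -Q[i].sum()

A = np.vstack([Q.T, np.ones(dim)])                  # pi Q = 0, sum(pi) = 1
b = np.zeros(dim + 1)
b[-1] = 1.0
pi, *_ = np.linalg.lstsq(A, b, rcond=None)
print(pi.reshape(2, N + 1).sum(axis=0))             # queue-length distribution
```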


1981 ◽  
Vol 103 (4) ◽  
pp. 417-419 ◽  
Author(s):  
Bernard Friedland

The continuous-time Kalman filtering problem over a finite time interval can be made equivalent to a discrete-time filtering problem. The matrices in the latter are related to the submatrices of the transition matrix of a Hamiltonian system that corresponds to the continuous-time filtering problem.
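In the same spirit (though not necessarily in the paper's exact formulation), the filter Riccati equation dP/dt = AP + PA' + Q − PC'R⁻¹CP can be propagated over a step of length h using the submatrices of the matrix exponential of the associated Hamiltonian matrix, which yields a discrete-time recursion for the error covariance. The matrices A, C, Q, R below are illustrative assumptions.

```python
# Propagate the continuous-time filter Riccati equation through expm of the
# Hamiltonian matrix M = [[-A', C' R^{-1} C], [Q, A]]: with P = Y X^{-1} and
# [X; Y]' = M [X; Y], one step gives P(h) = (F21 + F22 P)(F11 + F12 P)^{-1}.
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0], [-2.0, -0.3]])
C = np.array([[1.0, 0.0]])
Qn = 0.1 * np.eye(2)                       # process noise intensity
Rn = np.array([[0.05]])                    # measurement noise intensity
S = C.T @ np.linalg.inv(Rn) @ C

M = np.block([[-A.T, S], [Qn, A]])         # Hamiltonian matrix

def propagate(P, h):
    """Advance the Riccati solution by h via submatrices of expm(M*h)."""
    Phi = expm(M * h)
    n = A.shape[0]
    F11, F12 = Phi[:n, :n], Phi[:n, n:]
    F21, F22 = Phi[n:, :n], Phi[n:, n:]
    return (F21 + F22 @ P) @ np.linalg.inv(F11 + F12 @ P)

P = np.zeros((2, 2))                       # initial error covariance
for _ in range(100):                       # covariance after 100 steps of 0.01
    P = propagate(P, 0.01)
print(P)
```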


2019 ◽  
Vol 51 (4) ◽  
pp. 967-993
Author(s):  
Jorge I. González Cázares ◽  
Aleksandar Mijatović ◽  
Gerónimo Uribe Bravo

We exhibit an exact simulation algorithm for the supremum of a stable process over a finite time interval using dominated coupling from the past (DCFTP). We establish a novel perpetuity equation for the supremum (via the representation of the concave majorants of Lévy processes [27]) and use it to construct a Markov chain in the DCFTP algorithm. We prove that the number of steps taken backwards in time before the coalescence is detected is finite. We analyse the performance of the algorithm numerically (the code, written in Julia 1.0, is available on GitHub).

