A SUCCESSIVE LUMPING PROCEDURE FOR A CLASS OF MARKOV CHAINS

2012 ◽  
Vol 26 (4) ◽  
pp. 483-508 ◽  
Author(s):  
Michael N. Katehakis ◽  
Laurens C. Smit

A class of Markov chains we call successively lumpable is specified for which it is shown that the stationary probabilities can be obtained by successively computing the stationary probabilities of a propitiously constructed sequence of Markov chains. Each of the latter chains has a (typically much) smaller state space, and this yields significant computational improvements. We discuss how the results for discrete time Markov chains extend to semi-Markov processes and continuous time Markov processes. Finally, we study applications of successively lumpable Markov chains to classical reliability and queueing models.
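
As a point of reference, here is a minimal sketch of classical (ordinary) lumpability for a small invented chain, written in Python with NumPy: given a partition with respect to which the chain is lumpable, the stationary distribution of the lumped chain agrees with the block-aggregated stationary distribution of the full chain. The matrix P, the partition, and the helper stationary() are illustrative assumptions; the successively lumpable construction of the paper is a related but distinct notion that builds a whole sequence of smaller chains.

```python
# Sketch of classical (ordinary) lumpability; the matrix and partition are
# invented for illustration and are NOT taken from the paper.
import numpy as np

P = np.array([
    [0.50, 0.20, 0.20, 0.10],
    [0.30, 0.40, 0.10, 0.20],
    [0.20, 0.10, 0.40, 0.30],
    [0.15, 0.15, 0.35, 0.35],
])
blocks = [[0, 1], [2, 3]]            # partition of the state space

def stationary(T):
    """Stationary distribution as the normalized left Perron eigenvector of T."""
    w, v = np.linalg.eig(T.T)
    pi = np.abs(v[:, np.argmax(w.real)].real)
    return pi / pi.sum()

# Lumped transition matrix: block row sums (equal within each block because
# the chain is lumpable with respect to this partition).
P_lumped = np.array([[P[B[0], C].sum() for C in blocks] for B in blocks])

pi_full = stationary(P)
pi_lumped = stationary(P_lumped)
print(pi_lumped, [pi_full[B].sum() for B in blocks])   # the two should agree
```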

2003 ◽  
Vol 40 (2) ◽  
pp. 361-375 ◽  
Author(s):  
A. Irle

We consider the following ordering for stochastic processes as introduced by Irle and Gani (2001). A process (Y_t)_t is said to be slower in level crossing than a process (Z_t)_t if it takes (Y_t)_t stochastically longer than (Z_t)_t to exceed any given level. In Irle and Gani (2001), this ordering was investigated for Markov chains in discrete time. Here these results are carried over to semi-Markov processes with particular attention to birth-and-death processes and also to Wiener processes.
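
A minimal Monte Carlo sketch of this ordering, assuming two invented reflected birth-and-death chains on {0, 1, 2, ...} (not taken from the paper): the chain with the smaller up-probability should take stochastically longer to exceed any level, so its empirical first-passage tail P(T > n) should dominate the other's.

```python
# Empirical comparison of first-passage times above a level for two simple
# reflected random walks; the chains and parameters are invented.
import numpy as np

def first_passage_times(p_up, level, n_paths=5000, seed=1):
    """First time a chain started at 0 reaches `level` (reflection at 0)."""
    rng = np.random.default_rng(seed)
    times = np.empty(n_paths, dtype=int)
    for i in range(n_paths):
        state, t = 0, 0
        while state < level:
            state = max(state + (1 if rng.random() < p_up else -1), 0)
            t += 1
        times[i] = t
    return times

level = 5
T_Y = first_passage_times(0.4, level)   # "slower" chain
T_Z = first_passage_times(0.6, level)   # "faster" chain
for n in (10, 25, 50, 100):
    print(n, (T_Y > n).mean(), (T_Z > n).mean())   # P(T_Y > n) should dominate
```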


Mathematics ◽  
2021 ◽  
Vol 9 (13) ◽  
pp. 1496
Author(s):  
Manuel L. Esquível ◽  
Nadezhda P. Krasii ◽  
Gracinda R. Guerreiro

We address the problem of finding a natural continuous time Markov-type process, in open populations, that best captures the information provided by an open Markov chain in discrete time, which is usually the sole observation available from data. Given the open discrete time Markov chain, we single out two main approaches. In the first one, we consider a calibration procedure for a continuous time Markov process using the transition matrix of a discrete time Markov chain, and we show that, when the discrete time transition matrix is embeddable in a continuous time one, the calibration problem has optimal solutions. In the second approach, we consider semi-Markov processes and open Markov schemes, and we propose a direct extension from the discrete time theory to the continuous time one by using a known structural representation result for semi-Markov processes that decomposes the process as a sum of terms, each given by the product of a random variable of a discrete time Markov chain and a time function built from an adequate increasing sequence of stopping times.
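
For the first (embeddable) case, a minimal sketch of the calibration step under the simplifying assumption of a unit observation interval: take the matrix logarithm of the observed transition matrix as a candidate generator and check that it is a valid intensity matrix reproducing the data. The matrix P is invented, and the paper's calibration procedure covers the general, not necessarily embeddable, situation.

```python
# Embeddability check via the matrix logarithm (textbook case only); the
# transition matrix P below is invented for illustration.
import numpy as np
from scipy.linalg import expm, logm

P = np.array([
    [0.90, 0.08, 0.02],
    [0.05, 0.90, 0.05],
    [0.02, 0.08, 0.90],
])

Q = logm(P).real                      # candidate generator for a unit time step

off_diag_ok = np.all(Q - np.diag(np.diag(Q)) >= -1e-10)   # off-diagonals >= 0
rows_ok = np.allclose(Q.sum(axis=1), 0.0)                  # zero row sums
reproduces_P = np.allclose(expm(Q), P)

print("valid generator:", off_diag_ok and rows_ok, "| expm(Q) == P:", reproduces_P)
```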


1967 ◽  
Vol 4 (1) ◽  
pp. 192-196 ◽  
Author(s):  
J. N. Darroch ◽  
E. Seneta

In a recent paper, the authors have discussed the concept of quasi-stationary distributions for absorbing Markov chains having a finite state space, with the further restriction of discrete time. The purpose of the present note is to summarize the analogous results when the time parameter is continuous.
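
For a finite absorbing chain in continuous time, the quasi-stationary distribution can be computed as the normalized left eigenvector of the generator restricted to the transient states, associated with the eigenvalue of maximal real part. A minimal sketch, using an invented generator with state 0 absorbing:

```python
# Quasi-stationary distribution of a small absorbing chain in continuous
# time; the generator Q is invented for illustration.
import numpy as np

Q = np.array([
    [0.0,  0.0,  0.0,  0.0],   # state 0 is absorbing
    [1.0, -3.0,  2.0,  0.0],
    [0.0,  1.5, -3.5,  2.0],
    [0.5,  0.0,  2.5, -3.0],
])
Q_TT = Q[1:, 1:]                 # generator restricted to the transient states

w, v = np.linalg.eig(Q_TT.T)     # left eigenvectors of Q_TT
k = np.argmax(w.real)            # eigenvalue with maximal real part
qsd = np.abs(v[:, k].real)
qsd /= qsd.sum()

print("decay rate:", -w[k].real)
print("quasi-stationary distribution on the transient states:", qsd)
```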


2007 ◽  
Vol 39 (2) ◽ 
pp. 360-384 ◽  
Author(s):  
Uğur Tuncay Alparslan ◽  
Gennady Samorodnitsky

We study the ruin probability where the claim sizes are modeled by a stationary ergodic symmetric α-stable process. We exploit the flow representation of such processes, and we consider the processes generated by conservative flows. We focus on two classes of conservative α-stable processes (one discrete-time and one continuous-time), and give results for the order of magnitude of the ruin probability as the initial capital goes to infinity. We also prove a solidarity property for null-recurrent Markov chains as an auxiliary result, which might be of independent interest.
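
As a crude illustration of the ruin functional only, the sketch below estimates a finite-horizon ruin probability with i.i.d. symmetric α-stable claims; the paper instead treats stationary ergodic SαS claim processes generated by conservative flows and derives the asymptotic order analytically. All parameters here are invented.

```python
# Monte Carlo estimate of a finite-horizon ruin probability with i.i.d.
# symmetric alpha-stable claims (a simplification of the paper's setting).
import numpy as np
from scipy.stats import levy_stable

def ruin_probability(u, c=1.0, alpha=1.5, horizon=200, n_paths=1000, seed=0):
    """Estimate P(min_n (u + c*n - S_n) < 0) over n = 1, ..., horizon."""
    claims = levy_stable.rvs(alpha, 0.0, size=(n_paths, horizon), random_state=seed)
    surplus = u + c * np.arange(1, horizon + 1) - np.cumsum(claims, axis=1)
    return (surplus.min(axis=1) < 0.0).mean()

for u in (1.0, 10.0, 100.0):     # ruin becomes rarer as initial capital grows
    print(u, ruin_probability(u))
```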


2008 ◽  
Vol 28 (2) ◽  
pp. 355-375 ◽  
Author(s):  
Márcio das Chagas Moura ◽  
Enrique López Droguett

In this work, a model is proposed for the assessment of the availability of fault-tolerant systems based on the integration of continuous time semi-Markov processes and Bayesian belief networks. This integration results in a hybrid stochastic model that is able to represent the dynamic characteristics of a system as well as to deal with cause-effect relationships among external factors such as environmental and operational conditions. The hybrid model also allows for uncertainty propagation on the system availability. A numerical procedure is also proposed for the solution of the state probability equations of semi-Markov processes described in terms of transition rates. The numerical procedure is based on the application of Laplace transforms, which are inverted by Gauss-Legendre quadrature. The hybrid model and numerical procedure are illustrated by means of an application example in the context of fault-tolerant systems.
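
A rough sketch of the numerical ingredient named above: Gauss-Legendre quadrature applied to a truncated Bromwich inversion integral, tested on a transform pair with a known inverse. This only illustrates the quadrature-based inversion idea; the procedure used in the paper for the semi-Markov state probability equations will differ in its details.

```python
# Truncated Bromwich inversion evaluated with Gauss-Legendre quadrature.
# Accuracy is only rough and is governed by sigma, the truncation T and the
# number of nodes n; the example transform pair is F(s) = 1/(s + 1), whose
# inverse is f(t) = exp(-t).
import numpy as np

def bromwich_gauss_legendre(F, t, sigma=1.0, T=120.0, n=1200):
    """Approximate f(t) = (1/(2*pi)) * int_{-T}^{T} exp((sigma+i*w)*t) F(sigma+i*w) dw."""
    x, w = np.polynomial.legendre.leggauss(n)    # nodes and weights on [-1, 1]
    omega = T * x                                # map nodes to [-T, T]
    vals = np.exp((sigma + 1j * omega) * t) * F(sigma + 1j * omega)
    return (T / (2.0 * np.pi)) * np.real(np.sum(w * vals))

F = lambda s: 1.0 / (s + 1.0)
for t in (0.5, 1.0, 2.0):
    print(t, bromwich_gauss_legendre(F, t), np.exp(-t))
```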

