Some distribution and moment formulae for the Markov renewal process

1970 ◽  
Vol 68 (1) ◽  
pp. 159-166 ◽  
Author(s):  
A. M. Kshirsagar ◽  
R. Wysocki

1. Introduction. A Markov Renewal Process (MRP) with m (< ∞) states is one which records, at each time t, the number of times a system visits each of the m states up to time t, if the system moves from state to state according to a Markov chain with transition probability matrix P0 = [pij] and if the time required for each successive move is a random variable whose distribution function (d.f.) depends on the two states between which the move is made. Thus, if the system moves from state i to state j, the holding time in state i has Fij(x) as its d.f. (i, j = 1, 2, …, m).
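As an illustration (not part of the original paper), the construction described above can be sketched by simulation: an embedded chain P0 chooses successive states, and each move draws a holding time whose law depends on the pair of states. The transition matrix, the exponential holding-time rates, and the horizon below are all assumed example values.

```python
import random

def simulate_mrp(p0, sample_holding, t_max, start=0, seed=42):
    """Simulate a Markov renewal process with embedded transition matrix p0.

    p0[i][j] is the probability of moving from state i to state j;
    sample_holding(i, j) draws a holding time from F_ij.  Returns the
    number of visits to each state up to time t_max.
    """
    rng = random.Random(seed)
    m = len(p0)
    counts = [0] * m
    state, t = start, 0.0
    counts[state] += 1
    while True:
        # choose the next state according to the embedded Markov chain
        nxt = rng.choices(range(m), weights=p0[state])[0]
        # the holding time depends on both endpoints (i, j) of the move
        t += sample_holding(state, nxt)
        if t > t_max:
            break
        state = nxt
        counts[state] += 1
    return counts

# Example (assumed parameters): two states, exponential holding times
# whose rate depends on the pair of states involved in the move.
p0 = [[0.2, 0.8], [0.6, 0.4]]
rates = [[1.0, 2.0], [0.5, 1.5]]
hold_rng = random.Random(7)
counts = simulate_mrp(p0, lambda i, j: hold_rng.expovariate(rates[i][j]), t_max=100.0)
```

The vector `counts` is one realization of the visit-count record the abstract refers to.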

1991 ◽  
Vol 4 (4) ◽  
pp. 293-303
Author(s):  
P. Todorovic

Let {ξn} be a non-decreasing stochastically monotone Markov chain whose transition probability Q(·,·) has Q(x,{x}) = β(x) > 0 for some function β(·) that is non-decreasing with β(x) ↑ 1 as x → +∞, and each Q(x,·) is non-atomic otherwise. A typical realization of {ξn} is a Markov renewal process {(Xn, Tn)}, where ξj = Xn for Tn consecutive values of j, with Tn geometric on {1, 2, …} with parameter β(Xn). Conditions are given for Xn to be relatively stable and for Tn to be weakly convergent.


2004 ◽  
Vol 36 (4) ◽  
pp. 1198-1211 ◽  
Author(s):  
James Ledoux

Let (φ(Xn))n be a function of a finite-state Markov chain (Xn)n. In this article, we investigate the conditions under which the random variables φ(Xn) have the same distribution as Yn (for every n), where (Yn)n is a Markov chain with fixed transition probability matrix. In other words, for a deterministic function φ, we investigate the conditions under which (Xn)n is weakly lumpable for the state vector. We show that the set of all probability distributions of X0 such that (Xn)n is weakly lumpable for the state vector can be finitely generated. The connections between our definition of lumpability and the usual one (i.e. the proportional dynamics property) are discussed.
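For context (not from the paper, which treats the subtler notion of weak lumpability): the classical sufficient condition is strong lumpability, which requires that, within each block of states identified by φ, every row has the same total probability of jumping into each block. That simpler condition can be checked directly; the matrices below are assumed examples.

```python
def is_strongly_lumpable(P, blocks):
    """Check strong lumpability of transition matrix P with respect to a
    partition `blocks` of the state space: within each block, all rows must
    assign the same total probability to every block."""
    for block in blocks:
        for target in blocks:
            # the block-to-block mass must not depend on which row of the
            # block we start from (rounded to absorb float noise)
            sums = {round(sum(P[i][j] for j in target), 12) for i in block}
            if len(sums) > 1:
                return False
    return True

# Assumed example: lumping states 1 and 2 of a 3-state chain together.
P = [[0.50, 0.25, 0.25],
     [0.25, 0.50, 0.25],
     [0.25, 0.25, 0.50]]
ok = is_strongly_lumpable(P, [[0], [1, 2]])
```

Here `ok` is True: rows 1 and 2 each send mass 0.25 to block {0} and 0.75 to block {1, 2}, so the aggregated process is Markov for every initial distribution, whereas weak lumpability (the paper's subject) holds only for particular initial distributions.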


Mathematics ◽  
2020 ◽  
Vol 9 (1) ◽  
pp. 55
Author(s):  
P.-C.G. Vassiliou

For a G-inhomogeneous semi-Markov chain and G-inhomogeneous Markov renewal processes, we study the change from the real probability measure to a forward probability measure. We find the values of risky bonds using the forward probabilities that the bond will not default up to maturity time for both processes. It is established in the form of a theorem that the forward probability measure does not alter the semi-Markov structure. In addition, the foundations of a G-inhomogeneous Markov renewal process are laid, and a theorem is provided in which it is proved that the Markov renewal process is maintained under the forward probability measure. We show that for an inhomogeneous semi-Markov chain there are martingales that characterize it, and that the same is true for a Markov renewal process. We discuss in depth the calibration of the G-inhomogeneous semi-Markov chain model and propose an algorithm for it. We conclude with an application to risky bonds.


1987 ◽  
Vol 19 (03) ◽  
pp. 739-742 ◽  
Author(s):  
J. D. Biggins

If (non-overlapping) repeats of specified sequences of states in a Markov chain are considered, the result is a Markov renewal process. Formulae somewhat simpler than those given in Biggins and Cannings (1987) are derived which can be used to obtain the transition matrix and conditional mean sojourn times in this process.


1996 ◽  
Vol 33 (03) ◽  
pp. 623-629 ◽  
Author(s):  
Y. Quennel Zhao ◽  
Danielle Liu

Computationally, when we solve for the stationary probabilities of a countable-state Markov chain, the transition probability matrix of the Markov chain has to be truncated, in some way, into a finite matrix. Different augmentation methods may be valid in the sense that the stationary probability distribution of the truncated Markov chain approaches that of the countable Markov chain as the truncation size gets large. In this paper, we prove that the censored (watched) Markov chain provides the best approximation in the sense that, for a given truncation size, the sum of errors is the minimum, and we show, by examples, that the method of augmenting the last column only is not always the best.
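As an illustration of the truncation-and-augmentation setup (not the paper's censored-chain construction, which is omitted here), the sketch below truncates a birth-death chain with known geometric stationary law and compares two augmentations of the defective last row. The chain parameters and truncation size are assumed example values.

```python
def stationary(P, iters=5000):
    """Stationary distribution of a finite stochastic matrix by power iteration."""
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi

def truncated_birth_death(p, q, n, augment):
    """Truncate a birth-death chain (up-probability p, down-probability q)
    to states 0..n-1, re-assigning the escaping mass p from the last row."""
    P = [[0.0] * n for _ in range(n)]
    for i in range(n):
        if i > 0:
            P[i][i - 1] = q
        if i < n - 1:
            P[i][i + 1] = p
        P[i][i] = 1.0 - sum(P[i])        # keep each row stochastic
    if augment == "first":
        # move the escaping mass to the first column instead of the diagonal
        P[n - 1][n - 1] -= p
        P[n - 1][0] += p
    return P

# Assumed example: p/q = 0.6, so the infinite chain has a geometric
# stationary law pi_k = (1 - rho) * rho**k with rho = p/q.
p, q, n = 0.3, 0.5, 12
rho = p / q
exact = [(1 - rho) * rho ** k for k in range(n)]
errors = {}
for method in ("last", "first"):
    pi = stationary(truncated_birth_death(p, q, n, method))
    errors[method] = sum(abs(a - b) for a, b in zip(pi, exact))
```

Both augmentations keep the truncated matrix stochastic, and their l1 errors against the infinite-chain law differ, which is the phenomenon the paper quantifies when proving the censored chain minimizes the sum of errors.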


1972 ◽  
Vol 13 (4) ◽  
pp. 417-422 ◽  
Author(s):  
A. M. Kshirsagar ◽  
Y. P. Gupta

Abstract. The Laplace–Stieltjes transform m(s) of the matrix renewal function M(t) of a Markov renewal process is expanded in powers of the argument s in this paper, using a generalized inverse of the matrix I − P0, where P0 is the transition probability matrix of the imbedded Markov chain. This helps in obtaining the moments of any order of the number of renewals, as well as the moments of the first passage times, for large values of the time t. All the results of renewal theory are hidden under the Laplacian curtain; this expansion helps to lift the curtain, at least for large values of t, and is thus useful in predicting the number of renewals.


2018 ◽  
Vol 28 (5) ◽  
pp. 1552-1563 ◽  
Author(s):  
Tunny Sebastian ◽  
Visalakshi Jeyaseelan ◽  
Lakshmanan Jeyaseelan ◽  
Shalini Anandan ◽  
Sebastian George ◽  
...  

Hidden Markov models are stochastic models in which the observations are assumed to follow a mixture distribution, but the parameters of the components are governed by a Markov chain which is unobservable. We explain the issues related to the estimation of Poisson-hidden Markov models, in which the observations come from a mixture of Poisson distributions and the parameters of the component Poisson distributions are governed by an m-state Markov chain with an unknown transition probability matrix. These methods were applied to data on Vibrio cholerae counts reported every month over an 11-year span at Christian Medical College, Vellore, India. Using the Viterbi algorithm, the best estimate of the state sequence was obtained, and hence the transition probability matrix. The mean passage times between the states were estimated, and 95% confidence intervals for the mean passage times were obtained via Monte Carlo simulation. The three hidden states of the estimated Markov chain are labelled 'Low', 'Moderate' and 'High', with mean counts of 1.4, 6.6 and 20.2 and estimated average durations of stay of 3, 3 and 4 months, respectively. Environmental risk factors were studied using Markov ordinal logistic regression analysis. No significant association was found between disease severity levels and climate components.
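As an illustration (not the paper's fitted model), Viterbi decoding for a Poisson-HMM can be sketched in log space. The Poisson means echo the abstract's three states, but the transition matrix, initial distribution, and observation sequence below are assumed values.

```python
import math

def poisson_logpmf(k, lam):
    """Log-probability of observing count k under a Poisson(lam) law."""
    return k * math.log(lam) - lam - math.lgamma(k + 1)

def viterbi(obs, A, lams, init):
    """Most likely hidden-state sequence for a Poisson-HMM (log-space Viterbi)."""
    n_states = len(lams)
    # initialization with the first observation
    delta = [math.log(init[s]) + poisson_logpmf(obs[0], lams[s])
             for s in range(n_states)]
    back = []
    for k in obs[1:]:
        ptr, new = [], []
        for j in range(n_states):
            # best predecessor state for landing in state j now
            best = max(range(n_states), key=lambda i: delta[i] + math.log(A[i][j]))
            ptr.append(best)
            new.append(delta[best] + math.log(A[best][j]) + poisson_logpmf(k, lams[j]))
        delta = new
        back.append(ptr)
    # backtrack from the best final state
    path = [max(range(n_states), key=lambda s: delta[s])]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return path[::-1]

# Assumed parameters; lams echoes the 'Low'/'Moderate'/'High' mean counts.
A = [[0.70, 0.20, 0.10], [0.20, 0.60, 0.20], [0.10, 0.15, 0.75]]
lams = [1.4, 6.6, 20.2]
obs = [0, 2, 1, 7, 5, 22, 19, 21, 3]
states = viterbi(obs, A, lams, init=[1 / 3, 1 / 3, 1 / 3])
```

Low counts decode to state 0 and counts near 20 to state 2; run lengths of the decoded sequence are what the paper summarizes as average durations of stay.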


2019 ◽  
Vol 1 (2) ◽  
pp. 5-10
Author(s):  
Muhammad Azka

The problem addressed in this research concerns the number of rainy days per month in Balikpapan city, modelled as a discrete-time Markov chain. The purpose is to find the probability of the rainy-day frequency level in the next month given the frequency level in the prior month. The method applied classifies the monthly number of rainy days into three frequency levels: high, medium, and low. If the number of rainy days in a month is less than 11, the frequency level for that month is classified as low; if it is between 11 and 20, it is classified as medium; and if it is more than 20, it is classified as high. The result is a discrete-time Markov chain represented by its transition probability matrix and transition diagram.
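The classification-and-counting procedure above can be sketched as follows (the monthly counts used here are hypothetical, not the Balikpapan data):

```python
def classify(days):
    """Map a monthly rainy-day count to a level: 0 = low (< 11),
    1 = medium (11-20), 2 = high (> 20)."""
    if days < 11:
        return 0
    if days <= 20:
        return 1
    return 2

def transition_matrix(monthly_counts):
    """Estimate the 3x3 transition probability matrix from consecutive
    months' frequency levels by normalizing observed transition tallies."""
    states = [classify(c) for c in monthly_counts]
    tallies = [[0] * 3 for _ in range(3)]
    for a, b in zip(states, states[1:]):
        tallies[a][b] += 1
    return [[t / sum(row) if sum(row) else 0.0 for t in row] for row in tallies]

# Hypothetical sequence of monthly rainy-day counts.
counts = [5, 12, 25, 8, 15, 22, 3]
T = transition_matrix(counts)
```

Each row of `T` gives the probability of next month's frequency level conditional on the current one, which is exactly the transition probability matrix the abstract describes.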

