Analysis of a counting process associated with a semi-Markov process: number of entries into a subset of state space

1987 ◽  
Vol 19 (4) ◽  
pp. 767-783 ◽  
Author(s):  
Yasushi Masuda ◽  
Ushio Sumita

Let N(t) be a finite semi-Markov process on 𝒩 and let X(t) be the associated age process. Of interest is the counting process M(t) for transitions of the semi-Markov process from a subset G of 𝒩 to another subset B, where 𝒩 = B ∪ G and B ∩ G = ∅. By studying the trivariate process Y(t) = [N(t), M(t), X(t)] in its state space, new transform results are derived. By taking M(t) as a marginal process of Y(t), the Laplace transform generating function of M(t) is then obtained. Furthermore, this result is recaptured in the context of first-passage times of the semi-Markov process, providing a simple probabilistic interpretation. The asymptotic behavior of the moments of M(t) as t → ∞ is also discussed. In particular, an asymptotic expansion for E[M(t)] and the limit of Var[M(t)]/t as t → ∞ are given explicitly.
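
The objects in this abstract are easy to prototype by direct simulation. The sketch below is a minimal Python illustration with a hypothetical four-state kernel and exponential sojourn times chosen only for brevity: it simulates N(t), counts the G → B transitions to obtain M(t), and averages over runs to estimate E[M(t)]. It illustrates the counting process itself, not the authors' transform analysis.

```python
import random

# Hypothetical 4-state semi-Markov process on N = {0, 1, 2, 3},
# with G = {0, 1} and B = {2, 3}.  P is the embedded jump chain;
# sojourn(i) samples the holding time in state i.
P = {
    0: [(1, 0.6), (2, 0.4)],
    1: [(0, 0.3), (3, 0.7)],
    2: [(0, 0.5), (3, 0.5)],
    3: [(1, 0.8), (2, 0.2)],
}
G, B = {0, 1}, {2, 3}

def sojourn(i):
    # Exponential holding times keep the sketch short; a general semi-Markov
    # kernel would let the sojourn depend on the next state as well.
    return random.expovariate(1.0 + i)

def next_state(i):
    u, acc = random.random(), 0.0
    for j, p in P[i]:
        acc += p
        if u < acc:
            return j
    return P[i][-1][0]

def count_G_to_B(t_max, start=0):
    """Simulate N(t) up to t_max and return M(t_max), the number of G -> B jumps."""
    t, state, m = 0.0, start, 0
    while True:
        t += sojourn(state)
        if t > t_max:
            return m
        nxt = next_state(state)
        if state in G and nxt in B:
            m += 1
        state = nxt

samples = [count_G_to_B(50.0) for _ in range(2000)]
print("estimated E[M(50)]:", sum(samples) / len(samples))
```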



1993 ◽  
Vol 30 (3) ◽  
pp. 548-560 ◽  
Author(s):  
Yasushi Masuda

The main objective of this paper is to investigate the conditional behavior of the multivariate reward process given the number of certain signals where the underlying system is described by a semi-Markov process and the signal is defined by a counting process. To this end, we study the joint behavior of the multivariate reward process and the multivariate counting process in detail. We derive transform results as well as the corresponding real domain expressions, thus providing clear probabilistic interpretation.
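
A quick way to see the conditioning described here is by Monte Carlo. The sketch below uses a hypothetical two-state system (all states and rates are illustrative, not from the paper): it accumulates a reward at a state-dependent rate, counts entries into one designated "signal" state, and then reports the empirical mean reward given each observed signal count.

```python
import random
from collections import defaultdict

# Hypothetical 2-state system: state 0 earns reward at rate 1, state 1 at
# rate 3; every entry into state 1 is counted as a "signal".
RATE = {0: 1.0, 1: 3.0}

def simulate(t_max):
    t, state, reward, signals = 0.0, 0, 0.0, 0
    while t < t_max:
        hold = random.expovariate(2.0 if state == 0 else 1.0)
        hold = min(hold, t_max - t)          # truncate the last sojourn at t_max
        reward += RATE[state] * hold
        t += hold
        if t < t_max:
            state = 1 - state
            if state == 1:
                signals += 1
    return reward, signals

by_count = defaultdict(list)
for _ in range(5000):
    r, k = simulate(10.0)
    by_count[k].append(r)

for k in sorted(by_count):
    rs = by_count[k]
    print(f"signals={k:2d}  E[reward | count] ~ {sum(rs)/len(rs):.2f}  (n={len(rs)})")
```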



1989 ◽  
Vol 3 (1) ◽  
pp. 77-88 ◽  
Author(s):  
Joseph Abate ◽  
Ward Whitt

The distribution of upward first passage times in skip-free Markov chains can be expressed solely in terms of the eigenvalues in the spectral representation, without performing a separate calculation to determine the eigenvectors. We provide insight into this result and skip-free Markov chains more generally by showing that part of the spectral theory developed for birth-and-death processes extends to skip-free chains. We show that the eigenvalues and eigenvectors of skip-free chains can be characterized in terms of recursively defined polynomials. Moreover, the Laplace transform of the upward first passage time from 0 to n is the reciprocal of the nth polynomial. This simple relationship holds because the Laplace transforms of the first passage times satisfy the same recursion as the polynomials except for a normalization.
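
In the birth-and-death special case, the computation behind this statement is the classical nested recursion for the one-step upward passage times, which the abstract says the skip-free theory generalizes. The sketch below (Python, hypothetical rates) evaluates phi_i(s) = lambda_i / (s + lambda_i + mu_i − mu_i * phi_{i−1}(s)) and multiplies the one-step transforms to obtain the Laplace transform of the passage time from 0 to n; it is a numerical illustration of the birth-and-death case only.

```python
# Upward first passage time 0 -> n in a birth-and-death chain: Laplace
# transform via the standard nested recursion (hypothetical rates).
def upward_fpt_lt(s, birth, death):
    """E[exp(-s * T_{0,n})] for the birth-death chain with the given rates.

    birth[i] = lambda_i (i = 0..n-1), death[i] = mu_i, with mu_0 = 0.
    """
    lt = 1.0
    phi_prev = 0.0                 # unused at i = 0 because mu_0 = 0
    for lam, mu in zip(birth, death):
        phi = lam / (s + lam + mu - mu * phi_prev)   # transform of T_{i -> i+1}
        lt *= phi
        phi_prev = phi
    return lt

birth = [2.0, 1.5, 1.0, 0.8]       # lambda_0..lambda_3 (hypothetical)
death = [0.0, 0.7, 0.9, 1.1]       # mu_0..mu_3

print(upward_fpt_lt(0.0, birth, death))      # = 1: passage from 0 to 4 is certain
h = 1e-6
mean = -(upward_fpt_lt(h, birth, death) - 1.0) / h
print("E[T_{0,4}] ~", mean)                   # mean via the derivative at s = 0
```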



Mathematics ◽  
2020 ◽  
Vol 8 (11) ◽  
pp. 1988 ◽  
Author(s):  
Zbigniew Palmowski

In this paper, I analyze the distributional properties of the busy period in an on-off fluid queue and the first passage time in a fluid queue driven by a finite state Markov process. In particular, I show that the first passage time has an IFR distribution and the busy period in the Anick-Mitra-Sondhi model has a DFR distribution.
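
For reference, a standard example of a DFR distribution is a finite mixture of exponentials, whose hazard rate is decreasing; this is easy to verify numerically. The sketch below (Python, hypothetical mixture parameters) evaluates the hazard rate of a two-phase hyperexponential on a grid and checks its monotonicity. It only illustrates what DFR means; it is not the busy-period analysis of the Anick-Mitra-Sondhi model.

```python
import numpy as np

# Two-phase hyperexponential (mixture of exponentials), a standard DFR example.
p, l1, l2 = 0.3, 0.5, 4.0            # hypothetical mixing weight and rates

t = np.linspace(0.0, 10.0, 2001)
survival = p * np.exp(-l1 * t) + (1 - p) * np.exp(-l2 * t)
density  = p * l1 * np.exp(-l1 * t) + (1 - p) * l2 * np.exp(-l2 * t)
hazard   = density / survival

# DFR means the hazard rate is non-increasing.
print("hazard decreasing:", bool(np.all(np.diff(hazard) <= 1e-12)))
print("hazard at t = 0 and t = 10:", hazard[0], hazard[-1])
```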



Author(s):  
R. S. MacKay ◽  
J. D. Robinson

A Markov flow is a stationary measure, with the associated flows and mean first passage times, for a continuous-time regular jump homogeneous semi-Markov process on a discrete state space. Nodes in the state space can be eliminated to produce a smaller Markov flow which is a factor of the original one. Some improvements to the elimination methods of Wales are given. The main contribution of the paper is to present an alternative, namely a method to aggregate groups of nodes to produce a factor. The method can be iterated to make hierarchical aggregation schemes. The potential benefits are efficient computation, including recomputation to take into account local changes, and insights into the macroscopic behaviour. This article is part of the theme issue ‘Hilbert’s sixth problem’.
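
A single elimination step can be written down compactly in the Markov special case. The sketch below (Python, hypothetical generator) applies the standard censoring/Schur-complement update q'_{ij} = q_{ij} + q_{ik} q_{kj} / (−q_{kk}) to remove one node from a rate matrix, and checks that the stationary distribution of the reduced chain matches the original one restricted to the surviving nodes. This captures routing and stationary flow only; the paper's semi-Markov treatment additionally tracks the sojourn time accrued at eliminated nodes (needed for mean first passage times), and its aggregation method goes beyond this one-node step.

```python
import numpy as np

def eliminate_node(Q, k):
    """Remove state k from generator Q by redistributing its through-flow.

    Schur-complement step: q'_{ij} = q_{ij} + q_{ik} q_{kj} / (-q_{kk}).
    The result is the generator of the chain watched only while away from k.
    """
    keep = [i for i in range(Q.shape[0]) if i != k]
    return Q[np.ix_(keep, keep)] + np.outer(Q[keep, k], Q[k, keep]) / (-Q[k, k])

def stationary(Q):
    # Solve pi Q = 0 with sum(pi) = 1 by least squares.
    n = Q.shape[0]
    A = np.vstack([Q.T, np.ones(n)])
    b = np.concatenate([np.zeros(n), [1.0]])
    return np.linalg.lstsq(A, b, rcond=None)[0]

# Hypothetical 4-state generator (rows sum to zero).
Q = np.array([[-3.0,  1.0,  1.0,  1.0],
              [ 2.0, -5.0,  2.0,  1.0],
              [ 0.5,  0.5, -2.0,  1.0],
              [ 1.0,  2.0,  1.0, -4.0]])

Qr = eliminate_node(Q, 3)
print(Qr.sum(axis=1))                 # rows of the reduced generator still sum to 0

pi = stationary(Q)
print(pi[:3] / pi[:3].sum())          # original stationary law, restricted and renormalized
print(stationary(Qr))                 # stationary law of the reduced chain (should match)
```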



1997 ◽  
Vol 34 (1) ◽  
pp. 1-13 ◽  
Author(s):  
Haijun Li ◽  
Moshe Shaked

Using a matrix approach we discuss the first-passage time for a Markov process to exceed a given threshold, or for the maximal increment of the process to pass a certain critical value. Conditions under which this first-passage time possesses various ageing properties are studied. Some results previously obtained by Li and Shaked (1995) are extended.
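
In the discrete-time case, one common matrix formulation restricts the transition matrix to the states at or below the threshold, giving a sub-stochastic matrix Q with P(T > n) = α Q^n 1. The sketch below (Python, hypothetical chain) computes this survival function, the corresponding probability mass function, and an approximate mean. It is a generic illustration of the matrix approach, not the ageing-property results of the paper.

```python
import numpy as np

# Hypothetical discrete-time chain on {0, 1, 2, 3}; the threshold event is
# "reach state 3", so Q is the transition matrix restricted to {0, 1, 2}.
P = np.array([[0.6, 0.3, 0.1, 0.0],
              [0.2, 0.5, 0.2, 0.1],
              [0.1, 0.2, 0.4, 0.3],
              [0.0, 0.1, 0.3, 0.6]])
Q = P[:3, :3]                        # sub-stochastic block
alpha = np.array([1.0, 0.0, 0.0])    # start in state 0
ones = np.ones(3)

surv = [1.0]                         # P(T > 0) = 1 when starting below the threshold
Qn = np.eye(3)
for n in range(1, 20):
    Qn = Qn @ Q
    surv.append(alpha @ Qn @ ones)   # P(T > n) = alpha Q^n 1

pmf = -np.diff(surv)                 # P(T = n) = P(T > n-1) - P(T > n)
print("P(T = n), n = 1..19:", np.round(pmf, 4))
print("E[T] ~", sum(surv))           # sum_{n >= 0} P(T > n), truncated at n = 19
```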



1970 ◽  
Vol 7 (2) ◽  
pp. 388-399 ◽  
Author(s):  
C. K. Cheong

Our main concern in this paper is the convergence, as t → ∞, of certain quantities indexed by i, j ∈ E, where P_ij(t) is the transition probability of a semi-Markov process whose state space E is irreducible but not closed (i.e., escape from E is possible), and r_j is the probability of eventual escape from E conditional on the initial state. The theorems proved here generalize some results of Seneta and Vere-Jones ([8] and [11]) for Markov processes.


