On identifiability and order of continuous-time aggregated Markov chains, Markov-modulated Poisson processes, and phase-type distributions

1996 ◽  
Vol 33 (3) ◽  
pp. 640-653 ◽  
Author(s):  
Tobias Rydén

An aggregated Markov chain is a Markov chain for which some states cannot be distinguished from each other by the observer. In this paper we consider the identifiability problem for such processes in continuous time, i.e. the problem of determining whether two parameters induce identical laws for the observable process or not. We also study the order of a continuous-time aggregated Markov chain, which is the minimum number of states needed to represent it. In particular, we give a lower bound on the order. As a by-product, we obtain results of this kind also for Markov-modulated Poisson processes, i.e. doubly stochastic Poisson processes whose intensities are directed by continuous-time Markov chains, and phase-type distributions, which are hitting times in finite-state Markov chains.
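The notion of an aggregated chain can be made concrete with a small simulation. The sketch below is illustrative only and not taken from the paper: the generator Q, the three-state chain, and the two-class aggregation map are assumptions, and the standard Gillespie method is used to generate a path of which the observer sees only the class labels.

```python
import numpy as np

# Illustrative aggregated CTMC: the chain moves on states {0, 1, 2}, but the
# observer sees only a class label. States 0 and 1 are lumped into class 'A',
# state 2 is class 'B'. Q and the label map are assumptions for this sketch.
rng = np.random.default_rng(0)

Q = np.array([[-2.0, 1.0, 1.0],
              [0.5, -1.5, 1.0],
              [1.0, 1.0, -2.0]])   # transition rates; rows sum to zero
label = {0: 'A', 1: 'A', 2: 'B'}   # aggregation: observer sees only 'A'/'B'

def simulate_aggregated(Q, label, t_end, state=0):
    """Gillespie simulation; return the observable (label, holding-time) path."""
    t, path = 0.0, []
    while t < t_end:
        rate = -Q[state, state]
        hold = rng.exponential(1.0 / rate)
        path.append((label[state], hold))
        # jump probabilities proportional to the off-diagonal rates
        p = Q[state].clip(min=0.0)
        p /= p.sum()
        state = rng.choice(len(Q), p=p)
        t += hold
    return path

path = simulate_aggregated(Q, label, t_end=10.0)
```

The identifiability question of the abstract asks when two different generators and aggregation maps produce the same law for such an observable path.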



2021 ◽  
Vol 58 (4) ◽  
pp. 880-889 ◽  
Author(s):  
Qi-Ming He

We consider a class of phase-type distributions (PH-distributions), to be called the MMPP class of PH-distributions, and find bounds on their mean and squared coefficient of variation (SCV). As an application, we show that the SCV of the event-stationary inter-event time for Markov-modulated Poisson processes (MMPPs) is greater than or equal to unity, which answers an open problem for MMPPs. The results are useful for selecting proper PH-distributions and counting processes in stochastic modeling.
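The SCV bound can be checked numerically for a concrete example. The two-state MMPP below (modulating generator Q and intensities lam) is an assumption of this sketch, not taken from the paper; it uses the standard fact that the event-stationary inter-event time of an MMPP is phase-type with representation (phi, D0), where D0 = Q − diag(lam), D1 = diag(lam), and phi is proportional to pi D1 for the stationary distribution pi of the modulating chain.

```python
import numpy as np

# Illustrative two-state MMPP: generator Q of the modulating chain and
# Poisson intensities lam are assumptions for this sketch.
Q = np.array([[-1.0, 1.0],
              [2.0, -2.0]])
lam = np.array([1.0, 3.0])
D0 = Q - np.diag(lam)
D1 = np.diag(lam)

# Stationary distribution pi of the modulating chain: pi Q = 0, pi 1 = 1.
A = np.vstack([Q.T, np.ones(2)])
pi = np.linalg.lstsq(A, np.array([0.0, 0.0, 1.0]), rcond=None)[0]

# Event-stationary phase-type representation (phi, D0).
phi = pi @ D1
phi = phi / phi.sum()

# Moments of a PH-distribution: m_k = k! * phi (-D0)^{-k} 1.
M = np.linalg.inv(-D0)
m1 = phi @ M @ np.ones(2)              # first moment
m2 = 2.0 * phi @ M @ M @ np.ones(2)    # second moment
scv = m2 / m1**2 - 1.0
print(m1, scv)   # mean 0.6, SCV = 11/9 > 1, consistent with the bound
```

For this example the mean inter-event time equals 1/(pi · lam) = 0.6 as a sanity check, and the SCV indeed exceeds unity.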



1981 ◽  
Vol 18 (3) ◽  
pp. 747-751 ◽  
Author(s):  
Stig I. Rosenlund

For a time-homogeneous continuous-parameter Markov chain we show that as t → 0 the transition probability p_{n,j}(t) is at least of order t^{r(n,j)}, where r(n, j) is the minimum number of jumps needed for the chain to pass from n to j. If the intensities of passage are bounded over the set of states which can be reached from n via fewer than r(n, j) jumps, this is the exact order.
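The order statement can be illustrated numerically. The pure-birth chain below is an assumption of this sketch, not from the paper: it jumps n → n+1 at rate 1 on {0, 1, 2, 3}, so the minimum number of jumps from 0 to 3 is r(0, 3) = 3, and p_{0,3}(t) should behave like t³/6 as t → 0. A truncated Taylor series stands in for the matrix exponential.

```python
import numpy as np

# Illustrative pure-birth generator on {0, 1, 2, 3}; state 3 is absorbing.
Q = np.array([[-1., 1., 0., 0.],
              [0., -1., 1., 0.],
              [0., 0., -1., 1.],
              [0., 0., 0., 0.]])

def expm_taylor(A, terms=30):
    """Truncated Taylor series for the matrix exponential (fine for small ||A||)."""
    out, term = np.eye(len(A)), np.eye(len(A))
    for k in range(1, terms):
        term = term @ A / k
        out = out + term
    return out

for t in [0.1, 0.01, 0.001]:
    p = expm_taylor(Q * t)[0, 3]
    print(t, p / t**3)   # the ratio tends to 1/6 as t -> 0
```

The ratio p_{0,3}(t) / t³ approaching the constant 1/6 shows that t^{r(0,3)} = t³ is the exact order here, as the boundedness condition of the abstract is trivially satisfied for a finite chain.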



1982 ◽  
Vol 19 (3) ◽  
pp. 692-694 ◽  
Author(s):  
Mark Scott ◽  
Barry C. Arnold ◽  
Dean L. Isaacson

Characterizations of strong ergodicity for Markov chains using mean visit times have been found by several authors (Huang and Isaacson (1977), Isaacson and Arnold (1978)). In this paper a characterization of uniform strong ergodicity for a continuous-time non-homogeneous Markov chain is given. This extends the characterization, using mean visit times, that was given by Isaacson and Arnold.



1968 ◽  
Vol 5 (3) ◽  
pp. 669-678 ◽  
Author(s):  
Jozef L. Teugels

A general proposition is proved stating that the exponential ergodicity of a stationary Markov chain is preserved for derived Markov chains as defined by Cohen [2], [3]. An application to a certain type of continuous time Markov chains is included.



2019 ◽  
Vol 22 (8) ◽  
pp. 1950047 ◽  
Author(s):  
Tak Kuen Siu ◽  
Robert J. Elliott

The hedging of a European-style contingent claim is studied in a continuous-time doubly Markov-modulated financial market, where the interest rate of a bond is modulated by an observable, continuous-time, finite-state, Markov chain and the appreciation rate of a risky share is modulated by a continuous-time, finite-state, hidden Markov chain. The first chain describes the evolution of credit ratings of the bond over time while the second chain models the evolution of the hidden state of an underlying economy over time. Stochastic flows of diffeomorphisms are used to derive some hedge quantities, or Greeks, for the claim. A mixed filter-based and regime-switching Black–Scholes partial differential equation is obtained governing the price of the claim. It will be shown that the delta hedge ratio process obtained from stochastic flows is a risk-minimizing, admissible mean-self-financing portfolio process. Both the first-order and second-order Greeks will be considered.
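The paper derives its hedge ratios via stochastic flows of diffeomorphisms; as a much simpler illustrative stand-in (not the paper's method), one can mix classical Black–Scholes prices and deltas over regime-dependent parameters. In the sketch below the regime rates, the filter probabilities, and all numerical inputs are assumptions; the paper's actual price solves a mixed filter-based, regime-switching Black–Scholes PDE.

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(S, K, r, sigma, T):
    """Black–Scholes price and delta (the first-order Greek) of a European call."""
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    price = S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)
    return price, norm_cdf(d1)   # delta = N(d1)

# Hypothetical inputs: two bond rates attached to credit-rating regimes of the
# observable chain, and filtered probabilities for the hidden economic state.
rates = [0.02, 0.05]
probs = [0.7, 0.3]
vals = [bs_call(100.0, 100.0, r, 0.2, 1.0) for r in rates]
price = sum(p * v[0] for p, v in zip(probs, vals))
delta = sum(p * v[1] for p, v in zip(probs, vals))
```

The mixed delta plays the role of the hedge ratio; the abstract's result is that the analogous flow-derived delta process is risk-minimizing and mean-self-financing.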



1994 ◽  
Vol 26 (4) ◽  
pp. 965-987 ◽  
Author(s):  
Raymond W. Yeung ◽  
Bhaskar Sengupta

We have two aims in this paper. First, we generalize the well-known theory of matrix-geometric methods of Neuts to more complicated Markov chains. Second, we use the theory to solve a last-come-first-served queue with a generalized preemptive resume (LCFS-GPR) discipline. The structure of the Markov chain considered in this paper is one in which one of the variables can take values in a countable set, which is arranged in the form of a tree. The other variable takes values from a finite set. Each node of the tree can branch out into d other nodes. The steady-state solution of this Markov chain has a matrix product form, which can be expressed as a function of d matrices R_1, …, R_d. We then use this theory to solve a multiclass LCFS-GPR queue, in which the service times have PH-distributions and arrivals follow a Markov-modulated Poisson process. In this discipline, when a customer's service is preempted in phase j (due to a new arrival), the resumption of service at a later time could take place in a phase which depends on j. We also obtain a closed-form solution for the stationary distribution of an LCFS-GPR queue when the arrivals are Poisson. This result generalizes the known result on an LCFS preemptive resume queue, which can be obtained from Kelly's symmetric queue.
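The tree-structured chains here generalize Neuts' classical matrix-geometric setup. As a minimal sketch of the classical case only (a level-independent quasi-birth-death process, not the paper's tree version), the rate matrix R solves A0 + R A1 + R² A2 = 0 and can be found by the standard fixed-point iteration. The 1×1 blocks below encode an M/M/1 queue with arrival rate 1 and service rate 2, an assumption chosen because R is then known to equal λ/μ = 0.5.

```python
import numpy as np

# Illustrative QBD blocks for M/M/1 (lambda = 1, mu = 2).
A0 = np.array([[1.0]])     # transitions up one level (arrival)
A1 = np.array([[-3.0]])    # local transitions
A2 = np.array([[2.0]])     # transitions down one level (service completion)

# Fixed-point iteration R <- -(A0 + R^2 A2) A1^{-1}, starting from R = 0,
# which converges to the minimal nonnegative solution.
R = np.zeros_like(A0)
for _ in range(200):
    R_new = -(A0 + R @ R @ A2) @ np.linalg.inv(A1)
    if np.max(np.abs(R_new - R)) < 1e-12:
        R = R_new
        break
    R = R_new

print(R)   # converges to [[0.5]] = lambda/mu
```

In the paper's tree setting the single R is replaced by d matrices R_1, …, R_d, one per branch direction of the tree.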



1993 ◽  
Vol 30 (3) ◽  
pp. 518-528 ◽  
Author(s):  
Frank Ball ◽  
Geoffrey F. Yeo

We consider lumpability for continuous-time Markov chains and provide a simple probabilistic proof of necessary and sufficient conditions for strong lumpability, valid in circumstances not covered by known theory. We also consider the following marginalisability problem. Let {X(t)} = {(X_1(t), X_2(t), …, X_m(t))} be a continuous-time Markov chain. Under what conditions are the marginal processes {X_1(t)}, {X_2(t)}, …, {X_m(t)} also continuous-time Markov chains? We show that this is related to lumpability and, if no two of the marginal processes can jump simultaneously, then they are continuous-time Markov chains if and only if they are mutually independent. Applications to ion channel modelling and birth–death processes are discussed briefly.
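The strong-lumpability condition being characterized is easy to state computationally: a partition of the state space is strongly lumpable if and only if, for every pair of distinct classes C and D, the aggregate rate into D is the same from every state of C. The generators and partition below are illustrative assumptions for this sketch, not examples from the paper.

```python
# Check the strong-lumpability condition for a CTMC generator Q (list of
# lists) against a partition of the state indices.
def is_strongly_lumpable(Q, partition):
    for C in partition:
        for D in partition:
            if C is D:
                continue
            # aggregate rate from each state i in C into class D
            rates = [sum(Q[i][j] for j in D) for i in C]
            if max(rates) - min(rates) > 1e-12:
                return False
    return True

Q_sym = [[-2, 1, 1],
         [1, -2, 1],
         [1, 1, -2]]
Q_asym = [[-3, 1, 2],
          [1, -2, 1],
          [1, 1, -2]]
part = [[0, 1], [2]]

print(is_strongly_lumpable(Q_sym, part))   # True: both states of {0,1} enter {2} at rate 1
print(is_strongly_lumpable(Q_asym, part))  # False: rates into {2} differ (2 vs 1)
```

When the condition holds, the label process over the classes is itself a continuous-time Markov chain, which is the connection the abstract exploits for the marginalisability question.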


