irreducible Markov chain
Recently Published Documents


TOTAL DOCUMENTS

50
(FIVE YEARS 4)

H-INDEX

7
(FIVE YEARS 0)

Author(s):  
William Lippitt ◽  
Sunder Sethuraman

Recently, a ‘Markovian stick-breaking’ process generalizing the Dirichlet process $(\mu, \theta)$ with respect to a discrete base space $\mathfrak{X}$ was introduced. In particular, a sample from the ‘Markovian stick-breaking’ process may be represented in stick-breaking form $\sum_{i\geq 1} P_i \delta_{T_i}$, where $\{T_i\}$ is a stationary, irreducible Markov chain on $\mathfrak{X}$ with stationary distribution $\mu$, instead of i.i.d. $\{T_i\}$ each distributed as $\mu$ as in the Dirichlet case, and $\{P_i\}$ is a GEM$(\theta)$ residual allocation sequence. Although the previous motivation was to relate these Markovian stick-breaking processes to empirical distributional limits of types of simulated annealing chains, these processes may also be thought of as a class of priors in statistical problems. The aim of this work in this context is to identify the posterior distribution and to explore the role of the Markovian structure of $\{T_i\}$ in some inference test cases.
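For intuition, a truncated draw from such a measure can be simulated directly from its stick-breaking representation. The sketch below is illustrative only (the function name, the finite base space, and the truncation level are my own assumptions), combining GEM$(\theta)$ weights with a stationary chain $\{T_i\}$ started from $\mu$:

```python
import numpy as np

def sample_markovian_stick_breaking(theta, P, mu, n_atoms=1000, rng=None):
    """Truncated draw from a 'Markovian stick-breaking' measure (illustrative sketch).

    theta : concentration parameter of the GEM(theta) weights
    P     : transition matrix of a stationary, irreducible chain on {0, ..., K-1}
    mu    : stationary distribution of P (the discrete base measure)
    Returns atom locations T_i and weights P_i, truncated at n_atoms sticks.
    """
    rng = np.random.default_rng(rng)
    K = len(mu)

    # GEM(theta) residual allocation: P_i = V_i * prod_{j<i} (1 - V_j), V_j ~ Beta(1, theta)
    V = rng.beta(1.0, theta, size=n_atoms)
    weights = V * np.cumprod(np.concatenate(([1.0], 1.0 - V[:-1])))

    # Stationary Markov chain {T_i} on the base space, started from mu
    T = np.empty(n_atoms, dtype=int)
    T[0] = rng.choice(K, p=mu)
    for i in range(1, n_atoms):
        T[i] = rng.choice(K, p=P[T[i - 1]])

    return T, weights
```

Taking `P` to have every row equal to `mu` makes the atoms i.i.d. from $\mu$, which recovers the ordinary Dirichlet process case as a sanity check.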


The Markov-modulated linear regression model is a special case of the Markov-additive process $(Y, J) = \{(Y(t), J(t)),\ t \geq 0\}$, where the component $J$ is called Markov and the component $Y$ is additive and described by a linear regression. The component $J$ is a continuous-time homogeneous irreducible Markov chain with known transition intensities between the states; this Markov component is usually called the external environment or background process. The unknown regression coefficients depend on the state of the external environment, while the regressors remain constant. This research considers the case when the Markov property is not satisfied, namely, when the sojourn time in each state is not exponentially distributed. An estimation procedure for the unknown model parameters is described for the case when the transition intensities can be represented as a convolution of exponential densities. The efficiency of such an approach is evaluated by simulation.
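A rough simulation sketch may help fix ideas. It is not the authors' estimation procedure; the additive model form (drift $x^\top\beta_j$ accumulated over the time spent in state $j$, with Erlang sojourn times standing in for a convolution of exponential densities) and all names are assumptions made for illustration:

```python
import numpy as np

def simulate_background(trans, erlang_shape, erlang_rate, t_end, rng):
    """Background (environment) process with Erlang(shape, rate) sojourn times,
    i.e. a convolution of exponentials, and embedded jump matrix `trans`."""
    state, t = 0, 0.0
    path = []                                  # list of (state, time spent in state)
    while t < t_end:
        stay = rng.gamma(erlang_shape, 1.0 / erlang_rate)
        stay = min(stay, t_end - t)            # truncate the last sojourn at t_end
        path.append((state, stay))
        t += stay
        state = rng.choice(len(trans), p=trans[state])
    return path

def simulate_response(path, x, betas, sigma, rng):
    """Additive response: contributions x @ beta_state accumulated over the time
    spent in each state, plus Gaussian noise scaled by sqrt(total time)."""
    drift = sum(stay * (x @ betas[state]) for state, stay in path)
    total_time = sum(stay for _, stay in path)
    return drift + sigma * np.sqrt(total_time) * rng.standard_normal()
```

An Erlang($k$, $\lambda$) sojourn time is exactly the convolution of $k$ independent Exp($\lambda$) densities, which is the non-exponential situation the abstract targets.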


2019 ◽  
Vol 56 (2) ◽  
pp. 558-573
Author(s):  
C. Houdré ◽  
G. Kerchev

Let $(X, Y) = (X_n, Y_n)_{n\geq 1}$ be the output process generated by a hidden chain $Z = (Z_n)_{n\geq 1}$, where $Z$ is a finite-state, aperiodic, time-homogeneous, and irreducible Markov chain. Let $LC_n$ be the length of the longest common subsequences of $X_1, \dots, X_n$ and $Y_1, \dots, Y_n$. Under a mixing hypothesis, a rate of convergence result is obtained for $E[LC_n]/n$.
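As a purely illustrative companion (the toy hidden chain and emission mechanism below are my own choices, not the paper's setting), $LC_n$ is the classical longest-common-subsequence length, and $E[LC_n]/n$ can be estimated by Monte Carlo:

```python
import numpy as np

def lcs_length(x, y):
    """Length of the longest common subsequence via the classical O(n*m) dynamic program."""
    n, m = len(x), len(y)
    dp = np.zeros((n + 1, m + 1), dtype=int)
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            if x[i - 1] == y[j - 1]:
                dp[i, j] = dp[i - 1, j - 1] + 1
            else:
                dp[i, j] = max(dp[i - 1, j], dp[i, j - 1])
    return dp[n, m]

def estimate_rate(P, emit, n=300, reps=20, rng=None):
    """Monte Carlo estimate of E[LC_n]/n, where X and Y are conditionally
    independent noisy emissions of the same hidden chain Z with transitions P."""
    rng = np.random.default_rng(rng)
    K = len(P)
    vals = []
    for _ in range(reps):
        z = np.empty(n, dtype=int)
        z[0] = rng.integers(K)
        for t in range(1, n):
            z[t] = rng.choice(K, p=P[z[t - 1]])
        x = np.array([rng.choice(emit.shape[1], p=emit[s]) for s in z])
        y = np.array([rng.choice(emit.shape[1], p=emit[s]) for s in z])
        vals.append(lcs_length(x, y) / n)
    return float(np.mean(vals))
```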


2014 ◽  
Vol 51 (4) ◽  
pp. 1114-1132 ◽  
Author(s):  
Bernhard C. Geiger ◽  
Christoph Temmel

A lumping of a Markov chain is a coordinatewise projection of the chain. We characterise the entropy rate preservation of a lumping of an aperiodic and irreducible Markov chain on a finite state space by the random growth rate of the cardinality of the realisable preimage of a finite-length trajectory of the lumped chain and by the information needed to reconstruct original trajectories from their lumped images. Both are purely combinatorial criteria, depending only on the transition graph of the Markov chain and the lumping function. A lumping is strongly k-lumpable if and only if the lumped process is a kth-order Markov chain for each starting distribution of the original Markov chain. We characterise strong k-lumpability via tightness of stationary entropic bounds. In the sparse setting, we give sufficient conditions on the lumping to both preserve the entropy rate and be strongly k-lumpable.
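To make "entropy rate preservation" concrete, the toy sketch below compares the exact entropy rate of a chain with a crude block-entropy estimate of the rate of its lumped (projected) image; the lumping function, the estimator, and all names are my own illustrative choices, not the paper's combinatorial criteria:

```python
import numpy as np
from collections import Counter

def entropy_rate(P):
    """Exact entropy rate of an irreducible chain: H = -sum_i pi_i sum_j P_ij log2 P_ij."""
    evals, evecs = np.linalg.eig(P.T)
    pi = np.real(evecs[:, np.argmin(np.abs(evals - 1.0))])
    pi = pi / pi.sum()
    with np.errstate(divide="ignore", invalid="ignore"):
        terms = np.where(P > 0, P * np.log2(P), 0.0)
    return float(-(pi[:, None] * terms).sum())

def lumped_entropy_rate_estimate(P, f, k=6, n=50_000, rng=None):
    """Crude estimate of the lumped process entropy rate via block entropies:
    H_hat = H(blocks of length k) - H(blocks of length k-1)."""
    rng = np.random.default_rng(rng)
    K = len(P)
    z = np.empty(n, dtype=int)
    z[0] = rng.integers(K)
    for t in range(1, n):
        z[t] = rng.choice(K, p=P[z[t - 1]])
    y = np.array([f(s) for s in z])            # coordinatewise projection (lumping)

    def block_entropy(m):
        counts = Counter(tuple(y[i:i + m]) for i in range(n - m + 1))
        p = np.array(list(counts.values()), dtype=float)
        p /= p.sum()
        return float(-(p * np.log2(p)).sum())

    return block_entropy(k) - block_entropy(k - 1)
```

If the two numbers agree up to estimation error, the lumping preserves the entropy rate; a clear gap signals information lost by the projection.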



2013 ◽  
Vol 2013 ◽  
pp. 1-7 ◽  
Author(s):  
Bing-Yuan Pu ◽  
Ting-Zhu Huang ◽  
Chun Wen

This paper presents a class of new accelerated restarted GMRES methods for calculating the stationary probability vector of an irreducible Markov chain. We focus on the mechanism of this new hybrid method, showing how to periodically combine GMRES and a vector extrapolation method into a more efficient one for improving the convergence rate in Markov chain problems. Numerical experiments are carried out to demonstrate the efficiency of our new algorithm on several typical Markov chain problems.
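As a baseline for comparison only (this is not the paper's hybrid GMRES/extrapolation scheme), the stationary vector of an irreducible chain can already be computed with plain restarted GMRES by making the singular system $(P^\top - I)\pi = 0$ nonsingular through the normalisation $\sum_i \pi_i = 1$:

```python
import numpy as np
from scipy.sparse.linalg import gmres

def stationary_vector(P, restart=20):
    """Stationary distribution pi of an irreducible chain with transition matrix P
    (rows summing to 1), via restarted GMRES on a nonsingular reformulation."""
    n = P.shape[0]
    A = P.T - np.eye(n)
    A[-1, :] = 1.0                 # replace the last (redundant) equation by sum(pi) = 1
    b = np.zeros(n)
    b[-1] = 1.0
    # Default tolerance; tighten via the rtol (older SciPy: tol) keyword if needed.
    pi, info = gmres(A, b, restart=restart)
    if info != 0:
        raise RuntimeError(f"GMRES did not converge (info={info})")
    return pi
```

For a small dense chain this is overkill; the point of the hybrid method in the paper is that periodically applying vector extrapolation between restarts accelerates convergence on large, sparse Markov chain problems where restarted GMRES alone stagnates.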

