MONOTONICITY AND CONVEXITY OF SOME FUNCTIONS ASSOCIATED WITH DENUMERABLE MARKOV CHAINS AND THEIR APPLICATIONS TO QUEUING SYSTEMS

2005, Vol. 20(1), pp. 67-86
Author(s): Hai-Bo Yu, Qi-Ming He, Hanqin Zhang

Motivated by various applications in queuing theory, this article is devoted to the monotonicity and convexity of some functions associated with discrete-time or continuous-time denumerable Markov chains. For the discrete-time case, conditions for the monotonicity and convexity of the functions are obtained by using the properties of stochastic dominance and monotone matrices. For the continuous-time case, similar results are obtained by means of the uniformization technique. As an application, the results are used to analyze the monotonicity and convexity of functions associated with the queue length of some queuing systems.
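The uniformization technique invoked here can be sketched in a few lines: given the generator Q of a continuous-time chain, choose a rate λ at least as large as every exit rate; then P = I + Q/λ is a proper stochastic matrix, and the continuous-time chain is the discrete chain P observed at the jump times of a Poisson(λ) clock. The 3-state generator below is invented for illustration.

```python
# Illustrative sketch of uniformization (the generator Q is a made-up
# 3-state example, not taken from the paper).
Q = [
    [-2.0, 1.5, 0.5],
    [ 1.0, -3.0, 2.0],
    [ 0.5, 0.5, -1.0],
]
lam = max(-Q[i][i] for i in range(3))  # uniformization rate >= max exit rate

# P = I + Q / lam is a stochastic matrix: the uniformized discrete-time chain
P = [[(1.0 if i == j else 0.0) + Q[i][j] / lam for j in range(3)]
     for i in range(3)]

for row in P:
    # every row is a probability distribution
    assert abs(sum(row) - 1.0) < 1e-12 and all(p >= 0.0 for p in row)
```

Results proved for the discrete chain P then transfer to the continuous-time chain by conditioning on the Poisson number of jumps in [0, t].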

1967, Vol. 4(1), pp. 192-196
Author(s): J. N. Darroch, E. Seneta

In a recent paper, the authors have discussed the concept of quasi-stationary distributions for absorbing Markov chains having a finite state space, with the further restriction of discrete time. The purpose of the present note is to summarize the analogous results when the time parameter is continuous.
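In the finite discrete-time setting referenced here, the quasi-stationary distribution is the normalized left Perron eigenvector of the substochastic matrix S obtained by restricting the transition matrix to the transient states. A minimal numerical sketch, with an invented 2-state example, computes it by power iteration:

```python
# Quasi-stationary distribution of an absorbing chain, restricted to its
# transient states (matrix values are invented for illustration).
S = [[0.5, 0.3],
     [0.2, 0.6]]  # row deficits (0.2 each) are one-step absorption probabilities

nu = [0.5, 0.5]
for _ in range(200):                    # power iteration: nu -> nu S, renormalized
    nxt = [sum(nu[i] * S[i][j] for i in range(2)) for j in range(2)]
    total = sum(nxt)
    nu = [x / total for x in nxt]

# Perron eigenvalue rho: the one-step survival probability under nu
rho = sum(nu[i] * sum(S[i]) for i in range(2))
```

Here nu converges to (0.4, 0.6) with rho = 0.8; conditional on survival for a long time, the chain's state is approximately distributed as nu. The continuous-time analogue summarized in this note replaces S by the restriction of the intensity matrix.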


2007, Vol. 39(2), pp. 360-384
Author(s): Uğur Tuncay Alparslan, Gennady Samorodnitsky

We study the ruin probability where the claim sizes are modeled by a stationary ergodic symmetric α-stable process. We exploit the flow representation of such processes, and we consider the processes generated by conservative flows. We focus on two classes of conservative α-stable processes (one discrete-time and one continuous-time), and give results for the order of magnitude of the ruin probability as the initial capital goes to infinity. We also prove a solidarity property for null-recurrent Markov chains as an auxiliary result, which might be of independent interest.


1988, Vol. 25(A), pp. 257-274
Author(s): N. U. Prabhu

We develop a theory of semiregenerative phenomena. These may be viewed as a family of linked regenerative phenomena, for which Kingman [6], [7] developed a theory within the framework of quasi-Markov chains. We use a different approach and explore the correspondence between semiregenerative sets and the range of a Markov subordinator with a unit drift (or a Markov renewal process in the discrete-time case). We use techniques based on results from Markov renewal theory.


1992, Vol. 29(4), pp. 838-849
Author(s): Thomas Hanschke

This paper deals with a class of discrete-time Markov chains for which the invariant measures can be expressed in terms of generalized continued fractions. The representation covers a wide class of stochastic models and is well suited for numerical applications. The results obtained can easily be extended to continuous-time Markov chains.
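The simplest member of this model class is a birth-death chain, where the invariant measure collapses to an explicit product form, mu_{n+1} = mu_n p_n / q_{n+1} (detailed balance); the paper's generalized continued fractions handle wider banded structures. The following sketch, with invented constant rates, verifies the product form numerically on a truncated chain:

```python
# Invariant measure of a finite birth-death chain via the product form
# mu_{n+1} = mu_n * p_n / q_{n+1}  (parameters invented for illustration).
N = 5
p = [0.4] * N          # birth probabilities p_0 .. p_{N-1}
q = [0.3] * N          # death probabilities, q[k] plays the role of q_{k+1}

mu = [1.0]
for n in range(N):
    mu.append(mu[-1] * p[n] / q[n])
Z = sum(mu)
mu = [m / Z for m in mu]                # normalize to a distribution

# Build the tridiagonal transition matrix and check mu P = mu.
P = [[0.0] * (N + 1) for _ in range(N + 1)]
for n in range(N + 1):
    if n < N:
        P[n][n + 1] = p[n]
    if n > 0:
        P[n][n - 1] = q[n - 1]
    P[n][n] = 1.0 - sum(P[n])           # holding probability fills the row

muP = [sum(mu[i] * P[i][j] for i in range(N + 1)) for j in range(N + 1)]
assert all(abs(a - b) < 1e-12 for a, b in zip(muP, mu))
```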


2006, Vol. 43(4), pp. 1044-1052
Author(s): Nico M. Van Dijk, Karel Sladký

As an extension of the discrete-time case, this note investigates the variance of the total cumulative reward for continuous-time Markov reward chains with finite state spaces. The results parallel those for discrete time: in particular, the variance growth rate is shown to be asymptotically linear in time, and expressions are provided for computing this growth rate.
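The discrete-time side of this correspondence is easy to check numerically: for a finite Markov reward chain one can compute the first two moments of the cumulative reward S_n by backward recursion and watch the variance grow linearly in n. The two-state chain and rewards below are invented for illustration.

```python
# Variance of the cumulative reward S_n = r(X_0) + ... + r(X_{n-1}) for a
# discrete-time Markov reward chain (invented two-state example).
P = [[0.9, 0.1],
     [0.2, 0.8]]
r = [1.0, 3.0]          # per-step rewards
pi = [2/3, 1/3]         # stationary distribution of P (from 0.2/(0.1+0.2))

def variance(n):
    m = [0.0, 0.0]      # m[x] = E[S_k | X_0 = x]
    s = [0.0, 0.0]      # s[x] = E[S_k^2 | X_0 = x]
    for _ in range(n):
        pm = [sum(P[x][y] * m[y] for y in range(2)) for x in range(2)]
        ps = [sum(P[x][y] * s[y] for y in range(2)) for x in range(2)]
        s = [r[x]**2 + 2*r[x]*pm[x] + ps[x] for x in range(2)]
        m = [r[x] + pm[x] for x in range(2)]
    mean = sum(pi[x] * m[x] for x in range(2))
    return sum(pi[x] * s[x] for x in range(2)) - mean**2

# Linear growth: increments of the variance over equal horizons stabilize.
g1 = variance(400) - variance(200)      # ~ 200 * (growth rate)
g2 = variance(800) - variance(400)      # ~ 400 * (growth rate)
assert abs(g2 - 2 * g1) < 1e-5
```

The continuous-time statement of the paper is the analogue of this picture with the matrix power replaced by the transition semigroup.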


1988, Vol. 25(1), pp. 34-42
Author(s): Jean Johnson, Dean Isaacson

Sufficient conditions for strong ergodicity of discrete-time non-homogeneous Markov chains have been given in several papers. Conditions have been given using the left eigenvectors ψ_n of P_n (i.e., ψ_n P_n = ψ_n) and also using the limiting behavior of P_n. In this paper we consider the analogous results for continuous-time Markov chains, where one uses the intensity matrices Q(t) instead of P(s, t). A bound on the rate of convergence of certain strongly ergodic chains is also given.


2014, Vol. 2014, pp. 1-5
Author(s): Mokaedi V. Lekgari

We investigate random-time state-dependent Foster-Lyapunov drift conditions for subgeometric rates of ergodicity of continuous-time Markov chains (CTMCs). Our main concern is to take the available results on deterministic state-dependent drift conditions for CTMCs, and on random-time state-dependent drift conditions for discrete-time Markov chains, and transfer them to the continuous-time setting.


1998, Vol. 7(4), pp. 397-401
Author(s): Olle Häggström

We consider continuous-time random walks on a product graph G×H, where G is arbitrary and H consists of two vertices x and y linked by an edge. For any t>0 and any a, b∈V(G), we show that the random walk starting at (a, x) is more likely to have hit (b, x) than (b, y) by time t. This contrasts with the discrete-time case and proves a conjecture of Bollobás and Brightwell. We also generalize the result to the cases where H is either a complete graph on n vertices or a cycle on n vertices.


2013, Vol. 50(4), pp. 943-959
Author(s): Guan-Yu Chen, Laurent Saloff-Coste

We make a connection between continuous-time and lazy discrete-time Markov chains by comparing their cutoffs and mixing times in total variation distance. For illustration, we consider finite birth-and-death chains and provide a criterion for cutoff in terms of the eigenvalues of the transition matrix.
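The quantities compared here can be computed directly for a small example. The sketch below (parameters invented; "lazy" means holding probability at least 1/2 in every state) tracks the total variation distance to stationarity of a lazy birth-death chain started at the boundary. It illustrates mixing only; cutoff proper is a statement about a growing family of chains, not a single one.

```python
# Total variation distance to stationarity for a small lazy birth-death chain
# (invented 5-state example with symmetric up/down probabilities 1/4).
N = 4
P = [[0.0] * (N + 1) for _ in range(N + 1)]
for n in range(N + 1):
    if n < N:
        P[n][n + 1] = 0.25
    if n > 0:
        P[n][n - 1] = 0.25
    P[n][n] = 1.0 - sum(P[n])       # laziness: holding prob >= 1/2 everywhere

pi = [1.0 / (N + 1)] * (N + 1)      # symmetric rates => uniform stationary law

def step(dist):
    return [sum(dist[i] * P[i][j] for i in range(N + 1)) for j in range(N + 1)]

def tv(dist):
    return 0.5 * sum(abs(d - p) for d, p in zip(dist, pi))

dist = [1.0] + [0.0] * N            # start at the boundary state 0
dists = []
for t in range(100):
    dists.append(tv(dist))
    dist = step(dist)

# d(t) is non-increasing and the chain is well mixed after 100 steps
assert all(a >= b - 1e-12 for a, b in zip(dists, dists[1:]))
assert dists[-1] < 1e-2
```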

