Filtering of Markov renewal queues, I: Feedback queues

1983 ◽  
Vol 15 (2) ◽
pp. 349-375 ◽  
Author(s):  
Jeffrey J. Hunter

Queueing systems which can be formulated as Markov renewal processes with basic transitions of three types, ‘arrivals’, ‘departures’ and ‘feedbacks’, are examined. The filtering procedure developed for Markov renewal processes by Çinlar (1969) is applied to such queueing models to show that the queue-length processes embedded at any of the ‘arrival’, ‘departure’, ‘feedback’, ‘input’, ‘output’ or ‘external’ transition epochs are also Markov renewal. In this part we focus attention on the derivation of stationary and limiting distributions (when they exist) for each of the embedded discrete-time processes, the embedded Markov chains. These results are applied to birth–death queues with instantaneous state-dependent feedback, including the special cases of M/M/1/N and M/M/1 queues with instantaneous Bernoulli feedback.
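As a point of reference only (a standard continuous-time fact, not the embedded-epoch analysis of the paper): for the M/M/1 queue with instantaneous Bernoulli feedback, feedback transitions leave the number in the system unchanged, so with arrival rate λ, service rate μ and feedback probability p the queue length is a birth–death process with death rate μ(1 − p), and its time-stationary distribution is

\[
\pi_n = (1-\rho)\,\rho^{n}, \qquad n \ge 0, \qquad \rho = \frac{\lambda}{\mu(1-p)} < 1.
\]

The distributions embedded at arrival, departure or feedback epochs need not coincide with this time-stationary law; comparing them with it is part of what makes the filtering analysis informative.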


1984 ◽  
Vol 16 (2) ◽  
pp. 422-436 ◽  
Author(s):  
Jeffrey J. Hunter

In Part I (Hunter) a study of feedback queueing models was initiated. For such models the queue-length process embedded at all transition points was formulated as a Markov renewal process (MRP). This led to the observation that the queue-length processes embedded at any of the ‘arrival’, ‘departure’, ‘feedback’, ‘input’, ‘output’ or ‘external’ transition epochs are also MRP. Part I concentrated on the properties of the embedded discrete-time Markov chains. In this part we examine the semi-Markov processes associated with each of these embedded MRP and derive expressions for the stationary distributions associated with their irreducible subspaces. The special cases of birth–death queues with instantaneous state-dependent feedback, and of M/M/1/N and M/M/1 queues with instantaneous Bernoulli feedback, are considered in detail. The results obtained complement those derived in Part II (Hunter) for birth–death queues without feedback.
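For orientation, a standard semi-Markov relationship (stated here as background, not as the paper's derivation): if {π_j} is the stationary distribution of the embedded Markov chain on an irreducible subspace and m_j is the mean holding time in state j, the stationary distribution of the associated semi-Markov process is obtained by weighting with the mean holding times,

\[
p_j = \frac{\pi_j\, m_j}{\sum_k \pi_k\, m_k},
\]

provided the normalising sum is finite. Relations of this form are the usual bridge between embedded-chain stationary distributions (Part I) and semi-Markov stationary distributions.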


1985 ◽  
Vol 17 (2) ◽  
pp. 386-407 ◽  
Author(s):  
Jeffrey J. Hunter

This paper is a continuation of the study of a class of queueing systems where the queue-length process embedded at basic transition points, which consist of ‘arrivals’, ‘departures’ and ‘feedbacks’, is a Markov renewal process (MRP). The filtering procedure of Çinlar (1969) was used in [12] to show that the queue-length processes embedded separately at ‘arrivals’, ‘departures’, ‘feedbacks’, ‘inputs’ (arrivals and feedbacks), ‘outputs’ (departures and feedbacks) and ‘external’ transitions (arrivals and departures) are also MRP. In this paper expressions for the elements of each Markov renewal kernel are derived, and thence expressions for the distribution of the times between transitions, under stationary conditions, are found for each of the above flow processes. In particular, it is shown that the inter-event distributions for the arrival process and the departure process are the same, with an equivalent result holding for inputs and outputs. Further, expressions for the stationary joint distributions of successive intervals between events in each flow process are derived and interconnections, using the concept of reversed Markov renewal processes, are explored. Conditions under which any of the flow processes are renewal processes or, more particularly, Poisson processes are also investigated. Special cases, including in particular the M/M/1/N and M/M/1 models with instantaneous Bernoulli feedback, are examined.
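For orientation, in standard Markov renewal notation (background, not the paper's specific expressions): a Markov renewal kernel records, for each pair of states, the joint law of the next state and the time until the next transition,

\[
Q_{ij}(t) = P\{X_{n+1}=j,\; T_{n+1}-T_n \le t \mid X_n = i\},
\]

and when the embedded chain of a flow process is stationary with distribution {π_i}, the distribution of a typical interval between its transitions is

\[
F(t) = \sum_i \pi_i \sum_j Q_{ij}(t).
\]

Kernels and inter-event distributions of this form, one set for each filtered flow (arrivals, departures, feedbacks, inputs, outputs, external transitions), are what the abstract refers to.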


1983 ◽  
Vol 15 (2) ◽  
pp. 376-391 ◽  
Author(s):  
Jeffrey J. Hunter

In this part we extend and particularise results developed by the author in Part I (pp. 349–375) for a class of queueing systems which can be formulated as Markov renewal processes. We examine those models where the basic transitions consist of only two types: ‘arrivals’ and ‘departures’. The ‘arrival lobby’ and ‘departure lobby’ queue-length processes are shown, using the results of Part I, to be Markov renewal. Whereas the initial study focused attention on the behaviour of the embedded discrete-time Markov chains, in this paper we examine, in detail, the embedded continuous-time semi-Markov processes. The limiting distributions of the queue-length processes in both continuous and discrete time are derived, and interrelationships between them are examined in the case of continuous-time birth–death queues including the M/M/1/N and M/M/1 variants. Results for discrete-time birth–death queues are also derived.
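As standard background (not taken from the paper): for an ergodic continuous-time birth–death queue with birth rates λ_n and death rates μ_n, the limiting distribution is

\[
p_n = p_0 \prod_{i=0}^{n-1} \frac{\lambda_i}{\mu_{i+1}}, \qquad
p_0 = \left( \sum_{n \ge 0} \prod_{i=0}^{n-1} \frac{\lambda_i}{\mu_{i+1}} \right)^{-1},
\]

which for constant rates λ_n = λ, μ_n = μ reduces to the M/M/1 law p_n = (1 − ρ)ρ^n with ρ = λ/μ < 1. The embedded ‘arrival lobby’ and ‘departure lobby’ distributions need not coincide with this time-stationary law, and the interrelationships between them are what the paper examines.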


1993 ◽  
Vol 25 (3) ◽  
pp. 585-606 ◽  
Author(s):  
C. Teresa Lam

In this paper, we study the superposition of finitely many Markov renewal processes with countable state spaces. We define the S-Markov renewal equations associated with the superposed process. The solutions of the S-Markov renewal equations are derived and the asymptotic behaviors of these solutions are studied. These results are applied to calculate various characteristics of queueing systems with superposed semi-Markovian arrival streams, queueing networks with bulk service, system availability, and superposed remaining-life and current-life processes in continuous time.
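For orientation, the classical single-process Markov renewal equation that such results generalize (background, not the S-Markov renewal equations themselves) has the form

\[
f_i(t) = g_i(t) + \sum_j \int_0^t f_j(t-s)\, \mathrm{d}Q_{ij}(s),
\]

and its solution can be written as f = R * g, where R = Q^{(0)} + Q^{(1)} + Q^{(2)} + ⋯ is the Markov renewal function built from the convolution powers of the kernel Q. The S-Markov renewal equations defined in the paper play the analogous role for the superposed process.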


1971 ◽  
Vol 3 (1) ◽  
pp. 155-175 ◽  
Author(s):  
Manfred Schäl

In this paper, some results on the asymptotic behaviour of Markov renewal processes with auxiliary paths (MRPAPs) proved in other papers ([28], [29]) are applied to queueing theory. This approach to queueing problems may be regarded as an improvement of the method of Fabens [7] based on the theory of semi-Markov processes. The method of Fabens was also illustrated by Lambotte in [18], [32]. In the present paper the ordinary M/G/1 queue is generalized to allow service times to depend on the queue length immediately after the previous departure. Such models preserve the MRPAP structure of the ordinary M/G/1 system. Recently, the asymptotic behaviour of the embedded Markov chain (MC) of this queueing model was studied by several authors. One aim of this paper is to answer the question of the relationship between the limiting distribution of the embedded MC and the limiting distribution of the original process with continuous time parameter. It turns out that these two limiting distributions coincide. Moreover, some properties of the embedded MC and the embedded semi-Markov process are established. The discussion of the M/G/1 queue closes with a study of the rate of convergence at which the queueing process attains equilibrium.
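For the ordinary M/G/1 queue this coincidence is a standard fact (recalled here as background): up- and down-crossings of each queue-length level balance in equilibrium, so the distribution just after departures equals the distribution just before arrivals, and by the PASTA property (Poisson arrivals see time averages) the latter equals the time-stationary distribution:

\[
\pi^{\mathrm{dep}}_n = \pi^{\mathrm{arr}}_n = p_n, \qquad n \ge 0.
\]

The paper establishes the corresponding coincidence when service times are allowed to depend on the queue length at the previous departure.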


2019 ◽  
Vol 53 (2) ◽  
pp. 367-387
Author(s):  
Shaojun Lan ◽  
Yinghui Tang

This paper deals with a single-server discrete-time Geo/G/1 queueing model with Bernoulli feedback and N-policy, where the server leaves for modified multiple vacations once the system becomes empty. Applying the law of probability decomposition, renewal theory and the probability generating function technique, we explicitly derive the transient queue-length distribution as well as recursive expressions for the steady-state queue-length distribution. In particular, corresponding results for several special cases are obtained directly. Furthermore, some numerical results are provided for illustrative purposes. Finally, a cost optimization problem is analyzed numerically under a given cost structure.
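As a small illustration of the probability generating function technique in general (the PGF below is a hypothetical stand-in, not the expression derived in the paper), steady-state probabilities can be recovered from a PGF by expanding it about z = 0 and reading off the coefficients:

```python
import sympy as sp

z, rho = sp.symbols('z rho', positive=True)

# Hypothetical illustrative PGF (a geometric law), standing in for the
# steady-state queue-length PGF Pi(z) that such an analysis produces.
Pi = (1 - rho) / (1 - rho * z)

# p_n is the coefficient of z**n in the Taylor expansion of Pi(z) at z = 0.
n_max = 5
expansion = sp.series(Pi, z, 0, n_max + 1).removeO()
p = [sp.simplify(expansion.coeff(z, n)) for n in range(n_max + 1)]

for n, prob in enumerate(p):
    print(n, prob)   # 0: 1 - rho, 1: rho*(1 - rho), 2: rho**2*(1 - rho), ...
```

The same coefficient-extraction step applies to whatever PGF a particular model yields.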

