Optimal control of an M/G/1 queue with imperfectly observed queue length when the input source is finite

1991 ◽  
Vol 28 (1) ◽ 
pp. 210-220 ◽ 
Author(s):  
Kazuyoshi Wakuta

We consider the optimal control of an M/G/1 queue with a finite input source. The queue length, however, can only be imperfectly observed, through observations at the initial time and at the successive departure epochs. At these epochs the service rate can be chosen, based on the observable history. A service cost and a holding cost are incurred. We show that such a control problem can be formulated as a semi-Markov decision process with imperfect state information, and we present sufficient conditions for the existence of an optimal stationary I-policy.
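An I-policy acts on the information state, i.e. the posterior distribution of the unobserved queue length, which is refreshed by a Bayes step at each observation epoch. A minimal generic sketch of that step follows; the transition kernel and observation likelihood below are illustrative placeholders, not the finite-source model of the paper:

```python
import numpy as np

def belief_update(belief, P, obs_likelihood):
    """One Bayes step of an information-state (I-policy) controller:
    push the belief through the one-step transition kernel P, then
    reweight by the likelihood of the new observation and normalize."""
    predicted = belief @ P                   # prediction step
    posterior = predicted * obs_likelihood   # correction step
    return posterior / posterior.sum()

# Example: three queue-length states, a toy kernel, and an observation
# that is twice as likely in state 2 as elsewhere (all numbers assumed).
P = np.array([[0.7, 0.3, 0.0],
              [0.2, 0.6, 0.2],
              [0.0, 0.4, 0.6]])
b = belief_update(np.array([1.0, 0.0, 0.0]), P, np.array([1.0, 1.0, 2.0]))
```

The service rate would then be chosen as a function of the belief `b` rather than of the (unobservable) queue length itself.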



1978 ◽  
Vol 10 (3) ◽  
pp. 682-701 ◽  
Author(s):  
Bharat T. Doshi

We consider an M/G/1 queue in which the service rate is subject to control. The control is exercised continuously and is based on observations of the residual-workload process. For both the discounted-cost and the average-cost criteria we obtain conditions that are sufficient for a stationary policy to be optimal. When the service cost rate and the holding cost rate are non-decreasing and convex, a monotone policy satisfies these sufficient conditions, which establishes its optimality.
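The monotone structure can be checked numerically on a crude surrogate: a uniformized birth-death chain standing in for the controlled workload process, with a small menu of service rates, a convex service cost, a convex holding cost, and discounted value iteration. All rates and costs below are assumed for illustration and are not taken from the paper:

```python
import numpy as np

N = 30                                    # queue-length truncation
lam = 0.5                                 # arrival rate (assumed)
rates = [0.0, 0.6, 1.2]                   # service-rate menu (assumed)
c_serve = {0.0: 0.0, 0.6: 0.5, 1.2: 1.5}  # convex service cost (assumed)
beta = 0.9                                # one-step discount factor
Lam = lam + max(rates)                    # uniformization constant

def q_value(V, x, mu):
    """Expected discounted cost of using rate mu for one uniformized step
    in state x, with linear (hence convex) holding cost h(x) = x."""
    mu_eff = mu if x > 0 else 0.0         # cannot serve an empty queue
    p_up, p_down = lam / Lam, mu_eff / Lam
    ev = (p_up * V[min(x + 1, N)] + p_down * V[max(x - 1, 0)]
          + (1.0 - p_up - p_down) * V[x])
    return x + c_serve[mu] + beta * ev

V = np.zeros(N + 1)
for _ in range(500):                      # discounted value iteration
    V = np.array([min(q_value(V, x, mu) for mu in rates)
                  for x in range(N + 1)])

# Greedy policy: the chosen rate comes out non-decreasing in queue length.
policy = [min(rates, key=lambda mu: q_value(V, x, mu)) for x in range(N + 1)]
```

In this toy instance the greedy policy is monotone: it idles when the queue is empty and switches to faster rates as the queue grows, mirroring the structural result of the abstract.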


2015 ◽  
Vol 47 (1) ◽  
pp. 106-127 ◽  
Author(s):  
François Dufour ◽  
Alexei B. Piunovskiy

In this paper we study continuous-time Markov decision processes on a general Borel state space, with both impulsive and gradual (continuous) controls, under the infinite-horizon discounted-cost criterion. The controlled process is shown to be nonexplosive under appropriate hypotheses. The Bellman equation associated with this control problem is studied, and sufficient conditions are provided for the existence and uniqueness of a bounded measurable solution to this optimality equation. Moreover, the value function of the optimization problem is shown to satisfy this optimality equation. Sufficient conditions are also presented for the existence of an optimal control strategy on the one hand, and of an ε-optimal control strategy on the other. The state space is decomposed into two disjoint subsets on which, roughly speaking, one should apply a gradual action or an impulsive action, respectively, to obtain an optimal or ε-optimal strategy. An interesting consequence of these results is that the set of strategies allowing interventions at time t = 0 and only immediately after natural jumps is a sufficient set for the control problem under consideration.
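The two-region decomposition is easy to visualize on a toy discrete surrogate: in each state one either pays a running cost and follows the natural upward drift (the gradual action) or pays a lump cost and jumps back to state 0 (the impulse). All numbers below are assumed; the point is only that the impulse region emerges as an upper set of states:

```python
import numpy as np

K, beta, k = 10, 0.9, 5.0   # top state, discount, impulse cost (all assumed)

V = np.zeros(K + 1)
for _ in range(500):        # value iteration on the two-branch Bellman equation
    gradual = np.array([x + beta * V[min(x + 1, K)] for x in range(K + 1)])
    V = np.minimum(gradual, k + V[0])   # impulse: pay k, jump back to state 0

# Classify each state by which branch of the fixed-point equation wins.
gradual = np.array([x + beta * V[min(x + 1, K)] for x in range(K + 1)])
impulse_region = [x for x in range(K + 1) if k + V[0] < gradual[x]]
```

Low states fall in the gradual region and high states in the impulse region, a discrete analogue of the decomposition exhibited in the paper.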


1979 ◽  
Vol 16 (3) ◽ 
pp. 618-630 ◽ 
Author(s):  
Bharat T. Doshi

Various authors have derived the necessary and sufficient conditions for optimality in semi-Markov decision processes in which the state remains constant between jumps. In this paper similar results are presented for a generalized semi-Markov decision process in which the state varies between jumps according to a Markov process with continuous sample paths. These results are specialized to a general storage model and an application to the service rate control in a GI/G/1 queue is indicated.
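Between jumps the workload of a GI/G/1 queue varies deterministically: it drains at the chosen service rate, so a rate fixed at each arrival epoch yields a piecewise-linear sample path captured by the Lindley recursion. A small simulation sketch (distributions and rates assumed for illustration) shows the pathwise effect of the rate choice:

```python
import random

def workloads(mu, n=10_000, seed=1):
    """Workload W_k seen just before the k-th arrival of a GI/G/1 queue
    served at constant rate mu, via the Lindley recursion
    W_{k+1} = max(W_k + S_k - mu * A_k, 0)."""
    rng = random.Random(seed)
    W, path = 0.0, []
    for _ in range(n):
        S = rng.expovariate(1.0)   # service requirement (assumed Exp(1))
        A = rng.expovariate(0.8)   # interarrival time (assumed Exp(0.8))
        W = max(W + S - mu * A, 0.0)
        path.append(W)
    return path

slow, fast = workloads(mu=1.0), workloads(mu=1.5)
```

With common random numbers the faster rate dominates pathwise: every pre-arrival workload under mu = 1.5 is no larger than under mu = 1.0, which is the monotonicity a rate-control policy exploits.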


1993 ◽  
Vol 7 (1) ◽  
pp. 69-83 ◽  
Author(s):  
Linn I. Sennott

A Markov decision chain with denumerable state space incurs two types of costs, for example an operating cost and a holding cost. The objective is to minimize the expected average operating cost subject to a constraint on the expected average holding cost. We prove the existence of an optimal constrained randomized stationary policy that mixes two stationary policies differing in at most one state. The examples treated are a packet communication system with a reject option and a single-server queue with service-rate control.
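The flavor of the result can be seen in a two-state toy chain (all numbers assumed, not from the paper): the queue is empty or busy, a randomized stationary policy mixes a slow and a fast service rate at the single busy state, and the mixing probability is tuned by bisection so the average holding cost exactly meets its bound:

```python
lam = 0.5                    # arrival probability per step (assumed)
mu_slow, mu_fast = 0.3, 0.9  # the two rates the policies disagree on (assumed)
H_BOUND = 0.4                # bound on the expected average holding cost

def holding_cost(p):
    """Average holding cost (= stationary probability of the busy state)
    when the fast rate is used with probability p at the busy state."""
    mu = (1 - p) * mu_slow + p * mu_fast   # effective service probability
    return lam / (lam + mu)                # two-state birth-death chain

lo, hi = 0.0, 1.0
for _ in range(60):          # bisect: holding_cost is decreasing in p
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if holding_cost(mid) > H_BOUND else (lo, mid)
p_star = (lo + hi) / 2
```

In this toy instance faster service is more expensive, so among feasible mixtures the operating cost is minimized where the holding constraint binds; randomizing at the single busy state is exactly what achieves that.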

