Optimal control of batch service queues with switching costs

1976 ◽  
Vol 8 (1) ◽  
pp. 177-194 ◽  
Author(s):  
Rajat K. Deb

We consider a batch service queue which is controlled by switching the server on and off, and by controlling the batch size and timing of services. These batch sizes cannot exceed a fixed number Q, which we call the service capacity. Costs are charged for switching the server on and off, for serving customers and for holding them in the system. Viewing the system as a semi-Markov decision process, we show that the policies which minimize the expected continuously discounted cost and the expected cost per unit time over an infinite time horizon are of the following form: at a review point, if the server is off, leave the server off until the number of customers x reaches an optimal level M, then turn the server on and serve min(x, Q) customers; and when the server is on, serve customers in batches of size min(x, Q) until the number of customers falls below an optimal level m (m ≦ M) and then turn the server off. An example for computing these optimal levels is also presented.
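The (m, M) switching policy described in this abstract can be sketched in a small discrete-review simulation. The Bernoulli arrivals (at most one customer per review period) and all unit costs below are illustrative assumptions for the sketch, not values from the paper:

```python
import random

def simulate_m_M_policy(m, M, Q, horizon, arrival_p=0.6, hold_cost=1.0,
                        switch_cost=5.0, serve_cost=2.0, seed=0):
    """Average cost per period of an (m, M) switching policy (assumes m >= 1).

    Arrivals, costs, and review timing are toy assumptions for illustration.
    """
    rng = random.Random(seed)
    x, on, total = 0, False, 0.0
    for _ in range(horizon):
        if rng.random() < arrival_p:      # Bernoulli arrival (assumption)
            x += 1
        if not on and x >= M:             # turn the server on at level M
            on = True
            total += switch_cost
        if on:
            x -= min(x, Q)                # serve a batch of size min(x, Q)
            total += serve_cost
            if x < m:                     # turn the server off below level m
                on = False
                total += switch_cost
        total += hold_cost * x            # holding cost on waiting customers
    return total / horizon
```

For fixed parameters one can sweep (m, M) pairs and pick the cheapest, which is the brute-force analogue of the optimal levels the paper computes.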


1973 ◽  
Vol 5 (2) ◽  
pp. 340-361 ◽  
Author(s):  
Rajat K. Deb ◽  
Richard F. Serfozo

A batch service queue is considered where each batch size and its time of service is subject to control. Costs are incurred for serving the customers and for holding them in the system. Viewing the system as a Markov decision process (i.e., dynamic program) with unbounded costs, we show that policies which minimize the expected continuously discounted cost and the expected cost per unit time over an infinite time horizon are of the form: at a review point when x customers are waiting, serve min {x, Q} customers (Q being the, possibly infinite, service capacity) if and only if x exceeds a certain optimal level M. Methods of computing M for both the discounted and average cost contexts are presented.
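The threshold M can be illustrated by value iteration on a small discretized model. The Bernoulli arrival stream, specific cost values, and state truncation at x_max are assumptions for the sketch; they stand in for, and do not reproduce, the paper's model:

```python
def optimal_threshold(Q, hold=1.0, serve=4.0, arrival_p=0.5,
                      beta=0.95, x_max=30, iters=500):
    """Smallest queue length M at which serving min(x, Q) is optimal,
    found by value iteration on a toy discounted batch-service MDP."""
    def future(V, y):
        # expected next-state value under Bernoulli arrivals (assumption)
        return arrival_p * V[min(y + 1, x_max)] + (1 - arrival_p) * V[y]

    V = [0.0] * (x_max + 1)
    for _ in range(iters):
        V = [min(hold * x + beta * future(V, x),                # wait
                 serve + hold * (x - min(x, Q))                 # serve a batch
                 + beta * future(V, x - min(x, Q)))
             for x in range(x_max + 1)]
    for x in range(1, x_max + 1):                               # scan for M
        wait = hold * x + beta * future(V, x)
        act = serve + hold * (x - min(x, Q)) + beta * future(V, x - min(x, Q))
        if act <= wait:
            return x
    return x_max
```

Raising the service cost pushes the threshold up (or leaves it unchanged), matching the intuition that expensive batches should be dispatched less eagerly.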


Mathematics ◽  
2021 ◽  
Vol 9 (4) ◽  
pp. 419 ◽
Author(s):  
Achyutha Krishnamoorthy ◽  
Anu Nuthan Joshua ◽  
Dmitry Kozyrev

A single-server queuing-inventory system in which arrivals are governed by a batch Markovian arrival process and successive arrival batch sizes form a finite first-order Markov chain is considered in this paper. Service is provided in batches according to a batch Markovian service process, with consecutive service batch sizes forming a finite first-order Markov chain. A service starts for the next batch on completion of the current service, provided that inventory is available at that epoch; otherwise, there will be a delay in starting the next service. When the service of a batch is completed, the inventory decreases by 1 unit, irrespective of batch size. To ensure idle-time utilization, a control policy is proposed in which the server goes on vacation whenever the service process is frozen until a quorum can initiate the next batch service. During the vacation, the server produces inventory (items) for future services until the inventory hits a specified level L or the number of customers in the system reaches the maximum service batch size N, whichever occurs first. In the former case, the server stays idle once the processed inventory level reaches L until the number of customers reaches (or, because of batch arrivals, exceeds) the maximum service batch size N. The time required for processing one unit of inventory follows a phase-type distribution. In this paper, the steady-state probability vector of this infinite system is computed. The distributions of the inventory processing time in a vacation cycle, the idle time in a vacation cycle, and the vacation cycle length are found. The effect of correlation in successive inter-arrival times and service times on performance measures for such a queuing system is illustrated with a numerical example. An optimization problem is considered. The proposed system is then compared, using numerical examples, with a queuing-inventory system without the Markov-dependent assumption on successive arrival and service batch sizes.
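The abstract assumes a phase-type distribution for the time to process one inventory unit. A phase-type draw can be sampled by running the underlying absorbing Markov chain; the Erlang example below (a special phase-type case) and its parameters are illustrative, not taken from the paper:

```python
import random

def sample_phase_type(alpha, T, rng):
    """One draw from a continuous phase-type distribution: start in phase i
    with probability alpha[i], hold an Exp(-T[i][i]) time in each phase,
    jump to phase j with rate T[i][j], absorb with the remaining rate."""
    u, i, acc = rng.random(), 0, alpha[0]
    while u > acc:                        # sample the initial phase
        i += 1
        acc += alpha[i]
    t = 0.0
    while i is not None:
        rate = -T[i][i]
        t += rng.expovariate(rate)        # holding time in the current phase
        u, acc, nxt = rng.random() * rate, 0.0, None
        for j in range(len(T)):           # embedded-chain jump or absorption
            if j != i:
                acc += T[i][j]
                if u < acc:
                    nxt = j
                    break
        i = nxt                           # None means absorption: done
    return t

# Erlang(2) with rate 2 per phase is the phase-type alpha=[1,0],
# T=[[-2,2],[0,-2]]; its mean is 2/2 = 1 (illustrative example)
rng = random.Random(1)
samples = [sample_phase_type([1.0, 0.0], [[-2.0, 2.0], [0.0, -2.0]], rng)
           for _ in range(4000)]
mean = sum(samples) / len(samples)
```

The sample mean should sit close to the theoretical mean of 1, which is a quick sanity check on the sampler.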


Author(s):  
Chaochao Lin ◽  
Matteo Pozzi

Optimal exploration of engineering systems can be guided by the principle of Value of Information (VoI), which accounts for the topological importance of components, their reliability, and the management costs. For series systems, in most cases higher inspection priority should be given to unreliable components. For redundant systems such as parallel systems, analysis of one-shot decision problems shows that higher inspection priority should be given to more reliable components. This paper investigates the optimal exploration of redundant systems in long-term decision making with sequential inspection and repair. When the expected cumulative discounted cost is considered, it may become more efficient to give higher inspection priority to less reliable components, in order to preserve system redundancy. To investigate this problem, we develop a Partially Observable Markov Decision Process (POMDP) framework for sequential inspection and maintenance of redundant systems, where the VoI analysis is embedded in the optimal selection of exploratory actions. We investigate the use of alternative approximate POMDP solvers for parallel and more general systems, compare their computational complexity and performance, and show how the inspection priorities depend on the economic discount factor, the degradation rate, the inspection precision, and the repair cost.
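The one-shot claim for parallel systems can be made concrete in a two-component toy model: the system fails only if both components fail, and a perfect inspection of one component is followed by an optional repair. All probabilities and costs below are illustrative assumptions, not the paper's formulation:

```python
def voi_inspect(p_insp, p_other, c_rep, c_fail):
    """One-shot Value of (perfect) Information for inspecting one component
    of a two-component parallel system. p_insp / p_other are failure
    probabilities; the inspected component can be repaired at cost c_rep;
    failure cost c_fail is incurred only if both components fail.
    Toy model for illustration only."""
    # best action on the prior belief: repair blindly or accept the risk
    prior = min(c_rep, c_fail * p_insp * p_other)
    # after a perfect inspection: repair only if seen failed (cost 0 if working)
    posterior = p_insp * min(c_rep, c_fail * p_other)
    return prior - posterior

# reliable (failure prob 0.05) vs unreliable (0.2) component, repair 1, failure 10
voi_reliable = voi_inspect(0.05, 0.2, 1.0, 10.0)
voi_unreliable = voi_inspect(0.2, 0.05, 1.0, 10.0)
```

With these numbers, inspecting the more reliable component yields the higher VoI, reproducing the abstract's one-shot observation for parallel systems.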


Author(s):  
Nicole Bäuerle ◽  
Alexander Glauner

We study the minimization of a spectral risk measure of the total discounted cost generated by a Markov Decision Process (MDP) over a finite or infinite planning horizon. The MDP is assumed to have Borel state and action spaces and the cost function may be unbounded above. The optimization problem is split into two minimization problems using an infimum representation for spectral risk measures. We show that the inner minimization problem can be solved as an ordinary MDP on an extended state space and give sufficient conditions under which an optimal policy exists. Regarding the infinite dimensional outer minimization problem, we prove the existence of a solution and derive an algorithm for its numerical approximation. Our results include the findings in Bäuerle and Ott (Math Methods Oper Res 74(3):361–379, 2011) in the special case that the risk measure is Expected Shortfall. As an application, we present a dynamic extension of the classical static optimal reinsurance problem, where an insurance company minimizes its cost of capital.
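For the Expected Shortfall special case mentioned in the abstract, the infimum representation is the well-known Rockafellar–Uryasev formula ES_a(X) = min_q { q + E[(X - q)^+] / (1 - a) }. On an empirical cost sample the objective is piecewise linear and convex, so the minimum is attained at one of the sample points; the sketch below exploits that (sample values are illustrative):

```python
def expected_shortfall(costs, a):
    """Expected Shortfall of an empirical cost sample at level a, computed
    via the infimum representation min_q q + E[(X - q)^+] / (1 - a).
    The minimizer can be restricted to the sample points (order statistics)
    because the objective is piecewise linear and convex in q."""
    n = len(costs)
    return min(q + sum(max(c - q, 0.0) for c in costs) / ((1 - a) * n)
               for q in costs)

# on [1, 2, 3, 4]: ES at level 0.5 is the mean of the worst half, (3+4)/2
es = expected_shortfall([1.0, 2.0, 3.0, 4.0], 0.5)
```

At a = 0 the formula collapses to the plain mean, which is a useful sanity check.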


2020 ◽  
Vol 54 (4) ◽  
pp. 1016-1033 ◽  
Author(s):  
Marlin W. Ulmer

An increasing number of e-commerce retailers offer same-day delivery. To deliver the ordered goods, providers dynamically dispatch a fleet of vehicles transporting the goods from the warehouse to the customers. In many cases, retailers offer different delivery deadline options, from four-hour delivery up to next-hour delivery. Due to the deadlines, vehicles often only deliver a few orders per trip. The overall number of served orders within the delivery horizon is small and the revenue low. As a result, many companies currently struggle to conduct same-day delivery cost-efficiently. In this paper, we show how dynamic pricing can substantially increase both revenue and the number of customers served the same day. To this end, we present an anticipatory pricing and routing policy (APRP) that incentivizes customers to select delivery deadline options the fleet can fulfill efficiently. This maintains the fleet’s flexibility to serve more future orders. We model the respective pricing and routing problem as a Markov decision process (MDP). To apply APRP, the state-dependent opportunity costs per customer and option are required. To this end, we use a guided offline value function approximation (VFA) based on state space aggregation. The VFA approximates the opportunity cost for every state and delivery option with respect to the fleet’s flexibility. As an offline method, APRP is able to determine suitable prices instantly when a customer orders. In an extensive computational study, we compare APRP with a policy based on fixed prices and with conventional temporal and geographical pricing policies. APRP outperforms the benchmark policies significantly, leading to both higher revenue and more customers served the same day.
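The pricing idea in this abstract, quoting each deadline option at base fare plus the opportunity cost of accepting it, can be sketched with a toy lookup table standing in for the offline VFA. The state definition (a remaining fleet time budget), the table values, and the per-option budget consumption are all invented for illustration; they are not the paper's APRP implementation:

```python
def anticipatory_price(base, state, option, V, transition):
    """Quote base fare plus the opportunity cost V(s) - V(s'): the expected
    future revenue the fleet gives up by accepting this order under the
    chosen deadline option. V is a toy value-function table (assumption)."""
    return base + (V[state] - V[transition(state, option)])

# toy aggregated state: remaining fleet time budget; tighter deadlines
# consume more of it (illustrative numbers, not from the paper)
V = {3: 30.0, 2: 22.0, 1: 12.0, 0: 0.0}
budget_use = {"4h": 1, "1h": 2}
transition = lambda s, o: max(s - budget_use[o], 0)

price_4h = anticipatory_price(5.0, 3, "4h", V, transition)  # 5 + (30 - 22)
price_1h = anticipatory_price(5.0, 3, "1h", V, transition)  # 5 + (30 - 12)
```

The tighter next-hour option consumes more of the fleet's flexibility, so it carries the larger opportunity-cost markup, which is exactly the steering effect the abstract describes.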

