Resource pooling in queueing networks with dynamic routing

1992 ◽  
Vol 24 (3) ◽  
pp. 699-726 ◽  
Author(s):  
C. N. Laws

In this paper we investigate dynamic routing in queueing networks. We show that there is a heavy traffic limiting regime in which a network model based on Brownian motion can be used to approximate and solve an optimal control problem for a queueing network with multiple customer types. Under the solution of this approximating problem the network behaves as if the service stations of the original system were combined to form a single pooled resource. This resource pooling is a result of dynamic routing; it can be achieved by a form of shortest expected delay routing, and we find that dynamic routing can offer substantial improvements over less responsive routing strategies.
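As a purely illustrative sketch of the routing idea (not the paper's exact policy, whose delay estimates come from the Brownian approximation), the following Python fragment sends each arriving customer to the station whose expected delay, crudely approximated by queue length divided by service rate, is smallest. All names and numbers are hypothetical.

```python
import random

def shortest_expected_delay_route(queue_lengths, service_rates):
    """Pick the station with the smallest (rough) expected delay.

    Expected delay at station i is approximated by the number of customers
    already present divided by the service rate; ties are broken at random.
    """
    delays = [q / mu for q, mu in zip(queue_lengths, service_rates)]
    best = min(delays)
    candidates = [i for i, d in enumerate(delays) if d == best]
    return random.choice(candidates)

# Example: the second station is more loaded but twice as fast.
print(shortest_expected_delay_route([3, 5], [1.0, 2.0]))  # -> 1, since 5/2 < 3/1
```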


2011 ◽  
Vol 48 (01) ◽  
pp. 145-153 ◽  
Author(s):  
Chihoon Lee

We consider a d-dimensional reflected fractional Brownian motion (RFBM) process on the positive orthant S = ℝ₊^d, with drift r₀ ∈ ℝ^d and Hurst parameter H ∈ (½, 1). Under a natural stability condition on the drift vector r₀ and the reflection directions, we establish a return time result for the RFBM process Z; that is, for some δ, κ > 0, sup_{x ∈ B} E_x[τ_B(δ)] < ∞, where B = {x ∈ S : |x| ≤ κ} and τ_B(δ) = inf{t ≥ δ : Z(t) ∈ B}. Similar results are known for reflected processes driven by standard Brownian motions, and our result can be viewed as their FBM counterpart. Our motivation for this study is that RFBM appears as a limiting workload process for fluid queueing network models fed by a large number of heavy-tailed ON/OFF sources in heavy traffic.
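For intuition, a one-dimensional caricature of the return-time quantity can be simulated directly: generate fractional Gaussian noise, apply the one-sided Skorokhod reflection map at the origin, and record the first time after δ at which the reflected path re-enters [0, κ]. This is only a hedged illustration; the paper's setting is d-dimensional with oblique reflection, and all parameters below are made up.

```python
import numpy as np

def fgn(n, hurst, dt, rng):
    """Exact fractional Gaussian noise increments via Cholesky factorisation
    of the Toeplitz autocovariance matrix (O(n^3), fine for a short path)."""
    k = np.arange(n)
    gamma = 0.5 * (np.abs(k - 1) ** (2 * hurst) - 2 * np.abs(k) ** (2 * hurst)
                   + np.abs(k + 1) ** (2 * hurst)) * dt ** (2 * hurst)
    cov = gamma[np.abs(k[:, None] - k[None, :])]
    return np.linalg.cholesky(cov) @ rng.standard_normal(n)

def return_time(x0, drift, hurst, kappa, delta, dt=0.05, horizon=50.0, seed=0):
    """One-dimensional analogue of tau_B(delta) = inf{t >= delta : Z(t) <= kappa},
    where Z is the free path x0 + drift*t + B_H(t) reflected at the origin."""
    rng = np.random.default_rng(seed)
    n = int(horizon / dt)
    x = x0 + np.concatenate(([0.0], np.cumsum(drift * dt + fgn(n, hurst, dt, rng))))
    z = x - np.minimum(0.0, np.minimum.accumulate(x))   # one-sided Skorokhod reflection
    t = np.arange(n + 1) * dt
    hits = np.where((t >= delta) & (z <= kappa))[0]
    return t[hits[0]] if hits.size else np.inf

print(return_time(x0=5.0, drift=-1.0, hurst=0.7, kappa=1.0, delta=0.5))
```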


2022 ◽  
Author(s):  
Varun Gupta ◽  
Jiheng Zhang

The paper studies approximations and control of a processor sharing (PS) server whose service rate depends on the number of jobs occupying the server. Control of such a system is implemented by imposing a limit on the number of jobs that can share the server concurrently, with the remaining jobs waiting in a first-in-first-out (FIFO) buffer. A desirable control scheme should strike the right balance between efficiency (operating at a high service rate) and parallelism (preventing small jobs from getting stuck behind large ones). We use the framework of heavy-traffic diffusion analysis to devise near-optimal control heuristics for such a queueing system. However, whereas the literature on diffusion control of state-dependent queueing systems begins with a sequence of systems and an exogenously defined drift function, we begin with a finite discrete PS server and propose an axiomatic recipe to explicitly construct a sequence of state-dependent PS servers that then yields a drift function. We establish diffusion approximations and use them to obtain insightful, closed-form approximations for the original system under a static concurrency limit control policy. We extend our study to control policies that dynamically adjust the concurrency limit. We provide two novel numerical algorithms to solve the associated diffusion control problem. Our algorithms can be viewed as “average cost” iteration: the first uses a binary search on the average cost, while the second, faster algorithm uses the Newton-Raphson method for root finding. Numerical experiments demonstrate the accuracy of our approximation for choosing optimal or near-optimal static and dynamic concurrency control heuristics.
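The static concurrency-limit idea can be illustrated without the diffusion machinery: treat the system as a birth-death chain whose aggregate service rate with n jobs present is μ(min(n, c)) for a static limit c, compute the stationary mean number in system, and scan over c. This Markovian caricature and the service-rate curve below are assumptions made here for illustration only, not the paper's state-dependent PS construction or its diffusion control algorithms.

```python
import numpy as np

def mean_jobs(lam, mu, c, max_jobs=500):
    """Time-average number in system for a birth-death caricature of the
    PS server: arrival rate lam, service rate mu(min(n, c)) with n jobs."""
    log_pi = np.zeros(max_jobs + 1)          # unnormalised log stationary probabilities
    for n in range(1, max_jobs + 1):
        log_pi[n] = log_pi[n - 1] + np.log(lam) - np.log(mu(min(n, c)))
    pi = np.exp(log_pi - log_pi.max())
    pi /= pi.sum()
    return float(np.arange(max_jobs + 1) @ pi)

# Hypothetical service-rate curve: throughput rises with concurrency, then
# degrades once the server is over-shared (peak near 4 concurrent jobs).
mu = lambda k: 4.0 * k / (1.0 + (k / 4.0) ** 2)

best_c = min(range(1, 21), key=lambda c: mean_jobs(4.0, mu, c))
print(best_c, mean_jobs(4.0, mu, best_c))
```

A dynamic policy, as studied in the paper, would instead let the limit depend on the current number in system; the same enumeration idea then runs over threshold rules rather than a single constant.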


2011 ◽  
Vol 48 (03) ◽  
pp. 820-831
Author(s):  
Chihoon Lee

We study a d-dimensional reflected fractional Brownian motion (RFBM) process on the positive orthant S = ℝ₊^d, with drift r₀ ∈ ℝ^d and Hurst parameter H ∈ (½, 1). Under a natural stability condition on the drift vector r₀ and the reflection directions, we establish a geometric drift towards a compact set for the 1-skeleton chain Ž of the RFBM process Z; that is, there exist β, b ∈ (0, ∞) and a compact set C ⊂ S such that ΔV(x) := E_x[V(Ž(1))] − V(x) ≤ −βV(x) + b·1_C(x), x ∈ S, for an exponentially growing Lyapunov function V : S → [1, ∞). For a wide class of Markov processes, such a drift inequality is known to be a necessary and sufficient condition for exponential ergodicity. Indeed, similar drift inequalities have been established for reflected processes driven by standard Brownian motions, and our result can be viewed as their fractional Brownian motion counterpart. We also establish that the return times to the set C itself are geometrically bounded. Motivation for this study is that RFBM appears as a limiting workload process for fluid queueing network models fed by a large number of heavy-tailed ON/OFF sources in heavy traffic.
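For reference, the exponential ergodicity that such a Foster-Lyapunov inequality characterises (in the Meyn-Tweedie sense; stated here as standard background rather than as a claim from this paper) reads:

```latex
% V-uniform (exponential) ergodicity in the Meyn--Tweedie sense:
% there exist M < \infty and \rho \in (0,1) such that
\[
  \bigl\| P^{t}(x,\cdot) - \pi \bigr\|_{V} \;\le\; M \, V(x) \, \rho^{t},
  \qquad x \in S, \; t \ge 0,
\]
% where \pi is the stationary distribution and
% \|\mu\|_{V} = \sup_{|g| \le V} |\mu(g)| is the V-weighted total variation norm.
```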


1996 ◽  
Vol 33 (03) ◽  
pp. 870-885
Author(s):  
William P. Peterson ◽  
Lawrence M. Wein

We study a model of a stochastic transportation system introduced by Crane. By adapting constructions of multidimensional reflected Brownian motion (RBM) that have since been developed for feedforward queueing networks, we generalize Crane's original functional central limit theorem results to a full vector setting, giving an explicit development for the case in which all terminals in the model experience heavy traffic conditions. We investigate product form conditions for the stationary distribution of our resulting RBM limit, and contrast our results for transportation networks with those for traditional queueing network models.
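As background on what a product-form stationary distribution means for an RBM limit, the classical skew-symmetry condition of Harrison and Williams is recalled below under the usual normalisation (reflection matrix R with unit diagonal); this is a standard fact quoted from memory, not a statement taken from this paper.

```latex
% An RBM with covariance matrix \Gamma and reflection matrix R (unit diagonal)
% has an exponential product-form stationary density exactly when
\[
  2\,\Gamma \;=\; R\,\Delta + \Delta\,R^{\mathsf{T}},
  \qquad \Delta = \operatorname{diag}(\Gamma_{11}, \dots, \Gamma_{dd}),
\]
% in which case the stationary density factorises into a product of
% one-dimensional exponential densities.
```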


2018 ◽  
Vol 24 (2) ◽  
pp. 873-899 ◽  
Author(s):  
Mingshang Hu ◽  
Falei Wang

The present paper considers a stochastic optimal control problem in which the cost function is defined through a backward stochastic differential equation with infinite horizon driven by G-Brownian motion. We then study the regularity of the value function and establish the dynamic programming principle. Moreover, we prove that the value function is the unique viscosity solution of the related Hamilton-Jacobi-Bellman-Isaacs (HJBI) equation.
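Schematically, and on a finite horizon for readability, a BSDE driven by G-Brownian motion B has the shape below; this is the standard form from G-expectation theory, given as hedged background, while the paper's infinite-horizon version involves a discounted driver and additional integrability conditions.

```latex
\[
  Y_t = \xi + \int_t^T f(s, Y_s, Z_s)\,\mathrm{d}s
        + \int_t^T g(s, Y_s, Z_s)\,\mathrm{d}\langle B\rangle_s
        - \int_t^T Z_s\,\mathrm{d}B_s - (K_T - K_t),
\]
% where K is a non-increasing G-martingale with K_0 = 0; the cost functional
% of the control problem is then the value Y_0 under each admissible control.
```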


2000 ◽  
Vol 37 (01) ◽  
pp. 212-223 ◽  
Author(s):  
Stephen R. E. Turner

We prove a new heavy traffic limit result for a simple queueing network under a ‘join the shorter queue’ policy, with the amount of traffic which has a routeing choice tending to zero as heavy traffic is approached. In this limit, the system considered does not exhibit state space collapse as in previous work by Foschini and Salz, and Reiman, but there is nevertheless some resource pooling gain over a policy of random routeing.
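Turner's regime can be explored numerically with a toy model: two exponential servers, each with its own dedicated Poisson stream, plus a thin flexible stream that either joins the shorter queue or is routed at random. The simulation below is only an illustration of that comparison; the rates, horizon, and even the Markovian assumptions are choices made here, not taken from the paper.

```python
import random

def simulate(total_time, lam_dedicated, lam_flexible, mu, policy, seed=0):
    """Event-driven simulation of two exponential single-server queues.
    Each queue has its own dedicated Poisson stream; the thin flexible
    stream is routed by `policy`. Returns the time-average total number
    of customers in the system."""
    rng = random.Random(seed)
    queues, t, area = [0, 0], 0.0, 0.0
    while t < total_time:
        rates = [lam_dedicated, lam_dedicated, lam_flexible,
                 mu * (queues[0] > 0), mu * (queues[1] > 0)]
        dt = rng.expovariate(sum(rates))
        area += sum(queues) * dt
        t += dt
        event = rng.choices(range(5), weights=rates)[0]
        if event < 2:                      # dedicated arrival at queue 0 or 1
            queues[event] += 1
        elif event == 2:                   # flexible arrival: apply routing policy
            queues[policy(queues, rng)] += 1
        else:                              # departure from queue 0 or 1
            queues[event - 3] -= 1
    return area / t

join_shorter = lambda q, rng: 0 if q[0] < q[1] else 1 if q[1] < q[0] else rng.randrange(2)
random_route = lambda q, rng: rng.randrange(2)

# Heavy traffic (load about 0.98 per server) with only a small routeable fraction.
for name, policy in [("join shorter queue", join_shorter), ("random routeing", random_route)]:
    print(name, simulate(1e5, lam_dedicated=0.46, lam_flexible=0.06, mu=0.5, policy=policy))
```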


2019 ◽  
Vol 25 ◽  
pp. 31 ◽  
Author(s):  
Fulvia Confortola ◽  
Andrea Cosso ◽  
Marco Fuhrman

We study an optimal control problem on infinite horizon for a controlled stochastic differential equation driven by Brownian motion, with a discounted reward functional. The equation may have memory or delay effects in the coefficients, both with respect to state and control, and the noise can be degenerate. We prove that the value, i.e. the supremum of the reward functional over all admissible controls, can be represented by the solution of an associated backward stochastic differential equation (BSDE) driven by the Brownian motion and an auxiliary independent Poisson process and having a sign constraint on jumps. In the Markovian case when the coefficients depend only on the present values of the state and the control, we prove that the BSDE can be used to construct the solution, in the sense of viscosity theory, to the corresponding Hamilton-Jacobi-Bellman partial differential equation of elliptic type on the whole space, so that it provides us with a Feynman-Kac representation in this fully nonlinear context. The method of proof consists in showing that the value of the original problem is the same as the value of an auxiliary optimal control problem (called randomized), where the control process is replaced by a fixed pure jump process and maximization is taken over a class of absolutely continuous changes of measures which affect the stochastic intensity of the jump process but leave the law of the driving Brownian motion unchanged.

