Stochastic Systems
Latest Publications


TOTAL DOCUMENTS: 189 (last five years: 66)

H-INDEX: 15 (last five years: 3)

Published by the Institute for Operations Research and the Management Sciences (INFORMS)

ISSN: 1946-5238

2022
Author(s): Varun Gupta, Jiheng Zhang

The paper studies approximations and control of a processor sharing (PS) server whose service rate depends on the number of jobs occupying the server. The control of such a system is implemented by imposing a limit on the number of jobs that can share the server concurrently, with the remaining jobs waiting in a first-in-first-out (FIFO) buffer. A desirable control scheme should strike the right balance between efficiency (operating at a high service rate) and parallelism (preventing small jobs from getting stuck behind large ones). We use the framework of heavy-traffic diffusion analysis to devise near-optimal control heuristics for such a queueing system. However, whereas the literature on diffusion control of state-dependent queueing systems begins with a sequence of systems and an exogenously defined drift function, we begin with a finite discrete PS server and propose an axiomatic recipe to explicitly construct a sequence of state-dependent PS servers that then yields a drift function. We establish diffusion approximations and use them to obtain insightful, closed-form approximations for the original system under a static concurrency-limit control policy. We extend our study to control policies that dynamically adjust the concurrency limit. We provide two novel numerical algorithms to solve the associated diffusion control problem. Our algorithms can be viewed as “average cost” iteration: the first uses binary search on the average cost, while the second, faster algorithm uses the Newton-Raphson method for root finding. Numerical experiments demonstrate the accuracy of our approximation for choosing optimal or near-optimal static and dynamic concurrency control heuristics.
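The “average cost” iteration described above can be illustrated with a generic root-finding sketch. The function `excess` below is a hypothetical monotone stand-in for the quantity that the diffusion control problem drives to zero at the optimal average cost; it is not the paper's actual system of equations.

```python
# Minimal sketch of "average cost" iteration: find the cost level c* at which a
# problem-specific excess function crosses zero, via bisection or Newton-Raphson.
# `excess` is a hypothetical placeholder, not the paper's diffusion control equations.

def excess(c: float) -> float:
    # Placeholder: increasing in c, with its root at c = 2.0.
    return c**3 - 8.0

def bisection(f, lo, hi, tol=1e-8):
    """Binary search on the average cost; assumes f(lo) < 0 < f(hi)."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) < 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def newton(f, c0, tol=1e-10, h=1e-6, max_iter=100):
    """Newton-Raphson root finding with a central-difference derivative."""
    c = c0
    for _ in range(max_iter):
        fc = f(c)
        if abs(fc) < tol:
            break
        dfc = (f(c + h) - f(c - h)) / (2.0 * h)
        c -= fc / dfc
    return c

print(bisection(excess, 0.0, 10.0))  # ~2.0
print(newton(excess, 1.0))           # ~2.0, typically in far fewer iterations
```

Bisection needs only sign information and is robust; Newton-Raphson converges faster when the derivative is well behaved, which matches the abstract's description of the second, faster algorithm.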


2021
Author(s): Anton Braverman

This paper uses the generator comparison approach of Stein’s method to analyze the gap between steady-state distributions of Markov chains and diffusion processes. The “standard” generator comparison approach starts with the Poisson equation for the diffusion, and the main technical difficulty is to obtain bounds on the derivatives of the solution to the Poisson equation, also known as Stein factor bounds. In this paper we propose starting with the Poisson equation of the Markov chain; we term this the prelimit approach. Although one still needs Stein factor bounds, they now correspond to finite differences of the Markov chain Poisson equation solution rather than the derivatives of the solution to the diffusion Poisson equation. In certain cases, the former are easier to obtain. We use the [Formula: see text] model as a simple working example to illustrate our approach.
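A schematic of the two routes, in generic notation not taken from the paper: let X be the Markov chain with generator G_X and stationary distribution ν, and let Y be the diffusion with generator G_Y and stationary distribution π, assuming enough regularity for the Poisson equations to have well-behaved solutions and for the stationarity identities below to hold.

```latex
% Standard approach: solve the diffusion Poisson equation
\[
  G_Y f_h(x) = \mathbb{E}_\pi[h(Y)] - h(x),
\]
% take expectations under \nu and use \mathbb{E}_\nu[G_X f_h(X)] = 0 to obtain
\[
  \mathbb{E}_\nu[h(X)] - \mathbb{E}_\pi[h(Y)]
    = \mathbb{E}_\nu\bigl[(G_X - G_Y) f_h(X)\bigr],
\]
% which is then bounded using derivative (Stein factor) bounds on f_h.

% Prelimit approach: solve the Markov chain Poisson equation instead,
\[
  G_X g_h(x) = \mathbb{E}_\nu[h(X)] - h(x),
\]
% take expectations under \pi and use \mathbb{E}_\pi[G_Y g_h(Y)] = 0 to obtain
\[
  \mathbb{E}_\pi[h(Y)] - \mathbb{E}_\nu[h(X)]
    = \mathbb{E}_\pi\bigl[(G_Y - G_X) g_h(Y)\bigr],
\]
% which is bounded using finite-difference (Stein factor) bounds on g_h.
```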


2021
Vol. 11 (4), pp. 349-351

The journal is pleased to publish the abstracts of the winner and finalists of the 2019 Applied Probability Society’s student paper competition. The 2019 student paper prize committee was chaired by Amy Ward. The 2019 committee members were (in alphabetical order by last name): Reza Aghajani, Pelin Canbolat, Jing Dong, Johan van Leeuwaarden, Ilya Ryzhov, Assaf Zeevi, Jiheng Zhang, and Serhan Ziya.


2021
Author(s): Justin Sirignano, Konstantinos Spiliopoulos

We prove that a single-layer neural network trained with the Q-learning algorithm converges in distribution to a random ordinary differential equation as the size of the model and the number of training steps become large. Analysis of the limit differential equation shows that it has a unique stationary solution that is the solution of the Bellman equation, thus giving the optimal control for the problem. In addition, we study the convergence of the limit differential equation to the stationary solution. As a by-product of our analysis, we obtain the limiting behavior of single-layer neural networks when trained on independent and identically distributed data with stochastic gradient descent under the widely used Xavier initialization.
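A minimal numerical sketch of the objects in this abstract: a single-hidden-layer network with Xavier initialization, trained with the semi-gradient Q-learning update on a made-up five-state chain. The environment, network width, and all hyperparameters are illustrative assumptions, not the setting analyzed in the paper.

```python
import numpy as np

# Single-hidden-layer Q-network with Xavier initialization, updated with the
# semi-gradient Q-learning rule on a toy 5-state chain (move left/right,
# reward 1 for reaching the last state). Purely illustrative.

rng = np.random.default_rng(0)
n_states, n_actions, n_hidden = 5, 2, 64
gamma, lr, eps = 0.9, 0.05, 0.5

# Xavier (Glorot) initialization: weight variance scales like 1 / fan_in.
W1 = rng.normal(0.0, np.sqrt(1.0 / n_states), (n_hidden, n_states))
W2 = rng.normal(0.0, np.sqrt(1.0 / n_hidden), (n_actions, n_hidden))

def q_values(s):
    x = np.eye(n_states)[s]      # one-hot state encoding
    h = np.tanh(W1 @ x)          # hidden layer
    return W2 @ h, h, x

def step(s, a):
    s_next = max(0, min(n_states - 1, s + (1 if a == 1 else -1)))
    return s_next, float(s_next == n_states - 1)

s = 0
for _ in range(20000):
    q, h, x = q_values(s)
    a = rng.integers(n_actions) if rng.random() < eps else int(np.argmax(q))
    s_next, r = step(s, a)
    td_error = r + gamma * np.max(q_values(s_next)[0]) - q[a]
    grad_W1 = np.outer(W2[a] * (1.0 - h**2), x)   # backprop through tanh
    W2[a] += lr * td_error * h                    # semi-gradient updates
    W1 += lr * td_error * grad_W1
    s = 0 if r > 0 else s_next                    # reset after reaching the goal

print(np.argmax(q_values(0)[0]))  # expected to prefer action 1 (move right)
```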


2021
Author(s): Mine Su Erturk, Kuang Xu

We propose and analyze a recipient-anonymous stochastic routing model to study a fundamental trade-off between anonymity and routing delay. An agent wants to quickly reach a goal vertex in a network through a sequence of routing actions, whereas an overseeing adversary observes the agent’s entire trajectory and tries to identify the agent’s goal among those vertices traversed. We are interested in understanding the probability that the adversary can correctly identify the agent’s goal (anonymity) as a function of the time it takes the agent to reach it (delay). A key feature of our model is the presence of intrinsic uncertainty in the environment, so that each of the agent’s intended steps is subject to random perturbation and thus may not materialize as planned. Using large-network asymptotics, our main results provide near-optimal characterization of the anonymity–delay trade-off under a number of network topologies. Our main technical contributions are centered on a new class of “noise-harnessing” routing strategies that adaptively combine intrinsic uncertainty from the environment with additional artificial randomization to achieve provably efficient obfuscation.
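For intuition only, the sketch below implements a routing strategy in the spirit of noise harnessing on a cycle graph: the agent mixes a greedy step toward the goal with deliberate random steps, on top of the environment's own perturbations. The graph, the noise level delta, and the mixing probability p are assumptions for illustration, not the paper's model or its optimal strategy.

```python
import random

# Toy routing sketch on the n-cycle: blend the greedy step toward the goal with
# artificial randomization (probability p), while every intended step is also
# perturbed by intrinsic environment noise (probability delta).

def route(n=50, start=0, goal=25, delta=0.2, p=0.3, rng=random.Random(1)):
    """Walk on the n-cycle from start to goal; return the visited trajectory."""
    v, path = start, [start]
    while v != goal:
        if rng.random() < p:                      # artificial randomization
            intended = rng.choice([v - 1, v + 1])
        else:                                     # greedy step toward the goal
            intended = v + 1 if (goal - v) % n <= n // 2 else v - 1
        if rng.random() < delta:                  # intrinsic environment noise
            intended = rng.choice([v - 1, v + 1])
        v = intended % n
        path.append(v)
    return path

traj = route()
# Delay proxy: number of steps; crude obfuscation proxy: how many distinct
# vertices the adversary sees in the trajectory.
print(len(traj) - 1, "steps;", len(set(traj)), "distinct vertices traversed")
```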


2021
Author(s): Nick Arnosti

This paper studies the performance of greedy matching algorithms on bipartite graphs [Formula: see text]. We focus primarily on three classical algorithms: [Formula: see text], which sequentially selects random edges from [Formula: see text]; [Formula: see text], which sequentially matches random vertices in [Formula: see text] to random neighbors; and [Formula: see text], which generates a random priority order over vertices in [Formula: see text] and then sequentially matches random vertices in [Formula: see text] to their highest-priority remaining neighbor. Prior work has focused on identifying the worst-case approximation ratio for each algorithm. This guarantee is highest for [Formula: see text] and lowest for [Formula: see text]. Our work instead studies the average performance of these algorithms when the edge set [Formula: see text] is random. Our first result compares [Formula: see text] and [Formula: see text] and shows that, on average, [Formula: see text] produces more matches. This result holds for finite graphs (in contrast to previous asymptotic results) and also applies to “many to one” matching, in which each vertex in [Formula: see text] can match with multiple vertices in [Formula: see text]. Our second result compares [Formula: see text] and [Formula: see text] and shows that the better worst-case guarantee of [Formula: see text] does not translate into better average performance. In “one to one” settings, where each vertex in [Formula: see text] can match with only one vertex in [Formula: see text], the two algorithms result in the same number of matches. When each vertex in [Formula: see text] can match with two vertices in [Formula: see text], [Formula: see text] produces more matches than [Formula: see text].
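The three algorithms can be sketched as follows. Because the algorithm names in the abstract did not survive extraction, the function names below are descriptive stand-ins rather than the paper's notation, and the random bipartite graph at the end is only a toy experiment.

```python
import random

# Sketches of three greedy matching schemes on a bipartite graph with sides
# U and V and edge set E. Names are descriptive stand-ins, not the paper's.

def greedy_random_edges(E, rng):
    """Sequentially pick random edges; keep an edge if both endpoints are free."""
    matched_u, matched_v, m = set(), set(), 0
    for u, v in rng.sample(E, len(E)):
        if u not in matched_u and v not in matched_v:
            matched_u.add(u); matched_v.add(v); m += 1
    return m

def greedy_random_neighbor(adj, rng):
    """Match random vertices of U to a uniformly random free neighbor in V."""
    matched_v, m = set(), 0
    for u in rng.sample(list(adj), len(adj)):
        free = [v for v in adj[u] if v not in matched_v]
        if free:
            matched_v.add(rng.choice(free)); m += 1
    return m

def ranking(adj, V, rng):
    """Random priority order over V; each u takes its highest-priority free neighbor."""
    priority = {v: i for i, v in enumerate(rng.sample(V, len(V)))}
    matched_v, m = set(), 0
    for u in rng.sample(list(adj), len(adj)):
        free = [v for v in adj[u] if v not in matched_v]
        if free:
            matched_v.add(min(free, key=priority.get)); m += 1
    return m

# Small illustrative experiment on an Erdos-Renyi style bipartite graph.
rng = random.Random(0)
U, V = range(30), range(30)
E = [(u, v) for u in U for v in V if rng.random() < 0.1]
adj = {u: [v for (x, v) in E if x == u] for u in U}
print(greedy_random_edges(E, rng), greedy_random_neighbor(adj, rng), ranking(adj, list(V), rng))
```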


2021
Author(s): Rami Atar, Prasenjit Karmakar, David Lipshutz

We study a many-server queueing model with server vacations, where the population size dynamics of servers and customers are coupled: a server may leave for vacation only when no customers await, and the capacity available to customers is directly affected by the number of servers on vacation. We focus on scaling regimes in which server dynamics and queue dynamics fluctuate at matching time scales, so that their limiting dynamics are coupled. Specifically, we argue that interesting coupled dynamics occur in (a) the Halfin–Whitt regime, (b) the nondegenerate slowdown regime, and (c) the intermediate near Halfin–Whitt regime, whereas the dynamics asymptotically decouple in the other heavy-traffic regimes. We characterize the limiting dynamics, which are different for each scaling regime. For regimes (a) and (b), we consider the relevant performance measures, namely the probability of wait and the slowdown. Although closed-form formulas for these performance measures have been derived for models without server vacations, such formulas are difficult to obtain in the setting with server vacations. Instead, we propose approximating formulas that depend on the steady-state mean number of available servers and on previously derived formulas for models without server vacations. We test the accuracy of these formulas numerically.
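For reference, the classical benchmark without vacations in regime (a) is the Halfin-Whitt formula for the probability of wait, stated below in generic M/M/n notation; the paper's proposed approximations for the vacation model build on previously derived formulas of this type.

```latex
% Classical benchmark without vacations (Halfin and Whitt, 1981), in generic
% M/M/n notation: n servers, service rate \mu, and arrival rate scaled as
\[
  \lambda_n = n\mu - \beta\mu\sqrt{n}, \qquad \beta > 0.
\]
% The steady-state probability that an arriving customer must wait converges to
\[
  \lim_{n\to\infty} \mathbb{P}(\mathrm{wait} > 0)
    = \left( 1 + \frac{\beta\,\Phi(\beta)}{\phi(\beta)} \right)^{-1},
\]
% where \Phi and \phi denote the standard normal c.d.f. and density.
```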


2021
Author(s): Charles-Albert Lehalle, Othmane Mounjid, Mathieu Rosenbaum

We consider an agent who needs to buy (or sell) a relatively small quantity of assets over a fixed, short time interval. We work at the highest frequency, meaning that we wish to find the optimal tactic for executing this quantity using limit orders, market orders, and cancellations. To solve the agent’s control problem, we build an order book model and optimize an expected utility function based on our price impact. We derive the equations satisfied by the optimal strategy and solve them numerically. Moreover, we show that our optimal tactic significantly outperforms naive execution strategies.
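As a toy illustration of the kind of numerical control problem involved (and nothing more), the sketch below runs backward induction for a drastically simplified execution problem: buy Q units over T steps, choosing at each step between a limit order that fills with probability p at a better price and a market order that fills for sure at a worse price. The prices, fill probability, and terminal penalty are invented; this is not the paper's order book model or utility function.

```python
from functools import lru_cache

# Toy backward induction: buy Q units over T steps; at each step place either a
# limit order (fills with probability p at price s - tick) or a market order
# (fills for sure at price s + tick). All numbers are illustrative assumptions.

T, Q = 10, 4
s, tick, p = 100.0, 0.01, 0.4
penalty = 1.0          # per-unit cost of inventory still unfilled at time T

@lru_cache(maxsize=None)
def cost_to_go(t, q):
    """Minimal expected cost of buying the remaining q units from step t on."""
    if q == 0:
        return 0.0
    if t == T:
        return q * (s + tick + penalty)       # forced completion plus shortfall
    limit = p * ((s - tick) + cost_to_go(t + 1, q - 1)) \
        + (1 - p) * cost_to_go(t + 1, q)
    market = (s + tick) + cost_to_go(t + 1, q - 1)
    return min(limit, market)

def policy(t, q):
    """Greedy action implied by the value function at state (t, q)."""
    if q == 0 or t == T:
        return "done"
    limit = p * ((s - tick) + cost_to_go(t + 1, q - 1)) \
        + (1 - p) * cost_to_go(t + 1, q)
    market = (s + tick) + cost_to_go(t + 1, q - 1)
    return "limit" if limit <= market else "market"

print(cost_to_go(0, Q))
print([policy(t, Q) for t in range(T)])  # tends to switch to market orders near the deadline
```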


2021
Author(s): Tianyi Liu, Zhehui Chen, Enlu Zhou, Tuo Zhao

The momentum stochastic gradient descent (MSGD) algorithm has been widely applied to nonconvex optimization problems in machine learning, such as training deep neural networks and variational Bayesian inference. Despite its empirical success, there is still a lack of theoretical understanding of the convergence properties of MSGD. To fill this gap, we analyze the algorithmic behavior of MSGD via diffusion approximations for nonconvex optimization problems with strict saddle points and isolated local optima. Our study shows that momentum helps the iterates escape from saddle points but hurts convergence within the neighborhood of optima (absent step-size or momentum annealing). Our theoretical findings partially corroborate the empirical success of MSGD in training deep neural networks.
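A minimal sketch of the MSGD update (heavy-ball form) on a toy nonconvex function with a strict saddle point at the origin and two isolated local minima; the test function, noise level, and hyperparameters are illustrative assumptions rather than the paper's setting.

```python
import numpy as np

# Momentum SGD (heavy-ball form) on f(x, y) = x^2 - y^2 + 0.5*y^4, which has a
# strict saddle at (0, 0) and local minima at (0, +-1). Illustrative only.

def grad(z, rng, noise=0.1):
    x, y = z
    g = np.array([2 * x, -2 * y + 2 * y**3])   # exact gradient of the test function
    return g + noise * rng.standard_normal(2)  # additive noise -> stochastic gradient

def msgd(z0, lr=0.01, momentum=0.9, steps=5000, seed=0):
    rng = np.random.default_rng(seed)
    z, v = np.array(z0, dtype=float), np.zeros(2)
    for _ in range(steps):
        v = momentum * v - lr * grad(z, rng)   # velocity (momentum) update
        z = z + v                              # parameter update
    return z

# Started near the saddle point, the iterates should drift toward one of the
# local minima at (0, +-1); with momentum = 0 the escape is typically slower.
print(msgd([0.0, 1e-3]))
print(msgd([0.0, 1e-3], momentum=0.0))
```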

