Intraday Scheduling with Patient Re-entries and Variability in Behaviours

Author(s):  
Minglong Zhou ◽  
Gar Goei Loke ◽  
Chaithanya Bandi ◽  
Zi Qiang Glen Liau ◽  
Wilson Wang

Problem definition: We consider the intraday scheduling problem in a group of orthopaedic clinics, where the planner schedules appointment times given a sequence of appointments. We consider patient re-entry, where patients may be required to go for an x-ray examination and then return to the same doctor they saw, as well as variability in patient behaviours such as walk-ins, earliness, and no-shows; these lead to inefficiencies such as long patient waiting times and physician overtime. Academic/practical relevance: In our data set, 25% of patients are required to go for an x-ray examination, and we found significant variability in patient behaviours. Patient re-entry and behavioural variability are therefore common, yet we found little in the literature that can handle them. Methodology: We formulate the problem as a two-stage optimization problem in which scheduling decisions are made in the first stage. Queue dynamics in the second stage are modeled under a P-Queue paradigm, which minimizes a risk index representing the chance of violating performance targets, such as patient waiting times. The model reduces to a sequence of mixed-integer linear-optimization problems. Results: In comparative studies against a sample average approximation (SAA) model, our model achieves significant reductions in patient waiting times while keeping server overtime constant. Our simulations further characterize the types of uncertainties under which SAA performs poorly. Managerial insights: We present an optimization model that is easy to implement in practice and tractable to compute. Our simulations indicate that failing to account for patient re-entry or variability in patient behaviours leads to suboptimal policies, especially when these behaviours exhibit specific structure.
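
To make the second-stage queue dynamics concrete, the sketch below simulates a single doctor's day under re-entry and behavioural variability, the kind of uncertainty the scheduling model optimizes against. It is a minimal illustration with assumed parameter values (consult length, x-ray delay, no-show and re-entry probabilities), not the paper's P-Queue formulation.

```python
import heapq
import random

def simulate_clinic(schedule, consult=10.0, xray_delay=20.0, p_xray=0.25,
                    p_noshow=0.1, earliness=5.0, day_end=240.0, seed=0):
    """One doctor's day with re-entry: after a consult, a patient may be
    sent for an x-ray and later re-join the same queue. Patients arrive
    early by Uniform(0, earliness) and no-show with probability p_noshow.
    Returns (mean patient wait, doctor overtime), both in minutes."""
    rng = random.Random(seed)
    queue = []  # (arrival_time, patient_id, is_reentry), served FIFO
    for i, t in enumerate(schedule):
        if rng.random() >= p_noshow:
            arrival = max(0.0, t - rng.uniform(0.0, earliness))
            heapq.heappush(queue, (arrival, i, False))

    clock, waits = 0.0, []
    while queue:
        arrival, pid, reentry = heapq.heappop(queue)
        start = max(clock, arrival)
        waits.append(start - arrival)
        clock = start + consult
        if not reentry and rng.random() < p_xray:
            # Re-entry: the patient returns to the same doctor after x-ray.
            heapq.heappush(queue, (clock + xray_delay, pid, True))

    overtime = max(0.0, clock - day_end)
    return (sum(waits) / len(waits) if waits else 0.0), overtime

mean_wait, overtime = simulate_clinic(schedule=[15.0 * k for k in range(12)])
print(f"mean wait {mean_wait:.1f} min, overtime {overtime:.1f} min")
```

Evaluating a candidate schedule over many such simulated days is essentially what the second stage scores; the first stage then chooses the appointment times.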

Author(s):  
Álinson S. Xavier ◽  
Ricardo Fukasawa ◽  
Laurent Poirrier

When generating multirow intersection cuts for mixed-integer linear optimization problems, an important practical question is deciding which intersection cuts to use. Even when restricted to cuts that are facet defining for the corner relaxation, the number of potential candidates is still very large, especially for large instances. In this paper, we introduce a subset of intersection cuts based on the infinity norm that is very small, works for relaxations with an arbitrary number of rows, and, unlike many subclasses studied in the literature, takes into account the entire data from the simplex tableau. We describe an algorithm for generating these inequalities and run extensive computational experiments to evaluate their practical effectiveness on real-world instances. We conclude that this subset of inequalities yields, in terms of gap closure, around 50% of the benefit of using all valid inequalities for the corner relaxation simultaneously, but at a small fraction of the computational cost and with a very small number of cuts. Summary of Contribution: Cutting planes are one of the most important techniques used by modern mixed-integer linear programming solvers when solving a variety of challenging operations research problems. This paper advances the state of the art on general-purpose multirow intersection cuts by proposing a practical and computationally friendly method for generating them.
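
As a point of reference for readers unfamiliar with intersection cuts, the sketch below derives cut coefficients from the simplest multirow lattice-free set, the unit hypercube containing the fractional point. The paper's infinity-norm family is a refined subset of cuts of this general kind; all numbers in the example are hypothetical.

```python
import math

def box_intersection_cut(f, rays, eps=1e-9):
    """Intersection cut from the unit hypercube containing f.

    Corner relaxation: x = f + sum_j rays[j] * s_j, x integer, s >= 0.
    The unit box [floor(f), floor(f)+1] is lattice-free, so the cut
    sum_j psi(rays[j]) * s_j >= 1 is valid, where psi is the gauge of
    the box centred at f."""
    phi = [fi - math.floor(fi) for fi in f]
    assert all(eps < p < 1 - eps for p in phi), "f must be fractional in every row"

    def psi(r):
        # Gauge of the box: 1/t*, where t* is the step length at which
        # f + t*r first hits the box boundary.
        return max(max(ri / (1 - pi), -ri / pi) for ri, pi in zip(r, phi))

    return [max(0.0, psi(r)) for r in rays]

# Two-row example: coefficients alpha_j for the cut sum_j alpha_j * s_j >= 1.
alphas = box_intersection_cut(f=[0.5, 0.25],
                              rays=[(1.0, 0.0), (-1.0, 2.0), (0.5, -0.5)])
print(alphas)
```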


Author(s):  
Robinson Sitepu ◽  
Fitri Maya Puspita ◽  
Elika Kurniadi ◽  
Yunita Yunita ◽  
Shintya Apriliyani

Internet use has grown rapidly in this era of globalization, and demand for it is effectively unlimited. Utility functions, one way of measuring internet usage, are typically associated with users' level of satisfaction with the information services they consume. We consider three internet pricing schemes, namely flat-fee, usage-based, and two-part tariff schemes, using the Bandwidth Diminished with Increasing Bandwidth utility function together with monitoring and marginal costs. Each pricing scheme is formulated as a non-linear optimization problem and solved with LINGO 13.0. For each service offered, the optimal solution is obtained under either the usage-based or the two-part tariff model when compared with the flat-fee scheme, so offering network access on a usage basis is the provider's best choice. The results show that by applying the two-part tariff scheme, providers can maximize revenue for both homogeneous and heterogeneous consumers.
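
A rough feel for the two-part tariff result can be had from the toy model below: a single consumer with a concave, diminishing-returns utility (a hypothetical stand-in for the Bandwidth Diminished with Increasing Bandwidth function), a marginal cost per unit, and a fixed monitoring cost. Grid search stands in for LINGO here, and all parameter values are assumptions.

```python
import math

def optimal_two_part_tariff(a=10.0, k=0.5, c_marg=1.0, c_mon=0.5,
                            b_max=20.0, grid=200):
    """Grid search for a revenue-maximising two-part tariff (F, p).

    The consumer has utility U(b) = a*(1 - exp(-k*b)), picks usage b
    to maximise U(b) - p*b, and joins only if the surplus covers the
    fixed fee F. Provider profit is F + (p - c_marg)*b - c_mon."""
    def usage(p):
        # First-order condition U'(b) = p, i.e. a*k*exp(-k*b) = p.
        if p >= a * k:
            return 0.0
        return min(b_max, -math.log(p / (a * k)) / k)

    best = (0.0, 0.0, 0.0)  # (profit, F, p)
    for i in range(1, grid):
        p = a * k * i / grid
        b = usage(p)
        surplus = a * (1 - math.exp(-k * b)) - p * b
        F = surplus  # extract the full consumer surplus as the fixed fee
        profit = F + (p - c_marg) * b - c_mon
        if profit > best[0]:
            best = (profit, F, p)
    return best

profit, F, p = optimal_two_part_tariff()
print(f"profit {profit:.2f} with fixed fee {F:.2f} and unit price {p:.2f}")
```

With these numbers the search settles near p equal to marginal cost with the surplus captured by the fixed fee, the classic two-part tariff outcome the abstract's revenue result reflects.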


Author(s):  
Merve Bodur ◽  
Timothy C. Y. Chan ◽  
Ian Yihang Zhu

Inverse optimization—determining parameters of an optimization problem that render a given solution optimal—has received increasing attention in recent years. Although significant inverse optimization literature exists for convex optimization problems, there have been few advances for discrete problems, despite the ubiquity of applications that fundamentally rely on discrete decision making. In this paper, we present a new set of theoretical insights and algorithms for the general class of inverse mixed integer linear optimization problems. Specifically, a general characterization of optimality conditions is established and leveraged to design new cutting plane solution algorithms. Through an extensive set of computational experiments, we show that our methods provide substantial improvements over existing methods in solving the largest and most difficult instances to date.
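
The flavour of such cutting plane algorithms can be seen in the sketch below, which recovers a cost vector making an observed solution optimal for a tiny binary forward problem. It is a minimal illustration in the classic cut-generation style, using the open-source pulp package, and is not the authors' algorithm or their instances.

```python
from pulp import (LpProblem, LpMinimize, LpVariable, lpSum,
                  LpBinary, PULP_CBC_CMD, value)

N = 3  # hypothetical forward problem: min c.x, binary x, x1+x2+x3 >= 2

def solve_forward(c):
    prob = LpProblem("forward", LpMinimize)
    x = [LpVariable(f"x{i}", cat=LpBinary) for i in range(N)]
    prob += lpSum(c[i] * x[i] for i in range(N))
    prob += lpSum(x) >= 2
    prob.solve(PULP_CBC_CMD(msg=0))
    return [int(round(value(xi))) for xi in x], value(prob.objective)

def inverse_milp(x_star, c0, max_iters=50, tol=1e-6):
    """Find c minimising ||c - c0||_1 such that x_star is optimal for
    the forward MILP, by iteratively adding optimality cuts."""
    pool = []  # feasible points that generate optimality cuts
    for _ in range(max_iters):
        master = LpProblem("master", LpMinimize)
        c = [LpVariable(f"c{i}", lowBound=0, upBound=10) for i in range(N)]
        d = [LpVariable(f"d{i}", lowBound=0) for i in range(N)]  # |c - c0|
        master += lpSum(d)
        for i in range(N):
            master += d[i] >= c[i] - c0[i]
            master += d[i] >= c0[i] - c[i]
        for xh in pool:  # x_star must be at least as good as each xh
            master += (lpSum(c[i] * x_star[i] for i in range(N))
                       <= lpSum(c[i] * xh[i] for i in range(N)))
        master.solve(PULP_CBC_CMD(msg=0))
        c_val = [value(ci) for ci in c]
        x_hat, obj = solve_forward(c_val)  # separation: forward MILP
        if sum(c_val[i] * x_star[i] for i in range(N)) <= obj + tol:
            return c_val  # x_star is optimal under c_val
        pool.append(x_hat)  # violated: add an optimality cut
    raise RuntimeError("no convergence within max_iters")

print(inverse_milp(x_star=[1, 1, 0], c0=[5.0, 4.0, 1.0]))
```

Each iteration solves one forward MILP to test optimality of the observed solution; the paper's contribution lies in stronger optimality characterizations and cuts that make loops of this kind scale to much harder instances.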


2020 ◽  
Vol 66 (9) ◽  
pp. 4226-4245
Author(s):  
Somayeh Moazeni ◽  
Boris Defourny ◽  
Monika J. Wilczak

Developing marketing campaigns for a new product or a new target population is challenging because of the scarcity of relevant historical data. Building on dynamic Bayesian learning, a sequential optimization approach assists in creating new data points within a finite number of learning phases. This procedure identifies effective advertisement design elements, as well as customer segments, that maximize the expected outcome of the final marketing campaign. In this paper, the marketing campaign performance is modeled by a multiplicative advertising exposure model with Poisson arrivals, where the intensity of the Poisson process is a function of the marketing campaign features. A forward-looking measurement policy is formulated to maximize the expected improvement in the value of information in each learning phase, and a computationally efficient approach is proposed that consists of solving a sequence of mixed-integer linear optimization problems. The performance of the optimal learning policy is evaluated against a set of benchmark policies using examples inspired by the property and casualty insurance industry. Further extensions of the model are discussed. This paper was accepted by Eric Anderson, marketing.
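
The conjugate structure behind such a procedure can be sketched as follows: Gamma priors on Poisson response rates, updated phase by phase, with each measurement chosen by a crude Monte-Carlo proxy for the expected improvement in the value of information. All campaign names, rates, and priors below are invented for illustration, and the paper's actual policy solves mixed-integer linear problems rather than sampling.

```python
import math
import random

random.seed(1)

def draw_poisson(lam, rng=random):
    """Knuth's method; adequate for the small rates used here."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

# Hypothetical feature combinations (ad design x customer segment), each
# with an unknown Poisson response rate and a Gamma(alpha, beta) prior.
campaigns = {name: {"alpha": 2.0, "beta": 1.0}
             for name in ("designA_young", "designA_senior",
                          "designB_young", "designB_senior")}
true_rate = {"designA_young": 1.2, "designA_senior": 0.4,
             "designB_young": 2.1, "designB_senior": 0.9}

def post_mean(p):
    return p["alpha"] / p["beta"]

def measurement_score(name, draws=300):
    """Average improvement in the best posterior mean after one more
    simulated observation of `name` (a Monte-Carlo stand-in for the
    paper's forward-looking measurement policy)."""
    best_now = max(post_mean(p) for p in campaigns.values())
    p, gain = campaigns[name], 0.0
    for _ in range(draws):
        rate = random.gammavariate(p["alpha"], 1.0 / p["beta"])
        y = draw_poisson(rate)
        updated = (p["alpha"] + y) / (p["beta"] + 1.0)
        gain += max(0.0, updated - best_now)
    return gain / draws

for phase in range(8):                      # sequential learning phases
    pick = max(campaigns, key=measurement_score)
    y = draw_poisson(true_rate[pick])       # outcome of the test campaign
    campaigns[pick]["alpha"] += y           # conjugate Gamma-Poisson update
    campaigns[pick]["beta"] += 1.0

print("final campaign:", max(campaigns, key=lambda n: post_mean(campaigns[n])))
```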


Author(s):  
Vishal Gupta ◽  
Paat Rusmevichientong

Optimization applications often depend on a huge number of uncertain parameters. In many contexts, however, the amount of relevant data per parameter is small, and hence, we may only have imprecise estimates. We term this setting—in which the number of uncertainties is large but all estimates have low precision—the small-data, large-scale regime. We formalize a model for this new regime, focusing on optimization problems with uncertain linear objectives. We show that common data-driven methods, such as sample average approximation, data-driven robust optimization, and certain regularized policies, may perform poorly in this new setting. We then propose a novel framework for selecting a data-driven policy from a given policy class. As with the aforementioned data-driven methods, our new policy enjoys provably good performance in the large-sample regime. Unlike these methods, we show that in the small-data, large-scale regime, our data-driven policy performs comparably to an oracle best-in-class policy under some mild conditions. We strengthen this result for linear optimization problems and two natural policy classes, the first inspired by the empirical Bayes literature and the second by regularization techniques. For both classes, the suboptimality gap between our proposed policy and the oracle policy decays exponentially fast in the number of uncertain parameters even for a fixed amount of data. Thus, these policies retain the strong large-sample performance of traditional methods and additionally enjoy provably strong performance in the small-data, large-scale regime. Numerical experiments confirm the significant benefits of our methods. This paper was accepted by Yinyu Ye, optimization.
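
The empirical-Bayes flavour of the first policy class can be illustrated with a toy pick-k problem: shrinking noisy estimates toward the grand mean, with noisier items shrunk harder, changes which items are selected and counters the winner's curse that hurts sample average approximation. The sketch below uses invented data and is not the authors' policy.

```python
import random
import statistics

random.seed(0)

# Small-data, large-scale regime: n uncertain parameters, m = 2 noisy
# observations each, with item-specific noise levels (all values are
# illustrative assumptions, not the paper's setup).
n, m, k = 2000, 2, 100
theta = [random.gauss(0.0, 1.0) for _ in range(n)]
sigma = [random.uniform(1.0, 6.0) for _ in range(n)]
means = [statistics.mean(random.gauss(t, s) for _ in range(m))
         for t, s in zip(theta, sigma)]

# Empirical-Bayes shrinkage: estimate the prior variance tau^2 by the
# method of moments, then shrink noisier estimates harder toward the
# grand mean (James-Stein style).
grand = statistics.mean(means)
avg_noise = statistics.mean(s * s / m for s in sigma)
tau2 = max(1e-6, statistics.pvariance(means) - avg_noise)
shrunk = [grand + tau2 / (tau2 + s * s / m) * (mu - grand)
          for mu, s in zip(means, sigma)]

def top_k_true_value(scores):
    """True value of the k items a policy ranks best: a toy linear
    optimization (pick-k) with an uncertain linear objective."""
    chosen = sorted(range(n), key=lambda i: scores[i], reverse=True)[:k]
    return sum(theta[i] for i in chosen)

print("SAA / raw means :", round(top_k_true_value(means), 2))
print("empirical Bayes :", round(top_k_true_value(shrunk), 2))
print("oracle          :", round(top_k_true_value(theta), 2))
```

Under these assumptions the raw-mean selection is dominated by high-noise items that got lucky, while the shrunk estimates recover much of the oracle's value from the same two samples per item.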

