Deep lifted decision rules for two-stage adaptive optimization problems

Author(s):  
Said Rahal ◽  
Zukui Li ◽  
Dimitri J. Papageorgiou


2021 ◽
Author(s):  
Dimitris Bertsimas ◽  
Shimrit Shtern ◽  
Bradley Sturt

In “Two-Stage Sample Robust Optimization,” Bertsimas, Shtern, and Sturt investigate a simple approximation scheme, based on overlapping linear decision rules, for solving data-driven two-stage distributionally robust optimization problems with the type-infinity Wasserstein ambiguity set. Their main result establishes that this approximation scheme is asymptotically optimal for two-stage stochastic linear optimization problems; that is, under mild assumptions, the optimal cost and optimal first-stage decisions obtained by approximating the robust optimization problem converge to those of the underlying stochastic problem as the number of data points grows to infinity. These guarantees notably apply to two-stage stochastic problems that do not have relatively complete recourse, which arise frequently in applications. In this context, the authors show through numerical experiments that the approximation scheme is practically tractable and produces decisions that significantly outperform those obtained from state-of-the-art data-driven alternatives.
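
To make the overlapping-linear-decision-rule idea concrete, here is a minimal sketch on a toy single-item newsvendor instance, assuming cvxpy is available; the data, the policy parametrization, and the nonnegative demand support are illustrative rather than the paper's formulation. Each sample gets its own affine recourse policy over the ∞-norm ball of radius ε around it, and because this toy problem is affine in a scalar uncertainty, it suffices to enforce the constraints at the endpoints of each neighborhood.

```python
# A minimal sketch of two-stage sample robust optimization with
# per-sample (overlapping) linear decision rules on a toy newsvendor
# problem. Assumes cvxpy is installed; all problem data are illustrative.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
N = 50                       # number of demand samples
xi = rng.uniform(20, 80, N)  # observed demands
eps = 5.0                    # radius of the type-infinity Wasserstein ball
c, p = 1.0, 3.0              # ordering cost and shortage price

x = cp.Variable(nonneg=True)  # first-stage order quantity
a = cp.Variable(N)            # intercepts of per-sample affine policies
b = cp.Variable(N)            # slopes of per-sample affine policies
t = cp.Variable(N)            # epigraph of worst-case recourse cost

cons = []
for i in range(N):
    lo, hi = max(xi[i] - eps, 0.0), xi[i] + eps   # sample neighborhood
    for z in (lo, hi):  # affine in the uncertainty => worst case at an endpoint
        y = a[i] + b[i] * z           # second-stage decision y_i(xi)
        cons += [y >= z - x,          # cover the shortage
                 y >= 0,
                 t[i] >= p * y]       # worst-case cost over neighborhood i

prob = cp.Problem(cp.Minimize(c * x + cp.sum(t) / N), cons)
prob.solve()
print("order quantity:", x.value, "objective:", prob.value)
```

As ε shrinks to zero the model reduces to the sample average approximation, while the shared first-stage variable x is what couples the overlapping neighborhoods.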


Author(s):  
Lu Chen ◽  
Handing Wang ◽  
Wenping Ma

Abstract Real-world optimization applications in complex systems typically involve multiple factors to be optimized, which can be formulated as multi-objective optimization problems. Such problems have been solved by many evolutionary algorithms, such as MOEA/D, NSGA-III, and KnEA. However, as the numbers of decision variables and objectives increase, the computational cost of these algorithms becomes unaffordable. To reduce this high computational cost on large-scale many-objective optimization problems, we propose a two-stage framework. The first stage combines a multi-tasking optimization strategy with a bi-directional search strategy, reformulating the original problem as a multi-tasking optimization problem in the decision space to enhance convergence. To improve diversity, the second stage applies multi-tasking optimization to a number of sub-problems defined by reference points in the objective space. To demonstrate the effectiveness of the proposed algorithm, we test it on the DTLZ and LSMOP problems and compare it with existing algorithms; it outperforms the compared algorithms in most cases, showing advantages in both convergence and diversity.
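
The second stage anchors its sub-problems at reference points in the objective space. As a hedged illustration (not the authors' code), the numpy sketch below generates a standard Das–Dennis simplex lattice of reference points and associates normalized objective vectors to their nearest reference rays, the usual NSGA-III-style construction; the multi-tasking and bi-directional search components are beyond a short sketch and are omitted.

```python
# A minimal sketch of the reference-point machinery used to define
# objective-space sub-problems. The lattice and the perpendicular-distance
# association follow the standard NSGA-III-style construction.
import numpy as np
from itertools import combinations

def simplex_lattice(m, h):
    """Das-Dennis reference points: all m-dim weights summing to 1 with h divisions."""
    pts = []
    for cuts in combinations(range(h + m - 1), m - 1):
        prev, point = -1, []
        for cut in cuts:
            point.append(cut - prev - 1)
            prev = cut
        point.append(h + m - 2 - prev)
        pts.append(np.array(point) / h)
    return np.array(pts)

def associate(F, refs):
    """Assign each (normalized) objective vector to its nearest reference ray."""
    dirs = refs / np.linalg.norm(refs, axis=1, keepdims=True)
    proj = F @ dirs.T                  # projection lengths onto each ray
    perp = np.linalg.norm(F[:, None, :] - proj[:, :, None] * dirs[None, :, :], axis=2)
    return perp.argmin(axis=1)         # sub-problem index for each solution

refs = simplex_lattice(m=3, h=4)             # 15 reference points for 3 objectives
F = np.random.default_rng(1).random((8, 3))  # toy normalized objective values
print(associate(F, refs))
```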


2020 ◽  
Vol 77 (2) ◽  
pp. 539-569
Author(s):  
Nicolas Kämmerling ◽  
Jannis Kurtz

Abstract In this work we study binary two-stage robust optimization problems with objective uncertainty. We present an algorithm that efficiently calculates lower bounds for the binary two-stage robust problem by alternately solving the underlying deterministic problem and an adversarial problem. For the deterministic problem, any oracle that returns an optimal solution for every possible scenario can be used. We show that this lower bound can be embedded in a branch-and-bound procedure in which branching is performed only over the first-stage decision variables. All results hold even for non-linear objective functions that are concave in the uncertain parameters. As an alternative solution method, we apply a column-and-constraint generation algorithm to the binary two-stage robust problem with objective uncertainty. We test both algorithms on benchmark instances of the uncapacitated single-allocation hub-location problem and of the capital budgeting problem. Our results show that the branch-and-bound procedure outperforms the column-and-constraint generation algorithm.
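
The lower-bounding idea can be illustrated on a toy budgeted-uncertainty selection problem (illustrative data and oracles, not the paper's implementation): alternately solving the deterministic problem for a fixed scenario and the adversarial problem for the resulting solution yields monotone bounds, because every deterministic optimum under-estimates the min-max value. The gap need not close at the root node, which is why such bounds are embedded in a branch-and-bound over the first-stage variables.

```python
# A minimal sketch of the alternating lower-bound idea on a toy problem:
# choose k of n items to minimize worst-case cost, where an adversary may
# raise the cost of up to Gamma chosen items to its maximal deviation.
import numpy as np

rng = np.random.default_rng(2)
n, k, Gamma = 10, 4, 2
c_bar = rng.uniform(1, 10, n)   # nominal item costs
dev = rng.uniform(0, 5, n)      # maximal cost deviations

def deterministic(xi):
    """Oracle: pick the k cheapest items under scenario costs c_bar + xi*dev."""
    cost = c_bar + xi * dev
    x = np.zeros(n)
    x[np.argsort(cost)[:k]] = 1.0
    return x, float(cost @ x)

def adversarial(x):
    """Worst case for fixed x: raise the Gamma largest deviations among chosen items."""
    xi = np.zeros(n)
    chosen = np.flatnonzero(x)
    xi[chosen[np.argsort(dev[chosen])[-Gamma:]]] = 1.0
    return xi, float((c_bar + xi * dev) @ x)

xi = np.zeros(n)                # start from the nominal scenario
lb, ub = -np.inf, np.inf
for _ in range(20):
    x, det_val = deterministic(xi)
    lb = max(lb, det_val)       # min_x f(x, xi) <= min_x max_xi f(x, xi)
    xi, adv_val = adversarial(x)
    ub = min(ub, adv_val)       # max_xi f(x, xi) >= optimal min-max value
    if ub - lb < 1e-9:
        break
print("lower bound:", lb, "upper bound:", ub)
```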


Author(s):  
Amir Ardestani-Jaafari ◽  
Erick Delage

In this article, we discuss an alternative method for deriving conservative approximation models for two-stage robust optimization problems. The method relies mainly on a linearization scheme employed in bilinear programming; we therefore say that it gives rise to linearized robust counterpart models. We identify a close relation between this linearized robust counterpart model and the popular affinely adjustable robust counterpart model. We also describe how to modify both types of models to make the approximations less conservative, drawing heavily on the use of valid linear and conic inequalities in the linearization process for bilinear models. Finally, we demonstrate how to employ this new scheme in location-transportation and multi-item newsvendor problems to improve the numerical efficiency and performance guarantees of robust optimization.
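
The linearization at the heart of such schemes is typically a McCormick relaxation of bilinear terms. As a minimal, self-contained sketch (not tied to the article's specific models), the following replaces the product w = x·y by its four McCormick envelope inequalities, which are valid whenever x and y range over known boxes; the valid linear and conic inequalities mentioned above tighten relaxations of exactly this kind.

```python
# A minimal sketch of the McCormick envelope that drives bilinear
# linearization: the term w = x*y is replaced by four linear inequalities
# that every true product satisfies when x in [xL, xU] and y in [yL, yU].
import numpy as np

def mccormick(xL, xU, yL, yU):
    """Return g(x, y, w) <= 0 constraints defining the envelope of w = x*y."""
    return [
        lambda x, y, w: xL * y + yL * x - xL * yL - w,   # under-estimators of w
        lambda x, y, w: xU * y + yU * x - xU * yU - w,
        lambda x, y, w: w - xU * y - yL * x + xU * yL,   # over-estimators of w
        lambda x, y, w: w - xL * y - yU * x + xL * yU,
    ]

# Sanity check: the true product always satisfies the envelope.
cons = mccormick(0.0, 2.0, -1.0, 3.0)
rng = np.random.default_rng(3)
for _ in range(1000):
    x, y = rng.uniform(0, 2), rng.uniform(-1, 3)
    assert all(g(x, y, x * y) <= 1e-9 for g in cons)
print("McCormick envelope is valid on the sampled box")
```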


Author(s):  
Weilin Nie ◽  
Cheng Wang

Abstract Online learning is a classical algorithm for optimization problems. Due to its low computational cost, it has been widely used in many areas of machine learning and statistical learning. Its convergence performance depends heavily on the step size. In this paper, a two-stage step size is proposed for the unregularized online learning algorithm based on reproducing kernels. Theoretically, we prove that such an algorithm can achieve a nearly minimax convergence rate, up to a logarithmic term, without any capacity condition.
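
As a hedged illustration of the setting (the kernel, data, and schedule constants below are illustrative, not the paper's tuned choices), the following numpy sketch runs unregularized online kernel least squares with a step size that is constant during a warm-up stage and polynomially decaying afterwards.

```python
# A minimal sketch of unregularized online learning in an RKHS with a
# two-stage step size: constant for the first T1 rounds, then decaying.
# Each round takes a functional gradient step on the squared loss:
#   f <- f - eta_t * (f(x_t) - y_t) * K(x_t, .)
import numpy as np

def gauss_kernel(u, v, sigma=0.5):
    return np.exp(-np.abs(u - v) ** 2 / (2 * sigma ** 2))

rng = np.random.default_rng(4)
T, T1 = 1000, 100                        # horizon and stage-one length
eta1 = 0.5                               # constant stage-one step size

xs, coefs = [], []                       # support points and expansion coefficients
for t in range(1, T + 1):
    x = rng.uniform(-1, 1)
    y = np.sin(np.pi * x) + 0.1 * rng.standard_normal()
    # Evaluate the current hypothesis f_t(x) = sum_i coefs[i] * K(xs[i], x).
    fx = sum(c * gauss_kernel(s, x) for s, c in zip(xs, coefs))
    eta = eta1 if t <= T1 else eta1 * (T1 / t) ** 0.5   # two-stage schedule
    xs.append(x)
    coefs.append(-eta * (fx - y))        # gradient step adds one kernel section

test = np.linspace(-1, 1, 5)
pred = [sum(c * gauss_kernel(s, x) for s, c in zip(xs, coefs)) for x in test]
print(np.round(pred, 3), np.round(np.sin(np.pi * test), 3))
```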

