Interplay of non-convex quadratically constrained problems with adjustable robust optimization

Author(s):  
Immanuel Bomze ◽  
Markus Gabl

Abstract In this paper we explore convex reformulation strategies for non-convex quadratically constrained optimization problems (QCQPs). First we investigate such reformulations using Pataki’s rank theorem iteratively. We show that the result can be used in conjunction with conic optimization duality in order to obtain a geometric condition for the S-procedure to be exact. Based upon known results on the S-procedure, this approach allows for some insight into the geometry of the joint numerical range of the quadratic forms. Then we investigate a reformulation strategy introduced in recent literature for bilinear optimization problems which is based on adjustable robust optimization theory. We show that, via a similar strategy, one can leverage exact reformulation results of QCQPs in order to derive lower bounds for more complicated quadratic optimization problems. Finally, we investigate the use of reformulation strategies in order to derive characterizations of set-copositive matrix cones. Empirical evidence based upon first numerical experiments shows encouraging results.
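
To make the setting concrete, the sketch below (an illustration, not taken from the paper) sets up the standard Shor semidefinite relaxation of a small, randomly generated non-convex QCQP; whether such a relaxation is exact is precisely the kind of question the S-procedure and rank arguments address. It assumes the numpy and cvxpy packages, and all problem data are invented for the example.

```python
# Illustration only (not the paper's formulation): Shor SDP relaxation of
#   min  x'Q0 x + 2 q0'x   s.t.  x'Q1 x + 2 q1'x + r1 <= 0,  trace-type norm bound,
# obtained by replacing the rank-one matrix [x; 1][x; 1]' with a PSD variable Y.
import numpy as np
import cvxpy as cp

n = 3
rng = np.random.default_rng(0)
Q0 = rng.standard_normal((n, n)); Q0 = (Q0 + Q0.T) / 2   # indefinite in general
q0 = rng.standard_normal(n)
Q1 = rng.standard_normal((n, n)); Q1 = (Q1 + Q1.T) / 2
q1 = rng.standard_normal(n)
r1 = -1.0

Y = cp.Variable((n + 1, n + 1), PSD=True)   # models [[X, x], [x', 1]] with X >= xx'
X, x = Y[:n, :n], Y[:n, n]
constraints = [
    Y[n, n] == 1,
    cp.trace(Q1 @ X) + 2 * q1 @ x + r1 <= 0,
    cp.trace(X) <= 10.0,   # relaxed norm-ball constraint ||x||^2 <= 10 keeps the problem bounded
]
prob = cp.Problem(cp.Minimize(cp.trace(Q0 @ X) + 2 * q0 @ x), constraints)
prob.solve()
# A rank-one optimal Y certifies exactness of the relaxation for this instance;
# otherwise prob.value is only a lower bound on the non-convex QCQP.
print("SDP relaxation bound:", prob.value)
```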

4OR ◽  
2021 ◽  
Author(s):  
Gerhard J. Woeginger

Abstract We survey optimization problems that allow natural simple formulations with one existential and one universal quantifier. We summarize the theoretical background from computational complexity theory, and we present a multitude of illustrative examples. We discuss the connections to robust optimization and to bilevel optimization, and we explain the reasons why the operational research community should be interested in the theoretical aspects of this area.
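
As a toy illustration of such a formulation (mine, not taken from the survey), the brute-force check below asks an exists-forall question over finite sets: does there exist a selection of items such that, for every cost scenario in a small uncertainty set, the total cost stays within a budget? All data are invented.

```python
# Toy illustration (not from the survey): an exists-forall question over finite sets.
# Question: does there EXIST a subset choice x such that FOR ALL cost scenarios z
# the total cost of the chosen items stays within a budget?
from itertools import product

items = range(4)
scenarios = [
    [2, 3, 1, 4],   # cost of each item under scenario 0
    [5, 1, 2, 2],   # ... scenario 1
    [1, 4, 3, 1],   # ... scenario 2
]
budget = 6
min_items = 2       # pick at least two items

feasible = [
    x for x in product([0, 1], repeat=len(items))
    if sum(x) >= min_items
    and all(sum(c * xi for c, xi in zip(z, x)) <= budget for z in scenarios)
]
print("robustly feasible choices:", feasible)
```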


2020 ◽  
Vol 77 (2) ◽  
pp. 539-569
Author(s):  
Nicolas Kämmerling ◽  
Jannis Kurtz

Abstract In this work we study binary two-stage robust optimization problems with objective uncertainty. We present an algorithm to efficiently calculate lower bounds for the binary two-stage robust problem by alternately solving the underlying deterministic problem and an adversarial problem. For the deterministic problem, any oracle that returns an optimal solution for every possible scenario can be used. We show that this lower bound can be implemented in a branch-and-bound procedure, where the branching is performed only over the first-stage decision variables. All results hold even for non-linear objective functions that are concave in the uncertain parameters. As an alternative solution method, we apply a column-and-constraint generation algorithm to the binary two-stage robust problem with objective uncertainty. We test both algorithms on benchmark instances of the uncapacitated single-allocation hub-location problem and of the capital budgeting problem. Our results show that the branch-and-bound procedure outperforms the column-and-constraint generation algorithm.
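
The schematic sketch below (my reading of the alternating idea on a single-stage toy min-max problem, not the authors' algorithm or code) shows why such an alternation produces valid bounds: solving the problem for any fixed scenario yields a lower bound on the robust optimum, while evaluating the worst case of any candidate solution yields an upper bound. The toy data, the brute-force oracle, and the penalty term are all invented for the example.

```python
# Schematic sketch (not the authors' code): alternating lower/upper bounds for
#   min_x max_z f(x, z)  with binary x and a finite scenario set.
# min_x f(x, z) for any fixed z is a LOWER bound on the robust optimum, while
# max_z f(x_hat, z) for any candidate x_hat is an UPPER bound.
from itertools import product

n = 4
scenarios = [(3, 1, 4, 1), (2, 2, 2, 5), (1, 6, 1, 2)]  # uncertain item costs

def f(x, z):
    # toy objective: cost of selected items, with a penalty if fewer than 2 are selected
    return sum(c * xi for c, xi in zip(z, x)) + (100 if sum(x) < 2 else 0)

def deterministic_oracle(z):
    # brute-force "oracle": best binary decision for one fixed scenario
    return min(product([0, 1], repeat=n), key=lambda x: f(x, z))

def adversarial_problem(x):
    # worst scenario for a fixed first-stage decision
    return max(scenarios, key=lambda z: f(x, z))

lb, ub = float("-inf"), float("inf")
z = scenarios[0]
for _ in range(10):                       # the gap need not close in general;
    x = deterministic_oracle(z)           # the loop just collects the best bounds found
    lb = max(lb, f(x, z))                 # oracle value is a valid lower bound
    z = adversarial_problem(x)
    ub = min(ub, f(x, z))                 # worst-case value is a valid upper bound
    if ub - lb < 1e-9:
        break
print("lower bound:", lb, "upper bound:", ub)
```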


Author(s):  
Amir Ardestani-Jaafari ◽  
Erick Delage

In this article, we discuss an alternative method for deriving conservative approximation models for two-stage robust optimization problems. The method mainly relies on a linearization scheme employed in bilinear programming; therefore, we will say that it gives rise to the linearized robust counterpart models. We identify a close relation between this linearized robust counterpart model and the popular affinely adjustable robust counterpart model. We also describe methods of modifying both types of models to make these approximations less conservative. These methods are heavily inspired by the use of valid linear and conic inequalities in the linearization process for bilinear models. We finally demonstrate how to employ this new scheme in location-transportation and multi-item newsvendor problems to improve the numerical efficiency and performance guarantees of robust optimization.
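
The bilinear linearization such approaches typically build on is the McCormick relaxation of a product term; the snippet below (a generic illustration with invented bounds, not the article's robust counterpart model) constructs the McCormick envelope of w = x·y over a box with cvxpy and optimizes a relaxed bilinear objective.

```python
# Illustration (not the article's model): McCormick envelope of a bilinear term
# w = x * y with x in [xl, xu], y in [yl, yu] -- the standard linearization used
# when relaxing products of decision variables and uncertain parameters.
import cvxpy as cp

xl, xu = 0.0, 2.0
yl, yu = 1.0, 3.0

x = cp.Variable()
y = cp.Variable()
w = cp.Variable()          # surrogate for the product x * y

mccormick = [
    xl <= x, x <= xu, yl <= y, y <= yu,
    w >= xl * y + yl * x - xl * yl,   # under-estimators
    w >= xu * y + yu * x - xu * yu,
    w <= xu * y + yl * x - xu * yl,   # over-estimators
    w <= xl * y + yu * x - xl * yu,
]

# Example use: maximize the (relaxed) bilinear objective x*y - x - y over the box.
prob = cp.Problem(cp.Maximize(w - x - y), mccormick)
prob.solve()
print("relaxation value:", prob.value, "at x =", x.value, "y =", y.value)
```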


2017 ◽  
Vol 27 (2) ◽  
pp. 1075-1101 ◽  
Author(s):  
N. Dinh ◽  
T. H. Mo ◽  
G. Vallet ◽  
M. Volle

2013 ◽  
Vol 464 ◽  
pp. 352-357
Author(s):  
Pasura Aungkulanon

Engineering optimization problems are often large and complex. Effective methods for solving these problems using a finite sequence of instructions can be categorized into optimization algorithms and meta-heuristics. Meta-heuristic techniques have been shown to solve a variety of real-world problems. In this study, a comparison of two meta-heuristic techniques, namely the Global-Best Harmony Search algorithm (GHSA) and the Bat algorithm (BATA), for solving constrained optimization problems was carried out. GHSA and BATA are optimization algorithms inspired by the harmony improvisation search process and by the echolocation behaviour of bats, respectively. The algorithms were tested on three types of optimization landscapes, namely single-peak, multi-peak and curved-ridge response surfaces. Moreover, both algorithms were also applied to constrained engineering problems. The results on non-linear continuous unconstrained functions in the context of response surface methodology, as well as on the constrained problems, indicate that the Bat algorithm performs better in terms of the sample mean and variance of the design-point yields and in terms of computation time.
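
For reference, here is a compact, textbook-style Bat algorithm in the standard Yang formulation (not necessarily the exact variant or parameter settings benchmarked in this study), shown minimizing the sphere function; it only assumes numpy.

```python
# Compact textbook-style Bat algorithm (standard Yang formulation, not necessarily
# the exact variant used in the study), minimizing the sphere function.
import numpy as np

def bat_algorithm(obj, dim=5, pop=20, iters=200, lb=-5.0, ub=5.0,
                  fmin=0.0, fmax=2.0, alpha=0.9, gamma=0.9, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(lb, ub, (pop, dim))         # bat positions
    v = np.zeros((pop, dim))                    # velocities
    A = np.ones(pop)                            # loudness
    r0 = rng.uniform(0, 1, pop)                 # initial pulse rates
    r = r0.copy()
    fit = np.array([obj(xi) for xi in x])
    best = x[fit.argmin()].copy()

    for t in range(1, iters + 1):
        for i in range(pop):
            freq = fmin + (fmax - fmin) * rng.uniform()
            v[i] += (x[i] - best) * freq
            cand = np.clip(x[i] + v[i], lb, ub)
            if rng.uniform() > r[i]:
                # local random walk around the current best solution
                cand = np.clip(best + 0.01 * A.mean() * rng.standard_normal(dim), lb, ub)
            f_cand = obj(cand)
            if f_cand <= fit[i] and rng.uniform() < A[i]:
                x[i], fit[i] = cand, f_cand
                A[i] *= alpha                             # decrease loudness
                r[i] = r0[i] * (1 - np.exp(-gamma * t))   # increase pulse rate
            if f_cand <= fit.min():
                best = cand.copy()                        # update global best
    return best, obj(best)

best, val = bat_algorithm(lambda z: float(np.sum(z**2)))
print("best value found:", val)
```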


2021 ◽  
Volume 2 (Original research articles) ◽  
Author(s):  
Matúš Benko ◽  
Patrick Mehlitz

Implicit variables of a mathematical program are variables which do not need to be optimized but are used to model feasibility conditions. They frequently appear in several different problem classes of optimization theory comprising bilevel programming, evaluated multiobjective optimization, or nonlinear optimization problems with slack variables. In order to deal with implicit variables, they are often interpreted as explicit ones. Here, we first point out that this is a light-headed approach which induces artificial locally optimal solutions. Afterwards, we derive various Mordukhovich-stationarity-type necessary optimality conditions which correspond to treating the implicit variables as explicit ones on the one hand, or using them only implicitly to model the constraints on the other. A detailed comparison of the obtained stationarity conditions as well as the associated underlying constraint qualifications will be provided. Overall, we proceed in a fairly general setting relying on modern tools of variational analysis. Finally, we apply our findings to different well-known problem classes of mathematical optimization in order to visualize the obtained theory.
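
A tiny example of such an artificial locally optimal solution (my own construction, not taken from the paper): minimize x subject to the existence of y with y(y − 1) = 0 and x + y ≥ 0. The implicitly feasible x-values form the interval [−1, ∞), so x = 0 is not locally optimal there; yet the explicit point (x, y) = (0, 0) is locally optimal, because near y = 0 the first constraint forces y = 0 and hence x ≥ 0. The numerical check below only assumes numpy.

```python
# Tiny illustration (my own example, not from the paper) of an "artificial" local
# minimum created by treating an implicit variable explicitly.
#   minimize x  subject to  exists y with  y*(y-1) == 0  and  x + y >= 0
# Implicitly, the feasible x-values are [-1, inf), so x = 0 is not locally optimal.
import numpy as np

def feasible(x, y, tol=1e-9):
    return abs(y * (y - 1.0)) <= tol and x + y >= -tol

# (0, 0) is feasible and no nearby feasible (x, y) has a smaller objective value x ...
eps = 0.05
grid = np.linspace(-eps, eps, 201)
near = [(x, y) for x in grid for y in grid if feasible(x, y)]
print("explicit local min at (0,0):", all(x >= -1e-9 for x, _ in near))

# ... yet x = -eps is implicitly feasible (choose y = 1) with a strictly smaller
# objective, so x = 0 is not locally optimal for the implicit problem.
print("x = -eps implicitly feasible:", feasible(-eps, 1.0))
```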


Author(s):  
Weijun Wang ◽  
Stéphane Caro ◽  
Fouad Bennis ◽  
Oscar Brito Augusto

For a Multi-Objective Robust Optimization Problem (MOROP), it is important to obtain design solutions that are both optimal and robust. To find these solutions, the designer usually needs to set a threshold on the variation of the Performance Functions (PFs) before optimization, or to add the effects of uncertainties to the original PFs to generate a new robust Pareto front. In this paper, we divide a MOROP into two Multi-Objective Optimization Problems (MOOPs). One is the original MOOP; the other takes the Robustness Functions (RFs), the robust counterparts of the original PFs, as optimization objectives. After solving these two MOOPs separately, two sets of solutions are obtained, namely the Pareto Performance Solutions (PP) and the Pareto Robustness Solutions (PR). Developing these two sets further, we obtain two types of solutions, namely the Pareto Robustness Solutions among the Pareto Performance Solutions (PR(PP)) and the Pareto Performance Solutions among the Pareto Robustness Solutions (PP(PR)). Furthermore, the intersection of PR(PP) and PP(PR) represents the intersection of PR and PP well. The designer can then choose good solutions by comparing the results of PR(PP) and PP(PR). Thanks to this method, we can find optimal and robust solutions without setting a threshold on the variation of the PFs or losing the initial Pareto front. Finally, an illustrative example highlights the contributions of the paper.
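
A rough sketch of the construction (my rendering, not the authors' code): given the performance-objective and robustness-objective values of a common set of candidate designs, compute the Pareto set with respect to the PFs, then keep its members that are mutually non-dominated with respect to the RFs to obtain PR(PP); PP(PR) is obtained symmetrically. The toy objective values are invented.

```python
# Rough sketch (not the authors' code): PR(PP) and PP(PR) from toy objective data,
# assuming all objectives are to be minimized.
import numpy as np

def pareto_mask(objectives):
    """Boolean mask of the non-dominated rows of an (n_points, n_objectives) array."""
    F = np.asarray(objectives, dtype=float)
    mask = np.ones(len(F), dtype=bool)
    for i in range(len(F)):
        dominates_i = np.all(F <= F[i], axis=1) & np.any(F < F[i], axis=1)
        mask[i] = not dominates_i.any()
    return mask

# rows = candidate designs; columns = objective values (invented toy data)
perf = np.array([[1.0, 5.0], [2.0, 3.0], [3.0, 2.0], [4.0, 4.0], [5.0, 1.0]])   # PFs
robu = np.array([[0.2, 0.9], [0.8, 0.3], [0.1, 0.2], [0.4, 0.1], [0.9, 0.8]])   # RFs

pp = np.flatnonzero(pareto_mask(perf))   # Pareto Performance Solutions (PP)
pr = np.flatnonzero(pareto_mask(robu))   # Pareto Robustness Solutions (PR)
pr_pp = pp[pareto_mask(robu[pp])]        # PR(PP): robust-Pareto subset of PP
pp_pr = pr[pareto_mask(perf[pr])]        # PP(PR): performance-Pareto subset of PR
print("PP:", pp, "PR:", pr, "PR(PP):", pr_pp, "PP(PR):", pp_pr)
print("PR(PP) ∩ PP(PR):", np.intersect1d(pr_pp, pp_pr), " PR ∩ PP:", np.intersect1d(pr, pp))
```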


2015 ◽  
Vol 137 (1) ◽  
Author(s):  
Weijun Wang ◽  
Stéphane Caro ◽  
Fouad Bennis ◽  
Ricardo Soto ◽  
Broderick Crawford

In a multi-objective robust optimization problem, the variations in the design variables (DVs) and design environment parameters (DEPs) comprise small variations and large variations. The former have a small effect on the performance functions and/or the constraints, whereas the latter have a large effect on them. The robustness of the performance functions is discussed in this paper. A postoptimality sensitivity analysis technique for multi-objective robust optimization problems (MOROPs) is discussed, and two robustness indices (RIs) are introduced. The first one considers the robustness of the performance functions with respect to small variations in the DVs and the DEPs. The second RI characterizes the robustness of the performance functions with respect to large variations in the DEPs. It is based on the ability of a solution to maintain a good Pareto ranking under large variations of the DEPs. The robustness of the solutions is treated as vectors in the robustness function space (RF-Space), which is defined by the two proposed RIs. As a result, the designer can compare the robustness of all Pareto optimal solutions and make a decision. Finally, two illustrative examples are given to highlight the contributions of this paper. The first example is a numerical problem, whereas the second deals with the multi-objective robust optimization design of a floating wind turbine.
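
The ranking-based idea behind the second RI can be sketched roughly as follows (my interpretation for illustration, not the paper's exact index or data): re-evaluate a fixed set of candidate designs for several sampled large DEP variations and record the fraction of samples in which each design remains non-dominated. The two-objective model and all parameter values below are hypothetical.

```python
# Rough numerical sketch (my interpretation, not the paper's exact index): the
# ranking-based idea behind the second RI -- re-evaluate the candidate designs for
# several large DEP variations and record how often each one stays non-dominated.
import numpy as np

def nondominated(F):
    F = np.asarray(F, float)
    return np.array([not any(np.all(F[j] <= F[i]) and np.any(F[j] < F[i])
                             for j in range(len(F)) if j != i)
                     for i in range(len(F))])

def objectives(designs, dep):
    # hypothetical two-objective performance model depending on a scalar DEP value
    x = np.asarray(designs, float)
    return np.column_stack([x**2 + dep * x, (x - 2.0)**2 - dep * x])

designs = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
dep_samples = np.linspace(-1.0, 1.0, 11)        # sampled large DEP variations

counts = sum(nondominated(objectives(designs, d)).astype(int) for d in dep_samples)
ranking_ri = counts / len(dep_samples)          # fraction of DEPs where the design stays Pareto
for x, ri in zip(designs, ranking_ri):
    print(f"design x = {x:.1f}: stays non-dominated in {ri:.0%} of DEP samples")
```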

