Computing Feasible Points of Bilevel Problems with a Penalty Alternating Direction Method

Author(s):
Thomas Kleinert
Martin Schmidt

Bilevel problems are highly challenging optimization problems that appear in many applications such as energy market design, critical infrastructure defense, transportation, and pricing. Often, these bilevel models are equipped with integer decisions, which makes the problems even harder to solve. In such settings, one typically develops primal heuristics in order to quickly obtain feasible points of good quality or to enhance the search process of exact global methods. However, there are comparably few heuristics for bilevel problems. In this paper, we develop such a primal heuristic for bilevel problems with a mixed-integer linear or quadratic upper level and a linear or quadratic lower level. The heuristic is based on a penalty alternating direction method, which allows for a theoretical analysis. We derive a convergence theory stating that the method converges to a stationary point of an equivalent single-level reformulation of the bilevel problem, and we extensively test the method on a set of more than 2,800 instances, one of the largest computational test sets ever used in bilevel programming. The study illustrates the very good performance of the proposed method in terms of both running times and solution quality. This renders the method a suitable subroutine in global bilevel solvers as well as a reasonable standalone approach.

Summary of Contribution: Bilevel optimization problems form a very important class of optimization problems in the field of operations research, mainly due to their capability of modeling hierarchical decision processes. However, real-world bilevel problems are usually very hard to solve, especially when additional mixed-integer aspects are included in the modeling. Hence, the development of fast and reliable primal heuristics for this class of problems is very important. This paper presents such a method.
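As an illustration of the penalty alternating direction idea, the following sketch alternates minimizations over two variable blocks of a penalized problem min f(x) + g(y) + rho * ||c(x, y)||_1 and increases the penalty parameter until the coupling constraints c(x, y) = 0 are (approximately) satisfied. The block oracles solve_x and solve_y are hypothetical placeholders; this is a generic sketch of the scheme, not the authors' implementation for the bilevel single-level reformulation.

```python
# Generic penalty alternating direction method (PADM) sketch.
# Assumptions: solve_x and solve_y are user-supplied oracles that
# (approximately) minimize the penalized objective over one block
# while the other block is fixed; they are placeholders here.

import numpy as np

def padm(solve_x, solve_y, c, x0, y0, rho0=1.0, rho_factor=10.0,
         max_outer=20, max_inner=50, tol=1e-6):
    """Minimize f(x) + g(y) + rho * ||c(x, y)||_1 by alternating over x and y,
    increasing the penalty parameter rho until c(x, y) is (nearly) satisfied."""
    x, y, rho = x0, y0, rho0
    for _ in range(max_outer):                      # penalty loop
        for _ in range(max_inner):                  # alternating direction loop
            x_new = solve_x(y, rho)                 # argmin over x with y fixed
            y_new = solve_y(x_new, rho)             # argmin over y with x fixed
            converged = (np.linalg.norm(x_new - x)
                         + np.linalg.norm(y_new - y)) < tol
            x, y = x_new, y_new
            if converged:
                break
        if np.linalg.norm(c(x, y), 1) < tol:        # coupling constraints satisfied
            return x, y                             # feasible (heuristic) point found
        rho *= rho_factor                           # tighten the penalty
    return x, y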

Author(s):
Krešimir Mihić
Mingxi Zhu
Yinyu Ye

The Alternating Direction Method of Multipliers (ADMM) has gained a lot of attention for solving large-scale, objective-separable constrained optimization problems. However, the two-block variable structure of ADMM still limits the practical computational efficiency of the method, because at least one large matrix factorization is needed, even for linear and convex quadratic programming. This drawback may be overcome by enforcing a multi-block structure on the decision variables of the original optimization problem. Unfortunately, multi-block ADMM with more than two blocks is not guaranteed to converge. On the other hand, two positive developments have been made: first, if in each cyclic loop one randomly permutes the updating order of the multiple blocks, then the method converges in expectation for solving any system of linear equations with any number of blocks; second, such a randomly permuted ADMM also works for equality-constrained convex quadratic programming even when the objective function is not separable. The goal of this paper is twofold. First, we add more randomness to ADMM by developing a randomly assembled cyclic ADMM (RAC-ADMM), in which the decision variables in each block are randomly assembled. We discuss the theoretical properties of RAC-ADMM, show when random assembling helps and when it hurts, and develop a criterion that guarantees almost sure convergence. Second, using this theoretical guidance, we conduct multiple numerical tests on both randomly generated and large-scale benchmark quadratic optimization problems, including continuous and binary graph-partitioning and quadratic assignment problems as well as selected machine learning problems. Our numerical tests show that RAC-ADMM, combined with a variable-grouping strategy, can significantly improve computational efficiency on most quadratic optimization problems.
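A minimal sketch of the RAC-ADMM idea for an equality-constrained quadratic program min 0.5 x'Qx + c'x s.t. Ax = b is given below: in each cycle, the variables are randomly assembled into blocks, each block is updated by minimizing the augmented Lagrangian with the other blocks fixed, and the multipliers are then updated. The block count, penalty parameter beta, and stopping rule are illustrative assumptions, not the authors' tuned settings.

```python
# Randomly assembled cyclic ADMM (RAC-ADMM) sketch for the QP
#   min 0.5 x'Qx + c'x  s.t.  Ax = b.
# Augmented Lagrangian: 0.5 x'Qx + c'x - y'(Ax - b) + 0.5 * beta * ||Ax - b||^2.

import numpy as np

def rac_admm_qp(Q, c, A, b, num_blocks=4, beta=1.0, max_cycles=500, tol=1e-6):
    n = Q.shape[0]
    x = np.zeros(n)                        # primal variables
    y = np.zeros(A.shape[0])               # multipliers for Ax = b
    for _ in range(max_cycles):
        perm = np.random.permutation(n)    # randomly assemble variables into blocks
        blocks = np.array_split(perm, num_blocks)
        for S in blocks:                   # cyclic sweep over the random blocks
            notS = np.setdiff1d(np.arange(n), S)
            AS, AnotS = A[:, S], A[:, notS]
            # Minimize the augmented Lagrangian over block x_S with the rest fixed:
            H = Q[np.ix_(S, S)] + beta * AS.T @ AS
            r = (c[S] + Q[np.ix_(S, notS)] @ x[notS]
                 - AS.T @ y + beta * AS.T @ (AnotS @ x[notS] - b))
            x[S] = np.linalg.solve(H, -r)
        residual = A @ x - b
        y -= beta * residual               # multiplier (dual) update
        if np.linalg.norm(residual) < tol:
            break
    return x, y
```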


2021
Author(s):
Miantao Chao
Liqun Liu

In this paper, we propose a dynamic alternating direction method of multipliers (ADMM) for two-block separable optimization problems. The well-known classical ADMM can be recovered by time discretization of the underlying dynamical system. Under suitable conditions, we prove that the trajectory asymptotically converges to a saddle point of the Lagrangian function of the problem. When the coefficient matrices in the constraint are identity matrices, we prove a worst-case O(1/t) convergence rate in the ergodic sense.
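For reference, the classical two-block ADMM iteration recovered by the discretization, for min f(x) + g(z) s.t. x = z (the identity-matrix constraint case mentioned above), is sketched below on a lasso-type instance with f(x) = 0.5 ||Ax - b||^2 and g(z) = lam ||z||_1; the concrete problem choice is an illustrative assumption, not taken from the paper.

```python
# Classical two-block ADMM (scaled form) for  min f(x) + g(z)  s.t.  x = z,
# illustrated on the lasso problem f(x) = 0.5 * ||Ax - b||^2, g(z) = lam * ||z||_1.

import numpy as np

def admm_lasso(A, b, lam, rho=1.0, max_iter=500, tol=1e-6):
    n = A.shape[1]
    x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)   # u is the scaled dual variable
    Atb = A.T @ b
    M = A.T @ A + rho * np.eye(n)                     # factorable once, reused below
    for _ in range(max_iter):
        x = np.linalg.solve(M, Atb + rho * (z - u))   # x-update (quadratic subproblem)
        z_old = z
        z = np.sign(x + u) * np.maximum(np.abs(x + u) - lam / rho, 0.0)  # soft-threshold
        u = u + x - z                                  # dual update
        if (np.linalg.norm(x - z) < tol                # primal residual
                and rho * np.linalg.norm(z - z_old) < tol):  # dual residual
            break
    return z
```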


Complexity, 2020, Vol. 2020, pp. 1-14
Author(s):
Miantao Chao
Zhao Deng
Jinbao Jian

The alternating direction method of multipliers (ADMM) is an effective method for solving two-block separable convex problems, and its convergence is well understood. However, when the number of blocks exceeds two, a nonconvex function is involved, or the structure is nonseparable, ADMM or its directly extended version may fail to converge. In this paper, we propose an ADMM-based algorithm for nonconvex multiblock optimization problems with a nonseparable structure. We show that, under mild conditions, any cluster point of the iterative sequence generated by the proposed algorithm is a critical point. Furthermore, we establish strong convergence of the whole sequence under the condition that the potential function satisfies the Kurdyka–Łojasiewicz property. This provides a theoretical basis for applying the proposed ADMM in practice. Finally, we give some preliminary numerical results to show the effectiveness of the proposed algorithm.
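A rough skeleton of such a multiblock ADMM-type iteration is sketched below: the blocks are swept sequentially, each minimizing the augmented Lagrangian (possibly with a proximal term) while the others are held fixed, followed by a multiplier update. The oracles argmin_block and residual are hypothetical placeholders; the precise update rules, proximal terms, and assumptions are those stated in the paper.

```python
# Skeleton of a multiblock ADMM-type iteration for
#   min  f_1(x_1) + ... + f_N(x_N) + H(x_1, ..., x_N)   s.t.  A_1 x_1 + ... + A_N x_N = b,
# where H is a (possibly nonconvex) nonseparable coupling term.

import numpy as np

def multiblock_admm(argmin_block, residual, x_blocks, lam, beta,
                    max_iter=200, tol=1e-6):
    """argmin_block(i, x_blocks, lam, beta) returns a minimizer of the augmented
    Lagrangian (plus an optional proximal term) with respect to block i, the other
    blocks fixed; residual(x_blocks) returns A_1 x_1 + ... + A_N x_N - b."""
    for _ in range(max_iter):
        for i in range(len(x_blocks)):       # sequential (Gauss-Seidel) block sweep
            x_blocks[i] = argmin_block(i, x_blocks, lam, beta)
        r = residual(x_blocks)
        lam = lam - beta * r                 # multiplier update
        if np.linalg.norm(r) < tol:
            break
    return x_blocks, lam
```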


2020, Vol. 26, pp. 32
Author(s):
Paul Manns
Christian Kirches

Partial outer convexification is a relaxation technique for mixed-integer optimal control problems (MIOCPs) constrained by time-dependent differential equations. Sum-Up-Rounding algorithms allow one to approximate feasible points of the relaxed, convexified continuous problem with binary ones that are feasible up to an arbitrarily small δ > 0. We show that this approximation property holds for ODEs and semilinear PDEs under mild regularity assumptions on the nonlinearity and the solution trajectory of the PDE. In particular, the differentiability and uniformly bounded derivative requirements on the involved functions imposed in previous work are not necessary to show convergence of the method.
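On a given time grid, the Sum-Up-Rounding construction can be sketched as follows: the relaxed (convexified) controls are accumulated interval by interval, and on each interval the binary control with the largest accumulated deficit is activated, which keeps the integrated deviation between relaxed and binary controls bounded in terms of the mesh width. This is a generic sketch of the standard SUR scheme under an SOS1-type assumption (the relaxed controls sum to one on each interval), not the exact PDE-constrained setting analyzed in the paper.

```python
# Generic Sum-Up-Rounding (SUR) sketch: given relaxed controls alpha[t, i] >= 0
# with sum_i alpha[t, i] = 1 on each interval of a time grid with widths dt[t],
# construct binary controls omega[t, i] in {0, 1} whose accumulated deviation
# from the relaxed controls stays bounded by the mesh width.

import numpy as np

def sum_up_rounding(alpha, dt):
    T, n = alpha.shape
    omega = np.zeros_like(alpha)
    accumulated = np.zeros(n)               # integral of (alpha - omega) so far
    for t in range(T):
        accumulated += alpha[t] * dt[t]     # add the relaxed contribution
        i = int(np.argmax(accumulated))     # control with the largest deficit
        omega[t, i] = 1.0                   # activate exactly one binary control
        accumulated[i] -= dt[t]             # subtract the binary contribution
    return omega
```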

