Mathematical Programming Computation
Latest Publications


TOTAL DOCUMENTS: 208 (five years: 58)

H-INDEX: 31 (five years: 3)

Published by: Springer-Verlag

ISSN: 1867-2957, 1867-2949

Author(s):  
Morteza Kimiaei ◽  
Arnold Neumaier ◽  
Behzad Azmi

Abstract: Recently, Neumaier and Azmi gave a comprehensive convergence theory for a generic algorithm for bound-constrained optimization problems with a continuously differentiable objective function. The algorithm combines an active-set strategy with a gradient-free line search along a piecewise linear search path whose directions are chosen to reduce zigzagging. This paper describes an efficient implementation of this scheme. The implementation employs new limited-memory techniques for computing the search directions, adds various safeguards relevant when finite-precision arithmetic is used, and includes many other practical enhancements. The paper compares this implementation with several other solvers on unconstrained and bound-constrained problems from a standard test collection and makes recommendations on which solver to use and when; the best choice depends on the problem class, the problem dimension, and the precise goal.
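
The scheme summarized above combines an active-set strategy with a gradient-free line search along a piecewise linear path. As a much simpler point of reference only (this is not the paper's algorithm; the function names, tolerances, and Armijo constant below are illustrative assumptions), a minimal sketch of a projected-gradient method with a backtracking line search on box constraints could look like this:

```python
# Minimal sketch: projected gradient descent with an Armijo backtracking line
# search for bound-constrained minimization. Illustrative only; not the
# limited-memory algorithm described in the abstract.
import numpy as np

def project(x, lo, hi):
    """Project a point onto the box [lo, hi]."""
    return np.minimum(np.maximum(x, lo), hi)

def projected_gradient(f, grad, x0, lo, hi, max_iter=200, tol=1e-8):
    x = project(np.asarray(x0, dtype=float), lo, hi)
    for _ in range(max_iter):
        g = grad(x)
        # The projected gradient acts as a first-order optimality measure on the box.
        if np.linalg.norm(x - project(x - g, lo, hi)) < tol:
            break
        # Backtracking (Armijo) line search along the projected steepest-descent arc.
        alpha, fx = 1.0, f(x)
        while True:
            x_new = project(x - alpha * g, lo, hi)
            if f(x_new) <= fx - 1e-4 * np.dot(g, x - x_new) or alpha < 1e-12:
                break
            alpha *= 0.5
        x = x_new
    return x

# Illustrative usage: minimize a convex quadratic over the box [0, 2]^2.
f = lambda x: np.sum((x - np.array([1.5, -0.5])) ** 2)
grad = lambda x: 2.0 * (x - np.array([1.5, -0.5]))
print(projected_gradient(f, grad, x0=[2.0, 2.0], lo=0.0, hi=2.0))  # ~[1.5, 0.0]
```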


Author(s):  
Alberto Ceselli ◽  
Lucas Létocart ◽  
Emiliano Traversi

Author(s):  
Gregor Hendel

Abstract: Large Neighborhood Search (LNS) heuristics are among the most powerful but also most expensive heuristics for mixed integer programs (MIP). Ideally, a solver adaptively concentrates its limited computational budget by learning which LNS heuristics work best for the MIP problem at hand. To this end, this work introduces Adaptive Large Neighborhood Search (ALNS) for MIP, a primal heuristic that acts as a framework for eight popular LNS heuristics such as Local Branching and Relaxation Induced Neighborhood Search (RINS). We distinguish the available LNS heuristics by their individual search spaces, which we call auxiliary problems. The decision of which auxiliary problem to execute is guided by selection strategies for the multi-armed bandit problem, a related online optimization problem in which suitable actions must be chosen to maximize a reward function. In this paper, we propose an LNS-specific reward function that learns to distinguish between the available auxiliary problems based on successful calls and failures. A second, algorithmic enhancement is a generic variable fixing prioritization, which ALNS employs to adjust the subproblem complexity as needed. This is particularly useful for LNS heuristics that do not fix variables by themselves. The proposed primal heuristic has been implemented within the MIP solver SCIP. An extensive computational study compares different LNS strategies within our ALNS framework on a large set of publicly available MIP instances from the MIPLIB and Coral benchmark sets. The results of this study are used to calibrate the parameters of the bandit selection strategies. A second computational experiment shows the computational benefits of the proposed ALNS framework within the MIP solver SCIP.
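
To make the bandit-driven selection described above concrete, the sketch below applies the classic UCB1 rule to a set of named auxiliary problems. The neighborhood names and the random reward placeholder are illustrative assumptions; the paper proposes its own LNS-specific reward function, which is not reproduced here.

```python
# Minimal sketch: UCB1 selection among LNS auxiliary problems.
# Arm names and the reward placeholder are illustrative assumptions.
import math
import random

class UCB1Selector:
    def __init__(self, arms):
        self.arms = list(arms)
        self.counts = {a: 0 for a in self.arms}      # times each arm was chosen
        self.rewards = {a: 0.0 for a in self.arms}   # cumulative reward per arm

    def select(self):
        # Play every arm once before applying the UCB1 formula.
        for a in self.arms:
            if self.counts[a] == 0:
                return a
        total = sum(self.counts.values())
        def ucb(a):
            mean = self.rewards[a] / self.counts[a]
            return mean + math.sqrt(2.0 * math.log(total) / self.counts[a])
        return max(self.arms, key=ucb)

    def update(self, arm, reward):
        self.counts[arm] += 1
        self.rewards[arm] += reward

# Illustrative usage: rewards in [0, 1], higher for calls that improve the incumbent.
selector = UCB1Selector(["rins", "local_branching", "crossover"])
for _ in range(20):
    arm = selector.select()
    reward = random.random()   # placeholder for an LNS-specific reward
    selector.update(arm, reward)
```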


Author(s):  
Robin Verschueren ◽  
Gianluca Frison ◽  
Dimitris Kouzoupis ◽  
Jonathan Frey ◽  
Niels van Duijkeren ◽  
...  

Author(s):  
Artur M. Schweidtmann ◽  
Dominik Bongartz ◽  
Daniel Grothe ◽  
Tim Kerkenhoff ◽  
Xiaopeng Lin ◽  
...  

Abstract: Gaussian processes (Kriging) are interpolating data-driven models that are frequently applied in various disciplines. Often, Gaussian processes are trained on datasets and subsequently embedded as surrogate models in optimization problems. These optimization problems are nonconvex, and global optimization is desired. However, previous work has observed computational burdens that limit deterministic global optimization to Gaussian processes trained on few data points. We propose a reduced-space formulation for deterministic global optimization with trained Gaussian processes embedded. For optimization, the branch-and-bound solver branches only on the free variables, and McCormick relaxations are propagated through explicit Gaussian process models. The approach also leads to significantly smaller and computationally cheaper subproblems for lower and upper bounding. To further accelerate convergence, we derive envelopes of common covariance functions for GPs and tight relaxations of acquisition functions used in Bayesian optimization, including expected improvement, probability of improvement, and lower confidence bound. In total, we reduce computational time by orders of magnitude compared to state-of-the-art methods, thus overcoming previous computational burdens. We demonstrate the performance and scaling of the proposed method and apply it to Bayesian optimization with global optimization of the acquisition function and to chance-constrained programming. The Gaussian process models, acquisition functions, and training scripts are available open-source within the “MeLOn—MachineLearning Models for Optimization” toolbox (https://git.rwth-aachen.de/avt.svt/public/MeLOn).
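
As a rough illustration of two ingredients named above, the sketch below computes a Gaussian process posterior with a squared-exponential kernel and evaluates the lower confidence bound acquisition in plain NumPy. The kernel choice, hyperparameters, and function names are assumptions made for illustration; the trained models and acquisition functions actually used are provided in the MeLOn toolbox linked above.

```python
# Minimal sketch: zero-mean GP posterior with an RBF kernel and the lower
# confidence bound (LCB) acquisition for minimization. Illustrative only.
import numpy as np

def rbf_kernel(A, B, lengthscale=1.0, variance=1.0):
    """Squared-exponential covariance between the row vectors of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * d2 / lengthscale**2)

def gp_posterior(X_train, y_train, X_test, noise=1e-6):
    """Posterior mean and standard deviation of a zero-mean GP at X_test."""
    K = rbf_kernel(X_train, X_train) + noise * np.eye(len(X_train))
    K_s = rbf_kernel(X_train, X_test)
    K_ss = rbf_kernel(X_test, X_test)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mean = K_s.T @ alpha
    v = np.linalg.solve(L, K_s)
    var = np.diag(K_ss) - (v ** 2).sum(0)
    return mean, np.sqrt(np.maximum(var, 0.0))

def lower_confidence_bound(mean, std, kappa=2.0):
    """LCB acquisition for minimization: smaller values are more promising."""
    return mean - kappa * std

# Illustrative usage on toy one-dimensional data.
X = np.array([[0.0], [0.5], [1.0]])
y = np.array([0.0, -0.3, 0.2])
Xq = np.linspace(0.0, 1.0, 5).reshape(-1, 1)
m, s = gp_posterior(X, y, Xq)
print(lower_confidence_bound(m, s))
```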


Author(s):  
Marc E. Pfetsch ◽  
Giovanni Rinaldi ◽  
Paolo Ventura

Abstract: We study a variant of the weighted consecutive ones property problem. Here, a 0/1-matrix is given with a cost associated with each of its entries, and one has to find a minimum-cost set of zero entries to be turned into ones so that the matrix has the consecutive ones property for rows. We investigate polyhedral and combinatorial properties of the problem and exploit them in a branch-and-cut algorithm. In particular, we devise preprocessing rules and investigate variants of “local cuts”. We test the resulting algorithm on a number of instances and report on these computational experiments.
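
To make the objective concrete, the small sketch below computes, for a fixed column order, the cost of turning into ones the zero entries that lie between the first and last one of each row; these are exactly the fill-ins needed to make each row's ones consecutive in that order. The actual problem additionally involves column permutations (the consecutive ones property), which this sketch deliberately does not handle; all names are illustrative.

```python
# Minimal sketch: fill-in cost for a FIXED column order. Illustrative only;
# it does not search over column permutations as the C1P problem requires.
import numpy as np

def row_fill_cost(row, cost_row):
    """Cost of turning the zeros between the first and last 1 of a row into ones."""
    ones = np.flatnonzero(row)
    if len(ones) <= 1:
        return 0.0
    first, last = ones[0], ones[-1]
    gap = (row[first:last + 1] == 0)
    return float(cost_row[first:last + 1][gap].sum())

def fill_cost_fixed_order(M, C):
    """Total fill-in cost over all rows for the given column order."""
    return sum(row_fill_cost(M[i], C[i]) for i in range(M.shape[0]))

# Illustrative usage with unit costs.
M = np.array([[1, 0, 1, 0],
              [0, 1, 1, 0]])
C = np.ones_like(M, dtype=float)
print(fill_cost_fixed_order(M, C))  # 1.0: one zero between the ones of row 0
```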

