optimization solvers
Recently Published Documents


TOTAL DOCUMENTS: 40 (FIVE YEARS: 18)

H-INDEX: 6 (FIVE YEARS: 2)

Author(s):  
Ali Hakan Tor

The aim of this study is to compare the performance of the smooth and nonsmooth optimization solvers from the HANSO (Hybrid Algorithm for Nonsmooth Optimization) software. The smooth optimization solver is an implementation of the Broyden-Fletcher-Goldfarb-Shanno (BFGS) method, and the nonsmooth optimization solver is the Hybrid Algorithm for Nonsmooth Optimization itself, a combination of BFGS and the Gradient Sampling Algorithm (GSA). We use a well-known collection of academic test problems for nonsmooth optimization containing both convex and nonconvex problems. The motivation for this research is the importance of comparatively assessing smooth optimization methods for solving nonsmooth optimization problems. This assessment demonstrates how successful the BFGS method is at solving nonsmooth optimization problems in comparison with the nonsmooth optimization solver from HANSO. Performance profiles based on the number of iterations, the number of function evaluations, and the number of subgradient evaluations are used to compare the solvers.
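To make the gradient sampling idea concrete, here is a minimal Python sketch of one gradient-sampling descent step (illustrative only, not the HANSO implementation; the sampling radius, sample count, and test function are assumptions):

```python
import numpy as np

def gradient_sampling_step(grad, x, radius=1e-4, n_samples=20, seed=0):
    """One gradient-sampling step: sample gradients near x and return a
    descent direction, the negated (approximate) minimum-norm element of
    the convex hull of the sampled gradients."""
    rng = np.random.default_rng(seed)
    grads = [grad(x)]
    for _ in range(n_samples):
        grads.append(grad(x + radius * rng.standard_normal(x.shape)))
    G = np.stack(grads)
    # Frank-Wolfe on min ||w @ G||^2 over the simplex (a crude approximation).
    w = np.full(len(G), 1.0 / len(G))
    for t in range(100):
        lin = G @ (w @ G)            # gradient of the objective w.r.t. w
        i = int(np.argmin(lin))      # best simplex vertex
        gamma = 2.0 / (t + 2)
        w = (1 - gamma) * w
        w[i] += gamma
    return -(w @ G)

# Example on the nonsmooth function f(x) = |x[0]| + 2*|x[1]|:
subgrad = lambda x: np.array([np.sign(x[0]), 2.0 * np.sign(x[1])])
print(gradient_sampling_step(subgrad, np.array([1.0, -0.5])))
```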


Author(s):  
Benjamin Müller ◽  
Gonzalo Muñoz ◽  
Maxime Gasse ◽  
Ambros Gleixner ◽  
Andrea Lodi ◽  
...  

The most important ingredient for solving mixed-integer nonlinear programs (MINLPs) to global $\epsilon$-optimality with spatial branch and bound is a tight, computationally tractable relaxation. Due to both theoretical and practical considerations, relaxations of MINLPs are usually required to be convex. Nonetheless, current optimization solvers can often successfully handle a moderate presence of nonconvexities, which opens the door to potentially tighter nonconvex relaxations. In this work, we exploit this fact and make use of a nonconvex relaxation obtained via aggregation of constraints: a surrogate relaxation. These relaxations were actively studied for linear integer programs in the 1970s and 1980s, but they have been scarcely considered since. We revisit them in an MINLP setting and show the computational benefits and challenges they can bring. Additionally, we study a generalization that allows multiple simultaneous aggregations and present the first algorithm capable of computing the best set of aggregations. We propose a multitude of computational enhancements to improve its practical performance and evaluate the algorithm's ability to generate strong dual bounds through extensive computational experiments.
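As a toy illustration of constraint aggregation (not the authors' algorithm; the integer program and the multiplier grid below are assumptions), this Python sketch computes surrogate-relaxation bounds by brute force and picks the tightest one:

```python
import itertools

# Toy integer program: min -3x - 4y  s.t.  2x + y <= 4,  x + 3y <= 6,  x,y in {0..4}.
# Its true optimum is -8, attained at (x, y) = (0, 2).
def surrogate_bound(lam1, lam2):
    """Optimal value of the surrogate relaxation that replaces the two
    constraints by their lam-weighted aggregation (lam >= 0)."""
    best = float("inf")
    for x, y in itertools.product(range(5), repeat=2):
        if lam1 * (2*x + y - 4) + lam2 * (x + 3*y - 6) <= 0:
            best = min(best, -3*x - 4*y)
    return best

# The surrogate dual searches over multipliers for the tightest lower bound;
# here the pair (2, 1) already recovers the true optimum of -8.
bounds = {(l1, l2): surrogate_bound(l1, l2)
          for l1 in (0, 1, 2) for l2 in (0, 1, 2) if (l1, l2) != (0, 0)}
print(max(bounds.values()), bounds)
```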


2021 ◽  
Vol 12 (3) ◽  
pp. 172-187
Author(s):  
Heng Xiao ◽  
Yokoya ◽  
Toshiharu Hatanaka

In recent years, evolutionary multitasking has received attention in the evolutionary computation community. The multifactorial evolutionary algorithm (MFEA) was proposed as an evolutionary multifactorial optimization method to realize evolutionary multitasking. MFEA introduces the concept of a skill factor, which assigns a preferred task to each individual. Based on the skill factor, several multifactorial optimization solvers, including swarm intelligence methods, have been developed. In this paper, a PSO-FA hybrid model with a model selection mechanism, triggered by updates to the personal best memory, is applied to multifactorial optimization. Skill factor reassignment is introduced in this model to enhance the search capability of the hybrid swarm. Numerical experiments on nine benchmark problems based on typical multitask situations, including a comparison with a simple multifactorial PSO, show the effectiveness of the proposed method.
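A hedged sketch of the skill-factor assignment these methods build on (illustrative, not the paper's PSO-FA code; the two test tasks are assumptions):

```python
import numpy as np

def assign_skill_factors(population, tasks):
    """Skill-factor assignment in the spirit of MFEA: rank each individual
    on every task, then assign it the task on which it ranks best.
    `tasks` is a list of objective functions to be minimized."""
    fitness = np.array([[task(ind) for task in tasks] for ind in population])
    # Factorial rank: position of each individual in each task's ordering.
    ranks = np.argsort(np.argsort(fitness, axis=0), axis=0)
    return np.argmin(ranks, axis=1)  # preferred task index per individual

pop = [np.random.rand(2) for _ in range(10)]
tasks = [lambda x: np.sum(x**2), lambda x: np.sum((x - 1)**2)]
print(assign_skill_factors(pop, tasks))
```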


Author(s):  
Cheng Seong Khor

The chapter focuses on recent advancements in commercial integer optimization solvers, as exemplified by the CPLEX software package, with particular (but not exclusive) attention to mixed-integer linear programming (MILP) models applied to business intelligence applications. We provide background on the main underlying algorithmic method of branch-and-cut, which builds on the established optimization methods of branch-and-bound and cutting planes. The chapter also covers heuristic-based algorithms, including preprocessing and probing strategies as well as the more advanced local or neighborhood search methods for polishing solutions toward enhanced use in practical settings. Emphasis is given to both the theory and the implementation of the methods available. Further considerations are offered on parallelization, solution pools, and tuning tools, culminating in concluding remarks on computational performance vis-à-vis business intelligence applications and perspectives for future work in this area.
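For flavor, a minimal docplex sketch of the kind of MILP workflow the chapter discusses (assumes the docplex package and a local CPLEX installation; the knapsack data and the gap setting are illustrative):

```python
from docplex.mp.model import Model

mdl = Model(name="knapsack")
x = mdl.binary_var_list(4, name="x")
values, weights = [10, 13, 8, 21], [3, 5, 2, 7]
mdl.add_constraint(mdl.sum(w * xi for w, xi in zip(weights, x)) <= 10)
mdl.maximize(mdl.sum(v * xi for v, xi in zip(values, x)))
# One of the tuning hooks the chapter discusses: relative MIP gap tolerance.
mdl.parameters.mip.tolerances.mipgap = 0.01
sol = mdl.solve()
print(sol.objective_value, [sol.get_value(xi) for xi in x])
```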


Author(s):  
Cheng Lu ◽  
Dorit S. Hochbaum

We study a one-dimensional discrete signal denoising problem that consists of minimizing a sum of separable convex fidelity terms and convex regularization terms, where the latter penalize the differences between adjacent signal values. This problem generalizes the total variation regularization problem. We provide a unified approach, based on the Karush–Kuhn–Tucker optimality conditions, to solve the problem for general convex fidelity and regularization functions. This approach leads to a fast algorithm for general convex fidelity and regularization functions, and a faster algorithm if, in addition, the fidelity functions are differentiable and the regularization functions are strictly convex. Both algorithms achieve the best theoretical worst-case complexity among existing algorithms for the classes of objective functions studied here. In practice, our C++ implementation of the method is also considerably faster than popular C++ nonlinear optimization solvers for the problem.
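For comparison, the denoising problem itself can be stated in a few lines with a generic convex solver such as CVXPY; this is a baseline of the kind the authors benchmark against, not their specialized KKT-based algorithm (the signal and the weight are assumptions):

```python
import numpy as np
import cvxpy as cp

# Synthetic noisy piecewise-constant signal (illustrative data).
y = np.concatenate([np.zeros(50), np.ones(50)]) + 0.1 * np.random.randn(100)
x = cp.Variable(100)
lam = 1.0  # regularization weight, chosen arbitrarily here
# Quadratic fidelity plus total-variation penalty on adjacent differences.
cost = cp.sum_squares(x - y) + lam * cp.sum(cp.abs(cp.diff(x)))
cp.Problem(cp.Minimize(cost)).solve()
denoised = x.value
```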


Author(s):  
Florian Schwendinger ◽  
Bettina Grün ◽  
Kurt Hornik

Relative risks are estimated to assess associations and effects due to their ease of interpretability, e.g., in epidemiological studies. Fitting log-binomial regression models allows one to use the estimated regression coefficients to directly infer relative risks. The estimation of these models, however, is complicated by the constraints that must be imposed on the parameter space. In this paper we systematically compare different optimization algorithms for obtaining the maximum likelihood estimates of the regression coefficients in log-binomial regression. We first establish under which conditions the maximum likelihood estimates are guaranteed to be finite and unique, which allows us to identify and exclude problematic cases. In simulation studies with artificial data we compare the performance of different optimizers, including solvers based on the augmented Lagrangian method, interior-point methods (among them a conic optimizer), majorize-minimize algorithms, iteratively reweighted least squares, and expectation-maximization algorithm variants. We demonstrate that conic optimizers emerge as the preferred choice due to their reliability, speed, and freedom from hyperparameter tuning.
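A minimal sketch of the constrained maximum likelihood problem being compared, here solved with a general-purpose SciPy optimizer rather than any of the paper's specific solvers (the design matrix, data, and starting point are assumptions):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
X = np.column_stack([np.ones(200), rng.uniform(-1, 0, 200)])  # toy design
beta_true = np.array([-0.5, 0.8])
y = rng.binomial(1, np.exp(X @ beta_true))  # log-binomial: P(y=1) = exp(X @ beta)

def negloglik(beta):
    eta = X @ beta
    return -np.sum(y * eta + (1 - y) * np.log1p(-np.exp(eta)))

# Linear constraints X @ beta <= -eps keep the fitted probabilities below 1.
eps = 1e-6
cons = {"type": "ineq", "fun": lambda b: -(X @ b) - eps}
fit = minimize(negloglik, x0=np.array([-1.0, 0.0]), method="SLSQP",
               constraints=cons)
print(fit.x)  # estimated regression coefficients
```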


Author(s):  
Ahmed Osman ◽  
Assim Sagahyroon ◽  
Raafat Aburukba ◽  
Fadi Aloul

Cloud computing has emerged as a practical paradigm for providing IT resources, infrastructure, and services. This has led to the establishment of datacenters with substantial energy demands. This work investigates the optimization of energy consumption in cloud datacenters through energy-efficient allocation of tasks to resources. It develops formal optimization models that minimize the energy consumption of computational resources and evaluates existing optimization solvers on these models. Integer linear programming (ILP) techniques are used to model the scheduling problem. The objective is to minimize the total power consumed by the active and idle cores of the servers' CPUs while meeting a set of constraints. We then use these models to carry out a detailed performance comparison between a selected set of generic ILP solvers and 0-1 Boolean satisfiability (SAT)-based solvers on the ILP formulations. Simulation results indicate that in some cases the developed models save up to 38% in energy consumption compared to common techniques such as round robin. Furthermore, the results show that generic ILP solvers outperform SAT-based ILP solvers, especially as the number of tasks and resources grows.
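The flavor of such an ILP can be sketched in a few lines with PuLP (an open-source modeler; the task loads, capacity, and power figures below are illustrative assumptions, not the paper's model):

```python
import pulp

tasks, cores = range(6), range(3)
load = [2, 3, 1, 4, 2, 2]            # per-task load (assumed)
cap = 7                              # per-core capacity (assumed)
p_active, p_idle = 10.0, 2.0         # power of an active vs. idle core

prob = pulp.LpProblem("energy_scheduling", pulp.LpMinimize)
x = pulp.LpVariable.dicts("x", (tasks, cores), cat="Binary")   # task t on core c
on = pulp.LpVariable.dicts("on", cores, cat="Binary")          # core c is active
for t in tasks:                      # every task is assigned exactly once
    prob += pulp.lpSum(x[t][c] for c in cores) == 1
for c in cores:                      # capacity, and linking x to on
    prob += pulp.lpSum(load[t] * x[t][c] for t in tasks) <= cap * on[c]
prob += pulp.lpSum(p_active * on[c] + p_idle * (1 - on[c]) for c in cores)
prob.solve(pulp.PULP_CBC_CMD(msg=False))
print([pulp.value(on[c]) for c in cores])  # which cores stay active
```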


2020 ◽  
Vol 6 (1) ◽  
Author(s):  
Darius Bunandar ◽  
Luke C. G. Govia ◽  
Hari Krovi ◽  
Dirk Englund

Quantum key distribution (QKD) allows for secure communications safe against attacks by quantum computers. QKD protocols are performed by sending a sizeable, but finite, number of quantum signals between the distant parties involved. Many QKD experiments, however, predict their achievable key rates using asymptotic formulas, which assume the transmission of an infinite number of signals, partly because QKD proofs with finite transmissions (and finite-key lengths) can be difficult. Here we develop a robust numerical approach for calculating the key rates for QKD protocols in the finite-key regime in terms of two semi-definite programs (SDPs). The first uses the relation between conditional smooth min-entropy and quantum relative entropy through the quantum asymptotic equipartition property, and the second uses the relation between the smooth min-entropy and quantum fidelity. The numerical programs are formulated under the assumption of collective attacks from the eavesdropper and can be promoted to withstand coherent attacks using the postselection technique. We then solve these SDPs using convex optimization solvers and obtain numerical calculations of finite-key rates for several protocols difficult to analyze analytically, such as BB84 with unequal detector efficiencies, B92, and twin-field QKD. Our numerical approach democratizes the composable security proofs for QKD protocols where the derived keys can be used as an input to another cryptosystem.
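As a hedged illustration of the SDP form these computations rely on, here is a generic trace-minimization SDP in CVXPY; it is not the paper's finite-key program (the cost matrix and dimension are assumptions):

```python
import numpy as np
import cvxpy as cp

n = 4
C = np.diag([1.0, 2.0, 3.0, 4.0])    # cost matrix (assumed data)
rho = cp.Variable((n, n), PSD=True)  # density-matrix-like PSD variable
constraints = [cp.trace(rho) == 1]   # unit trace, as for a quantum state
prob = cp.Problem(cp.Minimize(cp.trace(C @ rho)), constraints)
prob.solve(solver=cp.SCS)
print(prob.value)  # ~1.0: all weight on the smallest eigenvalue of C
```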


10.29007/f4vs ◽  
2020 ◽  
Author(s):  
Johan Lidén Eddeland ◽  
Sajed Miremadi ◽  
Knut Åkesson

Temporal-logic-based falsification of cyber-physical systems is a testing technique that searches for violations of specified behaviours in simulation models; however, the problem typically requires some model-specific parameter tuning to achieve optimal results. In this experience report, we investigate how different optimization solvers and objective functions affect the falsification outcome for a benchmark set of models and specifications. With data from four different solvers and three different objective functions for the falsification problem, we see that the choice of solver and objective function depends on both the model and the specification to be falsified. We also note that using a robust semantics of Signal Temporal Logic typically improves falsification performance compared to using Boolean semantics.
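A minimal sketch of the robust semantics in question, for a single "always below threshold" specification (the simulated trace is an assumption):

```python
import numpy as np

def rob_always_below(signal, threshold):
    """Robustness of G(signal < threshold): positive means the spec holds,
    and the magnitude says by how much (illustrative robust semantics)."""
    return float(np.min(threshold - signal))

trace = 0.8 * np.sin(np.linspace(0, 10, 200))   # toy simulation output
print(rob_always_below(trace, 1.0))             # ~0.2: holds with margin
```

A falsifier then treats this robustness value as the objective an optimization solver tries to drive negative by varying the model's inputs.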

