SDP RELAXATIONS FOR QUADRATIC OPTIMIZATION PROBLEMS DERIVED FROM POLYNOMIAL OPTIMIZATION PROBLEMS

2010 ◽  
Vol 27 (01) ◽  
pp. 15-38 ◽  
Author(s):  
MARTIN MEVISSEN ◽  
MASAKAZU KOJIMA

Based on the convergent sequence of SDP relaxations for multivariate polynomial optimization problems (POPs) by Lasserre (2006), Waki et al. (2006) constructed a sequence of sparse SDP relaxations to solve sparse POPs efficiently. Nevertheless, the size of the sparse SDP relaxation remains the major obstacle to solving POPs of higher degree. This paper proposes an approach that transforms a general POP into a quadratic optimization problem (QOP), which substantially reduces the size of the SDP relaxation. We introduce different heuristics that yield equivalent QOPs and show how the sparsity of a POP is maintained under the transformation. Most importantly, we discuss how to increase the quality of the SDP relaxation for a QOP. Moreover, we improve the accuracy of the solution of the SDP relaxation by applying additional local optimization techniques. Finally, we demonstrate the high potential of this approach through numerical results for large-scale POPs of higher degree.
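
As a toy illustration of the POP-to-QOP idea (not the paper's specific heuristics), the sketch below substitutes an auxiliary variable y = x0² for a degree-two monomial, turning a quartic problem into an equivalent quadratically constrained QOP; the objective, variables, and local solver are all illustrative choices.

```python
# Illustrative sketch (not the paper's algorithm): reducing a quartic POP to a
# QOP by substituting an auxiliary variable for a degree-2 monomial.
import numpy as np
from scipy.optimize import minimize

# Original POP: minimize f(x) = (x0^2 - x1)^2 + x0^2   (degree 4)
f = lambda x: x[0]**4 - 2*x[0]**2*x[1] + x[1]**2 + x[0]**2

# Equivalent QOP: introduce y = x0^2 (a quadratic equality constraint), so the
# objective g(x0, x1, y) is quadratic; the SDP relaxation of the QOP is much
# smaller than a degree-4 relaxation of the original POP.
g = lambda z: z[2]**2 - 2*z[2]*z[1] + z[1]**2 + z[0]**2
lift = {'type': 'eq', 'fun': lambda z: z[2] - z[0]**2}

pop = minimize(f, x0=[1.0, 1.0], method='SLSQP')
qop = minimize(g, x0=[1.0, 1.0, 1.0], method='SLSQP', constraints=[lift])
print(pop.fun, qop.fun)  # both local solvers reach the same objective value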

Author(s):  
Krešimir Mihić ◽  
Mingxi Zhu ◽  
Yinyu Ye

Abstract: The Alternating Direction Method of Multipliers (ADMM) has gained considerable attention for solving large-scale, objective-separable constrained optimization problems. However, the two-block variable structure of the ADMM still limits its practical computational efficiency, because at least one large matrix factorization is needed even for linear and convex quadratic programming. This drawback may be overcome by enforcing a multi-block structure on the decision variables of the original optimization problem. Unfortunately, the multi-block ADMM, with more than two blocks, is not guaranteed to converge. On the other hand, two positive developments have been made: first, if the updating order of the multiple blocks is randomly permuted in each cyclic loop, the method converges in expectation when solving any system of linear equations with any number of blocks; second, such a randomly permuted ADMM also works for equality-constrained convex quadratic programming even when the objective function is not separable. The goal of this paper is twofold. First, we add more randomness to the ADMM by developing a randomly assembled cyclic ADMM (RAC-ADMM), in which the decision variables in each block are randomly assembled. We discuss the theoretical properties of RAC-ADMM, show when random assembly helps and when it hurts, and develop a criterion that guarantees almost-sure convergence. Second, using this theoretical guidance, we conduct multiple numerical tests on both randomly generated and large-scale benchmark quadratic optimization problems, including continuous and binary graph-partitioning and quadratic assignment problems, as well as selected machine learning problems. Our numerical tests show that RAC-ADMM, combined with a variable-grouping strategy, can significantly improve computational efficiency on most quadratic optimization problems.
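
The following minimal sketch illustrates the RAC-ADMM idea on an equality-constrained convex QP: the variables are randomly re-assembled into blocks at every cycle, each block is updated by minimizing the augmented Lagrangian, and the multipliers are then updated. The problem data, penalty parameter, block count, and iteration budget are all illustrative assumptions, not the paper's settings.

```python
# Minimal sketch of randomly assembled cyclic ADMM (RAC-ADMM) for an
# equality-constrained convex QP: min 0.5 x'Qx + c'x  s.t.  Ax = b.
import numpy as np

rng = np.random.default_rng(0)
n, m, p, beta = 30, 10, 3, 1.0          # p variable blocks (illustrative)
M = rng.standard_normal((n, n)); Q = M @ M.T + np.eye(n)  # convex quadratic
c, A = rng.standard_normal(n), rng.standard_normal((m, n))
b = A @ rng.standard_normal(n)

x, y = np.zeros(n), np.zeros(m)         # primal iterate, multipliers
for it in range(200):
    perm = rng.permutation(n)           # randomly assemble variables ...
    blocks = np.array_split(perm, p)    # ... into p blocks each cycle
    for J in blocks:
        K = np.setdiff1d(np.arange(n), J)
        AJ, AK = A[:, J], A[:, K]
        # minimize the augmented Lagrangian over block x_J, others fixed
        H = Q[np.ix_(J, J)] + beta * AJ.T @ AJ
        r = (c[J] + Q[np.ix_(J, K)] @ x[K] - AJ.T @ y
             + beta * AJ.T @ (AK @ x[K] - b))
        x[J] = np.linalg.solve(H, -r)
    y -= beta * (A @ x - b)             # dual (multiplier) update
print("primal residual:", np.linalg.norm(A @ x - b))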


Author(s):  
Oleg Berezovskyi

Introduction. Because quadratic extremal problems are generally NP-hard, various convex relaxations are used to bound their global extrema, namely Lagrangian relaxation, SDP relaxation, SOCP relaxation, LP relaxation, and others. This article investigates the dual bound that results from the Lagrangian relaxation of all constraints of a quadratic extremal problem. The main issues when using this approach are the quality of the obtained bounds (the magnitude of the duality gap) and the possibility of improving them. While such bounds are exact for convex quadratic optimization problems, in other cases the issue is rather complicated. In non-convex cases, techniques based on the ambiguity of the problem formulation can be used to improve the dual bounds (to reduce the duality gap). The most common of these techniques extends the original quadratic formulation of the problem by introducing so-called functionally superfluous constraints (additional constraints that follow from the available ones). Such constraints can be constructed by general-purpose methods or by exploiting specific features of the concrete problem. The purpose of the article is to propose methods for improving the Lagrange dual bounds for quadratic extremal problems using the technique of functionally superfluous constraints, and to present examples of constructing such constraints. Results. The general concept of using functionally superfluous constraints to improve the Lagrange dual bounds for quadratic extremal problems is considered. Methods of constructing such constraints are presented. In particular, the method proposed by N. Z. Shor for constructing functionally superfluous constraints for quadratic problems of general form is presented in generalized and schematized forms. Other special techniques that exploit the features of specific problems can also be used. Conclusions. To improve dual bounds for quadratic extremal problems, one can use various families of functionally superfluous constraints, both general and problem-specific. In some cases, their application can improve the bounds or even yield the exact values of global extrema.
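
As a toy illustration of the duality gap the article discusses (not Shor's construction itself), the sketch below computes a simple Lagrange dual bound for max x'Qx over x ∈ {-1, 1}ⁿ and compares it with the exact optimum by enumeration; all data are synthetic.

```python
# Toy Lagrange dual bound for a quadratic extremal problem:
# max x'Qx over x in {-1, 1}^n.  If Diag(u) - Q is positive semidefinite,
# then x'Qx <= x'Diag(u)x = sum(u) for every feasible x, so any such u
# gives a dual (upper) bound.
import itertools
import numpy as np

rng = np.random.default_rng(1)
n = 8
B = rng.standard_normal((n, n)); Q = (B + B.T) / 2

# simplest feasible multiplier: u_i = lambda_max(Q) makes Diag(u) - Q PSD
dual_bound = n * np.linalg.eigvalsh(Q)[-1]

# exact optimum by enumeration (small n only), exposing the duality gap
exact = max(np.array(x) @ Q @ np.array(x)
            for x in itertools.product((-1.0, 1.0), repeat=n))
print(f"dual bound {dual_bound:.3f} vs exact {exact:.3f}")
# Functionally superfluous constraints (e.g. squared pairwise relations)
# enlarge the multiplier space and can shrink this gap.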


Algorithms ◽  
2020 ◽  
Vol 13 (5) ◽  
pp. 108
Author(s):  
Alexey Vakhnin ◽  
Evgenii Sopov

Many modern real-valued optimization tasks use “black-box” (BB) models for evaluating objective functions, and they are high-dimensional and constrained. Using common classifications, we can identify them as constrained large-scale global optimization (cLSGO) tasks. Today, the IEEE Congress on Evolutionary Computation provides a special session and several benchmarks for LSGO; at the same time, cLSGO problems are not yet well studied, and the majority of modern optimization techniques demonstrate insufficient performance on them. The effectiveness of evolutionary algorithms (EAs) in solving constrained low-dimensional optimization problems has been demonstrated in many studies, and the cooperative coevolution (CC) framework has been successfully applied to EAs for solving LSGO problems. In this paper, a new approach for solving cLSGO tasks is proposed, based on CC and a method that increases the size of the variable groups at the decomposition stage (iCC). A new algorithm is proposed that combines the success-history based parameter adaptation for differential evolution (SHADE) optimizer, iCC, and the ε-constrained method (namely, ε-iCC-SHADE). We investigate the performance of ε-iCC-SHADE and compare it with the previously proposed ε-CC-SHADE algorithm on scalable problems from the IEEE CEC 2017 competition on constrained real-parameter optimization.
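
The sketch below illustrates only the iCC decomposition idea: variable groups are optimized separately and their size grows in stages. A plain random-mutation search stands in for SHADE, the ε-constraint handling is omitted, and all sizes and budgets are illustrative.

```python
# Sketch of the "increasing group size" cooperative-coevolution (iCC) idea:
# optimize groups of variables separately, enlarging the groups in stages.
import numpy as np

def sphere(x):                       # toy separable objective
    return float(np.sum(x ** 2))

rng = np.random.default_rng(2)
n = 100
x = rng.uniform(-5, 5, n)            # current best full solution

for group_size in (5, 10, 25, 50):   # iCC: group size grows per stage
    groups = np.array_split(rng.permutation(n), n // group_size)
    for J in groups:                 # optimize each group, rest frozen
        for _ in range(200):
            trial = x.copy()
            trial[J] += 0.3 * rng.standard_normal(len(J))
            if sphere(trial) < sphere(x):
                x = trial
print("final objective:", sphere(x))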


2018 ◽  
Vol 7 (2) ◽  
pp. 39-60
Author(s):  
Kuntal Bhattacharjee

The purpose of this article is to present a backtracking search optimization algorithm (BSA) for determining the feasible optimum solution of economic load dispatch (ELD) problems involving realistic equality and inequality constraints, such as power balance, ramp-rate limits, and prohibited operating zones. The effects of valve-point loading, the multi-fuel option of large-scale thermal plants, and system transmission loss are also taken into consideration for more realistic application. Two effective operations, mutation and crossover, help the BSA find global solutions to different optimization problems. The BSA can handle multimodal problems owing to its powerful exploration and exploitation capabilities, and it is free of sensitive control parameters. Simulation results establish the proposed approach as superior to several other existing optimization techniques in terms of solution quality and computational efficiency, and they also reveal the robustness of the proposed methodology.
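
A minimal sketch of BSA's two operators on a generic bound-constrained objective follows; the crossover map is simplified, the ELD constraints of the article are not modeled, and all settings are illustrative.

```python
# Sketch of BSA's mutation toward a historical population and map-based
# crossover, applied to a stand-in objective (not an ELD model).
import numpy as np

rng = np.random.default_rng(3)
N, D, low, high = 30, 10, -10.0, 10.0
obj = lambda x: np.sum(x ** 2, axis=-1)      # stand-in objective

P = rng.uniform(low, high, (N, D))           # current population
oldP = rng.uniform(low, high, (N, D))        # historical population

for gen in range(300):
    if rng.random() < 0.5:                   # redefine history occasionally
        oldP = P.copy()
    oldP = oldP[rng.permutation(N)]
    F = 3.0 * rng.standard_normal()          # scale factor
    mutant = P + F * (oldP - P)              # mutation uses search history
    mask = rng.random((N, D)) < rng.random() # simplified crossover map
    trial = np.clip(np.where(mask, mutant, P), low, high)
    improved = obj(trial) < obj(P)           # greedy selection
    P[improved] = trial[improved]
print("best value:", obj(P).min())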


2018 ◽  
Vol 61 (1) ◽  
pp. 76-98 ◽  
Author(s):  
TING LI ◽  
ZHONG WAN

We propose a new adaptive and composite Barzilai–Borwein (BB) step size that integrates the advantages of existing BB step sizes. In particular, the proposed step size is an optimally weighted mean of two classical BB step sizes, with the weights updated at each iteration according to the quality of the classical steps. Combined with the steepest descent direction, the adaptive and composite BB step size yields an algorithm that is efficient for large-scale optimization problems. We prove that the developed algorithm is globally convergent and that it converges R-linearly when applied to strictly convex quadratic minimization problems. Compared with state-of-the-art algorithms in the literature, the proposed step size is more efficient at solving ill-posed or large-scale benchmark test problems.
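
The two classical BB step sizes and a composite weighted mean are easy to state in code. In the sketch below the weight tau is fixed purely for illustration, whereas the paper updates the weights adaptively at each iteration; the quadratic test problem is synthetic.

```python
# Sketch of a composite Barzilai-Borwein step on a strictly convex quadratic
# f(x) = 0.5 x'Qx - c'x, combining the two classical BB step sizes.
import numpy as np

rng = np.random.default_rng(4)
n = 50
M = rng.standard_normal((n, n)); Q = M @ M.T + np.eye(n)
c = rng.standard_normal(n)
grad = lambda x: Q @ x - c

x = np.zeros(n); g = grad(x); alpha, tau = 1e-3, 0.5
for k in range(500):
    x_new = x - alpha * g
    g_new = grad(x_new)
    s, y = x_new - x, g_new - g
    if np.linalg.norm(g_new) < 1e-10:
        x, g = x_new, g_new
        break
    bb1 = (s @ s) / (s @ y)              # long BB step
    bb2 = (s @ y) / (y @ y)              # short BB step
    alpha = tau * bb1 + (1 - tau) * bb2  # composite (weighted mean) step
    x, g = x_new, g_new
print("gradient norm:", np.linalg.norm(g))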


Author(s):  
Marwan Hafez ◽  
Khaled Ksaibati ◽  
Rebecca A. Atadero

Over the last decade, significant progress has been made in customizing the maintenance policies of low-volume roads (LVRs) to local needs and available resources. Low-cost treatments and surface repairs are extensively employed to reduce annual maintenance costs. The Colorado Department of Transportation (CDOT) uses chip seals and thin overlays as the treatment options applied to LVRs. However, the effectiveness of these treatments depends on the existing pavement condition, and some surface treatments and light rehabilitations provide only short-term effectiveness. Multi-year optimization techniques can supply decision makers with a set of optimal maintenance activities that achieve specific pavement performance targets. This study applies large-scale optimization to compare the current CDOT maintenance policy with an alternative strategy recommended for low-volume paved roads in Colorado. Genetic algorithms were applied in the optimization models because they can resolve the computational complexity of such optimization problems in a timely fashion. The optimized maintenance alternatives were comprehensively investigated for an LVR network in Colorado over a specific planning horizon, and the optimization constraints and limitations specific to LVRs are addressed in the problem formulation. The results of both the performance and cost analyses confirm the effectiveness of the proposed maintenance strategy compared with the existing one: the alternative policy provides much greater benefit-cost savings while preserving the overall pavement performance of the network. This approach is expected to be efficient in quantifying the mid- and long-term financial impact of different treatment policies applied to LVRs with modest resources.
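
As a drastically simplified, hypothetical illustration of GA-based treatment selection (collapsing the study's multi-year horizon to a single period), the sketch below evolves a per-segment choice among three actions under a budget; all costs, condition gains, and GA settings are invented.

```python
# Toy GA for treatment selection: 0 = no action, 1 = chip seal,
# 2 = thin overlay, chosen per road segment under a budget.
import numpy as np

rng = np.random.default_rng(5)
S, POP, BUDGET = 40, 60, 300.0
cost = np.array([0.0, 10.0, 25.0])       # hypothetical per-segment costs
gain = np.array([0.0, 3.0, 8.0])         # hypothetical condition gains

def fitness(plan):                       # penalize budget violations
    total = cost[plan].sum()
    return gain[plan].sum() - max(0.0, total - BUDGET) * 10.0

pop = rng.integers(0, 3, (POP, S))
for gen in range(200):
    fit = np.array([fitness(p) for p in pop])
    pop = pop[np.argsort(fit)[::-1]][:POP // 2]   # truncation selection
    kids = pop.copy()
    cut = rng.integers(1, S, len(kids))  # one-point crossover with a
    for i, k in enumerate(cut):          # random partner
        kids[i, k:] = pop[rng.integers(len(pop)), k:]
    mut = rng.random(kids.shape) < 0.02  # mutation
    kids[mut] = rng.integers(0, 3, mut.sum())
    pop = np.vstack([pop, kids])
print("best plan fitness:", max(fitness(p) for p in pop))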


2021 ◽  
Vol 2 (1) ◽  
pp. 1-29
Author(s):  
Hayato Ushijima-Mwesigwa ◽  
Ruslan Shaydulin ◽  
Christian F. A. Negre ◽  
Susan M. Mniszewski ◽  
Yuri Alexeev ◽  
...  

Emerging quantum processors provide an opportunity to explore new approaches for solving traditional problems in the post-Moore's-law supercomputing era. However, the limited number of qubits makes it infeasible to tackle massive real-world datasets directly in the near future, leading to new challenges in utilizing these quantum processors for practical purposes. Hybrid quantum-classical algorithms that leverage both quantum and classical devices are considered one of the main strategies for applying quantum computing to large-scale problems. In this article, we advocate the use of multilevel frameworks for combinatorial optimization as a promising general paradigm for designing hybrid quantum-classical algorithms. To demonstrate this approach, we apply it to two well-known combinatorial optimization problems: the Graph Partitioning Problem and the Community Detection Problem. We develop hybrid multilevel solvers with quantum local search on D-Wave's quantum annealer and IBM's gate-model quantum processor. We carry out experiments on graphs that are orders of magnitude larger than current quantum hardware and observe results comparable to state-of-the-art solvers in terms of solution quality. Reproducibility: our code and data are available at Reference [1].
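
A classical sketch of the multilevel paradigm is shown below: coarsen by random edge matching, solve the small coarsest problem (the stage where a quantum annealer or gate-model device could be invoked), then uncoarsen with local refinement. A greedy bit-flip heuristic with a balance penalty stands in for the paper's quantum local search, and the graph is synthetic.

```python
# Sketch of a multilevel scheme for balanced graph partitioning.
import random

random.seed(0)

def score(edges, part, lam=1.0):
    """Cut size plus a balance penalty; refinement greedily minimizes this."""
    cut = sum(part[u] != part[v] for u, v in edges)
    balance = abs(2 * sum(part.values()) - len(part))
    return cut + lam * balance

def coarsen(nodes, edges):
    """One level of random edge matching; returns the coarse graph and the
    fine-to-coarse projection."""
    shuffled = list(edges)
    random.shuffle(shuffled)
    matched, proj = set(), {}
    for u, v in shuffled:
        if u not in matched and v not in matched:
            matched |= {u, v}
            proj[v] = u                      # merge v into u
    for u in nodes:
        proj.setdefault(u, u)
    cedges = {tuple(sorted((proj[u], proj[v])))
              for u, v in edges if proj[u] != proj[v]}
    return sorted(set(proj.values())), list(cedges), proj

def refine(nodes, edges, part, passes=3):
    """Stand-in for the quantum local search: greedy improving bit flips."""
    for _ in range(passes):
        for u in nodes:
            before = score(edges, part)
            part[u] ^= 1
            if score(edges, part) > before:
                part[u] ^= 1                 # revert non-improving flip
    return part

n = 200
nodes = list(range(n))
edges = [(u, v) for u in nodes for v in nodes[u + 1:] if random.random() < 0.03]

levels = []
while len(nodes) > 16:                       # coarsening phase
    cnodes, cedges, proj = coarsen(nodes, edges)
    if len(cnodes) == len(nodes):
        break
    levels.append((nodes, edges, proj))
    nodes, edges = cnodes, cedges

part = {u: random.randint(0, 1) for u in nodes}  # coarsest-level "solve" --
part = refine(nodes, edges, part)                # a QPU could be called here

for fnodes, fedges, proj in reversed(levels):    # uncoarsen and refine
    part = {fu: part[proj[fu]] for fu in fnodes}
    part = refine(fnodes, fedges, part)
print("balanced-cut score on the original graph:", score(fedges, part))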


2021 ◽  
Author(s):  
Juan Rada-Vilela

Particle Swarm Optimization (PSO) is a metaheuristic in which a swarm of particles explores the search space of an optimization problem to find good solutions. However, if the problem is subject to noise, the quality of the resulting solutions deteriorates significantly. The literature has attributed such deterioration to particles suffering from inaccurate memories and from incorrect selection of their neighborhood best solutions. For both cases, the incorporation of noise mitigation mechanisms has improved the quality of the results, but the analyses beyond such improvements often fall short of empirical evidence supporting their claims in terms other than the quality of the results. Furthermore, there is no evidence showing the extent to which inaccurate memories and incorrect selection affect the particles in the swarm. Therefore, the performance of PSO on noisy optimization problems remains largely unexplored.

The overall goal of this thesis is to study the effect of noise on PSO beyond the known deterioration of its results, in order to develop more efficient noise mitigation mechanisms. Based on the allocation of function evaluations by the noise mitigation mechanisms, we distinguish three groups of PSO algorithms: single-evaluation algorithms, which sacrifice the accuracy of the objective values in favor of performing more iterations; resampling-based algorithms, which sacrifice iterations in favor of better estimates of the objective values; and hybrids, which merge methods from the previous two. With an empirical approach, we study and analyze the performance of existing and new PSO algorithms from each group on 20 large-scale benchmark functions subject to different levels of multiplicative Gaussian noise. Throughout the search process, we compute a set of 16 population statistics that measure different characteristics of the swarms and provide useful information that we utilize to design better PSO algorithms.

Our study identifies and defines deception, blindness and disorientation as three conditions from which particles suffer in noisy optimization problems. The population statistics for different PSO algorithms reveal that particles often suffer from large proportions of deception, blindness and disorientation, and show that reducing these three conditions would lead to better results. The sensitivity of PSO to noisy optimization problems is confirmed and highlights the importance of noise mitigation mechanisms.

The population statistics for single-evaluation PSO algorithms show that the commonly used evaporation mechanism produces too much disorientation, leading to divergent behaviour and to the worst results within the group. Two better algorithms are designed: the first utilizes probabilistic updates to reduce disorientation, and the second computes a centroid solution as the neighborhood best solution to reduce deception.

The population statistics for resampling-based PSO algorithms show that basic resampling still leads to large proportions of deception and blindness, and its results are the worst within the group. Two better algorithms are designed to reduce deception and blindness: the first provides better estimates of the personal best solutions, and the second provides even better estimates of a few solutions from which the neighborhood best solutions are selected. However, an existing PSO algorithm is the best within the group, as it strives to asymptotically minimize deception by sequentially reducing both blindness and disorientation.

The population statistics for hybrid PSO algorithms show that they provide the best results thanks to a combined reduction of deception, blindness and disorientation. Amongst the hybrids, we find a promising algorithm whose simplicity, flexibility and quality of results question the importance of overly complex methods designed to minimize deception. Overall, our research presents a thorough study of how to design, evaluate and tune PSO algorithms to address optimization problems subject to noise.
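
As one concrete example of the resampling-based group discussed in the thesis, the sketch below runs a basic PSO on a sphere function with multiplicative Gaussian noise and estimates every objective value as the mean of several re-evaluations; the coefficients and budgets are standard illustrative choices, not the thesis's configurations.

```python
# Minimal sketch of a resampling-based PSO on a noisy objective: each
# evaluation is the mean of `resamples` noisy measurements, trading
# iterations for better estimates.
import numpy as np

rng = np.random.default_rng(6)
D, N, resamples = 10, 20, 5

def noisy_sphere(x):                      # multiplicative Gaussian noise
    return float(np.sum(x ** 2) * (1.0 + 0.3 * rng.standard_normal()))

def estimate(x):                          # resampling mitigates blindness
    return np.mean([noisy_sphere(x) for _ in range(resamples)])

X = rng.uniform(-5, 5, (N, D)); V = np.zeros((N, D))
pbest, pval = X.copy(), np.array([estimate(x) for x in X])
g = pbest[pval.argmin()]

for it in range(100):
    r1, r2 = rng.random((N, D)), rng.random((N, D))
    V = 0.72 * V + 1.49 * r1 * (pbest - X) + 1.49 * r2 * (g - X)
    X = X + V
    vals = np.array([estimate(x) for x in X])
    better = vals < pval                  # memory update on estimated values
    pbest[better], pval[better] = X[better], vals[better]
    g = pbest[pval.argmin()]
print("estimated best:", pval.min())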


2022 ◽  
Author(s):  
Chnoor M. Rahman ◽  
Tarik A. Rashid ◽  
Abeer Alsadoon ◽  
Nebojsa Bacanin ◽  
Polla Fattah ◽  
...  

The dragonfly algorithm was developed in 2016. It is one of the algorithms that researchers have used to optimize an extensive range of applications in various areas, and at times it offers superior performance compared with the best-known optimization techniques. However, the algorithm faces several difficulties when applied to complex optimization problems. This work addresses the robustness of the method in solving real-world optimization issues and its deficiencies on complex optimization problems. This review paper presents a comprehensive investigation of the dragonfly algorithm in the engineering area. First, an overview of the algorithm is given. We then examine the modifications of the algorithm, covering both the forms merged with other techniques and the modifications made to improve its performance. Additionally, a survey of engineering applications that have used the dragonfly algorithm is offered, spanning mechanical engineering problems, electrical engineering problems, optimal parameter tuning, economic load dispatch, and loss reduction. The algorithm is tested and evaluated against the particle swarm optimization and firefly algorithms. To evaluate the ability of the dragonfly algorithm and the other participating algorithms, a set of traditional benchmark functions (TF1–TF23) was utilized; moreover, to examine its ability to optimize large-scale optimization problems, the CEC-C2019 benchmarks were utilized. A comparison is made between the algorithm and other metaheuristic techniques to show its ability to enhance various problems. The outcomes from previous works that utilized the dragonfly algorithm, together with the benchmark test results, show that, in comparison with the participating algorithms (GWO, PSO, and GA), the dragonfly algorithm exhibits excellent performance, especially for small- to intermediate-scale applications. Moreover, the limitations of the technique and some future works are presented. The authors conducted this research to help other researchers who want to study the algorithm and utilize it to optimize engineering problems.
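
For readers unfamiliar with the algorithm, the sketch below implements the canonical dragonfly step vector from Mirjalili's 2016 formulation, combining separation, alignment, cohesion, food attraction, and enemy distraction. For brevity every dragonfly neighbors all others and the behaviour weights are fixed, whereas the full algorithm adapts both during the run (and adds Lévy flights when a dragonfly has no neighbors).

```python
# Sketch of the canonical dragonfly update on a stand-in objective.
import numpy as np

rng = np.random.default_rng(7)
N, D = 20, 5
obj = lambda x: np.sum(x ** 2, axis=-1)     # stand-in objective
s_w, a_w, c_w, f_w, e_w, inertia = 0.1, 0.1, 0.7, 1.0, 0.1, 0.9

X = rng.uniform(-5, 5, (N, D)); dX = np.zeros((N, D))
for it in range(200):
    fit = obj(X)
    food, enemy = X[fit.argmin()], X[fit.argmax()]
    for i in range(N):
        others = np.delete(X, i, axis=0)
        S = -np.sum(X[i] - others, axis=0)        # separation
        A = np.mean(np.delete(dX, i, axis=0), 0)  # alignment
        C = np.mean(others, axis=0) - X[i]        # cohesion
        F = food - X[i]                           # food attraction
        E = enemy + X[i]                          # enemy distraction
        dX[i] = (s_w*S + a_w*A + c_w*C + f_w*F + e_w*E) + inertia * dX[i]
    X = np.clip(X + dX, -5, 5)
print("best objective:", obj(X).min())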


2000 ◽  
Vol 8 (3) ◽  
pp. 291-309 ◽  
Author(s):  
Alberto Bertoni ◽  
Marco Carpentieri ◽  
Paola Campadelli ◽  
Giuliano Grossi

In this paper, a genetic model based on the operations of recombination and mutation is studied and applied to combinatorial optimization problems. The results are twofold: the equations of the deterministic dynamics in the thermodynamic limit (infinite populations) are derived and, for sufficiently small mutation rates, the attractors are characterized; and a general approximation algorithm for combinatorial optimization problems is designed. The algorithm is applied to the Max Ek-Sat problem and the quality of its solutions is analyzed; it is proved optimal for k ≥ 3 with respect to worst-case analysis. For Max E3-Sat, the average-case performance is experimentally compared with other optimization techniques.
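
A toy recombination-and-mutation GA on a random Max-E3-Sat instance is sketched below, in the spirit of (but not identical to) the model studied; recall that a uniform random assignment already satisfies a 1 − 2⁻ᵏ fraction of clauses in expectation, which is the worst-case benchmark mentioned above.

```python
# Toy GA with uniform recombination and bit-flip mutation on random Max-E3-Sat.
import numpy as np

rng = np.random.default_rng(8)
n_vars, n_clauses, POP = 50, 300, 40
lits = rng.integers(0, n_vars, (n_clauses, 3))      # variable indices
signs = rng.integers(0, 2, (n_clauses, 3))          # 1 = negated literal

def satisfied(assign):                   # clauses with at least one true literal
    vals = assign[lits] ^ signs
    return np.sum(vals.any(axis=1))

pop = rng.integers(0, 2, (POP, n_vars))
for gen in range(300):
    fit = np.array([satisfied(p) for p in pop])
    parents = pop[np.argsort(fit)[::-1][:POP // 2]]
    i = rng.integers(0, len(parents), (POP // 2, 2))
    mask = rng.integers(0, 2, (POP // 2, n_vars)).astype(bool)
    kids = np.where(mask, parents[i[:, 0]], parents[i[:, 1]])  # recombination
    kids ^= (rng.random(kids.shape) < 0.01)                    # mutation
    pop = np.vstack([parents, kids])
best = max(satisfied(p) for p in pop)
print(f"satisfied {best}/{n_clauses} clauses (random baseline ~ {7*n_clauses//8})")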

