Convex underestimating relaxation techniques for nonconvex polynomial programming problems: computational overview

2015 ◽  
Vol 24 (3-4) ◽  
pp. 129-143
Author(s):  
André A. Keller

Abstract: This paper introduces the construction of convex relaxations for nonconvex optimization problems. Branch-and-bound algorithms are convex-relaxation-based techniques. Convex envelopes are important, as they are the uniformly best convex underestimators of nonconvex polynomials over a given region. The reformulation-linearization technique (RLT) generates linear programming (LP) relaxations of a quadratic problem. RLT operates in two steps: a reformulation step and a linearization (or convexification) step. In the reformulation step, numerous new pairwise products of the constraint and bound inequalities are generated. In the linearization step, each distinct quadratic term is replaced by a single new RLT variable. This RLT process produces an LP relaxation, and the LP-RLT yields a lower bound on the global minimum. Linear matrix inequality (LMI) formulations have been proposed to handle nonconvex sets efficiently. An LMI is equivalent to a system of polynomial inequalities, whose solution set is a convex semialgebraic set. The feasible sets are spectrahedra, which have curved faces, in contrast to the polyhedra of the LP case. Successive LMI relaxations of increasing size yield the global optimum. Nonlinear inequalities are converted to LMI form using Schur complements. Optimizing a nonconvex polynomial is then equivalent to a linear optimization problem over a convex set. Engineering applications include system analysis, control theory, combinatorial optimization, statistics, and structural design optimization.
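The two RLT steps can be made concrete on a tiny bilinear instance. The sketch below is not from the paper; it hand-builds the LP relaxation of minimizing x1*x2 - x1 over the unit box, where the reformulation step multiplies the bound factors pairwise and the linearization step substitutes a new RLT variable w for the product x1*x2.

```python
# Minimal RLT sketch (illustrative, not the paper's code): LP relaxation of
#   min  x1*x2 - x1   s.t.  0 <= x1 <= 1,  0 <= x2 <= 1.
# Reformulation: pairwise products of the bound factors,
#   x1*x2 >= 0, (1-x1)*(1-x2) >= 0, x1*(1-x2) >= 0, (1-x1)*x2 >= 0.
# Linearization: replace the quadratic term x1*x2 by the RLT variable w.
import numpy as np
from scipy.optimize import linprog

# Decision vector z = [x1, x2, w]; linearized objective is w - x1.
c = np.array([-1.0, 0.0, 1.0])

# Linearized product constraints, written as A_ub @ z <= b_ub:
#   -w <= 0,  x1 + x2 - w <= 1,  w - x1 <= 0,  w - x2 <= 0.
A_ub = np.array([[ 0.0,  0.0, -1.0],
                 [ 1.0,  1.0, -1.0],
                 [-1.0,  0.0,  1.0],
                 [ 0.0, -1.0,  1.0]])
b_ub = np.array([0.0, 1.0, 0.0, 0.0])

res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              bounds=[(0, 1), (0, 1), (None, None)])
print("LP-RLT lower bound on the global minimum:", res.fun)  # -1.0 here
```

For this instance the LP-RLT bound equals the true minimum of -1, because the linearized product constraints (the McCormick inequalities) are tight at the vertices of the box; in general the bound is only an underestimate, which branch-and-bound then refines.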

Real-world engineering optimization problems rely on complex computational methods such as finite element frameworks. These approaches are computationally costly and require long solution times. This work focuses on finding the optimal solution to such complex engineering problems by using Surrogate Models (SMs). SMs are mathematical models used to minimize the number of costly function evaluations during the optimization cycles. Instead of optimizing the Design Space (DS) as a whole, sub-region-based strategies have proven effective, especially when prior knowledge of the optimal solution is unavailable. In the present work, a surrogate-centered optimization scheme is presented for local search; it dynamically subdivides the DS into an optimal number of sub-regions by choosing the best cluster-evaluation technique, followed by the selection of the best mixture SM for each optimization cycle. For all objective and constraint functions in every sub-region, the mixture SMs are created by combining two or more single SMs. MATSuMoTo, the MATLAB-based SM toolbox by Juliane Muller and Robert Piché, has been adapted for the creation and selection of the best mixture SM. In this method, individual surrogates are combined using Dempster-Shafer theory (DST). Besides this local search, a global search module is also introduced to ensure faster convergence. The approach is tested on a constrained optimization benchmark problem with small, disconnected feasible regions, and the proposed algorithm accurately located all local and global optima with a minimal number of function evaluations. The approach is then applied to engineering problems such as the optimization of a Machine Tool Spindle (MTS) design and a frontal crash simulation on a full car body. For these engineering problems as well, the mixture-SM-based sub-region search strategy attains the most accurate global optimum solution with a minimal number of costly function evaluations.
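The sub-region workflow can be pictured with a toy stand-in. The sketch below is not the MATSuMoTo/DST pipeline of the paper: it clusters the sampled designs with scikit-learn's k-means, fits one plain RBF surrogate per sub-region (where the paper would build a DST-weighted mixture), and proposes each region's surrogate minimizer as the next costly evaluation. The objective `expensive_objective`, the cluster count `k`, and all sample sizes are illustrative assumptions.

```python
# Hedged sketch of a sub-region surrogate search (not the paper's code):
# cluster the sampled designs, fit one surrogate per sub-region, and propose
# the surrogate minimizer of each region for the next expensive evaluation.
import numpy as np
from scipy.interpolate import RBFInterpolator
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

def expensive_objective(X):          # stand-in for an FEM simulation
    return np.sum((X - 0.3) ** 2, axis=1) + 0.1 * np.sin(10 * X[:, 0])

X = rng.uniform(0.0, 1.0, size=(40, 2))      # initial costly evaluations
y = expensive_objective(X)

k = 3                                         # number of sub-regions (assumed)
labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)

candidates = []
for j in range(k):                            # one surrogate per sub-region
    Xj, yj = X[labels == j], y[labels == j]
    sm = RBFInterpolator(Xj, yj, kernel="thin_plate_spline")
    # cheap inner search: dense random sampling inside the region's box
    box_lo, box_hi = Xj.min(axis=0), Xj.max(axis=0)
    trial = rng.uniform(box_lo, box_hi, size=(2000, 2))
    candidates.append(trial[np.argmin(sm(trial))])

# evaluate the per-region proposals with the costly function, keep the best
candidates = np.array(candidates)
best = candidates[np.argmin(expensive_objective(candidates))]
print("next incumbent design:", best)
```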


2020 ◽  
Author(s):  
Alberto Bemporad ◽  
Dario Piga

Abstract: This paper proposes a method for solving optimization problems in which the decision maker cannot evaluate the objective function, but rather can only express a preference such as “this is better than that” between two candidate decision vectors. The algorithm described in this paper aims at reaching the global optimizer by iteratively proposing to the decision maker a new comparison to make, based on actively learning a surrogate of the latent (unknown and perhaps unquantifiable) objective function from past sampled decision vectors and pairwise preferences. A radial-basis-function surrogate is fit via linear or quadratic programming, satisfying, if possible, the preferences expressed by the decision maker on existing samples. The surrogate is used to propose a new sample of the decision vector for comparison with the current best candidate based on two possible criteria: minimize a combination of the surrogate and an inverse distance weighting function to balance exploitation of the surrogate and exploration of the decision space, or maximize a function related to the probability that the new candidate will be preferred. Compared to active preference learning based on Bayesian optimization, we show that our approach is competitive in that, within the same number of comparisons, it usually approaches the global optimum more closely and is computationally lighter. Applications of the proposed algorithm to solving a set of benchmark global optimization problems, to multi-objective optimization, and to the optimal tuning of a cost-sensitive neural network classifier for object recognition from images are described in the paper. MATLAB and Python implementations of the algorithms described in the paper are available at http://cse.lab.imtlucca.it/~bemporad/glis.
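The surrogate-fitting step can be illustrated with a small linear program. The sketch below is a hedged toy version, not the GLIS code: it fits a Gaussian RBF surrogate whose values respect each stated preference by at least a margin `sigma`, with slack variables absorbing unsatisfiable preferences; the sample data, kernel width `gamma`, and regularization weight `lam` are all illustrative assumptions.

```python
# Hedged sketch of fitting a preference-consistent RBF surrogate by LP,
# in the spirit of the paper's approach (not its implementation).
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
X = rng.uniform(-2, 2, size=(6, 1))             # sampled decision vectors
prefs = [(0, 1), (2, 1), (2, 3), (4, 5)]        # (i, j): x_i preferred to x_j

gamma, sigma, lam = 1.0, 1.0, 1e-3              # RBF width, margin, reg. weight
D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
Phi = np.exp(-(gamma * D) ** 2)                 # n x n Gaussian kernel matrix

n, m = len(X), len(prefs)
# decision vector z = [beta (n), t (n) for |beta|, eps (m) slacks]
c = np.concatenate([np.zeros(n), lam * np.ones(n), np.ones(m)])

A, b = [], []
for p, (i, j) in enumerate(prefs):              # f(x_i) <= f(x_j) - sigma + eps_p
    row = np.zeros(2 * n + m)
    row[:n] = Phi[i] - Phi[j]
    row[2 * n + p] = -1.0
    A.append(row); b.append(-sigma)
for q in range(n):                              # |beta_q| <= t_q
    r1 = np.zeros(2 * n + m); r1[q], r1[n + q] = 1.0, -1.0
    r2 = np.zeros(2 * n + m); r2[q], r2[n + q] = -1.0, -1.0
    A.append(r1); b.append(0.0); A.append(r2); b.append(0.0)

bounds = [(None, None)] * n + [(0, None)] * n + [(0, None)] * m
res = linprog(c, A_ub=np.array(A), b_ub=np.array(b), bounds=bounds)
beta = res.x[:n]
surrogate = lambda x: np.exp(-(gamma * np.linalg.norm(x - X, axis=1)) ** 2) @ beta
print("surrogate values at samples:", Phi @ beta)
print("surrogate at the origin:", surrogate(np.zeros(1)))
```

Minimizing such a surrogate plus an exploration term would then yield the next candidate to show the decision maker.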


2021 ◽  
Vol 12 (4) ◽  
pp. 98-116
Author(s):  
Noureddine Boukhari ◽  
Fatima Debbat ◽  
Nicolas Monmarché ◽  
Mohamed Slimane

Evolution strategies (ES) are a family of powerful stochastic methods for global optimization and have proven more capable of avoiding local optima than many other optimization methods. Many researchers have investigated different versions of the original evolution strategy with good results on a variety of optimization problems. However, the convergence rate of the algorithm toward the global optimum remains asymptotic. In order to accelerate the convergence rate, a hybrid approach is proposed that uses the nonlinear simplex method (Nelder-Mead) and an adaptive scheme to control the application of the local search, and the authors demonstrate that such a combination yields significantly better convergence. The proposed method has been tested on 15 complex benchmark functions, applied to the bi-objective portfolio optimization problem, and compared with other state-of-the-art techniques. Experimental results show that the hybridization improves performance in terms of both solution quality and convergence speed.
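The hybridization can be sketched in a few lines. The code below is an illustrative stand-in, not the authors' algorithm: a plain (mu, lambda) evolution strategy occasionally hands its best individual to scipy's Nelder-Mead simplex, and the probability of invoking the local search adapts to its observed success. The Rastrigin objective and all parameter values are assumptions.

```python
# Hedged sketch of an ES + Nelder-Mead hybrid with an adaptive
# local-search probability (illustrative only).
import numpy as np
from scipy.optimize import minimize

def rastrigin(x):
    return 10 * len(x) + np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x))

rng = np.random.default_rng(2)
dim, mu, lam, step = 5, 5, 20, 0.5
p_ls = 0.2                                 # adaptive local-search probability
parents = rng.uniform(-5, 5, size=(mu, dim))

for gen in range(100):
    # (mu, lambda)-ES: each offspring mutates a random parent
    offspring = (parents[rng.integers(mu, size=lam)]
                 + step * rng.standard_normal((lam, dim)))
    fitness = np.apply_along_axis(rastrigin, 1, offspring)
    order = np.argsort(fitness)
    parents = offspring[order[:mu]]

    if rng.random() < p_ls:                # occasional Nelder-Mead refinement
        res = minimize(rastrigin, parents[0], method="Nelder-Mead",
                       options={"maxiter": 50})
        if res.fun < fitness[order[0]]:
            parents[0] = res.x
            p_ls = min(0.5, p_ls * 1.2)    # reward successful local search
        else:
            p_ls = max(0.05, p_ls * 0.8)   # otherwise apply it less often

print("best found:", rastrigin(parents[0]))
```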


2021 ◽  
Author(s):  
Zuanjia Xie ◽  
Chunliang Zhang ◽  
Haibin Ouyang ◽  
Steven Li ◽  
Liqun Gao

Abstract: The Jaya algorithm is an advanced optimization algorithm that has been applied to many real-world optimization problems and performs well in several optimization fields. However, its exploration capability is limited. In order to enhance the exploration capability of the Jaya algorithm, a self-adaptive commensal-learning-based Jaya algorithm with multiple populations (Jaya-SCLMP) is presented in this paper. In Jaya-SCLMP, a commensal learning strategy is used to increase the probability of finding the global optimum, in which each individual's historical best and worst information is used to explore new solution areas. Moreover, a multi-population strategy based on a Gaussian distribution scheme and a learning dictionary is used to further enhance exploration: every sub-population employs three Gaussian distributions at each generation, and roulette-wheel selection chooses among the schemes based on the learning dictionary. The performance of Jaya-SCLMP is evaluated on the 28 CEC 2013 unconstrained benchmark problems. In addition, three reliability problems are selected: a complex (bridge) system, a series system, and a series-parallel system. Compared with several Jaya variants and other state-of-the-art algorithms, the experimental results reveal that Jaya-SCLMP is effective.
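For context, the canonical Jaya update on which Jaya-SCLMP builds is compact enough to state in code. The sketch below implements plain Jaya only; the commensal-learning, multi-population, Gaussian-scheme, and roulette-wheel components of the paper are omitted, and the sphere objective and population sizes are illustrative assumptions.

```python
# Hedged sketch of the basic Jaya update (not the Jaya-SCLMP algorithm):
# each solution moves toward the population's best and away from its worst.
import numpy as np

def sphere(x):
    return np.sum(x ** 2)

rng = np.random.default_rng(3)
pop = rng.uniform(-10, 10, size=(20, 5))        # 20 candidates, 5 variables

for it in range(200):
    f = np.apply_along_axis(sphere, 1, pop)
    best, worst = pop[np.argmin(f)], pop[np.argmax(f)]
    r1, r2 = rng.random(pop.shape), rng.random(pop.shape)
    # canonical Jaya move: attract to the best, repel from the worst
    trial = pop + r1 * (best - np.abs(pop)) - r2 * (worst - np.abs(pop))
    trial = np.clip(trial, -10, 10)
    # greedy selection: keep a trial only if it improves its parent
    improved = np.apply_along_axis(sphere, 1, trial) < f
    pop[improved] = trial[improved]

print("best objective:", np.apply_along_axis(sphere, 1, pop).min())
```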


2016 ◽  
pp. 450-475
Author(s):  
Dipti Singh ◽  
Kusum Deep

Due to their wide applicability and easy implementation, genetic algorithms (GAs) are preferred over other techniques for solving many optimization problems. When a local search (LS) is included in a genetic algorithm, it is known as a memetic algorithm (MA). In this chapter, a new variant of a single-meme memetic algorithm is proposed to improve the efficiency of the GA. Though GAs are efficient at finding the global optimum of nonlinear optimization problems, they usually converge slowly and sometimes suffer from premature convergence. LS algorithms, on the other hand, are fast but poor global searchers. To exploit the good qualities of both techniques, they are combined so that the maximum benefits of both approaches are reaped: the population of individuals evolves using the GA, and LS is then applied to obtain the optimal solution. To validate these claims, the method is tested on five benchmark problems of dimensions 10, 30, and 50, and a comparison between the GA and the MA is made.
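A hedged sketch of this evolve-then-refine pattern follows; it is not the chapter's exact algorithm. A plain GA with tournament selection, arithmetic crossover, and Gaussian mutation evolves the population, and the best individual is then polished with Nelder-Mead. The Griewank objective and all parameter settings are illustrative assumptions.

```python
# Hedged sketch of a single-meme memetic algorithm: GA global phase,
# then a local search applied to the best individual (illustrative only).
import numpy as np
from scipy.optimize import minimize

def griewank(x):
    i = np.arange(1, len(x) + 1)
    return 1 + np.sum(x ** 2) / 4000 - np.prod(np.cos(x / np.sqrt(i)))

rng = np.random.default_rng(4)
pop = rng.uniform(-600, 600, size=(30, 10))

for gen in range(150):                           # plain GA phase
    f = np.apply_along_axis(griewank, 1, pop)
    # binary tournament selection
    idx = rng.integers(len(pop), size=(len(pop), 2))
    winners = np.where((f[idx[:, 0]] < f[idx[:, 1]])[:, None],
                       pop[idx[:, 0]], pop[idx[:, 1]])
    # arithmetic crossover with a shuffled mate, then Gaussian mutation
    mates = winners[rng.permutation(len(winners))]
    alpha = rng.random((len(winners), 1))
    pop = alpha * winners + (1 - alpha) * mates
    pop += rng.standard_normal(pop.shape)
    pop = np.clip(pop, -600, 600)

best = pop[np.argmin(np.apply_along_axis(griewank, 1, pop))]
res = minimize(griewank, best, method="Nelder-Mead")   # memetic LS phase
print("GA best after local-search polish:", res.fun)
```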


2020 ◽  
Vol 48 (4) ◽  
pp. 633-659
Author(s):  
Daniel Bankmann ◽  
Volker Mehrmann ◽  
Yurii Nesterov ◽  
Paul Van Dooren

Abstract: In this paper, formulas are derived for the analytic center of the solution set of linear matrix inequalities (LMIs) defining passive transfer functions. The algebraic Riccati equations that are usually associated with such systems are related to boundary points of the convex set defined by the solution set of the LMI. It is shown that the analytic center is described by closely related matrix equations, and their properties are analyzed for continuous- and discrete-time systems. Numerical methods are derived to solve these equations via steepest descent and Newton methods. It is also shown that the analytic center has nice robustness properties when it is used to represent passive systems. The results are illustrated by numerical examples.
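For orientation, the continuous-time setup can be written in the usual Kalman-Yakubovich-Popov (KYP) notation; the formulation below is the standard one and is assumed here rather than quoted from the paper.

```latex
% Standard KYP characterization of passivity for a system (A,B,C,D)
% (generic notation, assumed rather than quoted from the paper):
\[
  W(X) \;=\;
  \begin{pmatrix}
    -A^{\mathsf T} X - X A & C^{\mathsf T} - X B \\
    C - B^{\mathsf T} X    & D + D^{\mathsf T}
  \end{pmatrix} \succeq 0,
  \qquad X = X^{\mathsf T} \succ 0.
\]
% The analytic center is the maximizer of the log-determinant barrier
% over the interior of this solution set:
\[
  X_\star \;=\; \arg\max_{W(X) \succ 0} \; \log \det W(X),
\]
% whereas solutions with \det W(X) = 0 lie on the boundary and correspond
% to solutions of the associated algebraic Riccati equation.
```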


Author(s):  
T. Yu

Modularity is widely used in system analysis and design, for example in complex engineering products and their organization, and modularity is also the key to solving optimization problems efficiently via problem decomposition. We first discover modularity in a system and then leverage this knowledge to improve its performance. In this chapter, we tackle both problems by allying organizational theory with evolutionary computation. First, we cluster the dependency structure matrix (DSM) of a system using a simple genetic algorithm (GA) and an information-theoretic metric. Then we design a better GA through decomposition of the optimization problem using the proposed DSM clustering method.
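A toy version of the first step can be sketched as follows; this is not the chapter's method, and its information-theoretic metric is replaced here by a simple mismatch cost. A chromosome assigns each component to a module, and a small GA searches for the assignment that best matches the DSM's dependency pattern; the DSM, module count, and GA settings are illustrative assumptions.

```python
# Hedged toy sketch of GA-based DSM clustering: fitness counts dependencies
# that cross module boundaries plus missing links inside modules.
import numpy as np

rng = np.random.default_rng(5)
n, k = 8, 2
# toy dependency structure matrix: two hidden 4-component modules
dsm = np.zeros((n, n), dtype=int)
dsm[:4, :4] = 1; dsm[4:, 4:] = 1; np.fill_diagonal(dsm, 0)

def cost(assign):
    same = assign[:, None] == assign[None, :]
    # penalize cross-module dependencies and absent links inside modules
    return np.sum(dsm * ~same) + 0.1 * np.sum((1 - dsm) * same)

pop = rng.integers(k, size=(40, n))
for gen in range(100):
    f = np.array([cost(a) for a in pop])
    elite = pop[np.argsort(f)[:20]]                # truncation selection
    # uniform crossover: each gene drawn from a random elite parent
    children = elite[rng.integers(20, size=(20, n)), np.arange(n)]
    mut = rng.random(children.shape) < 0.05        # random-reset mutation
    children[mut] = rng.integers(k, size=mut.sum())
    pop = np.vstack([elite, children])

best = pop[np.argmin([cost(a) for a in pop])]
print("module assignment:", best)
```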


2020 ◽  
Vol 10 (17) ◽  
pp. 5859
Author(s):  
Josep Rubió-Massegú ◽  
Francisco Palacios-Quiñonero ◽  
Josep M. Rossell ◽  
Hamid Reza Karimi

In vibration control of compound structures, inter-substructure damper (ISSD) systems exploit the out-of-phase response of different substructures to dissipate the kinetic vibrational energy by means of inter-substructure damping links. For seismic protection of multistory buildings, distributed sets of interstory fluid viscous dampers (FVDs) are ISSD systems of particular interest. The connections between distributed FVD systems and decentralized static output-feedback control allow using advanced controller-design methodologies to obtain passive ISSD systems with high-performance characteristics. A major issue of that approach is the computational difficulty associated with the numerical solution of optimization problems with structured bilinear matrix inequality constraints. In this work, we present a novel iterative linear matrix inequality procedure that can be applied to obtain enhanced suboptimal solutions for that kind of optimization problem. To demonstrate the effectiveness of the proposed methodology, we design a system of supplementary interstory FVDs for the seismic protection of a five-story building by synthesizing a decentralized static velocity-feedback H∞ controller. In the performance assessment, we compare the frequency-domain and time-domain responses of the designed FVD system with the behavior of the optimal static state-feedback H∞ controller. The obtained results indicate that the proposed approach allows designing passive ISSD systems that are capable of matching the level of performance attained by optimal state-feedback active controllers.
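The source of the computational difficulty can be seen in generic notation (assumed here, not quoted from the paper): closing the loop with a static output-feedback gain places a product of decision variables inside the bounded-real LMI.

```latex
% Generic static output-feedback H-infinity condition (assumed notation):
% plant \dot{x} = A x + B_u u + B_w w, performance output z = C_z x + D w,
% measurement y = C_y x, and static feedback u = K y. The bounded-real
% condition for an H-infinity bound \gamma reads
\[
  \begin{pmatrix}
    P A_K + A_K^{\mathsf T} P & P B_w & C_z^{\mathsf T} \\
    B_w^{\mathsf T} P & -\gamma I & D^{\mathsf T} \\
    C_z & D & -\gamma I
  \end{pmatrix} \prec 0,
  \qquad A_K = A + B_u K C_y,\; P = P^{\mathsf T} \succ 0.
\]
% The product P B_u K C_y makes the constraint bilinear in (P, K); iterative
% LMI schemes alternate between the variables, fixing one so that each step
% reduces to a convex LMI problem.
```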

