Global Optimization Based on Weighting-Integral Expected Improvement

2012 ◽  
Vol 630 ◽  
pp. 383-388
Author(s):  
Zheng Li ◽  
Xi Cheng Wang

Balancing global exploration and local exploitation has received particular attention in global optimization algorithms. In this paper, an infill sampling criterion named weighting-integral expected improvement is proposed on the basis of the Kriging model; it provides high flexibility in balancing the scope of the search. Coupled with this criterion, a strategy is proposed in which, at each iteration, the infill sample point is selected according to the urgency of each search scope. Two mathematical functions and one engineering problem are used to test the method. The numerical experiments show that it is highly efficient at finding global optimum solutions.
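
As background for the criterion above, here is a minimal sketch of the standard expected improvement (EI) acquisition that such weighting-integral variants build on, assuming a fitted Gaussian-process (Kriging) surrogate and a minimization problem; the weighting-integral extension itself is not reproduced here.

```python
# Minimal sketch of the standard expected improvement (EI) criterion on a
# Kriging/Gaussian-process surrogate (not the paper's weighted-integral variant).
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor

def expected_improvement(X_cand, gp: GaussianProcessRegressor, f_best: float):
    """EI(x) = (f_best - mu) * Phi(z) + sigma * phi(z), with z = (f_best - mu) / sigma."""
    mu, sigma = gp.predict(X_cand, return_std=True)
    sigma = np.maximum(sigma, 1e-12)          # avoid division by zero at sampled points
    z = (f_best - mu) / sigma
    return (f_best - mu) * norm.cdf(z) + sigma * norm.pdf(z)
```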

2013 ◽  
Vol 2013 ◽  
pp. 1-7 ◽  
Author(s):  
Yuelin Gao ◽  
Siqiao Jin

We equivalently transform the sum-of-linear-ratios programming problem into a bilinear programming problem. Then, using the linearity of the convex and concave envelopes of a two-variable product function, a linear relaxation of the bilinear programming problem is constructed, which yields a lower bound on the optimal value of the original problem. On this basis, a branch-and-bound algorithm for the sum-of-linear-ratios programming problem is put forward, and its convergence is proved. Numerical experiments are reported to show the effectiveness of the proposed algorithm.
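
For reference, the linear (McCormick) convex and concave envelopes of a bilinear term that such relaxations typically rely on, written for a product w = xy over the box [x^L, x^U] x [y^L, y^U]; the symbols are illustrative, and the paper's exact relaxation may differ in details.

```latex
% Convex/concave envelopes of w = xy on [x^L, x^U] \times [y^L, y^U] (McCormick).
\begin{align}
w &\ge x^{L} y + x\, y^{L} - x^{L} y^{L}, &
w &\ge x^{U} y + x\, y^{U} - x^{U} y^{U}, \\
w &\le x^{U} y + x\, y^{L} - x^{U} y^{L}, &
w &\le x^{L} y + x\, y^{U} - x^{L} y^{U}.
\end{align}
```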


2015 ◽  
Vol 2015 ◽  
pp. 1-15 ◽  
Author(s):  
Chang Luo ◽  
Koji Shimoyama ◽  
Shigeru Obayashi

The many-objective optimization performance of the Kriging-surrogate-based evolutionary algorithm (EA), which maximizes expected hypervolume improvement (EHVI) for updating the Kriging model, is investigated in this paper and compared with the performance obtained using the expected improvement (EI) and estimation (EST) updating criteria. Numerical experiments are conducted on 3- to 15-objective DTLZ1-7 problems. In the experiments, an exact hypervolume calculation algorithm is used for problems with fewer than six objectives, while an approximate hypervolume calculation algorithm based on Monte Carlo sampling is adopted for problems with more objectives. The results indicate that, in the unconstrained case, EHVI is a highly competitive updating criterion for Kriging-model- and EA-based many-objective optimization, especially when the test problem is complex and the number of objectives or design variables is large.
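
A minimal sketch of the Monte Carlo hypervolume approximation idea mentioned above, assuming minimization and a known reference point; the sampler, sample budget, and estimator used in the study are not specified here.

```python
# Monte Carlo approximation of the hypervolume dominated by a nondominated front
# (minimization), sketching the approximate calculation used for many objectives.
import numpy as np

def hypervolume_mc(front: np.ndarray, ref: np.ndarray, n_samples: int = 100_000,
                   seed: int = 0) -> float:
    """front: (n_points, n_obj) nondominated objective vectors; ref: (n_obj,) reference point."""
    rng = np.random.default_rng(seed)
    lower = front.min(axis=0)                      # sampling box [lower, ref]
    samples = rng.uniform(lower, ref, size=(n_samples, front.shape[1]))
    # A sample is dominated if some front point is <= it in every objective.
    dominated = (front[None, :, :] <= samples[:, None, :]).all(axis=2).any(axis=1)
    box_volume = np.prod(ref - lower)
    return box_volume * dominated.mean()
```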


2021 ◽  
Vol 13 (19) ◽  
pp. 10645
Author(s):  
Xiaodong Song ◽  
Mingyang Li ◽  
Zhitao Li ◽  
Fang Liu

Public transit has a large influence on cities, especially against the background of COVID-19, and simulation-based optimization (SO) is an effective way to study how to improve its performance. Global optimization based on Kriging (KGO) is an efficient method for SO; to this end, this paper proposes a Kriging-based global optimization using a multi-point infill sampling criterion. In each iteration, the criterion obtains multiple new design points for updating the Kriging model by solving a constructed multi-objective optimization problem. Typical low-dimensional and high-dimensional nonlinear functions, and an SO problem based on bus line 445 in Beijing, are then employed to test the performance of the algorithm. Compared with KGO based on the well-known single-point expected improvement (EI) criterion and with particle swarm optimization (PSO), the method obtains better solutions in the same amount of time or less. The proposed algorithm therefore shows better optimization performance and may be better suited to the difficult and expensive simulation problems arising in real-world traffic applications.
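
A minimal sketch of one way a multi-point infill criterion can be realized, assuming the two infill objectives are the surrogate's predicted mean and its predictive uncertainty, and using a simple non-dominated filter as a stand-in for the paper's multi-objective solver; the actual objectives and solver in the paper may differ.

```python
# Sketch: pick a batch of infill points as the non-dominated set of candidates
# under two objectives, (predicted mean, -predicted std). Hypothetical stand-in
# for the paper's multi-objective infill subproblem.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def multi_point_infill(gp: GaussianProcessRegressor, candidates: np.ndarray,
                       batch_size: int = 5) -> np.ndarray:
    mu, std = gp.predict(candidates, return_std=True)
    objs = np.column_stack([mu, -std])            # minimize mean, maximize uncertainty
    keep = []
    for i, f in enumerate(objs):                  # non-dominated filter (both minimized)
        dominated = np.any(np.all(objs <= f, axis=1) & np.any(objs < f, axis=1))
        if not dominated:
            keep.append(i)
    keep = np.array(keep)
    order = np.argsort(mu[keep])                  # prefer candidates with best predicted mean
    return candidates[keep[order][:batch_size]]
```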


2018 ◽  
Vol 108 (07-08) ◽  
pp. 499-505
Author(s):  
V. Dejkun ◽  
I. Lorenz ◽  
S. Mischliwski ◽  
E. Abele

A global optimization algorithm is used to determine the milling process parameters for producing zirconium dioxide workpieces with minimal surface roughness. One application is the reduction of manual post-processing of milled dental prostheses. Virtual experiments based on quadratic regression were carried out to tune the algorithm parameters. They showed that the "Simulated Annealing" algorithm can find, within a few iterations, process parameters matching the optima reported in the literature and obtained from an analysis of the solution space.

The focus of this work is on the minimization of surface roughness to reduce post-processing in the milling of dental prostheses containing zirconia. The milling parameters are determined by a global optimization algorithm. The settings for the algorithm parameters, determined in previous studies, are based on quadratic regression of the solution space and succeed in finding a global optimum close to that found in the literature or by analyzing the solution space.
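
A minimal simulated-annealing sketch for tuning two milling parameters against a roughness model; the objective `roughness_model`, the parameter bounds, and the cooling schedule are all hypothetical placeholders rather than values from the study.

```python
# Sketch of simulated annealing over two milling parameters (feed rate, spindle speed).
# The roughness model, bounds, and schedule below are illustrative placeholders only.
import math
import random

def roughness_model(feed, speed):
    # Hypothetical smooth surrogate for surface roughness; not the paper's regression model.
    return (feed - 0.12) ** 2 + 0.5 * (speed / 30000 - 0.8) ** 2

def simulated_annealing(bounds, n_iter=200, t0=1.0, cooling=0.98, seed=0):
    rng = random.Random(seed)
    x = [rng.uniform(lo, hi) for lo, hi in bounds]
    best, best_f, t = x[:], roughness_model(*x), t0
    for _ in range(n_iter):
        cand = [min(max(xi + rng.gauss(0, 0.05 * (hi - lo)), lo), hi)
                for xi, (lo, hi) in zip(x, bounds)]
        delta = roughness_model(*cand) - roughness_model(*x)
        if delta < 0 or rng.random() < math.exp(-delta / t):   # accept downhill or with prob.
            x = cand
            if roughness_model(*x) < best_f:
                best, best_f = x[:], roughness_model(*x)
        t *= cooling
    return best, best_f

print(simulated_annealing([(0.05, 0.3), (10000, 40000)]))
```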


2020 ◽  
Author(s):  
Alberto Bemporad ◽  
Dario Piga

This paper proposes a method for solving optimization problems in which the decision maker cannot evaluate the objective function, but rather can only express a preference such as “this is better than that” between two candidate decision vectors. The algorithm described in this paper aims at reaching the global optimizer by iteratively proposing to the decision maker a new comparison to make, based on actively learning a surrogate of the latent (unknown and perhaps unquantifiable) objective function from past sampled decision vectors and pairwise preferences. A radial-basis-function surrogate is fit via linear or quadratic programming, satisfying, if possible, the preferences expressed by the decision maker on existing samples. The surrogate is used to propose a new sample of the decision vector for comparison with the current best candidate, based on two possible criteria: minimize a combination of the surrogate and an inverse distance weighting function, to balance exploitation of the surrogate and exploration of the decision space, or maximize a function related to the probability that the new candidate will be preferred. Compared to active preference learning based on Bayesian optimization, we show that our approach is competitive in that, within the same number of comparisons, it usually approaches the global optimum more closely and is computationally lighter. Applications of the proposed algorithm to a set of benchmark global optimization problems, to multi-objective optimization, and to the optimal tuning of a cost-sensitive neural network classifier for object recognition from images are described in the paper. MATLAB and Python implementations of the algorithms described in the paper are available at http://cse.lab.imtlucca.it/~bemporad/glis.
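
A minimal sketch of the exploration-exploitation acquisition described above, combining an RBF surrogate with an inverse-distance-weighting (IDW) exploration term; the LP/QP fit of the surrogate to pairwise preferences is omitted, and the weight `delta`, the kernel, and the exact form of the exploration function are illustrative assumptions rather than the paper's settings.

```python
# Sketch: acquisition(x) = surrogate(x) - delta * IDW exploration term, to be minimized.
# The surrogate weights here are arbitrary; in the paper they come from an LP/QP fit
# to pairwise preferences, which is not reproduced.
import numpy as np

def rbf_surrogate(x, centers, weights, eps=1.0):
    r2 = np.sum((centers - x) ** 2, axis=1)
    return weights @ (1.0 / (1.0 + eps * r2))     # inverse-quadratic RBF

def idw_exploration(x, samples):
    d2 = np.sum((samples - x) ** 2, axis=1)
    if np.any(d2 == 0.0):
        return 0.0                                 # no exploration value at sampled points
    return (2.0 / np.pi) * np.arctan(1.0 / np.sum(1.0 / d2))   # grows far from samples

def acquisition(x, samples, weights, delta=2.0):
    # Trade off exploiting the surrogate against exploring unsampled regions.
    return rbf_surrogate(x, samples, weights) - delta * idw_exploration(x, samples)
```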


Algorithms ◽  
2021 ◽  
Vol 14 (5) ◽  
pp. 146
Author(s):  
Aleksei Vakhnin ◽  
Evgenii Sopov

Modern real-valued optimization problems are complex and high-dimensional, and they are known as large-scale global optimization (LSGO) problems. Classic evolutionary algorithms (EAs) perform poorly on this class of problems because of the curse of dimensionality. Cooperative Coevolution (CC) is a high-performing framework that decomposes large-scale problems into smaller and easier subproblems by grouping the objective variables. The efficiency of CC strongly depends on the group size and on the grouping approach. In this study, an improved CC (iCC) approach for solving LSGO problems is proposed and investigated. iCC changes the number of variables in the subcomponents dynamically during the optimization process. The SHADE algorithm is used as the subcomponent optimizer. We have investigated the performance of iCC-SHADE and CC-SHADE on fifteen problems from the LSGO CEC’13 benchmark set provided by the IEEE Congress on Evolutionary Computation. The results of numerical experiments show that iCC-SHADE outperforms, on average, CC-SHADE with a fixed number of subcomponents. We have also compared iCC-SHADE with some state-of-the-art LSGO metaheuristics; the experimental results show that the proposed algorithm is competitive with other efficient metaheuristics.
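
A minimal sketch of the cooperative-coevolution decomposition idea described above, assuming random variable grouping and a placeholder subcomponent optimizer (`optimize_subcomponent`) standing in for SHADE; the dynamic regrouping of iCC is not shown.

```python
# Sketch of one cooperative-coevolution cycle: split the variables into groups and
# optimize each group while the rest of the context vector stays fixed.
# optimize_subcomponent is a hypothetical stand-in for SHADE.
import numpy as np

def cc_cycle(f, context: np.ndarray, group_size: int, optimize_subcomponent,
             rng: np.random.Generator) -> np.ndarray:
    n = context.size
    perm = rng.permutation(n)
    groups = [perm[i:i + group_size] for i in range(0, n, group_size)]
    x = context.copy()
    for idx in groups:
        def sub_f(sub_x, idx=idx):
            trial = x.copy()
            trial[idx] = sub_x               # evaluate the subcomponent in the full context
            return f(trial)
        x[idx] = optimize_subcomponent(sub_f, x[idx])
    return x
```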


Mathematics ◽  
2021 ◽  
Vol 9 (2) ◽  
pp. 149
Author(s):  
Yaohui Li ◽  
Jingfang Shen ◽  
Ziliang Cai ◽  
Yizhong Wu ◽  
Shuting Wang

Kriging optimization methods that can obtain only one sampling point per cycle have encountered a bottleneck in practical engineering applications. How to find a suitable optimization method that generates multiple sampling points at a time while improving convergence accuracy and reducing the number of expensive evaluations has been a widespread concern. For this reason, a Kriging-assisted multi-objective constrained global optimization (KMCGO) method is proposed. In each cycle, the sample data obtained from the expensive function evaluations are first used to construct or update the Kriging model. Then the Kriging-based predicted objective, the RMSE (root mean square error), and the feasibility probability are used to form three objectives, which are optimized through multi-objective optimization to generate a Pareto front set. Finally, the sample data from the Pareto front set are further screened to obtain more promising and valuable sampling points. Test results on five benchmark functions, four design problems, and a fuel-economy simulation optimization demonstrate the effectiveness of the proposed algorithm.
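
A minimal sketch of how the three infill objectives described above can be assembled from surrogate predictions, assuming one Gaussian-process model for the objective and one per constraint of the form g(x) <= 0; the Pareto-front generation and final screening steps are not shown, and the model names are illustrative.

```python
# Sketch: three infill objectives per candidate x, to be minimized jointly by a
# multi-objective optimizer: predicted objective, negative RMSE (uncertainty),
# negative feasibility probability.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor

def infill_objectives(X_cand, obj_gp: GaussianProcessRegressor, con_gps):
    mu, std = obj_gp.predict(X_cand, return_std=True)
    # Probability that every constraint g_j(x) <= 0 holds, assuming independent GP models.
    p_feas = np.ones(len(X_cand))
    for gp in con_gps:
        g_mu, g_std = gp.predict(X_cand, return_std=True)
        p_feas *= norm.cdf(-g_mu / np.maximum(g_std, 1e-12))
    # Minimize predicted objective, maximize uncertainty, maximize feasibility.
    return np.column_stack([mu, -std, -p_feas])
```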

