Enhancing hierarchical surrogate-assisted evolutionary algorithm for high-dimensional expensive optimization via random projection

Author(s):  
Xiaodong Ren ◽  
Daofu Guo ◽  
Zhigang Ren ◽  
Yongsheng Liang ◽  
An Chen

By remarkably reducing the number of real fitness evaluations, surrogate-assisted evolutionary algorithms (SAEAs), especially hierarchical SAEAs, have proven effective for computationally expensive optimization problems. The success of hierarchical SAEAs profits mainly from the potential benefit of their global surrogate models, known as the “blessing of uncertainty”, and from the high accuracy of their local models. However, their performance leaves room for improvement on high-dimensional problems, since it remains challenging to build sufficiently accurate local models in the huge solution space. To address this issue, this study proposes a new hierarchical SAEA that trains local surrogate models with the help of the random projection technique. Instead of training in the original high-dimensional solution space, the new algorithm first randomly projects training samples onto a set of low-dimensional subspaces, then trains a surrogate model in each subspace, and finally evaluates candidate solutions by averaging the resulting models. Experimental results on seven benchmark functions of 100 and 200 dimensions demonstrate that random projection can significantly improve the accuracy of local surrogate models and that the newly proposed hierarchical SAEA holds an obvious edge over state-of-the-art SAEAs.
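The project-then-average scheme described in the abstract can be sketched as follows. The subspace count, subspace dimension, and the simple ridge-regression surrogate are illustrative assumptions, not the paper's exact design choices:

```python
# Sketch: train one surrogate per random low-dimensional subspace,
# then evaluate candidates by averaging the per-subspace predictions.
import numpy as np

def fit_projected_surrogates(X, y, n_subspaces=5, subspace_dim=10, lam=1e-3, seed=0):
    """Randomly project samples X onto low-dimensional subspaces and fit
    one linear (ridge) surrogate per subspace."""
    rng = np.random.default_rng(seed)
    models = []
    for _ in range(n_subspaces):
        # Gaussian random projection, scaled to roughly preserve norms
        P = rng.standard_normal((X.shape[1], subspace_dim)) / np.sqrt(subspace_dim)
        Z = X @ P                                  # low-dimensional view of the samples
        Zb = np.hstack([Z, np.ones((len(Z), 1))])  # append a bias column
        w = np.linalg.solve(Zb.T @ Zb + lam * np.eye(Zb.shape[1]), Zb.T @ y)
        models.append((P, w))
    return models

def predict(models, X):
    """Evaluate candidate solutions by averaging the subspace surrogates."""
    preds = []
    for P, w in models:
        Zb = np.hstack([X @ P, np.ones((len(X), 1))])
        preds.append(Zb @ w)
    return np.mean(preds, axis=0)
```

Averaging over several independently projected models is what counteracts the information lost by any single projection.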

2021 ◽  
Author(s):  
Takumi Sonoda ◽  
Masaya Nakata

Surrogate-assisted multi-objective evolutionary algorithms have advanced the field of computationally expensive optimization, but their progress is often restricted to low-dimensional problems. This manuscript presents a multiple classifiers-assisted evolutionary algorithm based on decomposition, adapted for high-dimensional expensive problems on the basis of two insights. First, compared to approximation-based surrogates, the accuracy of classification-based surrogates is robust even when only a few high-dimensional training samples are available. Second, multiple local classifiers can hedge the risk of over-fitting. Accordingly, the proposed algorithm builds multiple support-vector-machine classifiers on top of a decomposition-based multi-objective algorithm, wherein each local classifier is trained for a corresponding scalarization function. Experimental results statistically confirm that the proposed algorithm is competitive with state-of-the-art algorithms and computationally efficient as well.
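A hedged sketch of the multiple-classifier idea: one local classifier per scalarization function of the decomposition. The Tchebycheff scalarization, the better-than-median labeling rule, and the tiny Pegasos-style linear SVM are illustrative assumptions standing in for the paper's exact choices:

```python
# Sketch: train one binary SVM per weight vector of a decomposition-based MOEA.
import numpy as np

def tchebycheff(F, weight, ideal):
    """Tchebycheff scalarization of an objective matrix F for one weight vector."""
    return np.max(weight * np.abs(F - ideal), axis=1)

def train_linear_svm(X, y, lam=0.01, epochs=50):
    """Tiny Pegasos-style linear SVM via subgradient descent (labels y in {-1, +1})."""
    rng = np.random.default_rng(0)
    w, b, t = np.zeros(X.shape[1]), 0.0, 0
    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            t += 1
            eta = 1.0 / (lam * t)
            keep = 1.0 - eta * lam
            if y[i] * (X[i] @ w + b) < 1:        # margin violated: move toward sample
                w = keep * w + eta * y[i] * X[i]
                b += eta * y[i]
            else:                                # margin satisfied: only shrink w
                w = keep * w
    return w, b

def train_local_classifiers(X, F, weights):
    """One classifier per scalarization: the positive class contains solutions
    whose scalarized value is better (lower) than the median."""
    ideal = F.min(axis=0)
    classifiers = []
    for wvec in weights:
        g = tchebycheff(F, wvec, ideal)
        labels = np.where(g < np.median(g), 1.0, -1.0)
        classifiers.append(train_linear_svm(X, labels))
    return classifiers
```

Each classifier then only needs to answer "promising or not" for its own search direction, which is an easier task than regressing fitness values in high dimensions.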



Author(s):  
Ken Kobayashi ◽  
Naoki Hamada ◽  
Akiyoshi Sannai ◽  
Akinori Tanaka ◽  
Kenichi Bannai ◽  
...  

Multi-objective optimization problems require simultaneously optimizing two or more objective functions. Many studies have reported that the solution set of an M-objective optimization problem often forms an (M − 1)-dimensional topological simplex (a curved line for M = 2, a curved triangle for M = 3, a curved tetrahedron for M = 4, etc.). Since the dimensionality of the solution set increases as the number of objectives grows, an exponentially large sample size is needed to cover the solution set. To reduce the required sample size, this paper proposes a Bézier simplex model and its fitting algorithm. These techniques can exploit the simplex structure of the solution set and decompose a high-dimensional surface-fitting task into a sequence of low-dimensional ones. An approximation theorem of Bézier simplices is proven. Numerical experiments with synthetic and real-world optimization problems demonstrate that the proposed method achieves an accurate approximation of high-dimensional solution sets from small samples. In practice, such an approximation will be conducted in the post-optimization process and enable a better trade-off analysis.
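As a concrete illustration of the model class (not the paper's fitting algorithm), a degree-d Bézier simplex over M barycentric parameters is a Bernstein-weighted sum of control points indexed by multi-indices summing to d:

```python
# Sketch: enumerate the multi-indices of a Bezier simplex and evaluate it
# at a barycentric parameter t (nonnegative entries summing to 1).
import numpy as np
from itertools import combinations_with_replacement
from math import factorial

def multi_indices(M, d):
    """All multi-indices (a_1, ..., a_M) of nonnegative integers summing to d."""
    idx = []
    for c in combinations_with_replacement(range(M), d):
        a = [0] * M
        for j in c:
            a[j] += 1
        idx.append(tuple(a))
    return idx

def bezier_simplex(control, t):
    """Evaluate sum_a  [d! / (a_1! ... a_M!)] * t^a * control[a],
    where control maps multi-indices to control points."""
    t = np.asarray(t, dtype=float)
    out = 0.0
    for a, p in control.items():
        coef = factorial(sum(a))
        for ai in a:
            coef //= factorial(ai)          # multinomial coefficient
        out = out + coef * np.prod(np.power(t, a)) * np.asarray(p, dtype=float)
    return out
```

For M = 2 and d = 1 this reduces to linear interpolation between two control points, which matches the "curved line" picture of a bi-objective solution set.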


Processes ◽  
2019 ◽  
Vol 7 (6) ◽  
pp. 321 ◽  
Author(s):  
Huazhen Cao ◽  
Tao Yu ◽  
Xiaoshun Zhang ◽  
Bo Yang ◽  
Yaxiong Wu

A novel transfer bees optimizer for reactive power optimization in a high-power system was developed in this paper. Q-learning was adopted to construct the learning mode of bees, improving the intelligence of bees through task division and cooperation. Behavior transfer was introduced, and prior knowledge of the source task was used to process the new task according to its similarity to the source task, so as to accelerate the convergence of the transfer bees optimizer. Moreover, the solution space was decomposed into multiple low-dimensional solution spaces via associated state-action chains. The performance of the transfer bees optimizer on reactive power optimization was assessed; simulation results showed that the proposed algorithm converged more stably and quickly, and that it was about 4 to 68 times faster than traditional artificial intelligence algorithms.
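The learning mode assigned to each bee is standard tabular Q-learning; the dictionary-based state/action encoding below is an illustrative assumption, with reward and state transitions supplied by the surrounding optimizer:

```python
# Sketch: the standard tabular Q-learning update, Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a)).
def q_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.9):
    """Update the Q-table (a dict of dicts) in place and return it."""
    best_next = max(Q[s_next].values()) if Q.get(s_next) else 0.0
    Q.setdefault(s, {}).setdefault(a, 0.0)
    Q[s][a] += alpha * (r + gamma * best_next - Q[s][a])
    return Q
```

Behavior transfer then amounts to initializing this table from the Q-values learned on a similar source task instead of from zeros.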


2022 ◽  
Vol 11 (1) ◽  
pp. 55-72 ◽  
Author(s):  
Anima Naik ◽  
Pradeep Kumar Chokkalingam

In this paper, we propose a binary version of the Social Group Optimization (BSGO) algorithm for solving the 0-1 knapsack problem. The standard Social Group Optimization (SGO) algorithm is designed for continuous optimization problems, so a transformation function is used to convert the continuous values generated by SGO into binary ones. The experiments are carried out using both low-dimensional and high-dimensional knapsack problems, and the results obtained by the BSGO algorithm are compared with those of other binary optimization algorithms. Experimental results reveal the superiority of the BSGO algorithm in achieving higher-quality solutions than the compared algorithms and show that it is among the best-performing algorithms, especially in high-dimensional cases.
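The continuous-to-binary transformation can be realized, for example, with a sigmoid transfer function; this particular choice is an assumption for illustration, since the abstract does not specify which function is used:

```python
# Sketch: map continuous SGO positions to {0, 1} by treating sigmoid(x)
# as the probability of setting each bit.
import numpy as np

def binarize(x, rng=None):
    """Stochastic sigmoid transfer: bit i is 1 with probability sigmoid(x_i)."""
    rng = rng or np.random.default_rng(0)
    prob = 1.0 / (1.0 + np.exp(-np.asarray(x, dtype=float)))
    return (rng.random(prob.shape) < prob).astype(int)
```

For a knapsack instance, the resulting 0-1 vector is then the item-selection mask whose value and weight are checked against the capacity constraint.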


Author(s):  
Stefan Erschen ◽  
Fabian Duddeck ◽  
Matthias Gerdts ◽  
Markus Zimmermann

In the early development phase of complex technical systems, uncertainties caused by unknown design restrictions must be considered. In order to avoid premature design decisions, sets of good designs, i.e., designs which satisfy all design goals, are sought rather than one optimal design that may later turn out to be infeasible. A set of good designs is called a solution space and serves as a target region for design variables, including those that quantify properties of components or subsystems. Often, the solution space is approximated, e.g., to enable independent development work. Algorithms that approximate the solution space as high-dimensional boxes are available, in which edges represent permissible intervals for single design variables. The box size is maximized to provide large target regions and facilitate design work. As a result of geometrical mismatch, however, boxes typically capture only a small portion of the complete solution space. To reduce this loss of solution space while still enabling independent development work, this paper presents a new approach that optimizes a set of permissible two-dimensional (2D) regions for pairs of design variables, so-called 2D-spaces. Each 2D-space is confined by polygons. The Cartesian product of all 2D-spaces forms a solution space for all design variables. An optimization problem is formulated that maximizes the size of the solution space and is solved using an interior-point algorithm. The approach is applicable to arbitrary systems with performance measures that can be expressed or approximated as linear functions of their design variables. Its effectiveness is demonstrated in a chassis design problem.
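The structure of the resulting solution space can be sketched as a membership test: a design is permissible if every selected pair of variables lies inside its 2D-space. The convex-polygon half-plane test below is a simplifying assumption for illustration (the paper's polygon-confined regions need not be convex):

```python
# Sketch: a design x is in the solution space iff each variable pair (i, j)
# lies inside its 2D polygon; the full space is the Cartesian product of 2D-spaces.
import numpy as np

def in_convex_polygon(p, vertices):
    """True if point p lies inside (or on) the convex polygon given by
    counter-clockwise vertices."""
    v = np.asarray(vertices, dtype=float)
    for i in range(len(v)):
        a, b = v[i], v[(i + 1) % len(v)]
        # Cross product of edge a->b with a->p; negative means p is right of the edge.
        cross = (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])
        if cross < 0:
            return False
    return True

def in_solution_space(x, spaces):
    """spaces maps variable-index pairs (i, j) to polygon vertex lists."""
    return all(in_convex_polygon((x[i], x[j]), poly) for (i, j), poly in spaces.items())
```

Because each test involves only two variables, different teams can verify their own 2D-spaces independently, which is the decoupling the approach is designed to preserve.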

