An Improved Biogeography-based Optimization with Hybrid Migration and Feedback Differential Evolution and its Performance Analysis

Author(s):  
Ziyu Zhang ◽  
Yuelin Gao ◽  
Jiahang Li ◽  
Wenlu Zuo

Abstract: Biogeography-based optimization (BBO) is not well suited to high-dimensional or multi-modal problems. To improve its optimization efficiency, this study proposes a novel BBO variant named ZGBBO. For the selection operator, an example-learning method is designed to ensure that inferior solutions cannot destroy superior ones. For the migration operator, a convex migration is proposed to increase the convergence speed, and the probability of finding the optimal solution is increased by using opposition-based learning to generate opposite individuals. The mutation operator of BBO is removed to eliminate the generation of poor solutions, and a differential evolution with a feedback mechanism is merged in to improve the convergence accuracy of the algorithm on multi-modal and irregular problems. Meanwhile, greedy selection is used so that the population always moves toward better regions. The global convergence of ZGBBO is then proved with a Markov model and a sequence-convergence model. In quantitative evaluations on 24 benchmark functions against three self-variants, seven improved BBO variants, and six state-of-the-art evolutionary algorithms, the experimental results show that every improved strategy is indispensable and that the overall performance of ZGBBO is better. In addition, a complexity analysis against BBO shows that ZGBBO requires less computation and has lower complexity.
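No code accompanies the abstract, so the following is a minimal Python sketch of two of the operators it names, convex migration and opposition-based learning, under simple assumptions (a uniform mixing weight and a box-bounded search space); the function names and exact update rules are illustrative, not ZGBBO's actual definitions.

```python
import numpy as np

def convex_migration(immigrant, emigrant, rng):
    """Blend an immigrant with a fitter emigrant along the segment
    between them, so the offspring stays inside their convex hull."""
    lam = rng.random()                      # mixing weight, assumed uniform
    return lam * immigrant + (1.0 - lam) * emigrant

def opposite_solution(x, lower, upper):
    """Opposition-based learning: reflect x through the centre of the
    search box; evaluating both x and its opposite raises the chance
    of landing near the optimum."""
    return lower + upper - x

rng = np.random.default_rng(0)
lower, upper = np.full(5, -10.0), np.full(5, 10.0)
x = rng.uniform(lower, upper)
y = rng.uniform(lower, upper)
child = convex_migration(x, y, rng)
print(child, opposite_solution(child, lower, upper))
```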

Symmetry ◽  
2020 ◽  
Vol 13 (1) ◽  
pp. 63
Author(s):  
Yong Shen ◽  
Ziyuan Liang ◽  
Hongwei Kang ◽  
Xingping Sun ◽  
Qingyi Chen

Proposing new strategies to improve the optimization performance of differential evolution (DE) is an important research direction. The jSO algorithm was the announced winner of the Congress on Evolutionary Computation (CEC) 2017 competition on numerical optimization and is the state-of-the-art algorithm in the SHADE (Success-History based Adaptive Differential Evolution) series. However, jSO converges prematurely in search spaces of different dimensions, is prone to falling into local optima during evolution, and suffers from decreasing population diversity. In this paper, a modified jSO algorithm (MjSO) is proposed to address these problems; it is based on cosine similarity with parameter adaptation and a novel opposition-based learning restart mechanism that incorporates symmetry. Moreover, it is well known that parameter settings have a significant impact on algorithm performance, and the search process can be divided into two symmetrical parts; hence, a parameter control strategy based on a symmetric search process is introduced in MjSO. The effectiveness of these designs is supported by a population clustering analysis and a population diversity measure. To evaluate the proposed algorithm, it is compared with three state-of-the-art DE variants (EBLSHADE, ELSHADE-SPACMA, and SALSHADE-cnEPSin) and two original algorithms (jSO and LSHADE) on 30 CEC'17 benchmark functions and three classical engineering design problems. The experimental results and analysis reveal that the proposed algorithm outperforms the other competitors in terms of convergence speed and solution quality. Promisingly, the proposed method can serve as an effective and efficient auxiliary tool for more complex optimization models and scenarios.
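As a rough illustration of two ingredients the abstract names, a cosine-similarity measure between search directions and an opposition-based restart, here is a hedged Python sketch; the diversity threshold, the reseed-half policy, and all names are assumptions, not MjSO's actual mechanism.

```python
import numpy as np

def cosine_similarity(u, v):
    """Cosine of the angle between two vectors, used here as a cheap
    signal of how aligned two search directions are."""
    return float(np.dot(u, v) /
                 (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

def obl_restart(pop, lower, upper, rng, diversity_eps=1e-3):
    """Opposition-based restart: when the population has collapsed
    (mean per-dimension std below diversity_eps), reseed half of it
    with the opposites of the current points to regain diversity."""
    if pop.std(axis=0).mean() < diversity_eps:
        idx = rng.choice(len(pop), size=len(pop) // 2, replace=False)
        pop[idx] = lower + upper - pop[idx]
    return pop

rng = np.random.default_rng(0)
lower, upper = np.full(4, -5.0), np.full(4, 5.0)
pop = np.tile(rng.uniform(lower, upper), (10, 1))   # a collapsed population
pop = obl_restart(pop, lower, upper, rng)
print(cosine_similarity(pop[0] - pop.mean(0), pop[1] - pop.mean(0)))
```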


2021 ◽  
Vol 2021 ◽  
pp. 1-20
Author(s):  
Changshou Deng ◽  
Xiaogang Dong ◽  
Yucheng Tan ◽  
Hu Peng

Differential evolution (DE) is a robust global optimization algorithm that has been used to solve many real-world applications since it was proposed. However, binomial crossover does not allow for a sufficiently effective search in local space, so DE's local search performance is relatively poor. In particular, when DE is applied to complex optimization problems, inefficient local search seriously limits its overall performance. To overcome this disadvantage, this paper introduces a new local search scheme based on the Hadamard matrix (HLS). HLS improves the probability of finding the optimal solution by producing multiple offspring in the local space built by the target individual and its descendant. HLS has been implemented in four classical DE algorithms and in jDE, a variant of DE. Experiments are carried out on a set of widely used benchmark functions. On the 20 benchmark problems, the four DE schemes using HLS outperform the corresponding original schemes on 80%, 75%, 65%, and 65% of the problems, respectively, and jDE with HLS outperforms jDE on 50% of the test problems. The experimental results and statistical analysis reveal that HLS can effectively improve the overall performance of DE and jDE.
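The following Python sketch shows one plausible way a Hadamard matrix can generate multiple offspring in the local space spanned by a target individual and its trial descendant, as the abstract describes; the step size and acceptance rule are assumptions rather than the paper's exact HLS.

```python
import numpy as np
from scipy.linalg import hadamard

def hadamard_local_search(target, trial, fitness, step=0.5):
    """Sample several neighbours in the local space spanned by the
    target and its trial offspring, using rows of a Hadamard matrix
    as +/-1 sign patterns; return the best point found."""
    dim = len(target)
    n = 1 << (dim - 1).bit_length()          # next power of two for hadamard()
    H = hadamard(n)[:, :dim]                 # one sign pattern per row
    direction = trial - target
    candidates = target + step * H * direction   # broadcast: n offspring
    best = min(candidates, key=fitness)
    return best if fitness(best) < fitness(trial) else trial

sphere = lambda x: float(np.sum(x ** 2))
x = np.array([1.0, -2.0, 0.5, 3.0])
y = x + np.array([0.1, 0.2, -0.1, -0.3])
print(hadamard_local_search(x, y, sphere))
```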


Electronics ◽  
2021 ◽  
Vol 10 (5) ◽  
pp. 567
Author(s):  
Donghun Yang ◽  
Kien Mai Mai Ngoc ◽  
Iksoo Shin ◽  
Kyong-Ha Lee ◽  
Myunggwon Hwang

To design an efficient deep learning model that can be used in the real world, it is important to detect out-of-distribution (OOD) data well. Various studies have been conducted to solve the OOD problem. The current state-of-the-art approach uses a confidence score based on the Mahalanobis distance in a feature space. Although it outperforms previous approaches, its results are sensitive to the quality of the trained model and the complexity of the dataset. Herein, we propose a novel OOD detection method that trains a feature space better suited to OOD detection. The proposed method uses an ensemble of features trained with a softmax-based classifier and a network based on distance metric learning (DML). Through the complementary interaction of these two networks, the trained feature space has a more compact distribution and fits a class-conditional Gaussian distribution well. Therefore, OOD data can be efficiently detected by setting a threshold in the trained feature space. To evaluate the proposed method, we applied it to various combinations of image datasets. The results show that the overall performance of the proposed approach is superior to that of other methods, including the state-of-the-art approach, on every combination of datasets.
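The Mahalanobis-distance confidence score this work builds on can be sketched compactly: fit one Gaussian per class with a shared (tied) covariance in the feature space and score a sample by its distance to the closest class mean. The ensemble of softmax and DML features is omitted here; this is a minimal sketch of the scoring baseline only.

```python
import numpy as np

def fit_gaussians(features, labels):
    """Fit class-conditional Gaussians with a shared (tied)
    covariance, as in Mahalanobis-based OOD detection."""
    classes = np.unique(labels)
    means = {c: features[labels == c].mean(axis=0) for c in classes}
    centered = np.vstack([features[labels == c] - means[c] for c in classes])
    precision = np.linalg.pinv(np.cov(centered, rowvar=False))
    return means, precision

def ood_score(x, means, precision):
    """Confidence = negative Mahalanobis distance to the closest
    class mean; thresholding this score flags OOD inputs."""
    dists = [(x - m) @ precision @ (x - m) for m in means.values()]
    return -min(dists)

rng = np.random.default_rng(0)
feats = np.vstack([rng.normal(0, 1, (50, 8)), rng.normal(4, 1, (50, 8))])
labels = np.array([0] * 50 + [1] * 50)
means, precision = fit_gaussians(feats, labels)
print(ood_score(np.zeros(8), means, precision))       # in-distribution: high
print(ood_score(np.full(8, 10.0), means, precision))  # far away: low
```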


2006 ◽  
Vol 14 (2) ◽  
pp. 223-253 ◽  
Author(s):  
Frédéric Lardeux ◽  
Frédéric Saubion ◽  
Jin-Kao Hao

This paper presents GASAT, a hybrid algorithm for the satisfiability problem (SAT). The main feature of GASAT is that it includes a recombination stage based on a specific crossover and a tabu search stage. We have conducted experiments to evaluate the different components of GASAT and to compare its overall performance with state-of-the-art SAT algorithms. These experiments show that GASAT provides very competitive results.
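GASAT's tabu search stage can be illustrated with a minimal Python sketch: repeatedly flip the non-tabu variable that most reduces the number of unsatisfied clauses. The clause encoding (DIMACS-style signed integers) and the tenure policy are assumptions for illustration; the recombination stage is not shown.

```python
def unsat_count(clauses, assign):
    """Number of clauses left unsatisfied by a boolean assignment.
    A clause is a list of signed ints (e.g. -3 means NOT x3)."""
    return sum(not any((lit > 0) == assign[abs(lit)] for lit in clause)
               for clause in clauses)

def tabu_step(clauses, assign, tabu, tenure, it):
    """Flip the non-tabu variable that most reduces the number of
    unsatisfied clauses, then make it tabu for `tenure` iterations."""
    best_v, best_cost = None, None
    for v in list(assign):
        if tabu.get(v, -1) >= it:
            continue
        assign[v] = not assign[v]            # tentative flip
        cost = unsat_count(clauses, assign)
        assign[v] = not assign[v]            # undo
        if best_cost is None or cost < best_cost:
            best_v, best_cost = v, cost
    assign[best_v] = not assign[best_v]
    tabu[best_v] = it + tenure
    return best_cost

clauses = [[1, -2], [2, 3], [-1, 3], [-3, 2]]
assign = {1: True, 2: False, 3: False}
tabu = {}
for it in range(10):
    cost = tabu_step(clauses, assign, tabu, tenure=2, it=it)
    if cost == 0:
        break
print(assign, cost)
```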


2021 ◽  
Vol 1 (2) ◽  
pp. 1-23
Author(s):  
Arkadiy Dushatskiy ◽  
Tanja Alderliesten ◽  
Peter A. N. Bosman

Surrogate-assisted evolutionary algorithms have the potential to be of high value for real-world optimization problems when fitness evaluations are expensive, limiting the number of evaluations that can be performed. In this article, we consider the domain of pseudo-Boolean functions in a black-box setting. Moreover, instead of using a surrogate model as an approximation of the fitness function, we propose to learn the coefficients of the Walsh decomposition of the fitness function precisely and to use the Walsh decomposition itself as the surrogate. If the coefficients are learned correctly, the Walsh decomposition values perfectly match the fitness function, and thus the optimal solution to the problem can be found by optimizing the surrogate without any additional evaluations of the original fitness function. It is known that the Walsh coefficients can be learned efficiently for pseudo-Boolean functions with k-bounded epistasis and known problem structure. We propose to learn the dependencies between variables first and thereby substantially reduce the number of Walsh coefficients to be calculated. After the accurate Walsh decomposition is obtained, the surrogate model is optimized using GOMEA, which is considered a state-of-the-art binary optimization algorithm. We compare the proposed approach with standard GOMEA and two other Walsh decomposition-based algorithms. The benchmark functions in the experiments are the well-known trap functions, NK-landscapes, MaxCut, and MAX3SAT problems. The experimental results demonstrate that the proposed approach is scalable at the supposed complexity of O(ℓ log ℓ) function evaluations when the number of subfunctions is O(ℓ) and all subfunctions are k-bounded, outperforming all considered algorithms.
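The Walsh decomposition underlying the surrogate can be shown at toy scale: the brute-force transform below computes every coefficient of a small pseudo-Boolean function and reconstructs it exactly, which is what the paper achieves with far fewer evaluations by exploiting k-bounded structure. This exhaustive version is purely illustrative.

```python
from itertools import product

def walsh_coefficients(f, n):
    """Exhaustively compute all 2^n Walsh coefficients of a
    pseudo-Boolean function f: {0,1}^n -> R.  The paper learns only
    the nonzero coefficients of k-bounded functions; this brute
    force just demonstrates the decomposition itself."""
    coeffs = {}
    points = list(product((0, 1), repeat=n))
    for mask in points:                      # mask encodes the subset S
        total = 0.0
        for x in points:
            sign = (-1) ** sum(m & xi for m, xi in zip(mask, x))
            total += f(x) * sign
        coeffs[mask] = total / 2 ** n
    return coeffs

def walsh_eval(coeffs, x):
    """Reconstruct f(x) from its Walsh decomposition (the surrogate)."""
    return sum(w * (-1) ** sum(m & xi for m, xi in zip(mask, x))
               for mask, w in coeffs.items())

onemax = lambda x: sum(x)
c = walsh_coefficients(onemax, 3)
print(walsh_eval(c, (1, 0, 1)), onemax((1, 0, 1)))   # both print 2
```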


2013 ◽  
Vol 756-759 ◽  
pp. 3231-3235
Author(s):  
Xue Mei Wang ◽  
Jin Bo Wang

The classical k-means clustering algorithm has several defects: it is sensitive to the initial selection of cluster centers, has poor global search ability, and tends to fall into local optima. This article introduces differential evolution, a heuristic population-based global optimization algorithm, and then puts forward an improved differential evolution algorithm combined with the k-means clustering algorithm. The experiments showed that the method solves the initial-center optimization problem of k-means well, has better search ability, and more effectively improves clustering quality and convergence speed.
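A hedged sketch of how DE can encode the k-means initialization problem: each individual flattens k candidate centres into one real vector, and its fitness is the within-cluster sum of squared errors. The encoding and bounds are assumptions; any DE loop (such as the classic scheme sketched after the survey abstract below) could optimize this fitness.

```python
import numpy as np

def sse_fitness(flat_centers, data, k):
    """Decode one DE individual (k cluster centres flattened into a
    single vector) and score it by the within-cluster sum of squared
    errors -- the quantity k-means tries to minimise."""
    centers = flat_centers.reshape(k, data.shape[1])
    d2 = ((data[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return float(d2.min(axis=1).sum())

rng = np.random.default_rng(1)
data = rng.normal(size=(200, 2))
k = 3
lower, upper = np.tile(data.min(0), k), np.tile(data.max(0), k)
individual = rng.uniform(lower, upper)    # one candidate set of centres
print(sse_fitness(individual, data, k))
```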


2018 ◽  
Vol 8 (10) ◽  
pp. 1945 ◽  
Author(s):  
Tarik Eltaeib ◽  
Ausif Mahmood

Differential evolution (DE) has been extensively used in optimization studies since its development in 1995 because of its reputation as an effective global optimizer. DE is a population-based metaheuristic technique that evolves numerical vectors to solve optimization problems. DE strategies have a significant impact on DE performance and play a vital role in achieving stochastic global optimization. However, DE is highly dependent on its control parameters, and in practice the fine-tuning of these parameters is not always easy. Here, we discuss the improvements and developments that have been made to DE algorithms. In particular, we present a state-of-the-art survey of the literature on DE and its recent advances, such as the development of adaptive, self-adaptive, and hybrid techniques.
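For reference, the canonical DE/rand/1/bin scheme that this survey's variants build on can be written in a few lines of Python; the population size, F, and CR below are common textbook defaults, not recommendations from the survey.

```python
import numpy as np

def de_rand_1_bin(fitness, bounds, pop_size=20, F=0.5, CR=0.9,
                  gens=100, seed=0):
    """Classic DE/rand/1/bin: for each target, mutate three distinct
    random vectors (a + F*(b - c)), binomially cross with the target
    at rate CR, and keep the trial only if it is no worse."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    dim = len(lo)
    pop = rng.uniform(lo, hi, size=(pop_size, dim))
    fit = np.array([fitness(x) for x in pop])
    for _ in range(gens):
        for i in range(pop_size):
            a, b, c = pop[rng.choice([j for j in range(pop_size) if j != i],
                                     3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True     # guarantee one mutant gene
            trial = np.where(cross, mutant, pop[i])
            f_trial = fitness(trial)
            if f_trial <= fit[i]:
                pop[i], fit[i] = trial, f_trial
    return pop[fit.argmin()], fit.min()

best, val = de_rand_1_bin(lambda x: float(np.sum(x ** 2)),
                          (np.full(5, -5.0), np.full(5, 5.0)))
print(best, val)
```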


2022 ◽  
Vol 40 (2) ◽  
pp. 1-24
Author(s):  
Franco Maria Nardini ◽  
Roberto Trani ◽  
Rossano Venturini

Modern search services often provide multiple options to rank the search results, e.g., sort "by relevance", "by price", or "by discount" in e-commerce. While the traditional rank by relevance effectively places the relevant results in the top positions of the results list, ranking by an attribute can place many marginally relevant results at the head of the list, leading to a poor user experience. In the past, this issue has been addressed by investigating the relevance-aware filtering problem, which asks to select the subset of results maximizing the relevance of the attribute-sorted list. Recently, an exact algorithm was proposed to solve this problem optimally. However, its high computational cost makes it impractical for the Web search scenario, which is characterized by huge lists of results and strict time constraints. For this reason, the problem is often solved using efficient yet inaccurate heuristic algorithms. In this article, we first prove performance bounds for the existing heuristics. We then propose two efficient and effective algorithms to solve the relevance-aware filtering problem. First, we propose OPT-Filtering, a novel exact algorithm that is faster than the existing state-of-the-art optimal algorithm. Second, we propose an approximate and even more efficient algorithm, ϵ-Filtering, which, given an allowed approximation error ϵ, finds a (1-ϵ)-optimal filtering, i.e., the relevance of its solution is at least (1-ϵ) times the optimum. We conduct a comprehensive evaluation of the two proposed algorithms against state-of-the-art competitors on two real-world public datasets. Experimental results show that OPT-Filtering achieves a significant speedup of up to two orders of magnitude with respect to the existing optimal solution, while ϵ-Filtering further improves this result by trading effectiveness for efficiency. In particular, the experiments show that ϵ-Filtering can achieve quasi-optimal solutions while being faster than all state-of-the-art competitors in most of the tested configurations.
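The abstract does not define the relevance measure precisely, so the Python sketch below only illustrates one plausible reading of the objective: score a candidate filtering by the DCG of the kept results after sorting them by the chosen attribute. The DCG choice and the field names are assumptions, not the paper's actual metric or algorithms.

```python
import math

def dcg_of_selection(results, keep):
    """Score a filtering: sort the kept results by the user-chosen
    attribute (e.g. price, ascending) and compute the DCG of their
    relevance in that order -- one plausible reading of 'relevance
    of the attribute-sorted list'; the paper's metric may differ."""
    kept = sorted((r for r in results if r["id"] in keep),
                  key=lambda r: r["attribute"])
    return sum(r["relevance"] / math.log2(pos + 2)
               for pos, r in enumerate(kept))

results = [{"id": 0, "relevance": 0.9, "attribute": 30.0},
           {"id": 1, "relevance": 0.1, "attribute": 5.0},
           {"id": 2, "relevance": 0.7, "attribute": 12.0}]
print(dcg_of_selection(results, keep={0, 2}))
```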

