Gradual and Cumulative Improvements to the Classical Differential Evolution Scheme through Experiments

Author(s):  
George Anescu

Abstract The paper presents the experimental results of tests conducted with the purpose of gradually and cumulatively improving the classical DE scheme in both efficiency and success rate. The modifications consisted of a randomization of the scaling factor (a simple jitter scheme), a more efficient Random Greedy Selection scheme, an adaptive scheme for the crossover probability, and a resetting mechanism for the agents. After each modification step, experiments were conducted on a set of 11 scalable, multimodal, continuous optimization functions in order to analyze the improvements and decide the next improvement direction. Finally, only the initial classical scheme and the constructed Fast Self-Adaptive DE (FSA-DE) variant were compared, with the purpose of testing how their performance degrades as the search space dimension increases. The experimental results demonstrated the superiority of the proposed FSA-DE variant.
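As a rough illustration of two of the modifications mentioned in the abstract (scaling-factor jitter and greedy one-to-one selection), the following Python sketch shows one generation of a classical DE/rand/1/bin step with a randomized F. Function and parameter names are illustrative assumptions, not the authors' exact FSA-DE scheme.

```python
import numpy as np

def de_step_with_jitter(pop, fitness, objective, f_base=0.5, jitter=0.1, cr=0.9, rng=None):
    """One generation of DE/rand/1/bin with a simple scaling-factor jitter:
    F is re-drawn uniformly around f_base for every mutant vector.
    Illustrative sketch only, not the paper's exact FSA-DE variant."""
    rng = np.random.default_rng() if rng is None else rng
    n, d = pop.shape
    new_pop, new_fit = pop.copy(), fitness.copy()
    for i in range(n):
        # pick three distinct agents different from i
        r1, r2, r3 = rng.choice([j for j in range(n) if j != i], size=3, replace=False)
        f = f_base + jitter * (rng.random() - 0.5)      # randomized scaling factor
        mutant = pop[r1] + f * (pop[r2] - pop[r3])
        # binomial crossover with probability cr
        mask = rng.random(d) < cr
        mask[rng.integers(d)] = True                    # ensure at least one component crosses
        trial = np.where(mask, mutant, pop[i])
        ft = objective(trial)
        if ft <= fitness[i]:                            # greedy one-to-one selection
            new_pop[i], new_fit[i] = trial, ft
    return new_pop, new_fit
```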

Author(s):  
George Anescu

Multidimensional scalable test functions are very important in testing the capabilities of new optimization methods, especially in evaluating their response to an increase of the search space dimension. As a continuation of a previously published paper, new sets of test functions for continuous optimization are proposed, both unconstrained (or only box-constrained; 7 new test functions) and constrained (10 new test functions).
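For readers unfamiliar with the terminology, a "scalable" test function is one whose definition applies unchanged in any dimension. The standard Rastrigin function below is such an example (it is not one of the paper's newly proposed functions).

```python
import numpy as np

def rastrigin(x):
    """Standard scalable, multimodal Rastrigin function; the same
    definition works for any dimension n (global minimum 0 at the origin)."""
    x = np.asarray(x, dtype=float)
    return 10.0 * x.size + np.sum(x**2 - 10.0 * np.cos(2.0 * np.pi * x))

# The same function is evaluated in 2 and 30 dimensions:
print(rastrigin(np.zeros(2)), rastrigin(np.zeros(30)))   # 0.0 0.0
```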


2017 ◽  
Vol 2017 ◽  
pp. 1-12 ◽  
Author(s):  
Lei Peng ◽  
Yanyun Zhang ◽  
Guangming Dai ◽  
Maocai Wang

Memetic algorithms with an appropriate trade-off between exploration and exploitation can obtain very good results in continuous optimization. In this paper, we present an improved memetic differential evolution algorithm for solving global optimization problems. The proposed approach, called memetic DE (MDE), hybridizes differential evolution (DE) with a local search (LS) operator and periodic reinitialization to balance exploration and exploitation. A new contraction criterion, based on an improved maximum distance in objective space, is proposed to decide when the local search starts. The proposed algorithm is compared with six well-known evolutionary algorithms on twenty-one benchmark functions, and the experimental results are analyzed with two kinds of nonparametric statistical tests. Moreover, sensitivity analyses for the parameters of MDE are also performed. The experimental results demonstrate the competitive performance of the proposed method with respect to the six compared algorithms.
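The abstract does not give the exact contraction criterion, but the general idea of triggering a local search once the population has contracted in objective space can be sketched as below. The threshold, function name, and the use of the objective-value range are assumptions for illustration only.

```python
import numpy as np

def contraction_reached(obj_values, initial_range, threshold=1e-3):
    """Plausible sketch of a contraction test in objective space: start the
    local search once the current spread of objective values has shrunk below
    a fraction of the initial spread.  The exact MDE criterion is defined in
    the paper; the names and threshold here are assumptions."""
    current_range = np.max(obj_values) - np.min(obj_values)
    return current_range <= threshold * max(initial_range, np.finfo(float).tiny)
```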


2010 ◽  
Vol 15 (4) ◽  
pp. 803-830 ◽  
Author(s):  
Morteza Alinia Ahandani ◽  
Naser Pourqorban Shirjoposh ◽  
Reza Banimahd

2015 ◽  
Vol 2015 ◽  
pp. 1-10 ◽  
Author(s):  
V. Gonuguntla ◽  
R. Mallipeddi ◽  
Kalyana C. Veluvolu

Differential evolution (DE) is simple and effective in solving numerous real-world global optimization problems. However, its effectiveness critically depends on the appropriate setting of the population size and the strategy parameters. Therefore, to obtain optimal performance, time-consuming preliminary tuning of the parameters is needed. Recently, different strategy parameter adaptation techniques, which can automatically update the parameters to values that suit the characteristics of the optimization problem, have been proposed. However, most of these works do not control the adaptation of the population size. In addition, they try to adapt each strategy parameter individually and do not take into account the interactions between the parameters being adapted. In this paper, we introduce a DE algorithm in which both strategy parameters are self-adapted, taking the parameter dependencies into account, by means of a multivariate probabilistic technique based on Gaussian Adaptation working on the parameter space. In addition, the proposed DE algorithm starts by sampling a large number of candidate solutions in the search space, and in each generation a constant number of individuals is adaptively selected from this large sample set to form the population that evolves. The proposed algorithm is evaluated on 14 benchmark problems of CEC 2005 with different dimensionalities.
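A minimal sketch of the core idea of adapting the strategy parameters jointly rather than individually is given below: (F, CR) pairs are sampled from a multivariate Gaussian and the distribution is refit to the pairs that produced successful trials, so correlations between F and CR are preserved. The class name, learning rate, and update rule are illustrative assumptions, not the authors' exact Gaussian Adaptation procedure.

```python
import numpy as np

class GaussianParameterAdapter:
    """Joint (F, CR) adaptation via a multivariate Gaussian on the parameter
    space: sample parameters from N(mean, cov) and refit the distribution to
    the samples that yielded successful trials.  Sketch of the idea only."""

    def __init__(self, mean=(0.5, 0.9), cov=None, rng=None):
        self.mean = np.array(mean, dtype=float)
        self.cov = np.eye(2) * 0.01 if cov is None else np.array(cov, dtype=float)
        self.rng = np.random.default_rng() if rng is None else rng

    def sample(self):
        # draw one (F, CR) pair, clipped to sensible DE ranges
        f, cr = self.rng.multivariate_normal(self.mean, self.cov)
        return float(np.clip(f, 0.1, 1.0)), float(np.clip(cr, 0.0, 1.0))

    def update(self, successful_params, learning_rate=0.2):
        # refit mean and covariance toward the successful (F, CR) samples
        if len(successful_params) < 2:
            return
        s = np.asarray(successful_params, dtype=float)
        self.mean = (1 - learning_rate) * self.mean + learning_rate * s.mean(axis=0)
        self.cov = (1 - learning_rate) * self.cov + learning_rate * np.cov(s, rowvar=False)
```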


2022 ◽  
Vol 13 (1) ◽  
pp. 0-0

Differential evolution (DE), an important evolutionary technique, relies on components such as population initialization, mutation, and crossover to solve realistic optimization problems. This work presents a modified differential evolution algorithm that uses an exponential scale factor and a logistic map in order to address the slow convergence rate and to maintain a good balance between exploration and exploitation. The modification is made in two places: (i) the initialization of the population and (ii) the scaling factor. The proposed algorithm is validated on 13 different benchmark functions taken from the literature, and the outcomes are compared with 7 popular state-of-the-art algorithms. Furthermore, the performance of the modified algorithm is evaluated on 3 realistic engineering problems and compared with 8 recent optimizers. The number of function evaluations shows that the proposed algorithm converges more quickly than the other existing algorithms.
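The two modifications can be illustrated as follows: a chaotic logistic-map initialization of the population and an exponentially decaying scale factor. The specific map parameter, seed, and decay formula below are assumptions; the paper's exact expressions may differ.

```python
import numpy as np

def logistic_map_init(pop_size, dim, lower, upper, r=4.0, seed=0.7):
    """Chaotic initialization: iterate the logistic map x <- r*x*(1-x) and
    scale the values into the search bounds.  Illustrative sketch only."""
    vals = np.empty(pop_size * dim)
    x = seed
    for i in range(vals.size):
        x = r * x * (1.0 - x)
        vals[i] = x
    return lower + vals.reshape(pop_size, dim) * (upper - lower)

def exponential_scale_factor(gen, max_gen, f_max=0.9, f_min=0.3):
    """One possible exponential schedule for F, decaying from f_max toward
    f_min over the run (the paper's exact formula may differ)."""
    return f_min + (f_max - f_min) * np.exp(-5.0 * gen / max_gen)
```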


2013 ◽  
Vol 2013 ◽  
pp. 1-7 ◽  
Author(s):  
Hongtao Ye ◽  
Wenguang Luo ◽  
Zhenqiang Li

This paper presents an analysis of the relationship between particle velocity and convergence in particle swarm optimization. Premature convergence is due to the decrease of particle velocity in the search space, which leads to a total implosion and ultimately fitness stagnation of the swarm. An improved algorithm which introduces a velocity differential evolution (DE) strategy into the hierarchical particle swarm optimization (H-PSO) is proposed to improve its performance. DE is employed to regulate the particle velocity, rather than the traditional particle position, in case the optimal result has not improved after several iterations. Benchmark functions are used to demonstrate the effectiveness of the proposed method.
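The key difference from a conventional DE hybrid is that the DE operators act on velocities rather than positions. The sketch below applies a DE/rand/1/bin perturbation to the velocity vectors when stagnation is detected; the function name, trigger, and parameter values are assumptions for illustration.

```python
import numpy as np

def de_velocity_update(velocities, f=0.5, cr=0.9, rng=None):
    """When the best fitness has stagnated for several iterations, perturb each
    particle's velocity with a DE/rand/1/bin step built from other particles'
    velocities (instead of mutating positions).  Illustrative sketch only."""
    rng = np.random.default_rng() if rng is None else rng
    n, d = velocities.shape
    new_v = velocities.copy()
    for i in range(n):
        r1, r2, r3 = rng.choice([j for j in range(n) if j != i], size=3, replace=False)
        mutant = velocities[r1] + f * (velocities[r2] - velocities[r3])
        mask = rng.random(d) < cr
        mask[rng.integers(d)] = True        # at least one component from the mutant
        new_v[i] = np.where(mask, mutant, velocities[i])
    return new_v
```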


2015 ◽  
Vol 137 (7) ◽  
Author(s):  
Jong-Chen Chen

Continuous optimization plays an increasingly significant role in everyday decision-making situations. Our group had previously developed a multilevel system called the artificial neuromolecular system (ANM) that possessed structural richness allowing variation and/or selection operators to act on it in order to generate a broad range of dynamic behaviors. In this paper, we used the ANM system to control the motions of a wooden walking robot named Miky. The robot was used to investigate the ANM system's capability to deal with continuous optimization problems through self-organized learning. An evolutionary learning algorithm was used to train the system and generate appropriate control. The experimental results showed that Miky was capable of continual learning in a physical environment. A further experiment was conducted by making some changes to Miky's physical structure in order to observe the system's capability to deal with the change. Detailed analysis of the experimental results showed that Miky responded to the change by appropriately adjusting its leg movements in space and time, demonstrating that the ANM system possessed continuous optimization capability in coping with the change. Our findings from these empirical experiments may offer another perspective on how to design intelligent systems that are friendlier than traditional systems in assisting humans to walk.


2015 ◽  
Vol 3 (4) ◽  
pp. 365-373 ◽  
Author(s):  
Dabin Zhang ◽  
Jia Ye ◽  
Zhigang Zhou ◽  
Yuqi Luan

Abstract In order to overcome the problems of low convergence precision and easy relapse into local extrema in the fruit fly optimization algorithm (FOA), this paper adds the idea of differential evolution to the fruit fly optimization algorithm, and an algorithm of fruit fly optimization based on differential evolution (FOADE) is proposed. The mutation, crossover, and selection operations of differential evolution are applied to FOA after each iteration, which allows the algorithm to jump out of local extrema and continue to optimize. Compared to FOA, the experimental results show that FOADE has the advantages of better global searching ability, faster convergence, and more precise convergence.
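The hybridization step described above (applying DE's mutation, crossover, and greedy selection to the fruit fly swarm after each FOA iteration) can be sketched as follows. The operator settings and names are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np

def de_refinement(swarm, smell, objective, f=0.5, cr=0.8, rng=None):
    """After a standard FOA iteration, apply DE mutation, crossover, and greedy
    selection to the fruit fly positions so the swarm can escape a local
    extremum.  Illustrative sketch only."""
    rng = np.random.default_rng() if rng is None else rng
    n, d = swarm.shape
    for i in range(n):
        r1, r2, r3 = rng.choice([j for j in range(n) if j != i], size=3, replace=False)
        mutant = swarm[r1] + f * (swarm[r2] - swarm[r3])
        mask = rng.random(d) < cr
        mask[rng.integers(d)] = True
        trial = np.where(mask, mutant, swarm[i])
        ft = objective(trial)
        if ft <= smell[i]:                  # greedy selection on the smell (fitness) value
            swarm[i], smell[i] = trial, ft
    return swarm, smell
```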
