Chaotic Fitness Dependent Optimizer for Planning and Engineering Design

Author(s):  
Hardi M. Mohammed ◽  
Tarik A. Rashid

Abstract Fitness Dependent Optimizer (FDO) is a recent metaheuristic algorithm that mimics the reproduction behavior of bee swarms searching for better hives. The algorithm is similar to Particle Swarm Optimization (PSO) but works differently, and it produces better results than many common metaheuristic algorithms. This paper aims to improve the performance of FDO by incorporating chaos theory, yielding the Chaotic FDO (CFDO). Ten chaotic maps are embedded in CFDO to determine which of them best help the algorithm avoid local optima and find the global optimum. A new technique is also introduced to keep the population within prescribed bounds, since the original FDO has difficulty constraining its population. The proposed CFDO is evaluated on 10 benchmark functions from CEC2019, and the results show that its performance is improved: the Singer map has the greatest positive impact, while the Tent map performs worst. The results also show that CFDO is superior to GA, FDO, and CSO, and both the CEC2013 and CEC2005 suites are used for further evaluation. Finally, the proposed CFDO is applied to classical engineering problems, such as pressure vessel design, where it handles the problem better than WOA, GWO, FDO, and CGWO. In addition, CFDO is applied to the task assignment problem and compared with the original FDO; the results show that CFDO solves the problem more effectively.
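The abstract does not spell out the update rules, but the general pattern of a chaos-embedded metaheuristic, drawing the scaling factors of the movement step from a chaotic map (here the Singer map, reported as the best-performing, with the Tent map shown for contrast) instead of a uniform random generator, can be sketched as follows. This is a minimal illustration; the map constants and the simplified movement step are assumptions, not the published FDO/CFDO equations.

```python
def singer_map(x, mu=1.07):
    # One iteration of the Singer chaotic map; values remain in (0, 1).
    return mu * (7.86 * x - 23.31 * x**2 + 28.75 * x**3 - 13.302875 * x**4)

def tent_map(x):
    # One iteration of the Tent chaotic map (reported as the weakest choice).
    # Seeds at the map's fixed points (e.g. exactly 0.7) should be avoided.
    return x / 0.7 if x < 0.7 else (10.0 / 3.0) * (1.0 - x)

def chaotic_sequence(length, x0=0.7, chaos_map=singer_map):
    # Chaotic values that stand in for uniform random draws in the search.
    seq, x = [], x0
    for _ in range(length):
        x = chaos_map(x)
        seq.append(x)
    return seq

def chaotic_step(position, best_position, chaos_value):
    # Simplified, generic movement toward the best-known position, scaled by
    # a chaotic value instead of random.random().
    return [p + chaos_value * (b - p) for p, b in zip(position, best_position)]
```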

2010 ◽  
Vol 26-28 ◽  
pp. 1151-1154
Author(s):  
Zong Li Liu ◽  
Jie Cao ◽  
Zhan Ting Yuan

The optimization of complex systems, such as production scheduling and control systems, often faces difficulties such as large scale, hard-to-build models, time-consuming evaluation, NP-hardness, multi-modality, uncertainty, and multiple objectives. Developing advanced theory and effective algorithms for such problems remains a hot research topic in both academia and engineering. As a novel evolutionary computing technique, particle swarm optimization (PSO) is not limited by how the optimization problem is represented and has strong global optimization ability, which has drawn wide attention and research from both academia and industry. This paper presents a directed-graph model of the task assignment problem in an enterprise and solves the task assignment problem with a buffer zone via a hybrid PSO algorithm. Simulation results show that both the model and the algorithm are effective for this problem.
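For context, the canonical PSO velocity and position update that such a hybrid builds on is sketched below in Python; the decoding of particle positions into task assignments and the handling of the buffer zone described in the paper are not shown, and the coefficient values are conventional defaults rather than the authors' settings.

```python
import random

def pso_step(positions, velocities, pbest, gbest, w=0.7, c1=1.5, c2=1.5):
    # One canonical PSO iteration over a list of particles, each a list of floats.
    new_positions, new_velocities = [], []
    for x, v, p in zip(positions, velocities, pbest):
        v_next = [w * vi
                  + c1 * random.random() * (pi - xi)
                  + c2 * random.random() * (gi - xi)
                  for vi, xi, pi, gi in zip(v, x, p, gbest)]
        x_next = [xi + vi for xi, vi in zip(x, v_next)]
        new_velocities.append(v_next)
        new_positions.append(x_next)
    return new_positions, new_velocities
```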


2012 ◽  
Vol 605-607 ◽  
pp. 2217-2221
Author(s):  
Rong Hua ◽  
Dan Jiang Chen ◽  
Yin Zhong Ye

Chaos particle swarm optimization (CPSO) cannot guarantee population diversity or the ergodicity of the search, because its algorithm parameters remain, in effect, random numbers. This paper proposes an adaptive chaos embedded particle swarm optimization (ACEPSO) algorithm that substitutes chaotic maps for the random numbers of the classical PSO algorithm, exploiting the stochastic and ergodic properties of chaotic search, and introduces an adaptive inertia weight factor that each particle adjusts in response to its own fitness. This overcomes the drawback of CPSO of being easily trapped in local optima. Experiments with complex, multi-dimensional functions demonstrate that ACEPSO outperforms the original CPSO in global search ability and convergence rate.
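A minimal sketch of the two ingredients described above, in Python: a chaotic map standing in for the uniform random numbers of the classical PSO update, and a per-particle inertia weight adjusted from fitness. The adaptive rule shown is a common fitness-based scheme assumed for illustration, not necessarily the exact ACEPSO formula.

```python
def logistic_map(x, r=4.0):
    # Chaotic substitute for the random numbers in the PSO velocity update.
    return r * x * (1.0 - x)

def adaptive_inertia(fitness, f_min, f_avg, w_min=0.4, w_max=0.9):
    # Assumed fitness-adaptive rule (minimization): particles better than the
    # swarm average get a smaller inertia weight (exploitation), the rest keep
    # the large weight (exploration).
    if fitness <= f_avg:
        return w_min + (w_max - w_min) * (fitness - f_min) / max(f_avg - f_min, 1e-12)
    return w_max
```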


2021 ◽  
Author(s):  
Xuemei Li ◽  
Shaojun Li

Abstract To solve engineering problems with evolutionary algorithms, many expensive objective function evaluations (FEs) are required. To alleviate this difficulty, surrogate-assisted evolutionary algorithms (SAEAs) have attracted increasing attention in both academia and industry. Existing SAEAs depend on the quantity and quality of the original samples, and it is difficult for them to yield satisfactory solutions within a limited number of FEs; moreover, these methods easily fall into local optima as the dimension increases. To address these problems, this paper proposes an adaptive surrogate-assisted particle swarm optimization (ASAPSO) algorithm. In the proposed algorithm, an adaptive surrogate selection method that compares the best existing solution with the latest obtained solution is proposed to ensure the effectiveness of the optimization operations and improve computational efficiency. Additionally, a model output criterion based on the standard deviation is proposed to improve the robustness and stability of the ensemble model. To verify the performance of the proposed algorithm, 10 benchmark functions with different modalities, from 10 to 50 dimensions, are tested, and the results are compared with those of five state-of-the-art SAEAs. The experimental results indicate that the proposed algorithm performs well on most benchmark functions within a limited number of FEs. Its performance in solving engineering problems is verified by applying it to the PX oxidation process.
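The two criteria highlighted in the abstract can be sketched roughly as below in Python. The function names, the threshold, and the exact decision rules are illustrative assumptions; the paper's actual surrogate models and formulas are not reproduced here.

```python
import statistics

def ensemble_output(surrogates, x, std_threshold=0.1):
    # Standard-deviation-based output criterion: if the surrogate predictions
    # disagree too much, signal that the expensive true objective should be
    # evaluated instead of trusting the ensemble mean.
    preds = [model(x) for model in surrogates]
    if statistics.pstdev(preds) > std_threshold:
        return None  # caller falls back to a real function evaluation
    return sum(preds) / len(preds)

def keep_current_surrogate(best_so_far, latest_true_value):
    # Adaptive surrogate selection test (minimization assumed): keep the current
    # surrogate only while the latest truly evaluated solution improves on the
    # best existing solution; otherwise switch models.
    return latest_true_value < best_so_far
```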


Author(s):  
Humberto Martins Mendonça Duarte ◽  
Rafael Lima de Carvalho

Particle swarm optimization (PSO) is a well-known metaheuristic whose performance on global optimization problems has been thoroughly explored. It has been established that, without proper manipulation of the inertia weight parameter, the search for the global optimum may fail. To handle this problem, we investigate the experimental performance of a PSO-based metaheuristic known as HPSO-SSM, which uses a logistic map sequence to control the inertia weight and enhance diversity in the search process, a spiral-shaped mechanism as a local search operator, and two dynamic correction factors in the position formula. We present an application of this variant to high-dimensional optimization problems and evaluate its effectiveness on 24 benchmark functions. A comparison of the two methods shows that the proposed variant can escape from local optima and converges faster on almost every evaluated function.
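A rough Python sketch of the two components named above: a logistic-map sequence driving the inertia weight, and a spiral-shaped local-search move toward the best-known position. The spiral form follows the familiar logarithmic-spiral operator (as popularized by the whale optimization algorithm); its constants, and the omission of the two dynamic correction factors, are simplifications rather than the exact HPSO-SSM formulas.

```python
import math
import random

def logistic_inertia(previous, r=4.0):
    # Logistic-map update used to drive the inertia weight; stays in (0, 1)
    # for seeds away from the map's fixed points.
    return r * previous * (1.0 - previous)

def spiral_move(position, best, b=1.0):
    # Spiral-shaped local search: approach the best position along a
    # logarithmic spiral; b controls the spiral shape, l the turn taken.
    l = random.uniform(-1.0, 1.0)
    return [bi + abs(xi - bi) * math.exp(b * l) * math.cos(2 * math.pi * l)
            for xi, bi in zip(position, best)]
```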


2017 ◽  
Vol 58 ◽  
pp. 115-127 ◽  
Author(s):  
Chyh-Ming Lai ◽  
Wei-Chang Yeh ◽  
Yen-Cheng Huang
