benchmark test functions
Recently Published Documents

TOTAL DOCUMENTS: 62 (last five years: 40)
H-INDEX: 7 (last five years: 3)

2022, pp. 1-10
Author(s): Zhi Wang, Shufang Song, Hongkui Wei

When solving multi-objective optimization problems, an important issue is how to promote convergence and distribution simultaneously. To address this issue, a novel optimization algorithm, named multi-objective modified teaching-learning-based optimization (MOMTLBO), is proposed. First, a grouping teaching strategy based on the Pareto dominance relationship is proposed to strengthen convergence efficiency. Then, a diversified learning strategy is presented to enhance the distribution. Meanwhile, differential operations are incorporated into the proposed algorithm. Together, these mechanisms strengthen the search ability of the algorithm. Additionally, a set of well-known benchmark test functions, including ten complex problems proposed for CEC2009, is used to verify the performance of the proposed algorithm. The results show that MOMTLBO exhibits competitive performance against the comparison algorithms. Finally, the proposed algorithm is applied to the aerodynamic optimization of airfoils.
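For reference, the grouping strategy above relies on the standard notion of Pareto dominance; the minimal sketch below shows that test for minimization problems. The function name and interface are illustrative and not taken from the paper.

```python
import numpy as np

def dominates(f_a, f_b):
    """Return True if objective vector f_a Pareto-dominates f_b (minimization).

    f_a dominates f_b when it is no worse in every objective and
    strictly better in at least one.
    """
    f_a, f_b = np.asarray(f_a), np.asarray(f_b)
    return bool(np.all(f_a <= f_b) and np.any(f_a < f_b))

# Example: (1.0, 2.0) dominates (1.5, 2.0), while neither of
# (1.0, 3.0) and (2.0, 1.0) dominates the other.
```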


2022, Vol. 20, pp. 736-744
Author(s): Olawale J. Adeleke, Idowu A. Osinuga, Raufu A. Raji

In this paper, a new conjugate gradient (CG) parameter is proposed through a convex combination of the Fletcher-Reeves (FR) and Polak-Ribière-Polyak (PRP) CG update parameters such that the Dai-Liao conjugacy condition is satisfied. The computational efficiency of the PRP method and the convergence profile of the FR method motivated the choice of these two CG methods. The corresponding CG algorithm satisfies the sufficient descent property and was shown to be globally convergent under the strong Wolfe line search procedure. Numerical tests on selected benchmark test functions show that the algorithm is efficient and very competitive in comparison with some existing classical methods.
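For orientation, the standard FR and PRP parameters and the generic convex-combination form are sketched below; the specific rule for fixing the combination weight so that the Dai-Liao condition holds is given in the paper itself and is not reproduced here.

```latex
\beta_k^{FR} = \frac{\|g_{k+1}\|^{2}}{\|g_k\|^{2}}, \qquad
\beta_k^{PRP} = \frac{g_{k+1}^{\top}\,(g_{k+1}-g_k)}{\|g_k\|^{2}}, \qquad
\beta_k = (1-\theta_k)\,\beta_k^{FR} + \theta_k\,\beta_k^{PRP}, \quad \theta_k \in [0,1],
```

with the weight chosen so that the search direction d_{k+1} = -g_{k+1} + beta_k d_k satisfies the Dai-Liao condition d_{k+1}^T y_k = -t g_{k+1}^T s_k, where y_k = g_{k+1} - g_k and s_k = x_{k+1} - x_k.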


2022
Author(s): Chnoor M. Rahman, Tarik A. Rashid, Abeer Alsadoon, Nebojsa Bacanin, Polla Fattah, ...

The dragonfly algorithm was developed in 2016. It is one of the algorithms researchers have used to optimize a wide range of applications in various areas, and at times it offers superior performance compared to the best-known optimization techniques. However, the algorithm faces several difficulties when applied to complex optimization problems. This work addresses the robustness of the method for solving real-world optimization problems and its shortcomings on complex optimization problems. This review paper presents a comprehensive investigation of the dragonfly algorithm in the engineering area. First, an overview of the algorithm is given. The modifications of the algorithm are then examined, covering the forms in which it has been merged with other techniques and the changes made to improve its performance. Additionally, a survey of engineering applications that have used the dragonfly algorithm is offered, spanning mechanical engineering problems, electrical engineering problems, optimal parameter selection, economic load dispatch, and loss reduction. The algorithm is tested and evaluated against the particle swarm optimization algorithm and the firefly algorithm. To evaluate the ability of the dragonfly algorithm and the other participating algorithms, a set of traditional benchmarks (TF1-TF23) was used; to examine its ability on large-scale optimization problems, the CEC-C2019 benchmarks were used. A comparison is made between the algorithm and other metaheuristic techniques to show its ability to improve various problems. The outcomes reported in previous works that used the dragonfly algorithm, together with the results on the benchmark test functions, show that, in comparison with the participating algorithms (GWO, PSO, and GA), the dragonfly algorithm delivers excellent performance, especially for small to intermediate-scale applications. Moreover, the limitations of the technique and some future work are presented. The authors conducted this research to help other researchers who want to study the algorithm and use it to optimize engineering problems.


2022, Vol. 13 (1), pp. 0-0

Clustering of data is one of the essential data mining techniques, in which similar objects are grouped into the same cluster. In recent years, many nature-inspired clustering techniques have been proposed and have produced encouraging results. This paper proposes a Modified Cuckoo Search (MoCS) algorithm. In this work, an attempt is made to balance the exploration of the Cuckoo Search (CS) algorithm and to increase its exploration potential in order to avoid premature convergence. The algorithm is tested on fifteen benchmark test functions and is shown to be more efficient than the CS algorithm. Further, the method is compared with well-known nature-inspired algorithms such as Ant Colony Optimization (ACO), Artificial Bee Colony (ABC), Particle Swarm Optimization (PSO), Particle Swarm Optimization with Age Group topology (PSOAG) and the CS algorithm for clustering of data using six real datasets. The experimental results indicate that the MoCS algorithm achieves better results than the other algorithms in finding optimal cluster centers.
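As an illustration of the kind of fitness such nature-inspired clustering methods typically minimize when searching for cluster centers (the abstract does not spell out the exact formulation), a common choice is the total distance from each point to its nearest candidate center. The sketch below is a generic assumption, not the paper's own objective.

```python
import numpy as np

def clustering_cost(centers, data):
    """Sum of Euclidean distances from each data point to its nearest
    cluster center -- a typical fitness when a metaheuristic such as
    cuckoo search encodes candidate centers in each solution.

    centers: (k, d) array of candidate cluster centers
    data:    (n, d) array of data points
    """
    # pairwise distances between every point and every center
    dists = np.linalg.norm(data[:, None, :] - centers[None, :, :], axis=2)
    return dists.min(axis=1).sum()

# Usage sketch: each nest encodes k*d values, reshaped into k centers
# and scored with clustering_cost; lower cost means tighter clusters.
```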


2021, pp. 1-27
Author(s): Xinliang Xu, Fu Yan

Autonomous groups of particles swarm optimization (AGPSO), inspired by individual diversity in biological swarms such as insects or birds, is a modified particle swarm optimization (PSO) variant. The AGPSO method is simple to understand and easy to implement on a computer, and it has achieved impressive performance on high-dimensional optimization tasks. However, AGPSO also struggles with premature convergence and low solution accuracy, and it easily falls into local optima. To overcome these drawbacks, random-walk autonomous group particle swarm optimization (RW-AGPSO) is proposed. In the RW-AGPSO algorithm, Lévy flights and dynamically changing weight strategies are introduced to balance exploration and exploitation. The search accuracy and optimization performance of the RW-AGPSO algorithm are verified on 23 well-known benchmark test functions. The experimental results reveal that, for almost all low- and high-dimensional unimodal and multimodal functions, the RW-AGPSO technique has superior optimization performance compared with three AGPSO variants, four PSO approaches and other recently proposed algorithms. In addition, the performance of RW-AGPSO has also been tested on the CEC'14 test suite and three real-world engineering problems. The results show that RW-AGPSO is effective for solving problems of high complexity.
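For context, Lévy-flight steps in such variants are commonly drawn with Mantegna's algorithm; the sketch below shows that standard construction and is not taken verbatim from the RW-AGPSO paper.

```python
import numpy as np
from math import gamma, sin, pi

def levy_step(dim, beta=1.5):
    """Draw one Lévy-flight step using Mantegna's algorithm.

    beta is the Lévy exponent (1 < beta <= 2); long jumps become
    rarer as beta grows, which mixes local moves with occasional
    large exploratory moves.
    """
    sigma_u = (gamma(1 + beta) * sin(pi * beta / 2)
               / (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = np.random.normal(0.0, sigma_u, size=dim)
    v = np.random.normal(0.0, 1.0, size=dim)
    return u / np.abs(v) ** (1 / beta)
```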


2021
Author(s): Saubhagya Ranjan Biswal, Gauri Shankar

The increasing trend in load demand has introduced many problems in distribution systems, such as higher line losses, low power factor and voltage fluctuations. These issues have become a vital challenge for power utilities to resolve in order to keep the system in a healthy condition. To handle these issues, optimal capacitor placement (OCP) in radial distribution systems using an optimization approach is explored in this work. The present work proposes a novel application of the quasi-opposition based sine cosine algorithm for solving the OCP problem. The effectiveness and superiority of the proposed algorithm are verified against other algorithms using different standard benchmark test functions. For solving the OCP problem, the most deserving candidate buses for capacitor placement are first identified using a newly proposed sensitivity index, which helps reduce the search space for the optimization process. Thereafter, by minimizing the losses and maximizing the net annual profit of the system, the optimal location and selection of the fixed-step capacitor banks are obtained. The efficacy of the proposed algorithm is verified by comparing its results with those of other state-of-the-art algorithms on the standard IEEE 85-bus and 118-bus radial distribution test systems, considering full-load and variable-load scenarios.
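As background on the quasi-opposition ingredient, quasi-opposition-based learning usually generates, for a candidate x in [a, b], a point sampled between the interval center and the opposite point a + b - x. The sketch below follows that common definition and is only an assumption about the paper's implementation.

```python
import numpy as np

def quasi_opposite(x, lower, upper):
    """Quasi-opposite point of candidate x within bounds [lower, upper].

    The opposite point is lower + upper - x; the quasi-opposite point is
    drawn uniformly between the interval center and that opposite point,
    which often accelerates convergence of population-based optimizers.
    """
    x = np.asarray(x, dtype=float)
    center = (lower + upper) / 2.0
    opposite = lower + upper - x
    lo = np.minimum(center, opposite)
    hi = np.maximum(center, opposite)
    return np.random.uniform(lo, hi)
```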


Author(s): Rizk M. Rizk-Allah, O. Saleh, Enas A. Hagag, Abd Allah A. Mousa

Nowadays, optimization problems have become difficult and complex, and traditional methods are inefficient at reaching globally optimal solutions. Meanwhile, a huge number of meta-heuristic algorithms have been suggested to overcome the shortcomings of traditional methods. The Tunicate Swarm Algorithm (TSA) is a new biologically inspired meta-heuristic optimization algorithm that mimics jet propulsion and swarm intelligence during the search for a food source. In this paper, we suggest an enhancement of TSA, named the Enhanced Tunicate Swarm Algorithm (ETSA), based on a novel searching strategy to improve the exploration and exploitation abilities. The proposed ETSA is applied to 20 unimodal, multimodal and fixed-dimensional benchmark test functions and compared with other algorithms. Statistical measures, error analysis and the Wilcoxon test affirm the robustness and effectiveness of the ETSA. Furthermore, the scalability of the ETSA is confirmed using high dimensions, and the results show that the ETSA is least affected by increasing the dimension. Additionally, the CPU times of the proposed algorithms are reported; the ETSA requires less CPU time than the others for most functions. Finally, the proposed algorithm is applied to an important electrical application, the economic dispatch problem, and the results affirm its applicability to practical optimization tasks.
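On the statistical comparison mentioned above: pairwise Wilcoxon signed-rank tests over per-function results are a common way to check whether one optimizer's improvement is significant. The minimal sketch below is generic (the numbers are illustrative placeholders, not results from the paper).

```python
from scipy.stats import wilcoxon

# Best objective values reached by two optimizers on the same set of
# benchmark functions (illustrative numbers only).
etsa_results = [1.2e-8, 3.4e-5, 0.0,    2.1e-3, 7.8e-6]
tsa_results  = [4.5e-6, 9.1e-4, 1.3e-7, 6.6e-2, 3.2e-4]

stat, p_value = wilcoxon(etsa_results, tsa_results)
print(f"Wilcoxon statistic = {stat:.3f}, p-value = {p_value:.4f}")
# A small p-value suggests the paired per-function differences are significant.
```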


Processes, 2021, Vol. 9 (8), pp. 1418
Author(s): Olympia Roeva, Dafina Zoteva, Velislava Lyubenova

In this paper, the artificial bee colony (ABC) algorithm is hybridized with the genetic algorithm (GA) for a model parameter identification problem. When dealing with real-world and large-scale problems, it becomes evident that relying on a single metaheuristic algorithm is somewhat restrictive. A skilled combination of metaheuristics or other optimization techniques, a so-called hybrid metaheuristic, can provide more efficient behavior and greater flexibility. Hybrid metaheuristics combine the advantages of one algorithm with the strengths of another. ABC, based on the foraging behavior of honey bees, and GA, based on the mechanics of natural selection, are among the most efficient biologically inspired population-based algorithms. The performance of the proposed ABC-GA hybrid algorithm is examined on classic benchmark test functions. To demonstrate the effectiveness of ABC-GA on a real-world problem, parameter identification of an Escherichia coli MC4110 fed-batch cultivation process model is considered. The computational results of the designed algorithm are compared with those of other hybridized biologically inspired techniques based on ant colony optimization (ACO) and the firefly algorithm (FA), namely the hybrid algorithms ACO-GA, GA-ACO and ACO-FA. The algorithms are applied to the same problems: the set of benchmark test functions and the real nonlinear optimization problem. Taking into account the overall search ability and computational efficiency, the results clearly show that the proposed ABC-GA algorithm outperforms the considered hybrid algorithms.
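For context on what the hybrid optimizer minimizes in a model parameter identification task, the usual fitness is a least-squares distance between simulated and measured process trajectories. The sketch below is generic and does not reproduce the E. coli MC4110 model or the paper's exact criterion.

```python
import numpy as np

def identification_cost(params, simulate, t_meas, y_meas):
    """Sum of squared errors between measured data and model output.

    params:   candidate model parameters proposed by the metaheuristic
    simulate: callable (params, t_meas) -> simulated outputs, same shape as y_meas
    t_meas:   measurement time points
    y_meas:   measured process data (e.g. biomass and substrate concentrations)
    """
    y_sim = np.asarray(simulate(params, t_meas))
    return float(np.sum((y_sim - np.asarray(y_meas)) ** 2))
```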


Algorithms, 2021, Vol. 14 (8), pp. 239
Author(s): Zhenyu Song, Xuemei Yan, Lvxing Zhao, Luyi Fan, Cheng Tang, ...

Brain-storm optimization (BSO), a population-based optimization algorithm, exhibits poor search performance, premature convergence, and a high probability of falling into local optima. To address these problems, we developed the adaptive mechanism-based BSO (ABSO) algorithm based on chaotic local search in this study. Adjusting the search space using a local search method with an adaptive self-scaling mechanism balances the global search and local exploitation performance of the ABSO algorithm, effectively preventing the algorithm from falling into local optima and improving its convergence accuracy. To verify the stability and effectiveness of the proposed ABSO algorithm, its performance was tested on 29 benchmark test functions, and the mean and standard deviation were compared with those of five other optimization algorithms. The results show that ABSO outperforms the other algorithms in terms of stability and convergence accuracy. In addition, the performance of ABSO was further verified through a nonparametric statistical test.
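To make the chaotic local search ingredient concrete: a typical implementation perturbs the current best solution with a logistic-map sequence inside a bounded radius and keeps any improving candidate. The sketch below follows that common pattern, not necessarily the exact ABSO formulation.

```python
import numpy as np

def chaotic_local_search(best, radius, bounds, n_steps=20):
    """Generate candidate solutions around `best` using a logistic-map
    chaotic sequence, with perturbations bounded by `radius`.

    Returns the list of chaotic candidates; the caller evaluates them
    and keeps any that improve on `best`. `radius` is typically shrunk
    adaptively as the search converges.
    """
    lower, upper = bounds
    best = np.asarray(best, dtype=float)
    # one logistic-map state per dimension, seeded away from fixed points
    z = np.random.uniform(0.01, 0.99, size=best.shape)
    candidates = []
    for _ in range(n_steps):
        z = 4.0 * z * (1.0 - z)                      # logistic map, control parameter 4
        candidate = best + radius * (2.0 * z - 1.0)  # chaos mapped to [-radius, radius]
        candidates.append(np.clip(candidate, lower, upper))
    return candidates
```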


Crystals, 2021, Vol. 11 (8), pp. 916
Author(s): Dili Shen, Wuyi Ming, Xinggui Ren, Zhuobin Xie, Yong Zhang, ...

The Lévy-flight random walk is one of the key parts of the cuckoo search (CS) algorithm used to update individuals. The standard CS algorithm adopts a constant scale factor for this random walk. This paper proposes an improved beta distribution cuckoo search (IBCS) for this factor in the CS algorithm. Locally, the proposed algorithm makes the scale factor of the step size in the Lévy flights follow a beta distribution during the evolutionary process. Globally, the scale factor follows an exponential decay trend over the course of the search. The proposed algorithm thus makes full use of the advantages of both improvement strategies. The test results show that the proposed strategy is better than the standard CS algorithm and variants improved by a single strategy, such as the improved CS (ICS) and the beta distribution CS (BCS). For the six benchmark test functions in 30 dimensions, the average rankings of the CS, ICS, BCS, and IBCS algorithms are 3.67, 2.67, 1.5, and 1.17, respectively. For the six benchmark test functions in 50 dimensions, the average rankings of the CS, ICS, BCS, and IBCS algorithms are 2.83, 2.5, 1.67, and 1.0, respectively. As confirmed by our case study, the performance of the IBCS algorithm was better than that of the standard CS, ICS or BCS algorithms in the EDM process. For example, under the single-objective optimization convergence of MRR, the number of iterations (13) needed by the CS algorithm for the input process parameters, such as discharge current, pulse-on time, pulse-off time, and servo voltage, was roughly twice that of the IBCS algorithm (6 iterations). Similarly, the number of iterations (17) needed by the BCS algorithm for these parameters was roughly twice that of the IBCS algorithm (8 iterations) under the single-objective optimization convergence of Ra. The proposed strategy therefore strengthens the CS algorithm's accuracy and convergence speed.
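As a rough illustration of the two ingredients described above (a beta-distributed step-scale factor combined with an overall exponential decay), one possible update of the scale factor is sketched below. The parameter values and the exact decay law are assumptions for illustration, not the paper's specification.

```python
import numpy as np

def ibcs_like_step_scale(iteration, max_iter, a=2.0, b=5.0, alpha0=1.0):
    """Step-size scale factor: a Beta(a, b) sample modulated by an
    exponential decay over the run (an assumed form for illustration)."""
    beta_sample = np.random.beta(a, b)           # per-step variability of the scale factor
    decay = np.exp(-3.0 * iteration / max_iter)  # global exponential decay trend
    return alpha0 * beta_sample * decay

# In a cuckoo-search update, the new position would then be something like
#   x_new = x_old + ibcs_like_step_scale(t, T) * levy_step(dim)
# instead of using a constant scale factor for the Lévy-flight step.
```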

