Empirical and analytical study of many-objective optimization problems: analysing distribution of nondominated solutions and population size for scalability of randomized heuristics

2014 ◽  
Vol 6 (2) ◽  
pp. 133-145 ◽  
Author(s):  
Ramprasad Joshi ◽  
Bharat Deshpande


Author(s):
Prachi Agrawal ◽  
Talari Ganesh ◽  
Ali Wagdy Mohamed

This article proposes a novel binary version of the recently developed gaining-sharing knowledge-based optimization algorithm (GSK) for solving binary optimization problems. The GSK algorithm is based on the concept of how humans acquire and share knowledge during their life span. The binary version, named the novel binary gaining-sharing knowledge-based optimization algorithm (NBGSK), relies mainly on two binary stages: the binary junior gaining-sharing stage and the binary senior gaining-sharing stage with knowledge factor 1. These two stages enable NBGSK to explore and exploit the search space efficiently and effectively when solving problems in binary space. Moreover, to enhance the performance of NBGSK and prevent its solutions from becoming trapped in local optima, NBGSK with population size reduction (PR-NBGSK) is introduced; it gradually decreases the population size according to a linear function. The proposed NBGSK and PR-NBGSK are applied to a set of knapsack instances of small and large dimensions, and the results show that both algorithms are more efficient and effective in terms of convergence, robustness, and accuracy.
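
The linear population-size reduction used by PR-NBGSK is not spelled out in the abstract; the sketch below is an assumption modeled on the common linear reduction scheme, with `initial_size`, `min_size`, and the function name chosen for illustration rather than taken from the paper.

```python
def linear_population_size(initial_size, min_size, used_evals, max_evals):
    """Linearly shrink the population from initial_size to min_size as the
    evaluation budget is consumed (a common reduction scheme; the exact
    formula used by PR-NBGSK may differ)."""
    frac = used_evals / max_evals
    size = round(initial_size - (initial_size - min_size) * frac)
    return max(min_size, size)

# Example: 100 individuals at the start, 20 at the end of 10,000 evaluations.
for evals in (0, 2500, 5000, 7500, 10000):
    print(evals, linear_population_size(100, 20, evals, 10000))
```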


2015 ◽  
Vol 2015 ◽  
pp. 1-10 ◽  
Author(s):  
V. Gonuguntla ◽  
R. Mallipeddi ◽  
Kalyana C. Veluvolu

Differential evolution (DE) is simple and effective in solving numerous real-world global optimization problems. However, its effectiveness critically depends on the appropriate setting of the population size and strategy parameters. Therefore, time-consuming preliminary parameter tuning is needed to obtain optimal performance. Recently, various strategy-parameter adaptation techniques have been proposed that automatically update the parameters to values suited to the characteristics of the optimization problem. However, most of these works do not control the adaptation of the population size. In addition, they adapt each strategy parameter individually and do not take into account the interactions between the parameters being adapted. In this paper, we introduce a DE algorithm in which both strategy parameters are self-adapted while accounting for parameter dependencies, by means of a multivariate probabilistic technique based on Gaussian Adaptation operating in the parameter space. In addition, the proposed DE algorithm starts by sampling a very large number of candidate solutions in the search space, and in each generation a constant number of individuals is adaptively selected from this large sample set to form the population that evolves. The proposed algorithm is evaluated on 14 benchmark problems from CEC 2005 at different dimensionalities.
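
As a rough illustration of sampling the two DE strategy parameters jointly, so that their dependency is captured, the sketch below draws correlated (F, CR) pairs from a multivariate Gaussian; it is an assumption, not the authors' Gaussian Adaptation procedure (which would also update the mean and covariance from successful samples), and the `mean` and `cov` values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Joint distribution over (F, CR); the covariance captures their interaction.
mean = np.array([0.5, 0.9])
cov = np.array([[0.02, 0.005],
                [0.005, 0.01]])

def sample_parameters():
    """Draw a correlated (F, CR) pair and clip to the usual DE ranges."""
    f, cr = rng.multivariate_normal(mean, cov)
    return float(np.clip(f, 0.1, 1.0)), float(np.clip(cr, 0.0, 1.0))

print(sample_parameters())
```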


2018 ◽  
Vol 9 (4) ◽  
pp. 1-20
Author(s):  
Breno A. M. Menezes ◽  
Fabian Wrede ◽  
Herbert Kuchen ◽  
Fernando B. Lima Neto

Swarm intelligence (SI) algorithms are handy tools for solving complex optimization problems. When problems grow in size and complexity, an increase in the population size or the number of iterations may be required to reach a good solution. These adjustments also affect the execution time. This article investigates the trade-off among population size, number of iterations, and problem complexity, aiming to improve the efficiency of SI algorithms. Results based on a parallel implementation of Fish School Search show that increasing the population size is beneficial for finding good solutions. However, we observed an asymptotic behavior, i.e., increasing the population beyond a certain threshold yields only slight improvements. Furthermore, the execution time was analyzed.
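
The population/iteration trade-off under a fixed evaluation budget can be made explicit with a small helper; this is a generic sketch, not the article's parallel Fish School Search implementation, and the budget figure is hypothetical.

```python
def iterations_for_budget(budget_evals, population_size):
    """With a fixed budget of fitness evaluations, a larger population
    leaves fewer iterations: iterations ~ budget / population."""
    return budget_evals // population_size

budget = 100_000  # hypothetical evaluation budget
for pop in (50, 100, 200, 400, 800):
    print(f"population={pop:4d}  iterations={iterations_for_budget(budget, pop)}")
```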


Information ◽  
2019 ◽  
Vol 10 (12) ◽  
pp. 390 ◽  
Author(s):  
Ahmad Hassanat ◽  
Khalid Almohammadi ◽  
Esra’a Alkafaween ◽  
Eman Abunawas ◽  
Awni Hammouri ◽  
...  

The genetic algorithm (GA) is an artificial intelligence search method that mimics the process of evolution and natural selection and falls under the umbrella of evolutionary computing. It is an efficient tool for solving optimization problems. The interplay among GA parameters, which include the mutation rate, the crossover rate, and the population size, is vital for a successful GA search. However, each GA operator has a distinct influence, and its impact depends on its probability; it is difficult to predefine suitable ratios for each parameter, particularly for the mutation and crossover operators. This paper reviews various methods for choosing mutation and crossover ratios in GAs. We then define new deterministic control approaches for the crossover and mutation rates, namely dynamic decreasing of high mutation ratio/dynamic increasing of low crossover ratio (DHM/ILC) and dynamic increasing of low mutation/dynamic decreasing of high crossover (ILM/DHC). The dynamic nature of the proposed methods allows the ratios of both operators to change linearly during the search: DHM/ILC starts with a 100% mutation ratio and a 0% crossover ratio, which then decrease and increase, respectively, so that by the end of the search the ratios are 0% for mutation and 100% for crossover; ILM/DHC works the same way but in reverse. The proposed approaches were compared with two predefined parameter-tuning methods, namely fifty-fifty crossover/mutation ratios and the most common approach of static ratios, such as a 0.03 mutation rate and a 0.9 crossover rate. The experiments were conducted on ten Traveling Salesman Problem (TSP) instances. They showed the effectiveness of the proposed DHM/ILC when dealing with small population sizes, while ILM/DHC was found to be more effective with large population sizes. In fact, both proposed dynamic methods outperformed the predefined methods in most of the cases tested.
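
The two dynamic schedules are described as linear ramps between 0% and 100%; a minimal sketch of that idea follows, with function names chosen for illustration and the exact interpolation used in the paper possibly differing.

```python
def dhm_ilc_rates(generation, max_generations):
    """DHM/ILC: the mutation ratio falls linearly from 1.0 to 0.0 while the
    crossover ratio rises from 0.0 to 1.0 over the run."""
    progress = generation / max_generations
    mutation_rate = 1.0 - progress
    crossover_rate = progress
    return mutation_rate, crossover_rate

def ilm_dhc_rates(generation, max_generations):
    """ILM/DHC: the mirror image, mutation rises while crossover falls."""
    progress = generation / max_generations
    return progress, 1.0 - progress

for g in (0, 250, 500, 750, 1000):
    print(g, dhm_ilc_rates(g, 1000), ilm_dhc_rates(g, 1000))
```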


2020 ◽  
Vol 28 (1) ◽  
pp. 55-85
Author(s):  
Bo Song ◽  
Victor O.K. Li

Infinite population models are important tools for studying population dynamics of evolutionary algorithms. They describe how the distributions of populations change between consecutive generations. In general, infinite population models are derived from Markov chains by exploiting symmetries between individuals in the population and analyzing the limit as the population size goes to infinity. In this article, we study the theoretical foundations of infinite population models of evolutionary algorithms on continuous optimization problems. First, we show that the convergence proofs in a widely cited study were in fact problematic and incomplete. We further show that the modeling assumption of exchangeability of individuals cannot yield the transition equation. Then, in order to analyze infinite population models, we build an analytical framework based on convergence in distribution of random elements which take values in the metric space of infinite sequences. The framework is concise and mathematically rigorous. It also provides an infrastructure for studying the convergence of the stacking of operators and of iterating the algorithm, which previous studies failed to address. Finally, we use the framework to prove the convergence of infinite population models for the mutation operator and the [Formula: see text]-ary recombination operator. We show that these operators can provide accurate predictions for real population dynamics as the population size goes to infinity, provided that the initial population is independently and identically distributed.
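
As an informal illustration of the claim that operator-level predictions become exact as the population grows (given an i.i.d. initial population), the toy simulation below compares the empirical variance of a Gaussian-mutated population with the infinite-population prediction (the convolution of the initial and mutation distributions). It is a sketch of the general idea only, not the article's formal framework, and all distributions and parameters are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)
mutation_sigma = 0.3

def mutate(population):
    """Independent Gaussian mutation applied to each individual."""
    return population + rng.normal(0.0, mutation_sigma, size=population.shape)

# Infinite-population prediction: an initial N(0, 1) population convolved with
# N(0, sigma^2) mutation noise has variance 1 + sigma^2.
for n in (10, 1_000, 100_000):
    pop = rng.normal(0.0, 1.0, size=n)      # i.i.d. initial population
    next_gen = mutate(pop)
    print(n, round(next_gen.var(), 3), "predicted:", round(1 + mutation_sigma**2, 3))
```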


2011 ◽  
Vol 19 (3) ◽  
pp. 345-371 ◽  
Author(s):  
Daniel Karapetyan ◽  
Gregory Gutin

Memetic algorithms are known to be a powerful technique for solving hard optimization problems. To design a memetic algorithm, one needs to make a host of decisions, and selecting the population size is one of the most important among them. Most of the algorithms in the literature fix the population size to a certain constant value. This reduces the algorithm's quality, since the optimal population size varies for different instances, local search procedures, and runtimes. In this paper we propose an adjustable population size. It is calculated as a function of the runtime of the whole algorithm and the average runtime of the local search for the given instance. Note that in many applications the runtime of a heuristic should be limited; therefore, we use this bound as a parameter of the algorithm. The average runtime of the local search procedure is measured during the algorithm's run. Some coefficients, which are independent of the instance and the local search, are to be tuned at design time; we provide a procedure to find these coefficients. The proposed approach was used to develop a memetic algorithm for the multidimensional assignment problem (MAP). We show that our adjustable population size makes the algorithm flexible enough to perform efficiently over a wide range of running times and local searches, without requiring any additional tuning of the algorithm.
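
A hedged sketch of the idea of setting the population size from the total time budget and the measured average local-search time follows; the coefficient `c`, the functional form, and the names are hypothetical stand-ins for the tuned, instance-independent coefficients the authors describe.

```python
def adjusted_population_size(time_budget_s, avg_local_search_s, c=0.5, min_size=4):
    """Choose the population size as a function of the overall time budget and
    the measured average local-search runtime, so that roughly
    c * (budget / avg_ls) local searches fit into the allotted time.
    The actual functional form and coefficients in the paper may differ."""
    if avg_local_search_s <= 0:
        return min_size
    return max(min_size, int(c * time_budget_s / avg_local_search_s))

# Example: a 60 s budget with local search averaging 0.2 s per individual.
print(adjusted_population_size(60.0, 0.2))
```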


2012 ◽  
Vol 538-541 ◽  
pp. 3074-3078
Author(s):  
Yi Liu ◽  
Cai Hong Mu ◽  
Wei Dong Kou ◽  
Jing Liu

This paper presents a variant of particle swarm optimization (PSO) that we call adaptive particle swarm optimization with dynamic population (DP-APSO), which adopts a novel dynamic population (DP) strategy whereby the population size of the swarm can vary during the evolutionary process. The DP strategy allows the population size to increase when the swarm converges and to decrease when the swarm disperses. Experiments were conducted on two well-studied constrained engineering design optimization problems. The results demonstrate the better performance of DP-APSO in solving these problems compared with two other evolutionary computation algorithms.
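
A rough sketch of the dynamic-population idea: measure swarm dispersion (here, the average distance to the swarm centroid) and grow the population when the swarm has converged, shrink it when the swarm is dispersed. The thresholds and step sizes below are hypothetical, not the DP-APSO settings.

```python
import numpy as np

def adapt_population_size(positions, current_size,
                          converge_threshold=0.05, disperse_threshold=0.5,
                          step=5, min_size=10, max_size=100):
    """Increase the population when the swarm has converged (low dispersion)
    and decrease it when the swarm is dispersed (high dispersion)."""
    centroid = positions.mean(axis=0)
    dispersion = np.linalg.norm(positions - centroid, axis=1).mean()
    if dispersion < converge_threshold:
        return min(max_size, current_size + step)
    if dispersion > disperse_threshold:
        return max(min_size, current_size - step)
    return current_size

swarm = np.random.default_rng(2).normal(0.0, 0.01, size=(30, 5))
print(adapt_population_size(swarm, 30))
```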


Author(s):  
MIŁOSZ KADZIŃSKI ◽  
ROMAN SŁOWIŃSKI

We introduce a new interactive procedure for multiple objective optimization problems. The identification of the most preferred solution is achieved by means of a systematic dialogue with the decision maker (DM), during which (s)he specifies pairwise comparisons of nondominated solutions from a current sample. We represent this preference information by a compatible form of the achievement scalarizing function, i.e., we search for weights of the objectives which ensure that the reference solutions are compared by the function in the same way as by the DM. The directions of the isoquants of all compatible achievement scalarizing functions create a cone in the evaluation space, with its origin in a reference point. In successive iterations, each new pairwise comparison of solutions contracts the cone, which zooms in on a subregion of nondominated points of greatest interest to the DM. The procedure ends when at least one satisfactory solution is selected or when the DM comes to the conclusion that there is no such solution for the current problem setting.
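
To make the compatibility idea concrete, the sketch below evaluates a Chebyshev-type achievement scalarizing function for a given weight vector and reference point and checks whether it reproduces a DM's pairwise preference. The weight vector and data are hypothetical, and the procedure's cone-contraction step is not shown.

```python
import numpy as np

def achievement_scalarizing(solution, reference, weights, rho=1e-4):
    """Chebyshev-type achievement scalarizing function (minimization):
    the maximum weighted deviation from the reference point plus a small
    augmentation term."""
    diff = weights * (solution - reference)
    return diff.max() + rho * diff.sum()

def compatible_with_preference(preferred, other, reference, weights):
    """Weights are 'compatible' if the function ranks the preferred solution better."""
    return (achievement_scalarizing(preferred, reference, weights)
            < achievement_scalarizing(other, reference, weights))

reference = np.array([0.0, 0.0])
weights = np.array([0.7, 0.3])          # hypothetical candidate weights
a, b = np.array([1.0, 2.0]), np.array([2.0, 1.0])
print(compatible_with_preference(a, b, reference, weights))
```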


Complexity ◽  
2021 ◽  
Vol 2021 ◽  
pp. 1-17
Author(s):  
Wen Zhong ◽  
Jian Xiong ◽  
Anping Lin ◽  
Lining Xing ◽  
Feilong Chen ◽  
...  

Multiobjective evolutionary algorithms (MOEAs) have flourished in solving many-objective optimization problems (MaOPs) over the past three decades. Unfortunately, no single MOEA, equipped with given parameter settings, a mating-variation operator, and an environmental selection mechanism, is suitable for obtaining a set of solutions with excellent convergence and diversity for all types of MaOPs. In reality, different MOEAs show great differences in handling certain types of MaOPs. Motivated by these characteristics, this paper proposes a flexible ensemble framework, ASES, which is highly scalable and can embed any number of MOEAs to leverage their respective advantages. To alleviate the undesirable phenomenon of promising solutions being discarded during the evolution process, a big archive, whose number of contained solutions is far larger than the population size, is integrated into the framework to record large numbers of nondominated solutions, and an efficient maintenance strategy is developed to update the archive. Furthermore, the knowledge gained from updating the archive is exploited to guide the evolutionary process of the different MOEAs, allocating the limited computational resources to the more efficient algorithms. Extensive numerical experiments demonstrate the superior performance of the proposed ASES: among 52 test instances, ASES performs better than all six baseline algorithms on at least half of the instances with respect to both the hypervolume and inverted generational distance metrics.
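
As a small illustration of maintaining a nondominated archive (the core data structure the ensemble relies on), the following sketch keeps only mutually nondominated solutions for a minimization problem; it is a generic filter, not the paper's maintenance strategy, which would also bound the archive size.

```python
import numpy as np

def dominates(a, b):
    """a dominates b (minimization): no worse in every objective, better in at least one."""
    return np.all(a <= b) and np.any(a < b)

def update_archive(archive, candidates):
    """Merge candidates into the archive, keeping only nondominated points.
    Real archive maintenance (e.g., in ASES) would also limit the archive size."""
    merged = list(archive) + list(candidates)
    kept = []
    for i, p in enumerate(merged):
        if not any(dominates(q, p) for j, q in enumerate(merged) if j != i):
            kept.append(p)
    return kept

archive = [np.array([1.0, 4.0]), np.array([3.0, 2.0])]
new_points = [np.array([2.0, 2.5]), np.array([0.5, 5.0]), np.array([3.0, 3.0])]
print(update_archive(archive, new_points))
```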

