Multimodal Optimization Using a Bi-Objective Evolutionary Algorithm

2012 ◽  
Vol 20 (1) ◽  
pp. 27-62 ◽  
Author(s):  
Kalyanmoy Deb ◽  
Amit Saha

In a multimodal optimization task, the main purpose is to find multiple optimal solutions (global and local), so that the user gains better knowledge of the different optima in the search space and, when needed, can switch the current solution to another suitable optimum. To this end, evolutionary optimization algorithms (EAs) stand as viable methodologies, mainly due to their ability to find and capture multiple solutions within a population in a single simulation run. Since the preselection method suggested in 1970, there has been a steady stream of new algorithms. Most of these methodologies employ a niching scheme within an existing single-objective evolutionary algorithm framework, so that similar solutions in a population are de-emphasized in order to focus on and maintain multiple distant yet near-optimal solutions. In this paper, we use a completely different strategy in which the single-objective multimodal optimization problem is converted into a suitable bi-objective optimization problem, so that all optimal solutions become members of the resulting weak Pareto-optimal set. With modified definitions of domination and different formulations of an artificially created additional objective function, we present successful results on problems with as many as 500 optima. Most past multimodal EA studies considered problems having only a few variables. In this paper, we have solved test problems with up to 16 variables and as many as 48 optimal solutions, and for the first time suggest multimodal constrained test problems which are scalable in terms of the number of optima, constraints, and variables. The concept of using bi-objective optimization for solving single-objective multimodal optimization problems seems novel and interesting, and more importantly opens up further avenues for research and application.
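
A minimal sketch of the general idea described above (not the authors' exact formulation): keep the original objective as one criterion and add a second, artificially constructed criterion, here a numerical gradient norm, so that every local and global optimum becomes weakly non-dominated.

```python
import numpy as np

def numerical_gradient_norm(f, x, h=1e-6):
    """Central-difference estimate of ||grad f(x)||; it vanishes at any interior optimum."""
    g = np.array([(f(x + h * e) - f(x - h * e)) / (2 * h) for e in np.eye(len(x))])
    return np.linalg.norm(g)

def bi_objective(f, x):
    """Two objectives for the original single-objective function f:
    minimize f itself and minimize the gradient norm, so every local or
    global minimum of f lies on the weak Pareto front of (f1, f2)."""
    return f(x), numerical_gradient_norm(f, x)

# Usage on a 1-D multimodal function: both optima map to (f, ~0) and are
# therefore weakly non-dominated members of the bi-objective front.
f = lambda x: -np.sin(5 * np.pi * x[0]) ** 2
print(bi_objective(f, np.array([0.1])))   # approx (-1.0, 0.0)
print(bi_objective(f, np.array([0.3])))   # approx (-1.0, 0.0)
```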

2021 ◽  
Vol 12 (4) ◽  
pp. 81-100
Author(s):  
Yao Peng ◽  
Zepeng Shen ◽  
Shiqi Wang

A multimodal optimization problem has multiple global and many local optimal solutions. The difficulty in solving such problems lies in finding as many local optimal peaks as possible while still ensuring the precision of the global optima. This article presents adaptive grouping brainstorm optimization (AGBSO) for solving these problems. An adaptive grouping strategy is proposed to group particles without requiring any prior knowledge from the user. To enhance the diversity and accuracy of the algorithm, an elite reservation strategy is proposed to place central particles into an elite pool, and a peak detection strategy is proposed to delete particles in the elite pool that are far from optimal peaks. Finally, this article uses test functions of different dimensions to compare the convergence, accuracy, and diversity of AGBSO with those of BSO. Experiments verify that AGBSO has strong localization ability for local optimal solutions while ensuring the accuracy of the global optimal solutions.
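
As a rough illustration of the elite-reservation and peak-detection steps described above (hypothetical helper names, not the authors' code): each group's best "central" particle is stored in the pool, and pool members judged to be far from any peak are pruned.

```python
import numpy as np

def update_elite_pool(groups, fitness, elite_pool, is_near_peak):
    """Elite reservation: append each group's central (best) particle to the pool.
    Peak detection: keep only pool members that a user-supplied predicate judges
    to be close to an optimal peak. `groups` is a list of lists of position
    vectors; `fitness` maps a position to a scalar to be maximized."""
    for particles in groups:
        center = max(particles, key=fitness)          # best particle of this group
        elite_pool.append(np.asarray(center))
    return [e for e in elite_pool if is_near_peak(e)]  # prune far-from-peak elites
```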


2011 ◽  
Vol 421 ◽  
pp. 559-563
Author(s):  
Yong Chao Gao ◽  
Li Mei Liu ◽  
Heng Qian ◽  
Ding Wang

The scale and complexity of the search space are important factors in determining how difficult an optimization problem is to solve, and information about the solution space can guide the search toward optimal solutions. Based on this, an algorithm for combinatorial optimization is proposed. The algorithm makes use of the good solutions found by intelligent algorithms: it contracts the search space and partitions it into one or several optimal regions using the backbones of the combinatorial optimization solutions, and then optimizes the resulting small-scale problems within these optimal regions. No statistical analysis is necessary before or during the solving process; instead, solution information is used to estimate the landscape of the search space, which improves both the solving speed and the solution quality. The algorithm opens a new path for solving combinatorial optimization problems, and the experimental results also testify to its efficiency.
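
A minimal sketch of the backbone idea for a binary-encoded combinatorial problem (illustrative only, under the assumption of a 0/1 encoding): variables that take the same value in every good solution form the backbone and are frozen, which contracts the remaining search space.

```python
def extract_backbone(good_solutions):
    """Given several good binary solutions (equal-length lists of 0/1),
    return {index: value} for every position on which they all agree.
    Fixing these positions defines the contracted 'optimal region'."""
    n = len(good_solutions[0])
    backbone = {}
    for i in range(n):
        values = {sol[i] for sol in good_solutions}
        if len(values) == 1:              # all good solutions agree here
            backbone[i] = values.pop()
    return backbone

# Example: three good solutions agreeing on positions 0, 2 and 4.
print(extract_backbone([[1, 0, 1, 1, 0],
                        [1, 1, 1, 0, 0],
                        [1, 0, 1, 1, 0]]))   # -> {0: 1, 2: 1, 4: 0}
```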


2005 ◽  
Vol 13 (4) ◽  
pp. 501-525 ◽  
Author(s):  
Kalyanmoy Deb ◽  
Manikanth Mohan ◽  
Shikhar Mishra

Since the suggestion of a computing procedure for finding multiple Pareto-optimal solutions in multi-objective optimization problems in the early nineties, researchers have been on the lookout for a procedure which is computationally fast and simultaneously capable of finding a well-converged and well-distributed set of solutions. Most multi-objective evolutionary algorithms (MOEAs) developed in the past decade are either good at achieving a well-distributed set of solutions at the expense of a large computational effort, or computationally fast at the expense of a not-so-good distribution of solutions. For example, although the Strength Pareto Evolutionary Algorithm, or SPEA (Zitzler and Thiele, 1999), produces a much better distribution than the elitist non-dominated sorting GA, or NSGA-II (Deb et al., 2002a), the computational time needed to run SPEA is much greater. In this paper, we evaluate a recently proposed steady-state MOEA (Deb et al., 2003) which was developed based on the ε-dominance concept introduced earlier (Laumanns et al., 2002) and which uses efficient parent and archive update strategies to achieve a well-distributed and well-converged set of solutions quickly. Based on an extensive comparative study with four other state-of-the-art MOEAs on a number of two-, three-, and four-objective test problems, it is observed that the steady-state MOEA is a good compromise in terms of convergence near the Pareto-optimal front, diversity of solutions, and computational time. Moreover, the ε-MOEA is a step closer towards making MOEAs pragmatic, particularly in allowing a decision-maker to control the achievable accuracy of the obtained Pareto-optimal solutions.
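
The ε-dominance concept at the heart of the ε-MOEA can be sketched as follows (a simplified minimization version; the box-index construction follows the spirit of Laumanns et al., 2002, not the exact archive-update rules of the paper):

```python
import numpy as np

def box_index(f, eps):
    """Hyper-box identification array of an objective vector f (minimization);
    eps may be a scalar or a per-objective array of box widths."""
    return np.floor(np.asarray(f, dtype=float) / eps)

def epsilon_dominates(f1, f2, eps):
    """f1 ε-dominates f2 if f1's box is no worse in every objective and
    strictly better in at least one."""
    b1, b2 = box_index(f1, eps), box_index(f2, eps)
    return bool(np.all(b1 <= b2) and np.any(b1 < b2))

# An archive that keeps at most one member per box has bounded size and
# guarantees a minimum spacing of roughly eps between archived solutions,
# which is what lets the decision-maker control the achievable accuracy.
```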


2017 ◽  
Vol 25 (3) ◽  
pp. 439-471 ◽  
Author(s):  
Ali Ahrari ◽  
Kalyanmoy Deb ◽  
Mike Preuss

During recent decades, many niching methods have been proposed and empirically verified on the available test problems. They often rely on particular assumptions about the distribution, shape, and size of the basins, which can seldom be made in practical optimization problems. This study utilizes several existing concepts and techniques, such as taboo points, the normalized Mahalanobis distance, and Ursem's hill-valley function, to develop a new tool for multimodal optimization that does not make any of these assumptions. In the proposed method, several subpopulations explore the search space in parallel. Offspring of a subpopulation are forced to maintain a sufficient distance from the centers of fitter subpopulations and from previously identified basins, which are marked as taboo points. The taboo points repel the subpopulation to prevent convergence to the same basin. A strategy to update the repelling power of the taboo points is proposed to address the challenge of basins of dissimilar size. The local shape of a basin is also approximated by the distribution of the subpopulation members converging to that basin. The proposed niching strategy is incorporated into the covariance matrix self-adaptation evolution strategy (CMSA-ES), a potent global optimization method. The resulting method, called covariance matrix self-adaptation with repelling subpopulations (RS-CMSA), is assessed and compared with several state-of-the-art niching methods on a standard test suite for multimodal optimization. An organized procedure for parameter setting is followed, which assumes that a rough estimate of the desired/expected number of minima is available. Performance sensitivity to the accuracy of this estimate is also studied by introducing the concept of the robust mean peak ratio. Based on the numerical results using the available and the newly introduced performance measures, RS-CMSA emerges as the most successful method when robustness and efficiency are considered at the same time.
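
A minimal sketch of the repelling test (hypothetical names and a fixed radius; the actual RS-CMSA also adapts the repelling power of each taboo point and approximates the basin shape from the converging members): an offspring is rejected when it lies within a normalized Mahalanobis distance of any taboo point.

```python
import numpy as np

def normalized_mahalanobis(x, center, cov, sigma):
    """Mahalanobis distance of x from `center` under covariance `cov`,
    normalized by the subpopulation's step size `sigma`."""
    diff = np.asarray(x) - np.asarray(center)
    return np.sqrt(diff @ np.linalg.solve(cov, diff)) / sigma

def is_taboo(x, taboo_points, cov, sigma, radius=1.0):
    """Offspring falling inside the repelling radius of any taboo point
    (centers of fitter subpopulations or previously identified basins)
    are rejected and resampled."""
    return any(normalized_mahalanobis(x, t, cov, sigma) < radius
               for t in taboo_points)
```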


2014 ◽  
Vol 2014 ◽  
pp. 1-20 ◽  
Author(s):  
Erik Cuevas ◽  
Adolfo Reyna-Orta

Interest in multimodal optimization is expanding rapidly, since many practical engineering problems demand the localization of multiple optima within a search space. The cuckoo search (CS) algorithm, on the other hand, is a simple and effective global optimization algorithm which cannot be applied directly to multimodal optimization problems. This paper proposes a new multimodal optimization algorithm called multimodal cuckoo search (MCS). Under MCS, the original CS is enhanced with multimodal capabilities by means of (1) the incorporation of a memory mechanism that efficiently registers potential local optima according to their fitness value and their distance to other potential solutions, (2) the modification of the original CS individual selection strategy to accelerate the detection of new local minima, and (3) the inclusion of a depuration procedure that cyclically eliminates duplicated memory elements. The performance of the proposed approach is compared with that of several state-of-the-art multimodal optimization algorithms on a benchmark suite of fourteen multimodal problems. Experimental results indicate that the proposed strategy provides better and more consistent performance than existing well-known multimodal algorithms for the majority of test problems, while avoiding any serious computational deterioration.
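
The memory mechanism of point (1) can be sketched as follows (an illustrative simplification, not the exact MCS rules): a candidate is registered as a potential local optimum only if it is sufficiently far from every stored solution, and otherwise it replaces the nearby stored element only when it is fitter.

```python
import numpy as np

def register_in_memory(memory, candidate, min_dist):
    """memory: list of (position, fitness) pairs; candidate: (position, fitness).
    If the candidate is within `min_dist` of a stored element, keep whichever of
    the two is fitter (maximization); otherwise store it as a new potential optimum."""
    pos, fit = np.asarray(candidate[0]), candidate[1]
    for i, (m_pos, m_fit) in enumerate(memory):
        if np.linalg.norm(pos - m_pos) < min_dist:
            if fit > m_fit:                  # candidate is better: replace stored optimum
                memory[i] = (pos, fit)
            return memory
    memory.append((pos, fit))                # far from everything stored: new entry
    return memory
```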


2018 ◽  
Vol 10 (2) ◽  
pp. 77 ◽  
Author(s):  
Abdoulaye Compaoré ◽  
Kounhinir Somé ◽  
Joseph Poda ◽  
Blaise Somé

In this paper, we propose a novel approach for solving fully fuzzy L-R triangular multiobjective linear optimization programs using the MOMA-plus method (Kounhinir, 2017). The approach consists of two main steps: converting the fully fuzzy L-R triangular multiobjective linear optimization problem into a deterministic multiobjective linear optimization problem, and then applying the adapted MOMA-plus method. The initial version of the MOMA-plus method was designed for deterministic multiobjective optimization (Kounhinir, 2017) and has already been tested on single-objective fuzzy programs (Abdoulaye, 2017). Our new method finds all Pareto-optimal solutions of the deterministic problem obtained by converting the fully fuzzy L-R triangular multiobjective linear optimization problem. To highlight the efficiency of our approach, a didactic numerical example is treated and the obtained solutions are compared with those of the Total Objective Segregation Method proposed by Jayalakshmi and Pandian (Jayalakshmi, 2014).
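
For reference, an L-R triangular fuzzy number and a simple weighted-average ranking that yields a crisp value are recalled below; this is a generic conversion device, not necessarily the exact transformation used in the paper.

\[
\tilde a=(a,\alpha,\beta)_{LR},\qquad
\mu_{\tilde a}(x)=
\begin{cases}
1-\dfrac{a-x}{\alpha}, & a-\alpha \le x \le a,\\[4pt]
1-\dfrac{x-a}{\beta}, & a \le x \le a+\beta,\\[4pt]
0, & \text{otherwise},
\end{cases}
\qquad
R(\tilde a)=\frac{(a-\alpha)+2a+(a+\beta)}{4}.
\]

Applying a ranking of this kind to every fuzzy coefficient turns a fully fuzzy program into an ordinary deterministic multiobjective linear program, which is the type of problem the adapted MOMA-plus method then solves.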


2018 ◽  
Vol 22 ◽  
pp. 01009 ◽  
Author(s):  
Fırat Evirgen ◽  
Mehmet Yavuz

In this study, a fractional mathematical model with a steepest descent direction is proposed to find optimal solutions for a class of nonlinear programming problems. To this end, the Caputo-Fabrizio derivative is adapted to the mathematical model. To demonstrate the solution trajectory of the model, we use the multistage variational iteration method (MVIM). Numerical simulations and comparisons on some test problems show that the mathematical model based on the Caputo-Fabrizio fractional derivative is both feasible and efficient for finding optimal solutions to a certain class of equality-constrained optimization problems.
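
For reference, the Caputo-Fabrizio fractional derivative of order \(\alpha \in (0,1)\) on which the model is built is defined as

\[
{}^{CF}D_t^{\alpha} f(t) \;=\; \frac{M(\alpha)}{1-\alpha}\int_{a}^{t} f'(s)\,\exp\!\left(-\frac{\alpha\,(t-s)}{1-\alpha}\right)\,ds,
\]

where \(M(\alpha)\) is a normalization function with \(M(0)=M(1)=1\); unlike the classical Caputo derivative, its kernel is non-singular. In a fractional steepest-descent model, the integer-order trajectory \(\dot{x}(t)=-\nabla F(x(t))\) is replaced by a system of this fractional order (a sketch of the general construction; the treatment of the equality constraints is as described in the paper).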


2004 ◽  
Vol 12 (1) ◽  
pp. 77-98 ◽  
Author(s):  
Sanyou Y. Zeng ◽  
Lishan S. Kang ◽  
Lixin X. Ding

In this paper, an orthogonal multi-objective evolutionary algorithm (OMOEA) is proposed for multi-objective optimization problems (MOPs) with constraints. First, the constraints are taken into account when determining Pareto dominance. As a result, a strict partial-ordered relation is obtained, and feasibility does not need to be considered later in the selection process. Then, the orthogonal design and the statistical optimal method are generalized to MOPs, and a new type of multi-objective evolutionary algorithm (MOEA) is constructed. In this framework, an original niche evolves first and splits into a group of sub-niches, and every sub-niche then repeats the process. Owing to the uniformity of the search, the optimality of the statistics, and the exponential increase in the splitting frequency of the niches, OMOEA uses a deterministic search without blindness or stochasticity. It can quickly yield a large set of solutions which converges to the Pareto-optimal set with high precision and uniform distribution. We take six test problems designed by Deb, Zitzler et al., and an engineering problem (W) with constraints provided by Ray et al. to test the new technique. The numerical experiments show that our algorithm is superior to other MOGAs and MOEAs, such as FFGA, NSGA-II, and SPEA2, in terms of the precision, quantity, and distribution of solutions. Notably, for the engineering problem W, it finds the Pareto-optimal set, which was previously unknown.
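
The first step, folding the constraints into Pareto dominance so that feasibility need not be handled separately, can be sketched with the generic constrained-domination test below (used here only to illustrate the resulting strict partial order; OMOEA's orthogonal-design machinery is not shown):

```python
def constrained_dominates(a, b):
    """a, b: dicts with 'f' (list of objective values, minimized) and
    'cv' (total constraint violation, 0 means feasible).
    Feasible beats infeasible; among infeasible solutions the smaller
    violation wins; among feasible solutions ordinary Pareto dominance decides."""
    if a['cv'] == 0 and b['cv'] > 0:
        return True
    if a['cv'] > 0 and b['cv'] == 0:
        return False
    if a['cv'] > 0 and b['cv'] > 0:
        return a['cv'] < b['cv']
    no_worse = all(x <= y for x, y in zip(a['f'], b['f']))
    better = any(x < y for x, y in zip(a['f'], b['f']))
    return no_worse and better
```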


2020 ◽  
Vol 2020 ◽  
pp. 1-13
Author(s):  
Xu-Tao Zhang ◽  
Biao Xu ◽  
Wei Zhang ◽  
Jun Zhang ◽  
Xin-fang Ji

Many real-world black-box optimization problems can be classified as multimodal optimization problems. Neighborhood information plays an important role in improving the performance of an evolutionary algorithm on such problems. In view of this, we propose a particle swarm optimization algorithm based on dynamic neighborhoods to solve multimodal optimization problems. In this paper, a dynamic ε-neighborhood selection mechanism is first defined to balance the exploration and exploitation of the algorithm. Then, based on the information provided by the neighborhoods, four different particle position updating strategies are designed to further support the algorithm's exploration and exploitation of the search space. Finally, the proposed algorithm is compared with 7 state-of-the-art multimodal algorithms on 8 benchmark instances. The experimental results reveal that the proposed algorithm is superior to the compared ones and is an effective method for tackling multimodal optimization problems.
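
A minimal sketch of an ε-neighborhood (illustrative only; the paper makes ε dynamic over the run and couples the neighborhoods with four position-update strategies): a particle's neighborhood consists of all particles within distance ε of it, and its local best is taken from that set instead of the whole swarm.

```python
import numpy as np

def epsilon_neighborhood(positions, i, eps):
    """Indices of all particles (other than i) within Euclidean distance eps of
    particle i. Shrinking eps over the run shifts the swarm from exploration
    (large, overlapping neighborhoods) to exploitation (small, local ones)."""
    d = np.linalg.norm(positions - positions[i], axis=1)
    return [j for j in range(len(positions)) if j != i and d[j] <= eps]
```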

