The Relationship between Metaheuristics Stopping Criteria and Performances

2014, Vol 5 (3), pp. 44-70
Author(s): Mohamed-Mahmoud Ould Sidi, Bénédicte Quilot-Turion, Abdeslam Kadrani, Michel Génard, Françoise Lescourret

A major difficulty in the use of metaheuristics (i.e., evolutionary and particle swarm algorithms) to deal with multi-objective optimization problems is the choice of a convenient point at which to stop computation. Indeed, it is difficult to find the best compromise between the stopping criterion and the algorithm's performance. This paper addresses this issue using the Non-dominated Sorting Genetic Algorithm (NSGA-II) and the Multi-Objective Particle Swarm Optimization with Crowding Distance (MOPSO-CD) for the model-based design of sustainable peach fruits. The optimization problem of interest contains three objectives: maximize fruit fresh mass, maximize fruit sugar content, and minimize the crack density on the fruit skin. This last objective targets a reduction in the use of fungicides and can thus enhance preservation of the environment and human health. Two versions of the NSGA-II and two of the MOPSO-CD were applied to tackle this difficult optimization problem: the original versions, with a maximum number of generations used as the stopping criterion, and modified versions using the stopping criterion proposed by Roudenko and Schoenauer (2004). This second stopping criterion is based on the stabilization of the maximal crowding distance and aims to stop computation when many generations have been performed without further improvement in the quality of the solutions. A benchmark consisting of four plant management scenarios was used to compare the performances of the original versions (OV) and the modified versions (MV) of the NSGA-II and the MOPSO-CD. Twelve independent simulations were performed for each version and each scenario, and a large number of generations was allowed for the OV (e.g., 1500 for the NSGA-II and 200 for the MOPSO-CD). This paper compares the performances of the two versions of both algorithms using four standard metrics and statistical tests. For both algorithms, the MV obtained solutions similar in quality to those of the OV but after significantly fewer generations. The resulting reduction in computational time for the optimization step will provide opportunities for further studies on the sustainability of peach ideotypes.
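For readers wanting a concrete picture of the stabilization-based criterion, the following Python fragment is a minimal sketch of how such a test might be coded; the function name, the sliding `window`, and the tolerance `tol` are illustrative assumptions, not the implementation used in the paper.

```python
# Hypothetical sketch of a crowding-distance-stabilization stopping test,
# in the spirit of Roudenko & Schoenauer (2004). Names and thresholds are
# illustrative, not the authors' implementation.
import numpy as np

def should_stop(max_cd_history, window=50, tol=1e-3):
    """Stop when the maximal crowding distance of the first non-dominated
    front has varied by no more than `tol` over the last `window` generations."""
    if len(max_cd_history) < window:
        return False
    recent = np.asarray(max_cd_history[-window:])
    recent = recent[np.isfinite(recent)]   # ignore infinite (boundary) distances
    if recent.size == 0:
        return False
    return recent.max() - recent.min() <= tol

# Usage inside a generational loop (illustrative):
#   history.append(max(crowding_distance(front_0)))
#   if should_stop(history): break
```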

2018, Vol 9 (4), pp. 71-96
Author(s): Swapnil Prakash Kapse, Shankar Krishnapillai

This article demonstrates the implementation of a novel local search approach based on a Utopia-point-guided search, thus improving the exploration ability of multi-objective Particle Swarm Optimization. This strategy selects the best particles based on the criterion of seeking solutions closer to the Utopia point, thus improving convergence to the Pareto-optimal front. The selected elite non-dominated particles are stored in an archive that is updated at every iteration based on a least-crowding-distance criterion. The leader is chosen among the candidates in the archive using the same guided search. In simulation results on many benchmark tests, the new algorithm gives better convergence and diversity compared with several existing algorithms such as NSGA-II, CMOPSO, SMPSO, PSNS, DE+MOPSO and AMALGAM. Finally, the proposed algorithm is used to solve multi-objective mechanical design optimization problems from the literature, where it shows the same advantages.
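As an illustration of the Utopia-point-guided selection idea (not the authors' code), the sketch below picks, from a MOPSO archive, the member closest to the Utopia point in normalized objective space; the archive layout and the name `select_leader` are assumptions.

```python
# Illustrative Utopia-point-guided leader selection for a MOPSO archive.
import numpy as np

def select_leader(archive_objs):
    """Return the index of the archive member closest (in normalized
    objective space) to the Utopia point formed by per-objective minima."""
    F = np.asarray(archive_objs, dtype=float)            # shape (n_archive, n_obj)
    f_min, f_max = F.min(axis=0), F.max(axis=0)
    span = np.where(f_max > f_min, f_max - f_min, 1.0)   # avoid division by zero
    F_norm = (F - f_min) / span                           # Utopia point maps to the origin
    dist = np.linalg.norm(F_norm, axis=1)
    return int(np.argmin(dist))
```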


2005, Vol 13 (4), pp. 501-525
Author(s): Kalyanmoy Deb, Manikanth Mohan, Shikhar Mishra

Since the suggestion of a computing procedure for multiple Pareto-optimal solutions in multi-objective optimization problems in the early nineties, researchers have been on the lookout for a procedure which is computationally fast and simultaneously capable of finding a well-converged and well-distributed set of solutions. Most multi-objective evolutionary algorithms (MOEAs) developed in the past decade are either good at achieving a well-distributed set of solutions at the expense of a large computational effort or computationally fast at the expense of achieving a not-so-good distribution of solutions. For example, although the Strength Pareto Evolutionary Algorithm or SPEA (Zitzler and Thiele, 1999) produces a much better distribution than the elitist non-dominated sorting GA or NSGA-II (Deb et al., 2002a), the computational time needed to run SPEA is much greater. In this paper, we evaluate a recently proposed steady-state MOEA (Deb et al., 2003) which was developed based on the ε-dominance concept introduced earlier (Laumanns et al., 2002) and uses efficient parent and archive update strategies for achieving a well-distributed and well-converged set of solutions quickly. Based on an extensive comparative study with four other state-of-the-art MOEAs on a number of two-, three-, and four-objective test problems, it is observed that the steady-state MOEA is a good compromise in terms of convergence near the Pareto-optimal front, diversity of solutions, and computational time. Moreover, the ε-MOEA is a step closer towards making MOEAs pragmatic, particularly in allowing a decision-maker to control the achievable accuracy of the obtained Pareto-optimal solutions.
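The additive ε-dominance relation at the heart of ε-MOEA can be written down compactly; the following Python snippet is a minimal sketch for a minimization problem, with `eps` the per-objective accuracy vector chosen by the decision-maker (names are illustrative).

```python
# Minimal sketch of additive epsilon-dominance (Laumanns et al., 2002),
# for minimization problems.
import numpy as np

def eps_dominates(a, b, eps):
    """True if solution `a` epsilon-dominates solution `b` (all objectives minimized)."""
    a, b, eps = map(np.asarray, (a, b, eps))
    return np.all(a - eps <= b) and np.any(a - eps < b)

# e.g. eps_dominates([1.0, 2.0], [1.05, 2.1], eps=[0.1, 0.1]) -> True
```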


Author(s): Tingting Xia, Mian Li

Multi-objective optimization problems (MOOPs) with uncertainties are common in engineering design. To find robust Pareto fronts, multi-objective robust optimization (MORO) methods with inner–outer optimization structures usually have high computational complexity, which is a critical issue. Generally, in design problems, robust Pareto solutions lie closer to the nominal Pareto points than randomly initialized points do, so the search for robust solutions can be more efficient if it starts from the nominal Pareto points. We propose a new method that sequentially approaches the robust Pareto front (SARPF) from the nominal Pareto points, in which MOOPs with uncertainties are solved in two stages. The deterministic optimization problem and a robustness-metric optimization are solved in the first stage, where the nominal Pareto solutions and the robust-most solutions are identified, respectively. In the second stage, a new single-objective robust optimization problem is formulated to find the robust Pareto solutions, starting from the nominal Pareto points, in the region between the nominal Pareto front and the robust-most points. The proposed SARPF method can save a significant amount of computational time since the optimization process can be performed in parallel at each stage. Vertex estimation is also applied to approximate the worst-case values of the uncertain parameters, which reduces computational effort further. Global solvers, NSGA-II for the multi-objective cases and a genetic algorithm (GA) for the single-objective cases, are used in the corresponding optimization processes. Three examples, with comparisons against results from a previous method, are presented to demonstrate the applicability and efficiency of the proposed method.
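To make the vertex-estimation idea concrete, the sketch below approximates the worst case of an objective over a hyper-box of uncertain parameters by evaluating only the box vertices; the function names are illustrative, and the approach assumes the objective is monotone enough in the parameters for a vertex to capture the worst case.

```python
# Illustrative vertex estimation of the worst-case objective value under
# interval (box) uncertainty; not the paper's implementation.
import itertools
import numpy as np

def worst_case_by_vertices(f, x, p_lo, p_hi):
    """Approximate max_p f(x, p) over the hyper-box [p_lo, p_hi] by
    evaluating only its 2^d vertices."""
    p_lo, p_hi = np.asarray(p_lo, float), np.asarray(p_hi, float)
    worst = -np.inf
    for corner in itertools.product(*zip(p_lo, p_hi)):   # all box vertices
        worst = max(worst, f(x, np.asarray(corner)))
    return worst
```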


Author(s): Todd Letcher, M.-H. Herman Shen

A multi-objective robust optimization framework that incorporates a robustness index for each objective has been developed in a bi-level approach. The top level of the framework consists of the standard optimization problem formulation with the addition of a robustness constraint. The bottom level uses the Worst Case Sensitivity Region (WCSR) concept previously developed to solve single-objective robust optimization problems. In this framework, a separate robustness index for each objective allows the designer to choose the importance of each objective. The method is demonstrated on a commonly studied two-bar truss structural optimization problem. The results demonstrate the effectiveness and usefulness of the multiple-robustness-index capability added to this framework. A multi-objective genetic algorithm, NSGA-II, is used at both levels of the framework.
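As a rough illustration (a simplification rather than the authors' WCSR formulation), the sketch below expresses per-objective robustness indices as constraints: each objective's worst-case deviation over a small perturbation box around the design must stay below its index. All names, the box-vertex approximation, and the constraint form are assumptions.

```python
# Illustrative per-objective robustness constraints for a bi-level robust
# optimization setup; a simplified stand-in, not the WCSR procedure itself.
import itertools
import numpy as np

def robustness_constraints(x, objectives, delta, eta):
    """Constraints g_i <= 0: each objective's worst-case deviation over a
    +/- delta perturbation box around x must not exceed its index eta_i."""
    x = np.asarray(x, float)
    g = []
    for f, eta_i in zip(objectives, eta):
        nominal = f(x)
        worst = max(abs(f(x + np.asarray(c)) - nominal)
                    for c in itertools.product(*[(-d, d) for d in delta]))
        g.append(worst - eta_i)
    return np.array(g)   # the design is considered robust if all entries are <= 0
```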


2021, Vol 11 (2), pp. 835
Author(s): Chunyu Liang, Xin Xu, Heping Chen, Wensheng Wang, Kunkun Zheng, ...

Asphalt mixture proportion design is one of the most important steps in asphalt pavement design and application. This study proposes a novel multi-objective particle swarm optimization (MOPSO) algorithm employing a Gaussian process regression (GPR)-based machine learning (ML) method for multi-variable, multi-level optimization problems with multiple constraints. First, the GPR-based ML method is proposed to model the objective and constraint functions without explicit relationships between variables and objectives. In the optimization step, a metaheuristic algorithm based on adaptive-weight multi-objective particle swarm optimization (AWMOPSO) is used to find the global optimal solution, which is efficient for objectives and constraints that lack explicit mathematical expressions. The results showed that the optimal GPR model could describe the relationship between variables and objectives well in terms of root-mean-square error (RMSE) and R². After optimization by the proposed GPR-AWMOPSO algorithm, the comprehensive pavement performances were enhanced in terms of permanent deformation resistance at high temperature, crack resistance at low temperature, and moisture stability. The proposed GPR-AWMOPSO algorithm is therefore an effective and efficient option for maximizing the performance of the composite modified asphalt mixture. It offers advantages over traditional laboratory-based experimental methods, including lower computational time, fewer samples, and higher accuracy, and it can serve as guidance for the proportion optimization design of asphalt pavement.
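The surrogate-plus-swarm pattern described here can be illustrated with a few lines of Python using scikit-learn's `GaussianProcessRegressor`; the toy data, kernel choice, and variable names below are placeholders, not the study's mixture dataset or model.

```python
# Sketch of the surrogate idea: fit a Gaussian process to measured data and
# let a particle swarm query the surrogate instead of running lab tests.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(0)
X = rng.uniform(size=(30, 3))                                     # mixture design variables (placeholder)
y = X @ np.array([1.0, -0.5, 2.0]) + 0.05 * rng.normal(size=30)   # one performance index (placeholder)

gpr = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(), normalize_y=True)
gpr.fit(X, y)

def objective(x):
    # Surrogate prediction used as the performance objective inside the swarm loop.
    return gpr.predict(np.atleast_2d(x))[0]
```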


2020, Vol 9 (4), pp. 236
Author(s): Xiaolan Li, Bingbo Gao, Zhongke Bai, Yuchun Pan, Yunbing Gao

Complex geographical spatial sampling usually encounters various multi-objective optimization problems, for which effective multi-objective optimization algorithms are much needed to help advance the field. To improve the computational efficiency of the multi-objective optimization process, the archived multi-objective simulated annealing (AMOSA)-II method is proposed as an improved, parallelized multi-objective optimization method for complex geographical spatial sampling. Building on the AMOSA method, multiple Markov chains are used to extend the traditional single chain, and multi-core parallelization is employed across these chains. A tabu-archive constraint is designed to avoid repeated searches for optimal solutions. Two cases were investigated: one with six typical traditional test problems, and the other a soil spatial sampling optimization application. Six performance indices were analyzed for the two cases: computational time, convergence, purity, spacing, min-spacing, and displacement. The results revealed that AMOSA-II performed better than AMOSA and NSGA-II, obtaining preferable optimal solutions more effectively. AMOSA-II can thus be regarded as a feasible method for other complex geographical spatial sampling optimization problems.
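The multi-chain parallelization idea can be sketched as follows: independent annealing chains run in separate processes and their archives are merged through a non-dominated filter. The toy two-objective problem, the acceptance rule, and all names below are illustrative stand-ins, not the AMOSA-II implementation.

```python
# Illustrative multi-Markov-chain parallelization of a simulated-annealing
# multi-objective search, with a toy one-variable, two-objective problem.
import numpy as np
from multiprocessing import Pool

def toy_objectives(x):
    return np.array([x[0] ** 2, (x[0] - 2) ** 2])        # Schaffer-like toy problem

def run_chain(seed, iters=300, temp=1.0, cooling=0.99):
    rng = np.random.default_rng(seed)
    x = rng.uniform(-2.0, 4.0, size=1)
    archive = [(x.copy(), toy_objectives(x))]
    for _ in range(iters):
        cand = x + rng.normal(scale=0.2, size=1)
        f_x, f_c = toy_objectives(x), toy_objectives(cand)
        # accept improving moves, otherwise accept with an annealing probability
        if np.all(f_c <= f_x) or rng.random() < np.exp(-np.sum(f_c - f_x) / temp):
            x = cand
            archive.append((x.copy(), f_c))
        temp *= cooling
    return archive

def nondominated(archive):
    return [(xi, fi) for xi, fi in archive
            if not any(np.all(fj <= fi) and np.any(fj < fi) for _, fj in archive)]

if __name__ == "__main__":
    with Pool(processes=4) as pool:                        # one process per Markov chain
        chains = pool.map(run_chain, range(4))
    merged = nondominated([s for c in chains for s in c])  # merged global archive
    print(len(merged), "non-dominated solutions")
```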


2011, Vol 474-476, pp. 1808-1812
Author(s): Bo Fu, Yi Jing, Xuan Fu, Tobias Hemsel

The multi-objective optimal design of a piezoelectric sandwich ultrasonic transducer is studied. The maximum vibration amplitude and the minimum electrical input power are considered as optimization objectives. The design variables involve continuous variables (dimensions of the transducer) and discrete variables (material types). Based on analytical models, the optimal design is formulated as a constrained multi-objective optimization problem. The problem is then solved using the elitist non-dominated sorting genetic algorithm (NSGA-II), and Pareto-optimal designs are obtained. The optimized results are analyzed and a preferred design is proposed. The optimization procedure presented in this contribution can be applied to multi-objective optimization problems of other piezoelectric transducers.
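One practical detail in such problems is encoding continuous dimensions and discrete material choices in a single genome for an NSGA-II-style optimizer. The sketch below shows one common real-coded approach; the bounds, variable names, and material table are placeholders rather than the transducer model of this work.

```python
# Illustrative real-coded encoding of mixed design variables (continuous
# dimensions plus a discrete material index) for an NSGA-II-style optimizer.
import numpy as np

MATERIALS = ["PZT-4", "PZT-8", "PZT-5A"]          # discrete choices (placeholder)
LOWER = np.array([5.0, 10.0, 0.0])                # [horn length mm, piezo diameter mm, material gene]
UPPER = np.array([50.0, 60.0, len(MATERIALS) - 1e-9])

def decode(genome):
    """Map a real-coded genome of length 3 to physical design variables."""
    length_mm, diameter_mm, mat_gene = np.clip(genome, LOWER, UPPER)
    material = MATERIALS[int(mat_gene)]           # floor of the material gene selects the type
    return length_mm, diameter_mm, material
```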


Author(s): Alwatben Batoul Rashed, Hazlina Hamdan, Nurfadhlina Mohd Sharef, Md Nasir Sulaiman, Razali Yaakob, ...

Clustering, an unsupervised method of grouping data, is used in various fields to divide and restructure data so that they become more meaningful and can be transformed into more useful information. Generally, clustering is a difficult and complex task: the appropriate number of clusters is usually unknown, the number of potential solutions is large, and the datasets are unlabeled. These problems can be handled by the Multi-Objective Particle Swarm Optimization (MOPSO) approach, which is commonly used for optimization problems. However, the MOPSO algorithm produces a set of non-dominated solutions, which makes the selection of an "appropriate" Pareto-optimal solution more difficult. According to the literature, crowding distance is one of the most efficient density-based measures developed to address the selection mechanism for archive updates. To address this problem, a clustering-based method that utilizes the crowding distance (CD) technique to balance the optimality of the objectives in the Pareto-optimal solution search is proposed. The approach is based on the dominance concept and the crowding distance mechanism to guarantee survival of the best solutions. Furthermore, the Pareto dominance concept is applied after calculating the crowding degree of each solution. The proposed method was evaluated against six clustering approaches that have been successful in this domain: K-means, MCPSO, IMCPSO, spectral clustering, BIRCH, and average-link. The evaluation results show that the proposed approach outperformed these state-of-the-art methods, with significant differences on most of the tested datasets.
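For reference, the crowding-distance measure referred to here can be computed as in NSGA-II; the short sketch below is a generic implementation for one non-dominated front, not the authors' code.

```python
# Generic crowding-distance computation for one non-dominated front.
import numpy as np

def crowding_distance(F):
    """F: (n, m) array of objective values; boundary solutions get infinite
    distance so they always survive archive truncation."""
    n, m = F.shape
    d = np.zeros(n)
    for j in range(m):
        order = np.argsort(F[:, j])
        d[order[0]] = d[order[-1]] = np.inf
        span = F[order[-1], j] - F[order[0], j]
        if span == 0:
            continue
        # each interior point accumulates the normalized gap between its neighbors
        d[order[1:-1]] += (F[order[2:], j] - F[order[:-2], j]) / span
    return d
```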


2021, Vol 26 (2), pp. 36
Author(s): Alejandro Estrada-Padilla, Daniela Lopez-Garcia, Claudia Gómez-Santillán, Héctor Joaquín Fraire-Huacuja, Laura Cruz-Reyes, ...

A common issue in the Multi-Objective Portfolio Optimization Problem (MOPOP) is the presence of uncertainty that affects individual decisions, e.g., variations in the resources or benefits of projects. Fuzzy numbers are successful in dealing with imprecise numerical quantities, and they have found numerous applications in optimization. However, so far, they have not been used to tackle uncertainty in MOPOP. Hence, this work proposes to tackle MOPOP's uncertainty with a new optimization model based on fuzzy trapezoidal parameters. Additionally, it proposes three novel steady-state algorithms as the model's solution process. One approach integrates the Fuzzy Adaptive Multi-objective Evolutionary (FAME) methodology; the other two apply the Non-dominated Sorting Genetic Algorithm II (NSGA-II) methodology. One steady-state algorithm uses the Spatial Spread Deviation as a density estimator to improve the distribution of the Pareto fronts. The final contribution of this research work is the development of a new defuzzification mapping that allows measuring the algorithms' performance using widely known metrics. The results show a significant difference in performance favoring the proposed steady-state algorithm based on the FAME methodology.
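As background for the fuzzy trapezoidal parameters and the defuzzification step, the sketch below implements a trapezoidal fuzzy number with a standard centroid defuzzification; it is a generic textbook mapping shown for illustration, not necessarily the mapping proposed in this work.

```python
# Generic trapezoidal fuzzy number with centroid defuzzification.
from dataclasses import dataclass

@dataclass
class TrapezoidalFuzzyNumber:
    a: float  # left foot
    b: float  # left shoulder
    c: float  # right shoulder
    d: float  # right foot

    def defuzzify(self) -> float:
        """Centroid of the trapezoidal membership function."""
        a, b, c, d = self.a, self.b, self.c, self.d
        denom = (d + c) - (a + b)
        if denom == 0:               # degenerate (crisp) number
            return (a + d) / 2
        num = (d ** 2 + c ** 2 + c * d) - (a ** 2 + b ** 2 + a * b)
        return num / (3 * denom)

# e.g. TrapezoidalFuzzyNumber(1, 2, 4, 5).defuzzify() -> 3.0
```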


Symmetry, 2021, Vol 13 (1), pp. 136
Author(s): Wenxiao Li, Yushui Geng, Jing Zhao, Kang Zhang, Jianxin Liu

This paper explores the combination of a classic mathematical function, the hyperbolic tangent, with a metaheuristic algorithm, and proposes a novel hybrid genetic algorithm called NSGA-II-BnF for multi-objective decision making. Recently, many metaheuristic evolutionary algorithms have been proposed for tackling multi-objective optimization problems (MOPs). These algorithms demonstrate excellent capabilities and offer usable solutions to decision makers. However, their convergence performance may be challenged by MOPs with elaborate Pareto fronts, such as the CF, WFG, and UF problems, primarily due to the neglect of diversity. We address this problem by proposing an algorithm with an elite exploitation strategy, which contains two parts: first, we design a biased elite allocation strategy, which allocates computational resources appropriately to the elites of the population through crowding-distance-based roulette. Second, we propose a self-guided fast individual exploitation approach, which guides elites to generate neighbors through a symmetry exploitation operator based on the hyperbolic tangent function. Furthermore, we design a mechanism to improve the algorithm's applicability, which allows decision makers to adjust the exploitation intensity according to their preferences. We compare the proposed NSGA-II-BnF with four other improved versions of NSGA-II (NSGA-IIconflict, rNSGA-II, RPDNSGA-II, and NSGA-II-SDR) and four competitive and widely used algorithms (MOEA/D-DE, dMOPSO, SPEA-II, and SMPSO) on 36 test problems (DTLZ1–DTLZ7, WFG1–WFG9, UF1–UF10, and CF1–CF10), with performance measured using two widely used indicators: inverted generational distance (IGD) and hypervolume (HV). Experimental results demonstrate that NSGA-II-BnF exhibits superior performance to most of the compared algorithms on all test problems.
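To give a flavor of these two components (not a reproduction of NSGA-II-BnF's operators), the sketch below pairs a crowding-distance-based roulette for picking an elite with a hyperbolic-tangent-shaped perturbation for generating its neighbor; all names, the 0.1 step scale, and the handling of infinite crowding distances are assumptions.

```python
# Illustrative tanh-shaped exploitation step and crowding-distance roulette.
import numpy as np

def crowding_roulette(crowding, rng=None):
    """Select an elite index with probability proportional to its crowding distance."""
    rng = rng or np.random.default_rng()
    w = np.asarray(crowding, float)
    w[~np.isfinite(w)] = w[np.isfinite(w)].max(initial=1.0) * 2  # cap infinite distances
    return rng.choice(len(w), p=w / w.sum())

def tanh_neighbor(elite, lower, upper, intensity=1.0, rng=None):
    """Generate a neighbor of an elite; larger `intensity` means bolder steps."""
    rng = rng or np.random.default_rng()
    elite = np.asarray(elite, float)
    span = np.asarray(upper, float) - np.asarray(lower, float)
    # symmetric perturbation shaped by tanh, scaled to a fraction of the variable range
    step = np.tanh(intensity * rng.standard_normal(elite.shape)) * 0.1 * span
    return np.clip(elite + step, lower, upper)
```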

