Truss Sizing Optimization with a Diversity-Enhanced Cyclic Neighborhood Network Topology Particle Swarm Optimizer

Mathematics ◽  
2020 ◽  
Vol 8 (7) ◽  
pp. 1087
Author(s):  
Tae-Hyoung Kim ◽  
Jung-In Byun

This study presents a reliable particle swarm optimizer for the sizing optimization of truss structures. This population-based stochastic optimization approach is built on the principle that each particle communicates its position and function value to a number of successively numbered neighboring particles via a fixed cyclic interaction structure. Such a neighborhood structure changes the movement pattern of the entire swarm and keeps each particle's movement from being driven by a single global best position, which enhances diversification. Further, by transforming the objective function, the search can be steered towards feasible regions of the design space. The efficiency of the proposed approach is demonstrated by solving four classical sizing optimization problems of truss structures.
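A minimal sketch of the cyclic (ring) neighborhood idea described above, in Python with NumPy: each particle is guided by the best personal-best position found within a fixed window of successively numbered neighbors rather than by a single global best. The neighborhood size k, the coefficient values, and the function names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def ring_lbest(pbest, pbest_fit, i, k=2):
    """Best personal-best position within particle i's cyclic neighborhood of size 2k+1."""
    n = len(pbest_fit)
    idx = [(i + j) % n for j in range(-k, k + 1)]       # wrap-around (ring) indices
    return pbest[min(idx, key=lambda j: pbest_fit[j])]   # minimization

def ring_pso_step(f, pos, vel, pbest, pbest_fit, w=0.7, c1=1.5, c2=1.5, k=2):
    """One PSO iteration in which each particle follows its ring-neighborhood best."""
    n, d = pos.shape
    for i in range(n):
        lbest = ring_lbest(pbest, pbest_fit, i, k)
        r1, r2 = np.random.rand(d), np.random.rand(d)
        vel[i] = w * vel[i] + c1 * r1 * (pbest[i] - pos[i]) + c2 * r2 * (lbest - pos[i])
        pos[i] = pos[i] + vel[i]
        fi = f(pos[i])
        if fi < pbest_fit[i]:                             # update personal best
            pbest[i], pbest_fit[i] = pos[i].copy(), fi
    return pos, vel, pbest, pbest_fit
```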

2017 ◽  
Vol 23 (8) ◽  
pp. 985-1001 ◽  
Author(s):  
Ali MORTAZAVI ◽  
Vedat TOĞAN ◽  
Ayhan NUHOĞLU

This study investigates the performance of the integrated particle swarm optimizer (iPSO) algorithm in the layout and sizing optimization of truss structures. The iPSO enhances the standard PSO algorithm by employing both the concept of a weighted particle and an improved fly-back method to handle optimization constraints. The performance of the algorithm is tested on a series of well-known truss structure weight minimization problems with mixed design search spaces (i.e. both discrete and continuous variables) under various types of constraints (i.e. nodal displacements, element stresses and buckling criteria). The results demonstrate the validity of the proposed approach in dealing with combined layout and size optimization problems.
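The fly-back constraint handling mentioned above can be sketched as follows (a generic illustration in Python, not the authors' exact rule): a particle whose trial position violates any constraint returns to its previous feasible position, here with an assumed damping of its velocity.

```python
def flyback_update(pos, vel, trial_pos, is_feasible, damping=0.5):
    """Accept the trial position only if it satisfies all constraints; otherwise the
    particle 'flies back' to its previous feasible position. The velocity damping
    factor is an illustrative assumption."""
    if is_feasible(trial_pos):
        return trial_pos, vel
    return pos, damping * vel

# Usage sketch: is_feasible(x) would check nodal displacements, element stresses
# and buckling criteria for the candidate truss design x.
```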


2021 ◽  
Author(s):  
Yulong Sun ◽  
Hongjuan Li ◽  
Mohammad Shabaz ◽  
Amit Sharma

Optimization methodologies are utilized in various structural design practices to solve size, shape and topology optimization problems. A heuristic particle swarm optimization (HPSO) algorithm is proposed in this article to address the size optimization problem of trusses with stress and displacement constraints. The proposed HPSO approach improves the rationality of truss structure design while reducing engineering cost. First, the basic principle of the original PSO algorithm is presented; the compression factor is then introduced to improve the PSO algorithm, and reasonable parameter settings are given. To validate the performance of the proposed optimization approach, several experimental illustrations were performed. The results show that the convergence histories of experimental illustrations 2 and 3 are favorable: illustration 2 converges after about 150 iterations, while illustration 3 approaches the optimal solution after about 500 iterations. The PSO algorithm can therefore successfully optimize the size design of truss structures, and it is also time efficient. The improved PSO algorithm has good convergence and stability, and can effectively optimize the size design of truss structures.
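The compression (constriction) factor mentioned above is commonly computed with the Clerc-Kennedy formula; a brief Python sketch follows, with the typical acceleration coefficients c1 = c2 = 2.05 as an assumed setting rather than the article's exact values.

```python
import numpy as np

def constriction_factor(c1=2.05, c2=2.05):
    """Clerc-Kennedy compression factor; requires phi = c1 + c2 > 4 (~0.7298 here)."""
    phi = c1 + c2
    return 2.0 / abs(2.0 - phi - np.sqrt(phi**2 - 4.0 * phi))

def constricted_velocity(v, x, pbest, gbest, c1=2.05, c2=2.05):
    """Velocity update scaled by the compression factor instead of an inertia weight."""
    chi = constriction_factor(c1, c2)
    r1, r2 = np.random.rand(*x.shape), np.random.rand(*x.shape)
    return chi * (v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x))
```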


2015 ◽  
Vol 24 (05) ◽  
pp. 1550017 ◽  
Author(s):  
Aderemi Oluyinka Adewumi ◽  
Akugbe Martins Arasomwan

This paper presents an improved particle swarm optimization (PSO) technique for global optimization. Many variants of the technique have been proposed in the literature. However, many of these variants share two characteristics, namely static search-space and velocity limits, which restrict their flexibility in obtaining optimal solutions for many optimization problems. Furthermore, the problem of premature convergence persists in many variants despite the introduction of additional parameters, such as the inertia weight, and the extra computation they entail. This paper proposes an improved PSO algorithm without an inertia weight. The proposed algorithm dynamically adjusts the search space and velocity limits for the swarm in each iteration: it picks the highest and lowest values among all dimensions of the particles, takes their absolute values, and then uses the larger of the two to define a new search range and velocity limits for the next iteration. The efficiency and performance of the proposed algorithm were demonstrated on popular benchmark global optimization problems of low and high dimensions. The results show better convergence speed, precision, stability, robustness and global search ability when compared with six recent variants of the original algorithm.
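A compact sketch of the dynamic limit rule described above (Python; the function name and the coupling of the velocity limit to the same bound are assumptions): the new symmetric search range is derived each iteration from the largest-magnitude coordinate currently present in the swarm.

```python
def dynamic_limits(positions):
    """positions has shape (n_particles, n_dimensions). Take the highest and lowest
    coordinate values over all particles and dimensions, compare their absolute values,
    and use the larger one as the new symmetric bound for the next iteration."""
    hi, lo = positions.max(), positions.min()
    bound = max(abs(hi), abs(lo))
    x_min, x_max = -bound, bound      # new search range
    v_max = bound                     # velocity limit tied to the same bound (assumption)
    return x_min, x_max, v_max
```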


2021 ◽  
Author(s):  
Ahlem Aboud ◽  
Nizar Rokbani ◽  
Seyedali Mirjalili ◽  
Abdulrahman M. Qahtani ◽  
Omar Almutiry ◽  
...  

Multifactorial Optimization (MFO) and Evolutionary Transfer Optimization (ETO) are new and challenging optimization paradigms for which multi-objective Particle Swarm Optimization (MOPSO) may be of interest despite its limitations. MOPSO has been widely used in static and dynamic multi-objective optimization problems, while its potential for multi-task optimization has not been fully explored. This paper proposes a new Distributed Multifactorial Particle Swarm Optimization algorithm (DMFPSO) for multi-task optimization. The new system has a distributed architecture over a set of sub-swarms that are dynamically constructed based on the number of optimization tasks and each particle's skill factor. DMFPSO is designed to handle convergence and diversity separately. It uses the Beta function to provide two optimized profiles with dynamic switching behaviour: the first profile, Beta-1, is used for exploration, which aims to explore the search space towards potential solutions, while the second, Beta-2, is used for convergence enhancement. The system is tested on 36 benchmarks provided by the CEC'2021 Evolutionary Transfer Multi-Objective Optimization Competition. Comparisons with state-of-the-art methods are made using the Inverted Generational Distance (IGD) and Mean Inverted Generational Distance (MIGD) metrics. Based on the MSS metric, the proposal obtains the best results on most of the tested problems.
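The distributed architecture described above can be illustrated by grouping particles into sub-swarms by skill factor, one sub-swarm per task; a minimal Python sketch follows (the grouping rule and names are assumptions, and the Beta-based exploration/convergence profiles are not reproduced here).

```python
from collections import defaultdict

def build_subswarms(skill_factor):
    """skill_factor[i] is the index of the task particle i is currently assigned to;
    each task gets its own sub-swarm of particle indices."""
    subswarms = defaultdict(list)
    for i, task in enumerate(skill_factor):
        subswarms[task].append(i)
    return dict(subswarms)

# Example: build_subswarms([0, 1, 0, 2, 1]) -> {0: [0, 2], 1: [1, 4], 2: [3]}
```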


2015 ◽  
pp. 1246-1276
Author(s):  
Wen Fung Leong ◽  
Yali Wu ◽  
Gary G. Yen

Generally, constraint-handling techniques are designed for evolutionary algorithms to solve Constrained Multiobjective Optimization Problems (CMOPs). Most Multiobjective Particle Swarm Optimization (MOPSO) designs adopt these existing constraint-handling techniques to deal with CMOPs. In this chapter, the authors present a constrained MOPSO in which information about the particles' feasibility and infeasibility status is used effectively to guide the particles towards feasible solutions and to improve the quality of the optimal solution found. The update of the personal best archive is based on the particles' Pareto ranks and their constraint violations. An infeasible global best archive is adopted to store infeasible nondominated solutions. The acceleration constants are adjusted depending on the feasibility status of the personal bests and the selected global bests, and the feasibility status of the personal bests is used to estimate the mutation rate in the mutation procedure. The simulation results indicate that the proposed constrained MOPSO is highly competitive in solving selected benchmark problems.
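A generic sketch of a feasibility-aware personal-best update of the kind described above (Python; the exact rule used by the authors may differ): feasible solutions beat infeasible ones, infeasible solutions are compared by total constraint violation, and feasible solutions are compared by Pareto rank.

```python
def replace_pbest(cand, incumbent):
    """cand and incumbent are dicts with 'rank' (Pareto rank, lower is better) and
    'cv' (total constraint violation, 0 means feasible). Returns True if the
    candidate should replace the current personal best."""
    if cand['cv'] == 0 and incumbent['cv'] > 0:
        return True                              # feasible beats infeasible
    if cand['cv'] > 0 and incumbent['cv'] == 0:
        return False
    if cand['cv'] > 0 and incumbent['cv'] > 0:
        return cand['cv'] < incumbent['cv']      # smaller violation wins
    return cand['rank'] < incumbent['rank']      # both feasible: better rank wins
```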


Mathematics ◽  
2019 ◽  
Vol 7 (6) ◽  
pp. 521 ◽  
Author(s):  
Fanrong Kong ◽  
Jianhui Jiang ◽  
Yan Huang

As a powerful optimization tool, particle swarm optimizers have been widely applied to many different optimization areas and have drawn much attention. However, for large-scale optimization problems, these algorithms struggle to reach satisfactory results because they lack the ability to maintain diversity. In this paper, an adaptive multi-swarm particle swarm optimizer is proposed that adaptively divides the swarm into several sub-swarms and employs a competition mechanism to select exemplars. In this way, on the one hand, the diversity of exemplars increases, which helps the swarm preserve its exploitation ability; on the other hand, the number of sub-swarms adaptively changes from a large value to a small value, which helps the algorithm strike a suitable balance between exploitation and exploration. Comparisons with several peer algorithms were conducted to validate the proposed algorithm on the CEC 2013 large-scale optimization benchmark suite. The experimental results demonstrate that the proposed algorithm is effective and competitive in addressing large-scale optimization problems.
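One possible reading of the competition mechanism for exemplar selection, sketched in Python (the pairwise rule is a common competitive-swarm pattern and an assumption here, not the paper's exact operator): particles within a sub-swarm are paired at random and the loser of each pair learns from the winner.

```python
import numpy as np

def competitive_exemplars(fitness, subswarm):
    """subswarm is a list of particle indices; fitness maps index -> objective value
    (minimization). Returns a dict: particle index -> exemplar index."""
    order = np.random.permutation(subswarm)
    exemplar = {}
    for a, b in zip(order[0::2], order[1::2]):
        winner, loser = (a, b) if fitness[a] <= fitness[b] else (b, a)
        exemplar[loser] = winner      # loser learns from winner
        exemplar[winner] = winner     # winner keeps itself as exemplar
    return exemplar
```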


Mathematics ◽  
2019 ◽  
Vol 7 (5) ◽  
pp. 414 ◽  
Author(s):  
Weian Guo ◽  
Lei Zhu ◽  
Lei Wang ◽  
Qidi Wu ◽  
Fanrong Kong

Diversity maintenance is crucial to the performance of a particle swarm optimizer (PSO). However, the particle update mechanism in conventional PSO maintains diversity poorly, which usually results in premature convergence or stagnation of exploration in the search space. To enhance PSO's ability to maintain diversity, many works have proposed adjusting the distances among particles. However, such operators conduct diversity maintenance and fitness evaluation in the same distance-based space, which raises a new challenge in trading off convergence speed against diversity preservation. In this paper, a novel PSO is proposed that employs a competitive strategy and an entropy measurement to manage the convergence operator and diversity maintenance, respectively. The proposed algorithm was applied to the CEC 2013 large-scale optimization benchmark suite, and the results demonstrate that it is feasible and competitive in addressing large-scale optimization problems.
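One common way to quantify swarm diversity with entropy, as a hedged illustration of the entropy measurement mentioned above (Python; the histogram-based definition is an assumption, not necessarily the paper's measure):

```python
import numpy as np

def swarm_entropy(positions, bins=10):
    """Shannon entropy of the particle distribution, averaged over dimensions:
    each dimension is split into equal-width bins and the entropy of the resulting
    histogram is computed. Larger values indicate better-preserved diversity."""
    n, d = positions.shape
    total = 0.0
    for j in range(d):
        counts, _ = np.histogram(positions[:, j], bins=bins)
        p = counts[counts > 0] / n
        total -= (p * np.log(p)).sum()
    return total / d
```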


2014 ◽  
Vol 2014 ◽  
pp. 1-16 ◽  
Author(s):  
Xiaobing Yu ◽  
Jie Cao ◽  
Haiyan Shan ◽  
Li Zhu ◽  
Jun Guo

Particle swarm optimization (PSO) and differential evolution (DE) are both efficient and powerful population-based stochastic search techniques for solving optimization problems, and they have been widely applied in many scientific and engineering fields. Unfortunately, both can easily become trapped in local optima and lack the ability to escape them. A novel adaptive hybrid algorithm based on PSO and DE (HPSO-DE) is formulated by introducing a balance parameter between PSO and DE. Adaptive mutation is carried out on the current population when the population clusters around local optima. HPSO-DE enjoys the advantages of PSO and DE and maintains the diversity of the population. Compared with PSO, DE, and their variants, the performance of HPSO-DE is competitive. The sensitivity of the balance parameter is discussed in detail.
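A minimal sketch of how a balance parameter can switch a particle between a PSO move and a DE mutation (Python; the switching rule, parameter values and the DE/rand/1 choice are assumptions, and HPSO-DE's adaptive mutation is not reproduced here):

```python
import numpy as np

def hybrid_update(i, pos, vel, pbest, gbest, balance=0.5,
                  w=0.7, c1=1.5, c2=1.5, F=0.5):
    """Return a new candidate position for particle i: with probability `balance`
    apply a standard PSO velocity update, otherwise a DE/rand/1 mutation."""
    n, d = pos.shape
    if np.random.rand() < balance:                        # PSO-style move
        r1, r2 = np.random.rand(d), np.random.rand(d)
        vel[i] = w * vel[i] + c1 * r1 * (pbest[i] - pos[i]) + c2 * r2 * (gbest - pos[i])
        return pos[i] + vel[i]
    a, b, c = np.random.choice([j for j in range(n) if j != i], 3, replace=False)
    return pos[a] + F * (pos[b] - pos[c])                 # DE/rand/1 mutation
```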

