A modified Particle Swarm Optimization Algorithm to solve Time Minimization Transportation Problem

2020 ◽  
Vol 8 (5) ◽  
pp. 3686-3692

When supplied items need urgent or earliest delivery to their destinations, Time Minimization Transportation Problems (TMTPs) are indispensable. Traditionally these problems have been solved using exact techniques; however, (meta)heuristic techniques have provided a great breakthrough in search space exploration. Particle Swarm Optimization (PSO) is one such meta-heuristic that has been applied to a wide variety of continuous optimization problems. For discrete problems, either the mathematical model of the problem or the solution procedure has to be changed. In this paper, PSO is modified to incorporate the discrete nature of the variables and the non-linearity of the objective function. The proposed PSO is tested on problems available in the literature and obtains the optimal solutions efficiently. Its exhaustive search capability is established by obtaining alternate optimal solutions and combinations of allocated cells that exceed (m + n - 1) in number. The proposed solution technique therefore provides an effective alternative to analytical techniques for decision making in logistic systems.
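A minimal sketch of how such a discrete, bottleneck-type objective can be evaluated inside a PSO loop is given below. The greedy priority-based decoding and the example data are illustrative assumptions, not the paper's exact modification of PSO.

```python
import numpy as np

def decode_allocation(priorities, supply, demand):
    """Greedily allocate cells in increasing order of particle priority."""
    supply = supply.astype(float)
    demand = demand.astype(float)
    m, n = len(supply), len(demand)
    alloc = np.zeros((m, n))
    for idx in np.argsort(priorities.ravel()):
        i, j = divmod(int(idx), n)
        qty = min(supply[i], demand[j])
        alloc[i, j] = qty
        supply[i] -= qty
        demand[j] -= qty
    return alloc

def bottleneck_time(alloc, times):
    """TMTP objective: the largest shipping time among occupied cells."""
    return float(times[alloc > 0].max()) if (alloc > 0).any() else 0.0

# toy balanced instance (illustrative data only)
supply = np.array([15, 25, 10])
demand = np.array([20, 20, 10])
times = np.array([[6, 3, 5],
                  [5, 9, 2],
                  [5, 7, 8]])
particle = np.random.rand(3, 3)          # a particle's continuous position
print(bottleneck_time(decode_allocation(particle, supply, demand), times))
```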

2014 ◽  
Vol 2014 ◽  
pp. 1-23 ◽  
Author(s):  
Martins Akugbe Arasomwan ◽  
Aderemi Oluyinka Adewumi

A new local search technique is proposed and used to improve the performance of particle swarm optimization algorithms by addressing the problem of premature convergence. In the proposed local search technique, a potential particle position in the solution search space is collectively constructed by a number of randomly selected particles in the swarm. The number of times the selection is made varies with the dimension of the optimization problem, and each selected particle donates the value at a randomly selected dimension of its personal best. After constructing the potential particle position, a local search is performed around its neighbourhood and the result is compared with the current swarm global best position; the constructed position replaces the global best particle position if it is found to be better, otherwise no replacement is made. Using some well-studied benchmark problems with low and high dimensions, numerical simulations were used to validate the performance of the improved algorithms. Comparisons were made with four different PSO variants: two of the variants implement a different local search technique while the other two do not. Results show that the improved algorithms obtain better-quality solutions while demonstrating better convergence velocity and precision, stability, robustness, and global-local search ability than the competing variants.
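The following sketch illustrates the collectively constructed candidate and the neighbourhood refinement described above, assuming minimisation; the neighbourhood radius, number of trials, and helper names are illustrative choices rather than the authors' implementation.

```python
import numpy as np

def construct_candidate(pbest_positions, dim, rng):
    """Each dimension of the candidate is donated by a random particle's
    personal best, taken from one of the donor's randomly chosen dimensions."""
    candidate = np.empty(dim)
    for d in range(dim):
        donor = rng.integers(len(pbest_positions))
        src_dim = rng.integers(dim)
        candidate[d] = pbest_positions[donor, src_dim]
    return candidate

def local_refine(f, candidate, gbest, radius=0.1, trials=10, rng=None):
    """Search the candidate's neighbourhood; keep gbest unless something better is found."""
    rng = rng or np.random.default_rng()
    best_x, best_f = candidate, f(candidate)
    for _ in range(trials):
        x = candidate + rng.uniform(-radius, radius, size=candidate.size)
        fx = f(x)
        if fx < best_f:
            best_x, best_f = x, fx
    return (best_x, best_f) if best_f < f(gbest) else (gbest, f(gbest))

rng = np.random.default_rng(0)
pbest = rng.normal(size=(20, 5))                       # toy personal bests
sphere = lambda x: float(np.sum(np.asarray(x) ** 2))   # benchmark objective
gbest = pbest[np.argmin([sphere(p) for p in pbest])]
cand = construct_candidate(pbest, dim=5, rng=rng)
print(local_refine(sphere, cand, gbest, rng=rng)[1])
```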


2021 ◽  
Author(s):  
Ahlem Aboud ◽  
Nizar Rokbani ◽  
Seyedali Mirjalili ◽  
Abdulrahman M. Qahtani ◽  
Omar Almutiry ◽  
...  

Multifactorial Optimization (MFO) and Evolutionary Transfer Optimization (ETO) are new and challenging optimization paradigms for which the Multi-Objective Particle Swarm Optimization (MOPSO) system may be interesting despite its limitations. MOPSO has been widely used in static and dynamic multi-objective optimization problems, while its potential for multi-task optimization has not been fully explored. This paper proposes a new Distributed Multifactorial Particle Swarm Optimization (DMFPSO) algorithm for multi-task optimization. The new system has a distributed architecture over a set of sub-swarms that are dynamically constructed based on the number of optimization tasks and each particle's skill factor. DMFPSO is designed to handle the convergence and diversity concepts separately. It uses the Beta function to provide two optimized profiles with a dynamic switching behaviour: the first profile, Beta-1, is used for exploration, which aims to explore the search space for potential solutions, while the second, Beta-2, is used for convergence enhancement. The system is tested on 36 benchmarks provided by the CEC'2021 Evolutionary Transfer Multi-Objective Optimization Competition. Comparisons with state-of-the-art methods are made using the Inverted Generational Distance (IGD) and Mean Inverted Generational Distance (MIGD) metrics. Based on the MSS metric, this proposal obtains the best results on most of the tested problems.
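As a rough illustration of the two Beta-shaped profiles and their switching behaviour, the sketch below evaluates a Beta density over the normalised iteration count; the shape parameters and the interpretation of the Beta function as a density-shaped schedule are placeholder assumptions, not DMFPSO's tuned design.

```python
import math

def beta_pdf(x, a, b):
    """Density of the Beta(a, b) distribution at x in (0, 1)."""
    norm = math.gamma(a + b) / (math.gamma(a) * math.gamma(b))
    return norm * x ** (a - 1) * (1 - x) ** (b - 1)

def beta_profile(t, t_max, a, b):
    """Profile value at the normalised iteration t / t_max."""
    x = min(max(t / t_max, 1e-6), 1 - 1e-6)
    return beta_pdf(x, a, b)

t_max = 200
for t in (0, 50, 100, 150, 199):
    explore = beta_profile(t, t_max, 2.0, 5.0)   # Beta-1: mass early in the run
    exploit = beta_profile(t, t_max, 5.0, 2.0)   # Beta-2: mass late in the run
    mode = "explore" if explore >= exploit else "exploit"
    print(f"t={t:3d}  Beta-1={explore:5.2f}  Beta-2={exploit:5.2f}  -> {mode}")
```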


2013 ◽  
Vol 2013 ◽  
pp. 1-12 ◽  
Author(s):  
Martins Akugbe Arasomwan ◽  
Aderemi Oluyinka Adewumi

The linear decreasing inertia weight (LDIW) strategy was introduced to improve the performance of the original particle swarm optimization (PSO). However, the linear decreasing inertia weight PSO (LDIW-PSO) algorithm is known to suffer from premature convergence when solving complex (multipeak) optimization problems, because particles lack enough momentum for exploitation as the algorithm approaches its terminal point. Researchers have tried to address this shortcoming by modifying LDIW-PSO or proposing new PSO variants, some of which have been claimed to outperform LDIW-PSO. The major goal of this paper is to establish experimentally that LDIW-PSO is highly efficient if its parameters are properly set. First, an experiment was conducted to acquire a percentage value of the search space limits from which to compute the particle velocity limits in LDIW-PSO, based on commonly used benchmark global optimization problems. Second, using the experimentally obtained values, five well-known benchmark optimization problems were used to show the outstanding performance of LDIW-PSO over some of its competitors which have in the past claimed superiority over it. Two other recent PSO variants with different inertia weight strategies were also compared with LDIW-PSO, with the latter outperforming both in the simulation experiments conducted.
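The two settings discussed above, the linear decreasing inertia weight and a velocity limit taken as a percentage of the search range, can be written compactly as in the sketch below; the 0.9/0.4 weights are common literature values and the fraction delta is a placeholder for the experimentally obtained percentage.

```python
def ldiw(t, t_max, w_max=0.9, w_min=0.4):
    """Linear decreasing inertia weight at iteration t of t_max."""
    return w_max - (w_max - w_min) * t / t_max

def velocity_limit(x_min, x_max, delta=0.1):
    """Velocity clamp: a fraction delta of the search range."""
    return delta * (x_max - x_min)

print(ldiw(0, 100), ldiw(50, 100), ldiw(100, 100))   # ~0.9, 0.65, 0.4
print(velocity_limit(-5.12, 5.12, delta=0.1))        # 1.024 for a [-5.12, 5.12] range
```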


Author(s):  
Jenn-Long Liu

Particle swarm optimization (PSO) is a promising evolutionary approach in which a particle moves over the search space with a velocity that is adjusted according to the flying experience of the particle and its neighbors, so that it flies towards better and better search areas over the course of the search process. Although PSO is effective in solving global optimization problems, there are some crucial user-input parameters, such as the cognitive and social learning rates, that affect the performance of the algorithm, since the search process of a PSO algorithm is nonlinear and complex. Consequently, a PSO with well-selected parameter settings may result in good performance. This work develops an evolving PSO that is based on Clerc's PSO to evaluate the fitness of the objective function and uses a genetic algorithm (GA) to evolve the optimal design parameters for use in the PSO. The crucial design parameters studied herein include the cognitive and social learning rates as well as the constriction factor of Clerc's PSO. Several benchmark cases are run to generalize a set of optimal parameters via the evolving PSO. Furthermore, the resulting parameters are applied to the engineering optimization of a pressure vessel design.
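For reference, a sketch of the Clerc constriction-factor velocity update, whose parameters the evolving PSO tunes with a GA, is given below; c1 = c2 = 2.05 is the usual textbook setting, not the GA-evolved values reported in the paper.

```python
import math
import random

def constriction(c1, c2):
    """Clerc's constriction coefficient; requires phi = c1 + c2 > 4."""
    phi = c1 + c2
    return 2.0 / abs(2.0 - phi - math.sqrt(phi * phi - 4.0 * phi))

def update_velocity(v, x, pbest, gbest, c1=2.05, c2=2.05):
    """Constriction-factor velocity update for one particle."""
    chi = constriction(c1, c2)
    return [chi * (vi + c1 * random.random() * (pi - xi)
                      + c2 * random.random() * (gi - xi))
            for vi, xi, pi, gi in zip(v, x, pbest, gbest)]

print(round(constriction(2.05, 2.05), 4))   # ~0.7298, the commonly reported value
```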


2020 ◽  
Author(s):  
Ahlem Aboud ◽  
Raja Fdhila ◽  
Amir Hussain ◽  
Adel Alimi

Distributed architecture-based Particle Swarm Optimization is very useful for static optimization but has not yet been explored for solving complex dynamic multi-objective optimization problems. This study proposes a novel Dynamic Pareto bi-level Multi-Objective Particle Swarm Optimization (DPb-MOPSO) algorithm with two optimization levels. In the first level, all solutions are optimized in the same search space; the second level is based on a distributed architecture using the Pareto ranking operator for dynamic multi-swarm subdivision. The proposed approach adopts a dynamic handling strategy that uses a set of detectors to keep track of changes in the objective function caused by the problem's time-varying parameters at each level. To ensure timely adaptation during the optimization process, a dynamic response strategy re-evaluates all non-improved solutions, while the worst particles are replaced with newly generated ones. The convergence and diversity performance of the DPb-MOPSO algorithm is demonstrated through the Friedman Analysis of Variance, and the Lyapunov theorem is used to prove stability over the Inverted Generational Distance (IGD) and Hypervolume Difference (HVD) metrics. Compared to other evolutionary algorithms, the novel DPb-MOPSO is shown to be the most robust for solving complex problems over a range of changes in both the Pareto Optimal Set and the Pareto Optimal Front.
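A hedged sketch of the sentinel-based change detection and re-evaluation response described above follows; the detector bookkeeping, the tolerance, and the crude scalar ranking used for brevity are assumptions (the algorithm itself relies on Pareto ranking).

```python
import numpy as np

def detect_change(objective, detectors, cached_values, tol=1e-9):
    """Flag a change when any detector's objective vector has drifted."""
    for k, x in enumerate(detectors):
        if np.any(np.abs(np.asarray(objective(x)) - cached_values[k]) > tol):
            return True
    return False

def respond(swarm, objective, bounds=(-5.0, 5.0), n_worst=5, rng=None):
    """Re-evaluate every particle, then replace the worst with random ones."""
    rng = rng or np.random.default_rng()
    for p in swarm:                                   # p: {"position": ndarray, "objs": ndarray}
        p["objs"] = np.asarray(objective(p["position"]))
    # crude scalar ranking for this sketch only; DPb-MOPSO uses Pareto ranking
    swarm.sort(key=lambda p: p["objs"].sum())
    for p in swarm[-n_worst:]:                        # worst particles sit at the tail
        p["position"] = rng.uniform(*bounds, size=p["position"].size)
        p["objs"] = np.asarray(objective(p["position"]))
```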


2020 ◽  
Vol 53 (4) ◽  
pp. 559-566
Author(s):  
Lakhdar Kaddouri ◽  
Amel B.H. Adamou-Mitiche ◽  
Lahcene Mitiche

Particle Swarm Optimization (PSO) is an evolutionary algorithm widely used in optimization problems. It is characterized by fast convergence, which can lead the algorithm to stagnate in local optima. In the present paper, a new Multi-PSO algorithm for the design of two-dimensional infinite impulse response (IIR) filters is built. It is based on the standard PSO and uses a new initialization strategy that relies on two types of swarms: a principal swarm and auxiliary swarms. To improve the performance of the algorithm, the search space is divided into several areas, which allows better coverage and leads to better exploration of each zone separately. This addresses the premature convergence problem of standard PSO. The results obtained demonstrate the effectiveness of the Multi-PSO algorithm in optimizing the filter coefficients.
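The search-space partitioning idea can be sketched as below: the range is split into equal zones along each dimension, one auxiliary swarm is initialised inside each zone, and the principal swarm covers the full space; the number of zones and the swarm sizes are placeholder assumptions, not the paper's settings.

```python
import numpy as np

def init_swarms(dim, lower, upper, n_zones=4, aux_size=10, main_size=30, rng=None):
    """One principal swarm over the full range plus one auxiliary swarm per zone."""
    rng = rng or np.random.default_rng()
    principal = rng.uniform(lower, upper, size=(main_size, dim))
    edges = np.linspace(lower, upper, n_zones + 1)
    auxiliaries = [rng.uniform(edges[z], edges[z + 1], size=(aux_size, dim))
                   for z in range(n_zones)]
    return principal, auxiliaries

principal, auxiliaries = init_swarms(dim=2, lower=-10.0, upper=10.0)
print(principal.shape, [a.shape for a in auxiliaries])
```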


2018 ◽  
Vol 6 (2) ◽  
pp. 129-142 ◽  
Author(s):  
Hasan Koyuncu ◽  
Rahime Ceylan

Abstract In the literature, most studies focus on designing new methods inspired by biological processes; however, the hybridization of methods, and the way they are hybridized, should be examined carefully to generate more suitable optimization methods. In this study, we combine Particle Swarm Optimization (PSO) with an efficient operator of Artificial Bee Colony Optimization (ABC) to design an efficient technique for continuous function optimization. In PSO, the velocity and position concepts guide particles towards convergence. At this point, variable and stable parameters are ineffective for regenerating awkward particles that cannot improve their personal best position (Pbest). Thus, the need for external intervention is inevitable once a useful particle becomes an awkward one. In ABC, the scout bee phase acts as this external intervention by sustaining the resurgence of incapable individuals. With the addition of a scout bee phase to standard PSO, Scout Particle Swarm Optimization (ScPSO) is formed, which eliminates the most important handicap of PSO. Consequently, a robust optimization algorithm is obtained. ScPSO is tested on constrained optimization problems and optimum parameter values are obtained for the general use of ScPSO. To evaluate its performance, ScPSO is compared with the Genetic Algorithm (GA), with variants of the PSO and ABC methods, and with hybrid approaches based on the PSO and ABC algorithms on numerical function optimization. As the results show, ScPSO yields better optimal solutions than the other approaches. In addition, its convergence is superior to that of a basic optimization method, the variants of the PSO and ABC algorithms, and the hybrid approaches on different numerical benchmark functions. According to the results, the Total Statistical Success (TSS) value of ScPSO ranks first (5) in comparison with the PSO variants; the second best TSS (2) belongs to the CLPSO and SP-PSO techniques. In the comparison with ABC variants, the best TSS value (6) is obtained by ScPSO, while the TSS of BitABC is 2. In the comparison with hybrid techniques, ScPSO obtains the best Total Average Rank (TAR) of 1.375, and the TSS of ScPSO ranks first (6) again. The fitness values obtained by ScPSO are generally more satisfactory than those obtained by the other methods. Consequently, ScPSO achieves promising gains over other optimization methods; in parallel with this result, its usage can be extended to different working disciplines.
Highlights:
PSO parameters are ineffective for regenerating an awkward particle that cannot improve its Pbest.
External intervention is inevitable once a particle becomes an awkward one.
ScPSO is obtained by adding the scout bee phase to PSO, yielding an evolutionary method that eliminates the most important handicap of PSO.
ScPSO is compared with variants and hybrid versions of the PSO and ABC methods.
According to the experiments, ScPSO yields better optimal solutions, and its fitness values are generally more satisfactory than those of the other methods.
Consequently, ScPSO achieves promising gains over other optimization methods, and its usage can be extended to different working disciplines.
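A minimal sketch of the scout phase grafted onto PSO, as described above, is given below; the stagnation counter and limit value are illustrative assumptions rather than ScPSO's tuned settings.

```python
import numpy as np

def scout_phase(positions, velocities, stagnation, lower, upper, limit=20, rng=None):
    """Re-initialise particles whose Pbest has not improved for `limit` iterations."""
    rng = rng or np.random.default_rng()
    awkward = stagnation >= limit          # boolean mask of 'awkward' particles
    n = int(awkward.sum())
    if n:
        positions[awkward] = rng.uniform(lower, upper, size=(n, positions.shape[1]))
        velocities[awkward] = 0.0          # restart with zero velocity
        stagnation[awkward] = 0
    return positions, velocities, stagnation
```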

