Total Optimization of a Smart Community by Differential Evolutionary Particle Swarm Optimization Considering Reduction of Search Space

2017 ◽  
Vol 137 (9) ◽  
pp. 1266-1278 ◽  
Author(s):  
Mayuko Sato ◽  
Yoshikazu Fukuyama
Author(s):  
Ravichander Janapati ◽  
Ch. Balaswamy ◽  
K. Soundararajan

Localization is a key research area in wireless sensor networks; finding the exact position of a node is known as localization. Different algorithms have been proposed for this purpose. Here we consider a cooperative localization algorithm with censoring schemes based on the Cramér-Rao bound (CRB). This censoring scheme can improve positioning accuracy and reduce computational complexity, traffic, and latency. Particle swarm optimization (PSO) is a population-based search algorithm inspired by swarm intelligence, such as the social behavior of flocks of birds, swarms of bees, or schools of fish. To improve algorithm efficiency and localization precision, this paper presents an objective function based on the normal distribution of the ranging error and a method for obtaining the search space of the particles. Distributed localization of wireless sensor networks is proposed using PSO with the best censoring technique selected via the CRB. The proposed method shows better results in terms of positioning accuracy, latency, and complexity.
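
The abstract above relies on standard global-best PSO applied to a ranging objective. The following minimal sketch (not the authors' implementation) shows how such a PSO could estimate a node position from noisy anchor ranges; the least-squares objective, the parameter values, and the function name pso_localize are illustrative assumptions.

```python
import numpy as np

def pso_localize(anchors, measured_dists, n_particles=30, n_iters=200,
                 w=0.7, c1=1.5, c2=1.5, bounds=(0.0, 100.0), seed=0):
    """Estimate an unknown node position from noisy anchor ranges with plain PSO.

    anchors:        (M, 2) known anchor coordinates
    measured_dists: (M,)   noisy range measurements to the unknown node
    """
    rng = np.random.default_rng(seed)
    lo, hi = bounds

    # Least-squares ranging objective: sum of squared range residuals.
    def cost(p):
        return np.sum((np.linalg.norm(anchors - p, axis=1) - measured_dists) ** 2)

    pos = rng.uniform(lo, hi, size=(n_particles, 2))
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_cost = np.array([cost(p) for p in pos])
    gbest = pbest[np.argmin(pbest_cost)].copy()

    for _ in range(n_iters):
        r1, r2 = rng.random((2, n_particles, 1))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)          # keep particles inside the search space
        costs = np.array([cost(p) for p in pos])
        improved = costs < pbest_cost
        pbest[improved], pbest_cost[improved] = pos[improved], costs[improved]
        gbest = pbest[np.argmin(pbest_cost)].copy()
    return gbest

# Toy usage: four corner anchors, one unknown node, Gaussian ranging noise.
rng = np.random.default_rng(1)
anchors = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0], [100.0, 100.0]])
true_pos = np.array([37.0, 58.0])
dists = np.linalg.norm(anchors - true_pos, axis=1) + rng.normal(0.0, 1.0, 4)
print(pso_localize(anchors, dists))
```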


2013 ◽  
Vol 2013 ◽  
pp. 1-7 ◽  
Author(s):  
Hongtao Ye ◽  
Wenguang Luo ◽  
Zhenqiang Li

This paper analyzes the relationship between particle velocity and convergence in particle swarm optimization (PSO). Premature convergence arises when particle velocities decrease in the search space, leading to a total implosion and, ultimately, fitness stagnation of the swarm. An improved algorithm that introduces a velocity differential evolution (DE) strategy into hierarchical particle swarm optimization (H-PSO) is proposed to improve its performance. DE is employed to regulate the particle velocity, rather than the traditional particle position, when the best result has not improved after several iterations. Benchmark functions are used to demonstrate the effectiveness of the proposed method.
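
As a rough illustration of the velocity-regulation idea, the sketch below applies a standard DE/rand/1 mutation with binomial crossover to a swarm's velocity matrix. It is not the authors' H-PSO code; the trigger condition, parameter values, and the function name de_regulate_velocities are assumptions.

```python
import numpy as np

def de_regulate_velocities(vel, F=0.5, CR=0.9, rng=None):
    """DE/rand/1 mutation with binomial crossover applied to particle velocities.

    Sketch of the idea in the abstract: when the swarm's best result stagnates,
    velocities (not positions) are perturbed with a DE step to restore diversity.
    """
    rng = rng or np.random.default_rng()
    n, d = vel.shape
    new_vel = vel.copy()
    for i in range(n):
        a, b, c = rng.choice([j for j in range(n) if j != i], size=3, replace=False)
        mutant = vel[a] + F * (vel[b] - vel[c])   # DE mutation on velocity vectors
        cross = rng.random(d) < CR                # binomial crossover mask
        cross[rng.integers(d)] = True             # keep at least one mutated component
        new_vel[i] = np.where(cross, mutant, vel[i])
    return new_vel

# Hypothetical use inside a PSO loop:
# if iterations_since_improvement > stall_limit:
#     vel = de_regulate_velocities(vel)
vel = np.random.default_rng(0).normal(size=(10, 3))
print(de_regulate_velocities(vel, rng=np.random.default_rng(1))[0])
```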


2012 ◽  
Vol 2012 ◽  
pp. 1-21 ◽  
Author(s):  
S. Sakinah S. Ahmad ◽  
Witold Pedrycz

The study is concerned with data and feature reduction in fuzzy modeling. As these reduction activities benefit fuzzy models in terms of both the effectiveness of their construction and the interpretability of the resulting models, their realization deserves particular attention. The formation of a subset of meaningful features and a subset of essential instances is discussed in the context of fuzzy-rule-based models. In contrast to existing studies, which focus predominantly on feature selection (namely, a reduction of the input space), the position advocated here is that, to be efficient for fuzzy model design, reduction has to involve both data and features. The reduction problem is combinatorial in nature and, as such, calls for advanced optimization techniques. In this study, we use particle swarm optimization (PSO) as an optimization vehicle for forming a subset of features and data (instances) used to design the fuzzy model. Given the dimensionality of the problem (as the search space involves both features and instances), we discuss a cooperative version of PSO along with a clustering mechanism for partitioning the overall search space. Finally, a series of numeric experiments using several machine learning data sets is presented.
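
To make the combined feature-and-instance encoding concrete, the sketch below scores a single binary particle that selects a subset of features and a subset of instances. A k-NN classifier stands in for the paper's fuzzy rule-based model purely for illustration, and the data set, thresholds, and function name reduction_fitness are assumptions.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

def reduction_fitness(mask, X, y, n_features):
    """Score one candidate reduction encoded as a binary mask over [features | instances]."""
    feat_mask = mask[:n_features].astype(bool)
    inst_mask = mask[n_features:].astype(bool)
    if feat_mask.sum() == 0 or inst_mask.sum() < 30:
        return 0.0                               # degenerate subsets get the worst score
    Xr, yr = X[inst_mask][:, feat_mask], y[inst_mask]
    # A k-NN classifier stands in for the fuzzy rule-based model of the paper.
    return cross_val_score(KNeighborsClassifier(n_neighbors=3), Xr, yr, cv=3).mean()

X, y = load_iris(return_X_y=True)
rng = np.random.default_rng(0)
particle = rng.random(X.shape[1] + X.shape[0]) > 0.5   # one binary "particle"
print(reduction_fitness(particle, X, y, X.shape[1]))
```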


2021 ◽  
Author(s):  
Ahlem Aboud ◽  
Nizar Rokbani ◽  
Seyedali Mirjalili ◽  
Abdulrahman M. Qahtani ◽  
Omar Almutiry ◽  
...  

Multifactorial Optimization (MFO) and Evolutionary Transfer Optimization (ETO) are new and challenging optimization paradigms for which multi-objective particle swarm optimization (MOPSO) may be of interest despite its limitations. MOPSO has been widely used for static and dynamic multi-objective optimization problems, while its potential for multi-task optimization has not been fully explored. This paper proposes a new Distributed Multifactorial Particle Swarm Optimization algorithm (DMFPSO) for multi-task optimization. The new system has a distributed architecture based on a set of sub-swarms that are constructed dynamically according to the number of optimization tasks and the skill factor assigned to each particle. DMFPSO is designed to handle convergence and diversity separately: it uses the Beta function to provide two optimized profiles with a dynamic switching behaviour. The first profile, Beta-1, is used for exploration, which aims to drive the search toward potential solutions, while the second, Beta-2, is used to enhance convergence. The system is tested on 36 benchmarks provided by the CEC’2021 Evolutionary Transfer Multi-Objective Optimization Competition. Comparisons with state-of-the-art methods are made using the Inverted Generational Distance (IGD) and Mean Inverted Generational Distance (MIGD) metrics. Based on the MSS metric, the proposal achieves the best results on most of the tested problems.
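
The abstract does not give the Beta-profile formulas, so the following is only a loose, assumed illustration of the dual-profile idea: a scaling factor drawn from one Beta distribution early in the run (exploration) and from another skewed toward 1 later (convergence). The shape parameters, the switch rule, and the function name beta_profile_scale are not taken from the paper.

```python
import numpy as np

def beta_profile_scale(progress, rng, explore=(2.0, 5.0), converge=(5.0, 2.0),
                       switch_point=0.5):
    """Draw a velocity scaling factor from one of two Beta profiles.

    Early in the run ("Beta-1") the factor skews toward 0, weakening the pull
    toward the best-known solutions (exploration); later ("Beta-2") it skews
    toward 1, strengthening that pull (convergence).
    """
    a, b = explore if progress < switch_point else converge
    return rng.beta(a, b)

# Sample the scaling factor at several points of a run (progress in [0, 1]).
rng = np.random.default_rng(0)
print([round(beta_profile_scale(p, rng), 3) for p in (0.1, 0.4, 0.6, 0.9)])
```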


2020 ◽  
Vol 21 (1) ◽  
Author(s):  
Zhaojuan Zhang ◽  
Wanliang Wang ◽  
Ruofan Xia ◽  
Gaofeng Pan ◽  
Jiandong Wang ◽  
...  

Background: Reconstructing ancestral genomes is one of the central problems in genome rearrangement analysis, since finding the most likely true ancestor is of significant importance in phylogenetic reconstruction. Large-scale genome rearrangements can provide essential insights into evolutionary processes. However, when the genomes are large and distant, classical median solvers fail to address these challenges adequately because of the exponential growth of the search space. Ancestral genome inference therefore remains a task of paramount importance that continues to challenge current methods, and its difficulty is further increased by the ongoing rapid accumulation of whole-genome data.

Results: In response to these challenges, we provide two contributions for ancestral genome inference. First, an improved discrete quantum-behaved particle swarm optimization algorithm (IDQPSO), which averages two of the fitness values, is proposed to address the discrete search space. Second, we incorporate DCJ sorting into IDQPSO (IDQPSO-Median). In comparison with other methods, when the genomes are large and distant, IDQPSO-Median has the lowest median score, the highest adjacency accuracy, and the closest distance to the true ancestor. In addition, we have integrated our IDQPSO-Median approach with the GRAPPA framework. Our experiments show that the resulting phylogenetic method is accurate and effective.

Conclusions: Our experimental results demonstrate the advantages of the IDQPSO-Median approach over other methods when the genomes are large and distant. Evaluated comprehensively, the results show that IDQPSO-Median achieves better scalability than existing algorithms. Moreover, experiments on simulated and real datasets confirm that IDQPSO-Median, when integrated with the GRAPPA framework, outperforms other heuristics in terms of accuracy, while still inferring phylogenies that were equivalent or close to the true trees within five days of computation, which is far beyond the difficulty level that can be handled by GRAPPA alone.
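
For context, the sketch below shows the standard continuous quantum-behaved PSO (QPSO) position update that IDQPSO builds on; the discrete encoding, fitness averaging, and DCJ sorting described in the abstract are not reproduced here, and the contraction-expansion parameter beta is an assumed value.

```python
import numpy as np

def qpso_step(pos, pbest, gbest, beta=0.75, rng=None):
    """One standard quantum-behaved PSO (QPSO) position update for a whole swarm."""
    rng = rng or np.random.default_rng()
    n, d = pos.shape
    phi = rng.random((n, d))
    attractor = phi * pbest + (1.0 - phi) * gbest   # local attractor between pbest and gbest
    mbest = pbest.mean(axis=0)                      # mean of all personal bests
    u = 1.0 - rng.random((n, d))                    # uniform in (0, 1], avoids log(0)
    sign = np.where(rng.random((n, d)) < 0.5, 1.0, -1.0)
    return attractor + sign * beta * np.abs(mbest - pos) * np.log(1.0 / u)

# Toy usage on a random swarm of 20 particles in 5 dimensions.
rng = np.random.default_rng(0)
pos = rng.normal(size=(20, 5))
pbest, gbest = pos.copy(), pos[0].copy()
print(qpso_step(pos, pbest, gbest, rng=rng)[0])
```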


2013 ◽  
Vol 2013 ◽  
pp. 1-12 ◽  
Author(s):  
Martins Akugbe Arasomwan ◽  
Aderemi Oluyinka Adewumi

The linear decreasing inertia weight (LDIW) strategy was introduced to improve the performance of the original particle swarm optimization (PSO). However, the LDIW-PSO algorithm is known to suffer from premature convergence on complex (multipeak) optimization problems because particles lack sufficient momentum for exploitation as the algorithm approaches its terminal point. Researchers have tried to address this shortcoming by modifying LDIW-PSO or proposing new PSO variants, some of which have been claimed to outperform LDIW-PSO. The major goal of this paper is to establish experimentally that LDIW-PSO is highly efficient if its parameters are properly set. First, an experiment was conducted to determine a percentage of the search-space limits from which to compute the particle velocity limits in LDIW-PSO, based on commonly used benchmark global optimization problems. Second, using the experimentally obtained values, five well-known benchmark optimization problems were used to show the outstanding performance of LDIW-PSO over some of its competitors that have previously claimed superiority over it. Two other recent PSO variants with different inertia weight strategies were also compared with LDIW-PSO, with the latter outperforming both in the simulation experiments conducted.
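
The two quantities the paper tunes are easy to state directly. Below is a small sketch of the linear decreasing inertia weight and of a velocity limit taken as a fraction of the search-space range; the 0.9 to 0.4 weight schedule and the 0.25 fraction are commonly used illustrative values, not the percentages reported by the authors.

```python
def ldiw(t, t_max, w_start=0.9, w_end=0.4):
    """Linear decreasing inertia weight: w(t) = w_start - (w_start - w_end) * t / t_max."""
    return w_start - (w_start - w_end) * t / t_max

def velocity_limit(lower, upper, fraction=0.25):
    """Velocity limit as a fraction of the search-space range: Vmax = fraction * (upper - lower)."""
    return fraction * (upper - lower)

# Inertia weight at mid-run and Vmax for a [-100, 100] search space.
print(ldiw(500, 1000), velocity_limit(-100.0, 100.0))
```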


Author(s):  
Jenn-Long Liu

Particle swarm optimization (PSO) is a promising evolutionary approach in which each particle moves over the search space with a velocity that is adjusted according to the flying experience of the particle and its neighbors, so that the swarm flies toward better and better search areas over the course of the search process. Although PSO is effective in solving global optimization problems, several crucial user-defined parameters, such as the cognitive and social learning rates, affect the performance of the algorithm, since the search process of a PSO algorithm is nonlinear and complex. Consequently, a PSO with well-selected parameter settings is likely to perform well. This work develops an evolving PSO that uses Clerc's PSO to evaluate the fitness of the objective function and a genetic algorithm (GA) to evolve the optimal design parameters supplied to the PSO. The crucial design parameters studied herein include the cognitive and social learning rates as well as the constriction factor of Clerc's PSO. Several benchmark cases are used to derive a set of optimal parameters via the evolving PSO. Furthermore, the resulting parameters are applied to the engineering optimization of a pressure vessel design.
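
Clerc's constriction factor mentioned above has a closed form, sketched below with the commonly used setting c1 = c2 = 2.05; the GA layer that evolves these parameters in the paper is not reproduced here.

```python
import math

def constriction_factor(c1=2.05, c2=2.05):
    """Clerc's constriction coefficient: chi = 2 / |2 - phi - sqrt(phi^2 - 4*phi)| with phi = c1 + c2 > 4."""
    phi = c1 + c2
    if phi <= 4.0:
        raise ValueError("Clerc's analysis requires c1 + c2 > 4")
    return 2.0 / abs(2.0 - phi - math.sqrt(phi * phi - 4.0 * phi))

# With the commonly used c1 = c2 = 2.05, chi is roughly 0.7298; the velocity
# update becomes v = chi * (v + c1*r1*(pbest - x) + c2*r2*(gbest - x)).
print(constriction_factor())
```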


2017 ◽  
Vol 29 (1) ◽  
pp. 127-142
Author(s):  
Rkia Fajr ◽  
Abdelaziz Bouroumi

This paper introduces a new variant of the particle swarm optimization (PSO) algorithm designed for the global optimization of multidimensional functions. The goal of this variant, called ImPSO, is to improve the exploration and exploitation abilities of the algorithm by introducing a new operation into the iterative search process. The use of this operation is governed by a stochastic rule that ensures either the exploration of new regions of the search space or the exploitation of good intermediate solutions. The proposed method is inspired by collaborative human learning and uses as its starting point a basic PSO variant with a constriction factor and velocity clamping. Simulation results showing the ability of ImPSO to locate the global optima of multidimensional functions are presented for 10 well-known benchmark functions from CEC-2013 and CEC-2005. These results are compared with the PSO variant used as the starting point, three other PSO variants, one of which is based on human learning strategies, and three alternative evolutionary computing methods.

