Reverse Search Particle Swarm Optimization to Deal with Dynamic Optimization Problems

2014 ◽  
Vol 571-572 ◽  
pp. 245-251
Author(s):  
Li Chen ◽  
Wei Jiang Wang ◽  
Lei Yao

Multiswarm approaches are widely used in the literature to deal with dynamic optimization problems (DOPs). Each swarm tries to find a promising area where a peak usually lies, and many good results have been obtained. However, steep peaks are difficult to find with multiswarm approaches, which prevents the performance of such algorithms from being improved further. To address this bottleneck, this paper introduces the idea of the sequential niche technique into the traditional multiswarm approach and proposes a novel algorithm, reverse space search multiswarm particle swarm optimization (RSPSO), for DOPs. RSPSO uses the information about the peaks found by the coarse search of the traditional multiswarm approach to modify the original fitness function. A newly generated subswarm, the reverse search subswarm, evolves with the modified fitness function, while the other subswarms continue to evolve with the traditional multiswarm approach; the two kinds of subswarm evolve in cooperation. The reverse search subswarm tends to find much steeper peaks, so more of the promising areas where peaks lie are explored. Extensive experiments on the moving peaks benchmark (MPB) show that the introduction of reverse search enhances the ability to find peaks, that RSPSO significantly outperforms traditional multiswarm approaches, and that it adapts more robustly to dynamic environments with a wider range of change severities.
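The abstract does not give the exact form of the modified fitness function; as an illustration only, the Python sketch below shows one common sequential-niche-style modification, derating the original fitness around peaks already located by the coarse multiswarm search so that a reverse search subswarm is pushed toward unexplored (often steeper) peaks. The derating radius, the linear derating form, and all function names are assumptions, not taken from the paper.

```python
import numpy as np

def derated_fitness(x, original_fitness, found_peaks, radius=5.0):
    """Illustrative sequential-niche-style modification (assumed form, not the paper's):
    the original fitness is suppressed near peaks already found by the coarse
    multiswarm search, driving a reverse search subswarm toward other areas.
    Assumes a nonnegative fitness to be maximized."""
    value = original_fitness(x)
    for peak in found_peaks:
        d = np.linalg.norm(np.asarray(x) - np.asarray(peak))
        if d < radius:
            # Linearly derate the fitness inside the radius of a known peak.
            value *= d / radius
    return value

# Toy two-peak landscape: the derated function hides the already-found peak at the origin.
original = lambda x: max(10.0 - np.linalg.norm(x), 8.0 - np.linalg.norm(np.asarray(x) - 20.0))
print(derated_fitness(np.zeros(2), original, found_peaks=[np.zeros(2)]))  # 0.0 instead of 10.0
```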

2014 ◽  
Vol 571-572 ◽  
pp. 232-236
Author(s):  
Li Chen ◽  
Ju Ming Liu ◽  
Jun Mei Ouyang

This paper proposes a multi-thread-memory particle swarm optimization (MTM-PSO) for dynamic optimization problems (DOPs). It introduces multi-thread memory into multi-swarm approaches to deal with DOPs: an external memory and a multi-swarm approach are adopted together. The best particles near the peaks in each environment are saved in the memory according to its storage and update strategy. The multi-thread memory makes full use of the local and global optima found in each environment, allowing the good information from previous evolution to transfer adequately to the current population. With the multi-thread memory and its storage strategy, the particles in the memory always hold the best local and global optima found up to the current environment, and this information benefits future evolution. Experiments on the moving peaks benchmark (MPB) with different numbers of peaks show that MTM-PSO performs better than dynamic optimization algorithms that use only the multi-swarm approach.
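The exact storage strategy is not spelled out in the abstract; the sketch below is only a rough Python illustration of an external memory that keeps, per environment, the best solutions found and reinjects them after an environment change. The class design, capacity limit, and method names are hypothetical.

```python
class MultiThreadMemory:
    """Illustrative external memory (assumed design): one 'thread' per environment,
    holding the best particles found near the peaks of that environment."""

    def __init__(self, capacity_per_env=10):
        self.capacity = capacity_per_env
        self.threads = []  # one list per environment; entries are (fitness, position)

    def start_environment(self):
        # Open a new memory thread when an environment change is detected.
        self.threads.append([])

    def store(self, fitness, position):
        # Keep only the best `capacity` entries for the current environment.
        thread = self.threads[-1]
        thread.append((fitness, list(position)))
        thread.sort(key=lambda e: e[0], reverse=True)
        del thread[self.capacity:]

    def reinject(self, n):
        # Return the n best stored positions across all past environments,
        # to be re-evaluated and inserted into the current population.
        entries = [e for thread in self.threads for e in thread]
        entries.sort(key=lambda e: e[0], reverse=True)
        return [pos for _, pos in entries[:n]]
```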


2020 ◽  
Vol 2020 ◽  
pp. 1-10
Author(s):  
Dingcai Shen ◽  
Bei Qian ◽  
Min Wang

In the optimization of problems in dynamic environments, algorithms need not only to find the global optimal solution in a specific environment but also to continuously track the moving optima across environments. To address this requirement, a species conservation-based particle swarm optimization (PSO), combined with a spatial neighbourhood best searching technique, is proposed. The algorithm employs a species conservation technique to save the optima found throughout the search space; in the subsequent evolution these saved optima are either transferred into the new population or replaced by a better individual found within a certain distance. Each particle in the population is attracted by its personal best and by the best solution nearby, selected on the basis of Euclidean distance rather than index. An experimental study based on the moving peaks benchmark compares the proposed algorithm with several state-of-the-art algorithms widely used for dynamic optimization problems. The experimental results show the effectiveness and efficiency of the proposed algorithm in tracking the moving optima in dynamic environments.
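As a rough illustration of the spatial neighbourhood best idea described above (a sketch, not the authors' code), the following Python fragment selects each particle's neighbourhood best from its k nearest particles by Euclidean distance instead of by ring-topology index; the value of k and the function name are assumptions.

```python
import numpy as np

def spatial_neighbourhood_best(positions, fitness, k=5):
    """For each particle, return the position of the best particle among its k
    nearest neighbours in Euclidean space (distance-based, not index-based).
    Assumes a maximization problem."""
    positions = np.asarray(positions, dtype=float)
    fitness = np.asarray(fitness, dtype=float)
    bests = np.empty_like(positions)
    for i in range(len(positions)):
        d = np.linalg.norm(positions - positions[i], axis=1)
        neighbours = np.argsort(d)[:k + 1]                 # k nearest plus the particle itself
        best = neighbours[np.argmax(fitness[neighbours])]  # best-fitness neighbour
        bests[i] = positions[best]
    return bests
```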


Author(s):  
Hrvoje Markovic ◽  
Fangyan Dong ◽  
Kaoru Hirota

A parallel multi-population metaheuristic optimization framework called Concurrent Societies, inspired by human intellectual evolution, is proposed. It uses population-based metaheuristics to evolve its populations and fitness function approximations as representations of knowledge. By utilizing iteratively refined approximations it reduces the number of required evaluations and, as a byproduct, produces models of the fitness function. The proposed framework is implemented as two Concurrent Societies: one based on a genetic algorithm and one based on particle swarm optimization, both using k-nearest neighbor regression as the fitness approximation. The performance is evaluated on 10 standard test problems and compared to other commonly used metaheuristics. Results show that using the framework considerably increases efficiency (by a factor of 7.6 to 977) and effectiveness (absolute error reduced by more than a few orders of magnitude). The proposed framework is intended for optimization problems with expensive fitness functions, such as optimization in design and interactive optimization.
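The abstract states that both Concurrent Societies use k-nearest neighbor regression as the fitness approximation; the sketch below (using scikit-learn, an assumption about tooling rather than the authors' implementation) shows the basic surrogate pattern: archive the true evaluations made so far, fit a k-NN regressor on them, and pre-screen candidates with the cheap approximation so only the most promising ones receive an expensive true evaluation.

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(0)
expensive_fitness = lambda x: -np.sum((x - 0.5) ** 2)  # stand-in for a costly evaluation

# Archive of all true evaluations made so far.
X_seen = rng.random((20, 5))
y_seen = np.array([expensive_fitness(x) for x in X_seen])

# Fit the k-NN surrogate on the archive; it is refined as the archive grows.
surrogate = KNeighborsRegressor(n_neighbors=3).fit(X_seen, y_seen)

# Pre-screen a batch of candidates with the cheap approximation...
candidates = rng.random((100, 5))
predicted = surrogate.predict(candidates)

# ...and spend true evaluations only on the few most promising candidates.
top = candidates[np.argsort(predicted)[-5:]]
y_top = np.array([expensive_fitness(x) for x in top])
X_seen, y_seen = np.vstack([X_seen, top]), np.concatenate([y_seen, y_top])
```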

