Parallel quantum evolutionary algorithm based on chaotic searching technique for multi-modal function optimization

Author(s): Xiao-ming You ◽ Xing-wai Miao ◽ Sheng Liu
2011 ◽ Vol 2011 ◽ pp. 1-12

Author(s): Lhassane Idoumghar ◽ Mahmoud Melkemi ◽ René Schott ◽ Maha Idrissi Aouad

The paper presents a novel hybrid evolutionary algorithm that combines Particle Swarm Optimization (PSO) and Simulated Annealing (SA). When PSO reaches a local optimum, all particles gather around it, and escaping from this local optimum becomes difficult. To avoid this premature convergence, we present a new hybrid evolutionary algorithm, called HPSO-SA, based on the idea that PSO ensures fast convergence while SA pulls the search out of local optima thanks to its strong local-search ability. The proposed HPSO-SA algorithm is validated on ten standard multimodal benchmark functions, on which it obtains significant improvements, and the results are compared with those obtained by existing hybrid PSO-SA algorithms. We also provide two versions of HPSO-SA (sequential and distributed) for minimizing the energy consumption of embedded-system memories. The two versions of HPSO-SA reduce the energy consumption of memories by 76% up to 98% compared to Tabu Search (TS). Moreover, the distributed version of HPSO-SA provides execution-time savings of about 73% up to 84% on a cluster of 4 PCs.
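As a rough illustration of the kind of PSO/SA coupling described above, the following Python sketch runs a standard PSO update and then applies a simulated-annealing move to the global best. The loop structure, parameter values, and function names are assumptions made for illustration, not the authors' implementation.

```python
import math
import random

def hybrid_pso_sa(f, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5,
                  temp0=1.0, cooling=0.95, step=0.1, bounds=(-5.0, 5.0)):
    # Sketch only: PSO drives fast convergence; an SA move on the global best
    # accepts occasional uphill steps to escape local optima. All parameter
    # values here are illustrative, not the paper's settings.
    lo, hi = bounds
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    best, best_val = gbest[:], gbest_val          # best solution seen so far
    temp = temp0

    for _ in range(iters):
        for i in range(n_particles):              # standard PSO update
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val

        # SA move on the global best: accept a worse neighbour with
        # probability exp(-delta / T), then cool the temperature.
        cand = [min(hi, max(lo, x + random.gauss(0.0, step))) for x in gbest]
        cand_val = f(cand)
        delta = cand_val - gbest_val
        if delta < 0 or random.random() < math.exp(-delta / max(temp, 1e-12)):
            gbest, gbest_val = cand, cand_val
        temp *= cooling

        if gbest_val < best_val:
            best, best_val = gbest[:], gbest_val

    return best, best_val

# Example: minimize the sphere function in 5 dimensions.
# solution, value = hybrid_pso_sa(lambda x: sum(v * v for v in x), dim=5)
```

Tracking the best-ever solution separately from the SA-perturbed global best keeps the sketch from returning a point that the annealing step has already degraded.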


2021 ◽ pp. 1-34
Author(s): Joost Huizinga ◽ Jeff Clune

Abstract An important challenge in reinforcement learning is to solve multimodal problems, where agents have to act in qualitatively different ways depending on the circumstances. Because multimodal problems are often too difficult to solve directly, it is often helpful to define a curriculum, which is an ordered set of subtasks that can serve as the stepping stones for solving the overall problem. Unfortunately, choosing an effective ordering for these subtasks is difficult, and a poor ordering can reduce the performance of the learning process. Here, we provide a thorough introduction and investigation of the Combinatorial Multi-Objective Evolutionary Algorithm (CMOEA), which allows all combinations of subtasks to be explored simultaneously. We compare CMOEA against three algorithms that can similarly optimize on multiple subtasks simultaneously: NSGA-II, NSGA-III and ϵ-Lexicase Selection. The algorithms are tested on a function-optimization problem with two subtasks, a simulated multimodal robot locomotion problem with six subtasks and a simulated robot maze navigation problem where a hundred random mazes are treated as subtasks. On these problems, CMOEA either outperforms or is competitive with the controls. As a separate contribution, we show that adding a linear combination over all objectives can improve the ability of the control algorithms to solve these multimodal problems. Lastly, we show that CMOEA can leverage auxiliary objectives more effectively than the controls on the multimodal locomotion task. In general, our experiments suggest that CMOEA is a promising algorithm for solving multimodal problems.
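The following Python sketch illustrates the core bookkeeping suggested by the description above: one bin per combination of subtasks, with every new individual competing in every bin. The within-bin scoring (product of subtask performances), the bin size, and the reproduction loop are illustrative assumptions rather than the paper's exact procedure, and the naive enumeration of all combinations is only practical for a handful of subtasks.

```python
import random
from itertools import combinations

def cmoea_bins(subtasks, population, evaluate, mutate, bin_size=10, generations=100):
    # Sketch of CMOEA-style bookkeeping: one bin for every non-empty combination
    # of subtasks, and every new individual competes in every bin. The within-bin
    # score and the fixed bin size are illustrative assumptions.
    combos = [c for r in range(1, len(subtasks) + 1)
              for c in combinations(subtasks, r)]
    bins = {c: [] for c in combos}

    def insert(ind):
        scores = evaluate(ind)                     # dict: subtask -> score in [0, 1]
        for combo in combos:
            combo_score = 1.0
            for task in combo:
                combo_score *= scores[task]
            bins[combo].append((combo_score, ind))
            bins[combo].sort(key=lambda pair: pair[0], reverse=True)
            del bins[combo][bin_size:]             # keep only the best per bin

    for ind in population:
        insert(ind)

    for _ in range(generations):
        combo = random.choice(combos)              # pick a parent from a random bin
        if bins[combo]:
            _, parent = random.choice(bins[combo])
            insert(mutate(parent))

    # The bin keyed by the full set of subtasks holds candidates for the
    # overall multimodal problem.
    return bins[tuple(subtasks)]
```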


2021 ◽ Vol 6 (4 (114)) ◽ pp. 6-14
Author(s): Maan Afathi

The main purpose of using a hybrid evolutionary algorithm is to reach optimal values and achieve goals that traditional methods cannot. Because different evolutionary computation techniques have different advantages and capabilities, researchers integrate more than one algorithm into a hybrid form to improve on what each algorithm can achieve when working alone. In this paper, we propose a new hybrid algorithm that combines a genetic algorithm (GA) and particle swarm optimization (PSO) with a fuzzy logic control (FLC) approach for function optimization. Fuzzy logic is applied to switch dynamically between the evolutionary algorithms in an attempt to improve performance. The HEF hybrid evolutionary algorithm is compared to GA, PSO, GAPSO, and PSOGA. The comparison uses a variety of measurement functions; in addition to strongly convex functions, these functions can be uniformly distributed or not, and they are valuable for evaluating our approach. Runs of 500, 1000, and 1500 iterations were used for each function. The HEF algorithm's efficiency was tested on four functions. The new algorithm often yields the best solution: HEF was the best in 75% of all tests. This method is superior to conventional methods in terms of efficiency.
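As a hedged illustration of the switching idea described above, the sketch below uses a tiny two-rule fuzzy controller over assumed inputs (recent improvement rate and population diversity, both scaled to [0, 1]) to decide whether the next phase runs the GA step or the PSO step. The membership functions, rule base, and inputs are assumptions, not the HEF paper's controller.

```python
def fuzzy_switch(improvement, diversity):
    # Illustrative FLC-style switch between GA and PSO phases. Inputs and
    # rules are assumptions, not the paper's exact fuzzy controller.
    def low(x):  return max(0.0, 1.0 - 2.0 * x)    # simple triangular memberships
    def high(x): return max(0.0, 2.0 * x - 1.0)

    # Rule 1: low improvement and low diversity -> favour GA (more exploration).
    ga_strength = min(low(improvement), low(diversity))
    # Rule 2: high improvement or high diversity -> favour PSO (exploitation).
    pso_strength = max(high(improvement), high(diversity))

    return "GA" if ga_strength >= pso_strength else "PSO"


def hef_loop(run_ga_step, run_pso_step, measure, iters=1000):
    # Alternate GA and PSO steps, letting the fuzzy switch pick each phase.
    for _ in range(iters):
        improvement, diversity = measure()
        phase = fuzzy_switch(improvement, diversity)
        (run_ga_step if phase == "GA" else run_pso_step)()
```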


Author(s): Kazuyuki Masutomi ◽ Yuichi Nagata ◽ Isao Ono ◽ ...

This paper presents an evolutionary algorithm for Black-Box Chance-Constrained Function Optimization (BBCCFO). BBCCFO seeks to minimize the expectation of the objective function under the constraint that the feasibility probability is higher than a user-defined constant, in uncertain environments where the mathematical expressions of the objective function and constraints are not given explicitly. In BBCCFO, only the objective function values of solutions and their feasibilities are available, because the algebraic expressions of the objective function and constraints cannot be used. Among approaches to BBCCFO, a method based on an evolutionary algorithm proposed by Loughlin and Ranjithan shows relatively good performance in a real-world application, but this conventional method requires many samples to obtain a good solution because it estimates the expectation of the objective function and the feasibility probability of an individual by sampling that individual multiple times. In this paper, we propose a new evolutionary algorithm that estimates the expectation of the objective function and the feasibility probability of an individual by using the other individuals in its neighborhood. We show the effectiveness of the proposed method through experiments both on benchmark problems and on the problem of balancing an inverted pendulum with a neural network controller.
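A minimal sketch of the neighborhood-based estimation idea described above: instead of re-sampling one individual many times, pool the single noisy observations of its nearest neighbours to estimate the expected objective value and the feasibility probability. The Euclidean distance metric, the neighbourhood size k, and the data layout are illustrative assumptions.

```python
import math

def neighborhood_estimates(population, samples, k=10):
    # Sketch: estimate E[f(x)] and P[feasible] for each individual from the
    # single (objective_value, is_feasible) observations of its k nearest
    # neighbours, rather than from repeated samples of the same individual.
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    estimates = []
    for i, ind in enumerate(population):
        # k nearest individuals (including ind itself) by Euclidean distance.
        neighbours = sorted(range(len(population)),
                            key=lambda j: dist(ind, population[j]))[:k]
        objs = [samples[j][0] for j in neighbours]
        feas = [1.0 if samples[j][1] else 0.0 for j in neighbours]
        expected_obj = sum(objs) / len(objs)          # estimate of E[f(x)]
        feasibility_prob = sum(feas) / len(feas)      # estimate of P[feasible]
        estimates.append((expected_obj, feasibility_prob))
    return estimates
```

The design point is simply that each individual is evaluated once, and its neighbours' evaluations stand in for repeated samples, which is where the sample savings over the resampling approach would come from.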

