Boundary Search for Constrained Numerical Optimization Problems in ACO Algorithms

Author(s):
Guillermo Leguizamón, Carlos A. Coello Coello
2009, Vol 26 (04), pp. 479-502

Author(s):
Bin Liu, Teqi Duan, Yongming Li

In this paper, a novel genetic algorithm, the dynamic ring-like agent genetic algorithm (RAGA), is proposed for solving global numerical optimization problems. RAGA combines a ring-like agent structure with dynamic neighboring genetic operators to achieve better optimization capability. Each agent in the ring-like structure represents a candidate solution to the optimization problem and evolves by interacting with its neighboring agents. Through the dynamic neighboring genetic operators, agents compete and cooperate with their neighbors, and they can also use knowledge to increase their energy. Global numerical optimization problems are among the most important benchmarks for verifying the performance of evolutionary algorithms, particularly genetic algorithms, and are of great interest to researchers in the field. In the experiments, several complex benchmark functions were used for optimization, and several popular GAs were used for comparison. To better compare the two agent-based GAs (MAGA, the multi-agent genetic algorithm, and RAGA), experiments were conducted across several dimensionalities, from low to high. The experimental results show that RAGA is not only suitable for such optimization problems but also produces more precise and more stable results.
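As a rough illustration of the ring-like agent idea described above, the following Python sketch places candidate solutions on a ring and lets each agent compete and recombine with its immediate neighbors. It is a minimal sketch under assumed details (a toy sphere objective, averaging crossover, Gaussian mutation; names such as ring_step are hypothetical), not the operators actually used in RAGA.

# Minimal sketch of a ring-structured agent GA on a toy sphere objective.
# The operators and parameters below are illustrative, not those of RAGA.
import random

def sphere(x):
    # Toy objective to minimize; lower is better (an agent's "energy" can be
    # taken as the negative of this value).
    return sum(v * v for v in x)

def ring_step(ring, lower, upper, mutate_prob=0.1):
    n = len(ring)
    new_ring = []
    for i, agent in enumerate(ring):
        # An agent's neighborhood is its left and right neighbor on the ring.
        neighbors = [ring[(i - 1) % n], ring[(i + 1) % n]]
        best_nb = min(neighbors, key=sphere)
        if sphere(best_nb) < sphere(agent):
            # Competition lost: cooperate with the better neighbor by
            # recombining (here, simple coordinate-wise averaging).
            child = [(a + b) / 2.0 for a, b in zip(agent, best_nb)]
        else:
            child = list(agent)
        # Small Gaussian mutation, clipped to the search bounds.
        child = [min(upper, max(lower, v + random.gauss(0, 0.1)))
                 if random.random() < mutate_prob else v
                 for v in child]
        new_ring.append(child)
    return new_ring

if __name__ == "__main__":
    random.seed(0)
    lower, upper, dim, size = -5.0, 5.0, 10, 20
    ring = [[random.uniform(lower, upper) for _ in range(dim)] for _ in range(size)]
    for _ in range(200):
        ring = ring_step(ring, lower, upper)
    print("best objective:", min(sphere(a) for a in ring))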


1996, Vol 4 (1), pp. 1-32
Author(s):
Zbigniew Michalewicz, Marc Schoenauer

Evolutionary computation techniques have received a great deal of attention regarding their potential as optimization techniques for complex numerical functions. However, they have not produced a significant breakthrough in the area of nonlinear programming due to the fact that they have not addressed the issue of constraints in a systematic way. Only recently have several methods been proposed for handling nonlinear constraints by evolutionary algorithms for numerical optimization problems; however, these methods have several drawbacks, and the experimental results on many test cases have been disappointing. In this paper we (1) discuss difficulties connected with solving the general nonlinear programming problem; (2) survey several approaches that have emerged in the evolutionary computation community; and (3) provide a set of 11 interesting test cases that may serve as a handy reference for future methods.
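As a concrete example of the kind of constraint-handling technique this survey covers, the Python sketch below applies a static penalty function, one of the simplest and most common approaches, to a toy constrained problem. The problem, penalty weight, and evolutionary loop are assumptions made purely for illustration; the paper itself surveys several such methods rather than prescribing this one.

# Minimal sketch of static-penalty constraint handling inside a simple
# evolutionary loop; objective, constraints, and parameters are illustrative.
import random

def objective(x):
    # Toy objective: minimize the sum of squares.
    return sum(v * v for v in x)

def constraints(x):
    # Toy constraint set g_j(x) <= 0; here a single constraint sum(x) >= 1.
    return [1.0 - sum(x)]

def penalized_fitness(x, weight=1000.0):
    # Add a quadratic penalty for each violated constraint.
    violation = sum(max(0.0, g) ** 2 for g in constraints(x))
    return objective(x) + weight * violation

def evolve(pop_size=30, dim=5, gens=300, lower=-5.0, upper=5.0):
    pop = [[random.uniform(lower, upper) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(gens):
        # (mu + lambda)-style step: mutate everyone, keep the best half by
        # penalized fitness, so infeasible points survive only if promising.
        children = [[min(upper, max(lower, v + random.gauss(0, 0.2))) for v in ind]
                    for ind in pop]
        pop = sorted(pop + children, key=penalized_fitness)[:pop_size]
    return pop[0]

if __name__ == "__main__":
    random.seed(1)
    best = evolve()
    print("best:", [round(v, 3) for v in best], "f:", round(objective(best), 4))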


Author(s):
T. O. Ting, H. C. Ting, T. S. Lee

In this work, a hybrid Taguchi-Particle Swarm Optimization (TPSO) is proposed to solve global numerical optimization problems with continuous and discrete variables. The hybrid algorithm combines the well-known Particle Swarm Optimization algorithm with the established Taguchi method, which has long been an important tool for robust design. This paper presents the improvements obtained despite the simplicity of the hybridization process. The Taguchi method is run only once in every PSO iteration and therefore has no significant impact on computational cost. It creates a more diversified population, which also helps the algorithm avoid premature convergence. The proposed method is effectively applied to 13 benchmark problems. The results of this study show drastic improvements over the standard PSO algorithm on high-dimensional benchmark functions involving continuous and discrete variables.
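The sketch below illustrates this hybridization in Python under stated assumptions: a standard global-best PSO update, plus one extra recombination step per iteration that keeps, dimension by dimension, whichever parent's coordinate gives the better objective value. This dimension-wise selection is a crude stand-in for the paper's orthogonal-array-based Taguchi experiment, and all names and parameter values here are hypothetical.

# Minimal sketch of a PSO loop with a simplified Taguchi-style recombination
# step run once per iteration; an illustration, not the exact TPSO procedure.
import math
import random

def rastrigin(x):
    # Common continuous benchmark used purely for demonstration.
    return 10 * len(x) + sum(v * v - 10 * math.cos(2 * math.pi * v) for v in x)

def taguchi_like_combine(a, b, f):
    # Keep each coordinate from whichever parent improves the objective, a
    # crude stand-in for the orthogonal-array factor analysis of the Taguchi
    # method.
    child = list(a)
    for d in range(len(a)):
        trial = list(child)
        trial[d] = b[d]
        if f(trial) < f(child):
            child = trial
    return child

def tpso(f, dim=10, swarm=20, iters=300, lo=-5.12, hi=5.12, w=0.7, c1=1.5, c2=1.5):
    X = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(swarm)]
    V = [[0.0] * dim for _ in range(swarm)]
    P = [list(x) for x in X]                 # personal bests
    g = list(min(P, key=f))                  # global best
    for _ in range(iters):
        for i in range(swarm):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                V[i][d] = (w * V[i][d] + c1 * r1 * (P[i][d] - X[i][d])
                           + c2 * r2 * (g[d] - X[i][d]))
                X[i][d] = min(hi, max(lo, X[i][d] + V[i][d]))
            if f(X[i]) < f(P[i]):
                P[i] = list(X[i])
        # Taguchi-like step, run only once per iteration on two random particles.
        i, j = random.sample(range(swarm), 2)
        child = taguchi_like_combine(X[i], X[j], f)
        if f(child) < f(P[i]):
            X[i], P[i] = list(child), list(child)
        g = list(min(P + [g], key=f))
    return g

if __name__ == "__main__":
    random.seed(2)
    best = tpso(rastrigin)
    print("best value:", round(rastrigin(best), 4))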

