Genetic Range Genetic Algorithms to Obtain Quasi-Optimum Solutions

Author(s):  
Masao Arakawa ◽  
Tomoyuki Miyashita ◽  
Hiroshi Ishikawa

In some cases of developing a new product, the response surface of an objective function is not a single-peaked function; it is often multi-peaked. In that case, designers would like to have not only the global optimum solution but also as many local optimum and/or quasi-optimum solutions as possible, so that they can select one of them in light of conditions that were not taken into account prior to optimization. Although this information is quite useful, it is not easy to obtain in a single optimization run. In this study, we propose a screening of the fitness function in genetic algorithms (GA) that changes the fitness function during the search, which requires the GA to have greater flexibility in searching. Genetic Range Genetic Algorithms maintain a number of search ranges within a single generation, just as there are many species in the wild, so they can accommodate both a global search range and local search ranges with different fitness functions. In this paper, we demonstrate the effectiveness of the proposed method through simple benchmark test problems.
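
A minimal sketch of the core idea of keeping several search ranges alive within one generation is given below. The recentre-and-shrink rule, the parameter values, and the use of plain random sampling inside each range are illustrative assumptions, not the authors' formulation; the point is only that each range behaves like a separate species and yields its own quasi-optimum candidate.

```python
import numpy as np

def multi_range_search(objective, dim, n_ranges=4, pop_per_range=20, generations=100):
    """Illustrative sketch: several coexisting search ranges per generation.
    Each range keeps its own centre and width; ranges recentre on their best
    member and slowly shrink, so some stay global while others become local."""
    rng = np.random.default_rng(0)
    centres = rng.uniform(-5.0, 5.0, (n_ranges, dim))
    widths = np.full((n_ranges, dim), 5.0)
    archive = []  # best point found inside each range (quasi-optimum candidates)

    for _ in range(generations):
        archive = []
        for r in range(n_ranges):
            lo, hi = centres[r] - widths[r], centres[r] + widths[r]
            pop = rng.uniform(lo, hi, (pop_per_range, dim))
            fit = np.array([objective(x) for x in pop])
            best = pop[np.argmin(fit)]               # minimisation
            archive.append((float(objective(best)), best))
            centres[r] = best                        # recentre the range
            widths[r] *= 0.95                        # slowly localise the search
    return sorted(archive, key=lambda t: t[0])       # global + local candidates

# Example on a multi-peaked (Rastrigin-like) surface
f = lambda x: 10 * len(x) + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))
print(multi_range_search(f, dim=2)[:3])
```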

Author(s):  
Masao Arakawa ◽  
Hiroshi Yamakawa

Abstract In the design process, it is much better to give designers as many acceptable and reasonable design candidates as possible, so that they can derive their preferences and meet the requirements of multiple high-performance objectives. In this paper, we propose a new method that provides such acceptable design candidates, based on genetic algorithms (GAs) and incorporating strategies for adaptation. GAs are search algorithms based on the mechanics of natural selection and natural genetics. Yet they are no simple random walk; they efficiently exploit historical information to speculate on new search points with expected improved performance. We therefore expect GAs to yield multiple acceptable and near-optimum design candidates, just as there are many species among natural living things. In a simple GA, however, after a certain number of generations the population clusters near one or two specific local optimum solutions (hopefully including the global optimum). In order to obtain multiple acceptable solutions, we need to run the GA while preserving variation in the characters of individuals. The proposed method tries to find multiple acceptable solutions by incorporating strategies for adaptation into the GA process: the food chain, a foraging strategy, a death strategy, and a reproduction strategy. As a numerical example, we apply the proposed method to a simple multi-objective optimization problem and demonstrate its efficiency.


2016 ◽  
pp. 450-475
Author(s):  
Dipti Singh ◽  
Kusum Deep

Due to their wide applicability and easy implementation, genetic algorithms (GAs) are preferred over other techniques for solving many optimization problems. When a local search (LS) is incorporated into a genetic algorithm, the result is known as a memetic algorithm (MA). In this chapter, a new variant of a single-meme memetic algorithm is proposed to improve the efficiency of the GA. Although GAs are good at finding the global optimum of nonlinear optimization problems, they usually converge slowly and sometimes suffer from premature convergence. LS algorithms, on the other hand, are fast but poor global searchers. To exploit the good qualities of both techniques, they are combined in a way that reaps the maximum benefit of each approach: the population of individuals evolves using the GA, and LS is then applied to obtain the optimal solution. To validate these claims, the method is tested on five benchmark problems of dimensions 10, 30, and 50, and a comparison between the GA and the MA is made.
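
The following sketch illustrates the general GA-then-local-search structure of a single-meme memetic algorithm. The specific operators (tournament selection, arithmetic crossover, Gaussian mutation, a hill-climbing refinement) and all parameter values are illustrative choices, not the operators used in the chapter.

```python
import numpy as np

def memetic_sketch(f, dim=10, pop_size=50, gens=200, bound=5.12):
    """Hedged sketch of a single-meme MA: a plain real-coded GA evolves the
    population, then a simple hill-climbing local search refines the best
    individual found by the GA."""
    rng = np.random.default_rng(1)
    pop = rng.uniform(-bound, bound, (pop_size, dim))
    fitness = lambda p: np.array([f(x) for x in p])

    for _ in range(gens):
        fit = fitness(pop)
        # Tournament selection (size 2)
        idx = rng.integers(0, pop_size, (pop_size, 2))
        parents = pop[np.where(fit[idx[:, 0]] < fit[idx[:, 1]], idx[:, 0], idx[:, 1])]
        # Arithmetic crossover followed by sparse Gaussian mutation
        mates = parents[rng.permutation(pop_size)]
        alpha = rng.random((pop_size, 1))
        children = alpha * parents + (1 - alpha) * mates
        children += rng.normal(0, 0.1, children.shape) * (rng.random(children.shape) < 0.1)
        pop = np.clip(children, -bound, bound)

    # Local search (hill climbing) applied to the best GA individual
    best = pop[np.argmin(fitness(pop))]
    step = 0.5
    while step > 1e-6:
        trial = np.clip(best + rng.normal(0, step, dim), -bound, bound)
        if f(trial) < f(best):
            best = trial
        else:
            step *= 0.9
    return best, f(best)

rastrigin = lambda x: 10 * len(x) + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))
print(memetic_sketch(rastrigin, dim=10))
```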


2006 ◽  
Vol 12 (1) ◽  
pp. 89-115 ◽  
Author(s):  
Hideaki Suzuki ◽  
Hidefumi Sawai ◽  
Wojciech Piaseczny

A chemical genetic algorithm (CGA) in which several types of molecules (information units) react with each other in a cell is proposed. Not only the information in DNA, but also smaller molecules responsible for the transcription and translation of DNA into amino acids, are adaptively changed during evolution, which optimizes the fundamental mapping from binary substrings in DNA (genotype) to real values for a parameter set (phenotype). Through the struggle between cells containing a DNA unit and small molecular units, the codes (DNA) and the interpreter (the small molecular units) coevolve, and a specific output function, from which a cell's fitness is evaluated, is optimized. To demonstrate the effectiveness of the CGA, it is applied to a set of variable-separable and variable-inseparable problems, and it is shown that the CGA can robustly solve a wide range of optimization problems regardless of their fitness characteristics. To ascertain the optimization of the genotype-to-phenotype mapping by the CGA, we also conduct analytical experiments for some problems while observing the basin size of a global optimum solution in the binary genotype space. The results show that the CGA effectively augments the basin size, makes it easier for evolution to find a path to the global optimum solution, and enhances the GA's evolvability during evolution.
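
As a rough illustration of the central idea, the sketch below coevolves a decoder (a per-parameter scale and offset) together with the DNA bitstring, so that the genotype-to-phenotype mapping itself adapts. This is a drastic simplification: the decoder parameters are only a stand-in for the CGA's transcription/translation molecules, and the mutation and selection rules are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(2)
BITS, N_PARAMS = 8, 2

def decode(dna_bits, scale, offset):
    """Map binary substrings (genotype) to real parameters (phenotype).
    The mapping (scale, offset) is part of the individual and evolves
    alongside the DNA -- an illustrative stand-in for the adaptive mapping."""
    ints = dna_bits.reshape(N_PARAMS, BITS) @ (2 ** np.arange(BITS)[::-1])
    return offset + scale * (ints / (2**BITS - 1))

def evolve(objective, pop_size=40, gens=300):
    dna = rng.integers(0, 2, (pop_size, N_PARAMS * BITS))
    scale = rng.uniform(1.0, 10.0, (pop_size, N_PARAMS))
    offset = rng.uniform(-5.0, 0.0, (pop_size, N_PARAMS))
    for _ in range(gens):
        phen = np.array([decode(dna[i], scale[i], offset[i]) for i in range(pop_size)])
        fit = np.array([objective(p) for p in phen])
        survivors = np.argsort(fit)[: pop_size // 2]          # minimisation
        dna, scale, offset = dna[survivors], scale[survivors], offset[survivors]
        # Offspring: copy survivors, then mutate the DNA *and* the decoder
        dna = np.vstack([dna, dna ^ (rng.random(dna.shape) < 0.02)])
        scale = np.vstack([scale, scale * np.exp(rng.normal(0, 0.1, scale.shape))])
        offset = np.vstack([offset, offset + rng.normal(0, 0.1, offset.shape)])
    best = np.argmin([objective(decode(dna[i], scale[i], offset[i])) for i in range(pop_size)])
    return decode(dna[best], scale[best], offset[best])

sphere = lambda x: np.sum((x - 1.2) ** 2)
print(evolve(sphere))
```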


Author(s):  
Foo Fong Yeng ◽  
Soo Kum Yoke ◽  
Azrina Suhaimi

The genetic algorithm is an algorithm that imitates the process of natural evolution to solve optimization problems. Candidate (feasible) solutions are encoded as chromosomes and subjected to genetic operators during evolution. The evolution itself is a search for the optimum solution, and the search stops when a stopping criterion is met. The fittest chromosome of the last generation is then declared the optimum solution. However, this solution might be a local optimum rather than the global optimum. Hence, an appropriate stopping criterion is important so that the search is not ended before a global optimum solution is found. In this paper, saturation of population fitness is proposed as a stopping criterion for ending the search. The proposed stopping criterion was compared with a conventional stopping criterion, repetition of the fittest chromosome, under various parameter settings. The results show that the proposed stopping criterion performs better than the conventional one.
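
A minimal sketch of a saturation-style stopping check is shown below. The window length and tolerance, and the use of mean population fitness as the monitored quantity, are illustrative parameter choices rather than the paper's exact settings.

```python
def fitness_saturated(history, window=20, tol=1e-6):
    """Stopping check in the spirit of the proposed criterion: stop when the
    population's mean fitness has stagnated (saturated) over a window of
    generations."""
    if len(history) < window:
        return False
    recent = history[-window:]
    return max(recent) - min(recent) < tol

# Usage inside a GA loop (sketch):
# mean_fitness_history.append(population_fitness.mean())
# if fitness_saturated(mean_fitness_history):
#     break
```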


2013 ◽  
Vol 51 ◽  
Author(s):  
Anisa Waganda Ragalo

This paper proposes Polyandry, a new nature-inspired modification to canonical Genetic Programming (GP). Polyandry aims to improve evolvability in GP. Evolvability is a critically important GP trait, the maintenance of which determines whether GP arrives at the global optimum solution. Specifically, evolvability is defined as the ability of the genetic operators employed in GP to produce offspring that are fitter than their parents. When GP fails to exhibit evolvability, further adaptation of the GP individuals towards the global optimum solution becomes impossible. Polyandry improves evolvability by improving the typically disruptive standard GP crossover operator. The algorithm employs a dual strategy towards this goal. The chief part of this strategy is the incorporation of genetic material from multiple mating partners into broods of offspring. Given such a brood, the offspring in the brood compete according to a culling function, which we make equivalent to the main GP fitness function. Polyandry's incorporation of genetic material from multiple GP individuals into broods of offspring represents a more aggressive search for building-block information, which leads to an enhanced explorative capability in both GP structural space and fitness space. The second component of the Polyandry strategy is the attempt of multiple crossover points, in order to find crossover points that minimize building-block disruption from parents to offspring. This strategy is also employed by a similar algorithm, Brood Recombination. We conduct experiments to compare Polyandry with canonical GP. Our experiments demonstrate that Polyandry consistently exhibits better evolvability than canonical GP; as a consequence, it achieves higher success rates and finds solutions faster. Given certain brood size settings, Polyandry therefore requires less computational effort to arrive at the global optimum solution than canonical GP. We also conduct experiments to compare Polyandry with the analogous nature-inspired modification to canonical GP, Brood Recombination, whose adoption to improve evolvability is ubiquitous in the GP literature. Our results demonstrate that Polyandry consistently exhibits better evolvability than Brood Recombination, owing to its more explorative nature in both structural and fitness space. As a result, although the two algorithms exhibit similar success rates, the former consistently discovers globally optimal GP solutions significantly faster than the latter. The key advantage of Polyandry over Brood Recombination is therefore faster solution discovery, and Polyandry consequently requires less computational effort to arrive at the global optimum solution than Brood Recombination. Further, we establish that the computational effort exerted by Polyandry is competitively low relative to other Evolutionary Algorithm (EA) methodologies in the literature. We conclude that Polyandry is a better alternative to both canonical GP and Brood Recombination with regard to the achievement and maintenance of evolvability.
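
The sketch below illustrates only the brood-from-multiple-partners and culling mechanism. For brevity it uses linear bitstring genomes and one-point crossover instead of GP trees and subtree crossover, so it is an analogy to the Polyandry step rather than the algorithm itself; the brood size, partner count, and operators are all illustrative assumptions.

```python
import random
random.seed(3)

def polyandry_brood(parent, partners, fitness, crossover, brood_size=8):
    """Sketch of the Polyandry idea: one parent mates with *several* partners,
    each pairing produces several offspring, and the resulting brood is culled
    with the main fitness function so only the best offspring survives."""
    brood = []
    for partner in partners:
        for _ in range(brood_size // len(partners)):
            brood.append(crossover(parent, partner))
    return min(brood, key=fitness)            # culling function == fitness

def one_point(p, q):                          # illustrative crossover operator
    cut = random.randrange(1, len(p))
    return p[:cut] + q[cut:]

target = [1, 0, 1, 1, 0, 1, 0, 0]
fitness = lambda g: sum(a != b for a, b in zip(g, target))   # minimise mismatches
pop = [[random.randint(0, 1) for _ in target] for _ in range(10)]
child = polyandry_brood(pop[0], random.sample(pop[1:], 3), fitness, one_point)
print(child, fitness(child))
```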


2013 ◽  
Vol 421 ◽  
pp. 507-511 ◽  
Author(s):  
Nurezayana Zainal ◽  
Azlan Mohd Zain ◽  
Nor Haizan Mohamed Radzi ◽  
Amirmudin Udin

The Glowworm Swarm Optimization (GSO) algorithm is a derivative-free metaheuristic that mimics the glow behaviour of glowworms and can efficiently capture multiple maxima of multimodal functions. Nevertheless, it has several weaknesses in locating the global optimum solution, for instance low calculation accuracy, easily falling into local optima, a low convergence success rate, and slow convergence speed. This paper reviews GSO as a swarm-intelligence method for solving optimization problems. Recently, the GSO algorithm has been applied to multimodal function optimization problems in various fields of today's industry, such as science, engineering, networking, and robotics. From the review, we conclude that the basic GSO algorithm, GSO with modifications or improvements, and GSO with hybridization have all been considered by previous researchers to solve optimization problems. However, based on the literature review, most researchers applied the basic GSO algorithm rather than its variants.
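
For readers unfamiliar with GSO, the sketch below shows the textbook update loop: luciferin decay and deposit, probabilistic movement toward a brighter neighbour, and a dynamic local decision range. Parameter values are illustrative defaults, and this is a bare-bones sketch rather than any of the reviewed variants.

```python
import numpy as np

def gso_sketch(J, dim=2, n=50, iters=200, rho=0.4, gamma=0.6,
               step=0.03, beta=0.08, n_t=5, r_s=3.0):
    """Minimal sketch of basic GSO (maximisation): each glowworm updates its
    luciferin from the objective value, then moves toward a probabilistically
    chosen brighter neighbour inside its decision range."""
    rng = np.random.default_rng(4)
    x = rng.uniform(-3, 3, (n, dim))
    luc = np.full(n, 5.0)
    r_d = np.full(n, r_s)
    for _ in range(iters):
        luc = (1 - rho) * luc + gamma * np.array([J(xi) for xi in x])
        new_x = x.copy()
        for i in range(n):
            d = np.linalg.norm(x - x[i], axis=1)
            nbrs = np.where((d < r_d[i]) & (luc > luc[i]))[0]
            if len(nbrs):
                p = (luc[nbrs] - luc[i]) / np.sum(luc[nbrs] - luc[i])
                j = rng.choice(nbrs, p=p)
                new_x[i] = x[i] + step * (x[j] - x[i]) / np.linalg.norm(x[j] - x[i])
            r_d[i] = min(r_s, max(0.0, r_d[i] + beta * (n_t - len(nbrs))))
        x = new_x
    return x  # glowworms cluster around the local/global maxima

peaks = lambda v: np.exp(-((v[0] - 1)**2 + (v[1] - 1)**2)) + 2 * np.exp(-((v[0] + 1)**2 + (v[1] + 1)**2))
print(gso_sketch(peaks).round(2)[:5])
```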


2015 ◽  
Vol 713-715 ◽  
pp. 1491-1494 ◽  
Author(s):  
Zhi Qiang Gao ◽  
Li Xia Liu ◽  
Wei Wei Kong ◽  
Xiao Hong Wang

A novel composite framework of the Cuckoo Search (CS) and Particle Swarm Optimization (PSO) algorithms, called CS-PSO, is proposed in this paper. In CS-PSO, random initialization is replaced by a chaotic system, and the cuckoos then share the optima in a global-best solution pool with the particles in PSO to improve parallel cooperation and social interaction. Furthermore, the Cloud Model, known for its ability to transform qualitative concepts into quantitative numerical values, is adopted to exploit the neighbourhood of the local solutions obtained from the global-best solution pool. Benchmark test results show that CS-PSO converges to the global optimum solution more rapidly and accurately than other algorithms, especially on high-dimensional problems.
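
The sketch below illustrates two of the ingredients in such a composite framework: chaotic initialization and a shared global-best pool. The logistic map is used here as a stand-in for the unspecified chaotic system, the pool size is arbitrary, and the Cloud Model exploitation step is omitted entirely, so this is an assumption-laden outline rather than the CS-PSO algorithm itself.

```python
import numpy as np

def logistic_chaos_init(n, dim, lo, hi, mu=4.0, seed=0.7):
    """Chaotic initialisation via the logistic map: successive map values
    (which cover (0, 1) irregularly) are rescaled to the search bounds."""
    z = np.empty((n, dim))
    val = seed
    for i in range(n):
        for d in range(dim):
            val = mu * val * (1 - val)          # logistic map iteration
            z[i, d] = val
    return lo + (hi - lo) * z

# Shared global-best pool: both the CS and PSO populations push their best
# points here and read the overall best back (a simplified cooperation step).
pool = []
def share(x, f):
    pool.append((f, tuple(x)))
    pool.sort(key=lambda t: t[0])               # minimisation
    del pool[10:]                               # keep the 10 best entries
def pool_best():
    return np.array(pool[0][1])

pts = logistic_chaos_init(5, 3, -5.0, 5.0)
for p in pts:
    share(p, float(np.sum(p**2)))
print(pool_best())
```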


F1000Research ◽  
2013 ◽  
Vol 2 ◽  
pp. 139
Author(s):  
Maxinder S Kanwal ◽  
Avinash S Ramesh ◽  
Lauren A Huang

The fields of molecular biology and neurobiology have advanced rapidly over the last two decades. These advances have resulted in the development of large proteomic and genetic databases that need to be searched for the prediction, early detection, and treatment of neuropathologies and other genetic disorders. This need, in turn, has pushed the development of novel computational algorithms that are critical for searching genetic databases. One successful approach has been to use artificial intelligence and pattern recognition algorithms, such as neural networks and optimization algorithms (e.g. genetic algorithms). The focus of this paper is on optimizing the design of genetic algorithms by using an adaptive mutation rate based on the fitness function of passing generations. We propose a novel pseudo-derivative-based mutation rate operator designed to allow a genetic algorithm to escape local optima and continue on to the global optimum. Once proven successful, this algorithm can be implemented to solve real problems in neurology and bioinformatics. As a first step towards this goal, we tested our algorithm on two 3-dimensional surfaces with multiple local optima but only one global optimum, as well as on the N-queens problem, an applied problem in which the function that maps the curve is implicit. For all tests, the adaptive mutation rate allowed the genetic algorithm to find the globally optimal solution, performing significantly better than other search methods, including genetic algorithms that implement fixed mutation rates.
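
A hedged sketch of a pseudo-derivative-driven mutation schedule is given below: when the best fitness has barely changed over recent generations (a near-zero "derivative", suggesting a local optimum), the mutation rate is raised to encourage escape. The window, threshold, and rate values are illustrative guesses and may differ from the operator proposed in the paper.

```python
def adaptive_mutation_rate(best_fitness_history, base_rate=0.01, max_rate=0.3,
                           window=5, eps=1e-8):
    """Return a mutation rate based on the recent slope of the best fitness:
    near-zero slope -> raise the rate to escape a likely local optimum."""
    if len(best_fitness_history) <= window:
        return base_rate
    slope = abs(best_fitness_history[-1] - best_fitness_history[-1 - window]) / window
    return base_rate if slope > eps else max_rate

# Usage inside a GA loop (sketch):
# history.append(best_fitness)
# mutation_rate = adaptive_mutation_rate(history)
```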


2018 ◽  
Vol XIX (1) ◽  
pp. 393-399
Author(s):  
Maniu R

The size of the chromosome population is an essential parameter of genetic algorithms. A large population involves a large amount of computation but provides thorough coverage of the search space and an increased probability of finding the global optimum. A small population, requiring fewer operations, makes the algorithm run quickly but increases the probability of converging to a local optimum to the detriment of the global one. This paper proposes the use of an adaptive, variable chromosome population size. We demonstrate that this approach accelerates the algorithm without negatively affecting the quality of the solutions provided.
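
One simple way such an adaptive scheme can be organised is sketched below: shrink the population while the search is improving (fewer evaluations per generation) and grow it again when progress stalls (broader coverage). The rule, the growth/shrink factors, and the bounds are illustrative assumptions, not the scheme used in the paper.

```python
import random

def adapt_population_size(pop, fitness_fn, improving, min_size=20, max_size=200,
                          shrink=0.9, grow=1.2, sampler=None):
    """Shrink the population when the run is improving, grow it when stalled;
    keep the fittest individuals when shrinking, add fresh random ones when
    growing (if a sampler is supplied)."""
    target = int(len(pop) * (shrink if improving else grow))
    target = max(min_size, min(max_size, target))
    if target < len(pop):                      # keep the fittest individuals
        pop = sorted(pop, key=fitness_fn)[:target]
    elif sampler is not None:                  # top up with fresh random individuals
        pop = list(pop) + [sampler() for _ in range(target - len(pop))]
    return pop

random.seed(5)
pop = [random.uniform(-5, 5) for _ in range(100)]
f = lambda x: x * x
pop = adapt_population_size(pop, f, improving=True)
print(len(pop))   # 90
```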

