A Comparison of the Fixed and Floating Building Block Representation in the Genetic Algorithm

1996 ◽  
Vol 4 (2) ◽  
pp. 169-193 ◽  
Author(s):  
Annie S. Wu ◽  
Robert K. Lindsay

This article compares the traditional, fixed problem representation style of a genetic algorithm (GA) with a new floating representation in which the building blocks of a problem are not fixed at specific locations on the individuals of the population. In addition, the effects of noncoding segments on both of these representations are studied. Noncoding segments are a computational model of noncoding deoxyribonucleic acid, and floating building blocks mimic the location independence of genes. The fact that these structures are prevalent in natural genetic systems suggests that they may provide some advantages to the evolutionary process. Our results show that there is a significant difference in how GAs solve a problem in the fixed and floating representations. Genetic algorithms are able to maintain a more diverse population with the floating representation. The combination of noncoding segments and floating building blocks appears to encourage a GA to take advantage of its parallel search and recombination abilities.
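The floating representation can be made concrete with a small sketch. Everything here (the start tag, the block length, and the decoding rule) is a hypothetical illustration of location-independent building blocks surrounded by noncoding segments, not the authors' exact encoding:

```python
# Sketch of decoding a "floating" representation: a building block is
# announced by a start tag and may appear anywhere on the chromosome;
# all other bits act as noncoding segments.

TAG = "11011"      # hypothetical start tag marking a building block
BLOCK_LEN = 4      # hypothetical building-block length


def decode(chromosome: str) -> list[str]:
    """Collect every building block that follows a start tag."""
    blocks = []
    i = 0
    while i + len(TAG) + BLOCK_LEN <= len(chromosome):
        if chromosome[i:i + len(TAG)] == TAG:
            start = i + len(TAG)
            blocks.append(chromosome[start:start + BLOCK_LEN])
            i = start + BLOCK_LEN   # skip past the decoded block
        else:
            i += 1                  # noncoding bit: ignore and move on
    return blocks


# The same block "1010" is recovered whether it sits at the start of the
# chromosome or floats into the middle of noncoding material.
print(decode("0001101110100100"))
print(decode("1101110100000000"))
```

Because decoding searches for the tag rather than reading fixed positions, the block is found wherever it floats, which is the location independence the article studies.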

1999 ◽  
Vol 7 (4) ◽  
pp. 331-352 ◽  
Author(s):  
Dirk Thierens

Scalable evolutionary computation has become an intensively studied research topic in recent years. The issue of scalability is predominant in any field of algorithmic design, but it became particularly relevant for the design of competent genetic algorithms once the scalability problems of simple genetic algorithms were understood. Here we present some of the work that has aided in getting a clear insight into the scalability problems of simple genetic algorithms. In particular, we discuss the important issue of building block mixing. We show how the need for mixing places a boundary in the GA parameter space that, together with the boundary from the schema theorem, delimits the region where the GA converges reliably to the optimum in problems of bounded difficulty. This region shrinks rapidly with increasing problem size unless the building blocks are tightly linked in the problem coding structure. In addition, we look at how straightforward extensions of the simple genetic algorithm, namely elitism, niching, and restricted mating, do not significantly improve its scalability.


2017 ◽  
Vol 25 (2) ◽  
pp. 237-274 ◽  
Author(s):  
Dirk Sudholt

We reinvestigate a fundamental question: How effective is crossover in genetic algorithms in combining building blocks of good solutions? Although this has been discussed controversially for decades, we are still lacking a rigorous and intuitive answer. We provide such answers for royal road functions and OneMax, where every bit is a building block. For the latter, we show that using crossover makes every (μ+λ) genetic algorithm at least twice as fast as the fastest evolutionary algorithm using only standard bit mutation, up to small-order terms and for moderate μ and λ. Crossover is beneficial because it can capitalize on mutations that have both beneficial and disruptive effects on building blocks: crossover is able to repair the disruptive effects of mutation in later generations. Compared to mutation-based evolutionary algorithms, this makes multibit mutations more useful. Introducing crossover changes the optimal mutation rate on OneMax from 1/n to (1+√5)/(2n) ≈ 1.618/n. This holds both for uniform crossover and k-point crossover. Experiments and statistical tests confirm that our findings apply to a broad class of building block functions.
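The kind of algorithm analyzed can be sketched minimally: a (2+1)-style GA on OneMax that creates one offspring per step by uniform crossover of the two parents followed by standard bit mutation. The population size, selection rule, and evaluation cap below are our illustrative choices, not the paper's exact setup:

```python
import random

random.seed(1)


def onemax(x):
    """Count of one-bits; every bit is a building block."""
    return sum(x)


def mutate(x, rate):
    """Standard bit mutation: flip each bit independently with `rate`."""
    return [b ^ (random.random() < rate) for b in x]


def uniform_crossover(p, q):
    """Pick each bit uniformly from one of the two parents."""
    return [random.choice(pair) for pair in zip(p, q)]


def two_plus_one_ga(n=64, rate=None, max_evals=100_000):
    """Sketch of a (2+1) GA with uniform crossover on OneMax.
    Returns the number of offspring evaluated before hitting the optimum."""
    if rate is None:
        rate = 1 / n
    pop = [[random.randint(0, 1) for _ in range(n)] for _ in range(2)]
    for evals in range(max_evals):
        child = mutate(uniform_crossover(*pop), rate)
        pop = sorted(pop + [child], key=onemax, reverse=True)[:2]
        if onemax(pop[0]) == n:
            return evals
    return None
```

With this skeleton one can, for example, compare runtimes at mutation rate 1/n against higher rates to see the effect the paper proves.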


Proceedings ◽  
2019 ◽  
Vol 46 (1) ◽  
pp. 18
Author(s):  
Habib Izadkhah ◽  
Mahjoubeh Tajgardan

Software clustering is usually used for program comprehension. Since it is an NP-complete problem, several genetic algorithms have been proposed to solve it. In the literature, there exist some objective functions (i.e., fitness functions) which are used by genetic algorithms for clustering. These objective functions determine the quality of each clustering obtained in the evolutionary process of the genetic algorithm in terms of cohesion and coupling. The major drawbacks of these objective functions are the inability (1) to consider utility artifacts, and (2) to apply to other software graphs such as the artifact feature dependency graph. To overcome the existing objective functions' limitations, this paper presents a new objective function. The new objective function is based on information theory, aiming to produce a clustering in which information loss is minimized. To apply the new objective function, we have developed a genetic algorithm that aims to maximize it. The proposed genetic algorithm, named ILOF, has been compared with some other well-known genetic algorithms. The results obtained confirm the high performance of the proposed algorithm on nine software systems. The performance achieved is quite satisfactory and promising for the tested benchmarks.
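For context, the cohesion/coupling objectives the paper aims to replace can be sketched with a simplified cluster-factor score in the spirit of modularization-quality (MQ) measures. The formula below is a simplification for illustration, not the exact MQ definition and not the proposed information-loss objective:

```python
from collections import defaultdict


def modularization_quality(edges, clusters):
    """Simplified MQ-style score. `clusters` maps node -> cluster id;
    `edges` is an iterable of (u, v) dependency pairs."""
    intra = defaultdict(int)   # edges inside a cluster (cohesion)
    inter = defaultdict(int)   # edge endpoints crossing clusters (coupling)
    for u, v in edges:
        if clusters[u] == clusters[v]:
            intra[clusters[u]] += 1
        else:
            inter[clusters[u]] += 1
            inter[clusters[v]] += 1
    score = 0.0
    for c in set(clusters.values()):
        i, e = intra[c], inter[c]
        if i or e:
            score += i / (i + e / 2)   # per-cluster factor; higher = more cohesive
    return score
```

A fully cohesive clustering (all dependencies internal) scores its maximum, while a clustering whose edges all cross cluster boundaries scores zero; a GA for clustering maximizes such a score over assignments of nodes to clusters.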


Author(s):  
I Wayan Supriana

The knapsack problem is an optimization problem that we often encounter in everyday life: a person must select objects to place into a container of limited space or capacity. The knapsack problem can be solved by various optimization algorithms, one of which is the genetic algorithm. Genetic algorithms solve problems by mimicking the evolution of living creatures. The components of a genetic algorithm consist of a population of individuals, each a candidate solution to the knapsack problem. The evolutionary process starts with selection, followed by crossover and mutation of the individuals to obtain a new population, and is repeated until the resulting solution meets an optimality criterion. The problem highlighted in this research is how to solve the knapsack problem by applying a genetic algorithm. Testing of the system that was built shows that the genetic algorithm can optimize the placement of goods within the available container capacity, and that the optimization can be maximized with appropriate input parameters.
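As an illustration of the approach described, here is a minimal GA for a small 0/1 knapsack instance. The instance, the parameters, and the operators (truncation selection, one-point crossover, bit-flip mutation) are hypothetical choices for the sketch, not those of the paper:

```python
import random

random.seed(42)

# Hypothetical instance: (value, weight) per item and a capacity limit.
ITEMS = [(60, 10), (100, 20), (120, 30), (30, 5), (70, 15)]
CAPACITY = 50


def fitness(genome):
    """Total value of selected items, or 0 when capacity is exceeded."""
    value = sum(v for (v, w), g in zip(ITEMS, genome) if g)
    weight = sum(w for (v, w), g in zip(ITEMS, genome) if g)
    return value if weight <= CAPACITY else 0


def evolve(pop_size=20, generations=100, p_mut=0.1):
    pop = [[random.randint(0, 1) for _ in ITEMS] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:pop_size // 2]               # truncation selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, len(ITEMS))     # one-point crossover
            child = a[:cut] + b[cut:]
            child = [g ^ (random.random() < p_mut) for g in child]  # mutation
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)


best = evolve()
print(best, fitness(best))
```

Because survivors are carried over unchanged, the best feasible selection found so far is never lost, mirroring the evolutionary loop of selection, crossover, and mutation described in the abstract.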


2001 ◽  
Vol 9 (1) ◽  
pp. 93-124 ◽  
Author(s):  
Eric B. Baum ◽  
Dan Boneh ◽  
Charles Garrett

We analyze the performance of a genetic algorithm (GA) we call Culling, and a variety of other algorithms, on a problem we refer to as the Additive Search Problem (ASP). We show that the problem of learning the Ising perceptron is reducible to a noisy version of ASP. Noisy ASP is the first problem we are aware of where a genetic-type algorithm bests all known competitors. We generalize ASP to k-ASP to study whether GAs will achieve “implicit parallelism” in a problem with many more schemata. GAs fail to achieve this implicit parallelism, but we describe an algorithm we call Explicitly Parallel Search that succeeds. We also compute the optimal culling point for selective breeding, which turns out to be independent of the fitness function and the population distribution. We also analyze a mean-field-theoretic algorithm that performs similarly to Culling on many problems. These results provide insight into when and how GAs can beat competing methods.
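Culling-style selective breeding can be illustrated generically: at each generation, keep only the fraction of the population above the culling point and refill by breeding from the survivors. The additive fitness, weights, and parameters below are illustrative stand-ins, not the paper's ASP instance or its exact algorithm:

```python
import random

random.seed(0)

# Additive fitness: each bit contributes an independent weight when set,
# in the spirit of an additive search problem (hypothetical weights).
N = 40
WEIGHTS = [random.uniform(0.5, 1.5) for _ in range(N)]


def fitness(x):
    return sum(w for w, b in zip(WEIGHTS, x) if b)


def cull_and_breed(pop_size=100, cull_point=0.1, generations=60, p_mut=0.02):
    """Keep the top `cull_point` fraction each generation and refill the
    population with mutated copies of the survivors."""
    pop = [[random.randint(0, 1) for _ in range(N)] for _ in range(pop_size)]
    keep = max(2, int(cull_point * pop_size))
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:keep]
        pop = survivors + [
            [b ^ (random.random() < p_mut) for b in random.choice(survivors)]
            for _ in range(pop_size - keep)
        ]
    return max(pop, key=fitness)


best = cull_and_breed()
```

The culling point (here 0.1) is the parameter whose optimal value the paper shows to be independent of the fitness function and the population distribution.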


1999 ◽  
Vol 7 (2) ◽  
pp. 109-124 ◽  
Author(s):  
Chris Stephens ◽  
Henri Waelbroeck

In the light of a recently derived evolution equation for genetic algorithms we consider the schema theorem and the building block hypothesis. We derive a schema theorem based on the concept of effective fitness showing that schemata of higher than average effective fitness receive an exponentially increasing number of trials over time. The equation makes manifest the content of the building block hypothesis showing how fit schemata are constructed from fit sub-schemata. However, we show that, generically, there is no preference for short, low-order schemata. In the case where schema reconstruction is favored over schema destruction, large schemata tend to be favored. As a corollary of the evolution equation we prove Geiringer's theorem.
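For reference, the classical schema theorem that the effective-fitness version refines can be stated as follows (standard notation, not the paper's exact formulation): for a schema H with m(H, t) instances at generation t, order o(H), defining length δ(H), string length ℓ, crossover probability p_c, and mutation probability p_m,

```latex
\mathbb{E}\bigl[m(H,t+1)\bigr] \;\ge\; m(H,t)\,\frac{f(H,t)}{\bar{f}(t)}
\left[1 - p_c\,\frac{\delta(H)}{\ell-1} - o(H)\,p_m\right]
```

The effective-fitness formulation folds the fitness ratio and the bracketed survival factor into a single quantity, so that schemata of above-average effective fitness receive exponentially increasing trials regardless of their length or order, which is why no generic preference for short, low-order schemata emerges.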


Author(s):  
Dirk Thierens ◽  
Mark De Berg

What makes a problem hard for a genetic algorithm (GA)? How does one need to design a GA to solve a problem satisfactorily? How does the designer include domain knowledge in the GA? When is a GA suitable for solving a problem? These are all legitimate questions. This chapter will offer a view on genetic algorithms that stresses the role of so-called linkage. Linkage refers to dependencies between the variables of a solution that force one to treat those variables as one “block,” since the best setting of each individual variable can only be determined by looking at the other variables as well. The genes that represent these variables then have to be transferred together. When these genes are set to their optimal values, they constitute a building block. Building blocks are transferred as a whole during recombination, and the building blocks over all the genes make up the optimal solution. As will become apparent, knowing the linkage of a building block is a big advantage and allows one to design efficient GAs. Sadly, in the majority of problems, the linkage is unknown. This observation has given rise to a lot of development in linkage learning algorithms (for an example, see Kargupta 1996). However, there is a specific class of problems that allows for relatively easy determination of linkage: spatial problems, because in these problems the linkage is geometrically defined. We will focus in this chapter on certain hard problems that arise in the context of geographical information systems and for which the linkage can be easily found. Specifically, we will fully detail the design of a GA for the map labeling problem, an important problem in automated cartography. The map labeling problem for point features is to find a placement for the labels of a set of points such that the number of labels that do not intersect other labels is maximized.
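The geometric linkage idea can be sketched on a one-dimensional stand-in for point-feature label placement. The placement model and the coordinate-cut crossover below are our simplifications, not the chapter's design; the point is that cutting in space, rather than in gene order, keeps the genes of nearby (and hence interacting) labels together:

```python
import random

random.seed(3)

# Hypothetical 1-D stand-in for map labeling: each point places a
# fixed-width label to its left (gene 0) or right (gene 1); fitness is
# the number of labels that intersect no other label.
POINTS = sorted(random.uniform(0, 100) for _ in range(30))
WIDTH = 3.0


def intervals(genome):
    return [(p - WIDTH, p) if g == 0 else (p, p + WIDTH)
            for p, g in zip(POINTS, genome)]


def fitness(genome):
    iv = intervals(genome)
    free = 0
    for i, (a, b) in enumerate(iv):
        if all(b <= c or d <= a for j, (c, d) in enumerate(iv) if j != i):
            free += 1   # label i overlaps no other label
    return free


def geometric_crossover(x, y):
    """Cut at a random coordinate so spatially linked genes (labels of
    nearby points) are inherited together from the same parent."""
    cut = random.uniform(0, 100)
    return [gx if p < cut else gy for p, gx, gy in zip(POINTS, x, y)]
```

A conventional one-point crossover on gene indices would only respect this linkage by accident; the geometric cut respects it by construction, which is what makes the linkage of spatial problems easy to exploit.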


2001 ◽  
Vol 9 (1) ◽  
pp. 71-92 ◽  
Author(s):  
John S. Gero ◽  
Vladimir Kazakov

We present an extension to the standard genetic algorithm (GA), which is based on concepts of genetic engineering. The motivation is to discover useful and harmful genetic materials and then execute an evolutionary process in such a way that the population becomes increasingly composed of useful genetic material and increasingly free of the harmful genetic material. Compared to the standard GA, it provides some computational advantages as well as a tool for automatic generation of hierarchical genetic representations specifically tailored to suit certain classes of problems.


Author(s):  
Bo Ping Wang ◽  
Jahau Lewis Chen

Abstract Genetic algorithms are adaptive procedures that find solutions to problems by an evolutionary process that mimics natural selection. In this paper, the use of genetic algorithms for the selection of optimal support locations of beams is presented. Both elastic and rigid supports are considered. The approach of adapting genetic algorithms to the optimal design process is described. This approach is used to optimize the locations of three supports for beams with three types of boundary conditions.


2016 ◽  
Vol 8 (2) ◽  
pp. 99-113 ◽  
Author(s):  
Mahjoubeh Tajgardan ◽  
Habib Izadkhah ◽  
Shahriar Lotfi

Abstract Software clustering is usually used for program understanding. Since software clustering is an NP-complete problem, a number of Genetic Algorithms (GAs) have been proposed for solving it. In the literature, there are two well-known GAs for software clustering, namely Bunch and DAGC, that use genetic operators such as crossover and mutation to better search the solution space and generate better solutions during the evolutionary process. The major drawbacks of these operators are (1) the difficulty of defining the operators, (2) the difficulty of determining their probability rates, and (3) the lack of a guarantee that building blocks are maintained. Estimation of Distribution Algorithm (EDA) based approaches, by removing the crossover and mutation operators and maintaining building blocks, can be used to solve these problems of genetic algorithms. This approach creates probabilistic models from individuals to generate the new population during the evolutionary process, aiming to achieve more success in solving the problem. The aim of this paper is to recast EDA for software clustering problems, which can overcome the existing genetic operators' limitations. To achieve this aim, we propose a new distribution probability function and a new EDA based algorithm for software clustering. To the best of the authors' knowledge, EDA has not previously been investigated for the software clustering problem. The proposed EDA has been compared with the two well-known genetic algorithms on twelve benchmarks. Experimental results show that the proposed approach provides more accurate results, improves the speed of convergence, and provides better stability when compared against existing genetic algorithms such as Bunch and DAGC.
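The general EDA recipe described here (drop crossover and mutation, fit a probability model to the selected individuals, and sample the next population from it) can be sketched with a univariate model in the style of UMDA. A univariate model is only a generic illustration; it does not capture multi-gene building blocks the way the paper's proposed distribution function is intended to:

```python
import random

random.seed(7)


def umda(fitness, n, pop_size=50, select=25, generations=40):
    """Univariate EDA sketch: sample each generation from a per-gene
    probability model fitted to the selected parents."""
    probs = [0.5] * n          # initial model: each gene is 1 with prob 0.5
    best = None
    for _ in range(generations):
        pop = [[int(random.random() < p) for p in probs]
               for _ in range(pop_size)]
        pop.sort(key=fitness, reverse=True)
        parents = pop[:select]
        # Re-estimate the marginal probability of a 1 at each position,
        # clamped away from 0/1 so no gene fixes prematurely.
        probs = [min(0.95, max(0.05, sum(ind[i] for ind in parents) / select))
                 for i in range(n)]
        if best is None or fitness(pop[0]) > fitness(best):
            best = pop[0]
    return best


# OneMax (sum of bits) as a stand-in objective; a clustering EDA would
# plug in a clustering quality function and a clustering encoding here.
best = umda(sum, 30)
```

Selection pressure enters only through which individuals the model is fitted to, so there are no operator probabilities to tune, which is exactly the limitation of crossover and mutation that the abstract highlights.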

