Evaluation of Genetic Algorithm as Learning System in Rigid Space Interpretation

2016 ◽  
pp. 1184-1228 ◽  
Author(s):  
Bhupesh Kumar Singh

Genetic Algorithm (GA) (a structured framework of metaheuristics) has been used in various tasks such as search optimization and machine learning. Theoretically, there should be a sound framework for genetic algorithms which can interpret/explain the various facts associated with them. There are various theories of the working of GA, though all are subject to criticism. Hence the approach adopted here is that a legitimate theory of GA must be able to explain the learning process (a special case of successive approximation) of GA. The analytical method of approximating some known function is to expand a complicated function as an infinite series of terms containing some simpler (or otherwise useful) functions. Such infinite approximations allow the error to be made arbitrarily small by taking a progressively greater number of terms into consideration. In the process of learning in an unknown environment, the function to be learned is known only by its form over the observation space; the problem of learning the possible form of the function is termed the experience problem. Various learning paradigms have established their legitimacy through the rigid space interpretation of the concentration of measure and the Dvoretzky theorem. Hence it is proposed that the same criterion should be applied to explain the learning capability of GA, that the various formalisms explaining the working of GA should be evaluated against this criterion, and that the learning capability can be used to demonstrate the probable capability of GA to perform beyond the limit cast by the No Free Lunch Theorem.
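
The series-approximation idea above can be made concrete with a minimal Python sketch (not from the paper): approximating exp(x) by progressively longer partial sums of its Taylor series, so that the error becomes arbitrarily small as more terms are included. The choice of function, evaluation point, and term counts are illustrative assumptions.

import math

def exp_partial_sum(x, n_terms):
    """Partial sum of the Taylor series of exp(x) around 0."""
    return sum(x ** k / math.factorial(k) for k in range(n_terms))

x = 1.5
for n in (2, 4, 8, 16):
    approx = exp_partial_sum(x, n)
    # The approximation error shrinks as more terms are taken into account.
    print(f"{n:2d} terms: approx={approx:.8f}  error={abs(math.exp(x) - approx):.2e}")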


Author(s):  
Lidong Wu

The No-Free-Lunch theorem is an interesting and important theoretical result in machine learning. Based on the philosophy of the No-Free-Lunch theorem, we discuss extensively the limitations of a data-driven approach to solving NP-hard problems.


Author(s):  
William H. Hsu

A genetic algorithm (GA) is a method used to find approximate solutions to difficult search, optimization, and machine learning problems (Goldberg, 1989) by applying principles of evolutionary biology to computer science. Genetic algorithms use biologically derived techniques such as inheritance, mutation, natural selection, and recombination, and form a particular class of evolutionary algorithms. Genetic algorithms are typically implemented as a computer simulation in which a population of abstract representations (called chromosomes) of candidate solutions (called individuals) to an optimization problem evolves toward better solutions. Traditionally, solutions are represented in binary as strings of 0s and 1s, but other encodings are also possible. The evolution starts from a population of completely random individuals and proceeds in generations. In each generation, multiple individuals are stochastically selected from the current population and modified (mutated or recombined) to form a new population, which becomes the current population in the next iteration of the algorithm.
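
To make the description above concrete, here is a minimal Python sketch of a genetic algorithm on binary strings. The OneMax objective, tournament selection, single-point crossover, and parameter values are illustrative assumptions, not details taken from the article.

import random

def fitness(chromosome):
    # Toy objective ("OneMax"): maximize the number of 1s in the bit string.
    return sum(chromosome)

def tournament_select(population, k=3):
    # Stochastic selection: the fittest of k randomly drawn individuals wins.
    return max(random.sample(population, k), key=fitness)

def crossover(a, b):
    # Single-point recombination of two parent chromosomes.
    point = random.randrange(1, len(a))
    return a[:point] + b[point:]

def mutate(chromosome, rate=0.01):
    # Flip each bit with a small probability.
    return [1 - g if random.random() < rate else g for g in chromosome]

def genetic_algorithm(n_bits=32, pop_size=50, generations=100):
    # Start from a population of completely random individuals.
    population = [[random.randint(0, 1) for _ in range(n_bits)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        # Select, recombine, and mutate to form the next generation.
        population = [mutate(crossover(tournament_select(population),
                                       tournament_select(population)))
                      for _ in range(pop_size)]
    return max(population, key=fitness)

best = genetic_algorithm()
print(sum(best), "ones out of", len(best))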


Machine learning and artificial intelligence have evolved beyond simple hype and have integrated themselves into business and popular conversation as an increasing number of smart applications profoundly transform the way we work and live. This article defines machine learning in terms of its potential benefits and pitfalls for a nontechnical audience, and gives examples of popular and powerful machine learning algorithms: k-means clustering, principal component analysis, and artificial neural networks. Three important philosophical challenges of machine learning are introduced: the no free lunch theorem, the curse of dimensionality, and the bias–variance trade-off.
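
For readers who want to see one of the algorithms named above in action, the following short sketch runs k-means clustering with scikit-learn; the synthetic data set and parameter choices are illustrative assumptions only.

from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

# Synthetic 2-D data with three clusters (illustrative only).
X, _ = make_blobs(n_samples=300, centers=3, random_state=0)

# Fit k-means with k=3 and inspect the learned cluster centres and labels.
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
print(km.cluster_centers_)
print(km.labels_[:10])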


2016 ◽  
Vol 28 (1) ◽  
pp. 216-228 ◽  
Author(s):  
David Gómez ◽  
Alfonso Rojas

A sizable amount of research has been done to improve the mechanisms for knowledge extraction such as machine learning classification or regression. Quite unintuitively, the no free lunch (NFL) theorem states that all optimization problem strategies perform equally well when averaged over all possible problems. This fact seems to clash with the effort put forth toward better algorithms. This letter explores empirically the effect of the NFL theorem on some popular machine learning classification techniques over real-world data sets.
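
The flavour of such an empirical comparison can be sketched as follows; the classifiers and scikit-learn toy data sets below are illustrative assumptions, not the techniques or real-world data sets used in the letter.

from sklearn.datasets import load_breast_cancer, load_iris, load_wine
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

classifiers = {
    "naive Bayes": GaussianNB(),
    "k-NN": KNeighborsClassifier(),
    "decision tree": DecisionTreeClassifier(random_state=0),
}
datasets = {
    "iris": load_iris(),
    "wine": load_wine(),
    "breast cancer": load_breast_cancer(),
}

# In the NFL spirit, no single classifier dominates on every data set.
for d_name, data in datasets.items():
    for c_name, clf in classifiers.items():
        score = cross_val_score(clf, data.data, data.target, cv=5).mean()
        print(f"{d_name:14s} {c_name:14s} accuracy={score:.3f}")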


2011 ◽  
Vol 271-273 ◽  
pp. 818-822
Author(s):  
Hai Bin Shi ◽  
Yi Li

Using genetic algorithms to learn classification rules in data mining is a worthy research topic, and this paper studies it in depth. Firstly, we combine the genetic algorithm with machine learning, analyze the architecture of the genetic-algorithm-based classification system, and give its concrete development structure. Secondly, we propose a data classification rule learning system based on an adaptive genetic algorithm, which can learn classification rules accurately from the dataset. Finally, the standard Play Tennis dataset is used for a closed test; after learning, the system obtained three classification rules, each with a 100% accuracy rate, which fully demonstrates the feasibility of the algorithm.
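
A rough Python sketch of evolving classification rules with a genetic algorithm is given below. The rule encoding, truncation selection, mutation rate, and reduced subset of Play Tennis examples are simplifications assumed for illustration; they do not reproduce the adaptive GA system described in the paper.

import random

# A few of the standard Play Tennis examples:
# (outlook, temperature, humidity, wind) -> play tennis?
DATA = [
    (("sunny", "hot", "high", "weak"), "no"),
    (("overcast", "hot", "high", "weak"), "yes"),
    (("rain", "mild", "high", "weak"), "yes"),
    (("rain", "cool", "normal", "strong"), "no"),
    (("sunny", "cool", "normal", "weak"), "yes"),
    (("overcast", "mild", "high", "strong"), "yes"),
]
# Possible values per attribute; "*" is a wildcard meaning "attribute ignored".
VALUES = [("sunny", "overcast", "rain", "*"),
          ("hot", "mild", "cool", "*"),
          ("high", "normal", "*"),
          ("weak", "strong", "*")]

def matches(rule, example):
    return all(r == "*" or r == v for r, v in zip(rule, example))

def fitness(rule):
    # Fraction of examples classified correctly when the rule predicts "yes"
    # for matching examples and "no" otherwise.
    correct = sum((("yes" if matches(rule, x) else "no") == y) for x, y in DATA)
    return correct / len(DATA)

def random_rule():
    return tuple(random.choice(vals) for vals in VALUES)

def mutate(rule, rate=0.2):
    # Replace each gene with a random value for that attribute, with small probability.
    return tuple(random.choice(vals) if random.random() < rate else r
                 for r, vals in zip(rule, VALUES))

population = [random_rule() for _ in range(40)]
for _ in range(50):
    parents = sorted(population, key=fitness, reverse=True)[:20]  # truncation selection
    population = [mutate(random.choice(parents)) for _ in range(40)]

best = max(population, key=fitness)
print(best, fitness(best))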


2018 ◽  
Author(s):  
Steen Lysgaard ◽  
Paul C. Jennings ◽  
Jens Strabo Hummelshøj ◽  
Thomas Bligaard ◽  
Tejs Vegge

A machine learning model is used as a surrogate fitness evaluator in a genetic algorithm (GA) optimization of the atomic distribution of Pt-Au nanoparticles. The machine learning accelerated genetic algorithm (MLaGA) yields a 50-fold reduction of required energy calculations compared to a traditional GA.
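
The surrogate-assisted pattern behind an MLaGA can be sketched roughly as follows: train a cheap machine learning model on candidates already evaluated, use it to pre-screen offspring, and spend expensive evaluations only on the most promising ones. The toy fitness function, Gaussian-process surrogate, and all parameters below are stand-in assumptions, not the energy model or descriptors used in the study.

import random
from sklearn.gaussian_process import GaussianProcessRegressor

def expensive_fitness(x):
    # Stand-in for an expensive energy calculation (e.g. DFT); here a cheap toy function.
    return -sum((xi - 0.5) ** 2 for xi in x)

def random_candidate(dim=8):
    return [random.random() for _ in range(dim)]

def mutate(x, sigma=0.1):
    return [min(1.0, max(0.0, xi + random.gauss(0.0, sigma))) for xi in x]

# Evaluate a small initial population with the expensive function.
archive_x = [random_candidate() for _ in range(10)]
archive_y = [expensive_fitness(x) for x in archive_x]

for generation in range(20):
    # Train the surrogate on everything evaluated so far.
    surrogate = GaussianProcessRegressor().fit(archive_x, archive_y)

    # Generate many offspring, but rank them with the cheap surrogate...
    offspring = [mutate(random.choice(archive_x)) for _ in range(100)]
    ranked = sorted(offspring, key=lambda x: surrogate.predict([x])[0], reverse=True)

    # ...and spend expensive evaluations only on the most promising few.
    for x in ranked[:3]:
        archive_x.append(x)
        archive_y.append(expensive_fitness(x))

best = max(zip(archive_x, archive_y), key=lambda p: p[1])
print("best fitness found:", best[1])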

