Dynamic Subset Selection Based on a Fitness Case Topology

2004 ◽  
Vol 12 (2) ◽  
pp. 223-242 ◽  
Author(s):  
Christian W.G. Lasarczyk ◽  
Peter Dittrich ◽  
Wolfgang Banzhaf

A large training set of fitness cases can critically slow down genetic programming if no appropriate subset selection method is applied. Such a method allows an individual to be evaluated on a smaller subset of fitness cases. In this paper we suggest a new subset selection method that takes the problem structure into account while remaining problem independent. To achieve this, information about the problem structure is acquired during evolutionary search by creating a topology (relationship) on the set of fitness cases. The topology is induced by the individuals of the evolving population: the strength of the relation between two fitness cases is increased whenever an individual of the population is able to solve both of them. Our new topology-based subset selection method chooses a subset such that the fitness cases in it are as distantly related as possible with respect to the induced topology. We compare topology-based selection of fitness cases with dynamic subset selection and stochastic subset sampling on four different problems. On average, runs with topology-based selection show faster progress than the others.
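The update and selection steps described in the abstract can be sketched as follows. This is an illustrative reading, not the authors' implementation: the pairwise strength table, the greedy farthest-first selection, and all function names are assumptions.

```python
import itertools
import random

def update_topology(strength, solved_cases):
    """Strengthen the relation between every pair of fitness cases
    that the same individual solves (hypothetical update rule)."""
    for a, b in itertools.combinations(sorted(solved_cases), 2):
        strength[(a, b)] = strength.get((a, b), 0) + 1
    return strength

def select_subset(strength, cases, k):
    """Greedily pick k fitness cases that are as weakly related
    (i.e. as 'distantly related') as possible under the topology."""
    subset = [random.choice(cases)]
    while len(subset) < k:
        def relatedness(c):
            # total relation strength between candidate c and the subset
            return sum(strength.get((min(c, s), max(c, s)), 0)
                       for s in subset)
        remaining = [c for c in cases if c not in subset]
        subset.append(min(remaining, key=relatedness))
    return subset
```

A greedy farthest-first heuristic is only one way to realize "as distantly related as possible"; the paper does not commit to a particular combinatorial strategy here.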

2015 ◽  
Vol 2015 ◽  
pp. 1-9 ◽  
Author(s):  
Xu Yu ◽  
Miao Yu ◽  
Li-xun Xu ◽  
Jing Yang ◽  
Zhi-qiang Xie

Under the covariate shift setting, the assumption that training and testing samples are drawn from the same distribution is violated. Most algorithms for this setting first estimate the distributions and then reweight samples based on the estimates; because estimating the distributions correctly is difficult, such methods often fail to achieve good classification performance. In this paper, we first present two types of covariate shift problems. Rather than estimating the distributions, we then propose an effective method that selects, based on a feature-space split, a maximum subset of the auxiliary set or the target training set that follows the target testing distribution. Finally, we prove that our subset selection method can consistently deal with both covariate shift scenarios. Experimental results demonstrate that a classifier trained on the selected maximum subset exhibits better generalization ability and running efficiency than traditional methods under the covariate shift setting.
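One way to read the selection step is as per-region matching over a feature-space split: keep the largest subset of the source set whose binned distribution matches the target testing distribution. The sketch below is an illustrative interpretation, not the paper's algorithm; `bin_fn` and the proportional scaling rule are assumptions.

```python
from collections import Counter, defaultdict

def select_matching_subset(aux, target, bin_fn):
    """Select a maximum subset of `aux` whose distribution over the
    regions produced by `bin_fn` (a feature-space split) follows the
    empirical distribution of `target`."""
    target_counts = Counter(bin_fn(x) for x in target)
    total = sum(target_counts.values())
    by_bin = defaultdict(list)
    for x in aux:
        by_bin[bin_fn(x)].append(x)
    # largest subset size n such that every target bin b can supply
    # its share n * (c_b / total) from the auxiliary samples
    n = min(len(by_bin[b]) * total // c for b, c in target_counts.items())
    subset = []
    for b, c in target_counts.items():
        subset.extend(by_bin[b][: n * c // total])
    return subset
```

With a finer split the matching becomes tighter but each region holds fewer candidate samples, so the achievable subset shrinks; that trade-off is a property of this sketch, not a claim from the paper.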


Author(s):  
Andrew F. Zahrt ◽  
Brennan T. Rose ◽  
William T. Darrow ◽  
Jeremy J. Henle ◽  
Scott E. Denmark

Different subset selection methods are examined to guide catalyst selection in optimization campaigns. Error assessment methods are used to quantitatively inform selection of new catalyst candidates from in silico libraries of catalyst structures.


2003 ◽  
Vol 11 (2) ◽  
pp. 169-206 ◽  
Author(s):  
Riccardo Poli ◽  
Nicholas Freitag McPhee

This paper is the second part of a two-part paper which introduces a general schema theory for genetic programming (GP) with subtree-swapping crossover (Part I (Poli and McPhee, 2003)). Like other recent GP schema theory results, the theory gives an exact formulation (rather than a lower bound) for the expected number of instances of a schema at the next generation. The theory is based on a Cartesian node reference system, introduced in Part I, and on the notion of a variable-arity hyperschema, introduced here, which generalises previous definitions of a schema. The theory includes two main theorems describing the propagation of GP schemata: a microscopic and a macroscopic schema theorem. The microscopic version is applicable to crossover operators which replace a subtree in one parent with a subtree from the other parent to produce the offspring. Therefore, this theorem is applicable to Koza's GP crossover with and without uniform selection of the crossover points, as well as one-point crossover, size-fair crossover, strongly-typed GP crossover, context-preserving crossover and many others. The macroscopic version is applicable to crossover operators in which the probability of selecting any two crossover points in the parents depends only on the parents' size and shape. In the paper we provide examples, we show how the theory can be specialised to specific crossover operators and we illustrate how it can be used to derive other general results. These include an exact definition of effective fitness and a size-evolution equation for GP with subtree-swapping crossover.
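Purely as a reminder of the exact (rather than lower-bound) character of these results, the macroscopic theorem has the general shape below, where $M$ is the population size, $p_{xo}$ the crossover probability, $p(H,t)$ the selection probability of schema $H$, and $U(H,i)$, $L(H,i,j)$ the upper-part and lower-part hyperschemata of the paper. The notation here is a hedged paraphrase, not the theorem as stated in the paper; in particular the coefficients $p_{ij}$ depend on the crossover operator and on parent shapes.

```latex
\[
E\big[m(H,t+1)\big] \;=\; M \Big[ (1 - p_{xo})\, p(H,t)
  \;+\; p_{xo} \sum_{i,j} p_{ij}\;
  p\big(U(H,i),\,t\big)\; p\big(L(H,i,j),\,t\big) \Big]
\]
```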


2021 ◽  
Author(s):  
Ziyi Yang ◽  
Kun Fang ◽  
Zhiqiang Dan ◽  
Qiang Li ◽  
Zhipeng Wang ◽  
...  

IEEE Access ◽  
2019 ◽  
Vol 7 ◽  
pp. 517-527 ◽  
Author(s):  
Xiaoyan Luo ◽  
Zhiqi Shen ◽  
Rui Xue ◽  
Han Wan

2014 ◽  
Vol 1061-1062 ◽  
pp. 974-977
Author(s):  
Shi Hua Liu ◽  
Xian Gang Liu ◽  
Zhi Jian Sun

A skywave radar adaptive frequency selection method based on a preliminary criterion and a weighted criterion is presented. In this method, according to the operational task at hand, the frequency selection criterion is divided into a preliminary criterion and a weighted criterion based on the characteristics of the targets. Adaptive frequency selection for the skywave radar is then achieved by a weighted computation over the frequency selection criteria. The feasibility and effectiveness of the method are demonstrated with an example.
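The two-stage decision can be sketched as a screen-then-score rule: candidates that pass the preliminary criterion are ranked by a weighted sum of task-specific criterion scores. This is an illustrative sketch only; the predicate `preliminary_ok`, the criterion functions, and the weights are assumptions, not the paper's actual criteria.

```python
def select_frequency(candidates, preliminary_ok, criteria, weights):
    """Pick the operating frequency that maximizes the weighted
    criterion among candidates passing the preliminary screen."""
    feasible = [f for f in candidates if preliminary_ok(f)]
    def score(f):
        # weighted computation over the criterion scores
        return sum(w * c(f) for c, w in zip(criteria, weights))
    return max(feasible, key=score)
```

In practice the weights would be set per operational task, which is how the method adapts its selection to the targets of interest.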


Author(s):  
B. Samanta

Genetic programming (GP) has been applied in many areas. However, its application to machine condition monitoring and diagnostics is relatively recent and has yet to be fully exploited. In this paper, a study is presented on the performance of machine fault detection using GP. The time-domain vibration signals of a rotating machine with normal and defective gears are processed for feature extraction. The features extracted from the original and preprocessed signals are used as inputs to GP for two-class (normal or faulty) recognition. The features, and their number, are selected automatically in GP by maximizing classification success. The fault detection results are compared with those of a genetic algorithm (GA) based artificial neural network, termed here GA-ANN, in which the number of hidden nodes and the selection of input features are optimized using GAs. Two different normalization schemes for the features have been used. For each trial, the GP and GA-ANN are trained with a subset of the experimental data for known machine conditions, and the trained GP and GA-ANN are tested using the remaining set of data. The procedure is illustrated using experimental vibration data from a gearbox. The results compare the effectiveness of both types of classifiers with GP- and GA-based selection of features.
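The evaluation protocol of the study (normalize features, train on a subset, test on the remainder) can be sketched as follows. The function names and the 50/50 split are illustrative assumptions, and `fit` stands in for training either classifier (GP or GA-ANN); the two normalization schemes shown are common choices, not necessarily the ones used in the paper.

```python
import statistics

def normalize_minmax(xs):
    """Scale a feature to [0, 1] (one plausible normalization scheme)."""
    lo, hi = min(xs), max(xs)
    return [(x - lo) / (hi - lo) for x in xs]

def normalize_zscore(xs):
    """Zero-mean, unit-variance scaling (another plausible scheme)."""
    mu, sd = statistics.mean(xs), statistics.pstdev(xs)
    return [(x - mu) / sd for x in xs]

def holdout_accuracy(fit, data, labels, train_frac=0.5):
    """Train on the first train_frac of the data and score the
    resulting classifier on the remaining, unseen samples."""
    cut = int(len(data) * train_frac)
    predict = fit(data[:cut], labels[:cut])
    correct = sum(predict(x) == y
                  for x, y in zip(data[cut:], labels[cut:]))
    return correct / (len(data) - cut)
```

A per-trial accuracy of this kind, averaged over trials, is what allows the paper to compare GP-based and GA-ANN-based feature selection on equal footing.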

