compact genetic algorithm
Recently Published Documents

TOTAL DOCUMENTS: 111 (FIVE YEARS: 14)
H-INDEX: 14 (FIVE YEARS: 2)

2021, pp. 1-22
Author(s): Benjamin Doerr, Martin S. Krejca

Abstract In their recent work, Lehre and Nguyen (FOGA 2019) show that the univariate marginal distribution algorithm (UMDA) needs time exponential in the parent population size to optimize the DeceptiveLeadingBlocks (DLB) problem. They conclude from this result that univariate EDAs have difficulties with deception and epistasis. In this work, we show that this negative finding is caused by the choice of the parameters of the UMDA. When the population sizes are chosen large enough to prevent genetic drift, the UMDA optimizes the DLB problem with high probability within at most $\lambda(n/2 + 2e\ln n)$ fitness evaluations. Since an offspring population size $\lambda$ of order $n \log n$ can prevent genetic drift, the UMDA can solve the DLB problem with $O(n^2 \log n)$ fitness evaluations. In contrast, for classic evolutionary algorithms no better runtime guarantee than $O(n^3)$ is known (which we prove to be tight for the (1 + 1) EA), so our result rather suggests that the UMDA copes well with deception and epistasis. From a broader perspective, our result shows that the UMDA can cope better with local optima than many classic evolutionary algorithms; such a result was previously known only for the compact genetic algorithm. Together with the lower bound of Lehre and Nguyen, our result for the first time rigorously proves that running EDAs in the regime with genetic drift can lead to drastic performance losses.
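To make the benchmark concrete, the following is a minimal Python sketch of the DeceptiveLeadingBlocks fitness function as it is commonly defined (blocks of two bits scanned left to right, with a deceptive reward on the first non-11 block); the function name and the small usage example are illustrative and not taken from the paper.

```python
def deceptive_leading_blocks(x):
    """DeceptiveLeadingBlocks (DLB) fitness of a bit list x (length n, n even).

    Blocks of two consecutive bits are scanned left to right: every leading
    '11' block adds 2 to the fitness; the first block that is not '11' (the
    critical block) adds 1 if it is '00' and 0 otherwise, which is what makes
    the function deceptive. The unique optimum is the all-ones string.
    """
    fitness = 0
    for i in range(0, len(x), 2):
        block = (x[i], x[i + 1])
        if block == (1, 1):
            fitness += 2
        else:
            if block == (0, 0):
                fitness += 1
            break
    return fitness

# Example: two leading 11-blocks, then a deceptive 00 block -> 2*2 + 1 = 5
print(deceptive_leading_blocks([1, 1, 1, 1, 0, 0, 1, 0]))  # 5
```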


2021, Vol 12 (2), pp. 1-17
Author(s): Xingsi Xue, Xiaojing Wu, Junfeng Chen

Ontology, the state-of-the-art knowledge modeling technique, provides a shared vocabulary of a domain by formally representing the meaning of its concepts, the properties they possess, and the relations among them. However, ontologies in the same domain can differ in conceptual modeling and granularity level, which yields the ontology heterogeneity problem. To enable data and knowledge to be transferred, shared, and reused between two intelligent systems, it is important to bridge the semantic gap between their ontologies through ontology matching. To optimize the quality of the ontology alignment, this article proposes an Interactive Compact Genetic Algorithm (ICGA)-based ontology matching technique, which consists of an automatic ontology matching process based on a Compact Genetic Algorithm (CGA) and a collaborative user validation process based on an argumentation framework. First, the CGA is used to match the ontologies automatically; when it gets stuck in a local optimum, collaborative validation based on the multi-relationship argumentation framework is activated to help the CGA escape it. In addition, we construct a discrete optimization model to define the ontology matching problem and propose a hybrid similarity measure to calculate the similarity value of two concepts. In the experiment, we test the performance of ICGA on the Ontology Alignment Evaluation Initiative's interactive track, and the experimental results show that ICGA can effectively determine high-quality ontology alignments.
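As an illustration of what a hybrid similarity measure can look like, the following Python sketch blends a string-based similarity (normalized Levenshtein distance over concept labels) with a profile-based similarity (Jaccard overlap of annotation tokens) using fixed weights; the base measures, the weights, and the field names are assumptions for illustration only and are not the exact measure proposed in the paper.

```python
def edit_similarity(a, b):
    """String-based similarity from a normalized Levenshtein distance."""
    m, n = len(a), len(b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1, d[i][j - 1] + 1, d[i - 1][j - 1] + cost)
    return 1.0 - d[m][n] / max(m, n, 1)

def hybrid_similarity(c1, c2, w_label=0.6, w_profile=0.4):
    """Weighted blend of label similarity and profile (token-overlap) similarity."""
    label_sim = edit_similarity(c1["label"].lower(), c2["label"].lower())
    p1, p2 = set(c1["profile"]), set(c2["profile"])
    profile_sim = len(p1 & p2) / len(p1 | p2) if p1 | p2 else 0.0
    return w_label * label_sim + w_profile * profile_sim

# Hypothetical concepts from two ontologies being matched
c1 = {"label": "Author", "profile": {"name", "writes", "person"}}
c2 = {"label": "Writer", "profile": {"name", "writes", "publication"}}
print(round(hybrid_similarity(c1, c2), 3))
```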


Algorithmica, 2020
Author(s): Johannes Lengler, Dirk Sudholt, Carsten Witt

Abstract The compact Genetic Algorithm (cGA) evolves a probability distribution favoring optimal solutions in the underlying search space by repeatedly sampling from the distribution and updating it according to promising samples. We study the intricate dynamics of the cGA on the test function OneMax, and how its performance depends on the hypothetical population size $K$, which determines how quickly decisions about promising bit values are fixated in the probabilistic model. It is known that the cGA and the Univariate Marginal Distribution Algorithm (UMDA), a related algorithm whose population size is called $\lambda$, run in expected time $O(n \log n)$ when the population size is just large enough ($K = \Theta(\sqrt{n}\log n)$ and $\lambda = \Theta(\sqrt{n}\log n)$, respectively) to avoid wrong decisions being fixated. The UMDA also shows the same performance in a very different regime ($\lambda = \Theta(\log n)$, equivalent to $K = \Theta(\log n)$ in the cGA) with much smaller population size, but for very different reasons: many wrong decisions are fixated initially, but then reverted efficiently. If the population size is even smaller ($o(\log n)$), the time is exponential. We show that population sizes in between the two optimal regimes are worse as they yield larger runtimes: we prove a lower bound of $\Omega(K^{1/3}n + n \log n)$ for the cGA on OneMax for $K = O(\sqrt{n}/\log^2 n)$. For $K = \Omega(\log^3 n)$ the runtime increases with growing $K$ before dropping again to $O(K\sqrt{n} + n \log n)$ for $K = \Omega(\sqrt{n}\log n)$. This suggests that the expected runtime for the cGA is a bimodal function in $K$ with two very different optimal regions and worse performance in between.
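The sampling-and-update loop described in this abstract is short enough to write out. Below is a minimal, illustrative Python sketch of the cGA on OneMax with hypothetical population size K; the borders at 1/n and 1 - 1/n and the stopping rule are common conventions and may differ from the exact variant analyzed in the paper.

```python
import random

def one_max(x):
    """OneMax fitness: number of one-bits."""
    return sum(x)

def cga(n, K, max_evals=200_000):
    """Minimal compact GA sketch on OneMax (illustrative, not the paper's code).

    A probability vector p (the probabilistic model) replaces the population;
    K is the hypothetical population size controlling the update step 1/K.
    """
    p = [0.5] * n
    evals = 0
    while evals < max_evals:
        x = [1 if random.random() < pi else 0 for pi in p]
        y = [1 if random.random() < pi else 0 for pi in p]
        evals += 2
        if one_max(x) < one_max(y):
            x, y = y, x                       # x is now the winning sample
        if all(b == 1 for b in x):
            return x, evals                   # optimum found
        for i in range(n):
            if x[i] != y[i]:
                step = 1.0 / K if x[i] == 1 else -1.0 / K
                p[i] = min(1 - 1 / n, max(1 / n, p[i] + step))  # keep away from 0/1
    return None, evals

best, used = cga(n=50, K=int(50 ** 0.5) * 6)
print(used, best is not None)
```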


Mathematics, 2020, Vol 8 (5), pp. 758
Author(s): Andrea Ferigo, Giovanni Iacca

The ever-increasing complexity of industrial and engineering problems nowadays poses a number of optimization problems characterized by thousands, if not millions, of variables. For instance, very large-scale problems can be found in chemical and material engineering, networked systems, logistics, and scheduling. Recently, Deb and Myburgh proposed an evolutionary algorithm capable of handling a scheduling optimization problem with a staggering number of variables: one billion. However, one important limitation of this algorithm is its memory consumption, which is on the order of 120 GB. Here, we follow up on this research by applying to the same problem a GPU-enabled “compact” Genetic Algorithm, i.e., an Estimation of Distribution Algorithm that, instead of maintaining an actual population of candidate solutions, only requires and adapts a probabilistic model of their distribution in the search space. We also introduce a smart initialization technique and custom operators to guide the search towards feasible solutions. Leveraging the compact optimization concept, we show how such an algorithm can efficiently optimize very large-scale problems with millions of variables, with limited memory and processing power. To complete our analysis, we report the results of the algorithm on very large-scale instances of the OneMax problem.
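To illustrate why the compact representation is so memory-friendly, here is a small NumPy sketch (not the authors' GPU code): for a million binary variables, the entire probabilistic model is a single float32 vector of about 4 MB, and one cGA-style update step is a handful of vectorized operations. The value of K and the clipping bounds are illustrative assumptions.

```python
import numpy as np

n = 1_000_000                               # number of binary decision variables
p = np.full(n, 0.5, dtype=np.float32)       # the entire "population" model: ~4 MB

def sample(p, rng):
    """Draw one candidate solution from the probabilistic model."""
    return (rng.random(p.shape[0], dtype=np.float32) < p).astype(np.int8)

rng = np.random.default_rng(0)
x, y = sample(p, rng), sample(p, rng)

# One vectorized cGA update step toward the (hypothetically better) sample x
K = 1000.0
mask = x != y
p[mask] += np.where(x[mask] == 1, 1.0 / K, -1.0 / K).astype(np.float32)
np.clip(p, 1.0 / n, 1.0 - 1.0 / n, out=p)   # keep probabilities away from 0 and 1

print(p.nbytes / 1e6, "MB for the model")   # ~4 MB instead of a GB-scale population
```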


Author(s): Soniya, Sandeep Paul, Lotika Singh

This paper applies a hybrid evolutionary approach to a convolutional neural network (CNN) and determines the number of layers and filters based on the application and the user's needs. It integrates a compact genetic algorithm with stochastic gradient descent (SGD) to simultaneously evolve the structure and parameters of the CNN. It defines an effective string representation that combines the structure and parameters of the CNN. The compact genetic algorithm drives the evolution of the network structure by optimizing the number of convolutional layers and the number of filters in each convolutional layer. At the same time, an optimal set of weight parameters for the network is obtained using SGD. This approach combines exploration of the network space by the compact genetic algorithm with exploitation in the weight space by SGD in an effective manner. The proposed approach also incorporates user-defined parameters into the cost function in an elegant manner, controlling the network structure and hence the performance of the network according to the user's needs. The effectiveness of the proposed approach has been demonstrated on four benchmark datasets, namely MNIST, COIL-100, CIFAR-10, and CIFAR-100. The obtained results clearly demonstrate the potential of the proposed approach by evolving architectures based on the nature of the application and the needs of the user.
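As an illustration of how a fixed-length string can encode both the number of convolutional layers and the number of filters per layer, here is a hypothetical Python decoding scheme (one presence bit plus two filter-selection bits per layer); it shows the idea of such a representation only and is not the paper's actual encoding.

```python
def decode_architecture(bits, max_layers=4, filter_options=(8, 16, 32, 64)):
    """Decode a binary string into a CNN structure (illustrative encoding only).

    Per layer: 1 presence bit + 2 bits selecting the filter count. This is a
    hypothetical scheme showing how a compact GA's probabilistic model over a
    fixed-length string can describe both the number of layers and the number
    of filters per layer; the paper's actual representation may differ.
    """
    layers = []
    for l in range(max_layers):
        chunk = bits[3 * l: 3 * l + 3]
        if chunk[0] == 1:                        # layer is switched on
            idx = 2 * chunk[1] + chunk[2]        # 2 bits -> index into filter_options
            layers.append({"type": "conv", "filters": filter_options[idx]})
    return layers

# Example: a 12-bit string decodes to 2 active conv layers with 32 and 16 filters
print(decode_architecture([1, 1, 0,  0, 0, 1,  1, 0, 1,  0, 1, 1]))
```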


2020, Vol 3 (1)
Author(s): A. Maciel, R. V. Vieira

This paper presents an adaptive filtering process for cardiovascular disease signals, based on the processing and cleaning of ECG signals with the Compact Genetic Algorithm Based on Abstract Data Types (CGAADT), implemented in MATLAB on a GPU/CUDA architecture and evaluated on examples from the MIT-BIH database. The results show that a single algorithm (CGAADT) can improve the filtering, cleaning, detection, and diagnosis of arrhythmias by adopting a population representation with a fixed chromosome size, pre-established by fragmenting the database on the GPU when implemented on high-performance systems, with the aim of improving the health services offered to patients with cardiovascular problems.

