Univariate marginal distribution algorithms for non-stationary optimization problems

Author(s):  
Ashish Ghosh ◽  
Heinz Muehlenbein


Computation ◽
2020 ◽  
Vol 8 (1) ◽  
pp. 18 ◽  
Author(s):  
Abdel-Rahman Hedar ◽  
Amira Allam ◽  
Alaa Abdel-Hakim

With the rapid growth of simulation software packages, generating practical tools for simulation-based optimization has attracted considerable interest over the last decades. In this paper, a modified Estimation of Distribution Algorithm (EDA) is constructed by combining it with variable-sample techniques to deal with simulation-based optimization problems. Moreover, a new variable-sample technique is introduced to support the search process whenever the sample sizes are small, especially at the beginning of the search. The proposed method shows efficient results in several simulated numerical experiments.
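The abstract does not give implementation details, so the following is a minimal, hypothetical sketch of the general idea: a univariate EDA (UMDA-style) combined with a variable-sample scheme, where the number of noisy simulation replications per evaluation grows as the search progresses. All function names and parameter choices are illustrative assumptions, not the authors' method:

```python
import random

def noisy_onemax(x, sample_size):
    # Stand-in simulation-based objective: average of noisy OneMax evaluations.
    true_val = sum(x)
    return sum(true_val + random.gauss(0, 1) for _ in range(sample_size)) / sample_size

def umda_variable_sample(n=20, pop=50, elite=25, iters=60, seed=0):
    random.seed(seed)
    p = [0.5] * n                          # univariate marginal probabilities
    for t in range(iters):
        sample_size = 2 + t                # variable-sample: grow replications over time
        popn = [[1 if random.random() < p[i] else 0 for i in range(n)]
                for _ in range(pop)]
        popn.sort(key=lambda x: noisy_onemax(x, sample_size), reverse=True)
        selected = popn[:elite]            # truncation selection on noisy estimates
        p = [sum(x[i] for x in selected) / elite for i in range(n)]
    return [round(pi) for pi in p]
```

The growing sample size cheapens early exploratory evaluations and spends simulation budget on precise estimates only once the search has narrowed.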


2013 ◽  
Vol 21 (3) ◽  
pp. 471-495 ◽  
Author(s):  
Carlos Echegoyen ◽  
Alexander Mendiburu ◽  
Roberto Santana ◽  
Jose A. Lozano

Understanding the relationship between a search algorithm and the space of problems is a fundamental issue in the field of optimization. In this paper, we lay the foundations for elaborating taxonomies of problems under estimation of distribution algorithms (EDAs). By using an infinite population model and assuming that the selection operator is based on the rank of the solutions, we group optimization problems according to the behavior of the EDA. Through the definition of an equivalence relation between functions, it is possible to partition the space of problems into equivalence classes in which the algorithm has the same behavior. We show that only the probabilistic model is able to generate different partitions of the set of possible problems and hence predetermines the number of different behaviors that the algorithm can exhibit. As a natural consequence of our definitions, all objective functions fall into the same equivalence class when the algorithm imposes no restrictions on the probabilistic model. The taxonomy of problems, which is also valid for finite populations, is studied in depth for a simple EDA that assumes independence among the variables of the problem. We provide a necessary and sufficient condition for deciding the equivalence between functions, and then develop operators to describe and count the members of a class. In addition, we show the intrinsic relation between univariate EDAs and the neighborhood system induced by the Hamming distance by proving that all functions in the same class have the same number of local optima and that these occupy the same ranking positions. Finally, we carry out numerical simulations in order to analyze the different behaviors that the algorithm can exhibit for functions defined over the search space [Formula: see text].
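To make the quoted Hamming-neighborhood and ranking properties concrete, the following hypothetical sketch brute-forces a small search space {0,1}^n: it counts local maxima under the Hamming-1 neighborhood and tests whether two functions induce the same ranking (so that rank-based selection cannot distinguish them). The example functions and the enumeration approach are illustrative assumptions, not the paper's operators:

```python
from itertools import product

def local_optima_count(f, n):
    # Count local maxima of f over {0,1}^n under the Hamming-1 neighborhood.
    count = 0
    for x in product((0, 1), repeat=n):
        neighbors = (x[:i] + (1 - x[i],) + x[i + 1:] for i in range(n))
        if all(f(x) >= f(y) for y in neighbors):
            count += 1
    return count

def same_ranking(f, g, n):
    # f and g induce the same ranking iff they order every pair of
    # solutions identically; rank-based selection then behaves the same.
    pts = list(product((0, 1), repeat=n))
    return all((f(x) < f(y)) == (g(x) < g(y)) for x in pts for y in pts)

onemax = lambda x: sum(x)                          # single global optimum
twomax = lambda x: max(sum(x), len(x) - sum(x))    # two symmetric global optima
```

For instance, `onemax` and any strictly increasing transform of it (e.g. `2*sum(x) + 1`) induce the same ranking and the same single local optimum, whereas `twomax` has two.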


2009 ◽  
Vol 48 (03) ◽  
pp. 236-241 ◽  
Author(s):  
V. Robles ◽  
P. Larrañaga ◽  
C. Bielza

Summary

Objectives: The "large k (genes), small N (samples)" phenomenon complicates the problem of microarray classification with logistic regression. The indeterminacy of the maximum likelihood solutions, multicollinearity of the predictor variables and data over-fitting cause unstable parameter estimates. Moreover, computational problems arise from the large number of predictor (gene) variables. Regularized logistic regression excels as a solution, but it involves an objective function that is hard to optimize from a mathematical viewpoint and regularization parameters that require careful tuning.

Methods: These difficulties are tackled by introducing a new way of regularizing logistic regression. Estimation of distribution algorithms (EDAs), a kind of evolutionary algorithm, emerge as natural regularizers. Obtaining the regularized estimates of the logistic classifier amounts to maximizing the likelihood function via our EDA, without any penalization; the difficulties that likelihood penalties add to the resulting optimization problems vanish in our case. New estimates are simulated during the evolutionary process of the EDA in a way that guarantees their shrinkage while maintaining the learned probabilistic dependence relationships. The EDA process is embedded in an adapted recursive feature elimination procedure, thereby providing the genes that are the best markers for the classification.

Results: The consistency with the literature and the excellent classification performance achieved by our algorithm are illustrated on four microarray data sets: Breast, Colon, Leukemia and Prostate. Details on the last two data sets are available as supplementary material.

Conclusions: We have introduced a novel EDA-based logistic regression regularizer. It implicitly shrinks the coefficients during the EDA evolution process while optimizing the usual likelihood function. The approach is combined with a gene subset selection procedure and automatically tunes the required parameters. Empirical results on microarray data sets yield sparse models with confirmed genes that perform better in classification than competing regularized methods.
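The abstract does not specify the EDA or the shrinkage mechanism, so the sketch below is only a hypothetical illustration of the core idea: a Gaussian univariate EDA over the coefficient vector maximizes the unpenalized log-likelihood, while an illustrative shrink factor pulls the sampling mean toward zero, mimicking implicit shrinkage without a penalty term. All names and parameters are assumptions:

```python
import math
import random

def log_likelihood(beta, X, y):
    # Plain (unpenalized) logistic log-likelihood.
    ll = 0.0
    for xi, yi in zip(X, y):
        z = sum(b * v for b, v in zip(beta, xi))
        z = max(min(z, 35.0), -35.0)   # guard against overflow in exp
        p = 1.0 / (1.0 + math.exp(-z))
        ll += yi * math.log(p) + (1 - yi) * math.log(1 - p)
    return ll

def eda_logistic(X, y, pop=60, elite=20, iters=40, shrink=0.9, seed=1):
    random.seed(seed)
    d = len(X[0])
    mu, sigma = [0.0] * d, [1.0] * d   # Gaussian marginals over coefficients
    for _ in range(iters):
        popn = [[random.gauss(m, s) for m, s in zip(mu, sigma)] for _ in range(pop)]
        popn.sort(key=lambda b: log_likelihood(b, X, y), reverse=True)
        sel = popn[:elite]
        # Shrink factor pulls the new sampling mean toward zero: the
        # coefficients shrink implicitly while only the likelihood is optimized.
        mu = [shrink * sum(b[i] for b in sel) / elite for i in range(d)]
        sigma = [max(1e-3, (sum((b[i] - mu[i]) ** 2 for b in sel) / elite) ** 0.5)
                 for i in range(d)]
    return mu
```

On separable data, where the unpenalized maximum likelihood estimate diverges, the shrink factor keeps the coefficients finite, which is the practical point of regularization here.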


2005 ◽  
Vol 13 (1) ◽  
pp. 43-66 ◽  
Author(s):  
J. M. Peña ◽  
J. A. Lozano ◽  
P. Larrañaga

Many optimization problems are what can be called globally multimodal, i.e., they present several global optima. Unfortunately, this is a major source of difficulty for most estimation of distribution algorithms, degrading their effectiveness and efficiency due to genetic drift. With the aim of overcoming these drawbacks for the optimization of discrete globally multimodal problems, this paper introduces and evaluates a new estimation of distribution algorithm based on unsupervised learning of Bayesian networks. We report the satisfactory results of our experiments with symmetrical binary optimization problems.
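To see the genetic-drift difficulty concretely, the following hypothetical sketch runs a plain univariate EDA (UMDA-style — deliberately the baseline that suffers from drift, not the Bayesian-network algorithm proposed in the paper) on the symmetric TwoMax function. The marginals tend to commit to one of the two global optima instead of staying near 0.5; all parameter values are illustrative:

```python
import random

def twomax(x):
    # Symmetric function with two global optima: all ones and all zeros.
    return max(sum(x), len(x) - sum(x))

def umda(n=20, pop=40, elite=20, iters=50, seed=3):
    random.seed(seed)
    p = [0.5] * n                      # univariate marginal probabilities
    for _ in range(iters):
        popn = [[1 if random.random() < p[i] else 0 for i in range(n)]
                for _ in range(pop)]
        popn.sort(key=twomax, reverse=True)
        sel = popn[:elite]             # truncation selection
        # Drift: marginals random-walk toward 0 or 1 even though both
        # optima are equally fit, losing one of the two global optima.
        p = [sum(x[i] for x in sel) / elite for i in range(n)]
    return p
```

After a few dozen generations most marginals are near 0 or 1, which is exactly the degradation the paper's Bayesian-network-based algorithm is designed to avoid.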

