Structure Learning of Bayesian Networks by Estimation of Distribution Algorithms with Transpose Mutation

2013 ◽  
Vol 11 (4) ◽  
pp. 586-596 ◽  
Author(s):  
D.W. Kim ◽  
S. Ko ◽  
B.Y. Kang

2014 ◽  
Vol 926-930 ◽  
pp. 3594-3597
Author(s):  
Cai Chang Ding ◽  
Wen Xiu Peng ◽  
Wei Ming Wang

Estimation of Distribution Algorithms (EDAs) are a set of algorithms that belong to the field of Evolutionary Computation. In EDAs there are neither crossover nor mutation operators. Instead, the new population of individuals is sampled from a probability distribution, which is estimated from a database that contains the selected individuals from the previous generation. Thus, the interrelations between the different variables that represent the individuals may be explicitly expressed through the joint probability distribution associated with the individuals selected at each generation.
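
As an illustration of this estimate-and-sample loop, here is a minimal sketch (ours, not from any of the papers listed) of the simplest EDA, a UMDA-style algorithm in which the joint distribution factorizes into independent Bernoulli marginals estimated from the selected individuals; the function and parameter names are hypothetical.

```python
import numpy as np

def umda(fitness, n_vars, pop_size=100, n_select=50, generations=50, rng=None):
    """Univariate Marginal Distribution Algorithm for binary strings."""
    rng = np.random.default_rng() if rng is None else rng
    p = np.full(n_vars, 0.5)                 # start from the uniform model
    best, best_fit = None, -np.inf
    for _ in range(generations):
        # Sample a new population from the current probability model.
        pop = (rng.random((pop_size, n_vars)) < p).astype(int)
        fits = np.array([fitness(ind) for ind in pop])
        i = int(fits.argmax())
        if fits[i] > best_fit:
            best, best_fit = pop[i].copy(), fits[i]
        # Select the best individuals and re-estimate the marginals from them.
        selected = pop[np.argsort(fits)[-n_select:]]
        p = selected.mean(axis=0)
    return best, best_fit

# Example: maximize the number of ones (OneMax).
best, score = umda(fitness=np.sum, n_vars=20)
```

More expressive EDAs replace this product of marginals with richer models, up to a full Bayesian network over the variables, which is what makes the interrelations between variables explicit.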


2014 ◽  
Vol 926-930 ◽  
pp. 3294-3297
Author(s):  
Cai Chang Ding ◽  
Wen Xiu Peng ◽  
Wei Ming Wang

In this paper, we study the limits of EDAs' ability to effectively solve problems as the number of interactions among the variables grows. In particular, we numerically analyze the learning limits that different EDA implementations encounter on a sequence of additively decomposable functions (ADFs) to which new sub-functions are progressively added. The study is carried out in a worst-case scenario where the sub-functions are defined as deceptive functions. We argue that the limits of this type of algorithm are mainly imposed by the probabilistic model it relies on. Beyond the limitations of the approximate learning methods, the results suggest that, in general, using Bayesian networks to overcome these limits of applicability can entail strong computational restrictions.
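
To make the worst-case setting concrete, a standard example of a deceptive sub-function is the k-bit trap, and an ADF is obtained by summing it over disjoint blocks; the sketch below is our own illustration of that construction, not code from the paper.

```python
import numpy as np

def trap5(block):
    """5-bit deceptive trap: the optimum is all ones, but the fitness
    gradient over the rest of the space points toward all zeros."""
    u = int(np.sum(block))           # number of ones in the block
    return 5 if u == 5 else 4 - u

def additively_decomposable(x, block_size=5):
    """Sum of deceptive sub-functions over disjoint blocks of x."""
    x = np.asarray(x)
    return sum(trap5(x[i:i + block_size]) for i in range(0, len(x), block_size))

# Adding blocks lengthens x; an EDA whose model cannot capture the
# intra-block interactions is misled toward the deceptive attractor.
print(additively_decomposable([1] * 20))  # 20: global optimum
print(additively_decomposable([0] * 20))  # 16: deceptive local attractor
```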


2005 ◽  
Vol 13 (1) ◽  
pp. 43-66 ◽  
Author(s):  
J. M. Peña ◽  
J. A. Lozano ◽  
P. Larrañaga

Many optimization problems are what can be called globally multimodal, i.e., they present several global optima. Unfortunately, this is a major source of difficulty for most estimation of distribution algorithms, whose effectiveness and efficiency degrade due to genetic drift. With the aim of overcoming these drawbacks for discrete globally multimodal problem optimization, this paper introduces and evaluates a new estimation of distribution algorithm based on unsupervised learning of Bayesian networks. We report the satisfactory results of our experiments with symmetrical binary optimization problems.
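
As a toy illustration of why symmetry causes drift (our own sketch, not the unsupervised-learning algorithm the paper proposes): on a problem whose two global optima are bit-wise complements, the marginals of a univariate model drift toward extreme values under finite sampling, so the learned distribution cannot keep both optima.

```python
import numpy as np

def twomax(x):
    """Symmetric problem: all ones and all zeros are both global optima."""
    u = int(np.sum(x))
    return max(u, len(x) - u)

rng = np.random.default_rng(0)
n, pop_size, n_select = 20, 100, 50
p = np.full(n, 0.5)                        # univariate marginals
for _ in range(50):
    pop = (rng.random((pop_size, n)) < p).astype(int)
    fits = np.array([twomax(ind) for ind in pop])
    selected = pop[np.argsort(fits)[-n_select:]]
    p = selected.mean(axis=0)              # sampling noise pushes each p_i toward 0 or 1

# After convergence each marginal is fixed near 0 or 1, so the model can
# represent at most one of the two symmetric optima; a mixture or clustered
# model (e.g. one learned by unsupervised methods) can keep both.
print(np.round(p, 2))
```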


Author(s):  
TXOMIN ROMERO ◽  
PEDRO LARRAÑAGA ◽  
BASILIO SIERRA

Searching for the optimal ordering of a set of variables is a difficulty that arises in several computational problems. One such situation is the automatic learning of a network structure, for example a Bayesian network (BN) structure, from a dataset. Searching the space of structures is often unmanageable, especially when the number of variables is high. Popular heuristic approaches, such as Cooper and Herskovits's K2 algorithm, depend on a given ordering of the variables. Estimation of Distribution Algorithms (EDAs) are a new paradigm in Evolutionary Computation that has been used as a search engine for the BN structure learning problem. In this paper, we use two different EDAs, UMDA and MIMIC, both in discrete and continuous domains, to obtain not the best structure but an optimal ordering of variables for the K2 algorithm. We also examine whether the individual representation and its relation to the corresponding ordering play important roles, and whether MIMIC outperforms UMDA.
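
A common way to let a continuous EDA such as UMDAc or MIMICc search over orderings is a random-keys-style representation, in which an individual is a real vector and the ordering is recovered by sorting it. The sketch below is our illustration, not the paper's encoding; `score_ordering` stands in for running a K2-style search under the decoded ordering and is only stubbed here.

```python
import numpy as np

def decode_ordering(individual):
    """Random-keys decoding: sort the components of a real-valued
    individual to obtain a permutation of the variables."""
    return np.argsort(individual)

def ordering_fitness(individual, score_ordering):
    """Fitness of an individual is the score of the structure a K2-style
    search would build given the decoded ordering (score_ordering is assumed)."""
    return score_ordering(decode_ordering(individual))

# Toy stand-in for a K2 score, used only to make the example runnable.
toy_score = lambda ordering: -float(np.sum(np.arange(len(ordering)) * ordering))
x = np.random.default_rng(1).random(5)     # one continuous individual
print(decode_ordering(x), ordering_fitness(x, toy_score))
```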


2013 ◽  
Vol 284-287 ◽  
pp. 3093-3096
Author(s):  
Dae Won Kim ◽  
Song Ko ◽  
Bo Yeong Kang

Estimation of distribution algorithms (EDAs) constitute a new branch of evolutionary optimization algorithms, providing effective and efficient optimization performance in a variety of research areas. Recent studies have proposed new EDAs that add mutation operators to standard EDAs in order to increase population diversity. We present a new mutation operator, a matrix transpose, specifically designed for Bayesian network structure learning, and we evaluate its performance on that task. The results indicate that EDAs with transpose mutation give markedly better performance than conventional EDAs.
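
A minimal sketch of the operator, under the assumption that candidate structures are encoded as DAG adjacency matrices and that the matrix transpose amounts to reversing the direction of every edge (our reading; parameter names are hypothetical):

```python
import numpy as np

def transpose_mutation(adjacency, rate=0.1, rng=None):
    """With probability `rate`, replace a candidate DAG's adjacency matrix
    by its transpose, i.e. reverse every edge. Reversing all edges at once
    preserves acyclicity, so the result is still a valid DAG."""
    rng = np.random.default_rng() if rng is None else rng
    return adjacency.T.copy() if rng.random() < rate else adjacency

# A -> B -> C becomes C -> B -> A after transposition.
dag = np.array([[0, 1, 0],
                [0, 0, 1],
                [0, 0, 0]])
print(transpose_mutation(dag, rate=1.0))
```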

