Structural Fusion/Aggregation of Bayesian Networks via Greedy Equivalence Search Learning Algorithm

Author(s): Jose M. Puerta, Juan Ángel Aledo, José Antonio Gámez, Jorge D. Laborda
2016, Vol. 27(1), pp. 17-30
Author(s): Yu Wang, Weikang Qian, Shuchang Zhang, Xiaoyao Liang, Bo Yuan

2016, Vol. 57, pp. 1-37
Author(s): Simone Villa, Fabio Stella

Non-stationary continuous time Bayesian networks are introduced. They allow the parent set of each node to change over continuous time. Three settings are developed for learning non-stationary continuous time Bayesian networks from data: known transition times, known number of epochs, and unknown number of epochs. A score function is derived for each setting and the corresponding learning algorithm is developed. A set of numerical experiments on synthetic data is used to compare the effectiveness of non-stationary continuous time Bayesian networks to that of non-stationary dynamic Bayesian networks. Furthermore, the performance achieved by non-stationary continuous time Bayesian networks is compared to that achieved by state-of-the-art algorithms on four real-world datasets: Drosophila, Saccharomyces cerevisiae, songbird, and macroeconomics.
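In the known-transition-times setting, a natural consequence is that the score decomposes over epochs: the data can be split at the given boundaries and each epoch scored with a stationary scorer. The minimal Python sketch below illustrates that decomposition under this assumption; `trajectory`, `transition_times`, and `score_epoch` are illustrative names, not the authors' code.

```python
from bisect import bisect_right

def split_into_epochs(trajectory, transition_times):
    """Partition a timed trajectory into epochs at known transition times.
    `trajectory` is a list of (time, state) observations; `transition_times`
    is a sorted list of epoch boundaries (illustrative data layout)."""
    epochs = [[] for _ in range(len(transition_times) + 1)]
    for t, state in trajectory:
        epochs[bisect_right(transition_times, t)].append((t, state))
    return epochs

def nonstationary_score(trajectory, transition_times, score_epoch):
    # With known transition times, the overall score is assumed to decompose
    # as a sum of per-epoch scores, each computed by a stationary CTBN
    # scoring function (abstracted here as `score_epoch`).
    return sum(score_epoch(epoch)
               for epoch in split_into_epochs(trajectory, transition_times))
```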


2018, Vol. 16(1), pp. 1022-1036
Author(s): Jingyun Wang, Sanyang Liu

The problem of structure learning in Bayesian networks is to discover a directed acyclic graph that, in some sense, is the best representation of the given database. Score-based learning is one of the important families of structure learning methods used to construct Bayesian networks; these algorithms use heuristic search strategies to maximize the score of each candidate Bayesian network. In this paper, a bi-velocity discrete particle swarm optimization algorithm with a mutation operator is proposed to learn Bayesian networks. The mutation strategy in the proposed algorithm efficiently prevents premature convergence and enhances the exploration capability of the population. We test the proposed algorithm on databases sampled from three well-known benchmark networks and compare it with other algorithms. The experimental results demonstrate the superiority of the proposed algorithm in learning Bayesian networks.
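As a rough illustration of the search component, the sketch below shows a discrete-PSO-style position update over adjacency-matrix bits together with a mutation operator that flips edges at random. It is a generic sketch of the technique's ingredients, not the paper's exact bi-velocity operator; scoring of candidate networks is left abstract.

```python
import random

def mutate(adjacency, rate=0.05):
    """Flip edge bits at random; this is the diversity-preserving role the
    mutation operator plays in the abstract above (rate is illustrative)."""
    n = len(adjacency)
    for i in range(n):
        for j in range(n):
            if i != j and random.random() < rate:
                adjacency[i][j] ^= 1
    return adjacency

def discrete_pso_step(position, personal_best, global_best, w=(0.5, 0.3, 0.2)):
    """One discrete-PSO-style move: each edge bit is copied from the current
    position, the particle's personal best, or the swarm's global best with
    probabilities `w`. A generic discrete update, not the paper's operator."""
    n = len(position)
    new = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i != j:
                source = random.choices(
                    [position, personal_best, global_best], weights=w)[0]
                new[i][j] = source[i][j]
    return mutate(new)
```

In a full structure learner, each candidate adjacency matrix would still need an acyclicity check or repair and a network score (e.g., BIC) before updating the personal and global bests.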


Author(s): Sotiris Kotsiantis, Dimitris Kanellopoulos, Panayotis Pintelas

In classification learning, the learning scheme is presented with a set of classified examples from which it is expected one can learn a way of classifying unseen examples (see Table 1). Formally, the problem can be stated as follows: given training data {(x1, y1), …, (xn, yn)}, produce a classifier h: X → Y that maps an object x ∈ X to its classification label y ∈ Y. A large number of classification techniques have been developed based on artificial intelligence (logic-based techniques, perceptron-based techniques) and statistics (Bayesian networks, instance-based techniques). No single learning algorithm can uniformly outperform other algorithms over all data sets. The concept of combining classifiers is proposed as a new direction for improving the performance of individual machine learning algorithms. Numerous methods have been suggested for the creation of ensembles of classifiers (Dietterich, 2000). Although, or perhaps because, many methods of ensemble creation have been proposed, there is as yet no clear picture of which method is best.
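A minimal sketch of the simplest combination scheme, majority voting over heterogeneous base classifiers, using scikit-learn; the dataset and model choices are illustrative, not from the article.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

# Toy training data standing in for {(x1, y1), ..., (xn, yn)}.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)

# h: X -> Y combined from three heterogeneous base classifiers.
ensemble = VotingClassifier(
    estimators=[
        ("tree", DecisionTreeClassifier(random_state=0)),  # logic-based
        ("nb", GaussianNB()),                              # Bayesian
        ("lr", LogisticRegression(max_iter=1000)),         # statistical
    ],
    voting="hard",  # plurality vote over predicted labels
)
print(cross_val_score(ensemble, X, y, cv=5).mean())
```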


Entropy, 2018, Vol. 20(4), p. 274

Dynamic Bayesian networks (DBN) are powerful probabilistic representations that model stochastic processes. They consist of a prior network, representing the distribution over the initial variables, and a set of transition networks, representing the transition distribution between variables over time. It was shown that learning complex transition networks, considering both intra- and inter-slice connections, is NP-hard. Therefore, the community has searched for the largest subclass of DBNs for which there is an efficient learning algorithm. We introduce a new polynomial-time algorithm, named bcDBN, for learning optimal DBNs consistent with a breadth-first search (BFS) order. The proposed algorithm considers the set of networks such that each transition network has a bounded in-degree, allowing for p edges from past time slices (inter-slice connections) and k edges from the current time slice (intra-slice connections) consistent with the BFS order induced by the optimal tree-augmented network (tDBN). This approach enlarges the search space of the state-of-the-art tDBN algorithm by a factor exponential in the number of variables. Concerning worst-case time complexity, given a Markov lag m, a set of n random variables ranging over r values, and a set of observations of N individuals over T time steps, the bcDBN algorithm is linear in N, T and m; polynomial in n and r; and exponential in p and k. We assess the bcDBN algorithm on simulated data against tDBN, revealing that it performs well across different experiments.
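The bounded search space can be pictured as follows: for each node, candidate parent sets combine up to p inter-slice parents (any past-slice variable) with up to k intra-slice parents drawn only from the node's predecessors in the BFS order. The sketch below enumerates that space; it is a simplified illustration, not the bcDBN implementation.

```python
from itertools import combinations

def candidate_parent_sets(node, bfs_order, n_vars, p, k):
    """Enumerate (inter_slice, intra_slice) parent-set pairs for `node`:
    up to p parents from the previous time slice and up to k intra-slice
    parents restricted to predecessors in the BFS order (simplified)."""
    past = range(n_vars)                        # inter-slice candidates
    intra = bfs_order[:bfs_order.index(node)]   # BFS-order predecessors only
    for n_inter in range(p + 1):
        for inter in combinations(past, n_inter):
            for n_intra in range(k + 1):
                for cur in combinations(intra, n_intra):
                    yield set(inter), set(cur)
```

For fixed p and k, the number of pairs per node is polynomial in n, consistent with the complexity statement above (polynomial in n, exponential in p and k).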


2009, Vol. 18(05), pp. 739-755
Author(s): Yixin Chen, Dong Hua, Fang Liu

Latent class analysis is a popular statistical learning approach. A major challenge in learning generalized latent class models is the complexity of searching the huge space of models and parameters; the computational cost grows as the model topology becomes more flexible. In this paper, we propose the notion of dominance, which can lead to strong pruning of the search space and a significant reduction of learning complexity, and apply this notion to Generalized Latent Class (GLC) models, a class of Bayesian networks for clustering categorical data. GLC models can address the local dependence problem in latent class analysis by assuming a very general graph structure; however, the flexible topology of GLC leads to a large increase in learning complexity. We first propose the concept of dominance and related theoretical results, which are general for all Bayesian networks. Based on dominance, we propose an efficient learning algorithm for GLC. A core technique for pruning dominated models is regularization, which can eliminate dominated models, leading to significant pruning of the search space. Significant improvements on the model …
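The general flavor of dominance-based pruning can be sketched as a filter that discards any candidate model beaten by another on score without being simpler. The dominance relation below, over (score, parameter count) pairs, is an illustrative stand-in for the paper's more specific relation over GLC structures.

```python
def prune_dominated(models):
    """Keep only non-dominated candidates, where each candidate is a dict
    with a log-score and a parameter count (illustrative representation)."""
    def dominates(a, b):
        # A dominates B: at least as good a score with no more parameters,
        # and strictly better in at least one of the two.
        return (a["score"] >= b["score"] and a["params"] <= b["params"]
                and (a["score"] > b["score"] or a["params"] < b["params"]))
    return [m for m in models if not any(dominates(d, m) for d in models)]
```

For example, `prune_dominated([{"score": -120.4, "params": 12}, {"score": -120.4, "params": 18}])` keeps only the first model: the second is dominated because it matches the score with more parameters.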

