STRUCTURE-LEARNING OF CAUSAL BAYESIAN NETWORKS BASED ON ADJACENT NODES

2013 ◽  
Vol 22 (02) ◽  
pp. 1350005 ◽  
Author(s):  
XIA LIU ◽  
YOULONG YANG ◽  
MINGMIN ZHU

Due to the infeasibility of randomized controlled experiments, the existence of unobserved variables, and the fact that equivalent directed acyclic graphs generally cannot be distinguished from observational data, it is difficult to learn the true causal relations of the original graph. This paper presents an algorithm called BSPC, based on adjacent nodes, for learning the structure of causal Bayesian networks with unobserved variables from observational data. Unlike the existing algorithms FCI and MBCS*, it does not have to adjust the structure, and it is guaranteed to obtain the true adjacent nodes. More importantly, BSPC reduces computational complexity and improves the reliability of conditional independence tests. Theoretical results show that the new algorithm is correct. In addition, simulation experiments illustrate the advantages of BSPC in terms of the number of conditional independence tests and the number of orientation errors, showing that it is better suited to learning the structure of causal Bayesian networks with latent variables. Moreover, a better representation of the latent structure is returned.
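BSPC itself is not reproduced in the abstract. As background, the following is a minimal sketch of the generic constraint-based skeleton search (in the style of the PC family) on which such algorithms build: start from a complete graph and delete an edge whenever its endpoints test as conditionally independent given some subset of their current neighbors. It assumes Gaussian data and Fisher-z partial-correlation tests; function names and the fixed critical value are illustrative, not from the paper.

```python
# Sketch of a PC-style skeleton search, assuming linear-Gaussian data.
import itertools
import math

import numpy as np


def fisher_z_independent(corr, n, i, j, cond):
    """Test X_i independent of X_j given cond, via partial correlation
    and Fisher's z transform (critical value 2.58, i.e. alpha ~ 0.01)."""
    idx = [i, j] + list(cond)
    sub = corr[np.ix_(idx, idx)]
    prec = np.linalg.pinv(sub)
    r = -prec[0, 1] / math.sqrt(prec[0, 0] * prec[1, 1])
    r = min(max(r, -0.999999), 0.999999)
    z = 0.5 * math.log((1 + r) / (1 - r))
    stat = math.sqrt(n - len(cond) - 3) * abs(z)
    return stat < 2.58


def skeleton(data):
    """Return the undirected skeleton as an adjacency dict."""
    n, p = data.shape
    corr = np.corrcoef(data, rowvar=False)
    adj = {v: set(range(p)) - {v} for v in range(p)}
    depth = 0
    # Grow the conditioning-set size until no node has enough neighbors.
    while any(len(adj[v]) - 1 >= depth for v in adj):
        for i in range(p):
            for j in list(adj[i]):
                others = adj[i] - {j}
                if len(others) < depth:
                    continue
                for cond in itertools.combinations(others, depth):
                    if fisher_z_independent(corr, n, i, j, cond):
                        adj[i].discard(j)
                        adj[j].discard(i)
                        break
        depth += 1
    return adj
```

On a linear chain X -> Y -> Z, the X-Z edge is removed at depth 1 because X and Z are independent given Y, while the X-Y and Y-Z edges survive.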

2009 ◽  
Vol 35 ◽  
pp. 449-484 ◽  
Author(s):  
F. Bromberg ◽  
D. Margaritis ◽  
V. Honavar

We present two algorithms for learning the structure of a Markov network from data: GSMN* and GSIMN. Both algorithms use statistical independence tests to infer the structure by successively constraining the set of structures consistent with the results of these tests. Until very recently, algorithms for structure learning were based on maximum likelihood estimation, which has been proved to be NP-hard for Markov networks due to the difficulty of estimating the parameters of the network, needed for the computation of the data likelihood. The independence-based approach does not require the computation of the likelihood, and thus both GSMN* and GSIMN can compute the structure efficiently (as shown in our experiments). GSMN* is an adaptation of the Grow-Shrink algorithm of Margaritis and Thrun for learning the structure of Bayesian networks. GSIMN extends GSMN* by additionally exploiting Pearl's well-known properties of the conditional independence relation to infer novel independences from known ones, thus avoiding the statistical tests that would otherwise be needed to establish them. To accomplish this efficiently, GSIMN uses the Triangle theorem, also introduced in this work, which is a simplified version of the set of Markov axioms. Experimental comparisons on artificial and real-world data sets show that GSIMN can yield significant savings with respect to GSMN*, while generating a Markov network of comparable or, in some cases, improved quality. We also compare GSIMN to a forward-chaining implementation, called GSIMN-FCH, that produces all possible conditional independences resulting from repeatedly applying Pearl's theorems to the known conditional independence tests. The results of this comparison show that GSIMN, by the sole use of the Triangle theorem, is nearly optimal in terms of the set of independences that it infers.
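The Triangle theorem itself is not stated in the abstract. The following is a minimal, hypothetical sketch of the underlying caching idea: store test results and close them under two of Pearl's properties (symmetry and decomposition) so that some independences are inferred rather than re-tested. The class and method names are illustrative, not from the paper.

```python
# Hypothetical cache of conditional-independence test results, closed under
# the symmetry and decomposition properties of conditional independence.
class IndependenceCache:
    def __init__(self):
        self.known = {}  # (x, y, cond) -> True if independent, False if not

    @staticmethod
    def _key(x, y, cond):
        a, b = sorted((x, y))  # symmetry: I(X,Y|Z) iff I(Y,X|Z)
        return (a, b, frozenset(cond))

    def record(self, x, y, cond, independent):
        """Store the outcome of a pairwise test."""
        self.known[self._key(x, y, cond)] = independent

    def record_joint(self, x, ys, cond, independent):
        """Store I(X, {Y1,...,Yk} | Z); if it holds, decomposition
        yields each pairwise I(X, Yi | Z) without further tests."""
        if independent:
            for yi in ys:
                self.known[self._key(x, yi, cond)] = True

    def query(self, x, y, cond):
        """Return a cached/inferred result, or None if a test must be run."""
        return self.known.get(self._key(x, y, cond))
```

A structure-learning loop would call `query` before every statistical test and fall back to the data only on a `None` result, which is the source of GSIMN's savings over GSMN*.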


2021 ◽  
Author(s):  
Víthor Rosa Franco ◽  
Guilherme Wang Barros ◽  
Marie Wiberg ◽  
Jacob Arie Laros

Reduction of graphs is a class of procedures used to decrease the dimensionality of a given graph, in which the properties of the reduced graph are induced from the properties of the larger original graph. This paper introduces both a new method for reducing chain graphs to simpler directed acyclic graphs (DAGs), which we call power chain graphs (PCGs), and a procedure for structure learning of this new type of graph from correlational data of a Gaussian graphical model (GGM). A definition of PCGs is given, directly followed by the reduction method. The structure learning procedure is a two-step approach: first, the correlation matrix is used to cluster the variables; then, the averaged correlation matrix is used to discover the DAG using the PC-stable algorithm. Simulation results are provided to illustrate the theoretical proposal, giving initial evidence for the validity of our procedure in recovering the structure of power chain graphs. The paper ends with a discussion of suggestions for future studies as well as some practical implications.
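The averaging step of the two-step procedure can be sketched as follows: given a clustering of the observed variables (step one, assumed already computed here), collapse the variable-level correlation matrix into a cluster-level matrix by averaging the between-cluster correlations. Running PC-stable on the result (step two) is left to an external causal-discovery library and not shown; the function name is illustrative, not from the paper.

```python
# Sketch of the cluster-averaging step, assuming the clustering is given.
import numpy as np


def averaged_correlation(corr, clusters):
    """corr: (p, p) variable-level correlation matrix.
    clusters: list of lists of variable indices, one list per cluster.
    Returns a (k, k) cluster-level matrix of mean between-cluster correlations."""
    k = len(clusters)
    out = np.eye(k)  # unit diagonal, as in a correlation matrix
    for a in range(k):
        for b in range(a + 1, k):
            block = corr[np.ix_(clusters[a], clusters[b])]
            out[a, b] = out[b, a] = block.mean()
    return out
```

For example, with four variables in two clusters {0, 1} and {2, 3}, the off-diagonal entry of the 2x2 result is simply the mean of the four cross-cluster correlations.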


2018 ◽  
Vol 34 (2) ◽  
pp. 713-742 ◽  
Author(s):  
Xiao Guo ◽  
Hai Zhang ◽  
Yao Wang ◽  
Yong Liang
