An Order-Independent Algorithm for Learning Chain Graphs

Author(s):  
Mohammad Ali Javidian ◽  
Marco Valtorta ◽  
Pooyan Jamshidi

LWF chain graphs combine directed acyclic graphs and undirected graphs. We propose a PC-like algorithm, called PC4LWF, that finds the structure of chain graphs under the faithfulness assumption and addresses the scalability problem of the algorithm proposed by Studeny (1997). We prove that PC4LWF is order dependent, in the sense that its output can depend on the order in which the variables are given; this order dependence can be very pronounced in high-dimensional settings. We therefore propose two modifications of the PC4LWF algorithm that remove part or all of this order dependence. Simulation results with different sample sizes, network sizes, and p-values demonstrate the competitive performance of the PC4LWF algorithms in comparison with the LCD algorithm proposed by Ma et al. (2008) in low-dimensional settings, and improved performance (with respect to error measures) in high-dimensional settings.
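For intuition, here is a minimal sketch of the order-independent idea behind such modifications, in the spirit of PC-stable: adjacency sets are frozen at the start of every level, so edge deletions within a level cannot influence later tests. The ci_test callable, the level cap, and the data layout are illustrative assumptions, not the authors' implementation.

```python
from itertools import combinations

def skeleton_stable(nodes, ci_test, max_level=3):
    """Order-independent skeleton search (PC-stable style sketch).

    ci_test(x, y, S) -> True if x and y are judged independent given set S.
    """
    adj = {v: set(nodes) - {v} for v in nodes}   # start from the complete graph
    sepset = {}
    for level in range(max_level + 1):
        # Freeze the adjacency sets for this level: every edge is tested
        # against the same neighbourhoods, so the result no longer depends
        # on the order in which variables (edges) are visited.
        frozen = {v: set(adj[v]) for v in nodes}
        for x in nodes:
            for y in list(adj[x]):
                candidates = frozen[x] - {y}
                if len(candidates) < level:
                    continue
                for S in combinations(sorted(candidates), level):
                    if ci_test(x, y, set(S)):
                        adj[x].discard(y)
                        adj[y].discard(x)
                        sepset[frozenset((x, y))] = set(S)
                        break
    return adj, sepset
```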

2012 ◽  
Vol 40 (1) ◽  
pp. 294-321 ◽  
Author(s):  
Diego Colombo ◽  
Marloes H. Maathuis ◽  
Markus Kalisch ◽  
Thomas S. Richardson

2021 ◽  
Author(s):  
Víthor Rosa Franco ◽  
Guilherme Wang Barros ◽  
Marie Wiberg ◽  
Jacob Arie Laros

Graph reduction refers to a class of procedures that decrease the dimensionality of a given graph, with the properties of the reduced graph induced from the properties of the larger original graph. This paper introduces both a new method for reducing chain graphs to simpler directed acyclic graphs (DAGs), which we call power chain graphs (PCGs), and a procedure for learning the structure of this new type of graph from correlational data of a Gaussian graphical model (GGM). A definition of PCGs is given, followed directly by the reduction method. The structure learning procedure is a two-step approach: first, the correlation matrix is used to cluster the variables; then, the averaged correlation matrix is used to discover the DAGs using the PC-stable algorithm (see the sketch below). Simulation results are provided to illustrate the theoretical proposal and give initial evidence for the validity of our procedure in recovering the structure of power chain graphs. The paper ends with a discussion of suggestions for future studies as well as some practical implications.
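A rough sketch of the two-step procedure under illustrative assumptions (hierarchical clustering for the grouping step, simple block averaging, and an off-the-shelf PC-stable implementation for the final DAG; the helper names and parameters are ours, not necessarily the paper's):

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def cluster_then_average(corr, n_clusters):
    """Step 1: cluster variables from the correlation matrix.
    Step 2: build the averaged (cluster-level) correlation matrix."""
    # Turn correlations into a distance and cluster hierarchically.
    dist = 1.0 - np.abs(corr)
    np.fill_diagonal(dist, 0.0)
    labels = fcluster(linkage(squareform(dist, checks=False), method="average"),
                      t=n_clusters, criterion="maxclust")
    # Average the between-cluster correlations (assumes non-empty clusters).
    k = n_clusters
    avg = np.eye(k)
    for a in range(1, k + 1):
        for b in range(a + 1, k + 1):
            block = corr[np.ix_(labels == a, labels == b)]
            avg[a - 1, b - 1] = avg[b - 1, a - 1] = block.mean()
    return labels, avg

# The averaged matrix would then be handed to a PC-stable implementation
# (e.g. pcalg's pc() with skel.method = "stable", or an equivalent Python
# routine) to recover the DAG over the clusters.
```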


Entropy ◽  
2019 ◽  
Vol 21 (10) ◽  
pp. 975
Author(s):  
Aleksander Wieczorek ◽  
Volker Roth

Modelling causal relationships has become popular across various disciplines. The most common frameworks for causality are Pearlian causal directed acyclic graphs (DAGs) and the Neyman-Rubin potential outcome framework. In this paper, we propose an information-theoretic framework for causal effect quantification. To this end, we formulate a two-step causal deduction procedure in the Pearl and Rubin frameworks and introduce an equivalent formulation that uses information-theoretic terms only. The first step of the procedure consists of ensuring no confounding or finding an adjustment set with directed information. In the second step, the causal effect is quantified. We subsequently unify previous definitions of directed information present in the literature and clarify the confusion surrounding them. We also motivate the use of chain graphs for directed information in time series and extend our approach to chain graphs. The proposed approach serves as a translation between causality modelling and information theory.
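For reference, one common (Massey-style) form of directed information from a sequence $X^n$ to a sequence $Y^n$, among the definitions the paper unifies, is
$$ I(X^n \to Y^n) \;=\; \sum_{i=1}^{n} I\!\left(X^i; Y_i \mid Y^{i-1}\right), $$
where $X^i = (X_1, \dots, X_i)$; the one-sided conditioning on the past of $Y$ is what distinguishes it from the symmetric mutual information $I(X^n; Y^n)$.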


Biostatistics ◽  
2018 ◽  
Vol 21 (4) ◽  
pp. 659-675
Author(s):  
Min Jin Ha ◽  
Wei Sun

Summary: Directed acyclic graphs (DAGs) have been used to describe causal relationships between variables. The standard method for determining such relations uses interventional data. For complex systems with high-dimensional data, however, such interventional data are often not available. Therefore, it is desirable to estimate causal structure from observational data without subjecting variables to interventions. Observational data can be used to estimate the skeleton of a DAG and the directions of a limited number of edges. We develop a Bayesian framework to estimate a DAG using surrogate interventional data, where the interventions are applied to a set of external variables and are thus considered surrogate interventions on the variables of interest. Our work is motivated by expression quantitative trait locus (eQTL) studies, where the variables of interest are the expression levels of genes, the external variables are DNA variations, and interventions are applied to DNA variants during the process by which a randomly selected DNA allele is passed to a child from either parent. Our method, surrogate intervention recovery of a DAG (sirDAG), first constructs a DAG skeleton using penalized regressions and subsequent partial correlation tests, and then estimates the posterior probabilities of all the edge directions after incorporating DNA variant data. We demonstrate the utility of sirDAG by simulation and by an application to an eQTL study of 550 breast cancer patients.
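A toy sketch of the skeleton step described above (node-wise penalized regression followed by partial correlation tests); the lasso variant, the Fisher z test, and the significance threshold are illustrative assumptions rather than the sirDAG defaults:

```python
import numpy as np
from scipy import stats
from sklearn.linear_model import LassoCV

def skeleton_via_penalized_regression(X, alpha=0.01):
    """X: n x p expression matrix (assumes n > p + 1). Returns a p x p boolean skeleton."""
    n, p = X.shape
    candidate = np.zeros((p, p), dtype=bool)
    # Node-wise lasso: j is a candidate neighbour of i if its coefficient
    # is non-zero in the regression of gene i on all other genes.
    for i in range(p):
        others = [j for j in range(p) if j != i]
        coef = LassoCV(cv=5).fit(X[:, others], X[:, i]).coef_
        for j, c in zip(others, coef):
            candidate[i, j] = (c != 0)
    candidate = candidate | candidate.T          # symmetrize ("or" rule)
    # Prune with partial correlation tests given all remaining variables,
    # using the Fisher z transform.
    prec = np.linalg.pinv(np.corrcoef(X, rowvar=False))
    skeleton = np.zeros_like(candidate)
    for i in range(p):
        for j in range(i + 1, p):
            if not candidate[i, j]:
                continue
            r = -prec[i, j] / np.sqrt(prec[i, i] * prec[j, j])
            z = 0.5 * np.log((1 + r) / (1 - r)) * np.sqrt(n - p - 1)
            if 2 * (1 - stats.norm.cdf(abs(z))) < alpha:
                skeleton[i, j] = skeleton[j, i] = True
    return skeleton
```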


Entropy ◽  
2021 ◽  
Vol 23 (7) ◽  
pp. 887
Author(s):  
An-An Lu ◽  
Yan Chen ◽  
Xiqi Gao

In this paper, we propose a novel broad coverage precoder design for three-dimensional (3D) massive multiple-input multiple-output (MIMO) systems equipped with large uniform planar arrays (UPAs). The desired two-dimensional (2D) angle power spectrum is assumed to be separable. We adopt the per-antenna constant power constraint and the semi-unitary constraint, both of which are widely used in the literature. For a conventional broad coverage precoder design, the dimension of the optimization space is the product of the number of antennas at the base station (BS) and the number of transmit streams. With the proposed method, the design of the high-dimensional precoding matrices is reduced to that of a set of low-dimensional orthonormal vectors and a pair of low-dimensional vectors, whose dimensions are the number of antennas per column and per row of the UPA, respectively. We then use optimization methods to generate the set of orthonormal vectors and the pair of vectors. Finally, simulation results show that the proposed broad coverage precoding matrices achieve nearly the same performance as the conventional broad coverage precoder with much lower computational complexity.
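A small numerical sketch of the dimension bookkeeping described above, assuming a Kronecker-type composition of the per-column and per-row vectors into the full precoding matrix (the composition rule and the sizes are illustrative assumptions, not the paper's exact construction):

```python
import numpy as np

# Uniform planar array: M_v antennas per column, M_h antennas per row.
M_v, M_h, S = 8, 16, 4                        # S = number of transmit streams
rng = np.random.default_rng(0)

# Low-dimensional pieces: a set of S orthonormal M_h-vectors and a pair of
# constant-modulus M_v-vectors (M_h*S + 2*M_v parameters in total, instead of
# a full (M_v*M_h) x S matrix).
U, _ = np.linalg.qr(rng.standard_normal((M_h, S)) + 1j * rng.standard_normal((M_h, S)))
v = [np.exp(1j * rng.uniform(0, 2 * np.pi, M_v)) / np.sqrt(M_v) for _ in range(2)]

# Compose the full (M_v*M_h) x S precoder column by column with Kronecker
# products, alternating between the two column-direction vectors.
W = np.stack([np.kron(v[s % 2], U[:, s]) for s in range(S)], axis=1)

print(W.shape)   # (128, 4): full-size precoder assembled from low-dimensional parts
```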

