Conditional Independence Test
Recently Published Documents

TOTAL DOCUMENTS: 9 (FIVE YEARS: 7)
H-INDEX: 2 (FIVE YEARS: 1)

Author(s): Hao Zhang, Shuigeng Zhou, Chuanxu Yan, Jihong Guan, Xin Wang

This paper addresses two important issues in causality inference. One is how to reduce redundant conditional independence (CI) tests, which heavily impact the efficiency and accuracy of existing constraint-based methods. The other is how to construct the true causal graph from the set of Markov equivalence classes returned by these methods.

For the first issue, we design a recursive decomposition approach in which the original data (a set of variables) is first decomposed into three small subsets, each of which is then recursively decomposed into three smaller subsets until none of the subsets can be decomposed further. Consequently, redundant CI tests can be reduced by inferring causality from these subsets. The advantage of this decomposition scheme lies in two aspects: 1) it requires only low-order CI tests, and 2) it does not violate d-separation. Thus, the complete causal graph can be reconstructed by merging all the partial results from the subsets.

For the second issue, we employ a regression-based conditional independence test to check CIs in linear non-Gaussian additive-noise cases, which can identify more causal directions by checking x − E(x|Z) ⊥ Z (or y − E(y|Z) ⊥ Z). Therefore, causal direction learning is no longer limited by the number of returned V-structures and the consistent propagation.

Extensive experiments show that the proposed method not only substantially reduces redundant CI tests but also effectively distinguishes the equivalence classes, and is thus superior to state-of-the-art constraint-based methods in causality inference.
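To make the regression-based check concrete, the Python sketch below regresses x on Z by ordinary least squares to estimate E(x|Z) in the linear case and then tests whether the residual x − E(x|Z) is independent of Z. The HSIC statistic, the permutation calibration, and the function names (residual_ci_test, hsic) are illustrative assumptions, not the exact procedure used in the paper.

# A minimal sketch, assuming OLS for E(x|Z) and HSIC with a Gaussian
# kernel plus a permutation test as the independence check.
import numpy as np

def _gram(a, sigma=None):
    # Gaussian-kernel Gram matrix with a median-heuristic bandwidth.
    a = np.atleast_2d(a.T).T                      # ensure shape (n, d)
    sq = np.sum((a[:, None, :] - a[None, :, :]) ** 2, axis=-1)
    if sigma is None:
        med = np.median(sq[sq > 0]) if np.any(sq > 0) else 1.0
        sigma = np.sqrt(0.5 * med)
    return np.exp(-sq / (2.0 * sigma ** 2))

def hsic(u, v):
    # Biased HSIC estimate of dependence between samples u and v.
    n = len(u)
    H = np.eye(n) - np.ones((n, n)) / n
    return np.trace(_gram(u) @ H @ _gram(v) @ H) / n ** 2

def residual_ci_test(x, Z, n_perm=200, seed=0):
    # Permutation p-value for "x - E(x|Z) is independent of Z".
    rng = np.random.default_rng(seed)
    Z1 = np.column_stack([np.ones(len(x)), Z])    # add an intercept
    beta, *_ = np.linalg.lstsq(Z1, x, rcond=None)
    resid = x - Z1 @ beta                         # x - E(x|Z)
    stat = hsic(resid, Z)
    null = [hsic(rng.permutation(resid), Z) for _ in range(n_perm)]
    return float(np.mean(np.array(null) >= stat))

# Toy usage: x depends linearly on z with uniform (non-Gaussian) noise,
# so the residual should look independent of z in the causal direction.
rng = np.random.default_rng(1)
z = rng.uniform(-1.0, 1.0, size=500)
x = 2.0 * z + rng.uniform(-1.0, 1.0, size=500)
print(residual_ci_test(x, z.reshape(-1, 1)))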


2019, Vol 7 (1)
Author(s): Eric V. Strobl, Kun Zhang, Shyam Visweswaran

Abstract: Constraint-based causal discovery (CCD) algorithms require fast and accurate conditional independence (CI) testing. The Kernel Conditional Independence Test (KCIT) is currently one of the most popular CI tests in the non-parametric setting, but many investigators cannot use KCIT with large datasets because the test scales at least quadratically with sample size. We therefore devise two relaxations, called the Randomized Conditional Independence Test (RCIT) and the Randomized conditional Correlation Test (RCoT), which both approximate KCIT by utilizing random Fourier features. In practice, both of the proposed tests scale linearly with sample size and return accurate p-values much faster than KCIT in the large-sample context. CCD algorithms run with RCIT or RCoT also return graphs at least as accurate as those from the same algorithms run with KCIT, but with large reductions in run time.
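As a rough illustration of the random-Fourier-feature idea, the Python sketch below maps x, y, and z to random cosine features, residualizes the features of x and y on those of z by linear regression, and measures the cross-covariance that remains. The feature count, the permutation calibration, and the function names (rff, rcot_like_test) are assumptions for illustration; the published RCIT/RCoT tests calibrate their statistics differently.

# A minimal sketch, assuming Gaussian-kernel random Fourier features and
# a permutation test in place of the paper's analytic approximations.
import numpy as np

def rff(a, n_features=25, rng=None):
    # Random Fourier features approximating a Gaussian kernel on a.
    rng = rng if rng is not None else np.random.default_rng(0)
    a = np.atleast_2d(a.T).T                        # shape (n, d)
    a = (a - a.mean(0)) / (a.std(0) + 1e-12)        # standardize
    W = rng.normal(size=(a.shape[1], n_features))   # random frequencies
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(a @ W + b)

def rcot_like_test(x, y, z, n_perm=200, seed=0):
    # Permutation p-value for "x independent of y given z", based on
    # cross-covariances of residualized random Fourier features.
    rng = np.random.default_rng(seed)
    fx, fy, fz = rff(x, rng=rng), rff(y, rng=rng), rff(z, rng=rng)
    fz1 = np.column_stack([np.ones(len(fz)), fz])
    rx = fx - fz1 @ np.linalg.lstsq(fz1, fx, rcond=None)[0]
    ry = fy - fz1 @ np.linalg.lstsq(fz1, fy, rcond=None)[0]
    n = len(rx)
    stat = np.sum((rx.T @ ry / n) ** 2)
    null = [np.sum((rx[rng.permutation(n)].T @ ry / n) ** 2)
            for _ in range(n_perm)]
    return float(np.mean(np.array(null) >= stat))

# Toy usage: x and y share the common cause z, so they are dependent
# marginally but (approximately) independent given z.
rng = np.random.default_rng(2)
z = rng.normal(size=400)
x = z + 0.5 * rng.normal(size=400)
y = np.tanh(z) + 0.5 * rng.normal(size=400)
print(rcot_like_test(x, y, z))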

