Conditional Independence Test of Failure and Truncation Times: Essential Tool for Method Selection

Author(s):  
Jing Ning ◽  
Daewoo Pak ◽  
Hong Zhu ◽  
Jing Qin

Author(s):  
Hao Zhang ◽  
Shuigeng Zhou ◽  
Chuanxu Yan ◽  
Jihong Guan ◽  
Xin Wang

This paper addresses two important issues in causal inference. One is how to reduce redundant conditional independence (CI) tests, which heavily affect the efficiency and accuracy of existing constraint-based methods. The other is how to construct the true causal graph from the set of Markov equivalence classes returned by these methods. For the first issue, we design a recursive decomposition approach in which the original data (a set of variables) is first decomposed into three small subsets, each of which is then recursively decomposed into three smaller subsets until no subset can be decomposed further. Redundant CI tests can then be reduced by inferring causality within these subsets. This decomposition scheme has two advantages: 1) it requires only low-order CI tests, and 2) it does not violate d-separation, so the complete causal structure can be reconstructed by merging the partial results from the subsets. For the second issue, we employ a regression-based conditional independence test to check CIs in the linear non-Gaussian additive noise case, which can identify more causal directions by checking whether x − E(x|Z) ⊥ Z (or y − E(y|Z) ⊥ Z). Causal direction learning is therefore no longer limited by the number of returned V-structures and consistent propagation. Extensive experiments show that the proposed method not only substantially reduces redundant CI tests but also effectively distinguishes the equivalence classes, and is thus superior to state-of-the-art constraint-based methods for causal inference.
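
A minimal sketch (not the authors' implementation) of the regression-based CI check described above: regress x on the conditioning set Z, then measure dependence between the residual x − E(x|Z) and the other variable with a sample distance correlation. All function names below are illustrative assumptions.

```python
import numpy as np

def ols_residuals(x, Z):
    """Residuals of a linear regression of x on Z (with intercept)."""
    Z1 = np.column_stack([np.ones(len(x)), Z])
    beta, *_ = np.linalg.lstsq(Z1, x, rcond=None)
    return x - Z1 @ beta

def distance_correlation(a, b):
    """Sample distance correlation between two one-dimensional samples."""
    a = np.asarray(a, float).reshape(-1, 1)
    b = np.asarray(b, float).reshape(-1, 1)
    def center(D):
        return D - D.mean(axis=0) - D.mean(axis=1, keepdims=True) + D.mean()
    A = center(np.abs(a - a.T))
    B = center(np.abs(b - b.T))
    dcov2 = (A * B).mean()
    denom = np.sqrt((A * A).mean() * (B * B).mean())
    return np.sqrt(max(dcov2, 0.0) / denom) if denom > 0 else 0.0

def regression_ci_score(x, other, Z):
    """Dependence score between the residual x - E(x|Z) and 'other'."""
    return distance_correlation(ols_residuals(x, Z), other)

# Toy check: Z is a common cause of x and z, so x _||_ z | Z should hold
# and the score should be close to zero.
rng = np.random.default_rng(0)
Z = rng.uniform(-1, 1, size=(500, 1))
x = 2.0 * Z[:, 0] + rng.laplace(size=500)   # non-Gaussian additive noise
z = -1.5 * Z[:, 0] + rng.laplace(size=500)
print(regression_ci_score(x, z, Z))
```

In the linear non-Gaussian additive-noise setting, a near-zero score is consistent with conditional independence, while a clearly positive score in only one regression direction can be used to orient an edge.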


2019 ◽  
Vol 7 (1) ◽  
Author(s):  
Eric V. Strobl ◽  
Kun Zhang ◽  
Shyam Visweswaran

Constraint-based causal discovery (CCD) algorithms require fast and accurate conditional independence (CI) testing. The Kernel Conditional Independence Test (KCIT) is currently one of the most popular CI tests in the non-parametric setting, but many investigators cannot use KCIT with large datasets because the test scales at least quadratically with sample size. We therefore devise two relaxations, the Randomized Conditional Independence Test (RCIT) and the Randomized conditional Correlation Test (RCoT), which both approximate KCIT by utilizing random Fourier features. In practice, both of the proposed tests scale linearly with sample size and return accurate p-values much faster than KCIT when sample sizes are large. CCD algorithms run with RCIT or RCoT also return graphs at least as accurate as the same algorithms run with KCIT, but with large reductions in run time.
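
A minimal sketch of the random-Fourier-feature idea underlying RCIT/RCoT, assuming an RBF kernel: map each variable into a low-dimensional random feature space, regress the features of x and y on the features of z, and measure the cross-covariance of the residuals. This is not the authors' implementation, and it omits the null-distribution approximation needed for calibrated p-values.

```python
import numpy as np

def rff(x, num_features=25, sigma=1.0, rng=None):
    """Random Fourier features approximating an RBF kernel of bandwidth sigma."""
    rng = rng if rng is not None else np.random.default_rng(0)
    X = np.asarray(x, float).reshape(len(x), -1)
    W = rng.normal(scale=1.0 / sigma, size=(X.shape[1], num_features))
    b = rng.uniform(0.0, 2.0 * np.pi, num_features)
    return np.sqrt(2.0 / num_features) * np.cos(X @ W + b)

def residualize(F, G):
    """Residuals of each column of F after linear regression on G (with intercept)."""
    G1 = np.column_stack([np.ones(len(G)), G])
    beta, *_ = np.linalg.lstsq(G1, F, rcond=None)
    return F - G1 @ beta

def rff_ci_statistic(x, y, z, num_features=25):
    """Sum of squared cross-covariances between feature residuals of x and y given z."""
    fx, fy, fz = rff(x, num_features), rff(y, num_features), rff(z, num_features)
    rx, ry = residualize(fx, fz), residualize(fy, fz)
    C = rx.T @ ry / len(rx)        # cross-covariance in the random feature space
    return float((C ** 2).sum())   # larger values indicate conditional dependence

# Toy check: x and y depend on each other only through z, so the statistic
# should be small.
rng = np.random.default_rng(1)
z = rng.normal(size=1000)
x = np.sin(z) + 0.3 * rng.normal(size=1000)
y = z ** 2 + 0.3 * rng.normal(size=1000)
print(rff_ci_statistic(x, y, z))
```

Because the feature dimension is fixed, the cost of the regression and cross-covariance steps grows linearly with sample size, which is the source of the speedup over kernel matrices of size n × n.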


Author(s):  
Pengfei Liu ◽  
Xuejun Ma ◽  
Wang Zhou

We construct a high-order conditional distance covariance, which generalizes the notion of conditional distance covariance. The joint conditional distance covariance is defined as a linear combination of conditional distance covariances and can capture the joint dependence of several random vectors given one vector. Furthermore, we develop a new conditional independence test based on the joint conditional distance covariance. Simulation results indicate that the proposed method is very effective. We also apply our method, combined with a Gaussian graphical model, to analyze the relationships among PM2.5 levels in five Chinese cities: Beijing, Tianjin, Jinan, Tangshan, and Qinhuangdao.
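
The joint conditional distance covariance itself is not reproduced here; the sketch below only illustrates the Gaussian-graphical-model step mentioned at the end of the abstract, using scikit-learn's graphical lasso on stand-in data, with the five city names serving purely as column labels.

```python
import numpy as np
import pandas as pd
from sklearn.covariance import GraphicalLassoCV

cities = ["Beijing", "Tianjin", "Jinan", "Tangshan", "Qinhuangdao"]

# Stand-in data only: correlated pseudo "PM2.5" readings driven by a shared
# regional signal; replace with the real daily measurements for the five cities.
rng = np.random.default_rng(0)
regional = rng.normal(size=(365, 1))
X = pd.DataFrame(regional + 0.5 * rng.normal(size=(365, 5)), columns=cities)

model = GraphicalLassoCV().fit(X.values)
precision = pd.DataFrame(model.precision_, index=cities, columns=cities)

# Nonzero off-diagonal entries of the estimated precision matrix suggest
# conditional dependence between two cities given the remaining three.
print(precision.round(2))
```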

