conditional independence
Recently Published Documents

TOTAL DOCUMENTS: 480 (FIVE YEARS: 96)
H-INDEX: 34 (FIVE YEARS: 3)

2022, Vol 44 (1), pp. 1-54
Author(s): Maria I. Gorinova, Andrew D. Gordon, Charles Sutton, Matthijs Vákár

A central goal of probabilistic programming languages (PPLs) is to separate modelling from inference. However, this goal is hard to achieve in practice. Users are often forced to re-write their models to improve efficiency of inference or meet restrictions imposed by the PPL. Conditional independence (CI) relationships among parameters are a crucial aspect of probabilistic models that capture a qualitative summary of the specified model and can facilitate more efficient inference. We present an information flow type system for probabilistic programming that captures conditional independence (CI) relationships and show that, for a well-typed program in our system, the distribution it implements is guaranteed to have certain CI-relationships. Further, by using type inference, we can statically deduce which CI-properties are present in a specified model. As a practical application, we consider the problem of how to perform inference on models with mixed discrete and continuous parameters. Inference on such models is challenging in many existing PPLs, but can be improved through a workaround, where the discrete parameters are used implicitly, at the expense of manual model re-writing. We present a source-to-source semantics-preserving transformation, which uses our CI-type system to automate this workaround by eliminating the discrete parameters from a probabilistic program. The resulting program can be seen as a hybrid inference algorithm on the original program, where continuous parameters can be drawn using efficient gradient-based inference methods, while the discrete parameters are inferred using variable elimination. We implement our CI-type system and its example application in SlicStan: a compositional variant of Stan.
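To make the workaround concrete, the following is a minimal sketch (in Python/NumPy, not SlicStan or Stan, and not the paper's transformation) of what eliminating a discrete parameter looks like for a two-component Gaussian mixture: the assignment z_n is summed out with a log-sum-exp, leaving a log density that is differentiable in the continuous parameters, and p(z_n | rest) can be recovered afterwards. The model, names, and priors are illustrative assumptions.

```python
# A minimal sketch of the "marginalise the discrete parameter" workaround:
# a two-component Gaussian mixture in which the assignment z_n is summed out,
# leaving a log density differentiable in the continuous parameters (mu, theta).
import numpy as np
from scipy.special import logsumexp
from scipy.stats import norm

def log_joint_marginalised(y, mu, theta):
    """log p(y, mu | theta) with z summed out: sum_n log sum_k pi_k N(y_n | mu_k, 1)."""
    log_pi = np.log(np.array([theta, 1.0 - theta]))             # mixture weights
    comp = norm.logpdf(y[:, None], loc=mu[None, :], scale=1.0)  # N x K component log-densities
    log_prior = norm.logpdf(mu, loc=0.0, scale=5.0).sum()       # prior on the continuous mu
    return log_prior + logsumexp(log_pi[None, :] + comp, axis=1).sum()

def posterior_z(y, mu, theta):
    """Recover p(z_n = k | y_n, mu, theta) afterwards, as variable elimination would."""
    log_pi = np.log(np.array([theta, 1.0 - theta]))
    comp = norm.logpdf(y[:, None], loc=mu[None, :], scale=1.0)
    logits = log_pi[None, :] + comp
    return np.exp(logits - logsumexp(logits, axis=1, keepdims=True))

y = np.array([-2.1, -1.9, 2.0, 2.2])
print(log_joint_marginalised(y, mu=np.array([-2.0, 2.0]), theta=0.5))
print(posterior_z(y, mu=np.array([-2.0, 2.0]), theta=0.5))
```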


Symmetry, 2022, Vol 14 (1), pp. 149
Author(s): Waqar Khan, Lingfu Kong, Brekhna Brekhna, Ling Wang, Huigui Yan

Streaming feature selection has always been an excellent method for selecting a relevant subset of features from high-dimensional data and overcoming learning complexity. However, little attention has been paid to online feature selection through the Markov Blanket (MB). Several studies based on traditional MB learning reported low prediction accuracy and used few datasets, because the number of conditional independence tests is high and consumes considerable time. This paper presents a novel algorithm called Online Feature Selection Via Markov Blanket (OFSVMB), based on a statistical conditional independence test, that offers high accuracy and lower computation time. It reduces the number of conditional independence tests and incorporates online relevance and redundancy analysis to check the relevancy between an upcoming feature and the target variable T, discard redundant features from the Parents-Child (PC) and Spouses (SP) sets online, and find PC and SP simultaneously. The performance of OFSVMB is compared with traditional MB learning algorithms, including IAMB, STMB, HITON-MB, BAMB, and EEMB, and with streaming feature selection algorithms, including OSFS, Alpha-investing, and SAOLA, on 9 benchmark Bayesian Network (BN) datasets and 14 real-world datasets. For the performance evaluation, F1, precision, and recall measures are used with significance levels of 0.01 and 0.05 on the benchmark BN and real-world datasets, and with 12 classifiers at a significance level of 0.01. On benchmark BN datasets with 500 and 5000 sample sizes, OFSVMB achieved significantly higher accuracy than IAMB, STMB, HITON-MB, BAMB, and EEMB in terms of F1, precision, and recall, while running faster. It finds a more accurate MB regardless of the size of the feature set. On real-world datasets, OFSVMB offers substantial improvements in mean prediction accuracy across the 12 classifiers with small and large sample sizes compared to OSFS, Alpha-investing, and SAOLA, but it is slower than these algorithms because they find only the PC set and not the SP set. Furthermore, the sensitivity analysis shows that OFSVMB is more accurate in selecting the optimal features.
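As a rough illustration of the general idea (not the OFSVMB algorithm itself), the sketch below processes features one at a time, keeps those that are dependent on the target, and prunes features that become conditionally independent of the target given the rest. The CI test (Fisher's z on partial correlations), thresholds, and structure are assumptions for illustration only.

```python
# Schematic online relevance/redundancy analysis with a statistical CI test.
import numpy as np
from scipy.stats import norm

def fisher_z_ci_test(data, i, j, cond, alpha=0.01):
    """Return True if column i is (conditionally) independent of column j given cond."""
    idx = [i, j] + list(cond)
    sub = np.corrcoef(data[:, idx], rowvar=False)
    prec = np.linalg.inv(sub)                         # partial correlation via the precision matrix
    r = -prec[0, 1] / np.sqrt(prec[0, 0] * prec[1, 1])
    r = np.clip(r, -0.999999, 0.999999)
    n = data.shape[0]
    z = 0.5 * np.log((1 + r) / (1 - r)) * np.sqrt(max(n - len(cond) - 3, 1))
    p_value = 2 * (1 - norm.cdf(abs(z)))
    return p_value > alpha                            # True -> treat as independent

def online_select(data, target_col, alpha=0.01):
    """Process features one by one; keep those dependent on the target and prune redundancy."""
    selected = []
    for f in range(data.shape[1]):
        if f == target_col:
            continue
        # relevance: discard f if it is marginally independent of the target
        if fisher_z_ci_test(data, f, target_col, [], alpha):
            continue
        selected.append(f)
        # redundancy: drop any kept feature that becomes independent of the target
        for g in list(selected):
            others = [h for h in selected if h != g]
            if fisher_z_ci_test(data, g, target_col, others, alpha):
                selected.remove(g)
    return selected
```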


2021
Author(s): Giovanni Briganti, Marco Scutari, Richard J. McNally

Bayesian Networks are probabilistic graphical models that represent conditional independence relationships among variables as a directed acyclic graph (DAG), where edges can be interpreted as causal effects connecting a cause symptom to an effect symptom. These models can help overcome one of the key limitations of partial correlation networks, whose edges are undirected. This tutorial introduces Bayesian Networks as a tool for identifying admissible causal relationships in cross-sectional data and shows how to estimate these models in R using three families of algorithms, illustrated with an empirical example data set of depressive symptoms. In addition, we discuss common problems and questions related to Bayesian networks. We recommend that Bayesian networks be investigated to gain causal insight into psychological data.
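To illustrate how a DAG encodes conditional independence, here is a small generic sketch (in Python, not the R/bnlearn code the tutorial uses) that checks d-separation via the classical moralised-ancestral-graph criterion: X and Y are d-separated by Z exactly when they are disconnected, after removing Z, in the moral graph of the ancestral subgraph of X ∪ Y ∪ Z. The graph representation and example are assumptions for illustration.

```python
# d-separation check via moralisation of the ancestral graph.
from itertools import combinations

def ancestors(dag, nodes):
    """All nodes with a directed path into `nodes`, plus `nodes` themselves."""
    parents = {v: {u for u in dag if v in dag[u]} for v in dag}
    result, stack = set(nodes), list(nodes)
    while stack:
        for p in parents[stack.pop()]:
            if p not in result:
                result.add(p)
                stack.append(p)
    return result

def d_separated(dag, xs, ys, zs):
    """dag: {node: set(children)}. True iff X is d-separated from Y given Z."""
    keep = ancestors(dag, set(xs) | set(ys) | set(zs))
    # build the undirected (moral) graph on the ancestral subgraph
    adj = {v: set() for v in keep}
    for u in keep:
        for v in dag[u] & keep:
            adj[u].add(v); adj[v].add(u)
    for v in keep:                                    # marry parents of a common child
        pars = [u for u in keep if v in dag[u]]
        for a, b in combinations(pars, 2):
            adj[a].add(b); adj[b].add(a)
    # remove conditioning nodes and test reachability from X to Y
    blocked = set(zs)
    frontier, seen = [x for x in xs if x not in blocked], set(xs)
    while frontier:
        u = frontier.pop()
        if u in ys:
            return False
        for w in adj[u] - blocked - seen:
            seen.add(w); frontier.append(w)
    return True

# Example: in the chain A -> B -> C, A is independent of C given B, but not marginally.
dag = {"A": {"B"}, "B": {"C"}, "C": set()}
print(d_separated(dag, {"A"}, {"C"}, {"B"}))  # True
print(d_separated(dag, {"A"}, {"C"}, set()))  # False
```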


Author(s): Tobias Boege

The gaussoid axioms are conditional independence inference rules which characterize regular Gaussian CI structures over a three-element ground set. It is known that no finite set of inference rules completely describes regular Gaussian CI as the ground set grows. In this article we show that the gaussoid axioms logically imply every inference rule of at most two antecedents which is valid for regular Gaussians over any ground set. The proof is accomplished by exhibiting, for each inclusion-minimal gaussoid extension of at most two CI statements, a regular Gaussian realization. Moreover, we prove that all those gaussoids have rational positive-definite realizations inside every ε-ball around the identity matrix. For the proof we introduce the concept of algebraic Gaussians over arbitrary fields and of positive Gaussians over ordered fields, and obtain the same two-antecedental completeness of the gaussoid axioms for algebraic and positive Gaussians over all fields of characteristic zero as a byproduct.
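For context (an addition here, not part of the abstract), the standard criterion for CI in a regular Gaussian vector, on which this line of work builds, can be stated as follows; Σ denotes a positive-definite covariance matrix on the ground set and the determinant is an almost-principal minor.

```latex
% Standard CI criterion for a regular Gaussian vector X with positive-definite
% covariance \Sigma (added for context; notation is generic).
\[
  X_i \perp\!\!\!\perp X_j \mid X_K
  \;\Longleftrightarrow\;
  \Sigma_{ij} - \Sigma_{iK}\,\Sigma_{KK}^{-1}\,\Sigma_{Kj} = 0
  \;\Longleftrightarrow\;
  \det \Sigma_{\{i\}\cup K,\;\{j\}\cup K} = 0 ,
\]
% where the almost-principal minor has rows indexed by \{i\}\cup K and columns by
% \{j\}\cup K; normalising the middle expression (the partial covariance) gives
% the partial correlation \rho_{ij\cdot K}.
```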


Entropy, 2021, Vol 23 (12), pp. 1571
Author(s): Sainyam Galhotra, Karthikeyan Shanmugam, Prasanna Sattigeri, Kush R. Varshney

The deployment of machine learning (ML) systems in applications with societal impact has motivated the study of fairness for marginalized groups. Often, the protected attribute is absent from the training dataset for legal reasons. However, datasets still contain proxy attributes that capture protected information and can inject unfairness into the ML model. Some deployed systems allow auditors, decision makers, or affected users to report issues or seek recourse by flagging individual samples. In this work, we examine such systems and consider a feedback-based framework in which the protected attribute is unavailable and the flagged samples serve as indirect knowledge. The reported samples are used as guidance to identify the proxy attributes that are causally dependent on the (unknown) protected attribute. We work under the causal interventional fairness paradigm. Without requiring the underlying structural causal model a priori, we propose an approach that performs conditional independence tests on observed data to identify such proxy attributes. We theoretically prove the optimality of our algorithm, bound its complexity, and complement it with an empirical evaluation demonstrating its efficacy on various real-world and synthetic datasets.
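The sketch below is a simplified illustration of the general idea of CI testing on observed data guided by flagged samples; it is not the paper's algorithm. It treats "flagged" as a binary indicator and, for each discrete attribute, tests dependence on the flag indicator conditional on another attribute with a stratified chi-squared test. Column names, the conditioning choice, and thresholds are assumptions.

```python
# Stratified chi-squared CI test used to shortlist candidate proxy attributes.
import pandas as pd
from scipy.stats import chi2, chi2_contingency

def stratified_ci_test(df, x, y, z):
    """p-value for X independent of Y given Z (all discrete), summing chi2 over strata of Z."""
    stat, dof = 0.0, 0
    for _, block in df.groupby(z):
        table = pd.crosstab(block[x], block[y])
        if table.shape[0] < 2 or table.shape[1] < 2:
            continue                                  # stratum carries no information
        s, _, d, _ = chi2_contingency(table)
        stat, dof = stat + s, dof + d
    return 1.0 if dof == 0 else chi2.sf(stat, dof)

def candidate_proxies(df, flag_col, cond_col, alpha=0.05):
    """Attributes that remain dependent on the flag indicator after conditioning."""
    proxies = []
    for col in df.columns:
        if col in (flag_col, cond_col):
            continue
        if stratified_ci_test(df, col, flag_col, cond_col) < alpha:
            proxies.append(col)
    return proxies
```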


2021, pp. 190-212
Author(s): James Davidson

This chapter deals in depth with the concept of conditional expectation. This is defined first in the traditional “naïve” manner, and then using the measure-theoretic approach. A comprehensive set of properties of the conditional expectation is proved, generalizing several results of Ch. 9, and then multiple sub-σ-fields and nesting are considered, concluding with a treatment of conditional distributions and conditional independence.
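For reference, the standard measure-theoretic definitions that such a treatment builds on are stated below (added here; notation is generic and not taken from the chapter).

```latex
% Conditional expectation: for integrable X on (\Omega, \mathcal{F}, P) and a
% sub-\sigma-field \mathcal{G} \subseteq \mathcal{F}, E[X \mid \mathcal{G}] is the
% (a.s. unique) \mathcal{G}-measurable, integrable random variable satisfying
\[
  \int_G E[X \mid \mathcal{G}] \, dP \;=\; \int_G X \, dP
  \qquad \text{for all } G \in \mathcal{G}.
\]
% Conditional independence of sub-\sigma-fields \mathcal{A} and \mathcal{B} given \mathcal{G}:
\[
  \mathcal{A} \perp\!\!\!\perp \mathcal{B} \mid \mathcal{G}
  \;\Longleftrightarrow\;
  P(A \cap B \mid \mathcal{G}) = P(A \mid \mathcal{G})\, P(B \mid \mathcal{G}) \ \text{a.s.}
  \quad \text{for all } A \in \mathcal{A},\ B \in \mathcal{B}.
\]
```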


Entropy, 2021, Vol 23 (11), pp. 1450
Author(s): Ádám Zlatniczki, Marcell Stippinger, Zsigmond Benkő, Zoltán Somogyvári, András Telcs

This work is about observational causal discovery for deterministic and stochastic dynamic systems. We explore what additional knowledge can be gained by the use of standard conditional independence tests and by knowing whether the interacting systems are located in a geodesic space.

