The Inflation Technique Completely Solves the Causal Compatibility Problem

2020 ◽  
Vol 8 (1) ◽  
pp. 70-91 ◽  
Author(s):  
Miguel Navascués ◽  
Elie Wolfe

Abstract The causal compatibility question asks whether a given causal structure graph — possibly involving latent variables — constitutes a genuinely plausible causal explanation for a given probability distribution over the graph’s observed categorical variables. Algorithms predicated on merely necessary constraints for causal compatibility typically suffer from false negatives, i.e., they admit incompatible distributions as apparently compatible with the given graph. In earlier work (doi:10.1515/jci-2017-0020), one of us introduced the inflation technique for formulating useful relaxations of the causal compatibility problem in terms of linear programming. In this work, we develop a formal hierarchy of such causal compatibility relaxations. We prove that inflation is asymptotically tight, i.e., that the hierarchy converges to a zero-error test for causal compatibility. In this sense, the inflation technique fulfills a longstanding desideratum in the field of causal inference. We quantify the rate of convergence by showing that any distribution which passes the $n$th-order inflation test must be $O(n^{-1/2})$-close in Euclidean norm to some distribution genuinely compatible with the given causal structure. Furthermore, we show that for many causal structures, the (unrelaxed) causal compatibility problem is faithfully formulated already by either the first- or second-order inflation test.
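The compatibility tests in this hierarchy are linear programs. As a minimal, illustrative sketch of how a compatibility question reduces to LP feasibility (this is not the inflation LP itself, which operates on inflated copies of the variables, and the function name `marginals_feasible` is hypothetical), the following snippet uses SciPy to ask whether given pairwise marginals of three binary variables admit any joint distribution:

```python
import itertools
import numpy as np
from scipy.optimize import linprog

def marginals_feasible(pair_marginals):
    """LP feasibility: do three binary variables admit any joint distribution
    with the given pairwise marginals?
    pair_marginals: {(i, j): 2x2 table} for the pairs (0,1), (0,2), (1,2)."""
    states = list(itertools.product([0, 1], repeat=3))   # joint outcomes (a, b, c)
    A_eq, b_eq = [], []
    for (i, j), table in pair_marginals.items():
        for u in (0, 1):
            for v in (0, 1):
                A_eq.append([1.0 if (s[i] == u and s[j] == v) else 0.0
                             for s in states])
                b_eq.append(table[u][v])
    A_eq.append([1.0] * len(states))                     # normalisation
    b_eq.append(1.0)
    res = linprog(c=np.zeros(len(states)), A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, 1)] * len(states), method="highs")
    return res.status == 0                               # 0 means feasible

same = [[0.5, 0.0], [0.0, 0.5]]   # pair always equal
diff = [[0.0, 0.5], [0.5, 0.0]]   # pair always different
print(marginals_feasible({(0, 1): same, (0, 2): same, (1, 2): same}))  # True
print(marginals_feasible({(0, 1): diff, (0, 2): diff, (1, 2): diff}))  # False
```

The second call is infeasible for the familiar reason that three binary variables cannot be pairwise perfectly anti-correlated.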

2019 ◽  
Vol 44 (4) ◽  
pp. 367-389 ◽  
Author(s):  
Yongnam Kim

Suppression effects in multiple linear regression are among the most elusive phenomena in the educational and psychological measurement literature. The question is: how can including a variable that is completely unrelated to the criterion variable significantly increase a regression model's predictive power? In this article, we view suppression from a causal perspective and uncover the causal structure of suppressor variables. Using causal discovery algorithms, we show that classical suppressors as defined by Horst (1941) are generated from causal structures that reveal the equivalence between suppressors and instrumental variables. Although the educational and psychological measurement literature has long recommended that researchers include suppressors in regression models, the causal inference literature has recently recommended that researchers exclude instrumental variables. The conflicting views of the two disciplines can be resolved by considering the different purposes of statistical models: prediction versus causal explanation.
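A classical suppression effect of this kind is easy to reproduce in simulation. In the hedged sketch below (variable names and coefficients are illustrative, not taken from the article), the suppressor `x2` is correlated with the measurement error of the predictor but essentially uncorrelated with the criterion, yet including it roughly doubles R-squared:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
t = rng.standard_normal(n)              # true score driving the criterion
e = rng.standard_normal(n)              # measurement error of the predictor
y = t                                   # criterion: depends only on the true score
x1 = t + e                              # fallible predictor
x2 = e + 0.1 * rng.standard_normal(n)   # suppressor: tracks the error, not y

def r_squared(y, predictors):
    X = np.column_stack([np.ones(len(y))] + list(predictors))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return 1.0 - (y - X @ beta).var() / y.var()

print(round(np.corrcoef(x2, y)[0, 1], 3))    # ~0.0: x2 is unrelated to y
print(round(r_squared(y, [x1]), 2))          # ~0.50
print(round(r_squared(y, [x1, x2]), 2))      # ~0.99
```

The jump occurs because the regression can use `x2` to subtract the error component from `x1`, which is exactly the instrumental-variable-like structure the article describes.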


Quantum ◽  
2020 ◽  
Vol 4 ◽  
pp. 236 ◽  
Author(s):  
Mirjam Weilenmann ◽  
Roger Colbeck

Causal structures give us a way to understand the origin of observed correlations. These were developed for classical scenarios, but quantum mechanical experiments necessitate their generalisation. Here we study causal structures in a broad range of theories, which include both quantum and classical theory as special cases. We propose a method for analysing differences between such theories based on the so-called measurement entropy. We apply this method to several causal structures, deriving new relations that separate classical, quantum and more general theories within these causal structures. The constraints we derive for the most general theories are in a sense minimal requirements of any causal explanation in these scenarios. In addition, we make several technical contributions that give insight into the entropic analysis of quantum causal structures. In particular, we prove that for any causal structure and for any generalised probabilistic theory, the set of achievable entropy vectors forms a convex cone.
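The convexity result concerns vectors collecting the entropies of every subset of observed variables. For the classical (Shannon) case such an entropy vector can be computed directly; the sketch below (illustrative code, not from the paper) builds the Shannon entropy vector of a joint distribution and checks submodularity, one of the inequalities bounding the classical entropy cone:

```python
import itertools
import numpy as np

def entropy_vector(p):
    """Shannon entropy H(S), in bits, for every nonempty subset S of the
    variables of a joint pmf p (one array axis per variable)."""
    n = p.ndim
    H = {}
    for r in range(1, n + 1):
        for S in itertools.combinations(range(n), r):
            axes = tuple(i for i in range(n) if i not in S)
            m = p.sum(axis=axes).ravel()   # marginal on the subset S
            m = m[m > 0]
            H[S] = float(-(m * np.log2(m)).sum())
    return H

# three perfectly correlated bits (a classical GHZ-type distribution)
p = np.zeros((2, 2, 2))
p[0, 0, 0] = p[1, 1, 1] = 0.5
H = entropy_vector(p)
print(H[(0,)], H[(0, 1)], H[(0, 1, 2)])   # 1.0 1.0 1.0

# submodularity: H(AB) + H(BC) >= H(B) + H(ABC)
assert H[(0, 1)] + H[(1, 2)] >= H[(1,)] + H[(0, 1, 2)] - 1e-9
```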


Entropy ◽  
2021 ◽  
Vol 23 (1) ◽  
pp. 114
Author(s):  
Michael Silberstein ◽  
William Mark Stuckey ◽  
Timothy McDevitt

Our account provides a local, realist and fully non-causal principle explanation for EPR correlations, contextuality, no-signalling, and the Tsirelson bound. Indeed, the account herein is fully consistent with the causal structure of Minkowski spacetime. We argue that retrocausal accounts of quantum mechanics are problematic precisely because they do not fully transcend the assumption that causal or constructive explanation must always be fundamental. Unlike retrocausal accounts, our principle explanation is a complete rejection of Reichenbach’s Principle. Furthermore, we argue that the basis for our principle account of quantum mechanics is the physical principle sought by quantum information theorists for their reconstructions of quantum mechanics. Finally, we explain why our account is both fully realist and psi-epistemic.


2021 ◽  
Vol 12 (1) ◽  
Author(s):  
Jonathan Barrett ◽  
Robin Lorenz ◽  
Ognyan Oreshkov

Abstract Causal reasoning is essential to science, yet quantum theory challenges it. Quantum correlations violating Bell inequalities defy satisfactory causal explanations within the framework of classical causal models. What is more, a theory encompassing quantum systems and gravity is expected to allow causally nonseparable processes featuring operations in indefinite causal order, defying the assumption that events are causally ordered at all. The first challenge has been addressed through the recent development of intrinsically quantum causal models, allowing causal explanations of quantum processes – provided they admit a definite causal order, i.e. have an acyclic causal structure. This work addresses causally nonseparable processes and offers a causal perspective on them by extending quantum causal models to cyclic causal structures. Among other applications of the approach, it is shown that all unitarily extendible bipartite processes are causally separable, and that for unitary processes, causal nonseparability and cyclicity of their causal structure are equivalent.


2021 ◽  
Vol 11 (1) ◽  
pp. 232-240
Author(s):  
Alexander V. Khorkov ◽  
Shamil I. Galiev

Abstract A numerical method for investigating k-coverings of a convex bounded set with circles of two given radii is proposed. Cases with constraints on the distances between the covering circle centers are considered. An algorithm for finding an approximate number of such circles and the arrangement of their centers is described. For certain specific cases, approximate lower bounds of the density of the k-covering of the given domain are found. We use either 0–1 linear programming or general integer linear programming models. Numerical results demonstrating the effectiveness of the proposed methods are presented.
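A much-simplified version of the covering model can be written as a 0–1 linear program: choose the fewest circles of a single radius, from candidate centers, so that every point of a discretised domain lies in at least k of them. The sketch below is illustrative only (the paper's model uses two radii and distance constraints between centers, omitted here) and solves the problem with SciPy's `milp` (SciPy >= 1.9):

```python
import itertools
import numpy as np
from scipy.optimize import Bounds, LinearConstraint, milp

def min_circle_cover(points, centers, radius, k=1):
    """0-1 integer LP: choose the fewest circles (candidate centers, one
    radius) such that every point lies inside at least k chosen circles."""
    d = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
    A = (d <= radius).astype(float)       # A[i, j] = 1 iff circle j covers point i
    res = milp(c=np.ones(len(centers)),   # minimise the number of circles used
               constraints=LinearConstraint(A, lb=k),
               integrality=np.ones(len(centers)),
               bounds=Bounds(0, 1))
    return res.x.round().astype(int) if res.success else None

# 3x3 grid of points in the unit square; candidate centers on the same grid
grid = np.array(list(itertools.product([0.0, 0.5, 1.0], repeat=2)))
chosen = min_circle_cover(grid, grid, radius=0.75)
print(int(chosen.sum()))  # 1: the circle at (0.5, 0.5) alone covers the grid
```

Raising `k` to 2 forces redundant coverage, which is where the k-covering formulation departs from ordinary set cover.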


2019 ◽  
Author(s):  
Hudson Golino ◽  
Robert Glenn Moulder ◽  
Dingjing Shi ◽  
Alexander P. Christensen ◽  
Luis E. Garrido ◽  
...  

The accurate identification of the content and number of latent factors underlying multivariate data is an important endeavor in many areas of psychology and related fields. Recently, a new dimensionality assessment technique based on network psychometrics was proposed (Exploratory Graph Analysis, EGA), but a measure to check the fit of the dimensionality structure estimated via EGA to the data is still lacking. Although traditional factor-analytic fit measures are widespread, recent research has identified limitations to their effectiveness with categorical variables. Here, we propose three new fit measures (termed entropy fit indices) that combine information theory, quantum information theory and structural analysis: the Entropy Fit Index (EFI), EFI with von Neumann entropy (EFI.vn) and Total EFI.vn (TEFI.vn). The first can be estimated in complete datasets using Shannon entropy, while EFI.vn and TEFI.vn can be estimated in correlation matrices using quantum information metrics. We show, through several simulations, that TEFI.vn, EFI.vn and EFI are as accurate as or more accurate than traditional fit measures when identifying the number of simulated latent factors. However, in conditions where more factors are extracted than were simulated, only TEFI.vn presents very high accuracy. In addition, we provide an applied example demonstrating how the new fit measures can be used with a real-world dataset, using exploratory graph analysis.
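The von Neumann entropy ingredient of EFI.vn and TEFI.vn can be illustrated on a trace-normalised correlation matrix. The sketch below is not the paper's fit-index formula, only the underlying entropy computation: a correlation matrix dominated by one factor has a concentrated eigenvalue spectrum and hence low von Neumann entropy, while uncorrelated items give the maximal entropy log(p):

```python
import numpy as np

def von_neumann_entropy(R):
    """von Neumann entropy of a correlation matrix treated as a density
    matrix after trace-normalisation (natural log)."""
    rho = np.asarray(R) / np.trace(R)
    lam = np.linalg.eigvalsh(rho)
    lam = lam[lam > 1e-12]
    return float(-(lam * np.log(lam)).sum())

p = 4
R_factor = np.full((p, p), 0.8)          # one strong common factor
np.fill_diagonal(R_factor, 1.0)
R_noise = np.eye(p)                      # four mutually uncorrelated items

print(round(von_neumann_entropy(R_factor), 3))  # 0.588: concentrated spectrum
print(round(von_neumann_entropy(R_noise), 3))   # 1.386 = log(4): flat spectrum
```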


Author(s):  
Romain Brette

Abstract “Neural coding” is a popular metaphor in neuroscience, where objective properties of the world are communicated to the brain in the form of spikes. Here I argue that this metaphor is often inappropriate and misleading. First, when neurons are said to encode experimental parameters, the neural code depends on experimental details that are not carried by the coding variable (e.g., the spike count). Thus, the representational power of neural codes is much more limited than generally implied. Second, neural codes carry information only by reference to things with known meaning. In contrast, perceptual systems must build information from relations between sensory signals and actions, forming an internal model. Neural codes are inadequate for this purpose because they are unstructured and therefore unable to represent relations. Third, coding variables are observables tied to the temporality of experiments, whereas spikes are timed actions that mediate coupling in a distributed dynamical system. The coding metaphor tries to fit the dynamic, circular, and distributed causal structure of the brain into a linear chain of transformations between observables, but the two causal structures are incongruent. I conclude that the neural coding metaphor cannot provide a valid basis for theories of brain function, because it is incompatible with both the causal structure of the brain and the representational requirements of cognition.


2014 ◽  
Vol 6 (2) ◽  
pp. 46-62
Author(s):  
Nikolaos Ploskas ◽  
Nikolaos Samaras ◽  
Jason Papathanasiou

Linear programming algorithms have been widely incorporated in Decision Support Systems (DSS) for the solution of the problems these systems address. Yet the special structure of a given linear problem may be better exploited by different linear programming algorithms, or by different techniques within those algorithms. This paper proposes a web-based DSS that assists decision makers in solving linear programming problems with a variety of linear programming algorithms and techniques. Two linear programming algorithms have been included in the DSS: (i) the revised simplex algorithm and (ii) the exterior primal simplex algorithm. Furthermore, ten scaling techniques, five basis update methods and eight pivoting rules have been incorporated in the DSS. All linear programming algorithms and methods have been implemented using MATLAB and converted to Java classes using MATLAB Builder JA, while the web interface of the DSS has been designed using Java Server Pages.
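As a small, self-contained illustration of the kind of problem such a DSS solves (the example LP and the row-scaling step are generic textbook choices, not taken from the paper's MATLAB implementation), here is a linear program solved with SciPy, with and without a simple scaling technique:

```python
import numpy as np
from scipy.optimize import linprog

# maximise 3x + 5y subject to x <= 4, 2y <= 12, 3x + 2y <= 18, x, y >= 0
c = np.array([-3.0, -5.0])                 # linprog minimises, so negate
A_ub = np.array([[1.0, 0.0], [0.0, 2.0], [3.0, 2.0]])
b_ub = np.array([4.0, 12.0, 18.0])
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 2, method="highs")
print(res.x.round(3), -res.fun)            # [2. 6.] 36.0

# one of the simplest scaling techniques: divide each constraint row of A
# (and the matching entry of b) by the row's largest absolute entry
row_max = np.abs(A_ub).max(axis=1, keepdims=True)
res_s = linprog(c, A_ub=A_ub / row_max, b_ub=b_ub / row_max.ravel(),
                bounds=[(0, None)] * 2, method="highs")
print(-res_s.fun)                          # 36.0: same optimum after scaling
```

Scaling leaves the feasible region (and thus the optimum) unchanged while improving the numerical conditioning of the constraint matrix, which is why DSS implementations offer several such techniques.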


2019 ◽  
Vol 7 (2) ◽  
Author(s):  
Elie Wolfe ◽  
Robert W. Spekkens ◽  
Tobias Fritz

Abstract The problem of causal inference is to determine if a given probability distribution on observed variables is compatible with some causal structure. The difficult case is when the causal structure includes latent variables. Here we introduce the inflation technique for tackling this problem. An inflation of a causal structure is a new causal structure that can contain multiple copies of each of the original variables, but where the ancestry of each copy mirrors that of the original. To every distribution of the observed variables that is compatible with the original causal structure, we assign a family of marginal distributions on certain subsets of the copies that are compatible with the inflated causal structure. It follows that compatibility constraints for the inflation can be translated into compatibility constraints for the original causal structure. Even if the constraints at the level of the inflation are weak, such as observable statistical independences implied by disjoint causal ancestry, the translated constraints can be strong. We apply this method to derive new inequalities whose violation by a distribution witnesses that distribution’s incompatibility with the causal structure (of which Bell inequalities and Pearl’s instrumental inequality are prominent examples). We describe an algorithm for deriving all such inequalities for the original causal structure that follow from ancestral independences in the inflation. For three observed binary variables with pairwise common causes, it yields inequalities that are stronger in at least some aspects than those obtainable by existing methods. We also describe a more efficient algorithm that derives a weaker set of inequalities. Finally, we discuss which inflations are such that the inequalities obtained from them remain valid even for quantum (and post-quantum) generalizations of the notion of a causal model.
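Pearl's instrumental inequality, cited above as a prominent example of a compatibility-witnessing inequality, is simple enough to check numerically. The sketch below is illustrative code (the function name is hypothetical, and this is a check of one known inequality, not the inflation algorithm itself) testing whether an observed conditional distribution P(x, y | z) violates it:

```python
import numpy as np

def violates_instrumental(p_xy_given_z):
    """Pearl's instrumental inequality for the instrumental causal structure
    (Z -> X -> Y, with a latent common cause of X and Y):
    for every x, sum_y max_z P(x, y | z) <= 1 must hold.
    p_xy_given_z[z, x, y] = P(X=x, Y=y | Z=z)."""
    p = np.asarray(p_xy_given_z)
    lhs = p.max(axis=0).sum(axis=1)   # for each x: sum_y max_z P(x, y | z)
    return bool((lhs > 1 + 1e-9).any())

# Y copies Z while X ignores Z: no instrumental-structure explanation exists
bad = np.zeros((2, 2, 2)); bad[0, 0, 0] = 1.0; bad[1, 0, 1] = 1.0
# X copies Z and Y copies X: a valid causal explanation exists
good = np.zeros((2, 2, 2)); good[0, 0, 0] = 1.0; good[1, 1, 1] = 1.0
print(violates_instrumental(bad), violates_instrumental(good))  # True False
```

The violating example makes Y depend on Z other than through X, which is exactly what the instrumental structure forbids.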

