Learning Causal Models with Conditional Causal Probabilities from Data

Author(s):  
Koichi Yamada ◽  

We propose a way to learn probabilistic causal models using conditional causal probabilities (CCPs) to represent the uncertainty of causalities. The CCP, devised by Peng and Reggia, is the probability that a cause actually causes an effect, given that the cause is present. The main advantages of using CCPs are that they represent the exact probabilities of causation that people recognize mentally, and that the number of probabilities used in the causal model is far smaller than the number of conditional probabilities over all combinations of possible causes. Peng and Reggia therefore assumed that CCPs are given subjectively by human experts, and did not discuss how to calculate them from data when a dataset is available. We address this problem, starting from a discussion of properties of data frequently encountered in practical problems, and show that the prior probabilities that should be learned may differ from those derived by counting data. We then propose how to learn prior probabilities and CCPs from data, evaluate the proposed method through numerical experiments, and analyze the results to show that the precision of the learned models is satisfactory.
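As a minimal sketch of how CCPs combine (not the authors' code, and with hypothetical numbers): in a Peng–Reggia style model, each present cause carries a CCP, and assuming the causation events are independent, the probability of the effect follows a noisy-OR combination.

```python
# Sketch of combining conditional causal probabilities (CCPs).
# Each present cause d_i has a CCP c_i = P(d_i actually causes e | d_i).
# Assuming independent causation events, the effect probability is a
# noisy-OR:  P(e | present causes) = 1 - prod_i (1 - c_i)

def effect_probability(ccps_of_present_causes):
    """Noisy-OR combination of the CCPs of the causes that are present."""
    p_no_effect = 1.0
    for c in ccps_of_present_causes:
        p_no_effect *= (1.0 - c)
    return 1.0 - p_no_effect

# Two present causes with illustrative CCPs 0.8 and 0.5:
p = effect_probability([0.8, 0.5])
print(round(p, 2))  # 0.9
```

Note how few parameters this needs: one CCP per cause–effect link, rather than a conditional probability for every combination of causes.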

Author(s):  
David A. Lagnado ◽  
Tobias Gerstenberg

Causation looms large in legal and moral reasoning. People construct causal models of the social and physical world to understand what has happened, how and why, and to allocate responsibility and blame. This chapter explores people’s common-sense notion of causation, and shows how it underpins moral and legal judgments. As a guiding framework it uses the causal model framework (Pearl, 2000) rooted in structural models and counterfactuals, and shows how it can resolve many of the problems that beset standard but-for analyses. It argues that legal concepts of causation are closely related to everyday causal reasoning, and both are tailored to the practical concerns of responsibility attribution. Causal models are also critical when people evaluate evidence, both in terms of the stories they tell to make sense of evidence, and the methods they use to assess its credibility and reliability.


Author(s):  
Mike Oaksford ◽  
Nick Chater

There are deep intuitions that the meaning of conditional statements relates to probabilistic law-like dependencies. In this chapter it is argued that these intuitions can be captured by representing conditionals in causal Bayes nets (CBNs) and that this conjecture is theoretically productive. The proposal is borne out in a variety of results. First, causal considerations can provide a unified account of abstract and causal conditional reasoning. Second, a recent model (Fernbach & Erb, 2013) can be extended to the explicit causal conditional reasoning paradigm (Byrne, 1989), making some novel predictions along the way. Third, when embedded in the broader cognitive system involved in reasoning, causal model theory can provide a novel explanation for apparent violations of the Markov condition in causal conditional reasoning (Ali et al., 2011). Alternative explanations of this evidence are also considered (see Rehder, 2014a). While further work is required, the chapter concludes that the conjecture that conditional reasoning is underpinned by representations and processes similar to CBNs is indeed a productive line of research.


2021 ◽  
Author(s):  
Kun Huo ◽  
Khim Kelly ◽  
Alan Webb

Firms often use causal models to align decision-making with strategic objectives. However, firms often operate in changing environments such that an accurate causal model can become inaccurate. Prior research has not examined the consequences a change in the accuracy of causal models may have for managerial learning. Using an experiment, we predict and find that providing an accurate causal model positively affects managerial learning, and this positive effect is not reduced by encouraging a hypothesis-testing mindset (HTM). However, when the model subsequently becomes inaccurate, we predict and observe that providing a causal model alone negatively affects managerial learning, although this effect is partially mitigated by additionally encouraging an HTM. Our results can inform designers of control systems about the potential implications of providing a causal model when its accuracy changes over time and demonstrate how simple encouragement of an HTM moderates the effects of providing a causal model.


1970 ◽  
Vol 64 (4) ◽  
pp. 1099-1111 ◽  
Author(s):  
H. M. Blalock

The purpose of this paper is to examine several specific kinds of nonrandom measurement errors and to note their implications for causal model construction. In doing so, my secondary purpose is to sensitize the reader to the crucial importance of making one's assumptions fully explicit and to the advantages of a causal models approach to measurement errors. It is well known that the presence of even random measurement errors can produce serious distortions in our estimates, particularly whenever one is attempting to assess the relative contributions of intercorrelated independent variables. Nevertheless, common practice is to utilize what Duncan refers to as the naive approach to the presence of measurement errors: that of acknowledging the existence of measurement errors, and even discussing possible sources of such errors, while completely ignoring them in the analysis stage of the research process. That is, measured values are inserted directly into causal models as though they adequately reflect the true values. It can easily be shown that such a practice, while leading to important simplifications, can readily lead one astray. In particular, it may blind the analyst to searching for alternative plausible explanations that allow for measurement error. There have been a number of very recent papers in the sociological literature, some of which will be briefly summarized since they may not be familiar to the reader. For the most part, these papers have dealt rather systematically with ways to handle random measurement errors, whereas nonrandom errors have been dealt with only incidentally and much less carefully.


Quantum ◽  
2020 ◽  
Vol 4 ◽  
pp. 240 ◽  
Author(s):  
Philipp Strasberg

Operational quantum stochastic thermodynamics is a recently proposed theory to study the thermodynamics of open systems based on the rigorous notion of a quantum stochastic process or quantum causal model. There, a stochastic trajectory is defined solely in terms of experimentally accessible measurement results, which serve as the basis to define the corresponding thermodynamic quantities. In contrast to this observer-dependent point of view, a `black box', which evolves unitarily and can simulate a quantum causal model, is constructed here. The quantum thermodynamics of this big isolated system can then be studied using widely accepted arguments from statistical mechanics. It is shown that the resulting definitions of internal energy, heat, work, and entropy have a natural extension to the trajectory level. The canonical choice coincides with the proclaimed definitions of operational quantum stochastic thermodynamics, thereby providing strong support in favour of that novel framework. However, a few remaining ambiguities in the definition of stochastic work and heat are also discovered, and in light of these findings some other proposals are reconsidered. Finally, it is demonstrated that the first and second law hold for an even wider range of scenarios than previously thought, covering a large class of quantum causal models based solely on a single assumption about the initial system-bath state.


Author(s):  
Seiki Ubukata ◽  
Hiroki Kato ◽  
Akira Notsu ◽  
Katsuhiro Honda

Representing the positive, possible, and boundary regions of clusters, rough set-based C-means clustering methods, such as generalized rough C-means (GRCM) and rough set C-means (RSCM), are promising for analyzing vague cluster shapes and realizing reliable classification. In this study, we consider rough set-based clustering approaches that utilize probabilistic memberships as variants of GRCM and RSCM, including π generalized rough C-means (πGRCM), π rough set C-means (πRSCM), and rough membership C-means (RMCM). πGRCM and πRSCM assign equal probabilities of cluster belonging according to Laplace’s principle of indifference, whereas RMCM assigns the probabilities according to rough memberships, which represent conditional probabilities based on the object’s neighborhood derived from a binary relation. In addition, we discuss the theoretical validity of our RMCM approach and compare it with other methods considered in this study. Furthermore, we conducted numerical experiments for evaluating the classification performances of the abovementioned methods. Based on our experimental results, the methods were found to be effective.
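To make the assignment step concrete, here is a minimal sketch (not the authors' implementation; threshold and data are illustrative) of how a rough C-means style method splits objects between a lower approximation and a boundary region, with the π-variants assigning equal probabilistic memberships 1/|candidates| to boundary objects by Laplace's principle of indifference.

```python
import numpy as np

def assign(x, prototypes, eps=0.5):
    """Rough-set style assignment sketch.

    An object goes to the nearest prototype's lower approximation; if other
    prototypes are nearly as close (within eps), it instead falls into the
    boundary of all candidate clusters, each receiving equal probability.
    """
    d = np.linalg.norm(prototypes - x, axis=1)  # distances to prototypes
    nearest = d.min()
    candidates = np.where(d <= nearest + eps)[0]
    # Equal memberships over candidates; a single candidate gets 1.0.
    return {int(c): 1.0 / len(candidates) for c in candidates}

protos = np.array([[0.0, 0.0], [4.0, 0.0]])
print(assign(np.array([0.5, 0.0]), protos))  # unambiguous: {0: 1.0}
print(assign(np.array([2.1, 0.0]), protos))  # boundary: {0: 0.5, 1: 0.5}
```

RMCM differs from this π-style sketch in that boundary probabilities are not uniform but come from rough memberships, i.e., conditional probabilities computed from the object's neighborhood under a binary relation.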


Author(s):  
Gary Goertz ◽  
James Mahoney

This chapter compares two causal models used in qualitative and quantitative research: an additive-linear model and a set-theoretic model. The additive-linear causal model is common in the statistical culture, whereas the set-theoretic model is often used (implicitly) in the qualitative culture. After providing an overview of the two causal models, the chapter considers the main differences between them. It then gives an example to illustrate how a set-theoretic causal model is implicitly used in the within-case analysis of a specific outcome. It also explains how the form of causal complexity varies across the quantitative and qualitative paradigms. Finally, it examines another difference between the causal models used in quantitative and qualitative research, one that revolves around the concept of “equifinality” or “multiple causation.” The chapter suggests that while the two causal models are quite different, neither is a priori correct.


2016 ◽  
Vol 19 (4) ◽  
pp. 488-517 ◽  
Author(s):  
Cory P. Haberman

This study used observations of crime strategy meetings and interviews with police commanders to “get inside the black box of hot spots policing.” The findings focus on what the studied police commanders believed they were doing and why they believed those tactics would be effective during hot spots policing implemented under non-experimental conditions. An example causal model for the effectiveness of hot spots policing that emerged from the data is presented. While the commanders’ views aligned with commonly used policing tactics and crime control theories, their underlying theoretical rationale is complex. The presented model provides one causal model that could be tested in future hot spots policing evaluations, and a discussion is presented of how the study’s methodology can be applied in other jurisdictions to define localized causal models and improve hot spots policing evaluations.


Erkenntnis ◽  
2021 ◽  
Author(s):  
Naftali Weinberger

Causal representations are distinguished from non-causal ones by their ability to predict the results of interventions. This widely-accepted view suggests the following adequacy condition for causal models: a causal model is adequate only if it does not contain variables regarding which it makes systematically false predictions about the results of interventions. Here I argue that this condition should be rejected. For a class of equilibrium systems, there will be two incompatible causal models depending on whether one intervenes upon a certain variable to fix its value, or ‘lets go’ of the variable and allows it to vary. The latter model will fail to predict the result of interventions on the let-go-of variable. I argue that there is no basis for preferring one of these models to the other, and thus that models failing to predict interventions on particular variables can be just as adequate as those making no such false predictions. This undermines a key argument (Dash, Caveats for Causal Reasoning with Equilibrium Models, PhD thesis, University of Pittsburgh, 2003) against relying upon causal models inferred from equilibrium data.


2020 ◽  
Vol 34 (03) ◽  
pp. 2493-2500
Author(s):  
Prashan Madumal ◽  
Tim Miller ◽  
Liz Sonenberg ◽  
Frank Vetere

Prominent theories in cognitive science propose that humans understand and represent the knowledge of the world through causal relationships. In making sense of the world, we build causal models in our mind to encode cause-effect relations of events and use these to explain why new events happen by referring to counterfactuals — things that did not happen. In this paper, we use causal models to derive causal explanations of the behaviour of model-free reinforcement learning agents. We present an approach that learns a structural causal model during reinforcement learning and encodes causal relationships between variables of interest. This model is then used to generate explanations of behaviour based on counterfactual analysis of the causal model. We computationally evaluate the model in 6 domains and measure performance and task prediction accuracy. We report on a study with 120 participants who observe agents playing a real-time strategy game (StarCraft II) and then receive explanations of the agents' behaviour. We investigate: 1) participants' understanding gained by explanations through task prediction; 2) explanation satisfaction; and 3) trust. Our results show that causal model explanations perform better on these measures compared to two other baseline explanation models.
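The counterfactual analysis described here can be sketched as follows. This is a hypothetical toy, not the authors' learned model: each variable is a deterministic function of its parents and the agent's action, and an explanation contrasts the factual outcome with the outcome under a counterfactual action.

```python
# Toy structural causal model for an RL-style domain (illustrative names).
# train_units and attack are action variables; army and win are derived.
def simulate(train_units, attack):
    army = train_units * 5           # army size is caused by training
    win = army >= 10 and attack      # winning needs a big enough army + attacking
    return win

factual = simulate(train_units=3, attack=True)         # the agent's actual choice
counterfactual = simulate(train_units=1, attack=True)  # "what if it had trained less?"
print(factual, counterfactual)  # True False

# A contrastive explanation reads off the differing causal chain:
# the agent trained units *because*, counterfactually, a smaller army
# would not have been sufficient to win.
```

The paper's approach learns the structural equations during training rather than hand-specifying them as above; the counterfactual query itself has the same contrastive shape.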

