Pseudocontingencies

Author(s):  
Klaus Fiedler ◽  
Florian Kutzner

In research on causal inference and in related paradigms (conditioning, cue learning, attribution), it has been traditionally taken for granted that the statistical contingency between cause and effect drives the cognitive inference process. However, while a contingency model implies a cognitive algorithm based on joint frequencies (i.e., the cell frequencies of a 2 x 2 contingency table), recent research on pseudocontingencies (PCs) suggests a different mental algorithm that is driven by base rates (i.e., the marginal frequencies of a 2 x 2 table). When the base rates of two variables are skewed in the same way, a positive contingency is inferred. In contrast, a negative contingency is inferred when base rates are skewed in opposite directions. The chapter describes PCs as a resilient cognitive illusion, as a proxy for inferring contingencies in the absence of solid information, and as a smart heuristic that affords valid inferences most of the time.
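The base-rate algorithm described above can be sketched in a few lines of illustrative code: given only the marginal frequencies of a 2 x 2 table, a PC reasoner infers a positive contingency when both base rates are skewed in the same direction and a negative one when they are skewed oppositely. The function name and the 0.5 skew threshold are assumptions for illustration, not part of the original model.

```python
# Illustrative sketch of the pseudocontingency (PC) heuristic: infer the
# sign of a contingency from marginal base rates alone, ignoring the
# joint (cell) frequencies a full contingency model would require.

def pc_heuristic(base_rate_x: float, base_rate_y: float) -> int:
    """Return +1, -1, or 0: the contingency sign a PC reasoner would infer."""
    skew_x = base_rate_x - 0.5  # positive if X is mostly "present"
    skew_y = base_rate_y - 0.5
    if skew_x * skew_y > 0:     # both base rates skewed the same way
        return +1
    if skew_x * skew_y < 0:     # base rates skewed in opposite directions
        return -1
    return 0                    # at least one base rate unskewed: no inference

# Both variables frequent -> a positive contingency is inferred
print(pc_heuristic(0.8, 0.7))   # 1
# One frequent, one rare -> a negative contingency is inferred
print(pc_heuristic(0.8, 0.2))   # -1
```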

1987 ◽  
Vol 21 (4) ◽  
pp. 507-513 ◽  
Author(s):  
Wayne Hall

This paper provides a simplified method for evaluating the evidence in favour of a causal claim. It analyses the evidence bearing upon such a claim in terms of two questions: Do the putative cause and effect covary, and can alternative non-causal explanations of the relationship be ruled out? The different research designs for assessing covariation are outlined, as are the ways in which these designs permit a researcher to decide between alternative explanations of the relationship.


2017 ◽  
Author(s):  
Luigi Acerbi ◽  
Kalpana Dokka ◽  
Dora E. Angelaki ◽  
Wei Ji Ma

Abstract The precision of multisensory heading perception improves when visual and vestibular cues arising from the same cause, namely motion of the observer through a stationary environment, are integrated. Thus, in order to determine how the cues should be processed, the brain must infer the causal relationship underlying the multisensory cues. In heading perception, however, it is unclear whether observers follow the Bayesian strategy, a simpler non-Bayesian heuristic, or even perform causal inference at all. We developed an efficient and robust computational framework for Bayesian model comparison of causal inference strategies, which incorporates a number of alternative assumptions about the observers. With this framework, we investigated whether human observers’ performance in an explicit cause attribution task and an implicit heading discrimination task can be modeled as a causal inference process. In the explicit inference task, all subjects accounted for cue disparity when reporting judgments of common cause, although not necessarily all in a Bayesian fashion. By contrast, but in agreement with previous findings, data from the heading discrimination task alone could not rule out that several of the same observers were adopting a forced-fusion strategy, whereby cues are integrated regardless of disparity. Only when we combined evidence from both tasks were we able to rule out forced fusion in the heading discrimination task. Crucially, findings were robust across a number of variants of models and analyses. Our results demonstrate that our proposed computational framework allows researchers to ask complex questions within a rigorous Bayesian framework that accounts for parameter and model uncertainty.
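The core of a causal inference strategy of this kind can be illustrated with a standard common-cause-versus-separate-causes computation over two noisy cues. This is a generic textbook sketch, not the authors' framework: the Gaussian priors, the noise parameters, and the function name are all illustrative assumptions.

```python
import math

# Minimal sketch of Bayesian causal inference over two noisy cues
# (e.g., visual and vestibular heading estimates). Under a common cause
# (C = 1) both cues share one source; under separate causes (C = 2) each
# cue has its own independent source. Sources are assumed ~ N(0, sigma_prior^2).

def posterior_common_cause(x_vis, x_vest, sigma_vis, sigma_vest,
                           sigma_prior=10.0, p_common=0.5):
    # Likelihood under C = 1: one shared source, integrated out analytically.
    var1 = (sigma_vis**2 * sigma_vest**2
            + sigma_vis**2 * sigma_prior**2
            + sigma_vest**2 * sigma_prior**2)
    like_c1 = math.exp(-0.5 * ((x_vis - x_vest)**2 * sigma_prior**2
                               + x_vis**2 * sigma_vest**2
                               + x_vest**2 * sigma_vis**2) / var1) \
              / (2 * math.pi * math.sqrt(var1))
    # Likelihood under C = 2: two independent sources, each integrated out.
    var_v = sigma_vis**2 + sigma_prior**2
    var_b = sigma_vest**2 + sigma_prior**2
    like_c2 = math.exp(-0.5 * (x_vis**2 / var_v + x_vest**2 / var_b)) \
              / (2 * math.pi * math.sqrt(var_v * var_b))
    # Bayes' rule over the two causal structures.
    return p_common * like_c1 / (p_common * like_c1 + (1 - p_common) * like_c2)
```

With this posterior in hand, an ideal observer integrates the cues when a common cause is probable and segregates them otherwise; a forced-fusion observer, by contrast, integrates regardless of the disparity `x_vis - x_vest`.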


2020 ◽  
Author(s):  
David Sobel

This manuscript examines the relation between preschoolers’ ability to integrate base rates into their causal inferences about objects and their understanding that objects have stable properties that deterministically relate to their causal properties. Three- and 4-year-olds were tested on two measures of causal inference. In the first, children were shown a pattern of ambiguous data that could be resolved by appealing to base rate information. In the second, children’s mechanistic assumptions about the same causal system were tested, specifically to determine whether they recognized that an object’s causal efficacy was related to its possessing a stable internal property. Children who possessed this mechanism information were more likely to resolve the ambiguous information by appealing to base rates. The results are discussed in terms of rational models of children’s causal inference.


2014 ◽  
Vol 281 (1786) ◽  
pp. 20140751 ◽  
Author(s):  
Mark T. Elliott ◽  
Alan M. Wing ◽  
Andrew E. Welchman

Many everyday skilled actions depend on moving in time with signals that are embedded in complex auditory streams (e.g. musical performance, dancing or simply holding a conversation). Such behaviour is apparently effortless; however, it is not known how humans combine auditory signals to support movement production and coordination. Here, we test how participants synchronize their movements when there are potentially conflicting auditory targets to guide their actions. Participants tapped their fingers in time with two simultaneously presented metronomes of equal tempo, but differing in phase and temporal regularity. Synchronization therefore depended on integrating the two timing cues into a single-event estimate or treating the cues as independent and thereby selecting one signal over the other. We show that a Bayesian inference process explains the situations in which participants choose to integrate or separate signals, and predicts motor timing errors. Simulations of this causal inference process demonstrate that this model provides a better description of the data than other plausible models. Our findings suggest that humans exploit a Bayesian inference process to control movement timing in situations where the origin of auditory signals needs to be resolved.
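The integrate-versus-select decision described above can be made concrete with a small sketch: if the two metronome cues are attributed to one source, the optimal timing estimate is a reliability-weighted average; otherwise the more regular (lower-variance) cue is followed alone. The function name and parameterization are illustrative assumptions, not the authors' model.

```python
# Sketch of tap-timing estimation from two metronome cues t1 and t2
# with timing variances var1 and var2.

def timing_estimate(t1, t2, var1, var2, same_source: bool) -> float:
    if same_source:
        # Precision-weighted fusion: weights proportional to 1/variance.
        w1 = (1 / var1) / (1 / var1 + 1 / var2)
        return w1 * t1 + (1 - w1) * t2
    # Segregate: rely on the more regular (lower-variance) metronome.
    return t1 if var1 <= var2 else t2

# Equal reliability -> fused estimate midway between the cues
print(timing_estimate(0.0, 100.0, 1.0, 1.0, same_source=True))   # 50.0
# Segregation -> the more regular cue wins outright
print(timing_estimate(0.0, 100.0, 4.0, 1.0, same_source=False))  # 100.0
```

In a full causal inference model, `same_source` would itself be inferred from the phase disparity and regularity of the two metronomes rather than supplied as an argument.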


2003 ◽  
Vol 56 (6) ◽  
pp. 977-1007 ◽  
Author(s):  
José C. Perales ◽  
David R. Shanks

The power PC theory (Cheng, 1997) is a normative account of causal inference, which predicts that causal judgements are based on the power p of a potential cause, where p is the cause-effect contingency normalized by the base rate of the effect. In three experiments we demonstrate that both cause-effect contingency and effect base-rate independently affect estimates in causal learning tasks. In Experiment 1, causal strength judgements were directly related to power p in a task in which the effect base-rate was manipulated across two positive and two negative contingency conditions. In Experiments 2 and 3 contingency manipulations affected causal estimates in several situations in which power p was held constant, contrary to the power PC theory's predictions. This latter effect cannot be explained by participants’ conflation of reliability and causal strength, as Experiment 3 demonstrated independence of causal judgements and confidence. From a descriptive point of view, the data are compatible with Pearce's (1987) model, as well as with several other judgement rules, but not with the Rescorla-Wagner (Rescorla & Wagner, 1972) or power PC models.
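The two quantities the experiments dissociate can be computed directly from a 2 x 2 table. Assuming the standard cell labels (a: cause and effect both present; b: cause present, effect absent; c: cause absent, effect present; d: both absent), contingency ΔP and generative causal power follow Cheng's (1997) definitions; the function name is ours.

```python
# Contingency (delta P) and generative causal power p from 2 x 2 cell counts.

def contingency_and_power(a, b, c, d):
    p_e_given_c = a / (a + b)    # P(effect | cause)
    p_e_given_nc = c / (c + d)   # P(effect | no cause): the effect base rate
    delta_p = p_e_given_c - p_e_given_nc         # cause-effect contingency
    power = delta_p / (1 - p_e_given_nc)         # generative power p (Cheng, 1997)
    return delta_p, power

# delta_p = 0.75 - 0.50 = 0.25; power = 0.25 / 0.50 = 0.5. Holding delta_p
# fixed while varying the base rate P(effect | no cause) changes power,
# which is the manipulation the experiments exploit.
print(contingency_and_power(15, 5, 10, 10))  # (0.25, 0.5)
```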


2016 ◽  
Vol 40 (8/9) ◽  
pp. 691-718 ◽  
Author(s):  
Elisabeth E. Bennett ◽  
Rochell R. McWhorter

Purpose
The purpose of this paper is to explore the role of qualitative research in causality, with particular emphasis on process causality. In one paper it is not possible to discuss all the issues of causality, but the aim is to provide useful ways of thinking about causality and qualitative research. Specifically, a brief overview of the regularity theory of causation is provided; qualitative research characteristics and the ontological and epistemological views that serve as a potential conceptual frame for resolving some tensions between quantitative and qualitative work are discussed; and causal processes are explored. This paper offers a definition and a model of process causality and then presents findings from an exploratory study that advanced the discussion beyond the conceptual frame.

Design/methodology/approach
This paper first conceptually frames process causality within qualitative research and then discusses results from an exploratory study that involved reviewing literature and interviewing expert researchers. The exploratory study involved analyzing multiple years of literature in two top human resource development (HRD) journals as well as exploratory expert interviews. The study was guided by the research question: How might qualitative research inform causal inferences in HRD? It used a basic qualitative approach that sought insight through inductive analysis.

Findings
The exploratory study found that triangulation, context, thick description and process research questions are important elements of qualitative studies that can improve research involving causal relationships. Specifically, qualitative studies provide both depth of data collection and descriptive write-up, yielding clues to cause-and-effect relationships that support or refute theory.

Research limitations/implications
A major conclusion of this study is that qualitative research plays a critical role in causal inference, albeit an understated one, when one takes an enlarged philosophical view of causality. Equating causality solely with the variance theory associated with quantitative research leaves causal processes locked in a metaphoric black box between cause and effect, whereas qualitative research opens up the processes and mechanisms contained within the box.

Originality/value
This paper reframes the discussion about causality to include the logic of both quantitative and qualitative studies, demonstrating a more holistic view of causality and the value of qualitative research for causal inference. Process causality in qualitative research is added to the mix of techniques and theories found in the larger discussion of causality in HRD.


2021 ◽  
Author(s):  
Arturo Rodriguez ◽  
Jose Terrazas ◽  
Richard Adansi ◽  
V. M. Krushnarao Kotteda ◽  
Jorge A. Munoz ◽  
...  

Abstract By understanding the transition from laminar to turbulent flow, known as Boundary-Layer Transition (BLT), we can design better state-of-the-art vehicles for defense and space applications and mitigate the limitations imposed by current high-speed temperature conditions. BLT is driven by fluid flow disturbances created by geometric parameters and flow conditions, such as surface roughness, increased velocity, and high-pressure fluctuations, to name a few. These disturbances lead to the development of turbulent spots and differential heating. Historically, the Reynolds number has been used to predict whether a system will develop turbulent flow. However, it has been known for decades that it is not always reliable and cannot indicate where BLT will occur: some experiments present scenarios where the flow is laminar at a high Reynolds number, and vice versa. BLT can be predicted from physical experiments, but these are expensive and limited in the configurations they can realize. Despite many community efforts and successes, no general computational solution exists that can simulate different flows and vehicle types while fully incorporating BLT, and a considerable number of parameters affect BLT. Therefore, we use causal inference to predict BLT through cause-and-effect analysis of multivariate data obtained from BLT studies. Data generated using high-fidelity Computational Fluid Dynamics (CFD) with resolved Large-Eddy Simulation (LES) scales will be analyzed for turbulence intensity by decomposing velocity into mean and fluctuating components. In this paper, we discuss approaches for predicting BLT scenarios using cause-and-effect relationships driven by causal inference analysis.


2018 ◽  
Author(s):  
Santiago Herce Castañón ◽  
Dan Bang ◽  
Rani Moran ◽  
Jacqueline Ding ◽  
Tobias Egner ◽  
...  

Abstract Humans typically make near-optimal sensorimotor judgments but show systematic biases when making more cognitive judgments. Here we test the hypothesis that, while humans are sensitive to the noise present during early sensory processing, the “optimality gap” arises because they are blind to noise introduced by later cognitive integration of variable or discordant pieces of information. In six psychophysical experiments, human observers judged the average orientation of an array of contrast gratings. We varied the stimulus contrast (encoding noise) and orientation variability (integration noise) of the array. Participants adapted near-optimally to changes in encoding noise, but, under increased integration noise, displayed a range of suboptimal behaviours: they ignored stimulus base rates, reported excessive confidence in their choices, and refrained from opting out of objectively difficult trials. These overconfident behaviours were captured by a Bayesian model which is blind to integration noise. Our study provides a computationally grounded explanation of suboptimal cognitive inferences.


2021 ◽  
Vol 2 (1) ◽  
Author(s):  
Son Phuc Nguyen

Causal inference has been of interest in economics for many decades, with a great deal of notable work such as Granger causality, which directly led to a Nobel Prize in Economics. The question of cause and effect is of paramount importance in making high-stakes decisions such as economic policies. Moreover, over the last ten years causal inference in artificial intelligence has gradually become mainstream, with remarkable work such as the do-calculus by Judea Pearl. In this paper, we discuss some fundamental ideas in causal inference.


2019 ◽  
Vol 42 ◽  
Author(s):  
Roberto A. Gulli

Abstract The long-enduring coding metaphor is deemed problematic because it imbues correlational evidence with causal power. In neuroscience, most research is correlational or conditionally correlational; this research, in aggregate, informs causal inference. Rather than prescribing semantics used in correlational studies, it would be useful for neuroscientists to focus on a constructive syntax to guide principled causal inference.

