Young children’s sensitivity to priors in causal inference reflects their mechanistic knowledge

2020 ◽  
Author(s):  
David Sobel

This manuscript examines the relation between preschoolers’ ability to integrate base rates into their causal inferences about objects and their understanding that objects have stable properties that deterministically relate to their causal properties. Three- and 4-year-olds were tested on two measures of causal inference. In the first, children were shown a pattern of ambiguous data that could be resolved by appealing to base rate information. In the second, children’s mechanistic assumptions about the same causal system were tested, specifically to determine whether they recognized that an object’s causal efficacy was related to its possessing a stable internal property. Children who possessed this mechanism information were more likely to resolve the ambiguous information by appealing to base rates. The results are discussed in terms of rational models of children’s causal inference.
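
To make the rational-models framing concrete, here is a minimal Bayesian sketch in Python. It assumes a hypothetical detector that activates deterministically when at least one of two objects carries a hidden causal property; the deterministic rule and the specific base rates are our illustrative assumptions, not the paper's stimuli.

```python
# Minimal sketch: how a base rate (prior) disambiguates which of two objects,
# A or B, caused a detector to activate when both were placed on it together.
# Assumes a deterministic detector: it fires iff at least one object is causal.

def posterior_a_is_causal(base_rate: float) -> float:
    """P(A has the causal property | detector fired with A and B together)."""
    p = base_rate
    p_activation = 1 - (1 - p) ** 2   # at least one of the two objects is causal
    return p / p_activation           # P(A causal and activation) = p

for rate in (1 / 6, 1 / 2, 5 / 6):
    print(f"base rate {rate:.2f} -> P(A is causal | data) = {posterior_a_is_causal(rate):.3f}")
```

A learner who tracks the base rate should shift judgments about the ambiguous event accordingly: when the causal property is rare, the evidence only weakly implicates either object; when it is common, both objects are probably causal.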

2010 ◽  
Vol 13 (05) ◽  
pp. 607-619 ◽  
Author(s):  
DIEMO URBIG

Previous research investigating base rate neglect as a bias in human information processing has focused on isolated individuals. This study complements that research by showing that in settings of interacting individuals, especially settings of social learning in which individuals can learn from one another, base rate neglect can increase a population's welfare. The study further supports the argument that a population whose members are biased by neglecting base rates need not perform worse than a population of unbiased members. Adapting the model of social learning suggested by Bikhchandani, Hirshleifer and Welch (The Journal of Political Economy 100 (1992) 992–1026) and including base rates that differ from generic cases such as 50–50, conditions are identified under which underweighting base rate information increases the population's welfare. Base rate neglect can start a social learning process that would otherwise not have started, and thus it can generate positive externalities that improve a population's welfare.
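
The mechanism is easy to see in simulation. The following Python sketch is a stylized version of the Bikhchandani–Hirshleifer–Welch sequential-learning model, not the author's exact specification: agents act in turn, each combining a (possibly underweighted) prior with a private signal and the information revealed by predecessors' actions.

```python
import math
import random

def simulate(pi, q, lam, n_agents, state_good, rng):
    """One run of a stylized BHW-style cascade (our simplification).

    pi   -- prior probability that adoption is profitable (the base rate)
    q    -- private signal accuracy, 0.5 < q < 1
    lam  -- weight placed on the prior: 1.0 is Bayesian, 0.0 is full neglect
    """
    prior_llr = math.log(pi / (1 - pi))      # log-odds of the base rate
    sig_llr = math.log(q / (1 - q))          # log-likelihood ratio of one signal
    public_llr = 0.0                         # information revealed by past actions
    correct = 0
    for _ in range(n_agents):
        high = rng.random() < (q if state_good else 1 - q)
        s = sig_llr if high else -sig_llr
        base = lam * prior_llr + public_llr  # lam < 1 underweights the base rate
        if (base + s > 0) == state_good:
            correct += 1
        # If the decision would be identical under either signal, a cascade has
        # begun and the action reveals nothing; otherwise it reveals the signal.
        if (base + sig_llr > 0) != (base - sig_llr > 0):
            public_llr += s
    return correct / n_agents

rng = random.Random(0)
for lam in (1.0, 0.0):
    # Unfavourable base rate (pi = 0.2), but adoption is in fact profitable.
    welfare = sum(simulate(0.2, 0.7, lam, 50, True, rng) for _ in range(2000)) / 2000
    print(f"prior weight {lam}: mean fraction of correct actions = {welfare:.3f}")
```

With these illustrative numbers, fully Bayesian agents fall into an immediate rejection cascade and all choose wrongly, whereas agents who neglect the base rate act on their signals, reveal information to later agents, and typically converge on the correct action: the positive externality the abstract describes.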


Author(s):  
Yingxu Wang

Human thought, perception, reasoning, and problem solving are highly dependent on causal inferences. This paper presents a set of cognitive models for causation analyses and causal inferences. A taxonomy and mathematical models of causation are created. The framework and properties of causal inferences are elaborated. Methodologies for uncertain causal inferences are discussed. The theoretical foundation of humor and jokes as false causality is revealed. The formalization of causal inference methodologies enables machines to mimic complex human reasoning mechanisms in cognitive informatics, cognitive computing, and computational intelligence.


2019 ◽  
Vol 38 (5) ◽  
pp. 539-550
Author(s):  
Ash Puttaswamy ◽  
Anjelica Barone ◽  
Kathleen D. Viezel ◽  
John O. Willis ◽  
Ron Dumont

An area of particular importance when examining index scores on the Wechsler Intelligence Scale for Children–Fifth Edition (WISC-V) is the utilization and interpretation of critical values and base rates associated with differences between an individual’s subtest scaled score and the individual’s mean scaled score for an index. For the WISC-V, critical value and base rate information is provided for the core subtests contained within each of the primary indices. However, critical value and base rate information is not provided by the test publisher for subtests within the Quantitative Reasoning Index (QRI), Auditory Working Memory Index (AWMI), Nonverbal Index (NVI), General Ability Index (GAI), Cognitive Proficiency Index (CPI), Naming Speed Index (NSI), Symbol Translation Index (STI), and Storage and Retrieval Index (SRI). This study investigates and provides critical values and base rates for performance on the QRI, AWMI, NVI, GAI, CPI, NSI, STI, and SRI.
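
For readers unfamiliar with the mechanics, a critical value of this kind can be derived from the subtests' standard errors of measurement under the usual independent-errors assumption. The Python sketch below shows one textbook derivation; the reliabilities are invented for illustration, not WISC-V values, and the study's own procedure may differ.

```python
import math

def critical_value(sems, j, z=1.96):
    """Critical value (p < .05, two-tailed) for the difference between subtest
    j's scaled score and the mean of the k subtests in an index.

    Derivation: D = X_j - mean(X_1..X_k), so with independent measurement
    errors, Var(D) = (1 - 1/k)^2 * SEM_j^2 + (1/k^2) * sum_{i != j} SEM_i^2.
    """
    k = len(sems)
    var = (1 - 1 / k) ** 2 * sems[j] ** 2 \
        + sum(sems[i] ** 2 for i in range(k) if i != j) / k ** 2
    return z * math.sqrt(var)

# Illustrative numbers only: scaled scores have SD = 3; reliabilities invented.
sd = 3.0
reliabilities = [0.90, 0.88]          # e.g. two subtests forming one index
sems = [sd * math.sqrt(1 - r) for r in reliabilities]
for j in range(len(sems)):
    print(f"subtest {j}: critical value = {critical_value(sems, j):.2f}")
```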


2007 ◽  
Vol 15 (3) ◽  
pp. 199-236 ◽  
Author(s):  
Daniel E. Ho ◽  
Kosuke Imai ◽  
Gary King ◽  
Elizabeth A. Stuart

Although published works rarely include causal estimates from more than a few model specifications, authors usually choose the presented estimates from numerous trial runs readers never see. Given the often large variation in estimates across choices of control variables, functional forms, and other modeling assumptions, how can researchers ensure that the few estimates presented are accurate or representative? How do readers know that publications are not merely demonstrations that it is possible to find a specification that fits the author's favorite hypothesis? And how do we evaluate or even define statistical properties like unbiasedness or mean squared error when no unique model or estimator even exists? Matching methods, which offer the promise of causal inference with fewer assumptions, constitute one possible way forward, but crucial results in this fast-growing methodological literature are often grossly misinterpreted. We explain how to avoid these misinterpretations and propose a unified approach that makes it possible for researchers to preprocess data with matching (such as with the easy-to-use software we offer) and then to apply the best parametric techniques they would have used anyway. This procedure makes parametric models produce more accurate and considerably less model-dependent causal inferences.
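
The software the authors offer is the R package MatchIt. As a language-neutral illustration of the two-step workflow they propose, here is a Python sketch on synthetic data, using 1-nearest-neighbour propensity-score matching with replacement; the data-generating process and model choices are ours, purely for illustration.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

rng = np.random.default_rng(0)

# Synthetic observational data: confounder x drives both treatment and outcome.
n = 2000
x = rng.normal(size=n)
t = rng.binomial(1, 1 / (1 + np.exp(-(x - 0.5))))
y = 1.0 * t + 2.0 * x + rng.normal(size=n)   # true treatment effect = 1.0

# Step 1: preprocess with matching -- here, 1-nearest-neighbour on the
# propensity score, pairing each treated unit with its closest control
# (with replacement, no caliper, purely for brevity).
ps = LogisticRegression().fit(x.reshape(-1, 1), t).predict_proba(x.reshape(-1, 1))[:, 1]
treated = np.where(t == 1)[0]
controls = np.where(t == 0)[0]
nearest = np.abs(ps[controls][None, :] - ps[treated][:, None]).argmin(axis=1)
keep = np.concatenate([treated, controls[nearest]])

# Step 2: run the parametric model you would have used anyway, on matched data.
fit = LinearRegression().fit(np.column_stack([t[keep], x[keep]]), y[keep])
print(f"estimated treatment effect after matching: {fit.coef_[0]:.3f}")
```

The point of the procedure is that the second-stage parametric estimate becomes far less sensitive to the exact functional form, because matching has already balanced the covariates across treatment groups.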


Author(s):  
Peter Hedström

This article emphasizes various ways by which the study of mechanisms can make quantitative research more useful for causal inference. It concentrates on three aspects of the role of mechanisms in causal and statistical inference: how an understanding of the mechanisms at work can improve statistical inference by guiding the specification of the statistical models to be estimated; how mechanisms can strengthen causal inferences by improving our understanding of why individuals do what they do; and how mechanism-based models can strengthen causal inferences by showing why, acting as they do, individuals bring about the social outcomes they do. There has been a surge of interest in mechanism-based explanations, in political science as well as in sociology. Most of this work has been vital and valuable in that it has sought to clarify the distinctiveness of the approach and to apply it empirically.


2018 ◽  
Author(s):  
Stephanie Denison ◽  
Elizabeth Bonawitz ◽  
Alison Gopnik ◽  
Tom Griffiths

We present a proposal—“The Sampling Hypothesis”—suggesting that the variability in young children’s responses may be part of a rational strategy for inductive inference. In particular, we argue that young learners may be randomly sampling from the set of possible hypotheses that explain the observed data, producing different hypotheses with frequencies that reflect their subjective probability. We test the Sampling Hypothesis with four experiments on four- and five-year-olds. In these experiments, children saw a distribution of colored blocks and an event involving one of these blocks. In the first experiment, one block fell randomly and invisibly into a machine, and children made multiple guesses about the color of the block, either immediately or after a one-week delay. The distribution of guesses was consistent with the distribution of block colors, and the dependence between guesses decreased as a function of the time between guesses. In Experiments 2 and 3, the probability of different colors was systematically varied by condition. Preschoolers’ guesses tracked the probabilities of the colors, as should be the case if they are sampling from the set of possible explanatory hypotheses. Experiment 4 used a more complicated two-step process to randomly select a block and found that the distribution of children’s guesses matched the probabilities resulting from this process rather than the overall frequency of different colors. This suggests that children’s probability matching reflects sophisticated probabilistic inferences and is not merely the result of a naïve tabulation of frequencies. Taken together, the four experiments support the Sampling Hypothesis and the idea that there may be a rational explanation for the variability of children’s responses in domains like causal inference.
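
A toy simulation (our construction, not the authors' materials) shows what the Sampling Hypothesis predicts at the group level: children who sample hypotheses reproduce the probability distribution across guesses, while children who always maximize would all give the modal answer.

```python
import random
from collections import Counter

rng = random.Random(0)

# Hypothetical condition: a block is drawn at random from 3 red and 1 blue,
# so the posterior over the hidden block's colour is {red: 0.75, blue: 0.25}.
posterior = {"red": 0.75, "blue": 0.25}

def sampler_guess():
    """Guess by sampling a hypothesis in proportion to its posterior probability."""
    return rng.choices(list(posterior), weights=list(posterior.values()))[0]

def maximizer_guess():
    """Always guess the single most probable hypothesis."""
    return max(posterior, key=posterior.get)

n = 10_000
print("samplers:  ", Counter(sampler_guess() for _ in range(n)))
print("maximizers:", Counter(maximizer_guess() for _ in range(n)))
```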


2021 ◽  
Author(s):  
Wen Wei Loh ◽  
Dongning Ren

Mediation analysis is an essential tool for investigating how a treatment causally affects an outcome via intermediate variables. However, violations of the (often implicit) causal assumptions can severely threaten the validity of causal inferences of mediation analysis. Psychologists have recently started to raise such concerns, but the discussions have been limited to mediation analysis with a single mediator. In this article, we examine the causal assumptions when there are multiple possible mediators. We pay particular attention to the practice of exploring mediated effects along various paths linking several mediators. Substantive conclusions using such methods are predicated on stringent assumptions about the underlying causal structure that can be indefensible in practice. Therefore, we recommend that researchers shift focus to mediator-specific indirect effects using a recently proposed framework of interventional (in)direct effects. A vital benefit of this approach is that valid causal inference of mediation analysis with multiple mediators does not necessitate correctly assuming the underlying causal structure among the mediators. Finally, we provide a practical guide with suggestions to improve the research practice of mediation analysis at each study stage. We hope this article will encourage explication, justification, and reflection of the causal assumptions underpinning mediation analysis to improve the validity of causal inferences in psychology research.
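
To give a flavour of the recommended framework, here is a minimal numerical sketch of an interventional indirect effect with two mediators, using linear models and Monte Carlo draws. It is our simplified illustration, not the estimator from the cited framework: residual standard deviations are treated as known, and all coefficients are invented.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic data with two mediators whose causal ordering the analyst does not
# know (the situation the interventional approach is designed for).
n = 5000
a = rng.binomial(1, 0.5, n)
m1 = 0.8 * a + rng.normal(size=n)
m2 = 0.5 * a + 0.4 * m1 + rng.normal(size=n)
y = 1.0 * a + 0.7 * m1 + 0.6 * m2 + rng.normal(size=n)

def fit_linear(X, t):
    """Ordinary least squares; returns coefficients, intercept first."""
    X1 = np.column_stack([np.ones(len(X)), X])
    return np.linalg.lstsq(X1, t, rcond=None)[0]

b1 = fit_linear(a.reshape(-1, 1), m1)               # M1 ~ A
b2 = fit_linear(a.reshape(-1, 1), m2)               # M2 ~ A (marginal over M1)
by = fit_linear(np.column_stack([a, m1, m2]), y)    # Y ~ A + M1 + M2

def mean_y(a_direct, a_m1, a_m2, draws=200_000):
    """Monte Carlo E[Y] with each mediator drawn from its *marginal*
    distribution under a chosen treatment level (residual SDs taken as
    known and equal to 1 purely for brevity)."""
    m1_draw = b1[0] + b1[1] * a_m1 + rng.normal(size=draws)
    m2_draw = b2[0] + b2[1] * a_m2 + rng.normal(size=draws)
    return np.mean(by[0] + by[1] * a_direct + by[2] * m1_draw + by[3] * m2_draw)

# Interventional indirect effect via M1: shift only M1's distribution
# (A = 1 -> 0) while holding M2's distribution and the direct path fixed.
iie_m1 = mean_y(1, 1, 1) - mean_y(1, 0, 1)
print(f"interventional indirect effect via M1 ~ {iie_m1:.3f}")  # approx 0.7 * 0.8
```

Notice that nothing in the estimation step requires knowing whether M1 causes M2 or vice versa, which is the vital benefit the abstract highlights.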


2018 ◽  
Vol 5 (2) ◽  
Author(s):  
Benjamin A. Motz ◽  
Paulo F. Carvalho ◽  
Joshua R. De Leeuw ◽  
Robert L. Goldstone

To identify the ways teachers and educational systems can improve learning, researchers need to make causal inferences. Analyses of existing datasets play an important role in detecting causal patterns, but conducting experiments also plays an indispensable role in this research. In this article, we advocate for experiments to be embedded in real educational contexts, allowing researchers to test whether interventions such as a learning activity, new technology, or advising strategy elicit reliable improvements in authentic student behaviours and educational outcomes. Embedded experiments, wherein theoretically relevant variables are systematically manipulated in real learning contexts, carry strong benefits for making causal inferences, particularly when allied with the data-rich resources of contemporary e-learning environments. Toward this goal, we offer a field guide to embedded experimentation, reviewing experimental design choices, addressing ethical concerns, discussing the importance of involving teachers, and reviewing how interventions can be deployed in a variety of contexts, at a range of scales. Causal inference is a critical component of a field that aims to improve student learning; including experimentation alongside analyses of existing data in learning analytics is the most compelling way to test causal claims.


2007 ◽  
Vol 30 (3) ◽  
pp. 262-263
Author(s):  
Edmund Fantino ◽  
Stephanie Stolarz-Fantino

We present evidence supporting the target article’s assertion that while the presentation of base-rate information in a natural frequency format can be helpful in enhancing sensitivity to base rates, method of presentation is not a panacea. Indeed, we review studies demonstrating that when subjects directly experience base rates as natural frequencies in a trial-by-trial setting, they evince large base-rate neglect.
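
For context, the contrast at issue can be shown with the classic diagnostic-testing problem (our numbers, the standard textbook example, not data from the target article or the studies reviewed):

```python
# Probability format: base rate P(disease) = .01, hit rate P(+ | disease) = .80,
# false-positive rate P(+ | healthy) = .096.
p_d, p_pos_d, p_pos_h = 0.01, 0.80, 0.096
bayes = p_d * p_pos_d / (p_d * p_pos_d + (1 - p_d) * p_pos_h)
print(f"P(disease | positive) via Bayes' rule: {bayes:.3f}")

# The same information as natural frequencies: of 1000 people, 10 have the
# disease and 8 of them test positive; of the 990 healthy, about 95 also test
# positive. The answer is then a simple ratio of counts: 8 / (8 + 95).
print(f"natural-frequency answer: {8 / (8 + 95):.3f}")
```

The natural frequency format makes the base rate easier to use because the answer reduces to a ratio of observed counts, yet, as the commentary argues, experiencing such counts trial by trial does not by itself guarantee that people integrate them.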

