On the generality and cognitive basis of base-rate neglect

2021
Author(s): Elina Stengård, Peter Juslin, Ulrike Hahn, Ronald van den Berg

Abstract: Base-rate neglect refers to people's apparent tendency to underweight or even ignore base-rate information when estimating posterior probabilities for events, such as the probability that a person with a positive cancer-test outcome actually has cancer. While many studies have replicated the effect, there has been little variation in the structure of the reasoning problems used in those studies. In particular, most experiments have used extremely low base rates, high hit rates, and low false-alarm rates. As a result, it is unclear whether the effect is a general phenomenon in human probabilistic reasoning or an anomaly that applies only to a small subset of reasoning problems. Moreover, previous studies have focused on describing empirical patterns of the effect rather than on the underlying strategies. Here, we address these limitations by testing participants on a broader problem space and modelling their responses at the single-participant level. We find that the empirical patterns that have served as evidence for base-rate neglect generalize to the larger problem space. At the level of individuals, we find large variability in how sensitive participants are to base rates, but with two distinct groups: those who largely ignore base rates and those who account for them almost perfectly. This heterogeneity is reflected in the cognitive modelling results, which reveal that no single strategy best captures the data for all participants. The overall best model is a variant of the Bayesian model with overly conservative priors, closely followed by a linear-additive integration model. Surprisingly, we find very little evidence for previously proposed heuristic models. Altogether, our results suggest that the effect known as "base-rate neglect" generalizes to a large set of reasoning problems, but may need a reinterpretation in terms of the underlying cognitive mechanisms.
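As an illustration of the reasoning problem these studies use, the posterior in the cancer-test example follows directly from Bayes' rule applied to the base rate, hit rate, and false-alarm rate. The sketch below uses hypothetical parameter values, not the stimuli from the experiments.

```python
# Illustrative only: posterior probability for a screening-test problem,
# with hypothetical parameter values (not the study's stimuli).

def posterior(base_rate, hit_rate, false_alarm_rate):
    """P(condition | positive test) via Bayes' rule."""
    p_positive = base_rate * hit_rate + (1 - base_rate) * false_alarm_rate
    return base_rate * hit_rate / p_positive

# A "classic" problem: rare condition, sensitive test, modest false-alarm rate.
print(posterior(base_rate=0.01, hit_rate=0.90, false_alarm_rate=0.05))  # ~0.15
# Neglecting the base rate (implicitly treating it as 50%) inflates the estimate:
print(posterior(base_rate=0.50, hit_rate=0.90, false_alarm_rate=0.05))  # ~0.95
```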

2010
Vol. 13 (05), pp. 607-619
Author(s): Diemo Urbig

Previous research investigating base-rate neglect as a bias in human information processing has focused on isolated individuals. This study complements that research by showing that in settings of interacting individuals, especially settings of social learning where individuals can learn from one another, base-rate neglect can increase a population's welfare. The study further supports the argument that a population whose members are biased by neglecting base rates need not perform worse than a population with unbiased members. Adapting the social learning model of Bikhchandani, Hirshleifer and Welch (The Journal of Political Economy 100 (1992), 992–1026) and including base rates that differ from the generic 50–50 case, conditions are identified under which underweighting base-rate information increases the population's welfare. Base-rate neglect can start a social learning process that would otherwise not have started, and it can thereby generate positive externalities that improve a population's welfare.
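As a rough illustration of how underweighting the base rate can start an otherwise-stalled cascade, the sketch below implements a simplified sequential-learning loop in the spirit of the Bikhchandani, Hirshleifer and Welch model. The prior, signal accuracy, and the `prior_weight` parameter are illustrative assumptions, not values from the paper, and ties are resolved crudely.

```python
import math
import random

def simulate_cascade(prior, accuracy, prior_weight, n_agents, state_good=True, seed=0):
    """Simplified sequential social learning (BHW-style sketch).

    Each agent combines the public record of earlier choices with one private
    binary signal (correct with probability `accuracy`) and adopts if the
    resulting log-odds favor the good state. `prior_weight` scales the base
    rate in log-odds space: 1.0 is fully Bayesian, 0.0 ignores the base rate.
    """
    rng = random.Random(seed)
    lr = math.log(accuracy / (1 - accuracy))            # log-likelihood ratio of one signal
    prior_lo = prior_weight * math.log(prior / (1 - prior))
    net_signals = 0                                      # signals revealed by non-cascade choices
    choices = []
    for _ in range(n_agents):
        p_up = accuracy if state_good else 1 - accuracy
        own = lr if rng.random() < p_up else -lr
        public = prior_lo + net_signals * lr
        choices.append(public + own > 0)
        # The choice reveals the private signal only if it would differ for the
        # two possible signals, i.e. the agent is not yet in a cascade.
        if (public + lr > 0) != (public - lr > 0):
            net_signals += 1 if own > 0 else -1
    return choices

# With a pessimistic prior, fully Bayesian agents may never adopt a genuinely
# good option, while mild base-rate neglect lets early adopters start a cascade.
print(sum(simulate_cascade(prior=0.3, accuracy=0.7, prior_weight=1.0, n_agents=50)))
print(sum(simulate_cascade(prior=0.3, accuracy=0.7, prior_weight=0.2, n_agents=50)))
```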


2008
Vol. 90 (1), pp. 23-32
Author(s): Florian Kutzner, Peter Freytag, Tobias Vogel, Klaus Fiedler

2007
Vol. 30 (3), pp. 262-263
Author(s): Edmund Fantino, Stephanie Stolarz-Fantino

Abstract: We present evidence supporting the target article's assertion that while presenting base-rate information in a natural frequency format can enhance sensitivity to base rates, the method of presentation is not a panacea. Indeed, we review studies demonstrating that when subjects directly experience base rates as natural frequencies in a trial-by-trial setting, they still evince large base-rate neglect.
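For reference, the natural frequency format re-expresses the same probabilities as counts in a hypothetical sample, as in this minimal sketch (the numbers are illustrative and match the hypothetical screening problem above, not the reviewed studies).

```python
# The same screening problem expressed as natural frequencies:
# "Out of 1000 people, 10 have the condition; 9 of them test positive.
#  Of the 990 without the condition, about 50 also test positive."
sick_and_positive = 9
healthy_and_positive = 50   # ~990 * 0.05
print(sick_and_positive / (sick_and_positive + healthy_and_positive))  # ~0.15
```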


Author(s): Daniel Link, Markus Raab

Abstract: Human behavior is often assumed to be irrational, full of errors, and affected by cognitive biases. One of these biases is base-rate neglect, which occurs when the base rates of a specific category are not considered when making decisions. We argue here that while naïve subjects demonstrate base-rate neglect under laboratory conditions, experts tested in the real world do use base rates. Our explanation is that lab studies use single questions, whereas most real-world decisions are sequential in nature, providing a more realistic test of base-rate use. One decision that lends itself to testing base-rate use in real life occurs in beach volleyball: deciding whom to serve to in order to win the game. Analyzing the sequential choices of expert athletes in more than 1,300 games revealed that they were sensitive to base rates and adapted their decision strategies to the performance of the opponent. Our data describe a threshold at which players change their strategy and use base rates. We conclude that the debate over whether decision makers use base rates should be shifted to real-world tests, and that the focus should be on when and how base rates are used.
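A minimal sketch of the kind of threshold rule described: a hypothetical serve-targeting function with an arbitrary threshold value. This is not the authors' analysis, only an illustration of switching strategy once the opponents' observed error base rates diverge sufficiently.

```python
def choose_serve_target(errors_a, attempts_a, errors_b, attempts_b, threshold=0.15):
    """Hypothetical threshold rule for serve targeting (illustrative only).

    Serve to whichever opponent shows the higher observed error base rate,
    but only switch strategy once the difference in error rates exceeds
    `threshold`; otherwise keep the current strategy.
    """
    rate_a = errors_a / attempts_a if attempts_a else 0.0
    rate_b = errors_b / attempts_b if attempts_b else 0.0
    if abs(rate_a - rate_b) < threshold:
        return "keep current strategy"
    return "serve to player A" if rate_a > rate_b else "serve to player B"

print(choose_serve_target(errors_a=6, attempts_a=20, errors_b=2, attempts_b=20))
```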


1996
Vol. 19 (1), pp. 25-26
Author(s): Robert M. Hamm

Abstract: A recent study showed physicians' reasoning about a realistic case to neglect the base rate. It also showed physicians interpreting information pertinent to the base rate differently depending on whether it was presented early or late in the case. Although these adult reasoners might do better if given hints through talk of relative frequencies, this would not prove that they had no problem with base-rate neglect.


2007
Vol. 30 (3), pp. 272-274
Author(s): Donald Laming

Abstract: Human responses to probabilities can be studied through gambling and through experiments presenting biased sequences of stimuli. In both cases, participants are sensitive to base rates. They adjust automatically to changes in base rate; such adjustment is incompatible with conformity to Bayes' theorem. "Base-rate neglect" is therefore specific to the exercises in mental arithmetic reviewed in the target article.


2021
Author(s): Piers Howe, Andrew Perfors, Bradley Walker, Yoshihisa Kashima, Nicolas Fay

Bayesian statistics offers a normative description of how a person should update their original beliefs (i.e., their priors) in light of new evidence (i.e., the likelihood). Previous research suggests that people tend to underweight both their prior (base-rate neglect) and the likelihood (conservatism), although this varies by individual and situation. Yet this work generally elicits people's knowledge as single point estimates (e.g., x has a 5% probability of occurring) rather than as a full distribution. Here we demonstrate the utility of eliciting and fitting full distributions when studying these questions. Across three experiments, we found substantial variation in the extent to which people showed base-rate neglect and conservatism, which our method allowed us to measure simultaneously at the level of the individual for the first time. We found that while most people tended to disregard the base rate, they did so less when the prior was made explicit. Although many individuals were conservative, there was no apparent systematic relationship between base-rate neglect and conservatism within individuals. We suggest that this method shows great potential for studying human probabilistic reasoning.
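One common way to quantify both tendencies at the individual level is to fit weights on the prior and the likelihood in log-odds space (reported posterior log-odds ≈ a·prior + b·likelihood, with a < 1 indicating base-rate neglect and b < 1 indicating conservatism). The sketch below uses made-up point-estimate data for illustration; it is not the distribution-elicitation method used in these experiments.

```python
import numpy as np

def fit_weights(prior_log_odds, likelihood_log_odds, reported_log_odds):
    """Least-squares fit of reported posterior ~ a*prior + b*likelihood (log-odds).

    a < 1 suggests base-rate neglect, b < 1 suggests conservatism.
    Generic weighted-Bayes sketch, not the paper's distribution-fitting method.
    """
    X = np.column_stack([prior_log_odds, likelihood_log_odds])
    coeffs, *_ = np.linalg.lstsq(X, reported_log_odds, rcond=None)
    return coeffs  # [a, b]

# Made-up data for a participant who underweights the prior (a ~ 0.3)
# and is conservative with the evidence (b ~ 0.7).
rng = np.random.default_rng(1)
prior_lo = rng.normal(0, 1.5, size=100)
like_lo = rng.normal(0, 1.5, size=100)
reported = 0.3 * prior_lo + 0.7 * like_lo + rng.normal(0, 0.2, size=100)
print(fit_weights(prior_lo, like_lo, reported))
```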


2019
Author(s): Ryther Anderson, Achay Biong, Diego Gómez-Gualdrón

Tailoring the structure and chemistry of metal-organic frameworks (MOFs) enables the manipulation of their adsorption properties to suit specific energy and environmental applications. As there are millions of possible MOFs (with tens of thousands already synthesized), molecular simulation, such as grand canonical Monte Carlo (GCMC), has frequently been used to rapidly evaluate the adsorption performance of a large set of MOFs, allowing subsequent experiments to focus on only a small subset of the most promising candidates. In many instances, however, even molecular simulation becomes prohibitively time-consuming, underscoring the need for alternative screening methods, such as machine learning, to precede molecular simulation efforts. In this study, as a proof of concept, we trained a neural network as the first example of a machine learning model capable of predicting full adsorption isotherms of molecules not included in the training of the model. To achieve this, we trained our neural network only on alchemical species, represented only by their geometry and force field parameters, and used this neural network to predict the loadings of real adsorbates. We focused on predicting room-temperature adsorption of small (one- and two-atom) molecules relevant to chemical separations, namely argon, krypton, xenon, methane, ethane, and nitrogen. However, we also observed surprisingly promising predictions for more complex molecules whose properties lie outside the range spanned by the alchemical adsorbates. Prediction accuracies suitable for large-scale screening were achieved using simple MOF descriptors (e.g., geometric properties and chemical moieties) and adsorbate descriptors (e.g., force field parameters and geometry). Our results illustrate a new training philosophy that opens the path towards machine learning models that can predict the adsorption loading of any new adsorbate at any new operating conditions in any new MOF.
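As a hypothetical sketch of the descriptor-based regression setup described (not the authors' network, features, or data), one could map simple MOF descriptors plus adsorbate force-field parameters and pressure to a predicted loading with a small feed-forward network; the feature names and placeholder data below are assumptions for illustration only.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Hypothetical descriptor-based adsorption regressor (illustrative only).
# Features: MOF descriptors (e.g., void fraction, surface area, pore diameter)
# plus adsorbate force-field parameters (epsilon, sigma) and log-pressure.
# Targets: loadings, which in practice would come from GCMC simulations;
# random placeholder data is used here.
rng = np.random.default_rng(0)
X = rng.random((500, 6))   # [void_frac, surf_area, pore_d, epsilon, sigma, log_p]
y = rng.random(500)        # placeholder loadings

model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
model.fit(X, y)
print(model.predict(X[:3]))  # predicted loadings for the first three samples
```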

