Conservatism revisited: Base rates, prior probabilities, and averaging strategies

1996 ◽ Vol 19 (1) ◽ pp. 36-37
Author(s): Nancy Paule Melone ◽ Timothy W. McGuire

Abstract: Consistent with Koehler's position, we propose a generalization of the base rate fallacy and earlier conservatism literatures. In studies using both traditional tasks and new tasks based on ecologically valid base rates, our subjects typically underweight individuating information at least as much as they underweight base rates. The implications of cue consistency for averaging heuristics are discussed.
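The contrast at issue can be made concrete with a small illustration. The Python sketch below is purely hypothetical and is not the authors' model: the function names and numbers (a 0.30 base rate and a 4:1 likelihood ratio) are arbitrary choices used only to compare a Bayesian combination of a base rate with individuating evidence against a simple equal-weight averaging heuristic.

# Hypothetical sketch only: compare a Bayesian update with an equal-weight
# averaging heuristic for combining a base rate with individuating evidence.

def bayes_posterior(base_rate, likelihood_ratio):
    """Posterior P(H | E) from prior P(H) and likelihood ratio P(E | H) / P(E | not-H)."""
    prior_odds = base_rate / (1.0 - base_rate)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1.0 + posterior_odds)

def averaging_judgment(base_rate, evidence_cue, w=0.5):
    """Weighted average of the two cues; w is the weight placed on the base rate."""
    return w * base_rate + (1.0 - w) * evidence_cue

base_rate = 0.30                   # assumed base rate of the target category
likelihood_ratio = 4.0             # assumed diagnosticity of the individuating evidence
evidence_cue = likelihood_ratio / (1.0 + likelihood_ratio)   # evidence alone implies ~0.80

print("Bayes:     %.2f" % bayes_posterior(base_rate, likelihood_ratio))   # ~0.63
print("Averaging: %.2f" % averaging_judgment(base_rate, evidence_cue))    # 0.55

On these assumed numbers the averaging judgment (0.55) stays closer to the base rate than the Bayesian posterior (about 0.63), so the individuating evidence is underweighted, which is the conservatism pattern the abstract describes.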

1996 ◽ Vol 19 (1) ◽ pp. 33-34
Author(s): Clark McCauley

Abstract: The fallacy beneath the base rate fallacy is that we know what a base rate is. We talk as if base rates and individuating information were two different kinds of information. From a Bayesian perspective, however, the only difference between base rate and individuating information is which comes first.
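McCauley's point can be written as a one-line Bayesian identity. The sketch below assumes two pieces of evidence D_1 and D_2 that are conditionally independent given the hypothesis H; whichever one is folded into the "prior" first, the posterior is the same:

\[
P(H \mid D_1, D_2) \propto P(D_1, D_2 \mid H)\,P(H)
  = P(D_2 \mid H)\bigl[P(D_1 \mid H)\,P(H)\bigr]
  = P(D_1 \mid H)\bigl[P(D_2 \mid H)\,P(H)\bigr].
\]

Under this (assumed) conditional independence, treating the base rate as the prior and then updating on the individuating datum yields the same answer as the reverse order.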


1996 ◽ Vol 19 (1) ◽ pp. 1-17
Author(s): Jonathan J. Koehler

Abstract: We have been oversold on the base rate fallacy in probabilistic judgment from an empirical, normative, and methodological standpoint. At the empirical level, a thorough examination of the base rate literature (including the famous lawyer–engineer problem) does not support the conventional wisdom that people routinely ignore base rates. Quite the contrary, the literature shows that base rates are almost always used and that their degree of use depends on task structure and representation. Specifically, base rates play a relatively larger role in tasks where base rates are implicitly learned or can be represented in frequentist terms. Base rates are also used more when they are reliable and relatively more diagnostic than available individuating information. At the normative level, the base rate fallacy should be rejected because few tasks map unambiguously into the narrow framework that is held up as the standard of good decision making. Mechanical applications of Bayes's theorem to identify performance errors are inappropriate when (1) key assumptions of the model are either unchecked or grossly violated, and (2) no attempt is made to identify the decision maker's goals, values, and task assumptions. Methodologically, the current approach is criticized for its failure to consider how the ambiguous, unreliable, and unstable base rates of the real world are and should be used. Where decision makers' assumptions and goals vary, and where performance criteria are complex, the traditional Bayesian standard is insufficient. Even where predictive accuracy is the goal in commonly defined problems, there may be situations (e.g., informationally redundant environments) in which base rates can be ignored with impunity. A more ecologically valid research program is called for. This program should emphasize the development of prescriptive theory in rich, realistic decision environments.
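As a concrete reference point for the Bayesian benchmark under discussion, here is a minimal Python sketch of the lawyer–engineer setup (70 engineers and 30 lawyers in one condition, the reverse in the other). The likelihood ratio of 2 for the personality description is an arbitrary assumption chosen for illustration, not a value from the original studies.

# Minimal sketch of the Bayesian benchmark for the lawyer-engineer problem.
# The likelihood ratio below is an assumed value chosen only for illustration.

def posterior_engineer(base_rate_engineer, likelihood_ratio):
    """P(engineer | description) via Bayes' theorem in odds form."""
    prior_odds = base_rate_engineer / (1.0 - base_rate_engineer)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1.0 + posterior_odds)

likelihood_ratio = 2.0                  # assume the description is twice as likely for an engineer
for base_rate in (0.70, 0.30):          # 70/30 vs. 30/70 engineer panels
    print("base rate %.2f -> posterior %.2f"
          % (base_rate, posterior_engineer(base_rate, likelihood_ratio)))
# A judge who ignored the base rate would answer identically in both conditions;
# the benchmark says the answers should differ (here roughly 0.82 vs. 0.46).

Koehler's argument is precisely that applying this benchmark mechanically presupposes assumptions (e.g., random sampling and an agreed likelihood ratio) that real tasks often do not satisfy.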


1997 ◽ Vol 20 (4) ◽ pp. 774-775
Author(s): Jonathan E. Adler

Abstract: In many base rate studies, a judgment is required for which the base rates are relevant, yet subjects do not use them; it is then inferred that the base rates are ignored. I question this inference. Second, I argue that the base rate fallacy is no less significant for what it reveals about human reasoning even if it occurs less frequently than has been alleged.


1996 ◽ Vol 19 (1) ◽ p. 26
Author(s): Gideon Keren ◽ Lambert J. Thijs

Abstract: Setting the two hypotheses of complete neglect and full use of base rates against each other is inappropriate. The proper question concerns the degree to which base rates are used (or neglected), and under what conditions. We outline alternative approaches and recommend regression analysis. Koehler's conclusion that we have been oversold on the base rate fallacy seems to be premature.
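The commentary does not spell out a specific model, but one common way to quantify degree of use, sketched below on simulated data with illustrative function and variable names of my own, is to regress judged log posterior odds on the log prior odds (the base rate) and the log likelihood ratio (the individuating information); coefficients near 1 indicate full use of a cue, coefficients near 0 indicate neglect.

# Hedged sketch with simulated data: estimate how heavily judgments weight
# the base rate versus the individuating information via regression on the
# log-odds scale.

import numpy as np

def cue_weights(log_prior_odds, log_likelihood_ratio, log_judged_odds):
    """Least-squares weights on the two cues (plus an intercept)."""
    X = np.column_stack([np.ones_like(log_prior_odds),
                         log_prior_odds,
                         log_likelihood_ratio])
    coefs, *_ = np.linalg.lstsq(X, log_judged_odds, rcond=None)
    return coefs[1], coefs[2]          # (weight on base rate, weight on evidence)

rng = np.random.default_rng(0)
lpo = rng.normal(size=200)             # simulated log prior odds across problems
llr = rng.normal(size=200)             # simulated log likelihood ratios
judged = 0.5 * lpo + 1.0 * llr + rng.normal(scale=0.1, size=200)  # half-weights the base rate

w_base, w_evidence = cue_weights(lpo, llr, judged)
print("weight on base rate: %.2f, weight on evidence: %.2f" % (w_base, w_evidence))

On these simulated judgments the recovered weights are roughly 0.5 for the base rate and 1.0 for the evidence, the kind of graded answer (partial use rather than all-or-none neglect) the commentary argues for.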


2017
Author(s): Ariel Zylberberg ◽ Daniel M. Wolpert ◽ Michael N. Shadlen

Summary: Accurate decisions require knowledge of prior probabilities (e.g., prevalence or base rate), but it is unclear how prior probability is learned in the absence of a teacher. We hypothesized that humans could learn base rates from experience making decisions, even without feedback. Participants made difficult decisions about the direction of dynamic random dot motion. For each block of 15-42 trials, the base rate favored left or right by a different amount. Participants were not informed of the base rate, yet they gradually biased their choices and thereby increased accuracy and confidence in their decisions. They achieved this by updating knowledge of the base rate after each decision, using a counterfactual representation of confidence that simulates a neutral prior. The strategy is consistent with Bayesian updating of belief and suggests that humans represent both true confidence, which incorporates the evolving belief of the prior, and counterfactual confidence, which discounts the prior.
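The mechanism described in the summary can be sketched in a few lines of Python. The code below is an illustrative approximation under simplified assumptions of my own (a logistic evidence model and a Beta belief over the base rate), not the authors' exact algorithm: the agent chooses by combining its current base rate estimate with the evidence, computes counterfactual confidence as if the prior were neutral, and uses that confidence to fractionally update its base rate belief, all without feedback.

# Illustrative approximation only (not the authors' exact algorithm):
# learn a base rate from one's own decisions, without feedback, by crediting
# the chosen side with the counterfactual confidence computed under a
# neutral 50/50 prior.

import math
import random

def simulate(n_trials=500, true_p_right=0.8, sensitivity=0.5, seed=1):
    random.seed(seed)
    alpha, beta = 1.0, 1.0                      # Beta(1, 1) belief over P(correct = right)
    for _ in range(n_trials):
        right_is_correct = random.random() < true_p_right
        evidence = random.gauss(1.0 if right_is_correct else -1.0, 2.0)  # noisy motion signal

        # Decision combines the evidence with the learned prior (log-odds sum).
        p_right_belief = alpha / (alpha + beta)
        log_odds = sensitivity * evidence + math.log(p_right_belief / (1.0 - p_right_belief))
        chose_right = log_odds > 0

        # Counterfactual confidence: probability the choice is correct under a neutral prior.
        p_right_neutral = 1.0 / (1.0 + math.exp(-sensitivity * evidence))
        confidence = p_right_neutral if chose_right else 1.0 - p_right_neutral

        # Fractional update of the base rate belief, credited to the chosen side.
        if chose_right:
            alpha += confidence
            beta += 1.0 - confidence
        else:
            beta += confidence
            alpha += 1.0 - confidence
    return alpha / (alpha + beta)

print("learned base rate estimate: %.2f" % simulate())   # drifts toward the true 0.8

Because the counterfactual confidence discounts the learned prior, the update does not simply reinforce whatever bias the agent already holds, which is, on the summary's account, the role of the neutral-prior representation.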


1996 ◽ Vol 19 (1) ◽ pp. 19-20
Author(s): Terry Connolly

Abstract: The base rate fallacy is directly dependent on a particular judgment paradigm in which information may be unambiguously designated as either "base rate" or "individuating," and in which subjects make two-stage sequential judgments. The paradigm may be a poor match for real world settings, and the fallacy may thus be undefined for natural ecologies of judgment.


1996 ◽ Vol 19 (1) ◽ pp. 29-30
Author(s): Lisa Koonce

Abstract: Koehler's call for a reanalysis of the base rate fallacy is particularly important in the applied domain of accounting, since base rate data appear to be an important input for many accounting tasks. In this commentary I discuss the use of base rates in accounting and explain why more flexible standards of performance are important when judging the use of base rates.


1996 ◽ Vol 19 (1) ◽ pp. 21-22
Author(s): Pablo Fernandez-Berrocal ◽ Julian Almaraz ◽ Susana Segura

Abstract: (1) There is enough contradictory evidence regarding the role of base rates in category learning to confirm the nonexistence of biases in such learning. (2) It is not always possible to activate statistical reasoning through frequentist representation. (3) It is necessary to use the concept of systematic processing in reconsidering the published work on biases.


1996 ◽ Vol 19 (1) ◽ pp. 41-53
Author(s): Jonathan J. Koehler

Abstract: Commentators agree that simple conclusions about a general base rate fallacy are not appropriate. It is more constructive to identify conditions under which base rates are differentially weighted. Commentators also agree that improving the ecological validity of the research is desirable, although this is less important to those interested exclusively in psychological processes. The philosophers and ecologists among the commentators offer a kinder perspective on base rate reasoning than the psychologists. My own perspective is that the interesting questions (both psychological and normative/prescriptive) are best answered through empirical studies that use real-world performance standards.

