Comparing Bayesian and non-Bayesian accounts of human confidence reports

2016
Author(s): William T. Adler, Wei Ji Ma

Humans can meaningfully report their confidence in a perceptual or cognitive decision. It is widely believed that these reports reflect the Bayesian probability that the decision is correct, but this hypothesis has not been rigorously tested against non-Bayesian alternatives. We use two perceptual categorization tasks in which Bayesian confidence reporting requires subjects to take sensory uncertainty into account in a specific way. We find that subjects do take sensory uncertainty into account when reporting confidence, suggesting that brain areas involved in reporting confidence can access low-level representations of sensory uncertainty. However, behavior is not fully consistent with the Bayesian hypothesis and is better described by simple heuristic models. Both conclusions are robust to changes in the uncertainty manipulation, task, response modality, model comparison metric, and additional flexibility in the Bayesian model. Our results suggest that adhering to a rational account of confidence behavior may require incorporating implementational constraints.
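The task-specific details matter, but the core computation the abstract refers to can be sketched in a few lines. The following is a minimal illustration with assumed parameters, not the paper's actual tasks: a Bayesian observer's confidence is the posterior probability of the chosen category, and for a fixed measurement it shrinks toward 0.5 as sensory noise grows, which is the sense in which Bayesian confidence must take sensory uncertainty into account.

```python
# A minimal sketch (not the paper's exact tasks or parameters): Bayesian
# confidence as the posterior probability of the chosen category, given a
# noisy measurement and a known level of sensory noise.
import numpy as np
from scipy.stats import norm

MU = 3.0          # assumed category means at +/- MU (illustrative)
SIGMA_CAT = 4.0   # assumed spread of each category's stimulus distribution

def bayesian_confidence(x, sigma_noise):
    """Posterior probability of the chosen category given measurement x.

    The measurement is modeled as x = s + noise, so the predictive
    distribution for each category is Gaussian with variance
    SIGMA_CAT**2 + sigma_noise**2.
    """
    s = np.sqrt(SIGMA_CAT**2 + sigma_noise**2)
    like_pos = norm.pdf(x, loc=+MU, scale=s)
    like_neg = norm.pdf(x, loc=-MU, scale=s)
    p_pos = like_pos / (like_pos + like_neg)   # equal priors on the categories
    return max(p_pos, 1.0 - p_pos)             # confidence in the chosen category

# Same measurement, increasing sensory uncertainty: a Bayesian observer's
# confidence falls toward 0.5 as the noise grows.
for sigma in (1.0, 4.0, 12.0):
    print(sigma, round(bayesian_confidence(2.0, sigma), 3))
```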

Author(s): Timothy McGrew

One of the central complaints about Bayesian probability is that it places no constraints on individual subjectivity in one’s initial probability assignments. Those sympathetic to Bayesian methods have responded by adding restrictions motivated by broader epistemic concerns about the possibility of changing one’s mind. This chapter explores some cases where, intuitively, a straightforward Bayesian model yields unreasonable results. Problems arise in these cases not because there is something wrong with the Bayesian formalism per se but because standard textbook illustrations teach us to represent our inferences in simplified ways that break down in extreme cases. It also explores some interesting limitations on the extent to which successive items of evidence ought to induce us to change our minds when certain screening conditions obtain.
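The screening idea the chapter alludes to can be illustrated with a small numerical example (the numbers below are made up, not the chapter's): when one item of evidence screens off another, the second item ought not to change our minds at all.

```python
# A small numeric illustration (made-up numbers, not from the chapter) of a
# screening condition: if E2 is conditionally independent of H given E1, then
# learning E2 after E1 leaves the posterior for H unchanged.
p_h = 0.3
p_e1 = {True: 0.8, False: 0.2}   # P(E1 | H), P(E1 | not-H)
p_e2_given_e1 = 0.9              # P(E2 | E1), the same whether H holds or not

# Posterior after E1 alone.
post_e1 = p_h * p_e1[True] / (p_h * p_e1[True] + (1 - p_h) * p_e1[False])

# Posterior after E1 and E2: the factor P(E2 | E1) multiplies numerator and
# denominator alike and cancels, so the posterior is unchanged.
num = p_h * p_e1[True] * p_e2_given_e1
den = num + (1 - p_h) * p_e1[False] * p_e2_given_e1
post_e1_e2 = num / den

print(round(post_e1, 4), round(post_e1_e2, 4))  # identical: E1 screens off E2
```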


2014, pp. 101-117
Author(s): Michael D. Lee, Eric-Jan Wagenmakers

2018, Vol 265, pp. 271-278
Author(s): Tyler B. Grove, Beier Yao, Savanna A. Mueller, Merranda McLaughlin, Vicki L. Ellingrod, ...

2021
Author(s): John K. Kruschke

In most applications of Bayesian model comparison or Bayesian hypothesis testing, the results are reported in terms of the Bayes factor only, not in terms of the posterior probabilities of the models. Posterior model probabilities are not reported because researchers are reluctant to declare prior model probabilities, which in turn stems from uncertainty in the prior. Fortunately, Bayesian formalisms are designed to embrace prior uncertainty, not ignore it. This article provides a novel derivation of the posterior distribution of model probability, and shows many examples. The posterior distribution is useful for making decisions taking into account the uncertainty of the posterior model probability. Benchmark Bayes factors are provided for a spectrum of priors on model probability. R code is posted at https://osf.io/36527/. This framework and tools will improve interpretation and usefulness of Bayes factors in all their applications.
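The article's novel contribution is the posterior distribution of model probability; the basic relation underlying the question is the textbook mapping from a Bayes factor and a prior model probability to a posterior model probability. A minimal sketch of that mapping (illustrative Bayes factor and priors, not the article's benchmarks or its derivation):

```python
# A minimal sketch of the textbook relation behind the article's question
# (not the article's novel derivation): converting a Bayes factor into a
# posterior model probability requires a prior model probability, and the
# answer can swing widely across reasonable priors.
def posterior_model_prob(bayes_factor, prior_m1):
    """P(M1 | data) from the Bayes factor BF_10 and the prior P(M1)."""
    prior_odds = prior_m1 / (1.0 - prior_m1)
    post_odds = bayes_factor * prior_odds
    return post_odds / (1.0 + post_odds)

bf = 10.0  # illustrative Bayes factor in favor of M1
for prior in (0.1, 0.25, 0.5, 0.75):
    print(f"prior P(M1)={prior:.2f} -> posterior P(M1)={posterior_model_prob(bf, prior):.3f}")
```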


2017, Vol 70, pp. 84-93
Author(s): R. Wesley Henderson, Paul M. Goggans, Lei Cao

2018, Vol 30 (12), pp. 3327-3354
Author(s): William T. Adler, Wei Ji Ma

The Bayesian model of confidence posits that confidence reflects the observer's posterior probability that the decision is correct. Hangya, Sanders, and Kepecs (2016) have proposed that researchers can test the Bayesian model by deriving qualitative signatures of Bayesian confidence (i.e., patterns that one would expect to see if an observer were Bayesian) and looking for those signatures in human or animal data. We examine two proposed signatures, showing that their derivations contain hidden assumptions that limit their applicability and that they are neither necessary nor sufficient conditions for Bayesian confidence. One signature is an average confidence of 0.75 on trials with neutral evidence. This signature holds only when class-conditioned stimulus distributions do not overlap and when internal noise is very low. Another signature is that as stimulus magnitude increases, confidence increases on correct trials but decreases on incorrect trials. This divergence signature holds only when stimulus distributions do not overlap or when noise is high. Navajas et al. (2017) have proposed an alternative form of this signature; we find no indication that this alternative form is expected under Bayesian confidence. Our observations give us pause about the usefulness of the qualitative signatures of Bayesian confidence. To determine the nature of the computations underlying confidence reports, there may be no shortcut to quantitative model comparison.
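One way to see how such signatures are checked is to simulate a Bayesian observer and bin confidence by stimulus magnitude on correct and incorrect trials. The sketch below uses illustrative parameters chosen for this example (overlapping Gaussian categories and moderate noise), precisely the kind of regime in which, per the abstract, the signatures need not hold:

```python
# A minimal simulation sketch (illustrative parameters, not the paper's) of
# how one might look for the "divergence" signature: mean Bayesian confidence
# on correct vs. incorrect trials as a function of stimulus magnitude.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
MU, SIGMA_CAT, SIGMA_NOISE = 3.0, 4.0, 4.0   # overlapping categories, moderate noise

n = 200_000
cat = rng.choice([-1, 1], size=n)                   # true category
s = rng.normal(loc=MU * cat, scale=SIGMA_CAT)       # stimulus drawn from its category
x = s + rng.normal(scale=SIGMA_NOISE, size=n)       # noisy internal measurement

pred_sd = np.sqrt(SIGMA_CAT**2 + SIGMA_NOISE**2)
p_pos = norm.pdf(x, MU, pred_sd)
p_pos = p_pos / (p_pos + norm.pdf(x, -MU, pred_sd))  # posterior P(C = +1 | x)
choice = np.where(p_pos >= 0.5, 1, -1)
confidence = np.maximum(p_pos, 1 - p_pos)
correct = choice == cat

# Bin trials by stimulus magnitude and compare mean confidence on
# correct vs. incorrect trials within each bin.
bins = np.digitize(np.abs(s), np.quantile(np.abs(s), [0.25, 0.5, 0.75]))
for b in range(4):
    m = bins == b
    print(b,
          round(confidence[m & correct].mean(), 3),
          round(confidence[m & ~correct].mean(), 3))
```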


2018
Author(s): Julia M. Haaf, Fayette Klaassen, Jeffrey Rouder

Most theories in the social sciences are verbal and provide ordinal-level predictions for data. For example, a theory might predict that performance is better in one condition than another, but not by how much. One way of gaining additional specificity is to posit many ordinal constraints that hold simultaneously. For example, a theory might predict an effect in one condition, a larger effect in another, and none in a third. We show how common theoretical positions naturally lead to multiple ordinal constraints. To assess whether multiple ordinal constraints hold in data, we adopt a Bayesian model comparison approach. The result is an inferential system that is custom-tuned for the way social scientists conceptualize theory, and that is more intuitive and informative than current linear-model approaches.
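One standard way to compute a Bayes factor for an ordinal constraint, which may or may not match the authors' own implementation, is the encompassing-prior approach: the constrained-versus-unconstrained Bayes factor equals the posterior probability that the constraint holds divided by its prior probability. A minimal sketch with made-up data:

```python
# A minimal sketch (an illustration of the encompassing-prior idea, not the
# authors' implementation) for an ordinal constraint: the Bayes factor for
# "theta1 < theta2" versus an unconstrained model is the posterior probability
# that the constraint holds divided by its prior probability.
import numpy as np

rng = np.random.default_rng(1)

# Illustrative data: observed condition means with known standard errors.
obs = np.array([0.40, 0.65])
se = np.array([0.10, 0.10])

# Unconstrained (encompassing) prior on the two condition effects.
prior_sd = 1.0
n_draws = 200_000
prior = rng.normal(0.0, prior_sd, size=(n_draws, 2))

# Conjugate normal-normal posterior for each effect.
post_var = 1.0 / (1.0 / prior_sd**2 + 1.0 / se**2)
post_mean = post_var * (obs / se**2)
posterior = rng.normal(post_mean, np.sqrt(post_var), size=(n_draws, 2))

p_prior = np.mean(prior[:, 0] < prior[:, 1])        # ~0.5 by symmetry
p_post = np.mean(posterior[:, 0] < posterior[:, 1])
print("BF (constrained vs. unconstrained) ~", round(p_post / p_prior, 2))
```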

