A quantitative confidence signal detection model: 1. Fitting psychometric functions

2016 ◽  
Vol 115 (4) ◽  
pp. 1932-1945 ◽  
Author(s):  
Yongwoo Yi ◽  
Daniel M. Merfeld

Perceptual thresholds are commonly assayed in the laboratory and clinic. When precision and accuracy are required, thresholds are quantified by fitting a psychometric function to forced-choice data. The primary shortcoming of this approach is that it typically requires 100 trials or more to yield accurate (i.e., small bias) and precise (i.e., small variance) psychometric parameter estimates. We show that confidence probability judgments combined with a model of confidence can yield psychometric parameter estimates that are markedly more precise and/or markedly more efficient than conventional methods. Specifically, both human data and simulations show that including confidence probability judgments for just 20 trials can yield psychometric parameter estimates that match the precision of those obtained from 100 trials using conventional analyses. Such an efficiency advantage would be especially beneficial for tasks (e.g., taste, smell, and vestibular assays) that require more than a few seconds for each trial, but this potential benefit could accrue for many other tasks.
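The conventional approach the abstract contrasts against, fitting a psychometric function to forced-choice data by maximum likelihood, can be sketched in a few lines. Below is a minimal illustration assuming a cumulative-Gaussian psychometric function with bias mu and spread sigma; the parameterization and the scipy-based fitting routine are illustrative choices, not the authors' exact procedure.

```python
# Minimal sketch: maximum-likelihood fit of a cumulative-Gaussian
# psychometric function to binary forced-choice data. The (mu, sigma)
# parameterization is an illustrative assumption.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def neg_log_likelihood(params, stimuli, responses):
    """Negative log-likelihood of binary responses under (mu, sigma)."""
    mu, sigma = params
    if sigma <= 0:
        return np.inf
    p = norm.cdf(stimuli, loc=mu, scale=sigma)   # P("positive" response)
    p = np.clip(p, 1e-9, 1 - 1e-9)               # guard against log(0)
    return -np.sum(responses * np.log(p) + (1 - responses) * np.log(1 - p))

# Simulate 100 conventional forced-choice trials, then fit
rng = np.random.default_rng(0)
true_mu, true_sigma = 0.2, 1.0
stimuli = rng.uniform(-3, 3, size=100)
responses = (rng.normal(stimuli, true_sigma) > true_mu).astype(float)

fit = minimize(neg_log_likelihood, x0=[0.0, 1.0],
               args=(stimuli, responses), method="Nelder-Mead")
print("estimated (mu, sigma):", fit.x)
```

With only 20 trials, estimates from this kind of fit become markedly noisier, which is the precision gap the confidence-based method is reported to close.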

2019 ◽  
Vol 122 (3) ◽  
pp. 904-921 ◽
Author(s):  
Yongwoo Yi ◽  
Wei Wang ◽  
Daniel M. Merfeld

The study of decision making is a fundamental subfield within neuroscience. While recent findings have yielded major advances in our understanding of decision making, confidence in such decisions remains poorly understood. In this paper, we present a confidence signal detection (CSD) model that combines a standard signal detection model yielding a noisy decision variable with a model of confidence. The CSD model requires quantitative measures of confidence obtained by recording confidence probability judgments. Specifically, we model confidence probability judgments for binary direction recognition (e.g., did I move left or right?) decisions. We use our CSD model to study both confidence calibration (i.e., how does confidence compare with performance?) and the distributions of confidence probability judgments. We evaluate two variants of our CSD model: a conventional model with two free parameters (CSD2) that assumes that confidence is well calibrated and our new model with three free parameters (CSD3) that includes an additional confidence scaling factor. On average, our CSD2 and CSD3 models explain 73% and 82%, respectively, of the variance found in our empirical data set. Furthermore, for our large data sets consisting of 3,600 trials per subject, correlation and residual analyses suggest that the CSD3 model better explains the predominant aspects of the empirical data than the CSD2 model does, especially for subjects whose confidence is not well calibrated. Moreover, simulations show that asymmetric confidence distributions can lead traditional confidence calibration analyses to suggest “underconfidence” even when confidence is perfectly calibrated. These findings show that this CSD model can be used to help improve our understanding of confidence and decision making. NEW & NOTEWORTHY We make life-or-death decisions each day; our actions depend on our “confidence.” Though confidence, accuracy, and response time are the three pillars of decision making, we know little about confidence. In a previous paper, we presented a new model, dependent on a single scaling parameter, that transforms decision variables to confidence. Here we show that this model explains the empirical human confidence distributions obtained during a vestibular direction recognition task better than standard signal detection models do.
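A minimal simulation of the core CSD idea, a single noisy decision variable driving both the binary choice and a scaled confidence probability judgment, might look as follows. The mapping confidence = Φ(k·|DV|/σ) and the parameter names are assumptions for illustration; k stands in for the extra scaling factor of the CSD3 variant, with k = 1 recovering the well-calibrated CSD2 case.

```python
# Minimal sketch of a confidence signal detection (CSD) simulation:
# one noisy decision variable yields both the direction choice and a
# confidence probability judgment. Parameterization is an illustrative
# assumption, not the authors' published equations.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
sigma = 1.0        # decision-variable noise
k = 0.7            # confidence scaling factor (k = 1 -> calibrated CSD2 case)
stimuli = rng.uniform(-2, 2, size=3600)      # signed motion magnitudes
dv = rng.normal(stimuli, sigma)              # noisy decision variable

choices = dv > 0                             # binary direction decision
# Confidence that the chosen direction is correct, given the DV:
confidence = norm.cdf(k * np.abs(dv) / sigma)

accuracy = np.mean(choices == (stimuli > 0))
print(f"mean confidence = {confidence.mean():.3f}, accuracy = {accuracy:.3f}")
```

Setting k below 1 compresses confidence toward 0.5 and skews its distribution, which is one way to see how asymmetric confidence distributions could mimic "underconfidence" in traditional calibration analyses.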


2007 ◽  
Vol 215 (1) ◽  
pp. 61-71 ◽  
Author(s):  
Edgar Erdfelder ◽  
Lutz Cüpper ◽  
Tina-Sarah Auer ◽  
Monika Undorf

Abstract. A memory measurement model is presented that accounts for judgments of remembering, knowing, and guessing in old-new recognition tasks by assuming four disjoint latent memory states: recollection, familiarity, uncertainty, and rejection. This four-states model can be applied to both Tulving's (1985) remember-know procedure (RK version) and Gardiner and coworkers' (Gardiner, Java, & Richardson-Klavehn, 1996; Gardiner, Richardson-Klavehn, & Ramponi, 1997) remember-know-guess procedure (RKG version). It is shown that the RK version of the model fits remember-know data approximately as well as the one-dimensional signal detection model does. In contrast, the RKG version of the four-states model outperforms the corresponding detection model even if unequal variances for old and new items are allowed for. We show empirically that the two versions of the four-states model measure the same state probabilities. However, the RKG version, requiring remember-know-guess judgments, provides parameter estimates with smaller standard errors and is therefore recommended for routine use.
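One plausible reading of the four disjoint states can be sketched as a small multinomial model: each state maps to a remember, know, or new response, with uncertain trials split by a guessing bias. The state-to-response mapping and parameter names below are illustrative assumptions, not the exact equations of the published model.

```python
# Minimal sketch of a four-states account of remember-know-guess data.
# State-to-response mapping and parameter names are illustrative.
def rkg_probabilities(p_rec, p_fam, p_unc, g_old=0.5):
    """Predicted response probabilities for a single item type.

    Disjoint states: recollection -> "remember", familiarity -> "know",
    uncertainty -> an "old" guess with probability g_old, and rejection
    (= 1 - p_rec - p_fam - p_unc) -> "new".
    """
    p_rej = 1.0 - p_rec - p_fam - p_unc
    assert p_rej >= 0.0, "state probabilities must sum to at most 1"
    return {
        "remember": p_rec,
        "know": p_fam,
        "guess old": p_unc * g_old,
        "new": p_rej + p_unc * (1.0 - g_old),
    }

probs = rkg_probabilities(p_rec=0.4, p_fam=0.3, p_unc=0.2)
print(probs, "sum =", sum(probs.values()))
```

Because the RKG procedure separates guesses from genuine "know" responses, the uncertainty state is identified directly from the data, which is consistent with the smaller standard errors reported for that version.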


Memory ◽  
2006 ◽  
Vol 14 (6) ◽  
pp. 655-671 ◽  
Author(s):  
James Michael Lampinen ◽  
Kristina N. Watkins ◽  
Timothy N. Odegard

1976 ◽  
Vol 42 (1) ◽  
pp. 75-85 ◽  
Author(s):  
Michael D. Biderman ◽  
William D. McBrayer ◽  
Mary La Montagne

The effects of responses of another person or a computer occurring prior to the subjects' responses in auditory intensity recognition tasks were interpreted in terms of a signal-detection model which assumed that subjects shifted their decision criteria temporarily on each trial. A parameter representing the amount of criterion shift provided a reliable estimate of sensitivity to social influence. When the social sensitivity parameter was estimated from the data, discriminative ability, defined as d', was unaffected by the presence of social influence. Principal components analyses suggested that social sensitivity and discriminative ability represented essentially orthogonal components of subjects' decision behavior.
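The trial-wise criterion-shift idea can be illustrated with a short simulation: the criterion is displaced by a social sensitivity parameter s toward the other observer's prior response, and d' is recovered once responses are conditioned on that influence. The additive form and the parameter names are assumptions for illustration.

```python
# Minimal sketch: a decision criterion shifted on each trial toward the
# other observer's response, with d' estimated within each influence
# condition. Parameter names and the additive shift are illustrative.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)
n, d_prime, c0, s = 1000, 1.0, 0.0, 0.4
signal = rng.integers(0, 2, size=n)          # 0 = noise trial, 1 = signal trial
x = rng.normal(signal * d_prime, 1.0)        # internal evidence on each trial
other = rng.integers(0, 2, size=n)           # other's prior response (0/1)
criterion = c0 + s * (1 - 2 * other)         # shift toward the other's call
response = x > criterion                     # subject's "signal" response

# Estimating d' separately within each social-influence condition undoes
# the shift: discriminative ability is unaffected by the influence itself.
for o in (0, 1):
    m = other == o
    hit = np.clip(response[m & (signal == 1)].mean(), 1e-3, 1 - 1e-3)
    fa = np.clip(response[m & (signal == 0)].mean(), 1e-3, 1 - 1e-3)
    print(f"other said {o}: d' = {norm.ppf(hit) - norm.ppf(fa):.2f}")
```

Pooling across conditions without modeling the shift would instead mix two criteria and bias d' downward, which is why estimating the social sensitivity parameter matters.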


2020 ◽  
Vol 73 (8) ◽  
pp. 1242-1260 ◽
Author(s):  
Rory W Spanton ◽  
Christopher J Berry

Despite the unequal variance signal-detection (UVSD) model’s prominence as a model of recognition memory, a psychological explanation for the unequal variance assumption has yet to be verified. According to the encoding variability hypothesis, old-item memory strength variance (σo) is greater than that of new items because items are incremented by variable, rather than fixed, amounts of strength at encoding. Conditions that increase encoding variability should therefore result in greater estimates of σo. We conducted three experiments to test this prediction. In Experiment 1, encoding variability was manipulated by presenting items for a fixed or variable (normally distributed) duration at study. In Experiment 2, we used an attentional manipulation whereby participants studied items while performing an auditory one-back task in which distractors were presented at fixed or variable intervals. In Experiment 3, participants studied stimuli with either high or low variance in word frequency. Across experiments, estimates of σo were unaffected by our attempts to manipulate encoding variability, even though the manipulations weakly affected subsequent recognition. Instead, estimates of σo tended to be positively correlated with estimates of the mean difference in strength between new and studied items (d), as might be expected if σo generally scales with d. Our results show that it is surprisingly hard to successfully manipulate encoding variability, and they provide a signpost for others seeking to test the encoding variability hypothesis.
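The encoding variability hypothesis has a direct simulation reading: if studied items receive strength increments with their own variance, old-item variance should exceed new-item variance. The sketch below, with illustrative values rather than the authors' fitting procedure, shows how σo would scale with increment variability.

```python
# Minimal sketch of the encoding-variability hypothesis under UVSD:
# studied items gain increments drawn from N(d, enc_sd), so old-item
# variance is 1 + enc_sd**2. Values are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(3)
n = 100_000
d, enc_sd = 1.2, 0.8                        # mean strength gain, increment SD
new = rng.normal(0.0, 1.0, size=n)          # new-item strengths
old = rng.normal(0.0, 1.0, size=n) + rng.normal(d, enc_sd, size=n)

print("sigma_new =", round(new.std(), 3))   # ~ 1.0
print("sigma_o   =", round(old.std(), 3))   # ~ sqrt(1 + 0.8**2), about 1.28
```

Under this reading, a manipulation that raises enc_sd should raise σo; the reported finding that σo instead tracked d is what puts pressure on the hypothesis.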


Author(s):  
Jackson Duncan-Reid ◽  
Jason S. McCarley

When individuals work together to make decisions in a signal detection task, they typically achieve greater sensitivity as a group than they could each achieve on their own. The present experiments investigate whether metacognitive, or Type 2, signal detection judgments would show a similar pattern of collaborative benefit. Thirty-two participants in Experiment 1 and sixty participants in Experiment 2 completed a signal detection task individually and in groups, and measures of Type 1 and Type 2 sensitivity were calculated from participants' confidence judgments. Bayesian parameter estimates suggested that regardless of whether teams were given feedback on their performance (Experiment 1) or received no feedback (Experiment 2), there were no credible differences in metacognitive efficiency between the teams and their better members, nor between the teams and their worse members. These findings suggest that teams may self-assess their performance by deferring metacognitive judgments to the most metacognitively sensitive individual within the team, even without trial-by-trial feedback, rather than integrating their judgments and achieving increased metacognitive awareness of their own performance.
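Type 2 sensitivity treats confidence as a detection response on one's own accuracy: a Type 2 hit is high confidence on a correct trial, and a Type 2 false alarm is high confidence on an error. The simple Gaussian Type 2 d' below is a stand-in for the Bayesian meta-d' style estimation used in the experiments; the threshold and toy data are assumptions for illustration.

```python
# Minimal sketch of a Type 2 (metacognitive) sensitivity measure:
# "high confidence" as a detection response on the observer's own
# accuracy. A simple stand-in for Bayesian meta-d' estimation.
import numpy as np
from scipy.stats import norm

def type2_dprime(correct, confidence, threshold=0.75):
    high = confidence >= threshold
    h2 = np.clip(high[correct].mean(), 1e-3, 1 - 1e-3)    # Type 2 hit rate
    fa2 = np.clip(high[~correct].mean(), 1e-3, 1 - 1e-3)  # Type 2 FA rate
    return norm.ppf(h2) - norm.ppf(fa2)

# Toy data: 200 trials where confidence loosely tracks accuracy
rng = np.random.default_rng(4)
correct = rng.random(200) < 0.8
confidence = np.where(correct, rng.beta(6, 2, 200), rng.beta(2, 3, 200))
print("Type 2 d' ~", round(type2_dprime(correct, confidence), 2))
```

Comparing this quantity between a team and its better and worse members is the core contrast of the experiments: a team matching its better member, rather than exceeding both, is what suggests deferral rather than integration.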

