Moderators of relative confidence calibration: A meta-analysis of the across-subject relationship between confidence and accuracy
If one friend confidently tells us to buy product A while another friend thinks that product B is better but is not confident, we may go with the advice of our confident friend. Should we? The relationship between people's confidence and accuracy has been of great interest in many fields, especially in the context of high-stakes situations such as eyewitness testimony, but there is still little consensus about how much we should trust someone's overall level of confidence. Here we examine the across-subject relationship between average accuracy and average confidence in 214 unique datasets from the Confidence Database. This approach allows us to address the issue empirically with unprecedented statistical power and to test for the presence of various moderators. We find that the across-subject correlation between average accuracy and average confidence in this sample is r = .22. Importantly, this relationship is much stronger for memory than for perception tasks, as well as for confidence scales with fewer points. These results show that we should take one's confidence seriously (and perhaps buy product A) and suggest several factors that moderate the relative consistency with which people make confidence judgments.