Dual strategies in human confidence judgments

2020 ◽  
Author(s):  
Andrea Bertana ◽  
Andrey Chetverikov ◽  
Ruben S. van Bergen ◽  
Sam Ling ◽  
Janneke F.M. Jehee

Abstract: Although confidence is commonly believed to be an essential element in decision making, it remains unclear what gives rise to one’s sense of confidence. Recent Bayesian theories propose that confidence is computed, in part, from the degree of uncertainty in sensory evidence. Alternatively, observers can use physical properties of the stimulus as a heuristic for confidence. In the current study, we developed ideal observer models for each hypothesis and compared their predictions against human data obtained from psychophysical experiments. Participants reported the orientation of a stimulus, and their confidence in this estimate, under varying levels of internal and external noise. As predicted by the Bayesian model, we found a consistent link between confidence and behavioral variability for a given stimulus orientation: confidence was higher when orientation estimates were more precise, for both internal and external sources of noise. However, we observed the inverse pattern when comparing between stimulus orientations: although observers gave more precise orientation estimates for cardinal orientations (a phenomenon known as the oblique effect), they were more confident about oblique orientations. We show that these results are well explained by a confidence strategy based on the perceived amount of noise in the stimulus. Altogether, our results suggest that confidence is not always computed from the degree of uncertainty in one’s perceptual evidence, but can instead be based on visual cues that function as simple heuristics for confidence.
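The contrast drawn in this abstract can be sketched in code. The reciprocal mappings and function names below are illustrative assumptions, not the authors' fitted models; they only capture which quantity each account says confidence should track.

```python
def bayesian_confidence(posterior_sd):
    """Bayesian account: confidence tracks the uncertainty of the
    observer's own orientation estimate (narrower posterior -> higher
    confidence), whatever the source of that uncertainty."""
    return 1.0 / posterior_sd

def heuristic_confidence(perceived_stimulus_noise):
    """Heuristic account: confidence tracks a visible proxy, the
    apparent noise in the stimulus, even when actual estimation
    precision differs for other reasons (e.g. the oblique effect)."""
    return 1.0 / perceived_stimulus_noise
```

The two accounts diverge exactly when estimation precision and apparent stimulus noise dissociate, which is the comparison across cardinal and oblique orientations exploited in the study.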

2018 ◽  
Author(s):  
Abdellah Fourtassi ◽  
Michael C. Frank

Identifying a spoken word in a referential context requires both the ability to integrate multimodal input and the ability to reason under uncertainty. How do these tasks interact with one another? We study how adults identify novel words under joint uncertainty in the auditory and visual modalities, and we propose an ideal observer model of how cues in these modalities are combined optimally. Model predictions are tested in four experiments in which recognition occurs under various sources of uncertainty. We found that participants use both auditory and visual cues to recognize novel words. When the signal is not distorted by environmental noise, participants weight the auditory and visual cues optimally, that is, according to the relative reliability of each modality. In contrast, when one modality has noise added to it, human perceivers systematically prefer the unperturbed modality to a greater extent than the optimal model does. This work extends the literature on perceptual cue combination to the case of word recognition in a referential context. In addition, this context offers a link to the study of multimodal information in word meaning learning.
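The optimal weighting referred to here is the standard reliability-weighted combination rule for independent Gaussian cues; a minimal sketch (function and variable names are hypothetical, and the authors' actual model concerns word identification rather than raw Gaussian estimates):

```python
import numpy as np

def combine_cues(mu_a, sigma_a, mu_v, sigma_v):
    """Reliability-weighted combination of two independent Gaussian cues.
    Each cue's weight is its reliability (inverse variance) divided by
    the total reliability; the combined estimate is more precise than
    either cue alone."""
    rel_a = 1.0 / sigma_a**2
    rel_v = 1.0 / sigma_v**2
    w_a = rel_a / (rel_a + rel_v)
    mu = w_a * mu_a + (1.0 - w_a) * mu_v
    sigma = np.sqrt(1.0 / (rel_a + rel_v))
    return mu, sigma
```

The reported deviation from optimality corresponds to giving the noisy modality even less weight than this rule prescribes.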


2019 ◽  
Vol 10 (1) ◽  
Author(s):  
Philipp Schustek ◽  
Alexandre Hyafil ◽  
Rubén Moreno-Bote

Abstract: Our immediate observations must be supplemented with contextual information to resolve ambiguities. However, the context is often ambiguous too, and thus it must itself be inferred to guide behavior. Here, we introduce a novel hierarchical task (the airplane task) in which participants must infer a higher-level contextual variable to inform probabilistic inference about a hidden dependent variable at a lower level. By controlling the reliability of past sensory evidence through the sample size of the observations, we find that humans estimate the reliability of the context and combine it with current sensory uncertainty to inform their confidence reports. Behavior closely follows inference by probabilistic message passing between latent variables across hierarchical state representations. Commonly reported inferential fallacies, such as sample size insensitivity, are not present, nor did participants appear to rely on simple heuristics. Our results reveal uncertainty-sensitive integration of information at different hierarchical levels and temporal scales.
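Sample size controlling evidence reliability has a simple probabilistic reading. Under a beta-binomial model (an illustrative stand-in, not the authors' exact task model), more samples both pull the posterior toward the observed fraction and shrink its uncertainty, which is exactly what sample-size-insensitive observers would fail to exploit:

```python
def beta_posterior(k, n, a=1.0, b=1.0):
    """Posterior over a hidden proportion after observing k of n
    samples in one category, starting from a Beta(a, b) prior.
    Returns (posterior mean, posterior variance); the variance
    shrinks as the sample size n grows, i.e. the evidence becomes
    more reliable."""
    a_post = a + k
    b_post = b + (n - k)
    total = a_post + b_post
    mean = a_post / total
    var = (a_post * b_post) / (total**2 * (total + 1))
    return mean, var
```

For instance, 6 of 8 samples and 3 of 4 samples point to similar proportions, but the larger sample yields a tighter (more reliable) posterior.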


2015 ◽  
Vol 30 (S2) ◽  
pp. S113-S114 ◽  
Author(s):  
P. Leptourgos ◽  
C.E. Notredame ◽  
R. Jardri ◽  
S. Denève

Recently, Jardri and Denève proposed that positive symptoms in schizophrenia could be generated by an imbalance between excitation and inhibition in brain networks, which leads to circular inference, an aberrant form of inference in which messages (bottom-up and/or top-down) are counted more than once and are thus overweighted [1]. Moreover, they postulated that psychotic symptoms are caused by a system that “expects what it senses” and as a result attributes extreme weight even to weak sensory evidence. Their hypothesis was then validated by a probabilistic inference task (in prep.). Here, we put forward a new experimental study that could validate the circular inference framework in the domain of visual perception. Initially, we restricted ourselves to healthy controls, whose tendencies toward psychotic symptoms were measured using appropriate scales. We investigated the computations performed by perceptual systems when facing ambiguous sensory evidence. In such cases, perception is known to oscillate between two interpretations, a phenomenon known as bistable perception. More specifically, we asked how prior expectations and visual cues affect the dynamics of bistability. Participants looked at a Necker cube that was continuously displayed on the screen and reported their percept every time they heard a sound [2]. We manipulated sensory evidence by adding shades to the stimuli, and prior expectations by giving different instructions concerning the presence of an implicit bias [3]. We showed that both prior expectations and visual cues significantly affect bistability, using both static and dynamic measures. We also found that the behavior could be well fitted by Bayesian models (“simple” Bayes, and a hierarchical Bayesian model with Markovian statistics). Preliminary results from patients will also be presented.
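In log-odds form, the over-counting of messages described here can be sketched as follows. The loop-gain parameters and function name are illustrative; see the cited model [1] for the actual formulation:

```python
def circular_posterior_log_odds(prior_lo, sensory_lo, a_down=0.0, a_up=0.0):
    """Sketch of circular inference: descending (prior) and ascending
    (sensory) messages reverberate through an under-inhibited network
    and are effectively counted (1 + a) times instead of once.
    With a_down = a_up = 0 this reduces to ordinary Bayesian updating:
    posterior log-odds = prior log-odds + sensory log-odds."""
    return (1.0 + a_down) * prior_lo + (1.0 + a_up) * sensory_lo
```

A nonzero ascending gain makes even weak sensory evidence decisive, matching the idea of a system that "expects what it senses."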


2019 ◽  
Author(s):  
Benjamin M. Chin ◽  
Johannes Burge

Abstract: A core goal of visual neuroscience is to predict human perceptual performance from natural signals. Performance in any natural task can be impacted by at least three sources of uncertainty: stimulus variability, internal noise, and sub-optimal computations. Determining the relative importance of these factors has been a focus of interest for decades, but most successes have been achieved with simple tasks and simple stimuli. Drawing quantitative links directly from natural signals to perceptual performance has proven a substantial challenge. Here, we develop an image-computable (pixels in, estimates out) Bayesian ideal observer that makes optimal use of the statistics relating image movies to speed. The optimal computations bear striking resemblance to descriptive models proposed to account for neural activity in area MT. We develop a model based on the ideal, stimulate it with naturalistic signals, predict the behavioral signatures of each performance-limiting factor, and test the predictions in an interlocking series of speed discrimination experiments. The critical experiment collects human responses to repeated presentations of each unique image movie. The model, highly constrained by the earlier experiments, tightly predicts human response consistency without free parameters. This result implies that human observers use near-optimal computations to estimate speed, and that human performance is near-exclusively limited by natural stimulus variability and internal noise. The results demonstrate that human performance can be predicted from a task-specific statistical analysis of naturalistic stimuli, show that image-computable ideal observer analysis can be generalized from simple to natural stimuli, and encourage similar analyses in other domains.


2018 ◽  
Author(s):  
Siyu Wang ◽  
Robert C Wilson

Human decision making is inherently variable. While this variability is often seen as a sign of suboptimality in human behavior, recent work suggests that randomness can actually be adaptive. An example arises when we must choose between exploring unknown options or exploiting options we know well. A little randomness in these 'explore-exploit' decisions is remarkably effective, as it encourages us to explore options we might otherwise ignore. Moreover, people actually use such 'random exploration' in practice, increasing their behavioral variability when it is more valuable to explore. Despite this progress, the nature of adaptive 'decision noise' for exploration is unknown: specifically, whether it is generated internally, from stochastic processes in the brain, or externally, from stochastic stimuli in the world. Here we show that, while both internal and external noise drive variability in behavior, the noise driving random exploration is predominantly internal. This suggests that random exploration depends on adaptive noise processes in the brain which are subject to cognitive control.
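A common formalization of value-guided decision noise is softmax choice; in this sketch the temperature parameter stands in for the adaptive internal noise the abstract describes (an assumption for illustration, not the authors' fitted model):

```python
import numpy as np

def softmax_choice(values, temperature, rng):
    """Choose among options with value-based softmax. Higher
    temperature means more internal decision noise, and hence more
    random exploration of options other than the current best."""
    v = np.asarray(values, dtype=float) / temperature
    p = np.exp(v - v.max())   # subtract max for numerical stability
    p /= p.sum()
    return rng.choice(len(values), p=p)
```

Raising the temperature when exploration is more valuable reproduces the adaptive increase in behavioral variability described above.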


2018 ◽  
Author(s):  
Philipp Schustek ◽  
Rubén Moreno-Bote

Because of the uncertainty inherent in perception, our immediate observations must be supplemented with contextual information to resolve ambiguities. However, the context is often ambiguous too, and thus it must itself be inferred to guide behavior. We developed a novel hierarchical task in which participants must infer a higher-level contextual variable to inform probabilistic inference about a hidden dependent variable at a lower level. By controlling the reliability of the past sensory evidence through sample size, we found that humans estimate the reliability of the context and combine it with current sensory uncertainty to inform their confidence reports. Indeed, behavior closely follows inference by probabilistic message passing between latent variables across hierarchical state representations. Despite the sophistication of our task, commonly reported inferential fallacies, such as sample size insensitivity, are not present, nor do participants appear to rely on simple heuristics. Our results reveal ubiquitous probabilistic representations of uncertainty at different hierarchical levels and temporal scales of the environment.


2017 ◽  
Author(s):  
Shan Shen ◽  
Wei Ji Ma

Abstract: Given the same sensory stimuli in the same task, human observers do not always make the same response. Well-known sources of behavioral variability are sensory noise and guessing. Visual short-term memory studies have suggested that the precision of the sensory noise is itself variable. However, it is unknown whether precision is also variable in perceptual tasks without a memory component. We searched for evidence for variable precision in 11 visual perception tasks with a single relevant feature, orientation. We specifically examined the effect of distractor stimuli: distractors were absent, homogeneous and fixed across trials, homogeneous and variable, or heterogeneous and variable. We first considered four models: with and without guessing, and with and without variability in precision. We quantified the importance of both factors using six metrics: factor knock-in difference, factor knock-out difference, and log factor posterior ratio, each based on AIC or BIC. According to all six metrics, we found strong evidence for variable precision in five experiments. Next, we extended our model space to include potential confounding factors: the oblique effect and decision noise. This left strong evidence for variable precision in only one experiment, in which distractors were homogeneous but variable. Finally, when we considered suboptimal decision rules, the evidence also disappeared in this experiment. Our results provide little evidence for variable precision overall and only a hint when distractors are variable. Methodologically, the results underline the importance of including multiple factors in factorial model comparison: testing for only two factors would have yielded an incorrect conclusion.
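The "variable precision" factor tested here is commonly formalized by letting precision itself fluctuate across trials, often as a gamma-distributed quantity. A generative sketch under that assumption (parameterization and names are illustrative):

```python
import numpy as np

def simulate_variable_precision(stimulus, mean_J, scale, n_trials, rng):
    """Variable-precision observer: trial-by-trial precision J is drawn
    from a gamma distribution with mean mean_J, and each measurement is
    the stimulus plus Gaussian noise with standard deviation 1/sqrt(J).
    A fixed-precision observer is the special case where J is constant."""
    J = rng.gamma(shape=mean_J / scale, scale=scale, size=n_trials)
    return stimulus + rng.normal(0.0, 1.0 / np.sqrt(J))
```

Mixing over J produces heavier-tailed response distributions than a single Gaussian, which is the behavioral signature that model comparison probes for.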


Author(s):  
Gregory L. Finch ◽  
Richard G. Cuddihy

The elemental composition of individual particles is commonly measured by using energy-dispersive spectroscopic microanalysis (EDS) of samples excited with electron-beam irradiation. Similarly, several investigators have characterized particles by using external monochromatic X-irradiation rather than electrons. However, there is little available information describing measurements of particulate characteristic X rays produced not from external sources of radiation, but rather from internal radiation contained within the particle itself. Here, we describe the low-energy (< 20 keV) characteristic X-ray spectra produced by internal radiation self-excitation of two general types of particulate samples: individual radioactive particles produced during the Chernobyl nuclear reactor accident, and radioactive fused aluminosilicate particles (FAP). In addition, we compare these spectra with those generated by conventional EDS. Approximately thirty radioactive particle samples from the Chernobyl accident were found on a sample of wood that was near the reactor when the accident occurred. Individual particles still on the wood were microdissected from the bulk matrix after bulk autoradiography.


2014 ◽  
Vol 23 (3) ◽  
pp. 132-139 ◽  
Author(s):  
Lauren Zubow ◽  
Richard Hurtig

Children with Rett Syndrome (RS) are reported to use multiple modalities to communicate, although their intentionality is often questioned (Bartolotta, Zipp, Simpkins, & Glazewski, 2011; Hetzroni & Rubin, 2006; Sigafoos et al., 2000; Sigafoos, Woodyatt, Tucker, Roberts-Pennell, & Pittendreigh, 2000). This paper will present results of a study analyzing the unconventional vocalizations of a child with RS. The primary research question addresses the ability of familiar and unfamiliar listeners to interpret unconventional vocalizations as “yes” or “no” responses. This paper will also address the acoustic analysis and perceptual judgments of these vocalizations. Pre-recorded isolated vocalizations of “yes” and “no” were presented to 5 listeners (mother, father, 1 unfamiliar clinician, and 2 familiar clinicians), and the listeners were asked to rate the vocalizations as either “yes” or “no.” The ratings were compared to the original identification made by the child's mother during the face-to-face interaction from which the samples were drawn. Findings of this study suggest that, in this case, the child's vocalizations were intentional and could be interpreted by familiar and unfamiliar listeners as either “yes” or “no” without contextual or visual cues. The results suggest that communication partners should be trained to attend to eye gaze and vocalizations to ensure the child's intended choice is accurately understood.

