The bounded rationality of probability distortion

2019 ◽  
Author(s):  
Hang Zhang ◽  
Xiangjuan Ren ◽  
Laurence T. Maloney

Abstract
In decision-making under risk (DMR), participants’ choices are based on probability values systematically different from those that are objectively correct. Similar systematic distortions are found in tasks involving relative frequency judgments (JRF). These distortions limit performance in a wide variety of tasks, and an evident question is, why do we systematically fail in our use of probability and relative frequency information?

We propose a Bounded Log-Odds Model (BLO) of probability and relative frequency distortion based on three assumptions: (1) log-odds: probability and relative frequency are mapped to an internal log-odds scale, (2) boundedness: the range of representations of probability and relative frequency is bounded and the bounds change dynamically with task, and (3) variance compensation: the mapping compensates in part for uncertainty in probability and relative frequency values.

We compared human performance in both DMR and JRF tasks to the predictions of the BLO model as well as eleven alternative models, each missing one or more of the underlying BLO assumptions (factorial model comparison). The BLO model and its assumptions proved superior to all of the alternatives. In a separate analysis, we found that BLO accounts for individual participants’ data better than any previous model in the DMR literature.

We also found that, subject to the boundedness limitation, participants’ choice of distortion approximately maximized the mutual information between objective task-relevant values and internal values, a form of bounded rationality.

Significance Statement
People distort probability in decision under risk and many other tasks. These distortions can be large, leading us to make markedly suboptimal decisions. There is no agreement on why we distort probability. Distortion changes systematically with task, hinting that distortions are dynamic compensations for some intrinsic “bound” on working memory.
We first develop a model of the bound and the compensation process and then report an experiment showing that the model accounts for individual human performance in decision under risk and relative frequency judgments. Last, we show that the particular compensation in each experimental condition serves to maximize the mutual information between objective decision variables and their internal representations. We distort probability to compensate for our own working memory limitations.

2020 ◽  
Vol 117 (36) ◽  
pp. 22024-22034 ◽  
Author(s):  
Hang Zhang ◽  
Xiangjuan Ren ◽  
Laurence T. Maloney

In decision making under risk (DMR) participants’ choices are based on probability values systematically different from those that are objectively correct. Similar systematic distortions are found in tasks involving relative frequency judgments (JRF). These distortions limit performance in a wide variety of tasks and an evident question is, Why do we systematically fail in our use of probability and relative frequency information? We propose a bounded log-odds model (BLO) of probability and relative frequency distortion based on three assumptions: 1) log-odds: probability and relative frequency are mapped to an internal log-odds scale, 2) boundedness: the range of representations of probability and relative frequency is bounded and the bounds change dynamically with task, and 3) variance compensation: the mapping compensates in part for uncertainty in probability and relative frequency values. We compared human performance in both DMR and JRF tasks to the predictions of the BLO model as well as 11 alternative models, each missing one or more of the underlying BLO assumptions (factorial model comparison). The BLO model and its assumptions proved to be superior to any of the alternatives. In a separate analysis, we found that BLO accounts for individual participants’ data better than any previous model in the DMR literature. We also found that, subject to the boundedness limitation, participants’ choice of distortion approximately maximized the mutual information between objective task-relevant values and internal values, a form of bounded rationality.
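The log-odds assumption can be illustrated with a minimal sketch of the earlier linear-in-log-odds (LLO) family of distortions from the DMR literature, which BLO extends with dynamic bounds. The parameter values `gamma` and `p0` below are illustrative, not fitted values from this study.

```python
import math

def logit(p):
    """Map a probability to the log-odds scale."""
    return math.log(p / (1.0 - p))

def inv_logit(x):
    """Map a log-odds value back to a probability."""
    return 1.0 / (1.0 + math.exp(-x))

def llo_distort(p, gamma=0.6, p0=0.37):
    """Linear-in-log-odds distortion: a linear transform applied on
    the log-odds scale. gamma < 1 compresses the scale around the
    crossover point p0, producing the familiar inverted-S pattern."""
    return inv_logit(gamma * logit(p) + (1.0 - gamma) * logit(p0))
```

With `gamma < 1`, small probabilities are over-weighted and large ones under-weighted, while `p0` itself is mapped to itself, reproducing the inverted-S distortion described in the abstract.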


2010 ◽  
Vol 22 (3) ◽  
pp. 437-446 ◽  
Author(s):  
Jane Klemen ◽  
Christian Büchel ◽  
Mira Bühler ◽  
Mareike M. Menz ◽  
Michael Rose

Attentional interference between tasks performed in parallel is known to have strong and often undesired effects. As yet, however, the mechanisms by which interference operates remain elusive. A better knowledge of these processes may facilitate our understanding of the effects of attention on human performance and the debilitating consequences that disruptions to attention can have. According to the load theory of cognitive control, processing of task-irrelevant stimuli is increased by attending in parallel to a relevant task with high cognitive demands. This is due to the relevant task engaging cognitive control resources that are, hence, unavailable to inhibit the processing of task-irrelevant stimuli. However, it has also been demonstrated that a variety of types of load (perceptual and emotional) can result in a reduction of the processing of task-irrelevant stimuli, suggesting a uniform effect of increased load irrespective of the type of load. In the present study, we concurrently presented a relevant auditory matching task [n-back working memory (WM)] of low or high cognitive load (1-back or 2-back WM) and task-irrelevant images at one of three object visibility levels (0%, 50%, or 100%). fMRI activation during the processing of the task-irrelevant visual stimuli was measured in the lateral occipital cortex and found to be reduced under high, compared to low, WM load. In combination with previous findings, this result is suggestive of a more generalized load theory, whereby cognitive load, as well as other types of load (e.g., perceptual), can result in a reduction of the processing of task-irrelevant stimuli, in line with a uniform effect of increased load irrespective of the type of load.


2021 ◽  
Author(s):  
Klaus Oberauer

Several measurement models have been proposed for data from the continuous-reproduction paradigm for studying visual working memory: the original mixture model (Zhang & Luck, 2008) and its extension (Bays, Catalao, & Husain, 2009); the interference measurement model (Oberauer, Stoneking, Wabersich, & Lin, 2017); and the target confusability competition model (Schurgin, Wixted, & Brady, 2020). This article describes a space of possible measurement models in which all existing models can be placed. The space is defined by three dimensions: (1) the choice of an activation function (von Mises or Laplace), (2) the choice of a response-selection function (variants of Luce’s choice rule or of signal detection theory), and (3) whether or not memory precision is assumed to be constant across manipulations affecting memory. A factorial combination of these three variables generates all possible models in the model space. Fitting all models to eight data sets revealed a new model as empirically most adequate, which combines a von Mises activation function with a signal-detection response-selection rule. The precision parameter can be treated as a constant across many experimental manipulations, though it might vary with manipulations not yet explored. All modelling code and the raw data modelled are available on the OSF: osf.io/zwprv
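The original Zhang & Luck (2008) mixture model named above combines a von Mises "memory" component with a uniform "guessing" component over recall error. A sketch of its response-error density (the parameter values below are illustrative):

```python
import numpy as np

def mixture_density(error, kappa=8.0, guess_rate=0.2):
    """Two-component mixture density for recall error in radians,
    error in [-pi, pi]: a von Mises component centred on the target
    (precision kappa) plus a uniform guessing component.
    np.i0 is the modified Bessel function normalizing the von Mises."""
    von_mises = np.exp(kappa * np.cos(error)) / (2 * np.pi * np.i0(kappa))
    uniform = 1.0 / (2 * np.pi)
    return (1 - guess_rate) * von_mises + guess_rate * uniform
```

In practice `kappa` (memory precision) and `guess_rate` are fitted to each participant's error distribution by maximum likelihood; the later models named in the abstract replace the activation or response-selection components of this basic recipe.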


2020 ◽  
Author(s):  
Long Luu ◽  
Alan A. Stocker

Abstract
Categorical judgments can systematically bias the perceptual interpretation of stimulus features. However, it has remained unclear whether categorical judgments directly modify working memory representations or, alternatively, generate these biases via an inference process downstream from working memory. To address this question, we ran two novel psychophysical experiments in which human subjects had to revert their categorical judgments about a stimulus feature, if incorrect based on feedback, before providing an estimate of the feature. If categorical judgments indeed directly altered sensory representations in working memory, subjects’ estimates should reflect some aspects of their initial (incorrect) categorical judgment in those trials.

We found no traces of the initial categorical judgment. Rather, subjects seem able to flexibly switch their categorical judgment if needed and use the correct corresponding categorical prior to properly perform feature inference. A cross-validated model comparison also revealed that feedback may lead to selective memory recall, such that only memory samples consistent with the categorical judgment are accepted for the inference process. Our results suggest that categorical judgments do not modify sensory information in working memory but rather act as a top-down expectation in the subsequent sensory recall and inference process downstream from working memory.


2020 ◽  
Author(s):  
Samuel Planton ◽  
Timo van Kerkoerle ◽  
Leïla Abbih ◽  
Maxime Maheu ◽  
Florent Meyniel ◽  
...  

The capacity to store information in working memory strongly depends upon the ability to recode the information in a compressed form. Here, we tested the theory that human adults encode binary sequences of stimuli in memory using a recursive compression algorithm akin to a “language of thought” capable of capturing nested patterns of repetitions and alternations. In five experiments, we probed memory for auditory or visual sequences using both subjective and objective measures. We used a sequence violation paradigm in which participants detected occasional violations in an otherwise fixed sequence. Both subjective ratings of complexity and objective violation detection rates were well predicted by minimal description length (also known as Kolmogorov complexity) in the binary version of the “language of geometry”, a formal language previously found to account for the human encoding of complex spatial sequences. We contrasted the language model with a model based solely on surprise given the stimulus transition probabilities. While both models accounted for variance in the data, the language model dominated the transition probability model for long sequences (with a number of elements far exceeding the limits of working memory). We use model comparison to show that the minimal description length in a recursive language provides a better fit than a variety of previous encoding models for sequences. The data support the hypothesis that, beyond the extraction of statistical knowledge, human sequence coding relies on internal compression using language-like nested structures.
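The core intuition, that nested repetition structure shortens a sequence's description, can be shown with a toy scorer. The sketch below is NOT the authors' "language of geometry"; it is a deliberately simplified stand-in that only scores "repeat a unit" decompositions, applied recursively.

```python
def repetition_description_length(seq):
    """Toy proxy for minimal description length of a binary string:
    try to write it as k repetitions of a shorter unit, recursing on
    the unit, and charge a small fixed cost (2) for the repeat count.
    Falls back to listing every symbol (cost = length)."""
    n = len(seq)
    if n <= 1:
        return n
    best = n  # worst case: describe each symbol individually
    for period in range(1, n // 2 + 1):
        if n % period == 0 and seq == seq[:period] * (n // period):
            best = min(best, repetition_description_length(seq[:period]) + 2)
    return best
```

A regular sequence like `"01" * 8` collapses to a short description, while an irregular one of the same length stays at full cost, mirroring the prediction that regular sequences are easier to hold in memory.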


2017 ◽  
Author(s):  
Matthew R. Nassar ◽  
Julie C. Helmers ◽  
Michael J. Frank

Abstract
The nature of capacity limits for visual working memory has been the subject of an intense debate that has relied on models assuming items are encoded independently. Here we propose that instead, similar features are jointly encoded through a “chunking” process to optimize performance on visual working memory tasks. We show that such chunking can: 1) facilitate performance improvements for abstract capacity-limited systems, 2) be optimized through reinforcement, 3) be implemented by center-surround dynamics, and 4) increase effective storage capacity at the expense of recall precision. Human performance on a variant of a canonical working memory task demonstrated performance advantages, precision detriments, inter-item dependencies, and trial-to-trial behavioral adjustments diagnostic of performance optimization through center-surround chunking. Models incorporating center-surround chunking provided a better quantitative description of human performance in our study as well as in a meta-analytic dataset, and apparent differences in working memory capacity across individuals were attributable to individual differences in the implementation of chunking. Our results reveal a normative rationale for center-surround connectivity in working memory circuitry, call for re-evaluation of memory performance differences that have previously been attributed to differences in capacity, and support a more nuanced view of visual working memory capacity limitations: a strategic tradeoff between storage capacity and memory precision through chunking contributes to flexible capacity limitations that include both discrete and continuous aspects.


Author(s):  
Selma Lugtmeijer ◽  
Nikki A. Lammers ◽  
Edward H. F. de Haan ◽  
Frank-Erik de Leeuw ◽  
Roy P. C. Kessels

Abstract
This review investigates the severity and nature of post-stroke working memory deficits with reference to the multi-component model of working memory. We conducted a systematic search in PubMed up to March 2019 with search terms for stroke and memory. Studies of adult stroke patients that included a control group and assessed working memory function were selected. Effect sizes (Hedges’ g) were extracted from 50 studies (3,084 stroke patients in total) based on the sample size, mean, and standard deviation of patients and controls. Performance of stroke patients was compared to healthy controls on low-load (i.e., capacity) and high-load (executively demanding) working memory tasks, grouped by modality (verbal, non-verbal). A separate analysis compared patients in the sub-acute and the chronic stage. Longitudinal studies and effects of lesion location were systematically reviewed. Stroke patients demonstrated significant deficits in working memory with a moderate effect size for both low-load (Hedges’ g = -.58 [-.82 to -.43]) and high-load (Hedges’ g = -.59 [-.73 to -.45]) tasks. The effect sizes were comparable for verbal and non-verbal material. Systematically reviewing the literature showed that working memory deficits remain prominent in the chronic stage of stroke. Lesions in a widespread fronto-parietal network are associated with working memory deficits. Stroke patients show decrements of moderate magnitude in all subsystems of working memory. This review clearly demonstrates the global nature of the impairment in working memory post-stroke.
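The effect sizes above follow the standard Hedges' g computation from group means, standard deviations, and sample sizes. A sketch with the usual small-sample correction (the numbers in the usage note are illustrative, not values from the review):

```python
import math

def hedges_g(mean_p, sd_p, n_p, mean_c, sd_c, n_c):
    """Hedges' g: standardized mean difference between a patient and a
    control group, multiplied by the small-sample correction factor J.
    Negative values indicate worse performance in the patient group."""
    df = n_p + n_c - 2
    pooled_sd = math.sqrt(((n_p - 1) * sd_p**2 + (n_c - 1) * sd_c**2) / df)
    correction = 1 - 3 / (4 * df - 1)  # bias correction J
    return correction * (mean_p - mean_c) / pooled_sd
```

For example, `hedges_g(8.0, 2.0, 20, 10.0, 2.0, 20)` gives a value slightly shrunk from the raw Cohen's d of -1.0, reflecting the correction for small samples.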


2017 ◽  
Vol 39 (2) ◽  
pp. 275-301 ◽  
Author(s):  
DANIEL FELLMAN ◽  
ANNA SOVERI ◽  
CHARLOTTE VIKTORSSON ◽  
SARAH HAGA ◽  
JOHANNES NYLUND ◽  
...  

ABSTRACT
Working memory (WM) is one of the most studied cognitive constructs in psychology because of its relevance to human performance, including language processing. The reading span task is the most widely used measure of verbal WM for sentences. However, comparable sentence-level updating tasks are missing. Hence, we sought to develop a WM updating task, the selective updating of sentences (SUS) task, that taps the ability to constantly update sentences. In two experiments with Finnish-speaking young adults, we examined the internal consistency and concurrent validity of the SUS task. It exhibited adequate internal consistency and correlated positively with well-established working memory measures. Moreover, the SUS task showed positive correlations with verbal episodic memory tasks employing sentences and paragraphs. These results indicate that the SUS task is a promising new task for psycholinguistic studies addressing verbal WM updating.


2019 ◽  
Vol 13 (3-4) ◽  
pp. 269-278
Author(s):  
Laura Martignon ◽  
Kathryn Laskey

Abstract
After a brief description of the four components of risk literacy and the tools for analyzing risky situations, decision strategies called fast and frugal trees are introduced. These rules satisfy tenets of bounded rationality and serve as efficient heuristics for decision under risk. We describe the construction of fast and frugal trees and compare their robustness for prediction under risk with that of Bayesian networks. In particular, we analyze situations of risky decisions in the medical domain. We show that the performance of fast and frugal trees does not fall far behind that of the more complex Bayesian networks.
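A fast and frugal tree can be written as a short cascade of one-cue checks, each offering an immediate exit, with only the final cue having two exits. The sketch below is modeled on the well-known coronary-care triage tree from this literature, but the cue names and routing here are illustrative, not taken from this article.

```python
def fast_frugal_triage(st_elevation, chest_pain, other_symptom):
    """Fast and frugal tree (illustrative cues): each level inspects
    one binary cue; the first two levels can decide immediately, and
    only the last cue splits into two exits."""
    if st_elevation:
        return "coronary care unit"   # first cue alone can decide
    if not chest_pain:
        return "regular bed"          # second cue exits if absent
    return "coronary care unit" if other_symptom else "regular bed"
```

The frugality is structural: at most three cues are ever consulted, and most cases exit after one or two, which is what makes such trees fast, transparent, and robust competitors to full Bayesian networks.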

