Measurement models for visual working memory—A factorial model comparison.

2021 ◽  
Author(s):  
Klaus Oberauer

Several measurement models have been proposed for data from the continuous-reproduction paradigm for studying visual working memory: the original mixture model (Zhang & Luck, 2008) and its extension (Bays, Catalao, & Husain, 2009); the interference measurement model (Oberauer, Stoneking, Wabersich, & Lin, 2017); and the target confusability competition model (Schurgin, Wixted, & Brady, 2020). This article describes a space of possible measurement models in which all existing models can be placed. The space is defined by three dimensions: (1) the choice of an activation function (von Mises or Laplace), (2) the choice of a response-selection function (variants of Luce's choice rule or of signal detection theory), and (3) whether or not memory precision is assumed to be constant across manipulations affecting memory. A factorial combination of these three variables generates all possible models in the model space. Fitting all models to eight data sets revealed a new model as empirically most adequate, one that combines a von Mises activation function with a signal-detection response-selection rule. The precision parameter can be treated as a constant across many experimental manipulations, though it might vary with manipulations not yet explored. All modelling code and the raw data modelled are available on the OSF: osf.io/zwprv
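The winning cell of the factorial model space can be sketched in code. The snippet below is an illustrative toy, not the article's fitting code: it combines a von Mises activation function over candidate response values with a simple signal-detection (max-plus-noise) response-selection rule; all function and parameter names (`kappa` for precision, `noise_sd`) are assumptions for the sketch.

```python
import numpy as np
from scipy.stats import vonmises

def activation_von_mises(candidates, target, kappa):
    """Activation of each candidate response value, peaked at the target.

    kappa plays the role of the memory-precision parameter.
    """
    return vonmises.pdf(candidates - target, kappa)

def respond_signal_detection(activations, noise_sd, rng):
    """SDT-style selection: pick the candidate with the largest noisy activation."""
    noisy = activations + rng.normal(0.0, noise_sd, size=activations.shape)
    return int(np.argmax(noisy))

rng = np.random.default_rng(0)
candidates = np.linspace(-np.pi, np.pi, 360, endpoint=False)  # response circle
act = activation_von_mises(candidates, target=0.0, kappa=8.0)
idx = respond_signal_detection(act, noise_sd=0.05, rng=rng)
response_error = candidates[idx]  # deviation of the chosen response from the target
```

Swapping `vonmises.pdf` for a Laplace kernel, or the max-plus-noise rule for a Luce choice rule, yields the other cells of the factorial space.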


2022 ◽  
Author(s):  
Jamal Rodgers Williams ◽  
Maria Martinovna Robinson ◽  
Mark Schurgin ◽  
John Wixted ◽  
Timothy F. Brady

Change detection tasks are commonly used to measure and understand the nature of visual working memory capacity. Across two experiments, we examine whether the latent memory signals used to perform change detection are continuous or all-or-none, and consider the implications for proper measurement of performance. In Experiment 1, we find evidence from confidence reports that visual working memory is continuous in strength, with strong support for equal-variance signal detection models. In Experiment 2, we then tested a critical implication of this result without relying on model comparison or confidence reports, by asking whether a simple instruction change would improve performance when measured with K, an all-or-none measure, compared to d′, a measure based on continuous strength signals. We found strong evidence that K values increased by roughly 30% despite no change in the underlying memory signals. By contrast, we found that d′ is fixed across these same instructions, demonstrating that it correctly separates response criterion from memory performance. Overall, our data call into question a large body of work using threshold measures, like K, to analyze change detection data, since this metric confounds response bias with memory performance in standard change detection tasks.
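The contrast between K and d′ can be made concrete with the standard formulas (Cowan's K = N·(H − F) for whole-display change detection; d′ = z(H) − z(F)). The sketch below is illustrative, not code from the article: under an equal-variance signal detection account, shifting only the response criterion moves the hit and false-alarm rates together, which changes K while leaving d′ untouched.

```python
from scipy.stats import norm

def cowans_k(hit_rate, fa_rate, set_size):
    """Threshold-based capacity estimate: K = N * (H - F)."""
    return set_size * (hit_rate - fa_rate)

def d_prime(hit_rate, fa_rate):
    """Continuous-strength sensitivity: d' = z(H) - z(F)."""
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# Same underlying sensitivity (d' = 1.5, set size 6), two criterion placements:
dp, N = 1.5, 6
for c in (0.0, -0.5):  # neutral vs. liberal criterion
    H = norm.cdf(dp / 2 - c)   # hit rate implied by the criterion
    F = norm.cdf(-dp / 2 - c)  # false-alarm rate implied by the criterion
    print(f"c={c:+.1f}  K={cowans_k(H, F, N):.2f}  d'={d_prime(H, F):.2f}")
```

Running this shows K varying with the criterion while d′ recovers exactly 1.5 in both cases, which is the measurement point the abstract makes.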


2018 ◽  
Vol 41 ◽  
Author(s):  
Wei Ji Ma

Given the many types of suboptimality in perception, I ask how one should test for multiple forms of suboptimality at the same time—or, more generally, how one should compare process models that can differ in any or all of the multiple components. In analogy to factorial experimental design, I advocate for factorial model comparison.

