Supplemental Material for A Comparison of Conflict Diffusion Models in the Flanker Task Through Pseudolikelihood Bayes Factors

2020 ◽  
Vol 127 (1) ◽  
pp. 114-135 ◽  
Author(s):  
Nathan J. Evans ◽  
Mathieu Servant


Conflict tasks are one of the most widely studied paradigms within cognitive psychology, where participants are required to respond based on relevant sources of information while ignoring conflicting irrelevant sources of information. The flanker task, in particular, has been the focus of considerable modeling efforts, with only three models being able to provide a complete account of empirical choice response time distributions: the dual-stage two-phase model (DSTP), the shrinking spotlight model (SSP), and the diffusion model for conflict tasks (DMC). Although these models are grounded in different theoretical frameworks, can provide diverging measures of cognitive control, and are quantitatively distinguishable, no previous study has compared all three of these models in their ability to account for empirical data. Here, we perform a comparison of the precise quantitative predictions of these models through Bayes factors, using probability density approximation to generate a pseudo-likelihood estimate of the unknown probability density function, and thermodynamic integration via differential evolution to approximate the analytically intractable Bayes factors. We find that for every participant across three data sets from three separate research groups, DMC provides an inferior account of the data to DSTP and SSP, which has important theoretical implications regarding the cognitive processes engaged in the flanker task, and practical implications for applying the models to flanker data. More generally, we argue that our combination of probability density approximation with marginal likelihood approximation (which we term pseudo-likelihood Bayes factors) provides a crucial step forward for the future of model comparison, where Bayes factors can be calculated between any models that can be simulated.
We also discuss the limitations of simulation-based methods, such as the potential for approximation error, and suggest that researchers should use analytically or numerically computed likelihood functions when they are available and computationally tractable.
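To make the pseudo-likelihood idea concrete, the sketch below approximates the log-likelihood of observed response times by simulating from a model and smoothing the simulations with a kernel density estimate. This is an illustration only, not the authors' implementation: the toy `simulate_rts` function (a shifted gamma) is a hypothetical stand-in for the real conflict-model simulators (DSTP, SSP, DMC), and the bandwidth and clipping choices are arbitrary.

```python
# Minimal sketch of a pseudo-likelihood via probability density approximation (PDA).
# The "model" here is a toy simulator; a real application would replace it
# with a simulator for DSTP, SSP, or DMC.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)

def simulate_rts(shape, scale, shift, n=5000):
    """Hypothetical stand-in simulator: returns n simulated response times."""
    return shift + rng.gamma(shape, scale, size=n)

def log_pseudo_likelihood(observed, shape, scale, shift):
    """Approximate log-likelihood of observed RTs under the simulated model."""
    sims = simulate_rts(shape, scale, shift)
    kde = gaussian_kde(sims)                    # kernel estimate of the unknown density
    dens = np.clip(kde(observed), 1e-10, None)  # floor to avoid log(0)
    return float(np.log(dens).sum())

observed = simulate_rts(2.0, 0.15, 0.3, n=200)  # fake "data" from known parameters
print(log_pseudo_likelihood(observed, 2.0, 0.15, 0.3))
```

Because the likelihood is estimated from a finite number of simulations, it carries Monte Carlo noise, which is one source of the approximation error discussed above.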


2011 ◽  
Vol 63 (4) ◽  
pp. 210-238 ◽  
Author(s):  
Corey N. White ◽  
Roger Ratcliff ◽  
Jeffrey J. Starns

2020 ◽  
Author(s):  
Mateo Leganes-Fonteneau ◽  
Ryan Bradley Scott ◽  
Theodora Duka ◽  
Zoltan Dienes

Research on implicit processes has revealed problems with awareness categorizations based on non-significant results. Moreover, post-hoc categorizations result in regression to the mean (RTM), by which aware participants are wrongly categorized as unaware. Using Bayes factors to obtain sensitive evidence for participants' lack of knowledge addresses the non-evidential nature of non-significant results and may also prevent regression-to-the-mean effects. Here we examine the reliability of a novel Bayesian awareness categorization procedure. Participants completed a reward learning task followed by a flanker task measuring attention towards conditioned stimuli. They were categorized as B_Aware and B_Unaware of stimulus-outcome contingencies, and those with insensitive Bayes factors were deemed B_Insensitive. We found that performance for B_Unaware participants was below chance level using unbiased tests. This was further confirmed using a resampling procedure with multiple iterations, contrary to the prediction of RTM effects. Conversely, when categorizing participants using t-tests, t_Unaware participants showed RTM effects. We also propose a group boundary optimization procedure to determine the threshold at which regression to the mean is observed. Using Bayes factors instead of t-tests as a post-hoc categorization tool allows evaluating evidence of unawareness, which in turn helps avoid RTM. The reliability of the Bayesian awareness categorization procedure strengthens previous evidence for implicit reward conditioning. The toolbox used for the categorization procedure is detailed and made available. Post-hoc group selection can provide evidence for implicit processes; the relevance of RTM needs to be considered for each study and cannot simply be assumed to be a problem.
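The categorization logic can be sketched as follows. This is not the authors' toolbox: the simple binomial Bayes factor (chance performance vs. an above-chance alternative with a uniform prior) and the conventional 3 and 1/3 thresholds are illustrative assumptions, standing in for whatever awareness-test statistic and priors a given study uses.

```python
# Sketch of Bayesian awareness categorization with three outcomes:
# evidence for awareness, evidence for unawareness, or insensitive data.
from scipy.stats import binom
from scipy.integrate import quad

def awareness_bf(k, n):
    """BF10 for k/n correct: H1 p ~ Uniform(0.5, 1) vs H0 p = 0.5 (chance)."""
    marginal_h1, _ = quad(lambda p: binom.pmf(k, n, p), 0.5, 1.0)
    marginal_h1 *= 2.0                       # density of Uniform(0.5, 1) is 2
    return marginal_h1 / binom.pmf(k, n, 0.5)

def categorize(k, n, upper=3.0, lower=1 / 3):
    """Conventional thresholds: BF > 3 -> aware, BF < 1/3 -> unaware, else insensitive."""
    bf = awareness_bf(k, n)
    if bf > upper:
        return "B_Aware"
    if bf < lower:
        return "B_Unaware"
    return "B_Insensitive"

print(categorize(38, 40))  # well above chance -> B_Aware
print(categorize(20, 40))  # exactly at chance with enough trials -> B_Unaware
```

The three-way split is the point of the procedure: unlike a non-significant t-test, an intermediate Bayes factor is explicitly flagged as non-evidential (B_Insensitive) rather than being treated as evidence of unawareness.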


Decision ◽  
2016 ◽  
Vol 3 (2) ◽  
pp. 115-131 ◽  
Author(s):  
Helen Steingroever ◽  
Ruud Wetzels ◽  
Eric-Jan Wagenmakers

2020 ◽  
Vol 35 (5) ◽  
pp. 729-743 ◽  
Author(s):  
Christopher D. Erb ◽  
Dayna R. Touron ◽  
Stuart Marcovitch
