The drift diffusion model as the choice rule in reinforcement learning

2016 ◽ Vol 24 (4) ◽ pp. 1234-1251
Author(s): Mads Lund Pedersen, Michael J. Frank, Guido Biele

2019 ◽ Author(s): Jan Peters, Mark D’Esposito

Abstract
Sequential sampling models such as the drift diffusion model have a long tradition in research on perceptual decision-making, but mounting evidence suggests that these models can also account for the response time distributions that arise during reinforcement learning and value-based decision-making. Building on this previous work, we implemented the drift diffusion model as the choice rule in inter-temporal choice (temporal discounting) and risky choice (probability discounting), using a hierarchical Bayesian estimation scheme. We validated our approach in data from nine patients with focal lesions to the ventromedial prefrontal cortex / medial orbitofrontal cortex (vmPFC/mOFC) and nineteen age- and education-matched controls. Choice model parameters estimated via standard softmax action selection were reliably reproduced using the drift diffusion model as the choice rule, for both temporal discounting and risky choice. Model comparison revealed that, for both tasks, the data were best accounted for by a variant of the drift diffusion model that includes a non-linear mapping from value differences to trial-wise drift rates. Posterior predictive checks of the winning models revealed a reasonably good fit to individual participants' reaction time distributions. We then applied this modeling framework and 1) reproduced our previous results regarding temporal discounting in vmPFC/mOFC patients and 2) showed, in a previously unpublished data set on risky choice, that vmPFC/mOFC patients exhibit increased risk-taking relative to controls. Analyses of diffusion model parameters revealed that vmPFC/mOFC damage abolished neither the value sensitivity nor the asymptote of the drift rate; rather, it substantially increased non-decision times and reduced response caution during risky choice. Our results highlight the novel insights that can be gained from applying sequential sampling models in studies of inter-temporal and risky decision-making in cognitive neuroscience.
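One commonly used form for such a non-linear mapping (the exact parameterization below is an illustrative assumption, not quoted from the paper) passes the trial-wise value difference through a sigmoid scaled to a bounded interval:

$$ v_t = v_{\max}\left(\frac{2}{1 + e^{-v_{\mathrm{coeff}} \cdot \mathrm{VD}_t}} - 1\right) $$

Here $\mathrm{VD}_t$ is the value difference on trial $t$, $v_{\mathrm{coeff}}$ is the value sensitivity, and $v_{\max}$ is the asymptote of the drift rate, the two drift-rate components referred to above. For small value differences the mapping is approximately linear, $v_t \approx (v_{\max} v_{\mathrm{coeff}} / 2)\,\mathrm{VD}_t$, whereas it saturates at $\pm v_{\max}$ for large ones.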


2020 ◽ Vol 3 (4) ◽ pp. 458-471
Author(s): Mads L. Pedersen, Michael J. Frank

Abstract
Cognitive models have been instrumental for generating insights into the brain processes underlying learning and decision-making. In reinforcement learning it has recently been shown that not only choice proportions but also their latency distributions can be well captured when the choice function is replaced with a sequential sampling model such as the drift diffusion model. Hierarchical Bayesian parameter estimation further enhances the identifiability of distinct learning and choice parameters. One caveat is that these models can be time-consuming to build, sample from, and validate, especially when models include links between neural activations and model parameters. Here we describe a novel extension to the widely used hierarchical drift diffusion model (HDDM) toolbox, which facilitates flexible construction, estimation, and evaluation of the reinforcement learning drift diffusion model (RLDDM) using hierarchical Bayesian methods. We describe the types of experiments most applicable to the model and provide a tutorial to illustrate how to perform quantitative data analysis and model evaluation. Parameter recovery confirmed that the method can reliably estimate parameters with varying numbers of synthetic subjects and trials. We also show that the simultaneous estimation of learning and choice parameters can improve the sensitivity to detect brain–behavioral relationships, including the impact of learned values and fronto-basal ganglia activity patterns on dynamic decision parameters.
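To make the generative model concrete, here is a minimal simulation sketch of an RLDDM for a two-armed bandit (illustrative only: the parameter values are assumptions, and this deliberately bypasses the HDDM toolbox itself). A delta-rule update of the option values sets the trial-wise drift rate of a diffusion process, whose first boundary crossing yields both the choice and the response time:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameter values (assumptions, not estimates from the paper)
alpha = 0.3            # delta-rule learning rate
scaler = 1.5           # maps the Q-value difference onto the drift rate
a = 2.0                # boundary separation (response caution)
t0 = 0.3               # non-decision time (s)
dt = 1e-3              # Euler step for simulating the diffusion
p_reward = (0.8, 0.2)  # reward probabilities of the two options

q = np.array([0.5, 0.5])  # initial option values
for trial in range(100):
    # RLDDM choice rule: the drift rate is proportional to the value
    # difference, with the upper boundary mapped to option 0
    v = scaler * (q[0] - q[1])

    # Simulate the diffusion between 0 and a, starting unbiased at a/2
    x, t = a / 2.0, 0.0
    while 0.0 < x < a:
        x += v * dt + rng.normal(0.0, np.sqrt(dt))
        t += dt
    choice = 0 if x >= a else 1
    rt = t + t0  # predicted response time for this trial

    # Delta-rule update of the chosen option's value
    reward = float(rng.random() < p_reward[choice])
    q[choice] += alpha * (reward - q[choice])
```

For fitting, the toolbox does not simulate trajectories in this way; it evaluates the drift diffusion likelihood of each observed choice and response time and samples the posterior over learning and decision parameters hierarchically across subjects.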


2015 ◽ Vol 122 (2) ◽ pp. 312-336
Author(s): Brandon M. Turner, Leendert van Maanen, Birte U. Forstmann

2014 ◽ Vol 116 (19) ◽ pp. 194504
Author(s): Matthew P. Lumb, Myles A. Steiner, John F. Geisz, Robert J. Walters
