A model-based fMRI analysis with hierarchical Bayesian parameter estimation.

Decision ◽  
2013 ◽  
Vol 1 (S) ◽  
pp. 8-23 ◽  
Author(s):  
Woo-Young Ahn ◽  
Adam Krawitz ◽  
Woojae Kim ◽  
Jerome R. Busemeyer ◽  
Joshua W. Brown
2011 ◽  
Vol 4 (2) ◽  
pp. 95-110 ◽  
Author(s):  
Woo-Young Ahn ◽  
Adam Krawitz ◽  
Woojae Kim ◽  
Jerome R. Busemeyer ◽  
Joshua W. Brown

2020 ◽  
Author(s):  
Jan Peters ◽  
Stefanie Brassen ◽  
Uli Bromberg ◽  
Christian Büchel ◽  
Laura Sasse ◽  
...  

Abstract
Temporal discounting refers to the tendency of humans and many animals to devalue rewards as a function of time. Steep discounting of value over time is associated with a range of psychiatric disorders, including substance use disorders and behavioral addictions, and is therefore of potentially high clinical relevance. One cognitive factor that has repeatedly been shown to reduce temporal discounting in humans is episodic future thinking, the process of vividly imagining future outcomes, which has been linked to hippocampal mechanisms in a number of studies. However, the analytical approaches used to quantify the behavioral effects have varied between studies, which complicates a direct comparison of the obtained effect sizes. Here we re-analyzed temporal discounting data from previously published functional magnetic resonance imaging (fMRI) and behavioral studies (six data sets from five papers, n=204 participants in total) using an identical model structure and hierarchical Bayesian parameter estimation procedure. Analyses confirmed that engagement in episodic future thinking leads to robust and consistent reductions in temporal discounting, with medium effect sizes on average. In contrast, effects on choice consistency (decision noise) were small and inconsistent in direction. We provide standardized and unstandardized effect size estimates for each data set and discuss clinical implications as well as issues of hierarchical Bayesian parameter estimation.
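Temporal discounting models of this kind typically combine a discount function with a softmax choice rule whose inverse temperature captures choice consistency. A minimal sketch in Python, assuming hyperbolic discounting — the exact parameterization used across the re-analyzed studies may differ:

```python
import math

def discounted_value(amount, delay, k):
    # Hyperbolic discounting: subjective value = amount / (1 + k * delay),
    # where a larger k means steeper devaluation of delayed rewards.
    return amount / (1.0 + k * delay)

def p_choose_delayed(amount_now, amount_later, delay, k, beta):
    # Softmax choice rule: beta is the inverse temperature (choice
    # consistency / decision noise); higher beta -> more deterministic choices.
    v_now = amount_now  # immediate reward, delay = 0
    v_later = discounted_value(amount_later, delay, k)
    return 1.0 / (1.0 + math.exp(-beta * (v_later - v_now)))
```

In a hierarchical Bayesian treatment, each participant's k and beta are drawn from group-level distributions and estimated jointly, which is what makes the effect sizes comparable across data sets.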


2020 ◽  
Vol 3 (4) ◽  
pp. 458-471 ◽  
Author(s):  
Mads L. Pedersen ◽  
Michael J. Frank

Abstract
Cognitive models have been instrumental for generating insights into the brain processes underlying learning and decision making. In reinforcement learning it has recently been shown that not only choice proportions but also their latency distributions can be well captured when the choice function is replaced with a sequential sampling model such as the drift diffusion model. Hierarchical Bayesian parameter estimation further enhances the identifiability of distinct learning and choice parameters. One caveat is that these models can be time-consuming to build, sample from, and validate, especially when models include links between neural activations and model parameters. Here we describe a novel extension to the widely used hierarchical drift diffusion model (HDDM) toolbox, which facilitates flexible construction, estimation, and evaluation of the reinforcement learning drift diffusion model (RLDDM) using hierarchical Bayesian methods. We describe the types of experiments most applicable to the model and provide a tutorial to illustrate how to perform quantitative data analysis and model evaluation. Parameter recovery confirmed that the method can reliably estimate parameters with varying numbers of synthetic subjects and trials. We also show that the simultaneous estimation of learning and choice parameters can improve the sensitivity to detect brain–behavioral relationships, including the impact of learned values and fronto-basal ganglia activity patterns on dynamic decision parameters.
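The RLDDM described above pairs a delta-rule value learner with a drift diffusion choice process whose drift rate scales with the learned value difference. A self-contained simulation sketch of that generative process (illustrative only — not the HDDM toolbox API, and the symmetric-boundary setup here is a simplification):

```python
import random

def update_q(q, reward, alpha):
    # Delta-rule (Rescorla-Wagner) update of the chosen option's value:
    # the prediction error (reward - q) is scaled by learning rate alpha.
    return q + alpha * (reward - q)

def simulate_rlddm_trial(q_upper, q_lower, scaler, boundary,
                         dt=0.001, noise=1.0, rng=random):
    # One RLDDM trial: the drift rate is the scaled Q-value difference,
    # and noisy evidence accumulates until it hits the upper boundary
    # (choose 'upper', returns 1) or the lower boundary (returns 0).
    drift = scaler * (q_upper - q_lower)
    x, t = 0.0, 0.0
    while abs(x) < boundary:
        x += drift * dt + noise * (dt ** 0.5) * rng.gauss(0.0, 1.0)
        t += dt
    return (1 if x >= boundary else 0), t  # (choice, response time)
```

Fitting, rather than simulating, this process is where the hierarchical Bayesian machinery of the toolbox comes in: learning rate, boundary separation, and the drift-rate scaler are estimated jointly across subjects.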


2021 ◽  
Vol 54 (6) ◽  
pp. 244-250
Author(s):  
Viktoria Kleyman ◽  
Manuel Schaller ◽  
Mitsuru Wilson ◽  
Mario Mordmüller ◽  
Ralf Brinkmann ◽  
...  
