Implementation of Bayesian multiple comparison correction in the second-level analysis of fMRI data: With pilot analyses of simulation and real fMRI datasets based on voxelwise inference

2019 ◽  
Vol 11 (3) ◽  
pp. 157-169 ◽  
Author(s):  
Hyemin Han


2020 ◽
Author(s):  
Hyemin Han

Abstract
BayesFactorFMRI is a tool developed with R and Python that allows neuroimaging researchers to conduct Bayesian second-level analysis and Bayesian meta-analysis of fMRI image data with multiprocessing. The tool expedites computationally intensive Bayesian fMRI analysis by distributing it across processors, and its GUI allows researchers who are not experts in computer programming to perform Bayesian fMRI analysis feasibly. BayesFactorFMRI is available for download via Zenodo and GitHub. It can be reused by neuroimaging researchers who intend to analyse their fMRI data with Bayesian methods, which offer better sensitivity than classical analysis, while improving performance by distributing analysis tasks across multiple processors.
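The abstract does not show BayesFactorFMRI's actual interface, so the following is only a minimal sketch of the general pattern it describes: voxelwise Bayes factors distributed across processors. It assumes the BIC approximation to the one-sample Bayes factor (Rouder et al., 2009) rather than the tool's own computation, and the data dimensions are invented.

```python
# Minimal sketch: voxelwise one-sample Bayes factors computed in parallel.
# NOT the BayesFactorFMRI API; uses the BIC approximation to the Bayes
# factor (Rouder et al., 2009) instead of a full JZS integral for brevity.
import numpy as np
from multiprocessing import Pool

def voxel_bf10(data):
    """BIC-approximated BF10 for a one-sample test against zero."""
    n = data.shape[0]
    t = data.mean() / (data.std(ddof=1) / np.sqrt(n))
    # Unit-information prior approximation (Rouder et al., 2009):
    # BF01 ~= sqrt(n) * (1 + t^2 / (n - 1)) ** (-n / 2)
    bf01 = np.sqrt(n) * (1.0 + t ** 2 / (n - 1)) ** (-n / 2.0)
    return 1.0 / bf01                 # BF10 = evidence for an effect

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Hypothetical second-level data: 16 subjects x 10,000 voxels.
    contrasts = rng.normal(0.0, 1.0, size=(16, 10_000))
    with Pool(processes=4) as pool:   # distribute voxels across processors
        bf10 = np.array(pool.map(voxel_bf10, contrasts.T))
    print("voxels with BF10 > 3:", int((bf10 > 3).sum()))
```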


2019 ◽  
Author(s):  
Hyemin Han

Abstract
We developed and tested a Bayesian multiple comparison correction method for Bayesian voxelwise second-level fMRI analysis with R. The performance of the developed method was tested with simulation and real image datasets. First, we compared false alarm and hit rates, used as proxies for selectivity and sensitivity respectively, between Bayesian and classical inference. For this comparison, we created simulated images, added noise to them, and analyzed the noise-added images while applying Bayesian and classical multiple comparison correction methods. Second, we analyzed five real image datasets to examine how our Bayesian method worked in realistic settings. In the performance assessment, the Bayesian correction method demonstrated good sensitivity (hit rate ≥ 75%) and acceptable selectivity (false alarm rate < 10%) when N ≥ 8. Furthermore, the Bayesian correction method showed better sensitivity than the classical correction method while maintaining the aforementioned acceptable selectivity.
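The evaluation logic described above (plant signal in a known region, add noise, threshold, score against ground truth) can be sketched as follows. The thresholding rule here is an illustrative stand-in, not the paper's actual Bayesian or classical correction procedure, and all dimensions and effect sizes are invented.

```python
# Sketch of a hit-rate / false-alarm-rate evaluation against a known
# ground truth. Bonferroni thresholding is a stand-in for the corrections
# actually compared in the study.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
shape, n_subj = (20, 20, 20), 16
truth = np.zeros(shape, dtype=bool)
truth[8:12, 8:12, 8:12] = True                  # "active" region

# Simulated subject images: effect of 2.0 inside the mask, unit noise.
imgs = rng.normal(0.0, 1.0, size=(n_subj, *shape))
imgs[:, truth] += 2.0

t_map, p_map = stats.ttest_1samp(imgs, 0.0, axis=0)
detected = p_map < (0.05 / truth.size)          # Bonferroni as a stand-in

hit_rate = detected[truth].mean()
false_alarm_rate = detected[~truth].mean()
print(f"hit rate: {hit_rate:.2%}, false alarm rate: {false_alarm_rate:.2%}")
```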


2016 ◽  
Author(s):  
Joram Soch ◽  
Achim Pascal Meyer ◽  
John-Dylan Haynes ◽  
Carsten Allefeld

Abstract
In functional magnetic resonance imaging (fMRI), model quality of general linear models (GLMs) for first-level analysis is rarely assessed. In recent work (Soch et al., 2016: "How to avoid mismodelling in GLM-based fMRI data analysis: cross-validated Bayesian model selection", NeuroImage, vol. 141, pp. 469-489; DOI: 10.1016/j.neuroimage.2016.07.047), we introduced cross-validated Bayesian model selection (cvBMS) to infer the best model for a group of subjects and use it to guide second-level analysis. While this is the optimal approach given that the same GLM has to be used for all subjects, there is a much more efficient procedure when model selection only addresses nuisance variables and regressors of interest are included in all candidate models. In this work, we propose cross-validated Bayesian model averaging (cvBMA) to improve parameter estimates for these regressors of interest by combining information from all models using their posterior probabilities. This is particularly useful because different models can lead to different conclusions regarding experimental effects, and the most complex model is not necessarily the best choice. We find that cvBMS can prevent failing to detect established effects and that cvBMA can be more sensitive to experimental effects than using even the best model in each subject or the model that is best in a group of subjects.
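The core averaging step of Bayesian model averaging can be sketched briefly: each model's estimate of a shared regressor is weighted by the model's posterior probability, computed from its (e.g., cross-validated) log model evidence. The sketch below assumes equal prior model probabilities and uses made-up evidence values; the cross-validation machinery of cvBMA itself is omitted.

```python
# Minimal sketch of the Bayesian model averaging step, assuming uniform
# model priors. The betas and log evidences are invented for illustration.
import numpy as np

def bma_estimate(betas, log_evidences):
    """Posterior-probability-weighted average of parameter estimates.

    betas         : (n_models,) estimates of the regressor of interest
    log_evidences : (n_models,) log model evidences (e.g., cvLME)
    """
    lme = np.asarray(log_evidences, dtype=float)
    # Softmax with max-subtraction for numerical stability.
    w = np.exp(lme - lme.max())
    w /= w.sum()                       # posterior model probabilities
    return float(np.dot(w, betas)), w

betas = np.array([0.42, 0.55, 0.31])             # same regressor, 3 GLMs
log_evidences = np.array([-1203.4, -1201.1, -1208.9])
beta_bma, weights = bma_estimate(betas, log_evidences)
print("posterior weights:", np.round(weights, 3), "BMA estimate:", beta_bma)
```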


NeuroImage ◽  
2019 ◽  
Vol 194 ◽  
pp. 25-41 ◽  
Author(s):  
Xiaowei Zhuang ◽  
Zhengshi Yang ◽  
Karthik R. Sreenivasan ◽  
Virendra R. Mishra ◽  
Tim Curran ◽  
...  

2015 ◽  
Vol 53 (10) ◽  
pp. 1011-1023 ◽  
Author(s):  
Joan Francesc Alonso ◽  
Sergio Romero ◽  
Miguel Ángel Mañanas ◽  
Mónica Rojas ◽  
Jordi Riba ◽  
...  

2017 ◽  
Author(s):  
Xiao Chen ◽  
Bin Lu ◽  
Chao-Gan Yan

Abstract
Concerns regarding the reproducibility of resting-state functional magnetic resonance imaging (R-fMRI) findings have been raised. Little is known about how to operationally define R-fMRI reproducibility and to what extent it is affected by multiple comparison correction strategies and sample size. We comprehensively assessed two aspects of reproducibility, test-retest reliability and replicability, on widely used R-fMRI metrics in both between-subject contrasts of sex differences and within-subject comparisons of eyes-open and eyes-closed (EOEC) conditions. We noted that a permutation test with Threshold-Free Cluster Enhancement (TFCE), a strict multiple comparison correction strategy, reached the best balance between the family-wise error rate (under 5%) and test-retest reliability / replicability (e.g., 0.68 for test-retest reliability and 0.25 for replicability of the amplitude of low-frequency fluctuations (ALFF) for between-subject sex differences, and 0.49 for replicability of ALFF for within-subject EOEC differences). Although R-fMRI indices attained moderate reliabilities, they replicated poorly in distinct datasets (replicability < 0.3 for between-subject sex differences, < 0.5 for within-subject EOEC differences). By randomly drawing different sample sizes from a single site, we found that reliability, sensitivity, and positive predictive value (PPV) rose as sample size increased. Small sample sizes (e.g., < 80, i.e., 40 per group) not only minimized power (sensitivity < 2%) but also decreased the likelihood that significant results reflect "true" effects (PPV < 0.26) in sex differences. Our findings have implications for the selection of multiple comparison correction strategies and highlight the importance of sufficiently large sample sizes in R-fMRI studies to enhance reproducibility.
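The permutation-based family-wise error control that underlies approaches like the TFCE permutation test can be sketched as follows. The TFCE transform itself is more involved and is omitted here; a plain max-|t| statistic stands in for it, and the group sizes and effect are invented.

```python
# Sketch of permutation-based FWE control via the max-statistic method.
# A max-|t| statistic stands in for the TFCE-enhanced statistic.
import numpy as np
from scipy import stats

def perm_maxstat_fwe(group_a, group_b, n_perm=1000, seed=0):
    """Voxelwise FWE-corrected p-values from a label-permutation null."""
    rng = np.random.default_rng(seed)
    data = np.concatenate([group_a, group_b])    # (n_a + n_b, n_voxels)
    n_a = group_a.shape[0]
    t_obs = stats.ttest_ind(data[:n_a], data[n_a:], axis=0).statistic
    max_null = np.empty(n_perm)
    for i in range(n_perm):
        perm = rng.permutation(data.shape[0])    # shuffle group labels
        t = stats.ttest_ind(data[perm[:n_a]], data[perm[n_a:]],
                            axis=0).statistic
        max_null[i] = np.abs(t).max()            # null distribution of max |t|
    # FWE p-value: fraction of permutations whose max |t| beats each voxel.
    return (max_null[None, :] >= np.abs(t_obs)[:, None]).mean(axis=1)

rng = np.random.default_rng(1)
a = rng.normal(0.0, 1.0, size=(30, 500))         # e.g., 30 subjects, 500 voxels
b = rng.normal(0.1, 1.0, size=(30, 500))         # second group, small effect
p_fwe = perm_maxstat_fwe(a, b)
print("voxels significant at FWE p < .05:", int((p_fwe < 0.05).sum()))
```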


2018 ◽  
Author(s):  
Xiaoying Pu ◽  
Matthew Kay

Tukey emphasized decades ago that taking exploratory findings as confirmatory is “destructively foolish”. We reframe recent conversations about the reliability of results from exploratory visual analytics—such as the multiple comparisons problem—in terms of Gelman and Loken’s garden of forking paths to lay out a design space for addressing the forking paths problem in visual analytics. This design space encompasses existing approaches to address the forking paths problem (multiple comparison correction) as well as solutions that have not been applied to exploratory visual analytics (regularization). We also discuss how perceptual bias correction techniques may be used to correct biases induced in analysts’ understanding of their data due to the forking paths problem, and outline how this problem can be cast as a threat to validity within Munzner’s Nested Model of visualization design. Finally, we suggest paper review guidelines to encourage reviewers to consider the forking paths problem when evaluating future designs of visual analytics tools.
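One concrete instance of the "multiple comparison correction" region of this design space is the Benjamini-Hochberg step-up procedure, sketched below for the many comparisons an exploratory analysis implicitly makes. This is a standard procedure, not one taken from the paper, and the p-values are invented.

```python
# Benjamini-Hochberg step-up procedure for false discovery rate control.
import numpy as np

def benjamini_hochberg(p_values, alpha=0.05):
    """Return a boolean mask of discoveries under BH-FDR at level alpha."""
    p = np.asarray(p_values, dtype=float)
    m = p.size
    order = np.argsort(p)
    thresholds = alpha * (np.arange(1, m + 1) / m)   # (i / m) * alpha
    below = p[order] <= thresholds
    discoveries = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])             # largest passing rank
        discoveries[order[: k + 1]] = True           # reject all up to rank k
    return discoveries

p = np.array([0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.074, 0.205])
print(benjamini_hochberg(p))   # True for p-values surviving BH at .05
```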


2018 ◽  
Author(s):  
Xi-Ze Jia ◽  
Na Zhao ◽  
Barek Barton ◽  
Roxana Burciu ◽  
Nicolas Carrière ◽  
...  

Abstract
Thousands of papers using resting-state functional magnetic resonance imaging (RS-fMRI) have been published on brain disorders. The results in each paper may have survived multiple comparison correction. However, since there have been no robust results from large-scale meta-analysis, we do not know how many published results are true positives. The present meta-analytic work included 60 original studies: 57 studies (4 datasets, 2266 participants) that used a between-group design and 3 studies (1 dataset, 107 participants) that employed a within-group design. To evaluate the effect size of brain disorders, a very large neuroimaging dataset spanning neurological and psychiatric disorders, together with healthy individuals, was analyzed. Parkinson's disease off levodopa (PD-off) included 687 participants from 15 studies. PD on levodopa (PD-on) included 261 participants from 9 studies. Autism spectrum disorder (ASD) included 958 participants from 27 studies. Meta-analyses of a metric named the amplitude of low frequency fluctuation (ALFF) showed that the effect size (Hedges' g) was 0.19 - 0.39 for the 4 datasets using a between-group design and 0.46 for the dataset using a within-group design. The effect sizes of PD-off, PD-on, and ASD were 0.23, 0.39, and 0.19, respectively. Taking the meta-analysis results as the robust results, the between-group design results of each study showed high false negative rates (median 99%), high false discovery rates (median 86%), and low accuracy (median 1%), regardless of whether stringent or liberal multiple comparison correction was used. The findings were similar for 4 RS-fMRI metrics including ALFF, regional homogeneity, and degree centrality, as well as for another widely used RS-fMRI metric, seed-based functional connectivity. These observations suggest that multiple comparison correction does not control for false discoveries across multiple studies when effect sizes are relatively small. Meta-analysis on un-thresholded t-maps is critical for the recovery of ground truth. We recommend that, to achieve high reproducibility through meta-analysis, the neuroimaging research field share raw data or, at minimum, provide un-thresholded statistical images.
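The effect-size arithmetic referenced above can be sketched as follows: Hedges' g for a between-group design with the small-sample correction, combined across studies by inverse-variance (fixed-effect) weighting. These are textbook formulas (e.g., Borenstein et al.), not the paper's own pipeline, and the study numbers are invented.

```python
# Hedges' g for a two-group design, pooled across studies with
# inverse-variance (fixed-effect) weights. Study values are invented.
import numpy as np

def hedges_g(m1, m2, sd1, sd2, n1, n2):
    """Small-sample-corrected standardized mean difference and its variance."""
    df = n1 + n2 - 2
    sp = np.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / df)  # pooled SD
    d = (m1 - m2) / sp                        # Cohen's d
    J = 1.0 - 3.0 / (4.0 * df - 1.0)          # Hedges' correction factor
    var_d = (n1 + n2) / (n1 * n2) + d**2 / (2.0 * (n1 + n2))
    return J * d, J**2 * var_d

# Three hypothetical studies: (mean_pat, mean_ctl, sd_pat, sd_ctl, n_pat, n_ctl)
studies = [(1.2, 1.0, 0.9, 0.8, 40, 38),
           (1.1, 1.0, 1.0, 1.1, 25, 27),
           (1.3, 1.0, 1.2, 1.0, 60, 55)]
g, v = np.array([hedges_g(*s) for s in studies]).T
w = 1.0 / v                                   # inverse-variance weights
g_meta = (w * g).sum() / w.sum()
se_meta = np.sqrt(1.0 / w.sum())
print(f"pooled Hedges' g = {g_meta:.3f} (SE {se_meta:.3f})")
```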


2019 ◽  
Vol 9 (8) ◽  
pp. 198 ◽  
Author(s):  
Hyemin Han ◽  
Andrea L. Glenn ◽  
Kelsie J. Dawson

A significant challenge for fMRI research is statistically controlling for false positives without omitting true effects. Although a number of traditional methods for multiple comparison correction exist, several alternative tools have been developed that do not rely on strict parametric assumptions but instead implement alternative methods to correct for multiple comparisons. In this study, we evaluated three of these methods, Statistical non-Parametric Mapping (SnPM), 3dClustSim, and Threshold-Free Cluster Enhancement (TFCE), by examining which method produced the most consistent outcomes even when spatially autocorrelated noise was added to the original images. We assessed the false alarm rate and hit rate of each method after noise was applied to the original images.
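The perturbation test described above can be sketched by generating spatially autocorrelated noise (Gaussian-smoothed white noise), adding it to a known activation image, and scoring any thresholding method by its hit and false alarm rates. The simple cutoff below is an illustrative stand-in for SnPM, 3dClustSim, or TFCE, and all dimensions are invented.

```python
# Sketch: spatially autocorrelated noise added to a ground-truth image,
# scored by hit and false alarm rates. The fixed cutoff stands in for the
# correction methods actually compared in the study.
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(7)
shape = (32, 32, 32)
truth = np.zeros(shape, dtype=bool)
truth[10:16, 10:16, 10:16] = True          # known activation region

signal = truth.astype(float) * 2.0
noise = gaussian_filter(rng.normal(size=shape), sigma=2.0)  # smooth noise
noise /= noise.std()                       # rescale to unit variance
noisy = signal + noise

detected = noisy > 1.5                     # illustrative threshold
print(f"hit rate: {detected[truth].mean():.2%}, "
      f"false alarm rate: {detected[~truth].mean():.2%}")
```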

