Improving effect size estimation and statistical power with multi-echo fMRI and its impact on understanding the neural systems supporting mentalizing

NeuroImage, 2016, Vol 142, pp. 55-66
Author(s): Michael V. Lombardo, Bonnie Auyeung, Rosemary J. Holt, Jack Waldman, Amber N.V. Ruigrok, ...
2020
Author(s): Giulia Bertoldo, Claudio Zandonella Callegher, Gianmarco Altoè

It is widely appreciated that many studies in psychological science suffer from low statistical power. One consequence of analyzing underpowered studies with thresholds of statistical significance is a high risk of finding exaggerated effect size estimates, in the right or in the wrong direction. These inferential risks can be quantified in terms of Type M (magnitude) error and Type S (sign) error, which directly communicate the consequences of design choices for effect size estimation. Given a study design, Type M error is the factor by which a statistically significant effect is, on average, exaggerated. Type S error is the probability of finding a statistically significant result in the direction opposite to the plausible one. Ideally, these errors should be considered during a prospective design analysis in the design phase of a study to determine the appropriate sample size. However, they can also be considered when evaluating studies' results in a retrospective design analysis. In the present contribution we aim to facilitate consideration of these errors in research practice in psychology. To this end, we illustrate how to consider Type M and Type S errors in a design analysis using one of the most common effect size measures in psychology: the Pearson correlation coefficient. We provide various examples and make the R functions freely available so that researchers can perform design analysis for their own research projects.
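The Type M and Type S errors described in this abstract can be estimated by Monte Carlo simulation. The sketch below is an illustrative Python version, not the authors' published R functions: the function name, the true correlation `rho = 0.1`, and the sample size `n = 30` are all assumed values chosen to show how a small true effect yields a large exaggeration factor once results are filtered by significance.

```python
# Monte Carlo sketch of Type M (exaggeration) and Type S (sign) error
# for a Pearson correlation, under an assumed plausible true effect rho.
import numpy as np
from scipy import stats

def design_analysis(rho=0.1, n=30, alpha=0.05, n_sims=5000, seed=0):
    rng = np.random.default_rng(seed)
    cov = [[1.0, rho], [rho, 1.0]]
    sig_r = []
    for _ in range(n_sims):
        # Draw a bivariate normal sample with true correlation rho
        x, y = rng.multivariate_normal([0.0, 0.0], cov, size=n).T
        r, p = stats.pearsonr(x, y)
        if p < alpha:               # keep only "significant" estimates
            sig_r.append(r)
    sig_r = np.array(sig_r)
    power = len(sig_r) / n_sims
    # Type M: average factor by which significant estimates exaggerate rho
    type_m = np.mean(np.abs(sig_r)) / abs(rho)
    # Type S: fraction of significant estimates with the wrong sign
    type_s = np.mean(np.sign(sig_r) != np.sign(rho))
    return power, type_m, type_s

power, type_m, type_s = design_analysis()
print(f"power={power:.2f}, Type M={type_m:.1f}, Type S={type_s:.3f}")
```

With these assumed inputs the study is badly underpowered, so the significance filter admits only large sample correlations and the Type M error is a multiple of the true effect, which is exactly the inferential risk the abstract describes.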


2015
Author(s): Michael V. Lombardo, Bonnie Auyeung, Rosemary J. Holt, Jack Waldman, Amber N.V. Ruigrok, ...

Abstract
Functional magnetic resonance imaging (fMRI) research is routinely criticized for being statistically underpowered due to characteristically small sample sizes, and much larger sample sizes are increasingly being recommended. Additionally, various sources of artifact inherent in fMRI data can have a detrimental impact on effect size estimates and statistical power. Here we show how targeted removal of non-BOLD artifacts can improve effect size estimation and statistical power in task-fMRI contexts, with particular application to the social-cognitive domain of mentalizing/theory of mind. Non-BOLD variability is identified and removed in a biophysically and statistically principled manner by combining multi-echo fMRI acquisition with independent components analysis (ME-ICA). Group-level effect size estimates on two different mentalizing tasks were enhanced by ME-ICA at a median rate of 24% in regions canonically associated with mentalizing, while much more substantial boosts (40-149%) were observed in non-canonical cerebellar areas. This effect size boosting is primarily a consequence of the reduction of non-BOLD noise at the subject level, which translates into reduced between-subject variance at the group level. Power simulations demonstrate that the enhanced effect sizes enable highly powered studies at traditional sample sizes. The cerebellar effects observed after applying ME-ICA may be unobservable with conventional imaging at traditional sample sizes. Thus, ME-ICA allows for principled, design-agnostic non-BOLD artifact removal that can substantially improve effect size estimates and statistical power in task-fMRI contexts. ME-ICA could help address issues of statistical power and non-BOLD noise and enable the discovery of aspects of brain organization that are currently under-appreciated and not well understood.
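The power simulations the abstract refers to can be sketched as follows. This is an illustrative Python example, not the paper's analysis: the baseline Cohen's d of 0.5 and the sample size of 20 are assumed values, with only the 24% median effect size boost taken from the abstract. It simulates group-level one-sample t-tests on subject-level contrast values to show how that boost changes power at a traditional sample size.

```python
# Sketch of a group-level power simulation: how a ~24% boost in
# effect size (Cohen's d) changes power at a traditional fMRI n.
import numpy as np
from scipy import stats

def simulated_power(d, n, alpha=0.05, n_sims=5000, seed=0):
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(n_sims):
        # Subject-level contrast estimates, standardized so mean = d
        sample = rng.normal(loc=d, scale=1.0, size=n)
        t, p = stats.ttest_1samp(sample, 0.0)
        hits += (p < alpha) and (t > 0)  # significant in expected direction
    return hits / n_sims

n = 20                              # assumed "traditional" sample size
d_conventional = 0.5                # assumed baseline effect size
d_me_ica = d_conventional * 1.24    # 24% median boost reported for ME-ICA

p_base = simulated_power(d_conventional, n)
p_boost = simulated_power(d_me_ica, n)
print(f"power (conventional) = {p_base:.2f}, power (ME-ICA) = {p_boost:.2f}")
```

Because the per-simulation noise is held fixed across the two conditions, the comparison isolates the effect of the boosted d, mirroring the abstract's point that denoising at the subject level translates into higher power at the group level.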


1986, Vol 58 (3), pp. 739-742
Author(s): F. Matthew Kramer, Robert W. Jeffery, Mary Kaye Snell

Loss of subjects during follow-up is a frequent occurrence in outcome research on habit disorders. This attrition may have undesirable effects on statistical power, effect size estimation, and causal inference. The present study investigated the effects of offering subjects a monetary incentive of $0.00, $5.00, or $15.00 for attending a scheduled follow-up meeting, as a cost-effective alternative to standard follow-up procedures. The results indicate that these modest incentives did not significantly enhance attendance at the follow-up visit. Suggestions for future applications of monetary incentives in follow-up data collection are provided.

