Contrasts and Correlations in Effect-Size Estimation

2000 ◽  
Vol 11 (6) ◽  
pp. 446-453 ◽  
Author(s):  
Ralph L. Rosnow ◽  
Robert Rosenthal ◽  
Donald B. Rubin

This article describes procedures for presenting standardized measures of effect size when contrasts are used to ask focused questions of data. The simplest contrasts consist of comparisons of two samples (e.g., based on the independent t statistic). Useful effect-size indices in this situation are members of the g family (e.g., Hedges's g and Cohen's d) and the Pearson r. We review expressions for calculating these measures and for transforming them back and forth, and describe how to adjust formulas for obtaining g or d from t, or r from g, when the sample sizes are unequal. The real-life implications of d or g calculated from t become problematic when there are more than two groups, but the correlational approach is adaptable and interpretable, although more complex than in the case of two groups. We describe a family of four conceptually related correlation indices: the alerting correlation, the contrast correlation, the effect-size correlation, and the BESD (binomial effect-size display) correlation. These last three correlations are identical in the simple setting of only two groups, but differ when there are more than two groups.
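
The conversion expressions the abstract reviews are standard identities; a minimal Python sketch of them follows. The function names and the worked numbers are illustrative additions, not from the article, and the g/d conventions follow the article's usage (g divides the mean difference by the pooled S computed with df = n1 + n2 - 2; d divides by the pooled sigma computed with N):

import math

def g_from_t(t, n1, n2):
    # Hedges's g from an independent-samples t; with unequal n,
    # g = t * sqrt(1/n1 + 1/n2) replaces the equal-n shortcut.
    return t * math.sqrt(1.0 / n1 + 1.0 / n2)

def d_from_g(g, n1, n2):
    # Cohen's d rescales g because sigma pools with N, not N - 2;
    # with equal n this recovers the familiar d = 2t / sqrt(df).
    N = n1 + n2
    return g * math.sqrt(N / (N - 2))

def r_from_t(t, df):
    # Point-biserial r from t: r = t / sqrt(t^2 + df).
    return t / math.sqrt(t * t + df)

def r_from_g(g, n1, n2):
    # r from g with the unequal-n adjustment; reduces to
    # g / sqrt(g^2 + 4*df/N) when n1 == n2.
    N, df = n1 + n2, n1 + n2 - 2
    return g / math.sqrt(g * g + df * N / (n1 * n2))

# Worked example (illustrative numbers): t = 2.50 with n1 = 20, n2 = 30.
g = g_from_t(2.50, 20, 30)   # ~0.72
r = r_from_g(g, 20, 30)      # ~0.34, matching r_from_t(2.50, 48)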

2015 ◽  
Author(s):  
Michael V. Lombardo ◽  
Bonnie Auyeung ◽  
Rosemary J. Holt ◽  
Jack Waldman ◽  
Amber N. V. Ruigrok ◽  
...  

Functional magnetic resonance imaging (fMRI) research is routinely criticized for being statistically underpowered due to characteristically small sample sizes, and much larger sample sizes are increasingly being recommended. Additionally, various sources of artifact inherent in fMRI data can have a detrimental impact on effect size estimates and statistical power. Here we show how targeted removal of non-BOLD artifacts can improve effect size estimation and statistical power in task-fMRI contexts, with particular application to the social-cognitive domain of mentalizing/theory of mind. Non-BOLD variability is identified and removed in a biophysically and statistically principled manner by combining multi-echo fMRI acquisition with independent components analysis (ME-ICA). Group-level effect size estimates on two different mentalizing tasks were enhanced by ME-ICA at a median rate of 24% in regions canonically associated with mentalizing, while much more substantial boosts (40-149%) were observed in non-canonical cerebellar areas. This boost is primarily a consequence of reduced non-BOLD noise at the subject level, which in turn reduces between-subject variance at the group level. Power simulations demonstrate that the enhanced effect sizes enable highly powered studies at traditional sample sizes; the cerebellar effects observed after applying ME-ICA may be unobservable with conventional imaging at those sample sizes. Thus, ME-ICA allows principled, design-agnostic removal of non-BOLD artifacts that can substantially improve effect size estimates and statistical power in task-fMRI contexts. ME-ICA could help address issues of statistical power and non-BOLD noise, and open the potential for discovery of aspects of brain organization that are currently under-appreciated and poorly understood.
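
The power simulations reported above are the authors' own. As a generic illustration of the mechanism rather than their code, a Monte Carlo sketch of how a boost in a standardized group-level effect raises power at a fixed n might look like this; the effect sizes, sample size, and simulation settings are all illustrative assumptions:

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def simulated_power(effect_size, n, alpha=0.05, n_sims=10_000):
    # Monte Carlo power of a one-sample t-test (a common group-level
    # fMRI contrast test) at a given standardized effect size.
    data = rng.normal(loc=effect_size, scale=1.0, size=(n_sims, n))
    _, p = stats.ttest_1samp(data, popmean=0.0, axis=1)
    return float(np.mean(p < alpha))

# Hypothetical baseline of d = 0.50, boosted by the abstract's
# median 24% to d = 0.62, at a traditional sample size of n = 20.
for d in (0.50, 0.62):
    print(f"d = {d:.2f}, n = 20: power ~ {simulated_power(d, 20):.2f}")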


2021 ◽  
Vol 7 (1) ◽  
Author(s):  
Elizabeth Collins ◽  
Roger Watt

Statistical power is key to planning studies, if understood and used correctly. Power is the probability of obtaining a statistically significant p-value, given a set alpha, sample size, and population effect size. The literature suggests that psychology studies are underpowered due to small sample sizes, and that researchers do not hold accurate intuitions about sensible sample sizes and associated levels of power. In this study, we surveyed 214 psychological researchers and asked them about their experiences of using a priori power analysis, effect size estimation methods, post hoc power, and their understanding of what the term "power" actually means. Power analysis use was high, although participants reported difficulties with complex research designs and with effect size estimation. Participants also typically could not accurately define power. If psychological researchers are expected to compute a priori power analyses to plan their research, clearer educational material and guidelines should be made available.
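
For the mechanics the survey asked about, a minimal sketch of an a priori power analysis follows, here via statsmodels' TTestIndPower; the assumed effect size, alpha, and power target are illustrative, not values from the study:

import math
from statsmodels.stats.power import TTestIndPower

# Solve for the per-group n needed to detect an assumed effect of
# d = 0.5 with 80% power at alpha = .05 in a two-sample t-test.
n_per_group = TTestIndPower().solve_power(effect_size=0.5, alpha=0.05,
                                          power=0.80, ratio=1.0,
                                          alternative='two-sided')
print(f"n per group: {math.ceil(n_per_group)}")  # 64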

