Promoting Knowledge Accumulation About Intervention Effects: Exploring Strategies for Standardizing Statistical Approaches and Effect Size Reporting

2021
pp. 0013189X2110513
Author(s): Joseph A. Taylor, Terri Pigott, Ryan Williams

Toward the goal of more rapid knowledge accumulation via better meta-analyses, this article explores statistical approaches intended to increase the precision and comparability of effect sizes from education research. The featured estimate is a standardized mean difference whose numerator is a mean difference adjusted, at a minimum, for baseline differences in the outcome measure, and whose denominator is based on the total variance. The article describes the utility and efficiency of covariate adjustment using baseline measures and the need to standardize effects on a total variance that accounts for variation at multiple levels. Because computing the total variance can be complex in multilevel studies, a Shiny application is provided to assist with computing the total variance and the resulting effect size. Worked examples show how to interpret and enter the required calculator inputs.
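As a concrete illustration of the estimand described above, the following minimal Python sketch standardizes a baseline-adjusted mean difference on the square root of the total variance of a two-level design. The helper name `multilevel_smd` and all input values are hypothetical; this is not the authors' Shiny calculator, whose inputs and internals are not reproduced here.

```python
import math

def multilevel_smd(adj_mean_diff, sigma2_between, sigma2_within):
    """Standardize a covariate-adjusted mean difference on the total
    variance of a two-level design (between-cluster + within-cluster)."""
    total_variance = sigma2_between + sigma2_within
    return adj_mean_diff / math.sqrt(total_variance)

# Made-up values: an adjusted difference of 4 points, between-cluster
# variance 20, within-cluster variance 80 (total SD = 10).
d = multilevel_smd(adj_mean_diff=4.0, sigma2_between=20.0, sigma2_within=80.0)
print(f"d = {d:.3f}")  # d = 0.400
```

Standardizing on the total variance, rather than on the within-cluster variance alone, keeps effect sizes comparable across single-level and multilevel studies.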

2021
Vol 3 (3)
Author(s): Pim Cuijpers

Background: Most meta-analyses use the standardized mean difference (effect size) to summarize the outcomes of studies. However, the effect size has important limitations that need to be considered. Method: After a brief explanation of the standardized mean difference, its limitations are discussed and possible solutions in the context of meta-analyses are suggested. Results: When using the effect size, three major limitations have to be considered. First, the effect size is a statistical concept: small effect sizes may have considerable clinical meaning, while large effect sizes may not. Second, specific assumptions underlying the effect size may not hold. Third, and most importantly, it is very difficult to explain the meaning of an effect size to non-researchers. As possible solutions, the use of the 'binomial effect size display' and the number-needed-to-treat are discussed. Furthermore, I suggest the use of binary outcomes, which are often easier to understand; however, it is not clear what the best binary outcome is for continuous measures. Conclusion: The effect size remains useful, as long as its limitations are understood and binary outcomes are reported alongside it.
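To make the two proposed translations tangible, here is a minimal Python sketch converting a standardized mean difference d into a binomial effect size display and an approximate number-needed-to-treat. It assumes equal group sizes and uses the standard Rosenthal-Rubin and Kraemer-Kupfer formulas; the function names are illustrative, not from the article.

```python
import math
from statistics import NormalDist

def besd(d):
    """Binomial effect size display: the 'success' rates implied in the
    treatment and control groups, assuming equal group sizes."""
    r = d / math.sqrt(d ** 2 + 4)   # convert d to a point-biserial r
    return 0.5 + r / 2, 0.5 - r / 2

def nnt_kraemer_kupfer(d):
    """Approximate NNT = 1 / (2 * Phi(d / sqrt(2)) - 1)."""
    auc = NormalDist().cdf(d / math.sqrt(2))
    return 1.0 / (2.0 * auc - 1.0)

# A 'medium' effect of d = 0.5:
treat, ctrl = besd(0.5)
print(f"BESD: {treat:.2f} vs {ctrl:.2f}")     # ~0.62 vs 0.38
print(f"NNT: {nnt_kraemer_kupfer(0.5):.1f}")  # ~3.6
```

Read this way, d = 0.5 means roughly 62% versus 38% implied success rates, and about four patients must be treated for one additional favorable outcome, which is often easier to communicate than the raw effect size.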


2013
Vol 4 (4)
pp. 324-341
Author(s): Larry V. Hedges, James E. Pustejovsky, William R. Shadish

2012
Vol 3 (3)
pp. 224-239
Author(s): Larry V. Hedges, James E. Pustejovsky, William R. Shadish

2019
Vol 227 (4)
pp. 261-279
Author(s): Frank Renkewitz, Melanie Keiner

Publication biases and questionable research practices are assumed to be two of the main causes of low replication rates, and both lead to severely inflated effect size estimates in meta-analyses. Methodologists have proposed a number of statistical tools to detect such bias in meta-analytic results. We present an evaluation of the performance of six of these tools. To assess the Type I error rate and statistical power of these methods, we simulated a large variety of literatures that differed in true effect size, heterogeneity, number of available primary studies, and the sample sizes of those studies; furthermore, the simulated studies were subjected to different degrees of publication bias. Our results show that no method consistently outperformed the others across all simulated conditions. Additionally, all methods performed poorly when true effect sizes were heterogeneous or when primary studies had a small chance of being published irrespective of their results. This suggests that in many actual meta-analyses in psychology, bias will remain undiscovered no matter which detection method is used.
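The following toy simulation, in Python, illustrates the general design the abstract describes: generate a heterogeneous literature, censor nonsignificant studies to induce publication bias, and apply a detection method. Egger's regression test stands in here for the six evaluated tools, which the abstract does not name, and all parameter values are arbitrary choices for the demo.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def simulate_literature(k=40, mu=0.2, tau=0.2, n_range=(20, 100),
                        p_publish_nonsig=0.3):
    """Simulate k published two-group studies with mean true effect mu and
    heterogeneity tau; nonsignificant results are published with low
    probability, inducing publication bias."""
    effects, ses = [], []
    while len(effects) < k:
        n = rng.integers(*n_range)        # per-group sample size
        theta = rng.normal(mu, tau)       # study-level true effect
        se = float(np.sqrt(2.0 / n))      # approximate SE of d
        d = rng.normal(theta, se)         # observed effect
        if abs(d / se) > 1.96 or rng.random() < p_publish_nonsig:
            effects.append(d)
            ses.append(se)
    return np.array(effects), np.array(ses)

def egger_test(effects, ses):
    """Egger's regression test: regress standardized effects (d/se) on
    precision (1/se); an intercept far from zero indicates funnel-plot
    asymmetry consistent with publication bias."""
    res = stats.linregress(1 / ses, effects / ses)
    t = res.intercept / res.intercept_stderr
    p = 2 * stats.t.sf(abs(t), df=len(effects) - 2)
    return res.intercept, p

effects, ses = simulate_literature()
intercept, p = egger_test(effects, ses)
print(f"Egger intercept = {intercept:.2f}, p = {p:.3f}")
```

Rerunning the simulation with larger tau (more heterogeneity) or larger p_publish_nonsig (weaker censoring) shows how quickly such tests lose power, which is the pattern the abstract reports.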

