Effect size estimation for one-sample multiple-choice-type data: Design, analysis, and meta-analysis.

1989 ◽  
Vol 106 (2) ◽  
pp. 332-337 ◽  
Author(s):  
Robert Rosenthal ◽  
Donald B. Rubin

2020 ◽
Author(s):  
Giulia Bertoldo ◽  
Claudio Zandonella Callegher ◽  
Gianmarco Altoè

It is widely appreciated that many studies in psychological science suffer from low statistical power. One of the consequences of analyzing underpowered studies with thresholds of statistical significance is a high risk of finding exaggerated effect size estimates, in the right or in the wrong direction. These inferential risks can be quantified directly in terms of Type M (magnitude) error and Type S (sign) error, which communicate the consequences of design choices for effect size estimation. Given a study design, Type M error is the factor by which a statistically significant effect is on average exaggerated. Type S error is the probability of finding a statistically significant result in the direction opposite to the plausible one. Ideally, these errors should be considered during a prospective design analysis in the design phase of a study to determine the appropriate sample size. However, they can also be considered when evaluating studies’ results in a retrospective design analysis. In the present contribution we aim to facilitate consideration of these errors in psychological research practice. To this end, we illustrate how to consider Type M and Type S errors in a design analysis using one of the most common effect size measures in psychology: the Pearson correlation coefficient. We provide various examples and make the R functions freely available to enable researchers to perform design analyses for their own research projects.
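
As an illustration, the following is a minimal Monte Carlo sketch in R of such a design analysis for a Pearson correlation. It is not the authors' released functions; the function name design_analysis_r, its arguments, and its defaults are assumptions made for this example. Given a plausible true correlation rho, a sample size n, and a significance threshold alpha, it estimates power, Type S error, and Type M error by simulation:

# Minimal Monte Carlo sketch of a prospective design analysis for a Pearson
# correlation. Not the authors' released R functions; design_analysis_r and
# its defaults are illustrative assumptions.
design_analysis_r <- function(rho = 0.25, n = 30, alpha = 0.05, n_sim = 1e4) {
  sig_r <- numeric(0)
  for (i in seq_len(n_sim)) {
    # simulate one study from a bivariate normal with true correlation rho
    x <- rnorm(n)
    y <- rho * x + sqrt(1 - rho^2) * rnorm(n)
    test <- cor.test(x, y)
    if (test$p.value < alpha) sig_r <- c(sig_r, unname(test$estimate))
  }
  list(
    power  = length(sig_r) / n_sim,           # proportion of significant results
    type_s = mean(sign(sig_r) != sign(rho)),  # significant results with the wrong sign
    type_m = mean(abs(sig_r)) / abs(rho)      # average exaggeration ratio
  )
}

set.seed(2020)
design_analysis_r(rho = 0.25, n = 30)

With a small sample and a modest plausible correlation, the statistically significant estimates are on average exaggerated relative to rho; that exaggeration ratio is exactly the information a prospective design analysis uses to choose an adequate sample size.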


2017 ◽  
Author(s):  
Robbie Cornelis Maria van Aert ◽  
Marcel A. L. M. van Assen

The unrealistically high rate of positive results within psychology has increased attention to replication research. Researchers who conduct a replication and want to statistically combine its results with those of a statistically significant original study encounter problems when using traditional meta-analysis techniques. The original study’s effect size is most likely overestimated because it is statistically significant, and this bias is not taken into account in traditional meta-analysis. We developed a hybrid method that does take the statistical significance of the original study into account and enables (a) accurate effect size estimation, (b) estimation of a confidence interval, and (c) testing of the null hypothesis of no effect. We analytically approximate the performance of the hybrid method and describe its good statistical properties. Applying the hybrid method to the data of the Reproducibility Project: Psychology (Open Science Collaboration, 2015) demonstrated that the conclusions based on the hybrid method are often in line with those of the replication, suggesting that many published psychological studies have smaller effect sizes than reported in the original study and that some effects may even be absent. We offer hands-on guidelines for how to statistically combine an original study and a replication, and we developed a web-based application (https://rvanaert.shinyapps.io/hybrid) for applying the hybrid method.
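
The key premise, that an original effect size conditioned on statistical significance is inflated while an unconditional replication is not, can be illustrated with a small simulation. The R sketch below is not the hybrid method itself (that is available through the web application above); the two-group setup, the assumed true standardized effect d, and the simple averaging used as a stand-in for a naive combination are purely illustrative:

# Illustrative simulation (not the hybrid method): an original study reported
# only when significant overestimates the true effect, a replication reported
# regardless does not, and a naive combination inherits part of the bias.
set.seed(2017)
d <- 0.2; n <- 50; alpha <- 0.05; n_sim <- 1e4
d_orig <- d_repl <- d_naive <- numeric(0)
for (i in seq_len(n_sim)) {
  # original study: kept only if statistically significant
  x1 <- rnorm(n, mean = d); x0 <- rnorm(n, mean = 0)
  if (t.test(x1, x0)$p.value >= alpha) next
  d_o <- (mean(x1) - mean(x0)) / sqrt((var(x1) + var(x0)) / 2)
  # replication: reported regardless of its outcome
  y1 <- rnorm(n, mean = d); y0 <- rnorm(n, mean = 0)
  d_r <- (mean(y1) - mean(y0)) / sqrt((var(y1) + var(y0)) / 2)
  d_orig  <- c(d_orig, d_o)
  d_repl  <- c(d_repl, d_r)
  d_naive <- c(d_naive, (d_o + d_r) / 2)  # equal-n studies: simple average as a stand-in
}
round(c(true = d, original = mean(d_orig),
        replication = mean(d_repl), naive_combined = mean(d_naive)), 2)

Because the original estimates are truncated at the significance threshold, the naive combination sits between the inflated original and the unbiased replication; this is the bias the hybrid method is designed to correct.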

