Erratum to: Power Analysis and Sample Size Planning in ANCOVA Designs

Psychometrika
2021
Author(s):  
Gwowen Shieh

A Correction to this paper has been published: https://doi.org/10.1007/s11336-019-09692-3

2021
Author(s):  
Samuel Donnelly
Terrence D. Jorgensen
Cort Rudolph

Conceptual and statistical models that include conditional indirect effects (i.e., so-called “moderated mediation” models) are increasingly popular in the behavioral sciences. Although there is ample guidance in the literature on how to specify and test such models, there is scant advice on how best to design studies for these purposes, especially regarding techniques for sample size planning (i.e., “power analysis”). In this paper, we discuss challenges in sample size planning for moderated mediation models and offer a tutorial for conducting Monte Carlo simulations in the specific case where one has categorical exogenous variables. This scenario is commonly faced when testing conditional indirect effects in experimental research, wherein the (assumed) predictor and moderator variables are manipulated factors and the (assumed) mediator and outcome variables are observed/measured variables. To support this effort, we offer example data and reproducible R code that constitute a “toolkit” to aid researchers in designing research to test moderated mediation models.
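
A Monte Carlo power check of this kind can be sketched in a few lines of base R. The 2 x 2 design, path coefficients, and Sobel-type z test below are illustrative assumptions made for the sketch, not the toolkit from the paper:

    ## Monte Carlo power for the index of moderated mediation (a3 * b)
    ## in a 2 x 2 experiment; all parameter values are illustrative.
    set.seed(1)
    n_rep <- 500    # simulated data sets per candidate sample size
    n     <- 200    # candidate total sample size (divisible by 4)
    a1 <- 0.30      # X -> M path when W = 0
    a3 <- 0.25      # X:W interaction on M (moderation of the a-path)
    b  <- 0.40      # M -> Y path
    sig <- replicate(n_rep, {
      X <- rep(0:1, each = n / 2)                  # manipulated predictor
      W <- rep(rep(0:1, each = n / 4), times = 2)  # manipulated moderator
      M <- a1 * X + a3 * X * W + rnorm(n)
      Y <- b * M + rnorm(n)
      fm <- lm(M ~ X * W)
      fy <- lm(Y ~ M + X * W)
      a3_hat <- coef(fm)["X:W"]
      b_hat  <- coef(fy)["M"]
      ## first-order (Sobel-type) z test of the product a3 * b
      se <- sqrt(a3_hat^2 * vcov(fy)["M", "M"] +
                 b_hat^2  * vcov(fm)["X:W", "X:W"])
      abs(a3_hat * b_hat / se) > 1.96
    })
    mean(sig)  # empirical power at this sample size

Repeating the loop over a grid of candidate n values traces an empirical power curve; in practice a bootstrap confidence interval for the conditional indirect effect would typically replace the Sobel-type test used here for brevity.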


1990
Vol 47 (1)
pp. 2-15
Author(s):  
Randall M. Peterman

Ninety-eight percent of recently surveyed papers in fisheries and aquatic sciences that did not reject some null hypothesis (H0) failed to report β, the probability of making a type II error (failing to reject H0 when it is false), or statistical power (1 – β). However, 52% of those papers drew conclusions as if H0 were true. A false H0 could have been missed because of a low-power experiment caused by small sample size or large sampling variability. The costs of type II errors can be large (for example, failing to detect harmful effects of an industrial effluent or a real effect of fishing on stock depletion). Past statistical power analyses show that abundance estimation techniques usually have high β and that only large effects are detectable. I review the relationships among β, power, detectable effect size, sample size, and sampling variability. I show how statistical power analysis can help interpret past results and improve the design of future experiments, impact assessments, and management regulations. I make recommendations for researchers and decision makers, including routine application of power analysis, more cautious management, and reversal of the burden of proof to place it on industry rather than management agencies.
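
As a minimal sketch of these relationships, base R's power.t.test() shows how power rises (and β falls) with sample size for a fixed effect; the two-sample t test, effect size, standard deviation, and α below are illustrative assumptions, far simpler than the field studies the paper reviews:

    ## Power and beta of a two-sample t test as sample size grows,
    ## for an assumed effect of 0.5 SD at alpha = 0.05.
    for (n in c(10, 25, 50, 100)) {
      p <- power.t.test(n = n, delta = 0.5, sd = 1, sig.level = 0.05)$power
      cat(sprintf("n = %3d per group: power = %.2f, beta = %.2f\n", n, p, 1 - p))
    }

At the smaller sample sizes most of the probability mass sits in β, which is precisely the situation in which a nonsignificant result gets misread as evidence that H0 is true.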


2019
Author(s):  
Rob Cribbie
Nataly Beribisky
Udi Alter

Many bodies recommend that a sample-planning procedure, such as a traditional NHST a priori power analysis, be conducted during the planning stages of a study. Power analysis allows the researcher to estimate how many participants are required to detect a minimally meaningful effect size at a specified level of power and Type I error rate. However, several drawbacks of the procedure render it “a mess”: identifying the minimally meaningful effect size is often difficult yet unavoidable if the procedure is to be conducted properly, the procedure is not precision oriented, and it does not guide the researcher to collect as many participants as feasible. In this study, we explore how these three theoretical issues are reflected in applied psychological research in order to better understand whether they are concerns in practice. To investigate how power analysis is currently used, we reviewed the reporting of 443 power analyses in high-impact psychology journals in 2016 and 2017. We found that researchers rarely use the minimally meaningful effect size as the rationale for the effect size chosen in a power analysis. Further, precision-based approaches and collecting the maximum feasible sample size are almost never used in tandem with power analyses. In light of these findings, we suggest that researchers focus on tools beyond traditional power analysis when planning samples, such as collecting the maximum feasible sample size.
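
For concreteness, the kind of a priori power analysis under review can be illustrated with a single call to base R's power.t.test(); the minimally meaningful effect size, α, and target power below are illustrative assumptions:

    ## A priori power analysis: participants needed per group to detect
    ## a minimally meaningful effect of 0.3 SD with 80% power at alpha = .05.
    power.t.test(delta = 0.3, sd = 1, sig.level = 0.05, power = 0.80)
    ## n comes out near 176 per group once rounded up

The three criticisms above all concern the inputs and outputs of such a call: where delta comes from, why power rather than the precision of the estimate is targeted, and why recruitment should stop at the computed n.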


2018
Vol 52 (4)
pp. 341-350
Author(s):  
Michael FW Festing

Scientists using laboratory animals are under increasing pressure to justify their sample sizes using a “power analysis”. In this paper I review the three methods currently used to determine sample size: “tradition” or “common sense”, the “resource equation”, and the “power analysis”. I explain how, using the “KISS” approach, scientists can make a provisional choice of sample size by any method and then easily estimate the effect size likely to be detectable according to a power analysis. Should they want to detect a smaller effect, they can increase the provisional sample size and recalculate the detectable effect size. This approach is simple, requires no software, and justifies the sample size in the terms used in a power analysis.
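
For readers who do have software at hand, the same provisional-sample-size-to-detectable-effect step can be reproduced with base R's power.t.test(); the group sizes, standard deviation, α, and target power here are illustrative assumptions:

    ## KISS-style check: what effect is detectable with a provisional
    ## n = 10 animals per group (two-sample t test, 80% power, alpha = .05)?
    power.t.test(n = 10, sd = 1, sig.level = 0.05, power = 0.80)$delta
    ## about 1.3 SDs; to detect a smaller effect, raise n and recalculate:
    power.t.test(n = 20, sd = 1, sig.level = 0.05, power = 0.80)$delta
    ## about 0.9 SDs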


2018
Vol 90 (21)
pp. 12485-12492
Author(s):  
Nairveen Ali
Sophie Girnus
Petra Rösch
Jürgen Popp
Thomas Bocklitz
