EFFECT SIZE-DRIVEN SAMPLE-SIZE PLANNING, RANDOMIZATION, AND MULTISITE USE IN L2 INSTRUCTED VOCABULARY ACQUISITION EXPERIMENTAL SAMPLES – CORRIGENDUM

Author(s):  
Joseph P. Vitta
Christopher Nicklin
Stuart McLean


2017
Vol 28 (11)
pp. 1547-1562
Author(s):
Samantha F. Anderson
Ken Kelley
Scott E. Maxwell

The sample size necessary to obtain a desired level of statistical power depends in part on the population value of the effect size, which is, by definition, unknown. A common approach to sample-size planning uses the sample effect size from a prior study as an estimate of the population value of the effect to be detected in the future study. Although this strategy is intuitively appealing, effect-size estimates, taken at face value, are typically not accurate estimates of the population effect size because of publication bias and uncertainty. We show that the use of this approach often results in underpowered studies, sometimes to an alarming degree. We present an alternative approach that adjusts sample effect sizes for bias and uncertainty, and we demonstrate its effectiveness for several experimental designs. Furthermore, we discuss an open-source R package, BUCSS, and user-friendly Web applications that we have made available to researchers so that they can easily implement our suggested methods.
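Since the abstract contrasts BUCSS with taking a prior sample effect size at face value, a minimal sketch of that naive approach in base R may be useful for comparison (all numbers assumed; BUCSS's own adjusted functions are not shown here):

# Naive sample-size planning: plug the prior study's sample effect size
# (an assumed Cohen's d of 0.5) directly into a power calculation. This is
# the face-value strategy the abstract warns against: the published d is
# likely inflated by publication bias and carries sampling uncertainty,
# which is what BUCSS corrects for before solving for n.
naive <- power.t.test(delta = 0.5, sd = 1, sig.level = 0.05,
                      power = 0.80, type = "two.sample",
                      alternative = "two.sided")
ceiling(naive$n)  # participants required per group under the naive estimate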


2019
Author(s):
Stefan L.K. Gruijters
Gjalt-Jorn Ygram Peters

Experimental intervention tests need a sufficient sample size to constitute a robust test of the intervention’s effectiveness with reasonable precision and power. To estimate the required sample size adequately, researchers must specify an effect size. But which effect size should be used to plan the required sample size? Various approaches to selecting the a priori effect size have been suggested in the literature, including using conventions, prior research, and theoretical or practical importance. In this paper, we first discuss problems with some of the proposed methods of selecting the effect size for study planning. Subsequently, we lay out a method that avoids many of these problems. The method requires setting a meaningful change definition and is specifically suited for applied researchers interested in planning tests of intervention effectiveness. We provide a hands-on walk-through of the method and supply easy-to-use R functions to implement it.
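A minimal sketch of the underlying logic, not the authors' published R functions: translate an assumed meaningful change definition into a standardized effect and solve for the required n with base R's power.t.test.

# Suppose a 3-point change on the outcome scale is the smallest change
# that matters, and the outcome SD is assumed to be 8 (both hypothetical).
meaningful_change <- 3            # assumed minimally meaningful raw difference
outcome_sd <- 8                   # assumed population SD of the outcome
d <- meaningful_change / outcome_sd   # standardized effect to plan for
power.t.test(delta = d, sig.level = 0.05, power = 0.80,
             type = "two.sample")$n  # per-group n to detect the meaningful effect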


Author(s):  
Joseph P. Vitta
Christopher Nicklin
Stuart McLean

In this focused methodological synthesis, the sample construction procedures of 110 second language (L2) instructed vocabulary interventions were assessed in relation to effect size–driven sample-size planning, randomization, and multisite usage. These three areas were investigated because inferential testing supports better generalization when researchers consider them during the sample construction process. Only nine reports used effect sizes to plan or justify sample sizes in any fashion, and just one engaged in an a priori power procedure referencing vocabulary-centric effect sizes from previous research. Randomized assignment was observed in 56% of the reports, while no report involved random sampling. Approximately 15% of the samples were constructed from multiple sites, and none of these empirically investigated the effect of site clustering. Leveraging the synthesized findings, we conclude by offering suggestions for future L2 instructed vocabulary researchers to consider a priori effect size–driven sample planning processes, randomization, and multisite usage when constructing samples.
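As context for the randomized-assignment finding, a minimal sketch of balanced random assignment in base R (hypothetical learner IDs and group sizes):

# Randomly assign 60 hypothetical learners to two balanced conditions,
# one of the sample-construction practices the synthesis tracks.
set.seed(2024)                        # arbitrary seed for reproducibility
learners <- paste0("learner_", 1:60)
condition <- sample(rep(c("treatment", "control"), each = 30))  # shuffled balanced assignment
table(condition)                      # confirms 30 per group
head(data.frame(learners, condition))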


Psychometrika
2021
Author(s):  
Gwowen Shieh

A Correction to this paper has been published: https://doi.org/10.1007/s11336-019-09692-3


2019
Author(s):
Rob Cribbie
Nataly Beribisky
Udi Alter

Many bodies recommend that a sample-size planning procedure, such as traditional NHST a priori power analysis, be conducted during the planning stages of a study. Power analysis allows the researcher to estimate how many participants are required to detect a minimally meaningful effect size at a specific level of power and Type I error rate. However, several drawbacks render the procedure “a mess.” Specifically, identifying the minimally meaningful effect size is often difficult yet unavoidable if the procedure is to be conducted properly; the procedure is not precision oriented; and it does not guide the researcher to collect as many participants as feasible. In this study, we explore how these three theoretical issues are reflected in applied psychological research in order to better understand whether they are concerns in practice. To investigate how power analysis is currently used, we reviewed the reporting of 443 power analyses in high-impact psychology journals in 2016 and 2017. We found that researchers rarely use the minimally meaningful effect size as the rationale for the effect size chosen in a power analysis. Further, precision-based approaches and collecting the maximum feasible sample size are almost never used in tandem with power analyses. In light of these findings, we suggest that researchers focus on tools beyond traditional power analysis when planning sample sizes, such as collecting the maximum feasible sample size.
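A minimal sketch contrasting the two planning goals in base R (all numbers assumed; the precision calculation is a rough large-sample approximation for a standardized mean difference, not a full AIPE procedure):

# Power-based planning: per-group n to detect an assumed d = 0.4 at 80% power.
n_power <- power.t.test(delta = 0.4, sig.level = 0.05, power = 0.80,
                        type = "two.sample")$n

# Precision-based planning: per-group n so the 95% CI for the standardized
# mean difference has half-width of at most 0.15, using SE(d) ~ sqrt(2/n).
target_halfwidth <- 0.15
n_precision <- (qnorm(0.975) * sqrt(2) / target_halfwidth)^2

ceiling(c(power = n_power, precision = n_precision))  # precision demands far more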

