Statistical Power for Causally Defined Indirect Effects in Group-Randomized Trials With Individual-Level Mediators

2017 ◽ Vol 42 (5) ◽ pp. 499-530
Author(s): Benjamin Kelcey, Nianbo Dong, Jessaca Spybrook, Kyle Cox


Methodology ◽ 2021 ◽ Vol 17 (2) ◽ pp. 92-110
Author(s): Nianbo Dong, Jessaca Spybrook, Benjamin Kelcey, Metin Bulus

Researchers often apply moderation analyses to examine whether the effects of an intervention differ conditional on individual- or cluster-level moderator variables such as gender, pretest score, or school size. This study develops formulas for power analyses to detect moderator effects in two-level cluster randomized trials (CRTs) using hierarchical linear models. We derive formulas for estimating statistical power, the minimum detectable effect size difference, and 95% confidence intervals for cluster- and individual-level moderators. Our framework accommodates binary or continuous moderators, designs with or without covariates, and effects of individual-level moderators that vary either randomly or nonrandomly across clusters. A small Monte Carlo simulation confirms the accuracy of our formulas. We also compare power between main-effect analysis and moderation analysis, discuss the effects of misspecifying the moderator slope (randomly vs. nonrandomly varying), and conclude with directions for future research. We provide software for conducting power analyses of moderator effects in CRTs.
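As a rough illustration of the kind of calculation such formulas support, the Python sketch below computes power for a binary cluster-level moderator in a two-level CRT. The variance expression, the degrees-of-freedom rule (J − 4), and the parameter names are our assumptions based on standard two-level results, not necessarily the authors' exact formulas or software.

```python
# Illustrative power calculation for a treatment-by-moderator interaction
# with a binary cluster-level moderator in a two-level CRT.
# Assumed (standard) variance expression; check against the authors' software.
from scipy import stats

def power_cluster_moderator(delta, J, n, rho, P=0.5, Q=0.5,
                            R2_between=0.0, R2_within=0.0, alpha=0.05):
    """delta: standardized moderated effect size difference
    J: number of clusters, n: individuals per cluster
    rho: intraclass correlation, P: proportion of clusters treated
    Q: proportion of clusters with moderator = 1
    R2_between / R2_within: variance explained by covariates at each level."""
    # Assumed sampling variance of the interaction estimate (standardized metric)
    var = (rho * (1 - R2_between) + (1 - rho) * (1 - R2_within) / n) \
          / (P * (1 - P) * Q * (1 - Q) * J)
    ncp = delta / var ** 0.5          # noncentrality parameter
    df = J - 4                        # assumed df: clusters minus cluster-level terms
    t_crit = stats.t.ppf(1 - alpha / 2, df)
    # two-tailed power from the noncentral t distribution
    return stats.nct.sf(t_crit, df, ncp) + stats.nct.cdf(-t_crit, df, ncp)

# Example: 60 clusters of 20, ICC = .20, moderated effect size difference of 0.4
print(round(power_cluster_moderator(delta=0.4, J=60, n=20, rho=0.20), 3))
```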


2007 ◽ Vol 29 (1) ◽ pp. 60-87
Author(s): Larry V. Hedges, E. C. Hedberg

Experiments that assign intact groups to treatment conditions are increasingly common in social research. In educational research, the groups assigned are often schools. The design of group-randomized experiments requires knowledge of the intraclass correlation structure to compute statistical power and sample sizes required to achieve adequate power. This article provides a compilation of intraclass correlation values of academic achievement and related covariate effects that could be used for planning group-randomized experiments in education. It also provides variance component information that is useful in planning experiments involving covariates. The use of these values to compute the statistical power of group-randomized experiments is illustrated.
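To illustrate how a plug-in intraclass correlation enters such planning, the minimal Python sketch below computes the minimum detectable effect size for a two-level group-randomized design using the standard multiplier approximation; the formula, degrees-of-freedom rule, and example values are generic illustrations rather than figures from the article.

```python
# Illustrative minimum detectable effect size (MDES) for a two-level
# group-randomized experiment, given a plug-in ICC and covariate R-squared values.
from scipy import stats

def mdes_cra2(J, n, rho, P=0.5, R2_between=0.0, R2_within=0.0,
              n_cluster_covariates=0, alpha=0.05, power=0.80):
    """J clusters of n individuals; P = proportion of clusters treated;
    rho = intraclass correlation; R2_* = variance explained by covariates."""
    df = J - n_cluster_covariates - 2              # assumed degrees of freedom
    # multiplier for a two-tailed test at the target power (standard approximation)
    M = stats.t.ppf(1 - alpha / 2, df) + stats.t.ppf(power, df)
    se = ((rho * (1 - R2_between) + (1 - rho) * (1 - R2_within) / n)
          / (P * (1 - P) * J)) ** 0.5
    return M * se

# Example: 40 schools of 25 students, rho = 0.20, a pretest explaining
# half the variance at each level
print(round(mdes_cra2(J=40, n=25, rho=0.20, R2_between=0.5, R2_within=0.5,
                      n_cluster_covariates=1), 3))
```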


2020 ◽ pp. 107699862096149
Author(s): Nianbo Dong, Benjamin Kelcey, Jessaca Spybrook

Past research has demonstrated that treatment effects frequently vary across sites (e.g., schools) and that such variation can be explained by site-level or individual-level variables (e.g., school size or gender). The purpose of this study is to develop a statistical framework and tools for the effective and efficient design of multisite randomized trials (MRTs) probing moderated treatment effects. The framework considers three core facets of such designs: (a) Level 1 and Level 2 moderators, (b) random and nonrandomly varying slopes (coefficients) of the treatment variable and its interaction terms with the moderators, and (c) binary and continuous moderators. We validate the formulas for calculating statistical power and the minimum detectable effect size difference with simulations, probe their sensitivity to model assumptions, implement the formulas in accessible software, demonstrate an application, and provide suggestions for designing MRTs probing moderated treatment effects.
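The design distinction between randomly and nonrandomly varying interaction slopes can be sketched as follows; the variance expressions and degrees-of-freedom rules below follow generic mixed-model logic and are our assumptions, not the authors' exact formulas.

```python
# Illustrative contrast: precision of an estimated treatment-by-moderator effect
# in a multisite trial when the interaction slope is nonrandomly varying versus
# randomly varying across sites. Expressions are assumptions for illustration.
from scipy import stats

def power_mrt_moderator(delta, J, n, sigma2=1.0, tau2_slope=0.0,
                        P=0.5, Q=0.5, random_slope=False, alpha=0.05):
    """delta: moderated effect size difference; J sites of n individuals;
    P = within-site treatment proportion; Q = proportion with moderator = 1;
    tau2_slope = between-site variance of the interaction slope
    (used only if random_slope=True)."""
    within = sigma2 / (P * (1 - P) * Q * (1 - Q) * n)
    if random_slope:
        var = (tau2_slope + within) / J   # slope heterogeneity adds between-site noise
        df = J - 1                        # assumed site-based degrees of freedom
    else:
        var = within / J
        df = J * (n - 1) - 3              # assumed individual-based degrees of freedom
    ncp = delta / var ** 0.5
    t_crit = stats.t.ppf(1 - alpha / 2, df)
    return stats.nct.sf(t_crit, df, ncp) + stats.nct.cdf(-t_crit, df, ncp)

# Same design, with and without slope heterogeneity across sites
print(round(power_mrt_moderator(0.3, J=30, n=40), 3))
print(round(power_mrt_moderator(0.3, J=30, n=40, tau2_slope=0.05, random_slope=True), 3))
```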


Author(s): John A. Gallis, Fan Li, Elizabeth L. Turner

Cluster randomized trials, where clusters (for example, schools or clinics) are randomized to comparison arms but measurements are taken on individuals, are commonly used to evaluate interventions in public health, education, and the social sciences. Analysis is often conducted on individual-level outcomes, and such analysis methods must account for the fact that outcomes for members of the same cluster tend to be more similar than outcomes for members of other clusters. A popular individual-level analysis technique is generalized estimating equations (GEE). However, it is common to randomize a small number of clusters (for example, 30 or fewer), and in this case, the GEE standard errors obtained from the sandwich variance estimator will be biased, leading to inflated type I errors. Some bias-corrected standard errors have been proposed and studied to account for this finite-sample bias, but none has yet been implemented in Stata. In this article, we describe several popular bias corrections to the robust sandwich variance. We then introduce our newly created command, xtgeebcv, which will allow Stata users to easily apply finite-sample corrections to standard errors obtained from GEE models. We then provide examples to demonstrate the use of xtgeebcv. Finally, we offer suggestions on which finite-sample corrections to use in which situations and consider areas of future research that may improve xtgeebcv.
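For readers working outside Stata, a correction of the same flavor can be sketched with statsmodels in Python, which (to our understanding) offers a bias-reduced GEE covariance via cov_type="bias_reduced"; the option name and behavior should be verified against the installed statsmodels version, and the simulated data below are purely illustrative.

```python
# Sketch: fitting a GEE with an exchangeable working correlation and comparing
# the usual robust (sandwich) SEs with a bias-reduced small-sample correction.
# Assumes a recent statsmodels; verify that cov_type="bias_reduced" is available.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
J, n = 20, 15                                  # few clusters -> small-sample bias matters
cluster = np.repeat(np.arange(J), n)
treat = np.repeat(rng.integers(0, 2, J), n)    # cluster-level treatment assignment
u = np.repeat(rng.normal(0, 0.5, J), n)        # cluster random effect
y = 0.3 * treat + u + rng.normal(0, 1, J * n)
df = pd.DataFrame({"y": y, "treat": treat, "cluster": cluster})

model = smf.gee("y ~ treat", groups="cluster", data=df,
                cov_struct=sm.cov_struct.Exchangeable(),
                family=sm.families.Gaussian())
fit_robust = model.fit(cov_type="robust")        # standard sandwich estimator
fit_bc = model.fit(cov_type="bias_reduced")      # bias-reduced correction
print("robust SE:      ", round(float(fit_robust.bse["treat"]), 4))
print("bias-reduced SE:", round(float(fit_bc.bse["treat"]), 4))
```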


2020 ◽ Vol 45 (4) ◽ pp. 446-474
Author(s): Zuchao Shen, Benjamin Kelcey

Conventional optimal design frameworks consider a narrow range of sampling cost structures that thereby constrict their capacity to identify the most powerful and efficient designs. We relax several constraints of previous optimal design frameworks by allowing for variable sampling costs in cluster-randomized trials. The proposed framework introduces additional design considerations and has the potential to identify designs with more statistical power, even when some parameters are constrained due to immutable practical concerns. The results also suggest that the gains in efficiency introduced through the expanded framework are fairly robust to misspecifications of the expanded cost structure and concomitant design parameters (e.g., intraclass correlation coefficient). The proposed framework is implemented in the R package odr.
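The core intuition can be illustrated with the classical special case of a linear cost structure in a two-level CRT, where the budget-optimal cluster size is n* = sqrt(((1 − ρ)/ρ) · (c_cluster/c_individual)). The short Python sketch below implements this textbook result as our own illustration; the odr package handles the more general cost structures discussed in the article.

```python
# Sketch: classical optimal cluster size under a simple linear cost structure
# (cost per cluster c_cluster, cost per individual c_ind), given the ICC.
# This is the textbook special case; the odr package generalizes the cost model.
import math

def optimal_cluster_size(rho, c_cluster, c_ind):
    """Cluster size n that minimizes the variance of the treatment effect
    for a fixed budget in a balanced two-level CRT."""
    return math.sqrt(((1 - rho) / rho) * (c_cluster / c_ind))

def clusters_within_budget(budget, n, c_cluster, c_ind):
    """Number of clusters affordable at cluster size n."""
    return int(budget // (c_cluster + n * c_ind))

rho, c_cluster, c_ind, budget = 0.15, 300.0, 10.0, 50000.0
n_opt = optimal_cluster_size(rho, c_cluster, c_ind)
print(f"optimal cluster size ~ {n_opt:.1f}")
print("affordable clusters:", clusters_within_budget(budget, round(n_opt), c_cluster, c_ind))
```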


1999 ◽ Vol 18 (5) ◽ pp. 539-556
Author(s): Ziding Feng, Paula Diehr, Yutaka Yasui, Brent Evans, Shirley Beresford, ...

2020 ◽ Vol 17 (3) ◽ pp. 253-263
Author(s): Monica Taljaard, Cory E Goldstein, Bruno Giraudeau, Stuart G Nicholls, Kelly Carroll, ...

Background: Novel rationales for randomizing clusters rather than individuals appear to be emerging from the push for more pragmatic trials, for example, to facilitate trial recruitment, reduce the costs of research, and improve external validity. Such rationales may be driven by a mistaken perception that choosing cluster randomization lessens the need for informed consent. We reviewed a random sample of published cluster randomized trials involving only individual-level health care interventions to determine (a) the prevalence of reporting a rationale for the choice of cluster randomization; (b) the types of explicit, or if absent, apparent rationales for the use of cluster randomization; (c) the prevalence of reporting patient informed consent for study interventions; and (d) the types of justifications provided for waivers of consent. We restricted the review to trials evaluating exclusively individual-level health care interventions in order to focus on trials where individual randomization is at least theoretically possible and where there is a general expectation of informed consent.

Methods: A random sample of 40 cluster randomized trials was identified by implementing a validated electronic search filter in two electronic databases (Ovid MEDLINE and Embase), with two reviewers independently extracting information from each trial. Inclusion criteria were: primary report of a cluster randomized trial, evaluating exclusively an individual-level health care intervention, published between 2007 and 2016, and conducted in Canada, the United States, the European Union, Australia, or low- and middle-income country settings.

Results: Twenty-five trials (62.5%, 95% confidence interval = 47.5%–77.5%) reported an explicit rationale for the use of cluster randomization. The most commonly reported rationales were logistical or administrative convenience (15 trials, 60%) and the need to avoid contamination (13 trials, 52%); five trials (20%) cited rationales related to the push for more pragmatic trials. Twenty-one trials (52.5%, 95% confidence interval = 37%–68%) reported written informed consent for the intervention, two (5%) reported verbal consent, and eight (20%) reported waivers of consent, while in nine trials (22.5%) consent was unclear or not mentioned. Reported justifications for waivers of consent included that study interventions were already used in clinical practice, that patients were not randomized individually, and the need to facilitate the pragmatic nature of the trial. Only one trial reported an explicit and appropriate justification for a waiver of consent based on minimum criteria in international research ethics guidelines, namely infeasibility and minimal risk.

Conclusion: Rationales for adopting cluster over individual randomization, and for adopting consent waivers, are emerging in connection with the push for more pragmatic trials. Greater attention to clear reporting of study design rationales and informed consent procedures, as well as justifications for waivers, is needed to ensure that such trials meet appropriate ethical standards.

