Power analyses for moderator effects with (non)randomly varying slopes in cluster randomized trials

Methodology ◽  
2021 ◽  
Vol 17 (2) ◽  
pp. 92-110
Author(s):  
Nianbo Dong ◽  
Jessaca Spybrook ◽  
Benjamin Kelcey ◽  
Metin Bulus

Researchers often apply moderation analyses to examine whether the effects of an intervention differ conditional on individual or cluster moderator variables such as gender, pretest, or school size. This study develops formulas for power analyses to detect moderator effects in two-level cluster randomized trials (CRTs) using hierarchical linear models. We derive the formulas for estimating statistical power, minimum detectable effect size difference and 95% confidence intervals for cluster- and individual-level moderators. Our framework accommodates binary or continuous moderators, designs with or without covariates, and effects of individual-level moderators that vary randomly or nonrandomly across clusters. A small Monte Carlo simulation confirms the accuracy of our formulas. We also compare power between main effect analysis and moderation analysis, discuss the effects of mis-specification of the moderator slope (randomly vs. non-randomly varying), and conclude with directions for future research. We provide software for conducting a power analysis of moderator effects in CRTs.
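The abstract references closed-form power formulas but does not reproduce them. As a rough illustration only, here is a minimal Python sketch of a normal-approximation power calculation for a binary cluster-level moderator in a two-level CRT, using the conventional variance expression without covariates. All parameter values are hypothetical, and the article's exact formulas may differ (e.g., t rather than normal reference distributions, covariate adjustments).

```python
from statistics import NormalDist

def power_cluster_moderator(delta, rho, J, n, P=0.5, Q=0.5, alpha=0.05):
    """Approximate power to detect a treatment-by-moderator interaction
    (standardized effect size difference `delta`) for a binary
    cluster-level moderator in a two-level CRT.

    rho : intraclass correlation; J : number of clusters;
    n : individuals per cluster; P : proportion of clusters treated;
    Q : proportion of clusters with moderator = 1.
    Uses a normal approximation to the t test (adequate for large J).
    """
    z = NormalDist()
    # Variance of the interaction estimate under the conventional
    # no-covariate model, scaled by the moderator split Q(1 - Q).
    var = (rho + (1.0 - rho) / n) / (P * (1.0 - P) * Q * (1.0 - Q) * J)
    se = var ** 0.5
    z_crit = z.inv_cdf(1.0 - alpha / 2.0)
    shift = abs(delta) / se
    # Two-sided rejection probability.
    return (1.0 - z.cdf(z_crit - shift)) + z.cdf(-z_crit - shift)

# Hypothetical design: 100 clusters of 20, ICC = .10,
# effect size difference of 0.4 between moderator subgroups.
p = power_cluster_moderator(delta=0.4, rho=0.10, J=100, n=20)
```

As expected, power increases with the number of clusters, which is typically the binding design constraint in CRTs.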

Author(s):  
John A. Gallis ◽  
Fan Li ◽  
Elizabeth L. Turner

Cluster randomized trials, where clusters (for example, schools or clinics) are randomized to comparison arms but measurements are taken on individuals, are commonly used to evaluate interventions in public health, education, and the social sciences. Analysis is often conducted on individual-level outcomes, and such analysis methods must consider that outcomes for members of the same cluster tend to be more similar than outcomes for members of other clusters. A popular individual-level analysis technique is generalized estimating equations (GEE). However, it is common to randomize a small number of clusters (for example, 30 or fewer), and in this case, the GEE standard errors obtained from the sandwich variance estimator will be biased, leading to inflated type I errors. Some bias-corrected standard errors have been proposed and studied to account for this finite-sample bias, but none has yet been implemented in Stata. In this article, we describe several popular bias corrections to the robust sandwich variance. We then introduce our newly created command, xtgeebcv, which will allow Stata users to easily apply finite-sample corrections to standard errors obtained from GEE models. We then provide examples to demonstrate the use of xtgeebcv. Finally, we discuss suggestions about which finite-sample corrections to use in which situations and consider areas of future research that may improve xtgeebcv.
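The degrees-of-freedom idea behind such corrections can be illustrated in a few lines. The sketch below, in Python rather than Stata, computes the cluster-robust (sandwich) variance of a simple overall mean and applies the basic J/(J-1) finite-sample multiplier. The corrections implemented in xtgeebcv (e.g., Kauermann-Carroll, Mancl-DeRouen) instead adjust residuals via leverage, so this is only a conceptual analogue, shown here on simulated clustered data.

```python
import random

def sandwich_var_mean(clusters, correct=True):
    """Cluster-robust (sandwich) variance of the overall mean for an
    intercept-only model, with an optional J/(J-1) finite-sample
    multiplier. Illustrates only the simplest correction; leverage-based
    corrections rescale each cluster's residuals instead.
    """
    all_y = [y for c in clusters for y in c]
    N = len(all_y)
    mu = sum(all_y) / N
    # "Meat": squared sums of within-cluster residuals.
    meat = sum(sum(y - mu for y in c) ** 2 for c in clusters)
    var = meat / N ** 2
    J = len(clusters)
    if correct:
        var *= J / (J - 1)  # simple small-sample inflation
    return var

random.seed(1)
# 10 clusters of 8 with a shared cluster effect (induces ICC > 0).
clusters = [[random.gauss(u, 1.0) for _ in range(8)]
            for u in (random.gauss(0, 0.5) for _ in range(10))]
uncorrected = sandwich_var_mean(clusters, correct=False)
corrected = sandwich_var_mean(clusters, correct=True)
# With J = 10 clusters, the corrected variance is larger by 10/9.
```

With few clusters the uncorrected sandwich variance is biased downward, which is exactly the inflated type I error problem the article addresses.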


2020 ◽  
Vol 42 (3) ◽  
pp. 354-374
Author(s):  
Jessaca Spybrook ◽  
Qi Zhang ◽  
Ben Kelcey ◽  
Nianbo Dong

Over the past 15 years, we have seen an increase in the use of cluster randomized trials (CRTs) to test the efficacy of educational interventions. These studies are often designed with the goal of determining whether a program works, or answering the what works question. Recently, the goals of these studies expanded to include for whom and under what conditions an intervention is effective. In this study, we examine the capacity of a set of CRTs to provide rigorous evidence about for whom and under what conditions an intervention is effective. The findings suggest that studies are more likely to be designed with the capacity to detect potentially meaningful individual-level moderator effects, for example, gender, than cluster-level moderator effects, for example, school type.


2016 ◽  
Vol 41 (6) ◽  
pp. 605-627 ◽  
Author(s):  
Jessaca Spybrook ◽  
Benjamin Kelcey ◽  
Nianbo Dong

Recently, there has been an increase in the number of cluster randomized trials (CRTs) to evaluate the impact of educational programs and interventions. These studies are often powered for the main effect of treatment to address the “what works” question. However, program effects may vary by individual characteristics or by context, making it important to also consider power to detect moderator effects. This article presents a framework for calculating statistical power for moderator effects at all levels for two- and three-level CRTs. Annotated R code is included to make the calculations accessible to researchers and increase the regularity in which a priori power analyses for moderator effects in CRTs are conducted.


AERA Open ◽  
2020 ◽  
Vol 6 (3) ◽  
pp. 233285842093952
Author(s):  
Qi Zhang ◽  
Jessaca Spybrook ◽  
Fatih Unlu

With the increasing demand for evidence-based research on teacher effectiveness and improving student achievement, more impact studies are being conducted to examine the effectiveness of professional development (PD) interventions. Cluster randomized trials (CRTs) are often carried out to assess PD interventions that aim to improve both teacher and student outcomes. Due to the different design parameters (i.e., intraclass correlation and R2) and benchmark effect sizes associated with the student and teacher outcomes, two power analyses are necessary for planning CRTs that aim to detect both teacher and student effects in one study. These two power analyses are often conducted separately without considering how design choices to power the study to detect student effects may affect design choices to power the study to detect teacher effects and vice versa. In this study, we consider strategies to maximize the efficiency of the study design when both student and teacher effects are of primary interest.
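One simple way to reconcile the two power analyses is to search for the smallest number of clusters that meets the power target for every outcome simultaneously. The sketch below does this with a normal-approximation power function and entirely hypothetical design parameters; it is not the article's procedure, only an illustration of how the binding outcome drives the design.

```python
from statistics import NormalDist

def crt_power(delta, rho, J, n, P=0.5, alpha=0.05):
    """Normal-approximation power for a main treatment effect in a
    two-level CRT (standardized effect size delta, no covariates)."""
    z = NormalDist()
    se = ((rho + (1 - rho) / n) / (P * (1 - P) * J)) ** 0.5
    z_crit = z.inv_cdf(1 - alpha / 2)
    return 1 - z.cdf(z_crit - abs(delta) / se)

def clusters_needed(outcomes, n, target=0.8):
    """Smallest J giving at least `target` power for every outcome.
    `outcomes` maps a label to a (delta, rho) pair."""
    J = 4
    while not all(crt_power(d, r, J, n) >= target
                  for d, r in outcomes.values()):
        J += 2  # keep J even so a 50/50 cluster split is attainable
    return J

# Hypothetical parameters: teacher effects are expected to be larger,
# so the student outcome ends up binding the design.
outcomes = {"student": (0.25, 0.20), "teacher": (0.50, 0.15)}
J = clusters_needed(outcomes, n=20)
```

Powering each outcome separately and taking the larger J gives the same answer here; the article's point is that smarter joint choices (e.g., of cluster size) can reduce that maximum.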


2020 ◽  
pp. 107699862096149
Author(s):  
Nianbo Dong ◽  
Benjamin Kelcey ◽  
Jessaca Spybrook

Past research has demonstrated that treatment effects frequently vary across sites (e.g., schools) and that such variation can be explained by site-level or individual-level variables (e.g., school size or gender). The purpose of this study is to develop a statistical framework and tools for the effective and efficient design of multisite randomized trials (MRTs) probing moderated treatment effects. The framework considers three core facets of such designs: (a) Level 1 and Level 2 moderators, (b) random and nonrandomly varying slopes (coefficients) of the treatment variable and its interaction terms with the moderators, and (c) binary and continuous moderators. We validate the formulas for calculating statistical power and the minimum detectable effect size difference with simulations, probe its sensitivity to model assumptions, execute the formulas in accessible software, demonstrate an application, and provide suggestions in designing MRTs probing moderated treatment effects.


2020 ◽  
Vol 45 (4) ◽  
pp. 446-474
Author(s):  
Zuchao Shen ◽  
Benjamin Kelcey

Conventional optimal design frameworks consider a narrow range of sampling cost structures that thereby constrict their capacity to identify the most powerful and efficient designs. We relax several constraints of previous optimal design frameworks by allowing for variable sampling costs in cluster-randomized trials. The proposed framework introduces additional design considerations and has the potential to identify designs with more statistical power, even when some parameters are constrained due to immutable practical concerns. The results also suggest that the gains in efficiency introduced through the expanded framework are fairly robust to misspecifications of the expanded cost structure and concomitant design parameters (e.g., intraclass correlation coefficient). The proposed framework is implemented in the R package odr.
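For the classic single-cost-structure case that this framework generalizes, the optimal cluster size has a well-known closed form: minimizing the variance of the treatment effect subject to a linear budget gives n* = sqrt((c_cluster / c_individual) x (1 - rho) / rho). A sketch with hypothetical costs (the odr package relaxes exactly this rigid cost structure):

```python
import math

def optimal_allocation(budget, c_cluster, c_individual, rho):
    """Classic optimal cluster size and cluster count under a simple
    linear cost structure: budget = J * (c_cluster + n * c_individual).
    Minimizes the variance of the treatment-effect estimate.
    """
    n = math.sqrt((c_cluster / c_individual) * (1 - rho) / rho)
    J = budget / (c_cluster + n * c_individual)
    return n, J

# Hypothetical costs: $300 to recruit a cluster (school), $20 per
# student, ICC = .15, total budget $50,000.
n, J = optimal_allocation(budget=50_000, c_cluster=300,
                          c_individual=20, rho=0.15)
```

Higher ICCs push the optimum toward more, smaller clusters; in practice n and J must then be rounded to feasible values, one of the practical constraints the expanded framework accommodates.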


2020 ◽  
Vol 17 (3) ◽  
pp. 253-263 ◽  
Author(s):  
Monica Taljaard ◽  
Cory E Goldstein ◽  
Bruno Giraudeau ◽  
Stuart G Nicholls ◽  
Kelly Carroll ◽  
...  

Background: Novel rationales for randomizing clusters rather than individuals appear to be emerging from the push for more pragmatic trials, for example, to facilitate trial recruitment, reduce the costs of research, and improve external validity. Such rationales may be driven by a mistaken perception that choosing cluster randomization lessens the need for informed consent. We reviewed a random sample of published cluster randomized trials involving only individual-level health care interventions to determine (a) the prevalence of reporting a rationale for the choice of cluster randomization; (b) the types of explicit, or if absent, apparent rationales for the use of cluster randomization; (c) the prevalence of reporting patient informed consent for study interventions; and (d) the types of justifications provided for waivers of consent. We restricted attention to cluster randomized trials evaluating exclusively individual-level health care interventions in order to focus on trials where individual randomization is at least theoretically possible and where there is a general expectation of informed consent. Methods: A random sample of 40 cluster randomized trials was identified by implementing a validated electronic search filter in two electronic databases (Ovid MEDLINE and Embase), with two reviewers independently extracting information from each trial. Inclusion criteria were the following: primary report of a cluster randomized trial, evaluating exclusively an individual-level health care intervention, published between 2007 and 2016, and conducted in Canada, the United States, the European Union, Australia, or low- and middle-income country settings. Results: Twenty-five trials (62.5%, 95% confidence interval = 47.5%–77.5%) reported an explicit rationale for the use of cluster randomization.
The most commonly reported rationales were logistical or administrative convenience (15 trials, 60%) and the need to avoid contamination (13 trials, 52%); five trials (20%) cited rationales related to the push for more pragmatic trials. Twenty-one trials (52.5%, 95% confidence interval = 37%–68%) reported written informed consent for the intervention, two (5%) reported verbal consent, and eight (20%) reported waivers of consent, while in nine trials (22.5%) consent was unclear or not mentioned. Reported justifications for waivers of consent included that study interventions were already used in clinical practice, that patients were not randomized individually, and the need to facilitate the pragmatic nature of the trial. Only one trial reported an explicit and appropriate justification for a waiver of consent based on minimum criteria in international research ethics guidelines, namely, infeasibility and minimal risk. Conclusion: Rationales for adopting cluster over individual randomization, and for adopting consent waivers, are emerging from the need to facilitate pragmatic trials. Greater attention to clear reporting of study design rationales and informed consent procedures, as well as justification for waivers, is needed to ensure that such trials meet appropriate ethical standards.

