Optimal design of cluster randomised trials with continuous recruitment and prospective baseline period

2021, Vol 18 (2), pp. 147-157
Author(s): Richard Hooper, Andrew J Copas

Background: Cluster randomised trials, like individually randomised trials, may benefit from a baseline period of data collection. We consider trials in which clusters prospectively recruit or identify participants as a continuous process over a given calendar period, and ask whether and for how long investigators should collect baseline data as part of the trial, in order to maximise precision. Methods: We show how to calculate and plot the variance of the treatment effect estimator for different lengths of baseline period in a range of scenarios, and offer general advice. Results: In some circumstances it is optimal not to include a baseline, while in others there is an optimal duration for the baseline. All other things being equal, the circumstances where it is preferable not to include a baseline period are those with a smaller recruitment rate, smaller intracluster correlation, greater decay in the intracluster correlation over time, or wider transition period between recruitment under control and intervention conditions. Conclusion: The variance of the treatment effect estimator can be calculated numerically, and plotted against the duration of baseline to inform design. It would be of interest to extend these investigations to cluster randomised trial designs with more than two randomised sequences of control and intervention condition, including stepped wedge designs.
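
The variance calculation described can be sketched numerically. The following is a simplified illustration and not the authors' model: continuous recruitment is discretised into equal periods, the intracluster correlation decays geometrically with the separation between periods, the transition period is ignored, and the recruitment rate, ICC and decay values are invented for illustration.

```python
# Hedged sketch: GLS variance of the treatment effect in a parallel-arm CRT
# with a prospective baseline period, plotted against the baseline length.
import numpy as np

def var_treatment_effect(n_periods=12, baseline_periods=3, n_clusters=20,
                         recruits_per_period=5, total_var=1.0, icc=0.05,
                         decay=0.9):
    """Variance of the treatment effect for one choice of baseline length."""
    sigma2_c = icc * total_var                     # cluster-level variance component
    sigma2_e = (1 - icc) * total_var               # individual-level variance component
    t = np.arange(n_periods)
    # Covariance of cluster-period means: cluster component decaying with the
    # time between periods, plus sampling variance of each period's mean.
    V = (sigma2_c * decay ** np.abs(np.subtract.outer(t, t))
         + np.eye(n_periods) * sigma2_e / recruits_per_period)
    V_inv = np.linalg.inv(V)
    period_effects = np.eye(n_periods)             # one fixed effect per period
    info = np.zeros((n_periods + 1, n_periods + 1))
    for treated_arm in (1.0, 0.0):                 # intervention and control arms
        treat = (t >= baseline_periods).astype(float) * treated_arm
        X = np.hstack([period_effects, treat.reshape(-1, 1)])
        info += (n_clusters / 2) * X.T @ V_inv @ X
    return np.linalg.inv(info)[-1, -1]             # variance of the treatment effect

# Precision as a function of baseline length, with total trial duration fixed:
for b in range(0, 7):
    print(f"baseline periods = {b}: variance = {var_treatment_effect(baseline_periods=b):.4f}")
```

Plotting the printed values against the baseline length reproduces the kind of curve the authors use to decide whether, and for how long, to collect baseline data.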

2017, Vol 14 (5), pp. 507-517
Author(s): Michael J Grayling, James MS Wason, Adrian P Mander

Background/Aims: The stepped-wedge cluster randomised trial design has received substantial attention in recent years. Although various extensions to the original design have been proposed, no guidance is available on the design of stepped-wedge cluster randomised trials with interim analyses. In an individually randomised trial setting, group sequential methods can provide notable efficiency gains and ethical benefits. We address this by discussing how established group sequential methodology can be adapted for stepped-wedge designs. Methods: Utilising the error spending approach to group sequential trial design, we detail the assumptions required for the determination of stepped-wedge cluster randomised trials with interim analyses. We consider early stopping for efficacy, futility, or efficacy and futility. We describe first how this can be done for any specified linear mixed model for data analysis. We then focus on one particular commonly utilised model and, using a recently completed stepped-wedge cluster randomised trial, compare the performance of several designs with interim analyses to the classical stepped-wedge design. Finally, the performance of a quantile substitution procedure for dealing with the case of unknown variance is explored. Results: We demonstrate that the incorporation of early stopping in stepped-wedge cluster randomised trial designs could reduce the expected sample size under the null and alternative hypotheses by up to 31% and 22%, respectively, with no cost to the trial’s type-I and type-II error rates. The use of restricted error maximum likelihood estimation was found to be more important than quantile substitution for controlling the type-I error rate. Conclusion: The addition of interim analyses into stepped-wedge cluster randomised trials could help guard against time-consuming trials conducted on poor performing treatments and also help expedite the implementation of efficacious treatments. In future, trialists should consider incorporating early stopping of some kind into stepped-wedge cluster randomised trials according to the needs of the particular trial.
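
As a hedged illustration of the error-spending machinery being adapted, not the authors' stepped-wedge-specific calculations, the sketch below finds one-sided efficacy-only boundaries for a two-stage design. The spending function, information fraction and error rate are assumed values chosen purely for illustration.

```python
# Hedged sketch of the error spending approach for a two-stage, one-sided
# group sequential design with early stopping for efficacy only.
import numpy as np
from scipy.stats import norm, multivariate_normal
from scipy.optimize import brentq

alpha = 0.025          # one-sided type I error to be spent across the analyses
t1 = 0.5               # information fraction at the interim analysis

def spend(t):
    return alpha * min(t, 1.0) ** 2   # a rho-family error spending function (assumed)

# Stage 1 boundary: spend f(t1) of the type I error at the interim.
c1 = norm.ppf(1 - spend(t1))

# Stage 2 boundary: under standard group sequential theory the sequential test
# statistics (Z1, Z2) are asymptotically bivariate normal with correlation
# sqrt(t1), so solve P(Z1 < c1, Z2 >= c2 | H0) = alpha - f(t1) for c2.
corr = np.sqrt(t1)
bvn = multivariate_normal(mean=[0.0, 0.0], cov=[[1.0, corr], [corr, 1.0]])

def excess(c2):
    crossing_prob = norm.cdf(c1) - bvn.cdf(np.array([c1, c2]))
    return crossing_prob - (alpha - spend(t1))

c2 = brentq(excess, 0.0, 10.0)
print(f"interim boundary c1 = {c1:.3f}, final boundary c2 = {c2:.3f}")
```

Futility stopping and the quantile substitution step for unknown variance would sit on top of this basic construction.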


2019, Vol 17 (1), pp. 69-76
Author(s): Andrew J Copas, Richard Hooper

Background/Aims: Published methods for sample size calculation for cluster randomised trials with baseline data are inflexible and primarily assume an equal amount of data collected at baseline and endline, that is, before and after the intervention has been implemented in some clusters. We extend these methods to any amount of baseline and endline data. We explain how to explore sample size for a trial if some baseline data from the trial clusters have already been collected as part of a separate study. Where such data are not available, we show how to choose the proportion of data collection devoted to the baseline within the trial, when a particular cluster size or range of cluster sizes is proposed. Methods: We provide a design effect given the cluster size and correlation parameters, assuming different participants are assessed at baseline and endline in the same clusters. We show how to produce plots to identify the impact of varying the amount of baseline data, accounting for the inevitable uncertainty in the cluster autocorrelation. We illustrate the methodology using an example trial. Results: Baseline data provide more power, or allow a greater reduction in trial size, with greater values of the cluster size, intracluster correlation and cluster autocorrelation. Conclusion: Investigators should think carefully before collecting baseline data in a cluster randomised trial if this is at the expense of endline data. In some scenarios, this will increase the sample size required to achieve a given power and precision.
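
For orientation, a sketch of the equal-baseline-and-endline design effect that this work generalises is shown below, in the cross-sectional form (different participants assessed at baseline and endline). The formula is the commonly cited one from this literature, presented here as an assumption rather than the authors' extension, and the parameter values are illustrative.

```python
# Hedged sketch: design effect for a parallel CRT with an equal amount of
# cross-sectional baseline and endline data per cluster.
def design_effect_with_baseline(m, icc, cac):
    """m participants per cluster per period; icc = intracluster correlation;
    cac = cluster autocorrelation between the baseline and endline periods."""
    de_cluster = 1 + (m - 1) * icc                 # usual clustering inflation
    r = m * icc * cac / (1 + (m - 1) * icc)        # correlation of baseline and endline cluster means
    return de_cluster * (1 - r ** 2)               # reduction from baseline adjustment

# Larger clusters, ICC and cluster autocorrelation make the baseline more valuable:
for m in (10, 50, 200):
    print(m, round(design_effect_with_baseline(m, icc=0.05, cac=0.8), 2))
```

The factor (1 - r^2) captures the gain from adjusting for the baseline cluster means; when r is small the baseline adds little, which is the scenario in which endline data are better prioritised.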


BMJ Open, 2021, Vol 11 (9), pp. e054213
Author(s): Hayden P Nix, Charles Weijer, Jamie C Brehaut, David Forster, Cory E Goldstein, ...

In a cluster randomised trial (CRT), intact groups—such as communities, clinics or schools—are randomised to the study intervention or control conditions. The issue of informed consent in CRTs has been particularly challenging for researchers and research ethics committees. Some argue that cluster randomisation is a reason not to seek informed consent from research participants. In fact, systematic reviews have found that, relative to individually randomised trials, CRTs are associated with an increased likelihood of inadequate reporting of consent procedures and inappropriate use of waivers of consent. The objective of this paper is to clarify this confusion by providing a practical and useful framework to guide researchers and research ethics committees through consent issues in CRTs. In CRTs, it is the unit of intervention—not the unit of randomisation—that drives informed consent issues. We explicate a three-step framework for thinking through informed consent in CRTs: (1) identify research participants, (2) identify the study element(s) to which research participants are exposed, and (3) determine if a waiver of consent is appropriate for each study element. We then apply our framework to examples of CRTs of cluster-level, professional-level and individual-level interventions, and provide key lessons on informed consent for each type of CRT.


2019, Vol 46 (1), pp. 31-33
Author(s): Charles Weijer, Monica Taljaard

In this issue of JME, Watson et al call for research evaluation of government health programmes and identify ethical guidance, including the Ottawa Statement on the ethical design and conduct of cluster randomised trials, as a hindrance. While cluster randomised trials of health programmes as a whole should be evaluated by research ethics committees (RECs), Watson et al argue that the health programme per se is not within the researcher’s control or responsibility and, thus, is out of scope for ethics review. We argue that this view is wrong. The scope of research ethics review is not defined by researcher control or responsibility, but rather by the protection of research participants. And the randomised evaluation of health programmes impacts the liberty and welfare interests of participants insofar as they may be exposed to a harmful programme or denied access to a beneficial one. Further, Watson et al’s claim that ‘study programmes … would occur whether or not there were any … research activities’ is incorrect in the case of cluster randomised designs. In a cluster randomised trial, the government does not implement a programme as usual. Rather, researchers collaborate with the government to randomise clusters to intervention or control conditions in order to rigorously evaluate the programme. As a result, equipoise issues are triggered that must be addressed by the REC.


2018, Vol 28 (10-11), pp. 3112-3122
Author(s): Jessica Kasza, Andrew B Forbes

Multiple-period cluster randomised trials, such as stepped wedge or cluster cross-over trials, are being conducted with increasing frequency. In the design and analysis of these trials, it is necessary to specify the form of the within-cluster correlation structure, and a common assumption is that the correlation between the outcomes of any pair of subjects within a cluster is identical. More complex models that allow for correlations within a cluster to decay over time have recently been suggested. However, most software packages cannot fit these models. As a result, practitioners may choose a simpler model. We analytically examine the impact of incorrectly omitting a decay in correlation on the variance of the treatment effect estimator and show that misspecification of the within-cluster correlation structure can lead to incorrect conclusions regarding estimated treatment effects for stepped wedge and cluster crossover trials.
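
A hedged sketch of the type of analytical comparison described: for a small stepped wedge layout, the variance that a generalised least squares analysis with an (incorrect) exchangeable working correlation would report is set against the true variance of that same estimator when the within-cluster correlation in fact decays over time. The layout, variance components and decay rate are illustrative assumptions, not values from the paper.

```python
# Hedged sketch: reported (model-based) versus true variance of the treatment
# effect when the within-cluster correlation decay is incorrectly omitted.
import numpy as np

def cluster_period_cov(n_periods, tau2, var_mean, decay):
    """Covariance of cluster-period means: cluster effect decaying over time
    plus the sampling variance of each period's mean (var_mean = sigma_e^2 / m)."""
    t = np.arange(n_periods)
    return tau2 * decay ** np.abs(np.subtract.outer(t, t)) + var_mean * np.eye(n_periods)

def stepped_wedge_designs(n_periods):
    """One cluster per sequence; sequence s switches to intervention at period s."""
    designs = []
    for s in range(1, n_periods):
        treat = (np.arange(n_periods) >= s).astype(float).reshape(-1, 1)
        designs.append(np.hstack([np.eye(n_periods), treat]))   # period effects + treatment
    return designs

def reported_vs_true_variance(n_periods=5, tau2=0.02, var_mean=0.1, true_decay=0.8):
    V_true = cluster_period_cov(n_periods, tau2, var_mean, true_decay)
    W_work = cluster_period_cov(n_periods, tau2, var_mean, 1.0)   # exchangeable working model
    W_inv = np.linalg.inv(W_work)
    A = np.zeros((n_periods + 1, n_periods + 1))
    B = np.zeros_like(A)
    for X in stepped_wedge_designs(n_periods):
        A += X.T @ W_inv @ X                     # working-model information
        B += X.T @ W_inv @ V_true @ W_inv @ X    # "meat" under the true covariance
    A_inv = np.linalg.inv(A)
    return A_inv[-1, -1], (A_inv @ B @ A_inv)[-1, -1]

reported, true = reported_vs_true_variance()
print(f"model-based (exchangeable) variance: {reported:.4f}, true variance: {true:.4f}")
```

The gap between the two printed values is the kind of discrepancy that can lead to incorrect conclusions about estimated treatment effects when the decay is ignored.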


2017, Vol 28 (3), pp. 703-716
Author(s): J Kasza, K Hemming, R Hooper, JNS Matthews, AB Forbes

Stepped wedge and cluster randomised crossover trials are examples of cluster randomised designs conducted over multiple time periods that are being used with increasing frequency in health research. Recent systematic reviews of both of these designs indicate that the within-cluster correlation is typically accounted for in the analysis using a random intercept mixed model, implying a constant correlation between any two individuals in the same cluster no matter how far apart in time they are measured: within-period and between-period intra-cluster correlations are assumed to be identical. Recently proposed extensions allow the within- and between-period intra-cluster correlations to differ, although these methods require that all between-period intra-cluster correlations are identical, which may not be appropriate in all situations. Motivated by a proposed intensive care cluster randomised trial, we propose an alternative correlation structure for repeated cross-sectional multiple-period cluster randomised trials in which the between-period intra-cluster correlation is allowed to decay depending on the distance between measurements. We present results for the variance of treatment effect estimators under varying amounts of decay, investigating the consequences of this variation for sample size planning in stepped wedge, cluster crossover and multiple-period parallel-arm cluster randomised trials. We also investigate the impact of assuming constant between-period intra-cluster correlations instead of decaying between-period intra-cluster correlations. Our results indicate that in certain design configurations, including the one corresponding to the proposed trial, a correlation decay can have an important impact on variances of treatment effect estimators, and hence on sample size and power. An R Shiny app allows readers to interactively explore the impact of correlation decay.
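
For concreteness, the decaying structure being described can be written as below. The notation is assumed for this sketch and may differ from the paper's: two outcomes from the same cluster, on subjects j and j' measured in periods s and t, with within-period intra-cluster correlation rho and per-period decay r.

```latex
% Assumed notation: rho = within-period intra-cluster correlation,
% r = decay per period (0 < r <= 1).
\[
  \operatorname{corr}\!\left(Y_{isj},\, Y_{itj'}\right) =
  \begin{cases}
    \rho & \text{if } s = t \text{ (same period),}\\
    \rho\, r^{\lvert s - t \rvert} & \text{if } s \neq t \text{ (different periods).}
  \end{cases}
\]
```

Setting r = 1 recovers the constant between-period intra-cluster correlation of the earlier extensions; smaller values of r give a faster decay with the time between measurements, which is what drives the sample size and power consequences reported.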


2021, pp. 096228022110370
Author(s): Jen Lewis, Steven A Julious

Sample size calculations for cluster-randomised trials require inclusion of an inflation factor taking into account the intra-cluster correlation coefficient. Often, estimates of the intra-cluster correlation coefficient are taken from pilot trials, which are known to have uncertainty about their estimation. Given that the value of the intra-cluster correlation coefficient has a considerable influence on the calculated sample size for a main trial, the uncertainty in the estimate can have a large impact on the ultimate sample size and consequently, the power of a main trial. As such, it is important to account for the uncertainty in the estimate of the intra-cluster correlation coefficient. While a commonly adopted approach is to utilise the upper confidence limit in the sample size calculation, this is a largely inefficient method which can result in overpowered main trials. In this paper, we present a method of estimating the sample size for a main cluster-randomised trial with a continuous outcome, using numerical methods to account for the uncertainty in the intra-cluster correlation coefficient estimate. Despite limitations with this initial study, the findings and recommendations in this paper can help to improve sample size estimations for cluster randomised controlled trials by accounting for uncertainty in the estimate of the intra-cluster correlation coefficient. We recommend this approach be applied to all trials where there is uncertainty in the intra-cluster correlation coefficient estimate, in conjunction with additional sources of information to guide the estimation of the intra-cluster correlation coefficient.
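
The numerical method itself is not spelled out in the abstract. As one hedged illustration of the general idea, the sketch below averages the power over a distribution representing uncertainty in the intra-cluster correlation coefficient and increases the number of clusters until the expected power reaches the target; the distribution, effect size and design values are assumptions for illustration, not the authors' procedure.

```python
# Hedged sketch: size a cluster randomised trial by averaging power over a
# distribution of plausible ICC values rather than plugging in a single value.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
delta, sigma = 0.3, 1.0            # target difference and outcome SD (assumed)
m, alpha, target_power = 20, 0.05, 0.9
z_a = stats.norm.ppf(1 - alpha / 2)

# Uncertainty in the ICC from a small pilot, represented here by a Beta
# distribution centred near 0.05 (an illustrative assumption).
icc_draws = rng.beta(2, 38, size=20_000)

def expected_power(k_per_arm):
    deff = 1 + (m - 1) * icc_draws                 # design effect for each ICC draw
    n_eff = k_per_arm * m / deff                   # effective sample size per arm
    power = stats.norm.cdf(delta * np.sqrt(n_eff / 2) / sigma - z_a)
    return power.mean()

k = 2
while expected_power(k) < target_power:
    k += 1
print(f"clusters per arm: {k}, expected power: {expected_power(k):.3f}")
```

Compared with plugging in the upper confidence limit of the ICC, this kind of averaging avoids sizing the trial for an implausibly pessimistic value while still reflecting the uncertainty.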


2021, pp. 096228022110260
Author(s): Ariane M Mbekwe Yepnang, Agnès Caille, Sandra M Eldridge, Bruno Giraudeau

In cluster randomised trials, a measure of intracluster correlation such as the intraclass correlation coefficient (ICC) should be reported for each primary outcome. Providing intracluster correlation estimates may help in calculating the sample size of future cluster randomised trials and also in interpreting the results of the trial from which they are derived. For a binary outcome, the ICC is known to be associated with its prevalence, which raises at least two issues. First, it questions the use of ICC estimates obtained on a binary outcome in a trial for sample size calculations in a subsequent trial in which the same binary outcome is expected to have a different prevalence. Second, it challenges the interpretation of ICC estimates because they do not depend solely on the level of clustering. Other intracluster correlation measures proposed for clustered binary data settings include the variance partition coefficient, the median odds ratio and the tetrachoric correlation coefficient. Under certain assumptions, the theoretical maximum possible value for an ICC associated with a binary outcome can be derived, and we propose the relative deviation of an ICC estimate from this maximum value as another measure of the intracluster correlation. We conducted a simulation study to explore the dependence of these intracluster correlation measures on outcome prevalence and found that all are associated with prevalence. Although all of these measures depend on prevalence, the tetrachoric correlation coefficient computed with Kirk’s approach was less dependent on the outcome prevalence than the other measures when the intracluster correlation was about 0.05. We also observed that for lower values, such as 0.01, the analysis of variance estimator of the ICC is preferred.
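
The analysis of variance estimator favoured at low ICC values is the standard one-way ANOVA estimator applied directly to the 0/1 outcomes. A hedged sketch follows; the simulated data (prevalence, true ICC, cluster size) are illustrative and not taken from the study.

```python
# Hedged sketch: one-way ANOVA estimator of the ICC for clustered binary data.
import numpy as np

def anova_icc(values, clusters):
    values = np.asarray(values, dtype=float)
    clusters = np.asarray(clusters)
    labels, sizes = np.unique(clusters, return_counts=True)
    k, N = len(labels), len(values)
    grand_mean = values.mean()
    cluster_means = np.array([values[clusters == c].mean() for c in labels])
    ss_between = np.sum(sizes * (cluster_means - grand_mean) ** 2)
    ss_within = sum(((values[clusters == c] - mu) ** 2).sum()
                    for c, mu in zip(labels, cluster_means))
    msb = ss_between / (k - 1)                     # between-cluster mean square
    msw = ss_within / (N - k)                      # within-cluster mean square
    n0 = (N - (sizes ** 2).sum() / N) / (k - 1)    # adjusted average cluster size
    return (msb - msw) / (msb + (n0 - 1) * msw)

# Simulated clustered binary data: prevalence ~0.2, true ICC ~0.01 (beta-binomial style).
rng = np.random.default_rng(0)
icc_true, prev, m, k = 0.01, 0.2, 50, 100
a = prev * (1 - icc_true) / icc_true
b = (1 - prev) * (1 - icc_true) / icc_true
p_cluster = rng.beta(a, b, size=k)
y = rng.binomial(1, np.repeat(p_cluster, m))
cl = np.repeat(np.arange(k), m)
print(f"ANOVA ICC estimate: {anova_icc(y, cl):.4f}")
```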

