Faculty Opinions recommendation of Adaptive designs in clinical trials: why use them, and how to run and report them.

Author(s):  
Carla Greenbaum ◽  
Sandra Lord
2018 ◽  
Vol 28 (6) ◽  
pp. 1609-1621

Author(s):  
Xiaoming Li ◽  
Jianhui Zhou ◽  
Feifang Hu

Covariate-adaptive designs are widely used to balance covariates and maintain randomization in clinical trials. Adaptive designs for discrete covariates and their asymptotic properties have been well studied in the literature. However, important continuous covariates are often involved in clinical studies, and simply discretizing or categorizing them can result in a loss of information. The current understanding of adaptive designs with continuous covariates lacks a theoretical foundation, as the existing work is based entirely on simulations. Consequently, conventional hypothesis testing in clinical trials using continuous covariates is still not well understood. In this paper, we establish a theoretical framework for hypothesis testing on adaptive designs with continuous covariates based on linear models. For testing treatment effects and the significance of covariates, we obtain the asymptotic distributions of the test statistic under the null and alternative hypotheses. Simulation studies are conducted under a class of covariate-adaptive designs, including the p-value-based method, Su's percentile method, the empirical cumulative-distribution method, the Kullback–Leibler divergence method, and the kernel-density method. Key findings about adaptive designs with independent covariates based on linear models are that (1) hypothesis tests comparing treatment effects are conservative, with a smaller type I error; (2) hypothesis testing under adaptive designs outperforms the complete randomization method in terms of power; and (3) tests of the significance of covariates remain valid.
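The p-value-based allocation rule mentioned in the abstract can be sketched as a small simulation. This is a minimal illustration, not the authors' implementation: the biased-coin probability (0.8), the single standard-normal covariate, and the sample sizes are all assumptions made for the example. The rule simply biases each assignment toward the tentative allocation that yields the larger two-sample t-test p-value on the continuous covariate.

```python
import numpy as np
from scipy import stats

def pvalue_allocate(x_new, xs, arms, rng, bias=0.8):
    # Tentatively place the new patient in each arm, test covariate balance
    # with a two-sample t-test, and bias a coin toward the assignment that
    # gives the larger (i.e. more balanced) p-value.
    pvals = []
    for a in (0, 1):
        g0 = [x for x, t in zip(xs + [x_new], arms + [a]) if t == 0]
        g1 = [x for x, t in zip(xs + [x_new], arms + [a]) if t == 1]
        if min(len(g0), len(g1)) < 2:
            pvals.append(0.5)  # not enough data for a t-test yet
        else:
            pvals.append(stats.ttest_ind(g0, g1).pvalue)
    preferred = int(np.argmax(pvals))
    return preferred if rng.random() < bias else 1 - preferred

def covariate_imbalance(adaptive, n, seed):
    # Absolute difference in covariate means between arms after n patients.
    rng = np.random.default_rng(seed)
    xs, arms = [], []
    for i in range(n):
        x = rng.normal()
        if i < 2:
            a = i  # seed each arm with one patient
        elif adaptive:
            a = pvalue_allocate(x, xs, arms, rng)
        else:
            a = int(rng.random() < 0.5)  # complete randomization
        xs.append(x)
        arms.append(a)
    g0 = [x for x, a in zip(xs, arms) if a == 0]
    g1 = [x for x, a in zip(xs, arms) if a == 1]
    return abs(np.mean(g0) - np.mean(g1))

reps = 100
adaptive_mean = np.mean([covariate_imbalance(True, 60, s) for s in range(reps)])
complete_mean = np.mean([covariate_imbalance(False, 60, s) for s in range(reps)])
print(f"mean covariate imbalance: adaptive={adaptive_mean:.3f}, "
      f"complete={complete_mean:.3f}")
```

Averaged over replications, the adaptive rule typically leaves a smaller between-arm difference in covariate means than complete randomization, in line with the balancing goal the abstract describes.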


2021 ◽  
Author(s):  
Elja Arjas ◽  
Dario Gasbarra

Abstract Background: Adaptive designs offer added flexibility in the execution of clinical trials, including the possibility of allocating more patients to the treatments that turn out to be more successful and of stopping early due to either declared success or futility. Commonly applied adaptive designs, such as group sequential methods, are based on the frequentist paradigm and on ideas from statistical significance testing. Interim checks during the trial will have the effect of inflating the Type 1 error rate or, if this rate is controlled and kept fixed, of lowering the power. Results: The purpose of the paper is to demonstrate the usefulness of the Bayesian approach in the design and in the actual running of randomized clinical trials during Phase II and III. This approach is based on comparing the performance of the different treatment arms in terms of their joint posterior probabilities, evaluated sequentially from the accruing outcome data, and then taking a control action if such a posterior probability falls below a pre-specified critical threshold value. Two types of action are considered: treatment allocation, putting further accrual of patients to a treatment arm on hold, at least temporarily (Rule 1), and treatment selection, removing an arm from the trial permanently (Rule 2). The main development in the paper is in terms of binary outcomes, but extensions for handling time-to-event data, including data from vaccine trials, are also discussed. The performance of the proposed methodology is tested in extensive simulation experiments, with numerical results and graphical illustrations documented in a Supplement to the main text. As a companion to this paper, an implementation of the methods is provided in the form of a freely available R package. Conclusion: The proposed methods for trial design provide an attractive alternative to their frequentist counterparts.
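A minimal sketch of this kind of posterior-probability monitoring for binary outcomes. Everything here is assumed for illustration rather than taken from the paper: independent Beta(1, 1) priors, Monte Carlo estimation of the probability that each arm is best, hypothetical interim counts, and made-up threshold values for the Rule 1 (suspend accrual) and Rule 2 (remove arm) actions; the authors' R package implements the actual decision rules.

```python
import numpy as np

def posterior_prob_best(successes, failures, n_draws=20000, seed=0):
    # Monte Carlo estimate of the posterior probability that each arm has
    # the highest response rate, under independent Beta(1, 1) priors on
    # the per-arm success probabilities.
    rng = np.random.default_rng(seed)
    draws = np.column_stack([
        rng.beta(1 + s, 1 + f, n_draws) for s, f in zip(successes, failures)
    ])
    best = draws.argmax(axis=1)
    return np.bincount(best, minlength=len(successes)) / n_draws

# Interim look at a hypothetical three-arm trial with binary outcomes.
successes = [12, 15, 5]
failures = [8, 5, 15]
probs = posterior_prob_best(successes, failures)

HOLD, DROP = 0.10, 0.02  # illustrative thresholds for Rules 1 and 2
for arm, p in enumerate(probs):
    if p < DROP:
        action = "remove from trial (Rule 2)"
    elif p < HOLD:
        action = "suspend accrual (Rule 1)"
    else:
        action = "continue"
    print(f"arm {arm}: P(best) = {p:.3f} -> {action}")
```

With these counts the clearly underperforming third arm falls below the removal threshold, while the other two continue, which is the qualitative behaviour the two rules are designed to produce.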


1993 ◽  
Vol 14 (6) ◽  
pp. 471-484 ◽  
Author(s):  
William F. Rosenberger ◽  
John M. Lachin

2008 ◽  
Vol 29 (Special_Issue_1) ◽  
pp. S33-S52
Author(s):  
Kenkichi Sugiura ◽  
Hiroyuki Uesaka

2019 ◽  
Author(s):  
Elizabeth Ryan ◽  
Kristian Brock ◽  
Simon Gates ◽  
Daniel Slade

Abstract Background Bayesian adaptive methods are increasingly being used to design clinical trials and offer a number of advantages over traditional approaches. Decisions at analysis points are usually based on the posterior distribution of the parameter of interest. However, there is some confusion amongst statisticians and trialists as to whether control of type I error is required for Bayesian adaptive designs, as this is a frequentist concept. Methods We discuss the arguments for and against adjusting for multiplicities in Bayesian trials with interim analyses. We present two case studies demonstrating the effect on type I/II error rates of including interim analyses in Bayesian clinical trials. We propose alternative approaches to adjusting stopping boundaries to control type I error, and also alternative methods for decision-making in Bayesian clinical trials. Results In both case studies we found that the type I error was inflated in the Bayesian adaptive designs through incorporation of interim analyses that allowed early stopping for efficacy and made no adjustments to account for multiplicity. Incorporation of early stopping for efficacy also increased the power in some instances. An increase in the number of interim analyses that only allowed early stopping for futility decreased the type I error, but also decreased power. An increase in the number of interim analyses that allowed for either early stopping for efficacy or futility generally increased type I error and decreased power. Conclusions If one wishes to demonstrate control of type I error in Bayesian adaptive designs then adjustments to the stopping boundaries are usually required for designs that allow for early stopping for efficacy as the number of analyses increases. If the designs only allow for early stopping for futility then adjustments to the stopping boundaries are not needed to control type I error, but may be required to ensure adequate power.
If one instead uses a strict Bayesian approach then type I errors could be ignored and the designs could instead focus on the posterior probabilities of treatment effects of particular values.
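The inflation mechanism described in the Results can be illustrated with a small simulation. This is a sketch under assumed settings (two arms, a common null response rate of 0.3, independent Beta(1, 1) priors, a 0.975 posterior-probability efficacy boundary), not a reproduction of the paper's case studies: it estimates how often a null trial stops for efficacy when the same unadjusted boundary is applied at a single final analysis versus at five interim looks.

```python
import numpy as np

def stops_for_efficacy(looks, n_max, rate, threshold, rng, n_draws=4000):
    # One two-arm trial simulated under the null (both arms share the same
    # response rate). At each look, declare efficacy if the posterior
    # probability that arm 1 beats arm 0 (independent Beta(1, 1) priors)
    # exceeds the threshold.
    y0 = rng.random(n_max) < rate
    y1 = rng.random(n_max) < rate
    for n in looks:
        s0, s1 = int(y0[:n].sum()), int(y1[:n].sum())
        d0 = rng.beta(1 + s0, 1 + n - s0, n_draws)
        d1 = rng.beta(1 + s1, 1 + n - s1, n_draws)
        if (d1 > d0).mean() > threshold:
            return True
    return False

rng = np.random.default_rng(1)
reps, n, thr, rate = 500, 100, 0.975, 0.3
final_only = np.mean([stops_for_efficacy([n], n, rate, thr, rng)
                      for _ in range(reps)])
five_looks = np.mean([stops_for_efficacy([20, 40, 60, 80, 100], n, rate, thr, rng)
                      for _ in range(reps)])
print(f"type I error: final analysis only = {final_only:.3f}, "
      f"five looks = {five_looks:.3f}")
```

Because the unadjusted boundary gets five chances to be crossed by chance, the five-look design rejects the null more often, which is the multiplicity effect that motivates adjusting the stopping boundaries.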


2020 ◽  
Author(s):  
Elizabeth Ryan ◽  
Kristian Brock ◽  
Simon Gates ◽  
Daniel Slade

Abstract Background: Bayesian adaptive methods are increasingly being used to design clinical trials and offer several advantages over traditional approaches. Decisions at analysis points are usually based on the posterior distribution of the treatment effect. However, there is some confusion as to whether control of type I error is required for Bayesian designs, as this is a frequentist concept. Methods: We discuss the arguments for and against adjusting for multiplicities in Bayesian trials with interim analyses. With two case studies we illustrate the effect of including interim analyses on type I/II error rates in Bayesian clinical trials where no adjustments for multiplicities are made. We propose several approaches to control type I error, and also alternative methods for decision-making in Bayesian clinical trials. Results: In both case studies we demonstrated that the type I error was inflated in the Bayesian adaptive designs through incorporation of interim analyses that allowed early stopping for efficacy without adjustments to account for multiplicity. Incorporation of early stopping for efficacy also increased the power in some instances. An increase in the number of interim analyses that only allowed early stopping for futility decreased the type I error, but also decreased power. An increase in the number of interim analyses that allowed for either early stopping for efficacy or futility generally increased type I error and decreased power. Conclusions: Currently, regulators require demonstration of control of type I error for both frequentist and Bayesian adaptive designs, particularly for late-phase trials. To demonstrate control of type I error in Bayesian adaptive designs, adjustments to the stopping boundaries are usually required for designs that allow for early stopping for efficacy as the number of analyses increases.
If the designs only allow for early stopping for futility then adjustments to the stopping boundaries are not needed to control type I error. If one instead uses a strict Bayesian approach, which is currently more accepted in the design and analysis of exploratory trials, then type I errors could be ignored and the designs could instead focus on the posterior probabilities of treatment effects of clinically relevant values.

