Optimized adaptive enrichment designs for three-arm trials: learning which subpopulations benefit from different treatments

Biostatistics ◽  
2019 ◽  
Author(s):  
Jon Arni Steingrimsson ◽  
Joshua Betz ◽  
Tianchen Qian ◽  
Michael Rosenblum

Summary: We consider the problem of designing a confirmatory randomized trial for comparing two treatments versus a common control in two disjoint subpopulations. The subpopulations could be defined in terms of a biomarker or disease severity measured at baseline. The goal is to determine which treatments benefit which subpopulations. We develop a new class of adaptive enrichment designs tailored to solving this problem. Adaptive enrichment designs involve a preplanned rule for modifying enrollment based on accruing data in an ongoing trial. At the interim analysis after each stage, for each subpopulation, the preplanned rule may decide to stop enrollment or to stop randomizing participants to one or more study arms. The motivation for this adaptive feature is that interim data may indicate that a subpopulation, such as those with lower disease severity at baseline, is unlikely to benefit from a particular treatment while uncertainty remains for the other treatment and/or subpopulation. We optimize these adaptive designs to have the minimum expected sample size under power and Type I error constraints. We compare the performance of the optimized adaptive design versus an optimized nonadaptive (single-stage) design. Our approach is demonstrated in simulation studies that mimic features of a completed trial of a medical device for treating heart failure. The optimized adaptive design has a 25% smaller expected sample size compared to the optimized nonadaptive design; however, the cost is that the optimized adaptive design has an 8% greater maximum sample size. Open-source software that implements the trial design optimization is provided, allowing users to investigate the tradeoffs in using the proposed adaptive versus standard designs.
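As a rough illustration of the interim futility feature described above, a minimal two-stage sketch is given below. It is not the authors' optimized design: the futility boundary of zero, unit outcome variances, the two fixed stages, and the per-stage enrollment of 100 per arm are all assumptions chosen for illustration.

```python
import random
from statistics import mean

def z_stat(treat, ctrl):
    """Two-sample z-statistic assuming unit variance (illustrative only)."""
    n = len(treat)
    return (mean(treat) - mean(ctrl)) / (2.0 / n) ** 0.5

def simulate_trial(effects, n_stage=100, futility_z=0.0, seed=1):
    """Two-stage adaptive enrichment sketch: each subpopulation has one
    treatment arm and a control arm; enrollment in a subpopulation stops
    at the interim analysis if its z-statistic falls below futility_z.
    Returns the total sample size actually enrolled."""
    rng = random.Random(seed)
    data = {s: ([], []) for s in effects}
    total_n = 0
    for stage in range(2):
        for s, eff in effects.items():
            if stage == 1 and z_stat(*data[s]) < futility_z:
                continue  # subpopulation dropped at interim
            treat, ctrl = data[s]
            treat.extend(rng.gauss(eff, 1) for _ in range(n_stage))
            ctrl.extend(rng.gauss(0, 1) for _ in range(n_stage))
            total_n += 2 * n_stage
    return total_n
```

Comparing the returned total against the fixed-design maximum (here, 800) shows the expected-sample-size saving that dropping a non-benefiting subpopulation buys.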

2021 ◽  
pp. 174077452110101
Author(s):  
Jennifer Proper ◽  
John Connett ◽  
Thomas Murray

Background: Bayesian response-adaptive designs, which adaptively alter the allocation ratio in favor of the better-performing treatment, are often criticized for engendering a non-trivial probability of a subject imbalance in favor of the inferior treatment, inflating the type I error rate, and increasing sample size requirements. Implementations of these designs using Thompson sampling have generally assumed a simple beta-binomial probability model in the literature; however, the effect of these choices on the resulting design operating characteristics relative to other reasonable alternatives has not been fully examined. Motivated by the Advanced R2Eperfusion STrategies for Refractory Cardiac Arrest trial, we posit that a logistic probability model coupled with an urn or permuted block randomization method will alleviate some of the practical limitations engendered by the conventional implementation of a two-arm Bayesian response-adaptive design with binary outcomes. In this article, we discuss to what extent this solution works and when it does not. Methods: A computer simulation study was performed to evaluate the relative merits of a Bayesian response-adaptive design for the Advanced R2Eperfusion STrategies for Refractory Cardiac Arrest trial using Thompson sampling based on a logistic regression probability model coupled with either an urn or permuted block randomization method that limits deviations from the evolving target allocation ratio. The different implementations of the response-adaptive design were evaluated for type I error rate control across various null response rates and for power, among other performance metrics.
Results: The logistic regression probability model engenders smaller average sample sizes with similar power, better control over type I error rate, and more favorable treatment arm sample size distributions than the conventional beta-binomial probability model, and designs using the alternative randomization methods have a negligible chance of a sample size imbalance in the wrong direction. Conclusion: Pairing the logistic regression probability model with either of the alternative randomization methods results in a much improved response-adaptive design in regard to important operating characteristics, including type I error rate control and the risk of a sample size imbalance in favor of the inferior treatment.
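The conventional beta-binomial Thompson sampling implementation that this article compares against can be sketched in a few lines. This is a simplified illustration only: the uniform Beta(1, 1) priors, the Monte Carlo draw count, and the clipping bounds are assumptions, and the logistic-model variant the authors favor is not reproduced here.

```python
import random

def thompson_allocation(successes, failures, draws=10_000, rng=None):
    """Estimate P(arm 1 response rate > arm 0 response rate) under
    independent Beta(1 + s, 1 + f) posteriors; Thompson sampling uses
    this probability as the target allocation to arm 1."""
    rng = rng or random.Random(0)
    wins = 0
    for _ in range(draws):
        p0 = rng.betavariate(1 + successes[0], 1 + failures[0])
        p1 = rng.betavariate(1 + successes[1], 1 + failures[1])
        wins += p1 > p0
    return wins / draws

def clipped(p, lo=0.2, hi=0.8):
    """Clip the target allocation, a common safeguard against the
    extreme imbalances the abstract describes."""
    return min(hi, max(lo, p))
```

Pairing a target allocation like this with an urn or permuted block method, as the authors propose, limits how far realized assignments can drift from the target.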


1992 ◽  
Vol 71 (1) ◽  
pp. 3-14 ◽  
Author(s):  
John E. Overall ◽  
Robert S. Atlas

A statistical model for combining p values from multiple tests of significance is used to define rejection and acceptance regions for two-stage and three-stage sampling plans. Type I error rates, power, frequencies of early termination decisions, and expected sample sizes are compared. Both the two-stage and three-stage procedures provide appropriate protection against Type I errors. The two-stage sampling plan with its single interim analysis entails minimal loss in power and provides substantial reduction in expected sample size as compared with a conventional single end-of-study test of significance for which power is in the adequate range. The three-stage sampling plan with its two interim analyses introduces somewhat greater reduction in power, but it compensates with greater reduction in expected sample size. Either interim-analysis strategy is more efficient than a single end-of-study analysis in terms of power per unit of sample size.
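One standard way to combine p-values across stages is Fisher's method, shown below as an illustration; the article's specific model for defining rejection and acceptance regions is not reproduced. The closed-form chi-square survival function used here is exact because the degrees of freedom (2k) are even.

```python
import math

def fisher_combined_p(pvalues):
    """Fisher's method: X = -2 * sum(log p_i) follows a chi-square
    distribution with 2k degrees of freedom under the global null
    (independent p-values). Returns the combined p-value."""
    x = -2.0 * sum(math.log(p) for p in pvalues)
    k = len(pvalues)
    half = x / 2.0
    # Closed-form survival function of chi-square with 2k df (k even)
    return math.exp(-half) * sum(half ** i / math.factorial(i) for i in range(k))
```

In a two-stage plan, a very small stage-1 p-value can trigger early rejection, a large one early acceptance, and intermediate values continue to stage 2, where the combined p-value decides.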


2017 ◽  
Vol 14 (3) ◽  
pp. 237-245 ◽  
Author(s):  
Luis A Crouch ◽  
Lori E Dodd ◽  
Michael A Proschan

Background and aims: Multi-arm, multi-stage trials have recently gained attention as a means to improve the efficiency of the clinical trials process. Many designs have been proposed, but few explicitly consider the inherent issue of multiplicity and the associated type I error rate inflation. We aim to propose a straightforward design that controls the family-wise error rate while still providing improved efficiency. Methods: In this article, we provide an analytical method for calculating the family-wise error rate for a multi-arm, multi-stage trial and highlight the potential for considerable error rate inflation in uncontrolled designs. We propose a simple method to control the error rate that also allows for computation of power and expected sample size. Results: The family-wise error rate can be controlled in a variety of multi-arm, multi-stage trial designs using our method. Additionally, our design can substantially decrease the expected sample size of a study while maintaining adequate power. Conclusion: Multi-arm, multi-stage designs have the potential to reduce the time and other resources spent on clinical trials. Our relatively simple design allows this to be achieved while weakly controlling the family-wise error rate and without sacrificing much power.
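For intuition about the inflation described above: with k independent comparisons each tested at level alpha, the family-wise error rate grows as 1 − (1 − alpha)^k, and a Bonferroni split restores control. Bonferroni is a deliberately simpler correction than the authors' analytical method, used here only to make the inflation concrete.

```python
def fwer_uncorrected(alpha, k):
    """Family-wise error rate when k independent null hypotheses are
    each tested at level alpha with no multiplicity adjustment."""
    return 1 - (1 - alpha) ** k

def bonferroni_alpha(alpha_family, k):
    """Per-comparison significance level that controls the FWER at
    alpha_family across k comparisons (conservative)."""
    return alpha_family / k
```

For example, three unadjusted comparisons at 0.05 already push the family-wise rate above 0.14, nearly triple the nominal level.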


2009 ◽  
Vol 9 (4) ◽  
pp. 280-287 ◽  
Author(s):  
Keith Dunnigan ◽  
Dennis W. King

2019 ◽  
Author(s):  
Rob Cribbie ◽  
Nataly Beribisky ◽  
Udi Alter

Many bodies recommend that a sample planning procedure, such as a traditional NHST a priori power analysis, be conducted during the planning stages of a study. Power analysis allows the researcher to estimate how many participants are required in order to detect a minimally meaningful effect size at a specific level of power and Type I error rate. However, there are several drawbacks to the procedure that render it “a mess.” Specifically, identifying the minimally meaningful effect size is often difficult but unavoidable for conducting the procedure properly; the procedure is not precision oriented; and it does not guide the researcher to collect as many participants as feasibly possible. In this study, we explore how these three theoretical issues are reflected in applied psychological research in order to better understand whether these issues are concerns in practice. To investigate how power analysis is currently used, this study reviewed the reporting of 443 power analyses in high-impact psychology journals in 2016 and 2017. It was found that researchers rarely use the minimally meaningful effect size as a rationale for the chosen effect in a power analysis. Further, precision-based approaches and collecting the maximum sample size feasible are almost never used in tandem with power analyses. In light of these findings, we suggest that researchers focus on tools beyond traditional power analysis when planning sample size, such as collecting the maximum sample size feasible.
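The a priori power analysis discussed above reduces, under a normal approximation, to a short calculation. This is a sketch only: the exact t-based computation gives slightly larger sample sizes, and the 0.5 effect size in the usage example is a hypothetical input, not a recommendation.

```python
from math import ceil
from statistics import NormalDist

def n_per_group(d, power=0.80, alpha=0.05):
    """Per-group sample size for a two-sided, two-sample comparison of
    means with standardized effect size d (Cohen's d), using the normal
    approximation: n = 2 * ((z_{1-alpha/2} + z_{power}) / d)^2."""
    z = NormalDist().inv_cdf
    return ceil(2 * ((z(1 - alpha / 2) + z(power)) / d) ** 2)

# Usage: a minimally meaningful effect of d = 0.5 at 80% power
n = n_per_group(0.5)
```

The formula makes the abstract's point tangible: everything hinges on d, so a power analysis without a defensible minimally meaningful effect size is built on sand.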


2020 ◽  
Vol 6 (2) ◽  
pp. 106-113
Author(s):  
A. M. Grjibovski ◽  
M. A. Gorbatova ◽  
A. N. Narkevich ◽  
K. A. Vinogradov

Sample size calculation in the planning phase is still uncommon in Russian research practice. This situation threatens the validity of conclusions and may introduce Type II error, in which a false null hypothesis is accepted due to lack of statistical power to detect an existing difference between the means. Comparing two means using unpaired Student's t-tests is the most common statistical procedure in the Russian biomedical literature. However, calculations of the minimal required sample size or retrospective calculations of statistical power were observed in only a few publications. In this paper we demonstrate how to calculate the required sample size for comparing means in unpaired samples using WinPepi and Stata software. In addition, we produced tables of the minimal required sample size for studies in which two means are to be compared and body mass index and blood pressure are the variables of interest. The tables were constructed for unpaired samples for different levels of statistical power and standard deviations obtained from the literature.
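A lookup table of the kind described can be generated directly from the unpaired-means formula. This sketch uses the normal approximation rather than WinPepi or Stata, and the difference of 5 units and the standard deviations of 10 and 15 are hypothetical placeholders, not the paper's tabulated values.

```python
from math import ceil
from statistics import NormalDist

def n_for_mean_difference(delta, sd, power, alpha=0.05):
    """Per-group sample size to detect a difference delta between two
    means with common standard deviation sd (two-sided test, normal
    approximation to the unpaired t-test)."""
    z = NormalDist().inv_cdf
    return ceil(2 * ((z(1 - alpha / 2) + z(power)) * sd / delta) ** 2)

# A miniature version of the described tables: rows are power levels,
# columns are literature-derived standard deviations (values assumed here).
table = {(power, sd): n_for_mean_difference(5.0, sd, power)
         for power in (0.80, 0.90) for sd in (10.0, 15.0)}
```

Reading the table shows the familiar pattern: doubling the relative standard deviation roughly quadruples the required sample size, and 90% power costs about a third more participants than 80%.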


Author(s):  
Shengjie Liu ◽  
Jun Gao ◽  
Yuling Zheng ◽  
Lei Huang ◽  
Fangrong Yan

Bioequivalence (BE) studies are an integral component of the new drug development process and play an important role in the approval and marketing of generic drug products. However, existing design and evaluation methods fall largely within the framework of frequentist theory, and few implement Bayesian ideas. Based on a bioequivalence predictive probability model and a sample re-estimation strategy, we propose a new Bayesian two-stage adaptive design and explore its application to bioequivalence testing. The new design differs from existing two-stage designs (such as Potvin's methods B and C) in the following aspects. First, it not only incorporates historical information and expert information but also flexibly combines experimental data to aid decision-making. Second, its sample re-estimation strategy is based on the ratio of the information at the interim analysis to the total information, which is simpler to calculate than Potvin's method. Simulation results showed that the two-stage design can be combined with various stopping boundary functions, with differing results. Moreover, the proposed method requires a smaller sample size than Potvin's method under the conditions that the type I error rate is below 0.05 and statistical power reaches 80%.
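The predictive-probability idea can be sketched as follows. This is an illustrative simplification, not the authors' model: it assumes paired log-scale differences, an approximate normal posterior for the mean, the standard log(0.8)–log(1.25) BE limits, and a two-one-sided-tests decision via a 90% confidence interval.

```python
import random
from statistics import NormalDist, mean, stdev

LOWER, UPPER = -0.2231, 0.2231  # log(0.8), log(1.25): standard BE limits

def be_success(diffs, alpha=0.05):
    """Declare BE if the 90% CI for the mean log-difference lies
    entirely within the BE limits (two one-sided tests at 5%)."""
    n = len(diffs)
    m, se = mean(diffs), stdev(diffs) / n ** 0.5
    z = NormalDist().inv_cdf(1 - alpha)
    return LOWER < m - z * se and m + z * se < UPPER

def predictive_prob(stage1, n2, sims=2000, seed=7):
    """Interim predictive probability of final BE success: simulate
    stage-2 data from an approximate normal posterior fitted to the
    stage-1 log-differences, then apply the final BE decision."""
    rng = random.Random(seed)
    m, s, n1 = mean(stage1), stdev(stage1), len(stage1)
    hits = 0
    for _ in range(sims):
        mu = rng.gauss(m, s / n1 ** 0.5)          # draw from approx. posterior
        stage2 = [rng.gauss(mu, s) for _ in range(n2)]
        hits += be_success(stage1 + stage2)
    return hits / sims
```

At the interim, a high predictive probability supports stopping early for success, a very low one supports stopping for futility, and intermediate values drive the stage-2 sample re-estimation.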


2018 ◽  
Vol 15 (5) ◽  
pp. 452-461 ◽  
Author(s):  
Satrajit Roychoudhury ◽  
Nicolas Scheuer ◽  
Beat Neuenschwander

Background: Well-designed phase II trials must have acceptable error rates relative to a pre-specified success criterion, usually a statistically significant p-value. Such standard designs may not always suffice from a clinical perspective because clinical relevance may call for more. For example, proof-of-concept in phase II often requires not only statistical significance but also a sufficiently large effect estimate. Purpose: We propose dual-criterion designs to complement statistical significance with clinical relevance, discuss their methodology, and illustrate their implementation in phase II. Methods: Clinical relevance requires the effect estimate to pass a clinically motivated threshold (the decision value, DV). In contrast to standard designs, the required effect estimate is an explicit design input, whereas study power is implicit. The sample size for a dual-criterion design needs careful consideration of the study's operating characteristics (type I error, power). Results: Dual-criterion designs are discussed for a randomized controlled and a single-arm phase II trial, including decision criteria, sample size calculations, decisions under various data scenarios, and operating characteristics. The designs facilitate GO/NO-GO decisions due to their complementary statistical–clinical criterion. Limitations: While conceptually simple, implementing a dual-criterion design needs care. The clinical DV must be elicited carefully in collaboration with clinicians, and understanding the similarities and differences relative to a standard design is crucial. Conclusion: To improve evidence-based decision-making, a formal yet transparent quantitative framework is important. Dual-criterion designs offer an appealing statistical–clinical compromise, which may be preferable to standard designs if evidence against the null hypothesis alone does not suffice for an efficacy claim.
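The dual-criterion GO rule itself fits in a few lines. This is a sketch under assumptions not stated in the abstract: a one-sided test of a positive effect with a known standard error and a normal approximation.

```python
from statistics import NormalDist

def dual_criterion_go(estimate, se, dv, alpha=0.05):
    """GO only if BOTH criteria hold: (1) statistical significance,
    i.e. the one-sided p-value is below alpha, and (2) clinical
    relevance, i.e. the point estimate exceeds the decision value dv."""
    z = estimate / se
    significant = z > NormalDist().inv_cdf(1 - alpha)
    return significant and estimate > dv
```

The second criterion is what distinguishes this from a standard design: a precisely estimated but clinically trivial effect can be significant yet still yield NO-GO.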


2017 ◽  
Vol 36 (9) ◽  
pp. 1383-1394 ◽  
Author(s):  
Samuel Litwin ◽  
Stanley Basickes ◽  
Eric A. Ross
