Two-sample binary phase 2 trials with low type I error and low sample size

2017 ◽  
Vol 36 (9) ◽  
pp. 1383-1394 ◽  
Author(s):  
Samuel Litwin ◽  
Stanley Basickes ◽  
Eric A. Ross
Keyword(s):  
Type I ◽  
Phase 2 ◽  
2017 ◽  
Vol 36 (21) ◽  
pp. 3439-3439
Author(s):  
Samuel Litwin ◽  
Eric Ross ◽  
Stanley Basickes
Keyword(s):  
Type I ◽  
Phase 2 ◽  

2021 ◽  
pp. 174077452110101
Author(s):  
Jennifer Proper ◽  
John Connett ◽  
Thomas Murray

Background: Bayesian response-adaptive designs, which adaptively alter the allocation ratio in favor of the better-performing treatment, are often criticized for engendering a non-trivial probability of a subject imbalance in favor of the inferior treatment, inflating the type I error rate, and increasing sample size requirements. Implementations of these designs using Thompson sampling have generally assumed a simple beta-binomial probability model in the literature; however, the effect of these choices on the resulting operating characteristics relative to other reasonable alternatives has not been fully examined. Motivated by the Advanced REperfusion STrategies for Refractory Cardiac Arrest trial, we posit that a logistic probability model coupled with an urn or permuted block randomization method will alleviate some of the practical limitations engendered by the conventional implementation of a two-arm Bayesian response-adaptive design with binary outcomes. In this article, we discuss to what extent this solution works and when it does not. Methods: A computer simulation study was performed to evaluate the relative merits of a Bayesian response-adaptive design for the Advanced REperfusion STrategies for Refractory Cardiac Arrest trial using Thompson sampling based on a logistic regression probability model coupled with either an urn or permuted block randomization method that limits deviations from the evolving target allocation ratio. The different implementations of the response-adaptive design were evaluated for type I error rate control across various null response rates and for power, among other performance metrics.
Results: The logistic regression probability model engenders smaller average sample sizes with similar power, better control over type I error rate, and more favorable treatment arm sample size distributions than the conventional beta-binomial probability model, and designs using the alternative randomization methods have a negligible chance of a sample size imbalance in the wrong direction. Conclusion: Pairing the logistic regression probability model with either of the alternative randomization methods results in a much improved response-adaptive design in regard to important operating characteristics, including type I error rate control and the risk of a sample size imbalance in favor of the inferior treatment.
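The conventional beta-binomial Thompson sampling implementation that this paper compares against can be sketched in a few lines. The function name, uniform Beta(1, 1) priors, and Monte Carlo approach below are illustrative assumptions, not the trial's actual code:

```python
import random

def thompson_allocation_prob(s1, n1, s2, n2, draws=10_000, a=1.0, b=1.0, seed=0):
    """Monte Carlo estimate of P(arm 1 response rate > arm 2 response rate)
    under independent Beta(a + successes, b + failures) posteriors."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(draws):
        p1 = rng.betavariate(a + s1, b + n1 - s1)  # posterior draw, arm 1
        p2 = rng.betavariate(a + s2, b + n2 - s2)  # posterior draw, arm 2
        wins += p1 > p2
    return wins / draws
```

In the response-adaptive literature, the target allocation ratio is often a tempered version of this probability, e.g. `prob**c / (prob**c + (1 - prob)**c)` for some tuning exponent `c`, to limit extreme imbalances of the kind the authors discuss.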


2009 ◽  
Vol 9 (4) ◽  
pp. 280-287 ◽  
Author(s):  
Keith Dunnigan ◽  
Dennis W. King

2019 ◽  
Author(s):  
Rob Cribbie ◽  
Nataly Beribisky ◽  
Udi Alter

Many professional bodies recommend that a sample planning procedure, such as a traditional NHST a priori power analysis, be conducted during the planning stages of a study. Power analysis allows the researcher to estimate how many participants are required in order to detect a minimally meaningful effect size at a specific level of power and Type I error rate. However, there are several drawbacks to the procedure that render it "a mess." Specifically, identifying the minimally meaningful effect size is often difficult yet unavoidable for conducting the procedure properly; the procedure is not precision oriented; and it does not guide the researcher to collect as many participants as is feasible. In this study, we explore how these three theoretical issues are reflected in applied psychological research in order to better understand whether they are concerns in practice. To investigate how power analysis is currently used, this study reviewed the reporting of 443 power analyses in high-impact psychology journals in 2016 and 2017. It was found that researchers rarely use the minimally meaningful effect size as a rationale for the chosen effect in a power analysis. Further, precision-based approaches and collecting the maximum feasible sample size are almost never used in tandem with power analyses. In light of these findings, we suggest that researchers focus on tools beyond traditional power analysis when sample planning, such as collecting the maximum sample size feasible.
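The a priori power analysis the authors critique reduces, in its simplest two-sample form, to a normal-approximation formula. A minimal sketch, with an illustrative function name and Cohen's d as the effect-size input:

```python
import math
from statistics import NormalDist

def n_per_group(d, alpha=0.05, power=0.80):
    """Per-group sample size for a two-sided, two-sample comparison of
    means at standardized effect size d (Cohen's d), using the normal
    approximation to the t-test: n = 2 * ((z_alpha + z_beta) / d)^2."""
    nd = NormalDist()
    z_alpha = nd.inv_cdf(1 - alpha / 2)  # two-sided critical value
    z_beta = nd.inv_cdf(power)           # quantile for target power
    return math.ceil(2 * ((z_alpha + z_beta) / d) ** 2)
```

This makes the authors' first complaint concrete: the answer is driven entirely by `d`, the minimally meaningful effect size, which researchers in the reviewed sample rarely justified.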


2020 ◽  
Vol 6 (2) ◽  
pp. 106-113
Author(s):  
A. M. Grjibovski ◽  
M. A. Gorbatova ◽  
A. N. Narkevich ◽  
K. A. Vinogradov

Sample size calculation in the planning phase is still uncommon in Russian research practice. This situation threatens the validity of conclusions and may introduce Type II errors, in which a false null hypothesis is accepted due to a lack of statistical power to detect an existing difference between the means. Comparing two means using unpaired Student's t-tests is the most common statistical procedure in the Russian biomedical literature. However, calculation of the minimal required sample size or retrospective calculation of statistical power was observed in only very few publications. In this paper we demonstrate how to calculate the required sample size for comparing means in unpaired samples using WinPepi and Stata software. In addition, we produce tables of the minimal required sample size for studies in which two means are to be compared and body mass index or blood pressure is the variable of interest. The tables were constructed for unpaired samples for different levels of statistical power, with standard deviations obtained from the literature.
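Tables of the kind the authors describe can be generated from the standard unpaired-means formula. A sketch, assuming an illustrative SD of 10 mmHg for systolic blood pressure (the paper's own tables use SDs drawn from the literature):

```python
import math
from statistics import NormalDist

def min_n_unpaired(delta, sd, alpha=0.05, power=0.80):
    """Minimal per-group n to detect a true difference `delta` between two
    means with common standard deviation `sd`, two-sided unpaired z
    approximation: n = 2 * (sd * (z_alpha + z_beta) / delta)^2."""
    nd = NormalDist()
    z_alpha = nd.inv_cdf(1 - alpha / 2)
    z_beta = nd.inv_cdf(power)
    return math.ceil(2 * (sd * (z_alpha + z_beta) / delta) ** 2)

# Illustrative table: rows are power levels, columns are detectable
# differences (mmHg) assuming SD = 10 mmHg.
for power in (0.80, 0.90, 0.95):
    print(power, [min_n_unpaired(delta, 10, power=power) for delta in (2, 5, 10)])
```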


Biostatistics ◽  
2019 ◽  
Author(s):  
Jon Arni Steingrimsson ◽  
Joshua Betz ◽  
Tianchen Qian ◽  
Michael Rosenblum

Summary We consider the problem of designing a confirmatory randomized trial for comparing two treatments versus a common control in two disjoint subpopulations. The subpopulations could be defined in terms of a biomarker or disease severity measured at baseline. The goal is to determine which treatments benefit which subpopulations. We develop a new class of adaptive enrichment designs tailored to solving this problem. Adaptive enrichment designs involve a preplanned rule for modifying enrollment based on accruing data in an ongoing trial. At the interim analysis after each stage, for each subpopulation, the preplanned rule may decide to stop enrollment or to stop randomizing participants to one or more study arms. The motivation for this adaptive feature is that interim data may indicate that a subpopulation, such as those with lower disease severity at baseline, is unlikely to benefit from a particular treatment while uncertainty remains for the other treatment and/or subpopulation. We optimize these adaptive designs to have the minimum expected sample size under power and Type I error constraints. We compare the performance of the optimized adaptive design versus an optimized nonadaptive (single stage) design. Our approach is demonstrated in simulation studies that mimic features of a completed trial of a medical device for treating heart failure. The optimized adaptive design has 25% smaller expected sample size compared to the optimized nonadaptive design; however, the cost is that the optimized adaptive design has 8% greater maximum sample size. Open-source software that implements the trial design optimization is provided, allowing users to investigate the tradeoffs in using the proposed adaptive versus standard designs.
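The expected-sample-size saving comes from stopping comparisons early. A toy two-stage version of that interim-futility mechanism for one (subpopulation, arm) comparison vs. control is sketched below; the stage sizes, threshold, and function names are illustrative, not the authors' optimized design:

```python
import random

def simulate_trial(p_ctrl, p_trt, n_stage=50, z_futility=0.0, seed=None):
    """One simulated two-stage trial: enroll stage 1, stop the comparison
    at the interim if its z-statistic falls below z_futility, otherwise
    enroll stage 2.  Returns the total sample size used."""
    rng = random.Random(seed)

    def stage(p, n):  # number of binary responses out of n subjects
        return sum(rng.random() < p for _ in range(n))

    s_c, s_t = stage(p_ctrl, n_stage), stage(p_trt, n_stage)
    pooled = (s_c + s_t) / (2 * n_stage)
    if pooled in (0.0, 1.0):
        z = 0.0  # degenerate interim data: treat as inconclusive
    else:
        se = (2 * pooled * (1 - pooled) / n_stage) ** 0.5
        z = ((s_t - s_c) / n_stage) / se  # pooled two-proportion z-statistic
    if z < z_futility:
        return 2 * n_stage  # enrollment stopped at the interim
    s_c += stage(p_ctrl, n_stage)  # stage 2 enrollment
    s_t += stage(p_trt, n_stage)
    return 4 * n_stage
```

Averaging the returned sizes over many simulated trials gives the expected sample size that the paper's optimization minimizes, under null and alternative response rates.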


1992 ◽  
Vol 71 (1) ◽  
pp. 3-14 ◽  
Author(s):  
John E. Overall ◽  
Robert S. Atlas

A statistical model for combining p values from multiple tests of significance is used to define rejection and acceptance regions for two-stage and three-stage sampling plans. Type I error rates, power, frequencies of early termination decisions, and expected sample sizes are compared. Both the two-stage and three-stage procedures provide appropriate protection against Type I errors. The two-stage sampling plan with its single interim analysis entails minimal loss in power and provides substantial reduction in expected sample size as compared with a conventional single end-of-study test of significance for which power is in the adequate range. The three-stage sampling plan with its two interim analyses introduces somewhat greater reduction in power, but it compensates with greater reduction in expected sample size. Either interim-analysis strategy is more efficient than a single end-of-study analysis in terms of power per unit of sample size.
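One classical model for combining p values from multiple tests, of the kind the abstract describes, is Fisher's method; it is shown here as an illustration of the idea, not necessarily the exact model the authors use. Because the chi-square degrees of freedom are always even (2k for k p values), the tail probability has a closed form needing only the standard library:

```python
import math

def fisher_combined_pvalue(pvals):
    """Fisher's method: X = -2 * sum(ln p_i) ~ chi-square with 2k df
    under the global null hypothesis."""
    k = len(pvals)
    x = -2.0 * sum(math.log(p) for p in pvals)
    # Closed-form survival function for even df:
    # P(chi2_{2k} > x) = exp(-x/2) * sum_{j=0}^{k-1} (x/2)^j / j!
    return math.exp(-x / 2) * sum((x / 2) ** j / math.factorial(j)
                                  for j in range(k))
```

Interim-analysis plans like the two-stage design above would compare the combined p value at each look against preplanned rejection and acceptance boundaries.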


2017 ◽  
Vol 35 (15_suppl) ◽  
pp. 6017-6017 ◽  
Author(s):  
William Nassib William ◽  
Lei Feng ◽  
Merrill S. Kies ◽  
Salmaan Ahmed ◽  
George R. Blumenschein ◽  
...  

6017 Background: In a single-arm, phase 2 study, we previously demonstrated that in pts with R/M HNSCC, cisplatin, docetaxel, and E improved progression-free survival (PFS) compared to historical data (Kim et al., ASCO 2006). Herein, we evaluated this regimen in a single-center, randomized, phase 2 trial. Methods: Pts with R/M HNSCC and a performance status (PS) of 0-2 were randomized (1:1) to receive up to 6 cycles of first-line chemotherapy with cisplatin 75 mg/m2 (or carboplatin AUC 6) and docetaxel 75 mg/m2 i.v. on day 1 every 21 days, plus placebo (P) vs. E 150 mg p.o. daily, followed by maintenance P or E until disease progression. The primary endpoint was PFS. With 120 pts, the study had 80% power to detect an improvement in median PFS from 3.0 to 4.9 months with a two-sided type I error rate of 0.1. Results: From 05/2010 to 07/2015, 120 pts were randomized to the P (N = 60) or E (N = 60) groups. All pts but one initiated treatment and were eligible for evaluation of the primary endpoint – 92 males; median age 62 years; 52 oropharynx, 40 oral cavity, 19 larynx, 8 hypopharynx cancer pts; 86 current/former smokers; 43 with recurrence within 6 months of completion of local treatment; 27 with prior exposure to EGFR inhibitors. Median PFS was 4.4 vs. 6.1 months for the P and E groups, respectively (hazard ratio [HR] 0.63, 95% confidence interval [CI] 0.42-0.95, p = 0.026). Response rates were 44% vs. 56% for P vs. E (p = 0.21). Median overall survival (OS) for P- and E-treated pts was 13.7 vs. 17.0 months (HR = 0.67, 95% CI 0.43-1.04, p = 0.07). Benefits from E on PFS and OS were more pronounced in pts with oropharyngeal tumors (p ≤ 0.05 for interaction). In the E group, first-cycle rash of grade 2-4 (34% of pts) was associated with longer OS (HR = 0.40, p = 0.02). E-treated pts experienced a higher incidence of grade 3-4 adverse events (33.9% vs. 53.3%), including diarrhea (3% vs. 17%), dehydration (5% vs. 15%), nausea (5% vs. 14%), and rash (0% vs. 12%).
Conclusions: This study met its primary endpoint. Addition of E to first-line platinum/docetaxel improved PFS and OS. This regimen may warrant further evaluation in randomized, phase 3 trials. Clinical trial information: NCT01064479.
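The stated design (80% power to detect median PFS improving from 3.0 to 4.9 months at two-sided α = 0.1) can be roughly sanity-checked with Schoenfeld's events formula, assuming exponential PFS so the hazard ratio equals the ratio of medians. This is a back-of-envelope check with an illustrative function name, not the trial's actual computation:

```python
import math
from statistics import NormalDist

def events_needed(median_ctrl, median_exp, alpha=0.05, power=0.80):
    """Schoenfeld approximation: total events for a 1:1 logrank test,
    d = 4 * (z_alpha + z_beta)^2 / (ln HR)^2, with HR from the ratio
    of exponential medians."""
    hr = median_ctrl / median_exp  # HR < 1 favors the experimental arm
    nd = NormalDist()
    z_alpha = nd.inv_cdf(1 - alpha / 2)
    z_beta = nd.inv_cdf(power)
    return math.ceil(4 * (z_alpha + z_beta) ** 2 / math.log(hr) ** 2)
```

With these inputs, `events_needed(3.0, 4.9, alpha=0.10)` gives roughly 103 events, which is broadly consistent with a 120-patient trial in which nearly all patients progress during follow-up.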

