Cancer immunotherapy trial design with delayed treatment effect

2019 ◽  
Vol 19 (3) ◽  
pp. 202-213
Author(s):  
Jianrong Wu ◽  
Jing Wei
2020 ◽  
pp. 096228022098078
Author(s):  
Bosheng Li ◽  
Liwen Su ◽  
Jun Gao ◽  
Liyun Jiang ◽  
Fangrong Yan

A delayed treatment effect is often observed in confirmatory trials of immunotherapies and is reflected by a delayed separation of the survival curves of the immunotherapy and control groups. This phenomenon violates the proportional hazards assumption, so a design based on the standard log-rank test loses power and is not appropriate. We therefore propose a group sequential design that allows early termination for efficacy, based on a more powerful piecewise weighted log-rank test, for an immunotherapy trial with a delayed treatment effect. We present an approach to group sequential monitoring in which the information time is defined by the number of events occurring after the delay time. Furthermore, we develop a one-dimensional search algorithm to determine the required maximum sample size for the proposed design, which uses an analytical estimate obtained from the inflation factor as the initial value and an empirical power function calculated by a simulation-based procedure as the objective function. Simulations demonstrate the unstable accuracy of the analytical estimate, the consistent accuracy of the maximum sample size determined by the search algorithm, and the sample-size savings offered by the proposed design.
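The two ingredients of this design can be sketched in a few lines: a weighted log-rank statistic that ignores events before the delay, and a one-dimensional search that steps the sample size up from an analytic initial guess until a simulated power target is met. The sketch below is a rough illustration, not the authors' algorithm; the hazard rates, delay, censoring time, and critical value are all hypothetical.

```python
import numpy as np

def sim_arm(n, lam, hr, delay, cens, rng):
    """Piecewise-exponential survival: hazard lam before `delay`,
    lam*hr afterwards; administrative censoring at `cens`."""
    e = rng.exponential(size=n)  # unit-rate exponentials
    t = np.where(e / lam < delay, e / lam,
                 delay + (e - lam * delay) / (lam * hr))
    return np.minimum(t, cens), t <= cens

def weighted_logrank_z(t0, d0, t1, d1, weight):
    """Weighted log-rank Z; weight(s) = 1{s > delay} gives a piecewise
    test that discards events occurring before the delay time."""
    t = np.concatenate([t0, t1]); d = np.concatenate([d0, d1])
    g = np.concatenate([np.zeros(len(t0)), np.ones(len(t1))])
    num = var = 0.0
    for s in np.unique(t[d]):
        at_risk = t >= s
        n_all, n1 = at_risk.sum(), (at_risk & (g == 1)).sum()
        dead = d & (t == s)
        d_all, d1s = dead.sum(), (dead & (g == 1)).sum()
        w = weight(s)
        num += w * (d1s - d_all * n1 / n_all)
        var += w * w * d_all * (n1 / n_all) * (1 - n1 / n_all) \
               * (n_all - d_all) / max(n_all - 1, 1)
    return num / np.sqrt(var) if var > 0 else 0.0

def empirical_power(n_per_arm, hr, delay=2.0, lam=0.1, cens=24.0,
                    nsim=200, seed=1):
    """Simulation-based power of the piecewise weighted log-rank test
    (one-sided level 0.025, so reject when Z < -1.96)."""
    rng = np.random.default_rng(seed)
    w = lambda s: 1.0 if s > delay else 0.0
    hits = 0
    for _ in range(nsim):
        t0, d0 = sim_arm(n_per_arm, lam, 1.0, delay, cens, rng)
        t1, d1 = sim_arm(n_per_arm, lam, hr, delay, cens, rng)
        hits += weighted_logrank_z(t0, d0, t1, d1, w) < -1.96
    return hits / nsim

def search_n(target=0.8, n0=50, step=25, **kw):
    """Crude one-dimensional search: start from an analytic guess n0
    (e.g. an inflation-factor estimate) and step up until the simulated
    power reaches the target."""
    n = n0
    while empirical_power(n, **kw) < target and n < 2000:
        n += step
    return n
```

The paper's algorithm treats the simulated power as the objective function of the search; here the coarse step-up loop stands in for it.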


Stroke ◽  
2016 ◽  
Vol 47 (suppl_1) ◽  
Author(s):  
Maarten Lansberg ◽  
Ninad Bhat ◽  
Joseph P Broderick ◽  
Yuko Y Palesch ◽  
Philip W Lavori ◽  
...  

Introduction: It is difficult to choose trial enrollment criteria that will yield a robust treatment effect. To address this problem, we developed a novel trial design that restricts the enrollment criteria to the patient subgroup most likely to show benefit if an interim analysis indicates futility in the overall sample. Future recruitment, and the population in which the primary hypothesis is tested, are then limited to the selected subgroup. Hypothesis: A design with adaptive subgroup selection increases the power of endovascular stroke studies. Methods: We ran simulations to compare the power of the adaptive design with that of a traditional design. Trial parameters were: type I error 0.025, type II error 0.1, and analyses after 450, 675, and 900 patients (the interim and final analyses in IMS III). Outcome data were based on 90-day mRS scores observed in IMS III among patients with a vessel occlusion on baseline CTA (n=289). Subgroups were defined a priori according to vessel occlusion (ICA ± distal occlusion vs M1 vs M2-4), onset-to-randomization time (early vs late), and treatment allocation (IA+IV vs IV alone). The treatment effect in the overall cohort was a mean mRS improvement of 0.15 (2.41 for IV+IA vs 2.56 for IV alone; SD 1.45). The subgroup treatment effects were: early ICA = 0.54, late ICA = 0.60, early M1 = 0.33, late M1 = 0.07, early M2-4 = -0.66, and late M2-4 = -0.35. Results: The traditional design showed a treatment benefit in 31% of simulations. The adaptive design showed benefit in 91% of simulations, failed to show benefit after enrollment of the maximum sample in 1%, and stopped early for futility in 8%. The adaptive trial stopped early for benefit in 84% of simulations. Because of early stopping, the mean number of patients randomized was 590±140 with the adaptive design vs 900 with the traditional design. Of the adaptive trial simulations that showed benefit, 91% occurred after subgroup selection. The subgroup selected most often (in 31% of all simulations) comprised the early and late ICA patients. Conclusions: A trial with adaptive subgroup selection can efficiently test the effect of endovascular stroke treatment. Simulations suggest that with this design, IMS III would have had 91% power and would typically have stopped early after an interim analysis showed benefit in a patient subgroup.
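A toy version of the adaptive subgroup-selection step can be simulated directly. The sketch below uses the subgroup effect sizes and SD quoted in the abstract, but the equal subgroup frequencies, single interim, selection rule, and efficacy/futility boundaries are illustrative assumptions, not the published design.

```python
import numpy as np

# Subgroup mean mRS improvements and SD are taken from the abstract;
# mRS improvement is modeled as approximately normal for illustration.
EFFECTS = {"early ICA": 0.54, "late ICA": 0.60, "early M1": 0.33,
           "late M1": 0.07, "early M2-4": -0.66, "late M2-4": -0.35}
SD = 1.45

def simulate_trial(rng, effects=EFFECTS, n_interim=450, n_max=900,
                   z_eff=2.24, z_fut=1.0):
    """One simulated trial with a single interim look: test the overall
    effect; if it looks futile, restrict further enrolment (and the final
    test) to the subgroup with the best interim estimate. The boundaries
    z_eff / z_fut are hypothetical, not the published design's."""
    names = list(effects)
    def enroll(n, groups):
        sub = rng.choice(groups, size=n)          # equal frequencies (assumed)
        arm = rng.integers(0, 2, size=n)          # 1 = endovascular, 0 = control
        y = rng.normal(0.0, SD, size=n) + arm * np.array([effects[s] for s in sub])
        return sub, arm, y
    def zstat(arm, y):
        n1, n0 = (arm == 1).sum(), (arm == 0).sum()
        return (y[arm == 1].mean() - y[arm == 0].mean()) / (SD * np.sqrt(1/n1 + 1/n0))
    sub, arm, y = enroll(n_interim, names)
    z = zstat(arm, y)
    if z >= z_eff:
        return "benefit at interim"
    if z < z_fut:                                  # futile overall: select subgroup
        est = {g: y[(sub == g) & (arm == 1)].mean()
                  - y[(sub == g) & (arm == 0)].mean() for g in names}
        best = max(est, key=est.get)
        keep = sub == best                         # only selected patients count
        _, arm2, y2 = enroll(n_max - n_interim, [best])
        arm, y = np.concatenate([arm[keep], arm2]), np.concatenate([y[keep], y2])
    else:                                          # promising overall: continue as planned
        _, arm2, y2 = enroll(n_max - n_interim, names)
        arm, y = np.concatenate([arm, arm2]), np.concatenate([y, y2])
    return "benefit" if zstat(arm, y) >= z_eff else "no benefit"
```

Averaging the outcome over many simulated trials gives an empirical power estimate, which is how the comparison with the traditional fixed-population design was made. Note that testing a subgroup selected on interim data with the interim data included inflates the type I error somewhat; the published design controls this formally.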


2019 ◽  
Vol 111 (11) ◽  
pp. 1186-1191 ◽  
Author(s):  
Julien Péron ◽  
Alexandre Lambert ◽  
Stephane Munier ◽  
Brice Ozenne ◽  
Joris Giai ◽  
...  

Background: The treatment effect in survival analysis is commonly quantified as the hazard ratio and tested statistically using the standard log-rank test. Modern anticancer immunotherapies are successful in a proportion of patients, who remain alive even after long-term follow-up. This new phenomenon induces nonproportionality of the underlying hazards of death. Methods: The properties of the net survival benefit were illustrated using the dataset from a trial evaluating ipilimumab in metastatic melanoma. The net survival benefit was then investigated through simulated datasets under typical scenarios of proportional hazards, delayed treatment effect, and cure rate. The net survival benefit test was computed according to the value of the minimal survival difference considered clinically relevant. As comparators, the standard and weighted log-rank tests were also performed. Results: In the illustrative dataset, the net survival benefit favored ipilimumab (Δ(0) = 15.8%, 95% confidence interval = 4.6% to 27.3%, P = .006). This favorable effect was maintained when the analysis focused on long-term survival differences (e.g., beyond 12 months: Δ(12) = 12.5%, 95% confidence interval = 4.4% to 20.6%, P = .002). Under the delayed-treatment-effect and cure-rate scenarios, the power of the net survival benefit test compared favorably with that of the standard log-rank test and was comparable to that of the weighted log-rank test for large values of the threshold of clinical relevance. Conclusion: The net long-term survival benefit is a measure of treatment effect that is meaningful whether or not hazards are proportional. The associated statistical test is more powerful than the standard log-rank test when a delayed treatment effect is anticipated.
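In the absence of censoring, the point estimate Δ(τ) reduces to a generalized pairwise comparison of survival times: the proportion of (treated, control) pairs in which the treated patient survives at least τ longer, minus the proportion in which the control patient does. The sketch below computes exactly that; it ignores censoring, whereas the published method scores non-evaluable pairs with a survival-model-based procedure.

```python
import numpy as np

def net_survival_benefit(t_trt, t_ctl, tau):
    """Net survival benefit Delta(tau) over all (treated, control) pairs,
    assuming fully observed survival times (no censoring):
    P(T_trt > T_ctl + tau) - P(T_ctl > T_trt + tau)."""
    diff = np.subtract.outer(np.asarray(t_trt, float),
                             np.asarray(t_ctl, float))
    wins = (diff > tau).mean()      # treated patient lives > tau longer
    losses = (diff < -tau).mean()   # control patient lives > tau longer
    return wins - losses
```

Raising τ focuses the comparison on survival differences of at least that size, which is how the analysis above isolates long-term (e.g., >12-month) differences; Δ(0) recovers the untresholded net benefit.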

