group sequential
Recently Published Documents

Total documents: 598 (last five years: 88)
H-index: 48 (last five years: 4)

2021
Author(s): Pranab Ghosh, Robin Ristl, Franz König, Martin Posch, Christopher Jennison, ...

Stats, 2021, Vol. 4 (4), pp. 1080-1090
Author(s): Oke Gerke, Sören Möller

Bland–Altman agreement analysis has gained widespread application across disciplines, not least in the health sciences, since its inception in the 1980s. Bayesian analysis has been on the rise thanks to growing computational power, and Alari, Kim, and Wand have put Bland–Altman Limits of Agreement in a Bayesian framework (Meas. Phys. Educ. Exerc. Sci. 2021, 25, 137–148). We contrasted the prediction of a single future observation and the estimation of the Limits of Agreement from the frequentist and a Bayesian perspective by analyzing interrater data from two sequentially conducted preclinical studies. The estimation of the Limits of Agreement θ1 and θ2 has wider applicability than the prediction of single future differences. While a frequentist confidence interval represents a range of nonrejectable values for null hypothesis significance testing of H0: θ1 ≤ −δ or θ2 ≥ δ against H1: θ1 > −δ and θ2 < δ, with a predefined benchmark value δ, Bayesian analysis allows for direct interpretation of both the posterior probability of the alternative hypothesis and the likelihood of parameter values. We briefly discuss group-sequential testing and nonparametric alternatives. Given today's computational resources, frequentist simplicity no longer outweighs Bayesian interpretability, but the elicitation and implementation of prior information demand caution. Accounting for clustered data (e.g., repeated measurements per subject) is well established in frequentist, but not yet in Bayesian, Bland–Altman analysis.
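For readers who want the frequentist baseline of this comparison in executable form, here is a minimal Python sketch of the classical Bland–Altman Limits of Agreement with approximate confidence intervals (using the standard error approximation from Bland and Altman's original work). It is not the Bayesian analysis of Alari, Kim, and Wand, and the data and function names are illustrative, not from the two preclinical studies.

```python
import numpy as np
from scipy import stats

def bland_altman_loa(x, y, agreement=1.96, ci=0.95):
    """Frequentist Bland-Altman limits of agreement with approximate CIs.

    A minimal sketch of the classical frequentist analysis contrasted in
    the abstract; names and data below are illustrative only.
    """
    d = np.asarray(x, dtype=float) - np.asarray(y, dtype=float)
    n = d.size
    mean_d, sd_d = d.mean(), d.std(ddof=1)
    loa = (mean_d - agreement * sd_d, mean_d + agreement * sd_d)
    # Approximate standard error of each limit: sd * sqrt(1/n + z^2 / (2(n-1)))
    se_loa = sd_d * np.sqrt(1.0 / n + agreement**2 / (2 * (n - 1)))
    t = stats.t.ppf(0.5 + ci / 2, df=n - 1)
    loa_ci = [(lim - t * se_loa, lim + t * se_loa) for lim in loa]
    return loa, loa_ci

# Hypothetical interrater data (not from the studies analyzed in the paper)
rng = np.random.default_rng(1)
rater_a = rng.normal(10.0, 2.0, size=40)
rater_b = rater_a + rng.normal(0.2, 0.5, size=40)
(loa_low, loa_high), (ci_low, ci_high) = bland_altman_loa(rater_a, rater_b)
print(f"LoA: [{loa_low:.2f}, {loa_high:.2f}]")
print(f"95% CI, lower limit: ({ci_low[0]:.2f}, {ci_low[1]:.2f})")
print(f"95% CI, upper limit: ({ci_high[0]:.2f}, {ci_high[1]:.2f})")
```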


2021
Author(s): Daniel Lakens

Psychological science would become more efficient if researchers implemented sequential designs where feasible. Miller and Ulrich (2020) propose an independent segments procedure in which data can be analyzed at a prespecified number of equally spaced looks while controlling the Type 1 error rate. Such procedures already exist in the sequential analysis literature, and in this commentary I reflect on whether psychologists should adopt those existing procedures instead. I believe limitations in the independent segments procedure make it relatively unattractive. Being forced to stop for futility based on a bound not chosen to control Type 2 errors, or to reject a smallest effect size of interest in an equivalence test, limits the inferences one can make. Having to use a prespecified number of equally spaced looks is logistically inconvenient. And not having the flexibility to choose α and β spending functions limits the possibility of designing efficient studies based on the goals and constraints of the researcher. Recent software packages such as rpact (Wassmer & Pahlke, 2019) make sequential designs as easy to perform as the independent segments procedure. While learning new statistical methods always takes time, I believe psychological scientists should start on a path that will not limit the flexibility and inferences their statistical procedures provide.
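To illustrate the kind of calculation such software performs, the Python sketch below calibrates a Pocock-style constant boundary for equally spaced looks by Monte Carlo simulation under the null hypothesis. It is a didactic stand-in, not the rpact implementation (rpact is an R package and computes boundaries numerically rather than by simulation); it exploits the fact that with equal information increments the cumulative z-statistics satisfy Z_k = S_k / sqrt(k) for a sum S_k of independent standard normal increments.

```python
import numpy as np

def pocock_bound(k_looks=4, alpha=0.05, n_sim=200_000, seed=0):
    """Monte Carlo calibration of a constant (Pocock-style) two-sided
    boundary for k equally spaced interim looks under H0."""
    rng = np.random.default_rng(seed)
    # Independent N(0,1) increments; Z_k = S_k / sqrt(k) at look k
    increments = rng.standard_normal((n_sim, k_looks))
    z_paths = increments.cumsum(axis=1) / np.sqrt(np.arange(1, k_looks + 1))
    max_abs_z = np.abs(z_paths).max(axis=1)

    def crossing_prob(c):
        # Probability of crossing the boundary at any look under H0
        return (max_abs_z >= c).mean()

    lo, hi = 1.5, 4.0  # bisection bracket for the critical value
    for _ in range(60):
        mid = (lo + hi) / 2
        if crossing_prob(mid) > alpha:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

c = pocock_bound()
print(f"Pocock boundary for 4 looks, alpha = 0.05: z ~ {c:.3f}")  # ~2.36
```

The simulated value should approach the textbook Pocock constant of about 2.361 for four looks at a two-sided alpha of 0.05.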


2021
pp. 0272989X2110450
Author(s): Laura Flight, Steven Julious, Alan Brennan, Susan Todd

Introduction: Adaptive designs allow changes to an ongoing trial based on prespecified early examinations of accrued data. Opportunities are potentially being missed to incorporate health economic considerations into the design of these studies.

Methods: We describe how to estimate the expected value of sample information for group sequential adaptive trial designs. We operationalize this approach in a hypothetical case study using data from a pilot trial. We report the expected value of sample information and expected net benefit of sampling results for 5 design options for the future full-scale trial, including the fixed-sample-size design and the group sequential design using either the Pocock stopping rule or the O'Brien-Fleming stopping rule with 2 or 5 analyses. We considered 2 scenarios: 1) using the cost-effectiveness model with a traditional approach to the health economic analysis and 2) adjusting the cost-effectiveness analysis to incorporate the bias-adjusted maximum likelihood estimates of trial outcomes, to account for the bias that can be generated in adaptive trials.

Results: The case study demonstrated that the methods developed could be successfully applied in practice. The O'Brien-Fleming stopping rule with 2 analyses was the most efficient design, with the highest expected net benefit of sampling in the case study.

Conclusions: Cost-effectiveness considerations are unavoidable in budget-constrained, publicly funded health care systems, and adaptive designs can provide an alternative to costly fixed-sample-size designs. We recommend that when planning a clinical trial, expected value of sample information methods be used to compare possible adaptive and nonadaptive trial designs, with appropriate adjustment, to help justify the choice of design characteristics and ensure the cost-effective use of research funding.

Highlights: Opportunities are potentially being missed to incorporate health economic considerations into the design of adaptive clinical trials. Existing expected value of sample information analysis methods can be extended to compare possible group sequential and nonadaptive trial designs when planning a clinical trial. We recommend that adjusted analyses be presented to control for the potential impact of the adaptive design and to maintain the accuracy of the calculations. This approach can help to justify the choice of design characteristics and ensure the cost-effective use of limited research funding.
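As a toy illustration of the comparison the authors recommend, the Python sketch below ranks design options by expected net benefit of sampling, computed as population-scaled EVSI minus the expected trial cost (where a group sequential design's expected cost reflects its expected sample size, since early stopping saves recruitment costs). Every number here is hypothetical; none are taken from the paper's case study.

```python
# ENBS = population-scaled EVSI minus expected trial cost.
# All inputs below are made-up placeholders for illustration.
FIXED_COST = 500_000      # hypothetical trial setup cost
COST_PER_PATIENT = 1_500  # hypothetical per-patient cost
POPULATION = 10_000       # hypothetical population to benefit from the information

designs = {
    "fixed sample size":           {"evsi_per_person": 310.0, "expected_n": 500},
    "Pocock, 2 analyses":          {"evsi_per_person": 295.0, "expected_n": 420},
    "Pocock, 5 analyses":          {"evsi_per_person": 285.0, "expected_n": 390},
    "O'Brien-Fleming, 2 analyses": {"evsi_per_person": 305.0, "expected_n": 460},
    "O'Brien-Fleming, 5 analyses": {"evsi_per_person": 300.0, "expected_n": 440},
}

for name, d in designs.items():
    expected_cost = FIXED_COST + COST_PER_PATIENT * d["expected_n"]
    enbs = d["evsi_per_person"] * POPULATION - expected_cost
    print(f"{name:28s} ENBS = {enbs:>12,.0f}")
```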


2021
Author(s): Ales Kotalik, David M. Vock, Brian P. Hobbs, Joseph S. Koopmeiners

2021
pp. 235-268
Author(s): Ekkehard Glimm, Lisa V. Hampson

2021
pp. 096228022110432
Author(s): Jannik Feld, Andreas Faldum, Rene Schmidt

Whereas the theory of confirmatory adaptive designs is well understood for uncensored data, implementation of adaptive designs in the context of survival trials remains challenging. Commonly used adaptive survival tests are based on the independent increments structure of the log-rank statistic. This implies some relevant limitations: on the one hand, essentially only the interim log-rank statistic may be used for design modifications (such as data-dependent sample size recalculation); on the other hand, the treatment arm allocation ratio in these classical methods is assumed to be constant throughout the trial period. Here, we propose an extension of the independent increments approach to adaptive survival tests that addresses some of these limitations. We present a confirmatory adaptive two-sample log-rank test that allows rejection regions and sample size recalculation rules to be based not only on the interim log-rank statistic but also, simultaneously, on point-wise survival rate estimates. In addition, the treatment arm allocation ratio may be adapted after each interim analysis in a data-dependent way. The ability to include point-wise survival rate estimators in the rejection region of a test for comparing survival curves might be attractive, e.g., for seamless phase II/III designs. Data-dependent adaptation of the allocation ratio could be helpful in multi-arm trials in order to successively steer recruitment into the study arms with the greatest chances of success. The methodology is motivated by the LOGGIC Europe Trial from pediatric oncology. Distributional properties are derived using martingale techniques in the large sample limit. Small sample properties are studied by simulation.
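For orientation, the following numpy sketch computes the standardized two-sample log-rank statistic whose independent increments structure the classical adaptive methods rely on. It is a textbook implementation for illustration, not the authors' extended test; the variable names and the interim-cutoff usage shown in the trailing comment are assumptions.

```python
import numpy as np

def logrank_z(time, event, group):
    """Standardized two-sample log-rank statistic.

    time  : event or censoring times
    event : 1 if the event was observed, 0 if censored
    group : 0 or 1, identifying the two treatment arms
    """
    time, event, group = map(np.asarray, (time, event, group))
    o_minus_e, var = 0.0, 0.0
    for t in np.unique(time[event == 1]):
        at_risk = time >= t
        n = at_risk.sum()                       # total at risk at time t
        n1 = (at_risk & (group == 1)).sum()     # at risk in arm 1
        died = (time == t) & (event == 1)
        d = died.sum()                          # total events at time t
        d1 = (died & (group == 1)).sum()        # events in arm 1
        o_minus_e += d1 - d * n1 / n            # observed minus expected
        if n > 1:
            # Hypergeometric variance contribution at time t
            var += d * (n1 / n) * (1 - n1 / n) * (n - d) / (n - 1)
    return o_minus_e / np.sqrt(var)

# At an interim analysis the statistic uses only the data observed so far;
# with administrative censoring at a hypothetical interim cutoff it could
# be invoked as, e.g.:
# z_interim = logrank_z(np.minimum(time, cutoff),
#                       event * (time <= cutoff), group)
```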

