The DURATIONS randomised trial design: Estimation targets, analysis methods and operating characteristics

2020
Vol 17 (6)
pp. 644-653
Author(s): Matteo Quartagno, James R Carpenter, A Sarah Walker, Michelle Clements, Mahesh KB Parmar

Background: Designing trials to reduce treatment duration is important in several therapeutic areas, including tuberculosis and bacterial infections. We recently proposed a new randomised trial design to overcome some of the limitations of standard two-arm non-inferiority trials. This DURATIONS design involves randomising patients to a number of duration arms and modelling the so-called ‘duration-response curve’. This article investigates the operating characteristics (type-1 and type-2 errors) of different statistical methods of drawing inference from the estimated curve. Methods: Our first estimation target is the shortest duration non-inferior to the control (maximum) duration within a specific risk difference margin. We compare different methods of estimating this quantity, including model confidence bands, the delta method and the bootstrap. We then explore the generalisability of results to estimation targets that focus on absolute event rates, the risk ratio and the gradient of the curve. Results: We show through simulations that, in most scenarios and for most of the estimation targets, using the bootstrap to estimate variability around the target duration gives good results for the design-appropriate analogues of power and type-1 error. Using model confidence bands is not recommended, while the delta method leads to inflated type-1 error in some scenarios, particularly when the optimal duration is very close to one of the randomised durations. Conclusions: Using the bootstrap to estimate the optimal duration in a DURATIONS design has good operating characteristics in a wide range of scenarios and can be used with confidence by researchers wishing to design a DURATIONS trial to reduce treatment duration. Uncertainty around several different targets can be estimated with this bootstrap approach.
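As a concrete illustration of the bootstrap approach, the sketch below simulates a seven-arm DURATIONS-style trial, fits an assumed logistic duration-response curve to arm-level cure proportions by least squares, reads off the shortest duration within a 5% risk-difference margin of the longest (control) duration, and uses a parametric bootstrap for an interval around that target. All numbers (arms, sample sizes, curve parameters) are hypothetical, and the least-squares fit is a stand-in for the likelihood-based modelling in the article.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(1)

durations = np.array([8., 10., 12., 14., 16., 18., 20.])  # hypothetical arms (weeks)
n_per_arm = 70

def curve(d, a, b):
    # Assumed logistic duration-response curve for the probability of cure
    return 1.0 / (1.0 + np.exp(-(a + b * d)))

true_p = curve(durations, -1.0, 0.25)
cures = rng.binomial(n_per_arm, true_p)  # one simulated trial

def shortest_noninferior(cures, margin=0.05):
    """Fit the curve to observed proportions and return the shortest duration
    whose fitted risk is within `margin` of the control (longest) duration."""
    p_hat = cures / n_per_arm
    (a, b), _ = curve_fit(curve, durations, p_hat, p0=[0.0, 0.1], maxfev=5000)
    grid = np.linspace(durations.min(), durations.max(), 500)
    ok = grid[curve(grid, a, b) >= curve(durations.max(), a, b) - margin]
    return ok.min() if ok.size else durations.max()

est = shortest_noninferior(cures)

# Parametric bootstrap: redraw arm-level outcomes from the observed proportions
boot = np.array([shortest_noninferior(rng.binomial(n_per_arm, cures / n_per_arm))
                 for _ in range(1000)])
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"shortest non-inferior duration: {est:.1f} weeks "
      f"(95% bootstrap interval {lo:.1f}-{hi:.1f})")
```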

2018
Vol 15 (5)
pp. 477-488
Author(s): Matteo Quartagno, A Sarah Walker, James R Carpenter, Patrick PJ Phillips, Mahesh KB Parmar

Background Trials to identify the minimal effective treatment duration are needed in different therapeutic areas, including bacterial infections, tuberculosis and hepatitis C. However, standard non-inferiority designs have several limitations, including arbitrariness of non-inferiority margins, choice of research arms and very large sample sizes. Methods We recast the problem of finding an appropriate non-inferior treatment duration in terms of modelling the entire duration–response curve within a pre-specified range. We propose a multi-arm randomised trial design, allocating patients to different treatment durations. We use fractional polynomials and spline-based methods to flexibly model the duration–response curve. We call this a ‘Durations design’. We compare different methods in terms of a scaled version of the area between the true and estimated prediction curves. We evaluate sensitivity to key design parameters, including sample size and the number and position of arms. Results A total sample size of ~500 patients divided across a moderate number (5–7) of equidistant arms is sufficient to estimate the duration–response curve within a 5% error margin in 95% of the simulations. Fractional polynomials provide similar or better results than spline-based methods in most scenarios. Conclusion Our proposed practical randomised trial ‘Durations design’ shows promising performance in the estimation of the duration–response curve; pending a careful investigation of its inferential properties, it provides a potential alternative to standard non-inferiority designs, avoiding many of their limitations while remaining fairly robust to different possible duration–response curves. The trial outcome is the whole duration–response curve, which may be used by clinicians and policymakers to make informed decisions, facilitating a move away from a forced binary hypothesis testing paradigm.
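A minimal sketch of the curve-fitting step, assuming the conventional fractional-polynomial power set and arm-level cure proportions: it selects the best-fitting second-degree fractional polynomial (FP2) by least squares and computes a scaled area between the true and estimated curves, loosely analogous to the performance measure used in the paper (which would fit the model on the likelihood scale). All data and curve parameters are invented.

```python
import numpy as np
from itertools import combinations_with_replacement

rng = np.random.default_rng(2)
powers = [-2, -1, -0.5, 0, 0.5, 1, 2, 3]   # conventional FP power set

def fp_term(d, p):
    # Fractional-polynomial transform: d^p, with p = 0 read as log(d)
    return np.log(d) if p == 0 else d ** p

def fp2_design(d, p1, p2):
    # FP2 design matrix; repeated powers use the d^p and d^p*log(d) convention
    second = fp_term(d, p2) if p1 != p2 else fp_term(d, p1) * np.log(d)
    return np.column_stack([np.ones_like(d), fp_term(d, p1), second])

durations = np.array([8., 10., 12., 14., 16., 18., 20.])
true_curve = lambda d: 0.95 - 0.6 * np.exp(-0.25 * (d - 6))
y = rng.binomial(70, true_curve(durations)) / 70   # observed cure proportions

best = None
for p1, p2 in combinations_with_replacement(powers, 2):
    X = fp2_design(durations, p1, p2)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = np.sum((X @ beta - y) ** 2)
    if best is None or rss < best[0]:
        best = (rss, (p1, p2), beta)

rss, (p1, p2), beta = best
grid = np.linspace(8, 20, 200)
fit = fp2_design(grid, p1, p2) @ beta

# Scaled area between true and estimated curves (the paper's style of metric)
area = np.trapz(np.abs(fit - true_curve(grid)), grid) / (grid[-1] - grid[0])
print(f"best FP2 powers ({p1}, {p2}); mean absolute curve error {area:.3f}")
```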


2021
Vol 16 (1)
Author(s): Eve Fouarge, Arnaud Monseur, Bruno Boulanger, Mélanie Annoussamy, et al.

Abstract Background Centronuclear myopathies are severe rare congenital diseases. The clinical variability and genetic heterogeneity of these myopathies result in major challenges in clinical trial design. Alternative strategies to large placebo-controlled trials that have been used in other rare diseases (e.g., the use of surrogate markers or of historical controls) have limitations that Bayesian statistics may address. Here we present a Bayesian model that uses each patient’s own natural history study data to predict progression in the absence of treatment. This prospective multicentre natural history study evaluated 4-year follow-up data from 59 patients carrying mutations in the MTM1 or DNM2 genes. Methods Our approach focused on evaluation of forced expiratory volume in 1 s (FEV1) in 6- to 18-year-old children. A patient was defined as a responder if an improvement was observed after treatment and the predictive probability of such improvement in the absence of intervention was less than 0.01. An FEV1 response was considered clinically relevant if it corresponded to an increase of more than 8%. Results The key endpoint of a clinical trial using this model is the rate of response. The power of the study is based on the posterior probability that the observed rate of response is greater than the rate of response that would be observed in the absence of treatment, predicted from each patient’s previous natural history. In order to appropriately control for Type 1 error, the threshold for the posterior probability that the difference in response rates exceeds zero was set at 91%, ensuring a 5% overall Type 1 error rate for the trial. Conclusions Bayesian statistical analysis of natural history data allowed us to reliably simulate the evolution of symptoms for individual patients over time and to probabilistically compare these simulated trajectories to actual observed post-treatment outcomes. The proposed model adequately predicted the natural evolution of patients over the duration of the study and will facilitate a sufficiently powerful trial design that can cope with the disease’s rarity. Further research and ongoing dialog with regulatory authorities are needed to allow for more applications of Bayesian statistics in orphan disease research.
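The sketch below illustrates the responder logic on a single hypothetical patient, using a crude Bayesian linear model of the natural-history FEV1 trajectory in place of the paper's model: posterior draws of the trajectory give a predictive probability of a spontaneous >8% improvement, and a patient counts as a responder only if the observed post-treatment change exceeds 8% while that predictive probability is below 0.01. The data, the normal approximation, and the reading of "8%" as relative to the last observed value are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical natural-history data: one patient's yearly FEV1 (% predicted)
years = np.array([0., 1., 2., 3.])
fev1 = np.array([62., 60., 59., 57.])   # gently declining, invented values

# Crude Bayesian linear trend with a flat prior: normal approximation to the
# posterior of (intercept, slope), standing in for the paper's actual model
X = np.column_stack([np.ones_like(years), years])
beta_hat, *_ = np.linalg.lstsq(X, fev1, rcond=None)
sigma = (fev1 - X @ beta_hat).std(ddof=2)
cov = sigma**2 * np.linalg.inv(X.T @ X)

n_sim = 20_000
draws = rng.multivariate_normal(beta_hat, cov, size=n_sim)

# Predictive distribution of the FEV1 change at year 4 WITHOUT treatment
pred_change = (draws @ np.array([1.0, 4.0])
               + rng.normal(0, sigma, n_sim)) - fev1[-1]

threshold = 0.08 * fev1[-1]     # clinically relevant: > 8% increase (assumed relative)
p_spont = np.mean(pred_change > threshold)

observed_change = 7.0           # hypothetical post-treatment improvement
is_responder = (observed_change > threshold) and (p_spont < 0.01)
print(f"P(spontaneous >8% improvement) = {p_spont:.4f}; responder: {is_responder}")
```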


2021
Vol 21 (1)
Author(s): Lior Rennert, Moonseong Heo, Alain H. Litwin, Victor De Gruttola

Abstract Background Beginning in 2019, stepped-wedge designs (SWDs) have been used in the investigation of interventions to reduce opioid-related deaths in communities across the United States. However, these interventions are competing with external factors such as newly initiated public policies limiting opioid prescriptions, media awareness campaigns, and the COVID-19 pandemic. Furthermore, control communities may prematurely adopt components of the intervention as they become available. The presence of time-varying external factors that impact study outcomes is a well-known limitation of SWDs; common approaches to adjusting for them make use of a mixed effects modeling framework. However, these models have several shortcomings when external factors differentially impact intervention and control clusters. Methods We discuss limitations of commonly used mixed effects models in the context of proposed SWDs to investigate interventions intended to reduce opioid-related mortality, and propose extensions of these models to address these limitations. We conduct an extensive simulation study of anticipated data from SWD trials targeting the current opioid epidemic in order to examine the performance of these models in the presence of external factors. We consider confounding by time, premature adoption of intervention components, and time-varying effect modification, in which external factors differentially impact intervention and control clusters. Results In the presence of confounding by time, commonly used mixed effects models yield unbiased intervention effect estimates, but can have inflated Type 1 error and result in undercoverage of confidence intervals. These models yield biased intervention effect estimates when premature intervention adoption or effect modification are present. In such scenarios, models incorporating fixed intervention-by-time interactions with an unstructured covariance for intervention-by-cluster-by-time random effects result in unbiased intervention effect estimates, reach nominal confidence interval coverage, and preserve Type 1 error. Conclusions Mixed effects models can adjust for different combinations of external factors through correct specification of fixed and random time effects. Since model choice has considerable impact on validity of results and study power, careful consideration must be given to how these external factors impact study endpoints and what estimands are most appropriate in the presence of such factors.
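As a toy illustration, the sketch below simulates a small stepped-wedge trial with a secular time trend (confounding by time) and fits a mixed effects model with fixed period effects and a random cluster intercept via statsmodels. The richer specifications recommended in the paper additionally include fixed intervention-by-time interactions and intervention-by-cluster-by-time random effects with an unstructured covariance; the layout and effect sizes here are invented.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)

# Toy stepped-wedge layout: 6 clusters cross over to intervention at staggered steps
n_clusters, n_periods = 6, 7
rows = []
for c in range(n_clusters):
    u_c = rng.normal(0, 0.5)             # cluster random intercept
    for t in range(n_periods):
        treated = int(t > c)             # cluster c crosses over after period c
        secular = -0.15 * t              # external time trend (confounding by time)
        y = 5 + u_c + secular - 0.4 * treated + rng.normal(0, 0.3)
        rows.append(dict(cluster=c, period=t, treated=treated, y=y))
df = pd.DataFrame(rows)

# Fixed period effects adjust for the secular trend; random cluster intercepts
# capture between-cluster variation
m = smf.mixedlm("y ~ treated + C(period)", df, groups=df["cluster"]).fit()
print(f"intervention effect estimate: {m.params['treated']:.3f} "
      f"(SE {m.bse['treated']:.3f}; truth -0.4)")
```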


2014
Vol 56 (4)
pp. 614-630
Author(s): Alexandra C. Graf, Peter Bauer, Ekkehard Glimm, Franz Koenig

2017
Vol 28 (7)
pp. 2015-2031
Author(s): Hao Liu, Xiao Lin, Xuelin Huang

In oncology clinical trials, both short-term response and long-term survival are important. We propose an urn-based adaptive randomization design to incorporate both outcomes. Short-term response can update the randomization probability quickly to benefit the trial participants, while the long-term survival outcome can also shift the randomization to favor the treatment arm with definitive therapeutic benefit. Using a generalized Friedman’s urn, we derive an explicit formula for the limiting distribution of the number of subjects assigned to each arm. With prior or hypothetical knowledge of treatment effects, this formula can be used to guide the selection of parameters for the proposed design to achieve desirable patient number ratios between different treatment arms, and thus optimize the operating characteristics of the trial design. Simulation studies show that the proposed design successfully assigns more patients to the treatment arms with better short-term tumor response, better long-term survival, or both.
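A minimal sketch of the urn mechanism using only the short-term response component: a two-arm randomized-play-the-winner urn, a special case of the generalized Friedman's urn, for which the limiting allocation has a well-known closed form (arm k receives the fraction proportional to the other arm's failure probability). The response rates and urn parameters are hypothetical, and the paper's design additionally updates the urn with long-term survival information.

```python
import numpy as np

rng = np.random.default_rng(5)

p_resp = np.array([0.4, 0.6])   # assumed short-term response rates per arm
urn = np.array([1.0, 1.0])      # initial urn composition
assigned = np.zeros(2)

for _ in range(2000):
    # Randomize proportionally to the current urn composition
    arm = rng.choice(2, p=urn / urn.sum())
    assigned[arm] += 1
    if rng.random() < p_resp[arm]:
        urn[arm] += 1            # a response adds a ball of that arm's colour
    else:
        urn[1 - arm] += 1        # a non-response favours the other arm

# For RPW(1) the limiting fraction on arm k is q_other / (q_0 + q_1), q = 1 - p
q = 1 - p_resp
print("empirical allocation:", assigned / assigned.sum())
print("theoretical limit:   ", q[::-1] / q.sum())
```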


2021
Vol 18 (5)
pp. 521-528
Author(s): Eric S Leifer, James F Troendle, Alexis Kolecki, Dean A Follmann

Background/aims: The two-by-two factorial design randomizes participants to receive treatment A alone, treatment B alone, both treatments A and B (AB), or neither treatment (C). When the combined effect of A and B is less than the sum of the A and B effects, called a subadditive interaction, there can be low power to detect the A effect using an overall test, that is, a factorial analysis, which compares the A and AB groups to the C and B groups. Such an interaction may have occurred in the Action to Control Cardiovascular Risk in Diabetes blood pressure trial (ACCORD BP), which simultaneously randomized participants to receive intensive or standard blood pressure control and intensive or standard glycemic control. For the primary outcome of major cardiovascular events, the overall test for efficacy of intensive blood pressure control was nonsignificant. In such an instance, simple effect tests of A versus C and B versus C may be useful since they are not affected by a subadditive interaction, but they can have lower power since they use half the participants of the overall trial. We investigate multiple testing procedures which exploit the overall tests’ sample size advantage and the simple tests’ robustness to a potential interaction. Methods: In the time-to-event setting, we use the stratified and ordinary logrank statistics’ asymptotic means to calculate the power of the overall and simple tests under various scenarios. We consider the A and B research questions to be unrelated and allocate 0.05 significance level to each. For each question, we investigate three multiple testing procedures which allocate the type 1 error in different proportions across the overall and simple effects as well as the AB effect. The Equal Allocation 3 procedure allocates equal type 1 error to each of the three effects; the Proportional Allocation 2 procedure allocates 2/3 of the type 1 error to the overall A (respectively, B) effect and the remaining type 1 error to the AB effect; and the Equal Allocation 2 procedure allocates equal amounts to the simple A (respectively, B) and AB effects. These procedures are applied to ACCORD BP. Results: Across various scenarios, Equal Allocation 3 had robust power for detecting a true effect. For ACCORD BP, all three procedures would have detected a benefit of intensive glycemia control. Conclusions: When there is no interaction, Equal Allocation 3 has less power than a factorial analysis. However, Equal Allocation 3 often has greater power when there is an interaction. The R package factorial2x2 can be used to explore the power gain or loss for different scenarios.
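The sketch below illustrates how the three allocation rules trade off type 1 error, assuming hypothetical asymptotic means (drifts) for the logrank Z-statistics and, crudely, independence between tests; the actual procedures account for the statistics' correlation, and the R package factorial2x2 mentioned above implements the real calculations. Disjunctive power is then the chance that at least one allocated test rejects.

```python
import numpy as np
from scipy.stats import norm

def power(mu, alpha):
    # One-sided Z-test at level alpha when the statistic has mean (drift) mu
    return 1 - norm.cdf(norm.ppf(1 - alpha) - mu)

# Hypothetical drifts: the simple A-vs-C test uses half the participants,
# so its drift is smaller by roughly a factor of sqrt(2)
mu = {"overall": 2.8, "simple": 2.8 / np.sqrt(2), "AB": 3.1}

alpha = 0.05
procedures = {
    "Equal Allocation 3":        {"overall": alpha/3, "simple": alpha/3, "AB": alpha/3},
    "Proportional Allocation 2": {"overall": 2*alpha/3, "AB": alpha/3},
    "Equal Allocation 2":        {"simple": alpha/2, "AB": alpha/2},
}

for name, alloc in procedures.items():
    # Disjunctive power, crudely treating the tests as independent
    miss = np.prod([1 - power(mu[test], a) for test, a in alloc.items()])
    print(f"{name}: power to reject at least one hypothesis approx {1 - miss:.3f}")
```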


2020
Author(s): Hao Cheng

Universities commercialize their discoveries at an increasing pace in order to maximize their economic impact and generate additional funding for research. They form technology transfer offices (TTOs) to evaluate the commercial value of university inventions and choose the most promising ones to patent and commercialize. Uncertainties and asymmetric information in project selection make the TTO choices difficult and can cause both type 1 error (forgoing valuable discoveries) and type 2 error (selecting low-value discoveries). In this dissertation, I examine the TTO's project selection process and the factors that influence the choice of academic inventions for patenting and commercialization, the type 1 error committed, and the final licensing outcome. The dissertation contains three essays. In the first essay, I analyze project selection under uncertainty when both the quality of the proposed project and the motives of the applicant are uncertain. Some inventors may have an incentive to disguise the true quality and commercial value of their discoveries in order to conform to organizational expectations of disclosure while retaining rights to potentially pursue commercialization of their discoveries outside the organization's boundaries for their own benefit. Inventors may equally lose interest, ex post, in the commercialization of their invention due to competing job demands. I develop a model to examine the decision process of a university TTO responsible for the commercialization of academic inventions under such circumstances. The model describes the conditions that prompt Type 1 and Type 2 errors and allows for inferences for minimizing each. Little is known about the factors that make project selection effective or ineffective, and there has been limited empirical analysis in this area. The few empirical studies that are available examine the sources of type 2 error, but there is no empirical work that analyzes type 1 error and the contributing factors. Research on type 1 error encounters two main difficulties: first, it is difficult to ascertain the decision process, and second, it is challenging to approximate the counterfactual. Using data from the TTO of the University of Missouri, in the second essay I study the factors that influence the TTO's project selection process and the ex post type 1 error realized. In most cases, universities pursue commercialization of their inventions through licensing. A few empirical studies have researched the factors that affect licensing and their relative importance. In the third essay, I examine the characteristics of university inventions that are licensed, using almost 10 years of data on several hundred inventions, their characteristics, and their licensing status.


2018
Vol 48 (3)
pp. 691-701
Author(s): Apostolos Gkatzionis, Stephen Burgess

Abstract Background Selection bias affects Mendelian randomization investigations when selection into the study sample depends on a collider between the genetic variant and confounders of the risk factor–outcome association. However, the relative importance of selection bias for Mendelian randomization compared with other potential biases is unclear. Methods We performed an extensive simulation study to assess the impact of selection bias on a typical Mendelian randomization investigation. We considered inverse probability weighting as a potential method for reducing selection bias. Finally, we investigated whether selection bias may explain a recently reported finding that lipoprotein(a) is not a causal risk factor for cardiovascular mortality in individuals with previous coronary heart disease. Results Selection bias had a severe impact on bias and Type 1 error rates in our simulation study, but only when selection effects were large. For moderate effects of the risk factor on selection, bias was generally small and Type 1 error rate inflation was not considerable. Inverse probability weighting ameliorated bias when the selection model was correctly specified, but increased bias when selection bias was moderate and the model was misspecified. In the example of lipoprotein(a), strong genetic associations and strong confounder effects on selection mean that the reported null effect on cardiovascular mortality could plausibly be explained by selection bias. Conclusions Selection bias can adversely affect Mendelian randomization investigations, but its impact is likely to be less than that of other biases. Selection bias is substantial when the effects of the risk factor and confounders on selection are particularly large.
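A minimal simulation sketch of the collider mechanism and the inverse probability weighting fix, under a null causal effect: selection depends on both the risk factor and the confounder, the unweighted Wald ratio in the selected sample is biased away from zero, and weighting by the true selection probabilities restores it. In practice the selection model must be estimated from measured covariates, so the "true weights" here are an idealisation; all coefficients are invented.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 100_000

g = rng.binomial(2, 0.3, n)             # genetic instrument
u = rng.normal(0, 1, n)                 # unmeasured confounder
x = 0.3 * g + u + rng.normal(0, 1, n)   # risk factor
y = 0.0 * x + u + rng.normal(0, 1, n)   # outcome; true causal effect is zero

# Selection is a collider: it depends strongly on both risk factor and confounder
p_sel = 1 / (1 + np.exp(-(-1.0 + 0.8 * x + 0.8 * u)))
s = rng.random(n) < p_sel

def wald_ratio(g, x, y, w=None):
    # Ratio (Wald) estimate: G-Y association divided by G-X association
    w = np.ones_like(x) if w is None else w
    bgx = sm.WLS(x, sm.add_constant(g), weights=w).fit().params[1]
    bgy = sm.WLS(y, sm.add_constant(g), weights=w).fit().params[1]
    return bgy / bgx

print("selected sample, unweighted:", wald_ratio(g[s], x[s], y[s]))
print("selected sample, IPW:       ", wald_ratio(g[s], x[s], y[s], 1 / p_sel[s]))
```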

