superiority trials
Recently Published Documents

TOTAL DOCUMENTS: 47 (FIVE YEARS: 10)
H-INDEX: 8 (FIVE YEARS: 1)

2021 ◽ pp. 096228022098857 ◽ Author(s): Yongqiang Tang

Log-rank tests have been widely used to compare two survival curves in biomedical research. We describe a unified approach to power and sample size calculation for the unweighted and weighted log-rank tests in superiority, noninferiority and equivalence trials. It is suitable for both time-driven and event-driven trials. A numerical algorithm is suggested. It allows flexible specification of the patient accrual distribution, baseline hazards, and proportional or nonproportional hazards patterns, and enables efficient sample size calculation when there are a range of choices for the patient accrual pattern and trial duration. A confidence interval method is proposed for the trial duration of an event-driven trial. We point out potential issues with several popular sample size formulae. Under proportional hazards, the power of a survival trial is commonly believed to be determined by the number of observed events. The belief is roughly valid for noninferiority and equivalence trials with similar survival and censoring distributions between two groups, and for superiority trials with balanced group sizes. In unbalanced superiority trials, the power depends also on other factors such as data maturity. Surprisingly, the log-rank test usually yields slightly higher power than the Wald test from the Cox model under proportional hazards in simulations. We consider various nonproportional hazards patterns induced by delayed effects, cure fractions, and/or treatment switching. Explicit power formulae are derived for the combination test that takes the maximum of two or more weighted log-rank tests to handle uncertain nonproportional hazards patterns. Numerical examples are presented for illustration.
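As a concrete illustration of the event-driven view discussed above, the classical Schoenfeld approximation gives the required number of events for an unweighted log-rank test under proportional hazards. This is a minimal sketch of that textbook formula, not the paper's unified method, and it shows how unbalanced allocation inflates the event count:

```python
from math import ceil, log
from statistics import NormalDist

def schoenfeld_events(hr, alpha=0.05, power=0.80, ratio=1.0):
    """Required number of events for a two-sided log-rank test under
    proportional hazards (Schoenfeld approximation).
    ratio is the allocation ratio n1/n2 between the two groups."""
    z = NormalDist()
    za = z.inv_cdf(1 - alpha / 2)
    zb = z.inv_cdf(power)
    p = ratio / (1 + ratio)  # fraction of patients allocated to group 1
    return ceil((za + zb) ** 2 / (p * (1 - p) * log(hr) ** 2))

# Balanced trial targeting HR = 0.7 with 80% power at two-sided alpha = 0.05:
print(schoenfeld_events(0.7))              # 247 events
# The same target with 2:1 allocation needs more events:
print(schoenfeld_events(0.7, ratio=2.0))   # 278 events
```

Note that the formula depends only on the number of events and the allocation fraction, which is exactly the "power is determined by the number of events" belief the abstract qualifies: it ignores data maturity and censoring patterns.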


2020 ◽ Vol 17 (5) ◽ pp. 552-559 ◽ Author(s): Nicolas A Bamat, Osayame A Ekhaguere, Lingqiao Zhang, Dustin D Flannery, Sara C Handley, ...

Background/aims: Noninferiority clinical trials are susceptible to false confirmation of noninferiority when the intention-to-treat principle is applied in the setting of incomplete trial protocol adherence. The risk increases as protocol adherence rates decrease. The objective of this study was to compare protocol adherence and hypothesis confirmation between superiority and noninferiority randomized clinical trials published in three high-impact medical journals. We hypothesized that noninferiority trials have lower protocol adherence and greater hypothesis confirmation. Methods: We conducted an observational study using published clinical trial data. We searched PubMed for active-control, two-arm parallel-group randomized clinical trials published in JAMA: The Journal of the American Medical Association, The New England Journal of Medicine, and The Lancet between 2007 and 2017. The primary exposure was trial type, superiority versus noninferiority, as determined by the hypothesis testing framework of the primary trial outcome. The primary outcome was the trial protocol adherence rate, defined as the number of randomized subjects receiving the allocated intervention as described by the trial protocol and followed to primary outcome ascertainment (numerator), over the total number of subjects randomized (denominator). Hypothesis confirmation was defined as affirmation of noninferiority for noninferiority trials and of the alternative hypothesis for superiority trials. Results: Among 120 superiority and 120 noninferiority trials, median (interquartile range) protocol adherence rates were 91.5% (81.4–96.7) and 89.8% (83.6–95.2), respectively; P = 0.47. Hypothesis confirmation was observed in 107/120 (89.2%) of noninferiority and 64/120 (53.3%) of superiority trials; risk difference (95% confidence interval): 35.8 (25.3–46.3) percentage points, P < 0.001.
Conclusion: Protocol adherence rates are similar between superiority and noninferiority trials published in three high impact medical journals. Despite this, we observed greater hypothesis confirmation among noninferiority trials. We speculate that publication bias, lenient noninferiority margins and other sources of bias may contribute to this finding. Further study is needed to identify the reasons for this observed difference.
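The risk difference reported above can be reproduced from the stated counts. This is a minimal sketch using a standard Wald interval for a difference of proportions; the abstract does not state which interval the authors used, so this is an assumption that happens to match the published figures:

```python
from math import sqrt
from statistics import NormalDist

def risk_difference(x1, n1, x2, n2, conf=0.95):
    """Difference of two proportions with a Wald confidence interval."""
    p1, p2 = x1 / n1, x2 / n2
    rd = p1 - p2
    se = sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    z = NormalDist().inv_cdf(0.5 + conf / 2)
    return rd, rd - z * se, rd + z * se

# Hypothesis confirmation: 107/120 noninferiority vs 64/120 superiority trials
rd, lo, hi = risk_difference(107, 120, 64, 120)
print(f"{100 * rd:.1f} ({100 * lo:.1f}-{100 * hi:.1f})")  # 35.8 (25.3-46.3)
```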


2020 ◽ pp. postgradmedj-2019-136569 ◽ Author(s): Nanda Gamad, Nusrat Shafiq, Samir Malhotra

Objective: To show that overpowered trials claim statistical significance while sidestepping clinical relevance, and to argue the need for a superiority margin to avoid such misinterpretation.
Design: Selective review of articles published in the New England Journal of Medicine between 1 January 2015 and 31 December 2018, and meta-analysis following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses checklist.
Eligibility criteria for selecting studies and methods: Published superiority trials evaluating cardiovascular diseases and diabetes mellitus with a positive efficacy outcome were eligible. A fixed-effects meta-analysis was performed using RevMan V.5.3 to calculate the overall effect estimate, the pooled HR, which was compared with the mean clinically significant difference.
Results: Thirteen eligible trials with 164 721 participants provided the quantitative data for this review. In most trials, the primary efficacy endpoint was a composite of cardiovascular death, non-fatal myocardial infarction, unstable angina requiring rehospitalisation, coronary revascularisation, and fatal or non-fatal stroke. The pooled HR was 0.86 (95% CI 0.84 to 0.89, I²=45%), a smaller effect than the mean clinically significant difference of 0.196 (19.6%; range 0.09375–0.35) assumed in these studies. The 95% CIs across studies were wide, spanning 0.56 to 0.99, and the upper CI margin in most studies was close to the line of no difference. Absolute risk reduction was small (1.19% to 2.3%), translating to a high median number needed to treat of 63 (range 43 to 84) over a follow-up duration of 2.95 years.
Conclusions: The results of this meta-analysis indicate that overpowered trials yield statistically significant results that undermine clinical relevance. To avoid such misuse of current statistical tools, there is a need to derive a superiority margin. We hope to generate debate on using the clinically significant difference, already used to calculate sample size, as the superiority margin.
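The number-needed-to-treat figures quoted above follow directly from the absolute risk reductions; a minimal sketch of that arithmetic:

```python
def nnt(arr):
    """Number needed to treat = 1 / absolute risk reduction."""
    return 1 / arr

# ARRs of 1.19% and 2.3% bracket the reported NNT range of roughly 43 to 84
print(round(nnt(0.0119)))  # 84
print(round(nnt(0.023)))   # 43
```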


2019 ◽ Vol 25 (4) ◽ pp. 143-144 ◽ Author(s): Kevin Riggs, Joshua Richman, Stefan Kertesz

High-quality research demonstrating a lack of effectiveness may facilitate the ‘de-adoption’ of ineffective health services. However, there has been little debate on the optimal design for ineffectiveness research—studies exploring the research hypothesis that an intervention is ineffective. The aim of this study was to explore investigators’ preferences for trial design for ineffectiveness research. We conducted a mixed-methods online survey with principal investigators identified from clinicaltrials.gov. A vignette described researchers planning a trial to test a widely used intervention they hypothesised was ineffective. One multiple-choice question asked whether a superiority trial or equivalence trial design was favoured, and one free-response question asked about the reasons for that choice. Free-response answers were analysed using content analysis to identify related reasons. 139 participants completed the survey (completion rate 37.5%). Overall, 56.8% favoured superiority trials, 27.3% favoured equivalence trials and 15.8% were unsure. Reasons identified for favouring superiority trials were: (1) evidence of superiority should be required to justify active treatment, (2) superiority trials are more familiar, (3) placebo should not be the comparator in equivalence trials and (4) superiority trials require smaller sample sizes. Reasons identified for favouring equivalence trials were: (1) negative superiority trials represent a lack of evidence of effectiveness, not evidence of ineffectiveness and (2) the research hypothesis should not be the same as the null hypothesis. A minority of experienced researchers favour equivalence trials for ineffectiveness research, and misconceptions and lack of familiarity with equivalence trials may be contributing factors.


Heart ◽ 2019 ◽ Vol 106 (2) ◽ pp. 99-104 ◽ Author(s): James T Leung, Stephanie L Barnes, Sidney T Lo, Dominic Y Leung

Clinical trials traditionally aim to show that a new treatment is superior to placebo or standard treatment, that is, superiority trials. An increasing number of trials instead demonstrate that a new treatment is non-inferior to standard treatment. The hypotheses, design and interpretation of non-inferiority trials differ from those of superiority trials. Non-inferiority trials are designed with the notion that the new treatment offers advantages over standard treatment in certain important aspects. The non-inferiority margin is a predetermined margin of difference between the new and standard treatments that is considered acceptable or tolerable for the new treatment to be considered ‘similar’ or ‘not worse’. Both relative-difference and absolute-difference methods can be used to define the non-inferiority margin. Sequential testing for non-inferiority and then superiority is often performed. Non-inferiority trials may be necessary in situations where it is no longer ethical to test a new treatment against placebo. There are inherent assumptions in non-inferiority trials which may not be correct and which are not being tested. Successive non-inferiority trials may introduce less and less effective treatments even though each treatment has been shown to be non-inferior. Furthermore, poor-quality trials favour non-inferiority results. Intention-to-treat analysis, the preferred way to analyse randomised trials, may favour non-inferiority. Both intention-to-treat and per-protocol analyses should be performed in non-inferiority trials. Clinicians should be aware of the pitfalls of non-inferiority trials and not accept non-inferiority at face value. The focus should be not on the p values but on the effect size and confidence limits.
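The decision logic described above—comparing the confidence interval for the effect estimate against a prespecified margin, then testing sequentially for superiority—can be sketched as follows. The margin and interval endpoints are hypothetical, and the sketch assumes a hazard-ratio scale where HR < 1 favours the new treatment:

```python
def classify_result(ci_upper, margin):
    """Interpret the upper limit of a two-sided 95% CI for a hazard ratio
    (new vs standard treatment) against a non-inferiority margin > 1.
    Sequential testing: establish non-inferiority first, then superiority."""
    if ci_upper >= margin:
        return "inconclusive (non-inferiority not shown)"
    if ci_upper < 1.0:  # entire CI below the line of no difference
        return "non-inferior and superior"
    return "non-inferior"

# Hypothetical trial results against a margin of 1.3:
print(classify_result(0.95, 1.3))  # non-inferior and superior
print(classify_result(1.20, 1.3))  # non-inferior
print(classify_result(1.35, 1.3))  # inconclusive (non-inferiority not shown)
```

This also makes the abstract's closing point concrete: the conclusion is read off the confidence limits relative to the margin, not off a p value.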


2019 ◽ Vol 9 (4-s) ◽ pp. 829-831 ◽ Author(s): Rada Santosh Kumar, G.V.R.L. Soujanya

The concept of therapeutic equivalence is becoming increasingly important in today’s cost-conscious environment. Even when an effective therapy already exists, a clinically equivalent therapy can still be valuable: an improved toxicity profile, better effects on symptoms, or greater ease of administration may be important considerations. In these positive-control trials, a substantial effect is required to define equivalence. The goal is to show that the new treatment is not inferior to the standard, since proving that two treatments are exactly equal is not possible. Superiority trials, by contrast, demonstrate the better efficacy of a treatment against a concurrent placebo control. Innovative drugs continue to become available for the treatment of a number of diseases, and these new products may offer specific advantages over standard drugs. Placebo-controlled trials are invariably unethical when a known effective therapy is available for the condition being studied, so active-controlled trials are used extensively in the development of new pharmaceuticals. The equivalence region is defined by a lower and an upper equivalence limit. Principles are proposed for setting such limits, depending on the objective of the study, placebo conditions, and methods based on statistical properties.
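The equivalence-limit idea in the passage above amounts to checking that the confidence interval for the treatment difference lies entirely within the lower and upper limits. A minimal sketch with hypothetical numbers on a difference-in-means scale:

```python
def is_equivalent(ci_lower, ci_upper, lower_limit, upper_limit):
    """Declare equivalence when the CI for the treatment difference
    lies entirely inside the equivalence limits."""
    return lower_limit < ci_lower and ci_upper < upper_limit

# Hypothetical difference in means with symmetric limits of +/-2:
print(is_equivalent(-1.4, 1.1, -2.0, 2.0))  # True
print(is_equivalent(-1.4, 2.6, -2.0, 2.0))  # False: upper limit exceeded
```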


2019 ◽ Vol 10 (3) ◽ pp. 332-345 ◽ Author(s): S. Raymond Golish, Michael W. Groff, Ali Araghi, Jason A. Inzana

Study Design: Systematic review. Objectives: Superiority claims for medical devices are commonly derived from noninferiority trials, but interpretation of such claims can be challenging. This study aimed to (a) establish the prevalence of noninferiority and superiority designs among spinal device trials, (b) assess the frequency of post hoc superiority claims from noninferiority studies, and (c) critically evaluate the risk of bias in claims that could translate to misleading conclusions. Methods: Study bias was assessed using the Cochrane Risk of Bias Tool. The risk of bias for each superiority claim was established based on post hoc hypothesis specification, analysis of the intention-to-treat population, post hoc modification of a priori primary outcomes, and sensitivity analyses. Results: Forty-one studies were identified from 1895 records. Nineteen (46%) were noninferiority trials. Fifteen more (37%) were noninferiority trials with a secondary superiority hypothesis specified a priori. Seven (17%) were superiority trials. Of the 34 noninferiority trials, 14 (41%) made superiority claims. A medium or high risk of bias was attached to the superiority claim in 9 of those trials (64%), owing to the analyzed population, missing sensitivity analyses, claims that were not robust to sensitivity analyses, post hoc hypotheses, or modified endpoints. Only 4 of the 14 (29%) noninferiority studies making superiority claims carried low risk of bias in the claim, compared with 3 of the 5 (60%) superiority trials. Conclusions: Health care decision makers should carefully evaluate the risk of bias in each superiority claim and weigh their conclusions appropriately.


2019 ◽ pp. 1-13 ◽ Author(s): Xiaoyu Cai, Yi Tsong, Meiyu Shen

Adaptive sample size re-estimation (SSR) methods have been widely used in designing clinical trials, especially during the past two decades. We give a critical review of several commonly used two-stage adaptive SSR designs for superiority trials with continuous endpoints. The objective and design of each method, together with our suggestions and concerns, are discussed in this paper.
Keywords: Adaptive Design; Sample Size Re-estimation; Review
Introduction: Sample size determination is a key part of designing clinical trials. The objective of a good clinical trial design is to balance efficient use of resources against enrolling enough patients to achieve a desired power. At the design stage of a clinical trial, only limited information about the population is usually available, so the sample size calculated at this stage may not be sufficient to address the study objective. Assume that the data from two parallel treatment groups (e.g. treatment and control) are normally distributed with mean treatment effects μ₁ and μ₂ and equal within-group variance σ². Let the mean difference (treatment effect) be δ = μ₁ − μ₂. The efficacy of the treatment is evaluated by testing the hypothesis H₀: δ ≤ 0 against H₁: δ > 0.
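Under the two-group normal setup described above, the fixed-design per-group sample size is n = 2σ²(z₁₋α + z₁₋β)²/δ²; a two-stage SSR design recomputes this at an interim look once the variance can be estimated from accrued data. A minimal sketch, assuming a one-sided α of 0.025 and a hypothetical interim standard deviation (this is not any specific design from the review):

```python
from math import ceil
from statistics import NormalDist

def per_group_n(delta, sigma, alpha=0.025, power=0.80):
    """Per-group sample size for a one-sided two-sample z-test of
    H0: delta <= 0 vs H1: delta > 0 with common variance sigma^2."""
    z = NormalDist()
    za, zb = z.inv_cdf(1 - alpha), z.inv_cdf(power)
    return ceil(2 * (sigma * (za + zb)) ** 2 / delta ** 2)

# Planning stage: assumed sigma = 1.0, clinically relevant delta = 0.5
print(per_group_n(0.5, 1.0))   # 63 per group

# Two-stage SSR idea: after stage one, re-estimate with the observed
# (here hypothetical) pooled standard deviation and update the target n.
sigma_hat = 1.25
print(per_group_n(0.5, sigma_hat))  # 99 per group
```

The sketch illustrates the motivation stated in the abstract: if the planning-stage variance guess is too optimistic, the originally calculated sample size leaves the trial underpowered, and re-estimation corrects the target.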

