Correction to: Sample Size Re-estimation with the Com-Nougue Method to Evaluate Treatment Effect

Author(s):  
Jin Wang

Biometrika ◽
2020 ◽  
Author(s):  
Oliver Dukes ◽  
Stijn Vansteelandt

Summary: Eliminating the effect of confounding in observational studies typically involves fitting a model for an outcome adjusted for covariates. When, as is often the case, these covariates are high-dimensional, this necessitates the use of sparse estimators, such as the lasso, or other regularization approaches. Naïve use of such estimators yields confidence intervals for the conditional treatment effect parameter that are not uniformly valid. Moreover, as the number of covariates grows with the sample size, correctly specifying a model for the outcome is nontrivial. In this article we deal with both of these concerns simultaneously, obtaining confidence intervals for conditional treatment effects that are uniformly valid, regardless of whether the outcome model is correct. This is done by incorporating an additional model for the treatment selection mechanism. When both models are correctly specified, we can weaken the standard conditions on model sparsity. Our procedure extends to multivariate treatment effect parameters and complex longitudinal settings.
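
To make the idea concrete, here is a minimal sketch of the partialling-out strategy the summary alludes to: lasso-residualize both the outcome and the treatment on the high-dimensional covariates, then regress residuals on residuals. This is the generic double-lasso recipe, not the authors' exact bias-reduced procedure, and all data and tuning choices are illustrative assumptions.

```python
# Minimal double-lasso sketch (illustrative; not the authors' procedure):
# residualize outcome and treatment on high-dimensional covariates, then
# estimate the treatment effect by regressing residuals on residuals.
import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(0)
n, p = 500, 200                                  # p is large relative to n
X = rng.normal(size=(n, p))
A = X[:, 0] + rng.normal(size=n)                 # treatment selection model
Y = 0.5 * A + X[:, 0] + rng.normal(size=n)       # true effect is 0.5

A_res = A - LassoCV(cv=5).fit(X, A).predict(X)   # treatment model residuals
Y_res = Y - LassoCV(cv=5).fit(X, Y).predict(X)   # outcome model residuals

tau_hat = (A_res @ Y_res) / (A_res @ A_res)      # residual-on-residual slope
eps = Y_res - tau_hat * A_res
se = np.sqrt(np.sum(A_res ** 2 * eps ** 2)) / (A_res @ A_res)  # sandwich SE
print(f"estimate {tau_hat:.3f}, 95% CI ({tau_hat - 1.96 * se:.3f}, "
      f"{tau_hat + 1.96 * se:.3f})")
```

The role of the second (treatment) model is visible here: even if the outcome lasso selects the wrong covariates, residualizing the treatment protects the interval.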


2017 ◽  
Vol 23 (5) ◽  
pp. 644-646 ◽  
Author(s):  
Maria Pia Sormani

The calculation of the sample size needed for a clinical study is the challenge most frequently put to statisticians, and it is one of the most important issues in study design. The correct sample size optimizes the number of patients needed to obtain a result, that is, to detect the minimum treatment effect that is clinically relevant. Minimizing the sample size of a study reduces costs and enhances feasibility, and it also has ethical implications. In this brief report, I will explore the main concepts on which sample size calculation is based.
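
As a concrete illustration of those concepts, the standard two-sample formula combines the significance level, the desired power, the outcome variability, and the minimum clinically relevant effect. The sketch below uses the usual normal approximation with illustrative numbers.

```python
# Standard sample size for a two-arm comparison of means, showing the
# ingredients discussed above: alpha, power, variability (sigma), and the
# minimum clinically relevant effect (delta). Values are illustrative.
from scipy.stats import norm

def n_per_group(delta, sigma, alpha=0.05, power=0.80):
    """Two-sided z-approximation for a two-sample comparison of means."""
    z_alpha = norm.ppf(1 - alpha / 2)
    z_beta = norm.ppf(power)
    return 2 * ((z_alpha + z_beta) * sigma / delta) ** 2

# Detecting a 0.5-SD effect with 80% power at alpha = 0.05:
print(round(n_per_group(delta=0.5, sigma=1.0)))  # ~63 per group
```

Because the required n scales with 1/delta squared, halving the detectable effect quadruples the sample size, which is why the choice of the minimum clinically relevant effect dominates the calculation.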


2018 ◽  
Vol 53 (7) ◽  
pp. 716-719
Author(s):  
Monica R. Lininger ◽  
Bryan L. Riemann

Objective: To describe the concept of statistical power as related to comparative interventions and how various factors, including sample size, affect statistical power. Background: Having a sufficiently sized sample is necessary for a study to demonstrate that an effective treatment is statistically superior. Many researchers fail to conduct and report a priori sample-size estimates, which makes nonsignificant results difficult to interpret and causes clinicians to question the planning of the research design. Description: Statistical power is the probability of statistically detecting a treatment effect when one truly exists. The α level, the magnitude of the difference between groups, the variability of the data, and the sample size all affect statistical power. Recommendations: Authors should conduct and report a priori sample-size estimations in the literature. This will assist clinicians in determining whether the lack of a statistically significant treatment effect is due to an underpowered study or to a treatment that truly has no effect.
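
The dependence of power on those ingredients can be made explicit with the usual two-sample normal approximation; the numbers below are illustrative.

```python
# How sample size, effect size, variability, and alpha drive power
# (two-sample z-approximation; values are illustrative).
from scipy.stats import norm

def power_two_sample(n_per_group, delta, sigma, alpha=0.05):
    z_alpha = norm.ppf(1 - alpha / 2)
    ncp = delta / (sigma * (2 / n_per_group) ** 0.5)  # noncentrality
    return norm.cdf(ncp - z_alpha)

for n in (20, 50, 100):
    print(n, round(power_two_sample(n, delta=0.5, sigma=1.0), 2))
# Larger n -> higher probability of detecting a true effect
# (roughly 0.35, 0.71, and 0.94 here).
```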


2020 ◽  
pp. 096228022098078
Author(s):  
Bosheng Li ◽  
Liwen Su ◽  
Jun Gao ◽  
Liyun Jiang ◽  
Fangrong Yan

A delayed treatment effect is often observed in confirmatory trials of immunotherapies and is reflected by a delayed separation of the survival curves of the immunotherapy and control groups. This phenomenon makes designs based on the standard log-rank test inappropriate, because the proportional hazards assumption is violated and power is lost. We therefore propose a group sequential design, based on a more powerful piecewise weighted log-rank test, that allows early termination for efficacy in an immunotherapy trial with a delayed treatment effect. We present an approach to group sequential monitoring in which the information time is defined by the number of events occurring after the delay time. Furthermore, we develop a one-dimensional search algorithm to determine the required maximum sample size for the proposed design; it uses an analytical estimate obtained from the inflation factor as the initial value and an empirical power function calculated by a simulation-based procedure as the objective function. In simulations, we show that the analytical estimate is unstable, that the maximum sample size determined by the search algorithm is consistently accurate, and that the proposed design saves sample size.
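
A rough sketch of the sample-size search is given below: the empirical power function comes from simulated trials with a delayed effect, the test ignores events before the delay time t0 (a simple piecewise weighted log-rank with weights 0 and 1), and a bisection search stands in for the paper's one-dimensional algorithm. All parameter values, and the use of bisection rather than the inflation-factor initial value, are illustrative assumptions.

```python
# Illustrative sketch: empirical-power-driven search for the required n
# under a delayed treatment effect (all parameters are assumptions).
import numpy as np

rng = np.random.default_rng(1)

def simulate_trial(n, t0=3.0, hr=0.6, lam=0.1, censor_t=24.0):
    """Exponential control arm; treatment hazard drops to lam*hr after t0."""
    arm = rng.integers(0, 2, size=n)                # 0 = control, 1 = treated
    t = rng.exponential(1 / lam, size=n)
    late = (arm == 1) & (t > t0)
    t = np.where(late, t0 + (t - t0) / hr, t)       # stretch times after t0
    event = t < censor_t
    return np.minimum(t, censor_t), event, arm

def weighted_logrank_after(t, event, arm, t0=3.0):
    """Log-rank statistic using only event times after the delay t0."""
    num, var = 0.0, 0.0
    for u in np.unique(t[event & (t > t0)]):
        at_risk = t >= u
        n_all, n1 = at_risk.sum(), (at_risk & (arm == 1)).sum()
        d = event & (t == u)
        d_all, d1 = d.sum(), (d & (arm == 1)).sum()
        num += d1 - d_all * n1 / n_all              # observed minus expected
        if n_all > 1:                               # hypergeometric variance
            var += d_all * (n1 / n_all) * (1 - n1 / n_all) \
                   * (n_all - d_all) / (n_all - 1)
    return num / np.sqrt(var)

def empirical_power(n, n_sim=200):
    z = [weighted_logrank_after(*simulate_trial(n)) for _ in range(n_sim)]
    return np.mean(np.array(z) < -1.96)             # one-sided alpha = 0.025

lo, hi = 100, 2000                                  # bisection on sample size
while hi - lo > 50:
    mid = (lo + hi) // 2
    lo, hi = (mid, hi) if empirical_power(mid) < 0.90 else (lo, mid)
print("approximate required n:", hi)
```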


Stroke ◽  
2014 ◽  
Vol 45 (suppl_1) ◽  
Author(s):  
Maarten G Lansberg ◽  
Robin Lemmens ◽  
Soren Christensen ◽  
Nishant K Mishra ◽  
Gregory W Albers

Background: Recent trials have shown no benefit of endovascular therapy. This may, in part, be explained by inaccurate estimates of the treatment effect used in the sample size calculations of these trials. A predictive model that includes variables that modify the expected treatment effect might yield more accurate estimates and could be valuable in the design of future acute stroke trials. Methods: We conducted a literature review to obtain estimates of parameters that are associated with good functional outcome (GFO) following recanalization. We developed a model to estimate the treatment effect in endovascular stroke trials and applied this model to two recently published endovascular stroke trials. Results: We estimated a 40% absolute difference in the proportion of GFO (mRS 0-2 at 90 days) associated with reperfusion in patients with ICA or M1 occlusions who have a substantial ischemic penumbra at baseline. To estimate the effect size in trials, this value was multiplied by: 1) the proportion of patients undergoing endovascular therapy in the active treatment arm; 2) the proportion of patients with occlusions of the ICA or MCA-M1; 3) the proportion of patients with a substantial penumbra and a DWI lesion <50 mL; and 4) the absolute difference in the proportion of patients with reperfusion, defined as TICI 2B-3, between the endovascular treatment and control arms. Based on the literature review, we assumed a reperfusion rate of 20% in the control arms of IMS III and MR Rescue, a 50% prevalence of patients with substantial penumbra and DWI lesions <50 mL in IMS III, and a 75% prevalence in the penumbral arms of MR Rescue. Based on these model inputs, a 2.2% increase in GFO with endovascular therapy was expected in IMS III, which closely matches the observed 2.1% increase. For MR Rescue, the model predicted a 1.5% increase in GFO with endovascular therapy. Given the small sample size, this equates to 0.5 additional patients with GFO, which closely matches the observed result of 3 fewer patients with GFO. Conclusion: A simple model shows promise for estimating the treatment effect of endovascular stroke trials. It may be useful for the design of future trials and could lead to different inclusion criteria or larger sample sizes compared with the recently conducted studies.
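
The model itself is a simple product of the reperfusion-associated benefit and the four dilution factors listed above. The sketch below reproduces the IMS III prediction; the 20% control reperfusion rate and 50% penumbra/DWI prevalence come from the abstract, while the treated and occlusion proportions are assumed for illustration.

```python
# Product model from the abstract: expected absolute gain in good functional
# outcome (GFO) = 40% reperfusion benefit diluted by four trial proportions.
def expected_effect(p_treated, p_ica_m1, p_penumbra_small_dwi,
                    delta_reperfusion, benefit=0.40):
    return benefit * p_treated * p_ica_m1 * p_penumbra_small_dwi \
           * delta_reperfusion

# IMS III-style inputs: the 50% penumbra/DWI prevalence and a 20% reperfusion
# difference are from the abstract; 0.90 treated and 0.60 ICA/M1 are assumed.
print(f"{expected_effect(0.90, 0.60, 0.50, 0.20):.1%}")  # ~2.2%
```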


2020 ◽  
Vol 5 (2) ◽  
pp. 174-183 ◽  
Author(s):  
Peter J Godolphin ◽  
Philip M Bath ◽  
Christopher Partlett ◽  
Eivind Berge ◽  
Martin M Brown ◽  
...  

Introduction: Adjudication of the primary outcome in randomised trials is thought to control misclassification. We investigated the amount of misclassification needed before adjudication changed the primary trial results. Patients and methods: We included data from five randomised stroke trials. Differential misclassification was introduced for each primary outcome until the estimated treatment effect was altered. This was simulated 1000 times. We calculated the between-simulation mean proportion of participants that needed to be differentially misclassified to alter the treatment effect. In addition, we simulated hypothetical trials with a binary outcome and varying sample size (1000–10,000), overall event rate (10%–50%) and treatment effect (0.67–0.90). We introduced non-differential misclassification until the treatment effect was non-significant at the 5% level. Results: For the five trials, the range of unweighted kappa values was reduced from 0.89–0.97 to 0.65–0.85 before the treatment effect was altered. This corresponded to 2.1%–6% of participants misclassified differentially for trials with a binary outcome. For the hypothetical trials, those with a larger sample size, a stronger treatment effect and an overall event rate closer to 50% needed a higher proportion of events misclassified non-differentially before the treatment effect became non-significant. Discussion: Only a small amount of differential misclassification was required before adjudication altered the primary trial results, whereas a considerable proportion of participants needed to be misclassified non-differentially before adjudication changed trial conclusions. Given that differential misclassification should not occur in trials with sufficient blinding, these results suggest that central adjudication is of most use in studies with unblinded outcome assessment. Conclusion: For trials without adequate blinding, central adjudication is vital to control for differential misclassification. However, for large blinded trials, adjudication is of less importance and may not be necessary.
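
The hypothetical-trial arm of the analysis lends itself to a short simulation sketch: flip outcomes at random, equally in both arms (non-differential misclassification), until the treatment effect loses significance. Trial size, event rate, and effect below are illustrative assumptions, not the paper's settings.

```python
# Sketch of non-differential misclassification in a binary-outcome trial:
# randomly flip one outcome per arm per step until p >= 0.05.
import numpy as np
from scipy.stats import chi2_contingency

rng = np.random.default_rng(2)

n_per_arm, p_control, p_treated = 2500, 0.30, 0.30 * 0.80  # risk ratio 0.80
control = rng.random(n_per_arm) < p_control
treated = rng.random(n_per_arm) < p_treated

def p_value(control, treated):
    table = [[control.sum(), (~control).sum()],
             [treated.sum(), (~treated).sum()]]
    return chi2_contingency(table)[1]

flipped = 0
while p_value(control, treated) < 0.05:
    # Non-differential: misclassify one random participant in each arm.
    for arm in (control, treated):
        i = rng.integers(len(arm))
        arm[i] = ~arm[i]
    flipped += 2
print(f"participants misclassified before non-significance: {flipped}")
```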


2017 ◽  
Vol 28 (1) ◽  
pp. 151-169
Author(s):  
Abderrahim Oulhaj ◽  
Anouar El Ghouch ◽  
Rury R Holman

Composite endpoints are frequently used in clinical outcome trials to combine several outcomes and thereby increase statistical power. A key requirement for a composite endpoint to be meaningful is the absence of so-called qualitative heterogeneity, which ensures a valid overall interpretation of any treatment effect identified. Qualitative heterogeneity occurs when individual components of a composite endpoint exhibit differences in the direction of the treatment effect. In this paper, we develop a general statistical method to test for qualitative heterogeneity, that is, to test whether a given set of parameters share the same sign. The method is based on the intersection–union principle and, provided that the sample size is large, is valid whatever model is used for parameter estimation. We propose two versions of the testing procedure, one based on random sampling from a Gaussian distribution and the other based on bootstrapping. Our work covers both completely observed data and the case where some observations are censored, an important issue in many clinical trials. We evaluated the size and power of the proposed tests in extensive Monte Carlo simulations with multivariate time-to-event data, designed under a variety of conditions on dimensionality, censoring rate, sample size and correlation structure. The testing procedure performed very well in terms of statistical power and type I error. The proposed test was applied to a data set from a single-center, randomized, double-blind controlled trial in Alzheimer's disease.
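
In its simplest form, the intersection–union idea reduces to a min-test: conclude that all components share one sign only if every one-sided component test rejects in that direction. The sketch below shows this skeleton with illustrative estimates and standard errors; the paper's Gaussian-sampling and bootstrap versions refine how the critical value is obtained.

```python
# Minimal intersection-union (min-test) sketch for the same-sign question:
# all z-statistics must clear the one-sided cutoff in the same direction.
import numpy as np
from scipy.stats import norm

def iu_same_sign(theta_hat, se, alpha=0.05):
    """True if all components are significantly positive, or all negative."""
    z = np.asarray(theta_hat) / np.asarray(se)
    c = norm.ppf(1 - alpha)
    return bool(np.all(z > c) or np.all(z < -c))

# Three component treatment effects pointing the same way:
print(iu_same_sign([0.30, 0.25, 0.40], [0.10, 0.09, 0.12]))   # True
# One component in the opposite direction (qualitative heterogeneity):
print(iu_same_sign([0.30, -0.20, 0.40], [0.10, 0.09, 0.12]))  # False
```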


2018 ◽  
Vol 7 (6) ◽  
pp. 81
Author(s):  
Fang Fang ◽  
Yong Lin ◽  
Weichung Joe Shih ◽  
Shou-En Lu ◽  
Guangrui Zhu

The accuracy of the treatment effect estimate is crucial to the success of Phase 3 studies. In a fixed sample size design, the sample size calculation relies on this estimate and cannot be changed during the trial. Often, with only limited efficacy data available from early-phase studies and relevant historical studies, the sample size calculation may not accurately reflect the true treatment effect. Several adaptive designs have been proposed to address this uncertainty; they provide flexibility by allowing early trial stopping or sample size adjustment at interim look(s). Adaptive designs can optimize trial performance when the treatment effect is an assumed constant value, but in practice it may be more reasonable to consider the treatment effect within an interval rather than as a point estimate. Because proper selection of an adaptive design may decrease the failure rate of Phase 3 clinical trials and increase the chance of new drug approval, this paper proposes performance measures, evaluates different adaptive designs based on treatment effect intervals, and identifies factors that may affect their performance.
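
The motivation for evaluating designs over an interval is easy to demonstrate: a fixed-sample design sized at a single assumed effect loses power rapidly at the lower end of a plausible interval. The numbers below are illustrative.

```python
# A fixed design sized for an assumed effect can be badly under-powered at
# the lower end of a plausible effect interval (z-approximation throughout).
from scipy.stats import norm

def n_per_group(delta, sigma=1.0, alpha=0.05, power=0.80):
    return 2 * ((norm.ppf(1 - alpha / 2) + norm.ppf(power)) * sigma / delta) ** 2

def power_at(delta_true, n, sigma=1.0, alpha=0.05):
    return norm.cdf(delta_true / (sigma * (2 / n) ** 0.5) - norm.ppf(1 - alpha / 2))

n = n_per_group(delta=0.5)                   # sized assuming delta = 0.5
for delta_true in (0.3, 0.4, 0.5, 0.6):      # plausible effect interval
    print(delta_true, round(power_at(delta_true, n), 2))
# Power drops from 0.80 at the assumed effect to ~0.39 at delta = 0.3.
```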

