Using Confidence Intervals to Quantify Statistical and Clinical Evidence for the Treatment Effect in a Comparative Study—Moving Beyond P Values

2020 ◽  
Vol 146 (1) ◽  
pp. 5
Author(s):  
Dustin J. Rabideau ◽  
Dae Hyun Kim ◽  
Lee-Jen Wei

2021 ◽  
pp. 174077452098193
Author(s):  
Nancy A Obuchowski ◽  
Erick M Remer ◽  
Ken Sakaie ◽  
Erika Schneider ◽  
Robert J Fox ◽  
...  

Background/aims Quantitative imaging biomarkers have the potential to detect change in disease early and noninvasively, providing information about the diagnosis and prognosis of a patient, aiding in monitoring disease, and informing when therapy is effective. In clinical trials testing new therapies, there has been a tendency to ignore the variability and bias in quantitative imaging biomarker measurements. Unfortunately, this can lead to underpowered studies and incorrect estimates of the treatment effect. We illustrate the problem when non-constant measurement bias is ignored and show how treatment effect estimates can be corrected. Methods Monte Carlo simulation was used to assess the coverage of 95% confidence intervals for the treatment effect when non-constant bias is ignored versus when the bias is corrected for. Three examples are presented to illustrate the methods: doubling times of lung nodules, rates of change in brain atrophy in progressive multiple sclerosis clinical trials, and changes in proton-density fat fraction in trials for patients with nonalcoholic fatty liver disease. Results Incorrectly assuming that the measurement bias is constant leads to 95% confidence intervals for the treatment effect with reduced coverage (<95%); the coverage is especially reduced when the quantitative imaging biomarker measurements have good precision and/or there is a large treatment effect. Estimates of the measurement bias from technical performance validation studies can be used to correct the confidence intervals for the treatment effect. Conclusion Technical performance validation studies of quantitative imaging biomarkers are needed to supplement clinical trial data to provide unbiased estimates of the treatment effect.
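To make the coverage argument concrete, here is a minimal Monte Carlo sketch in Python. It is not the authors' simulation design; all parameter values (bias slope, noise SD, sample sizes) are hypothetical, and the correction simply rescales the naive interval by a bias slope assumed known from a technical performance validation study.

```python
# Minimal Monte Carlo sketch (not the authors' simulation design): coverage of
# a naive 95% CI for a treatment effect versus a CI rescaled to remove a
# multiplicative (non-constant) measurement bias. All parameter values are
# hypothetical.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_per_arm = 100      # subjects per arm
true_effect = 0.5    # true difference in mean biomarker change
bias_slope = 1.2     # measured value = bias_slope * true value + noise
sigma_meas = 0.8     # measurement noise SD
n_sims = 5000
z = stats.norm.ppf(0.975)

naive_cover = corrected_cover = 0
for _ in range(n_sims):
    # True biomarker change per subject in each arm
    ctrl = rng.normal(0.0, 1.0, n_per_arm)
    trt = rng.normal(true_effect, 1.0, n_per_arm)
    # Observed values: non-constant (multiplicative) bias plus measurement noise
    ctrl_obs = bias_slope * ctrl + rng.normal(0, sigma_meas, n_per_arm)
    trt_obs = bias_slope * trt + rng.normal(0, sigma_meas, n_per_arm)

    diff = trt_obs.mean() - ctrl_obs.mean()
    se = np.sqrt(trt_obs.var(ddof=1) / n_per_arm + ctrl_obs.var(ddof=1) / n_per_arm)

    # Naive CI: treats the measured difference as the treatment effect
    lo, hi = diff - z * se, diff + z * se
    naive_cover += lo <= true_effect <= hi

    # Corrected CI: rescale by the bias slope estimated in a (hypothetical)
    # technical performance validation study; uncertainty in that estimate is
    # ignored here for simplicity
    corrected_cover += lo / bias_slope <= true_effect <= hi / bias_slope

print(f"naive coverage:     {naive_cover / n_sims:.3f}")
print(f"corrected coverage: {corrected_cover / n_sims:.3f}")
```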


2019 ◽  
Vol 114 (528) ◽  
pp. 1854-1864 ◽  
Author(s):  
Fei Jiang ◽  
Lu Tian ◽  
Haoda Fu ◽  
Takahiro Hasegawa ◽  
L. J. Wei

Biometrika ◽  
2020 ◽  
Author(s):  
Oliver Dukes ◽  
Stijn Vansteelandt

Summary Eliminating the effect of confounding in observational studies typically involves fitting a model for an outcome adjusted for covariates. When, as is often the case, these covariates are high-dimensional, this necessitates the use of sparse estimators, such as the lasso, or other regularization approaches. Naïve use of such estimators yields confidence intervals for the conditional treatment effect parameter that are not uniformly valid. Moreover, as the number of covariates grows with the sample size, correctly specifying a model for the outcome is nontrivial. In this article we deal with both of these concerns simultaneously, obtaining confidence intervals for conditional treatment effects that are uniformly valid, regardless of whether the outcome model is correct. This is done by incorporating an additional model for the treatment selection mechanism. When both models are correctly specified, we can weaken the standard conditions on model sparsity. Our procedure extends to multivariate treatment effect parameters and complex longitudinal settings.
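The general recipe of pairing a lasso outcome model with a lasso treatment model and basing inference on the residuals can be sketched as follows. This is a Robinson-style partialling-out illustration, not the authors' bias-reduced estimator; the simulated data, the tuning choices, and the absence of cross-fitting are simplifying assumptions.

```python
# Illustrative partialling-out sketch (not the authors' bias-reduced estimator):
# lasso for the outcome model, lasso for the treatment model, then inference on
# the treatment coefficient from the residual-on-residual regression. Simulated
# data, tuning choices, and the absence of cross-fitting are simplifications.
import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(1)
n, p = 500, 200
X = rng.normal(size=(n, p))                 # high-dimensional covariates
A = 0.5 * X[:, 0] + rng.normal(size=n)      # treatment/exposure
tau = 1.0                                   # true conditional treatment effect
Y = tau * A + X[:, 0] - 0.5 * X[:, 1] + rng.normal(size=n)

m_hat = LassoCV(cv=5).fit(X, Y).predict(X)  # outcome nuisance model
e_hat = LassoCV(cv=5).fit(X, A).predict(X)  # treatment nuisance model

rY, rA = Y - m_hat, A - e_hat               # partial out the covariates
tau_hat = np.sum(rA * rY) / np.sum(rA ** 2)

psi = rA * (rY - tau_hat * rA)              # estimating-function contributions
se = np.sqrt(np.sum(psi ** 2)) / np.sum(rA ** 2)
print(f"tau_hat = {tau_hat:.3f}, "
      f"95% CI = ({tau_hat - 1.96 * se:.3f}, {tau_hat + 1.96 * se:.3f})")
```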


Econometrics ◽  
2019 ◽  
Vol 7 (2) ◽  
pp. 26 ◽  
Author(s):  
David Trafimow

There has been much debate about null hypothesis significance testing, p-values without null hypothesis significance testing, and confidence intervals. The first major section of the present article addresses some of the main reasons these procedures are problematic. The conclusion is that none of them are satisfactory. However, there is a new procedure, termed the a priori procedure (APP), that validly aids researchers in obtaining sample statistics that have acceptable probabilities of being close to their corresponding population parameters. The second major section provides a description and review of APP advances. Not only does the APP avoid the problems that plague other inferential statistical procedures, but it is also easy to perform. Although the APP can be performed in conjunction with other procedures, the present recommendation is that it be used alone.
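As a rough numeric illustration of the idea behind the APP for a single normal mean (the APP advances reviewed in the article go well beyond this case), one can solve for the smallest n such that the sample mean falls within f population standard deviations of the population mean with probability p; the values of f and p below are arbitrary.

```python
# Back-of-the-envelope illustration of the idea for a single normal mean:
# choose n so the sample mean lands within f population standard deviations of
# the population mean with probability p. The values of f and p are arbitrary.
from math import ceil
from scipy.stats import norm

def app_sample_size(f: float, p: float) -> int:
    """Smallest n with P(|xbar - mu| <= f * sigma) >= p under normal sampling."""
    z = norm.ppf((1 + p) / 2)
    return ceil((z / f) ** 2)

# e.g. require the sample mean to be within 0.2 SD of mu with 95% probability
print(app_sample_size(f=0.2, p=0.95))   # -> 97
```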


2015 ◽  
Vol 40 (1) ◽  
pp. 53-58 ◽  
Author(s):  
Camiel L.M. de Roij van Zuijdewijn ◽  
Menso J. Nubé ◽  
Piet M. ter Wee ◽  
Peter J. Blankestijn ◽  
Renée Lévesque ◽  
...  

Background/Aims: Treatment time is associated with survival in hemodialysis (HD) patients and with convection volume in hemodiafiltration (HDF) patients. High-volume HDF is associated with improved survival. Therefore, we investigated whether this survival benefit is explained by treatment time. Methods: Participants were subdivided into four groups: HD and tertiles of convection volume in HDF. Three Cox regression models were fitted to calculate hazard ratios (HRs) for mortality of HDF subgroups versus HD: (1) crude, (2) adjusted for confounders, (3) model 2 plus mean treatment time. As the only difference between the latter two models is treatment time, any change in HRs is due to this variable. Results: Of the 700 analyzed individuals, 114 were treated with high-volume HDF. HRs of high-volume HDF were 0.61, 0.62, and 0.64 in the three models, respectively (p values <0.05). Confidence intervals of models 2 and 3 overlapped. Conclusion: The survival benefit of high-volume HDF over HD is independent of treatment time.
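A sketch of the three-model comparison using the lifelines library in Python follows. The data file, column names, and confounder list are hypothetical, and the HDF convection-volume tertiles are simplified here to a single high-volume HDF indicator versus HD.

```python
# Sketch of the three-model comparison using the lifelines library; the data
# file, column names, and confounder list are hypothetical, and the HDF
# convection-volume tertiles are simplified to one high-volume HDF indicator.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

df = pd.read_csv("hdf_cohort.csv")   # hypothetical analysis dataset

models = {
    "model 1 (crude)": ["hv_hdf"],
    "model 2 (adjusted)": ["hv_hdf", "age", "sex", "diabetes", "dialysis_vintage"],
    "model 3 (adjusted + treatment time)": ["hv_hdf", "age", "sex", "diabetes",
                                            "dialysis_vintage", "treatment_time"],
}

for name, covariates in models.items():
    cph = CoxPHFitter()
    cph.fit(df[["time", "death"] + covariates],
            duration_col="time", event_col="death")
    hr = float(np.exp(cph.params_["hv_hdf"]))
    lo, hi = np.exp(cph.confidence_intervals_.loc["hv_hdf"])
    print(f"{name}: HR for high-volume HDF = {hr:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

Comparing the hazard ratio and its confidence interval between models 2 and 3 is what isolates the contribution of treatment time, mirroring the reasoning in the abstract.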

