Letter to the editor: A quantitative bias analysis to assess the impact of unmeasured confounding on associations between diabetes and periodontitis

Author(s):  
Eero Raittio ◽  
Gustavo G. Nascimento ◽  
Erfan Shamsoddin ◽  
Javed Ashraf
2020 ◽  
Vol 48 (1) ◽  
pp. 51-60
Author(s):  
Talal S. Alshihayb ◽  
Elizabeth A. Kaye ◽  
Yihong Zhao ◽  
Cataldo W. Leone ◽  
Brenda Heaton

Author(s):  
Samantha Wilkinson ◽  
Alind Gupta ◽  
Eric Mackay ◽  
Paul Arora ◽  
Kristian Thorlund ◽  
...  

Introduction: The German health technology assessment (HTA) rejected an additional benefit of alectinib for second-line (2L) ALK+ NSCLC, citing possible bias from missing ECOG performance status data and unmeasured confounding in the real-world evidence (RWE) for 2L ceritinib that was submitted as a comparator to the single-arm alectinib trial. Alectinib was approved in the US, so US post-launch RWE can be used to evaluate this HTA decision. Methods: We compared the real-world effectiveness of alectinib with that of ceritinib in 2L post-crizotinib ALK+ NSCLC using the nationwide Flatiron Health electronic health record (EHR)-derived de-identified database. Using quantitative bias analysis (QBA), we estimated the strength of (i) unmeasured confounding and (ii) deviation from the missing-at-random (MAR) assumption needed to nullify any overall survival (OS) benefit. Results: Alectinib had significantly longer median OS than ceritinib in the complete case analysis. The estimated effect size (hazard ratio: 0.55) was robust to unmeasured confounder-outcome and confounder-exposure associations with risk ratios below 2.4. Based on tipping point analysis, outcomes for ceritinib-treated patients with missing baseline ECOG performance status (49% missing) would need to be more than 3.4 times worse than expected under MAR to nullify the OS benefit observed for alectinib. Conclusions: Only implausible levels of bias reversed our conclusions. These methods could provide a framework for exploring uncertainty and aiding HTA decision-making, enabling patient access to innovative therapies.
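For context, a minimal sketch of how such a robustness check against unmeasured confounding can be computed, using the Ding and VanderWeele bias-factor and E-value formulas with the HR of 0.55 quoted in the abstract. This illustrates the general technique only; the authors' exact QBA specification (e.g., HR-to-risk-ratio conversion, whether the threshold targets the point estimate or the confidence bound) is not given here and may differ.

```python
import math

def bias_factor(rr_eu, rr_ud):
    # Maximum bias from an unmeasured confounder with exposure-confounder
    # risk ratio rr_eu and confounder-outcome risk ratio rr_ud
    # (Ding & VanderWeele, 2016).
    return (rr_eu * rr_ud) / (rr_eu + rr_ud - 1)

def e_value(estimate):
    # Minimum joint strength of both confounder associations needed to
    # fully explain away an observed risk/hazard ratio.
    rr = 1 / estimate if estimate < 1 else estimate  # invert protective estimates
    return rr + math.sqrt(rr * (rr - 1))

observed_hr = 0.55  # alectinib vs. ceritinib OS hazard ratio from the abstract
print(f"E-value for HR {observed_hr}: {e_value(observed_hr):.2f}")

# Shift the protective estimate toward the null under a confounder with
# both associations equal to 2.4 (the abstract's robustness threshold).
# Note: this simple point-estimate adjustment treats the HR as a risk
# ratio and need not reach exactly 1.0 at 2.4; the published threshold
# may apply to the confidence bound or a different bias model.
adjusted = observed_hr * bias_factor(2.4, 2.4)
print(f"Bias-adjusted HR (both associations = 2.4): {adjusted:.2f}")
```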


2020 ◽  
Vol 17 (1) ◽  
pp. 80-84
Author(s):  
Brigid M. Lynch ◽  
Suzanne C. Dixon-Suen ◽  
Andrea Ramirez Varela ◽  
Yi Yang ◽  
Dallas R. English ◽  
...  

Background: It is not always clear whether physical activity is causally related to health outcomes or whether the associations are induced by confounding or other biases. Randomized controlled trials of physical activity are not feasible when outcomes of interest are rare or develop over many years, so methods are needed to improve causal inference in observational physical activity studies. Methods: We outline a range of approaches that can improve causal inference in observational physical activity research and discuss the impact of measurement error on results, along with methods to minimize it. Results: Key concepts and methods described include directed acyclic graphs, quantitative bias analysis, Mendelian randomization, and potential-outcomes approaches, including propensity scores, g-methods, and causal mediation. Conclusions: We provide a brief overview of some contemporary epidemiological methods that are beginning to be used in physical activity research. Adoption of these methods will help build a stronger body of evidence for the health benefits of physical activity.
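As an illustration of one of the approaches named above, the following is a minimal sketch of propensity scores with inverse-probability weighting on simulated data. The variables (age as confounder, activity as exposure) and all effect sizes are hypothetical, chosen only to show how weighting removes the confounding that biases the naive comparison.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 10_000

# Hypothetical confounder: age influences both activity and the outcome.
age = rng.normal(55, 10, n)
p_active = 1 / (1 + np.exp(0.08 * (age - 55)))        # older -> less active
active = rng.binomial(1, p_active)
p_event = 1 / (1 + np.exp(-(0.05 * (age - 55) - 0.7 * active - 1.5)))
event = rng.binomial(1, p_event)                       # activity lowers risk

# Naive comparison (confounded: active people are also younger).
naive = event[active == 1].mean() - event[active == 0].mean()

# Propensity score: P(active | age), then inverse-probability weights.
ps = LogisticRegression().fit(age.reshape(-1, 1), active).predict_proba(
    age.reshape(-1, 1))[:, 1]
w = np.where(active == 1, 1 / ps, 1 / (1 - ps))

# Weighted risk difference approximates the marginal causal effect.
ipw = (np.sum(w * active * event) / np.sum(w * active)
       - np.sum(w * (1 - active) * event) / np.sum(w * (1 - active)))

print(f"Naive risk difference: {naive:.3f}")
print(f"IPW risk difference:   {ipw:.3f}")
```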


Author(s):  
Tammy Jiang ◽  
Jaimie L Gradus ◽  
Timothy L Lash ◽  
Matthew P Fox

Abstract Although variables are often measured with error, the impact of measurement error on machine learning predictions is seldom quantified. The purpose of this study was to assess the impact of measurement error on random forest model performance and variable importance. First, we assessed the impact of misclassification (i.e., measurement error in categorical variables) of predictors on random forest model performance (e.g., accuracy, sensitivity) and variable importance (mean decrease in accuracy) using data from the United States National Comorbidity Survey Replication (2001-2003). Second, we simulated datasets in which the true model performance and variable importance measures were known, so we could verify that quantitative bias analysis recovered the truth from misclassified versions of those datasets. Our findings show that measurement error in the data used to construct random forests can distort model performance and variable importance measures, and that bias analysis can recover the correct results. This study highlights the utility of quantitative bias analysis in machine learning for quantifying the impact of measurement error on study results.
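The core step in a misclassification bias analysis like the one described is back-calculating expected true counts from observed counts under assumed sensitivity and specificity. A minimal sketch of that step, with illustrative numbers rather than the study's actual values, is below; the authors' full procedure (propagating corrections through random forest performance and importance measures) builds on this same idea.

```python
# Quantitative bias analysis for a misclassified binary variable:
# recover the expected true positive count from the observed count,
# given assumed (nondifferential) sensitivity and specificity.

def correct_counts(obs_pos, n_total, sensitivity, specificity):
    # Observed positives = se * true_pos + (1 - sp) * (n_total - true_pos);
    # solving for true_pos gives the standard correction formula.
    return (obs_pos - (1 - specificity) * n_total) / (sensitivity + specificity - 1)

obs_pos, n_total = 3200, 10_000            # hypothetical observed counts
for se, sp in [(0.95, 0.95), (0.85, 0.90), (0.75, 0.85)]:
    tp = correct_counts(obs_pos, n_total, se, sp)
    print(f"se={se}, sp={sp}: corrected prevalence = {tp / n_total:.3f}")
```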

