Methodological limitations of comparative effectiveness research on antidepressants: a simulation study

Author(s): Astrid Chevance

2016, Vol 35 (26), pp. 4824-4836
Author(s): Xiaojuan Mi, Bradley G. Hammill, Lesley H. Curtis, Edward Chia-Cheng Lai, Soko Setoguchi

2021
Author(s): Lisong Zhang, Jim Lewsey, David McAllister

Abstract
Background: Instrumental variable (IV) analyses are used to account for unmeasured confounding in comparative effectiveness research (CER) in pharmacoepidemiology. To date, simulation studies assessing the performance of IV analyses have been based on large samples. However, in many settings, sample sizes are not large.
Objective: In this simulation study, we assess the utility of physician's prescribing preference (PPP) as an IV for moderate and smaller sample sizes.
Methods: We designed a simulation study in a CER setting with moderate (around 2,500) and small (around 600) sample sizes. The outcome and treatment variables were binary, and three variables represented confounding: a binary and a continuous variable representing measured confounding, and a further continuous variable representing unmeasured confounding. We compared the performance of IV and non-IV approaches using two-stage least squares (2SLS) and ordinary least squares (OLS), respectively. We also tested the performance of different forms of proxies for PPP as an IV.
Results: The PPP IV approach results in a percent bias of approximately 20%, while the percent bias of OLS is close to 60%. Sample size is not associated with the level of bias for the PPP IV approach; however, smaller sample sizes lead to lower statistical power for the PPP IV. Using proxies for PPP based on longer prescription histories results in stronger IVs, partly offsetting the effect of smaller sample sizes on power.
Conclusion: Irrespective of sample size, the PPP IV approach leads to less biased estimates of treatment effectiveness than conventional multivariable regression adjusting for known confounding only. Particularly for smaller sample sizes, we recommend constructing PPP from long prescribing histories to improve statistical power.
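The simulation design in the Methods can be sketched in a few lines. This is an illustrative sketch, not the authors' code: the coefficients, the logistic treatment model, the linear-probability outcome model, and the use of a single binary instrument for preference are all assumptions made here for demonstration. With one binary instrument and no covariates, the 2SLS estimate reduces to the Wald ratio, which makes the bias comparison against naive OLS easy to see.

```python
import numpy as np

def simulate(n, rng, beta=0.10):
    """One simulated cohort: binary instrument Z (prescribing preference),
    binary treatment T, binary outcome Y, unmeasured confounder U."""
    U = rng.normal(size=n)                    # unmeasured confounder
    Z = rng.binomial(1, 0.5, size=n)          # preference, independent of U
    # Treatment uptake driven by preference and confounded by U.
    pT = 1.0 / (1.0 + np.exp(-(-0.5 + 1.5 * Z + 1.0 * U)))
    T = rng.binomial(1, pT)
    # Outcome via a clipped linear-probability model; U also raises risk.
    pY = np.clip(0.20 + beta * T + 0.08 * U, 0.0, 1.0)
    Y = rng.binomial(1, pY)
    return Z, T, Y

rng = np.random.default_rng(0)
beta, reps, n = 0.10, 500, 2500               # "moderate" sample size
ols_est, iv_est = [], []
for _ in range(reps):
    Z, T, Y = simulate(n, rng, beta)
    # Naive estimate: difference in outcome means by treatment (OLS slope),
    # which absorbs the confounding by U.
    ols_est.append(Y[T == 1].mean() - Y[T == 0].mean())
    # 2SLS with a single binary instrument reduces to the Wald ratio.
    iv_est.append((Y[Z == 1].mean() - Y[Z == 0].mean())
                  / (T[Z == 1].mean() - T[Z == 0].mean()))

pct_bias = lambda est: 100.0 * (np.mean(est) - beta) / beta
print(f"percent bias  OLS: {pct_bias(ols_est):+.1f}%  IV: {pct_bias(iv_est):+.1f}%")
```

Averaging over replications, the naive estimate is substantially biased by the unmeasured confounder, while the Wald/2SLS estimate centres near the true risk difference at the cost of a much larger per-replication variance, mirroring the bias-versus-power trade-off reported in the Results.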


2012, Vol 30 (34), pp. 4223-4232
Author(s): Lisa M. McShane, Daniel F. Hayes

Clinical management decisions for patients with cancer are increasingly guided by prognostic and predictive markers. Use of these markers should rest on a sufficiently comprehensive body of unbiased evidence to establish that benefits to patients outweigh harms and to justify the expenditure of health care dollars. Careful assessments of the clinical utility of markers using comparative effectiveness research methods are urgently needed to summarize and evaluate the evidence more rigorously, but multiple factors have made such assessments difficult. The literature on tumor markers is plagued by nonpublication bias, selective reporting, and incomplete reporting. Several measures to address these problems are discussed, including development of a tumor marker study registry, greater attention to assay analytic performance and specimen quality, use of more rigorous study designs and analysis plans to establish clinical utility, and adherence to higher standards for reporting tumor marker studies. More complete and transparent reporting, by adhering to criteria such as BRISQ (Biospecimen Reporting for Improved Study Quality) for details about specimens and REMARK (Reporting Recommendations for Tumor Marker Prognostic Studies) for the many aspects of study design, analysis, and results, is essential for reliable assessment of study quality, detection of potential biases, and proper interpretation of study findings. Adopting these measures will improve the quality of the body of evidence available for comparative effectiveness research and enhance the ability to establish the clinical utility of prognostic and predictive tumor markers.

