Assessing the Performance of Physician's Prescribing Preference as an Instrumental Variable in Comparative Effectiveness Research with Moderate and Small Sample Sizes: A Simulation Study

Author(s):  
Lisong Zhang, Jim Lewsey, David McAllister

Abstract

Background: Instrumental variable (IV) analyses are used to account for unmeasured confounding in Comparative Effectiveness Research (CER) in pharmacoepidemiology. To date, simulation studies assessing the performance of IV analyses have been based on large samples. However, in many settings, sample sizes are not large.

Objective: In this simulation study, we assess the utility of Physician's Prescribing Preference (PPP) as an IV for moderate and smaller sample sizes.

Methods: We designed a simulation study in a CER setting with moderate (around 2,500) and small (around 600) sample sizes. The outcome and treatment variables were binary, and three variables represented confounding (a binary and a continuous variable representing measured confounding, and a further continuous variable representing unmeasured confounding). We compared the performance of IV and non-IV approaches using two-stage least squares (2SLS) and ordinary least squares (OLS) methods, respectively. Further, we tested the performance of different forms of proxies for PPP as an IV.

Results: The PPP IV approach resulted in a percent bias of approximately 20%, while the percent bias of OLS was close to 60%. Sample size was not associated with the level of bias for the PPP IV approach, but smaller sample sizes led to lower statistical power. Using proxies for PPP based on longer prescribing histories resulted in stronger IVs, partly offsetting the effect of smaller sample sizes on power.

Conclusion: Irrespective of sample size, the PPP IV approach leads to less biased estimates of treatment effectiveness than conventional multivariable regression adjusting for known confounding only. Particularly for smaller sample sizes, we recommend constructing PPP from long prescribing histories to improve statistical power.
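The abstract describes the simulation design but not its mechanics. The sketch below illustrates one replicate of such a study; it is not the authors' code. The coefficient values, the cluster size of ten patients per physician, and the use of the same physician's previous prescription as the PPP proxy are all illustrative assumptions. Both estimators are fit as linear probability models, so the treatment effect is on the risk-difference scale, and 2SLS is implemented manually as two least-squares stages.

```python
# Minimal sketch of one replicate: binary treatment and outcome, two
# measured confounders, one unmeasured confounder, and the physician's
# previous prescription as a proxy for prescribing preference (PPP).
import numpy as np

rng = np.random.default_rng(2024)

def ols(X, y):
    """Least-squares coefficients; column 1 of X is the treatment term."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

def one_replicate(n_physicians=250, patients_per_md=10, beta=0.10):
    n = n_physicians * patients_per_md
    c1 = rng.binomial(1, 0.5, n)              # measured binary confounder
    c2 = rng.normal(0.0, 1.0, n)              # measured continuous confounder
    u = rng.normal(0.0, 1.0, n)               # unmeasured confounder
    pref = np.repeat(rng.normal(0.0, 1.0, n_physicians), patients_per_md)

    # Treatment depends on the physician's preference and all confounders.
    p_trt = 1.0 / (1.0 + np.exp(-(pref + 0.4 * c1 + 0.4 * c2 + 0.8 * u)))
    trt = rng.binomial(1, p_trt)

    # Binary outcome with a true risk difference of `beta` for treatment.
    p_out = np.clip(0.2 + beta * trt + 0.05 * c1 + 0.05 * c2 + 0.15 * u, 0, 1)
    y = rng.binomial(1, p_out)

    # PPP proxy: the same physician's previous prescription (each
    # physician's first patient has no history and is dropped).
    trt_by_md = trt.reshape(n_physicians, patients_per_md)
    z = trt_by_md[:, :-1].ravel()             # instrument for patients 2..10
    keep = np.arange(n).reshape(n_physicians, patients_per_md)[:, 1:].ravel()
    t, yk, c1k, c2k = trt[keep], y[keep], c1[keep], c2[keep]

    ones = np.ones_like(t, dtype=float)
    X_ols = np.column_stack([ones, t, c1k, c2k])
    b_ols = ols(X_ols, yk)[1]                 # adjusts measured confounders only

    # Manual 2SLS: stage 1 predicts treatment from the instrument;
    # stage 2 regresses the outcome on the predicted treatment.
    X1 = np.column_stack([ones, z, c1k, c2k])
    t_hat = X1 @ ols(X1, t)
    X2 = np.column_stack([ones, t_hat, c1k, c2k])
    b_2sls = ols(X2, yk)[1]
    return b_ols, b_2sls

est = np.array([one_replicate() for _ in range(500)])
print("mean OLS estimate :", est[:, 0].mean())   # pulled away from 0.10 by u
print("mean 2SLS estimate:", est[:, 1].mean())   # noisier, but closer to 0.10
```

Averaging over replicates makes the pattern in the abstract visible: the OLS estimate is biased because the unmeasured confounder u is omitted, while the 2SLS estimate centers nearer the true risk difference at the cost of wider sampling variability.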

2017, Vol 28 (2), pp. 626-640
Author(s):  
Moonseong Heo, Paul Meissner, Alain H. Litwin, Julia H. Arnsten, M. Diane McKee, et al.

Comparative effectiveness research trials in real-world settings may require participants to choose between preferred intervention options. A randomized clinical trial with parallel experimental and control arms is straightforward and regarded as the gold-standard design, but by design it requires participants to comply with a randomly assigned intervention regardless of their preference. The randomized clinical trial may therefore impose impractical limitations when planning comparative effectiveness research trials. To accommodate participants' preferences when they are expressed, while maintaining randomization, we propose an alternative design that allows participants to express a preference after randomization, which we call a preference option randomized design (PORD). In contrast to other preference designs, which ask whether participants consent to the assigned intervention after randomization, the crucial feature of PORD is its unique informed consent process before randomization. Specifically, the PORD consent process informs participants that they can opt out and switch to the other intervention only if, after randomization, they actively express the desire to do so. Participants who do not independently express an explicit alternate preference, or who assent to the randomly assigned intervention, are considered to have no alternate preference. In sum, PORD is intended to maximize retention, minimize the possibility of forced assignment for any participant, and maintain randomization by allowing participants with no or equal preference to represent random assignments. This design scheme makes it possible to define five effects that are interconnected through common design parameters (comparative, preference, selection, intent-to-treat, and overall/as-treated) and that collectively guide decision making between interventions. Statistical power functions for testing all of these effects are derived, and simulations verified the validity of the power functions under normal and binomial distributions.
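As a rough companion to the power functions mentioned above, the sketch below estimates by brute-force simulation the power of a two-proportion z-test for the intent-to-treat effect under a PORD-like design with a binomial outcome. It is an assumption-laden illustration, not the paper's derivation: the response rates, the 15% per-arm switch probability, and the choice of test are all hypothetical.

```python
# Empirical power for the intent-to-treat contrast in a PORD-like trial:
# participants are analyzed as randomized, but a fraction of each arm
# opts out and receives the other intervention, diluting the effect.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(7)

def pord_itt_power(n_per_arm=200, p_a=0.55, p_b=0.40,
                   p_switch=0.15, alpha=0.05, n_sims=2000):
    """Power of a two-sided two-proportion z-test on as-randomized arms."""
    z_crit = norm.ppf(1 - alpha / 2)
    rejections = 0
    for _ in range(n_sims):
        # After randomization, some participants express a preference and
        # switch; outcomes follow the intervention actually received.
        switch_a = rng.random(n_per_arm) < p_switch
        switch_b = rng.random(n_per_arm) < p_switch
        y_a = rng.binomial(1, np.where(switch_a, p_b, p_a))
        y_b = rng.binomial(1, np.where(switch_b, p_a, p_b))
        pooled = (y_a.sum() + y_b.sum()) / (2 * n_per_arm)
        se = np.sqrt(pooled * (1 - pooled) * 2 / n_per_arm)
        if se > 0 and abs(y_a.mean() - y_b.mean()) / se > z_crit:
            rejections += 1
    return rejections / n_sims

print("approx. intent-to-treat power:", pord_itt_power())
```

Raising p_switch in this toy model shrinks the intent-to-treat contrast toward zero, which is why the paper's closed-form power functions depend on the design's preference and selection parameters rather than on the comparative effect alone.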


2014, pp. n/a-n/a
Author(s):  
Krista F. Huybrechts, Tobias Gerhard, Jessica M. Franklin, Raisa Levin, Stephen Crystal, et al.

2014, Vol 161 (2), p. 131
Author(s):  
Laura Faden Garabedian, Paula Chu, Sengwee Toh, Alan M. Zaslavsky, Stephen B. Soumerai
