Assessing the performance of population adjustment methods for anchored indirect comparisons: A simulation study
Statistics in Medicine, 2020, Vol 39 (30), pp. 4885-4911
Author(s): David M. Phillippo, Sofia Dias, A. E. Ades, Nicky J. Welton
Population adjustment methods for indirect comparisons: A review of National Institute for Health and Care Excellence technology appraisals
International Journal of Technology Assessment in Health Care, 2019, Vol 35 (3), pp. 221-228
Author(s): David M. Phillippo, Sofia Dias, Ahmed Elsada, A. E. Ades, Nicky J. Welton

Abstract
Objectives: Indirect comparisons via a common comparator (anchored comparisons) are commonly used in health technology assessment. However, common comparators may not be available, or the comparison may be biased due to differences in effect modifiers between the included studies. Recently proposed population adjustment methods aim to adjust for differences between study populations in the situation where individual patient data are available from at least one study, but not all studies. They can also be used when there is no common comparator or for single-arm studies (unanchored comparisons). We aim to characterise the use of population adjustment methods in technology appraisals (TAs) submitted to the United Kingdom National Institute for Health and Care Excellence (NICE).
Methods: We reviewed NICE TAs published between 01/01/2010 and 20/04/2018.
Results: Population adjustment methods were used in 7 percent (18/268) of TAs. Most applications used unanchored comparisons (89 percent, 16/18) and were in oncology (83 percent, 15/18). Methods used included matching-adjusted indirect comparisons (89 percent, 16/18) and simulated treatment comparisons (17 percent, 3/18). Covariates were included based on availability, expert opinion, effective sample size, statistical significance, or cross-validation. Larger treatment networks were commonplace (56 percent, 10/18), but current methods cannot account for this. Appraisal committees received results of population-adjusted analyses with caution and typically looked for greater cost effectiveness to minimise decision risk.
Conclusions: Population adjustment methods are becoming increasingly common in NICE TAs, although their impact on decisions has been limited to date. Further research is needed to improve upon current methods and to investigate their properties in simulation studies.
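For readers unfamiliar with the anchored construction discussed above, the following minimal sketch states the standard Bucher formula; the notation (d_XY for the relative effect of treatment Y versus X, such as a log odds or log hazard ratio) is assumed for illustration and is not taken from the abstract itself:

```latex
% Anchored (Bucher) indirect comparison of B versus A via common comparator C,
% formed from two independent trials, A-vs-C and B-vs-C.
% Notation d_XY (relative effect of Y versus X) is assumed for illustration.
\[
  \hat{d}_{AB} \;=\; \hat{d}_{CB} - \hat{d}_{CA},
  \qquad
  \widehat{\mathrm{Var}}\bigl(\hat{d}_{AB}\bigr)
  \;=\; \widehat{\mathrm{Var}}\bigl(\hat{d}_{CB}\bigr)
      + \widehat{\mathrm{Var}}\bigl(\hat{d}_{CA}\bigr).
\]
```

Because each estimate on the right-hand side retains its own within-trial randomization, the anchored form is unbiased as long as effect modifiers are balanced across the two trials. An unanchored comparison drops the common comparator C and contrasts absolute outcomes directly, which is why the review above treats it as the riskier case.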


Estimating the power of indirect comparisons: A simulation study
PLoS ONE, 2011, Vol 6 (1), pp. e16237
Author(s): Edward J. Mills, Isabella Ghement, Christopher O'Regan, Kristian Thorlund

Comparison of methods for estimating therapy effects by indirect comparisons: A simulation study
Medical Decision Making, 2020, Vol 40 (5), pp. 644-654
Author(s): Dorothea Weber, Katrin Jensen, Meinhard Kieser

Objective: In evidence synthesis, therapeutic options often have to be compared in the absence of head-to-head trials. Indirect comparisons are then widely used, although little is known about their performance when cross-trial differences or effect modification are present.
Methods: We contrast the matching-adjusted indirect comparison (MAIC), the simulated treatment comparison (STC), and the method of Bucher in a simulation study. The methods are evaluated in terms of power, type I error rate, coverage, bias, and root mean squared error (RMSE) of the effect estimate, for practically relevant scenarios with binary and time-to-event endpoints. In addition, we investigate how the power planned for the head-to-head trials influences the actual power of the indirect comparison.
Results: Indirect comparisons are considerably underpowered, and none of the methods performed substantially better than the others. In scenarios without cross-trial differences or effect modification, MAIC and the Bucher method gave similar results, with the Bucher method having the advantage of preserving the within-study randomization. MAIC and STC increased power in some scenarios, but at the cost of potential type I error inflation. Adjusting MAIC and STC for covariates that did not modify the effect increased bias and RMSE.
Conclusion: The choice of effect modifiers in MAIC and STC influences the precision of the indirect comparison, so effect modifiers should be selected carefully. Moreover, unrecognized differences between trials can lead to low power and, for all methods considered, substantial bias in some scenarios; results of indirect comparisons should therefore be interpreted with caution.
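As a concrete illustration of the MAIC weighting step compared above, here is a minimal sketch of the method-of-moments approach (after Signorovitch et al.): weights of the form w_i = exp(x_i' alpha) are chosen so that the weighted covariate means of the index-trial IPD match the aggregate means reported for the comparator trial. The variable names and toy data below are illustrative assumptions, not material from the study:

```python
# Minimal sketch of MAIC weighting via the method of moments.
# All names and the toy data are illustrative assumptions.
import numpy as np
from scipy.optimize import minimize

def maic_weights(X_ipd, target_means):
    """Weights w_i = exp(x_i' alpha), with alpha chosen so the weighted
    covariate means of the IPD trial equal the aggregate target means."""
    Xc = X_ipd - target_means              # centre IPD covariates at the target
    # Q(alpha) = sum_i exp(Xc_i' alpha) is convex; its gradient vanishes
    # exactly when the weighted means match the target means.
    objective = lambda a: np.exp(Xc @ a).sum()
    gradient = lambda a: Xc.T @ np.exp(Xc @ a)
    res = minimize(objective, np.zeros(Xc.shape[1]), jac=gradient, method="BFGS")
    w = np.exp(Xc @ res.x)
    return w / w.mean()                    # rescale for readability

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))              # hypothetical IPD: two standardised covariates
w = maic_weights(X, target_means=np.array([0.3, -0.1]))
print((w[:, None] * X).sum(axis=0) / w.sum())  # ~ [0.3, -0.1] after weighting
ess = w.sum() ** 2 / (w ** 2).sum()            # Kish effective sample size
print(f"ESS: {ess:.1f} of {len(w)}")
```

The effective sample size computed at the end is the standard diagnostic for how much information survives the reweighting: heavily skewed weights drive it far below the actual sample size, which is one reason the NICE review above found ESS used as a basis for covariate selection.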


2013, Vol 16 (3), pp. A48
Author(s): J. Signorovitch, R. Ayyagari, D. Cheng, E.Q. Wu
