Comparative effectiveness reviews and the impact of funding bias

2010 ◽  
Vol 63 (6) ◽  
pp. 589-590 ◽  
Author(s):  
Gerald Gartlehner ◽  
Anthony Fleg


2018 ◽
Vol 21 (4) ◽  
pp. e10-e10 ◽  
Author(s):  
Benedetto Vitiello ◽  
Chiara Davico

The systematic assessment of the efficacy and safety of psychiatric medications in children and adolescents started about 20 years ago. Since then, a considerable number of randomised clinical trials have been conducted, including a series of publicly funded comparative effectiveness studies evaluating the therapeutic benefit of medications relative to psychosocial interventions, alone or combined with medications. On the whole, these studies have been informative about the paediatric pharmacokinetics, efficacy and safety of the most commonly used psychotropics. As a consequence, a number of meta-analyses have documented both the benefits and harms of the most common medication groups, such as stimulants, antidepressants and antipsychotics. Evidence-based practice guidelines have been produced, and clinicians can now better estimate the therapeutic value and the risks of treatment, at least at the group mean level. However, most clinical trials have been conducted in research settings, which limits the generalisability of the results. There is a need to evaluate treatment effects under usual practice conditions through practical trials. The ongoing debate about the proper role of pharmacotherapy in child mental health can be advanced by comparative effectiveness research assessing the benefit/risk ratio of pharmacotherapy vis-à-vis alternative treatment modalities. In addition, analyses of large population databases can better inform us about the impact of early treatment on important distal outcomes, such as interpersonal functioning, social and occupational status, quality of life and risk of disability or mortality. Thus far, paediatric psychopharmacology has mostly been the application to children of medications that were serendipitously discovered and developed for adults. By focusing on the neurobiological mechanisms of child psychopathology, it may be possible to identify more precise pharmacological targets and arrive at a truly developmental psychopharmacology.


2018 ◽  
Vol 39 (5) ◽  
pp. 534-540 ◽  
Author(s):  
E. Yoko Furuya ◽  
Bevin Cohen ◽  
Haomiao Jia ◽  
Elaine L. Larson

OBJECTIVE: To evaluate the impact of universal contact precautions (UCP) on rates of multidrug-resistant organisms (MDROs) in intensive care units (ICUs) over 9 years.
DESIGN: Retrospective, nonrandomized observational study.
SETTING: An 800-bed adult academic medical center in New York City.
PARTICIPANTS: All patients admitted to 6 ICUs, 3 of which instituted UCP in 2007.
METHODS: Using a comparative effectiveness approach, we studied the longitudinal impact of UCP on MDRO incidence density rates, including methicillin-resistant Staphylococcus aureus, vancomycin-resistant enterococci, and carbapenem-resistant Klebsiella pneumoniae. Data were extracted from a clinical research database for 2006–2014. Monthly MDRO rates were compared between the baseline period and the UCP period using time series analyses based on generalized linear models. The same models were also used to compare MDRO rates in the 3 UCP units with rates in 3 ICUs without UCP.
RESULTS: Overall, MDRO rates decreased over time, but there was no significant decrease in the trend (slope) during the UCP period compared with the baseline period for any of the 3 intervention units. Furthermore, there was no significant difference between UCP units (6.6% decrease in MDRO rates per year) and non-UCP units (6.0% decrease per year; P=.840).
CONCLUSION: The results of this 9-year study suggest that decreases in MDROs, including multidrug-resistant gram-negative bacilli, were more likely due to hospital-wide improvements in infection prevention during this period and that UCP had no detectable additional impact.
Infect Control Hosp Epidemiol 2018;39:534–540
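For readers who want to see what "time series analyses based on generalized linear models" can look like in practice, the sketch below fits a segmented Poisson regression to hypothetical monthly MDRO counts with a patient-day offset. The file and column names (monthly_mdro_rates.csv, month_index, ucp_period, mdro_cases, patient_days) are illustrative assumptions, not the authors' actual data or code.

```python
# Hedged sketch of an interrupted time series (segmented) Poisson GLM for
# monthly MDRO incidence density. All names below are hypothetical placeholders.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("monthly_mdro_rates.csv")  # hypothetical: one row per ICU-month

# Post-intervention slope term: months elapsed since UCP started (0 before).
ucp_start = df.loc[df["ucp_period"] == 1, "month_index"].min()
df["time_since_ucp"] = (df["month_index"] - ucp_start).clip(lower=0)

# Poisson regression on monthly counts; the log(patient_days) offset converts
# counts to incidence density. ucp_period tests a level change, time_since_ucp
# tests a change in trend during the UCP period.
model = smf.glm(
    "mdro_cases ~ month_index + ucp_period + time_since_ucp",
    data=df,
    family=sm.families.Poisson(),
    offset=np.log(df["patient_days"]),
).fit()
print(model.summary())
```

A negative and significant coefficient on time_since_ucp would indicate a steeper post-intervention decline; in the study's data no such change in slope was detected.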


2019 ◽  
Vol 40 (Supplement_1) ◽  
Author(s):  
A Pawar ◽  
E Patorno ◽  
A Deruaz-Luyet ◽  
K Brodovicz ◽  
A Ustyugova ◽  
...  

Abstract
Background: Empagliflozin (EMPA) reduced the risk of hospitalization for heart failure (HHF) (HR 0.65; 95% CI 0.50–0.85) in the EMPA-REG OUTCOME trial in adults with type 2 diabetes (T2D) and established cardiovascular (CV) disease. However, the impact of initiating EMPA on healthcare resource utilization (HCRU) in routine care, in patients with or without a history of heart failure (HF), remains unexplored.
Purpose: To compare HCRU among EMPA and dipeptidyl peptidase-4 inhibitor (DPP4i) initiators with and without a history of HF at the time of treatment initiation.
Methods: We analyzed HCRU in the first two years after marketing of EMPA as part of EMPRISE, a non-interventional study of the comparative effectiveness, safety and HCRU of EMPA in T2D patients in routine care, using two US commercial and Medicare claims datasets (08/2014–09/2016). We identified a 1:1 propensity-score-matched cohort of T2D patients ≥18 years initiating either EMPA or a DPP4i, with and without baseline HF, and assessed baseline balance (365-day period) on ≥140 clinical, HCRU and cost-related covariates using absolute standardized differences. We compared the risk of first all-cause hospitalization, first HHF and first emergency department (ED) visit, as well as hospital length of stay (LOS), HF-related LOS, and numbers of hospital admissions, HF-related hospital admissions, office visits and ED visits between EMPA and DPP4i initiators.
Results: After propensity score matching, we identified 2,050 pairs with HF and 15,428 pairs without HF across the three datasets, with mean follow-up of 5.2 and 5.4 months, respectively. All baseline characteristics were well balanced (absolute standardized differences <0.1). Compared with patients without a history of HF, patients with HF were older (65 vs 58 years), more often female (51% vs 46%) and more likely to have a CV history (64% vs 19%) (Table 1). Compared with DPP4i, the hazard ratio (HR) for first hospitalization was 0.68 (95% CI 0.56–0.83) for EMPA initiators with HF and 0.89 (95% CI 0.80–1.00) for initiators without HF. The risk of HF-related hospitalization and ED visits was lower in EMPA initiators with prior HF [HR 0.53 (0.38–0.74) and HR 0.73 (0.58–0.93), respectively] and without prior HF [HR 0.45 (0.27–0.73) and HR 0.82 (0.70–0.95), respectively]. Compared with DPP4i initiators, EMPA initiators with and without baseline HF had fewer all-cause hospital admissions [incidence rate ratio (IRR) 0.59 (0.50–0.70) and IRR 0.78 (0.71–0.85), respectively] and fewer HF-related hospital admissions [IRR 0.49 (0.37–0.65) and IRR 0.34 (0.22–0.53), respectively]. In-hospital days and HF-related in-hospital days per member per year (PMPY) were lower in patients with and without a history of HF initiating EMPA than in those initiating a DPP4i (Table 1).
Conclusions: In this interim analysis of EMPRISE, both overall and HF-related HCRU were lower among patients with and without HF who initiated EMPA compared with DPP4i initiators.
Acknowledgement/Funding: This study was supported by a research grant to the Brigham and Women's Hospital from Boehringer Ingelheim.
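As an illustration of the matching and balance-checking steps described in the Methods, the sketch below performs greedy 1:1 nearest-neighbour propensity-score matching and computes absolute standardized differences. The input file, column names, and the simplified matching approach (with replacement, no caliper) are assumptions for demonstration, not the EMPRISE analysis code.

```python
# Hedged sketch: propensity-score matching of EMPA vs DPP4i initiators and a
# balance check via absolute standardized differences. Names are hypothetical.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

df = pd.read_csv("cohort.csv")  # hypothetical: one row per patient, numeric covariates
covariates = [c for c in df.columns if c not in ("patient_id", "empa")]

# 1. Estimate the propensity of initiating EMPA (vs DPP4i) from baseline covariates.
ps_model = LogisticRegression(max_iter=1000).fit(df[covariates], df["empa"])
df["ps"] = ps_model.predict_proba(df[covariates])[:, 1]

# 2. Greedy 1:1 match of each EMPA initiator to the nearest DPP4i initiator on the score.
treated = df[df["empa"] == 1]
control = df[df["empa"] == 0]
nn = NearestNeighbors(n_neighbors=1).fit(control[["ps"]])
_, idx = nn.kneighbors(treated[["ps"]])
matched = pd.concat([treated, control.iloc[idx.ravel()]])

# 3. Absolute standardized difference per covariate; values < 0.1 suggest balance.
def abs_std_diff(x_t, x_c):
    pooled_sd = np.sqrt((x_t.var(ddof=1) + x_c.var(ddof=1)) / 2)
    return abs(x_t.mean() - x_c.mean()) / pooled_sd

balance = {c: abs_std_diff(matched.loc[matched["empa"] == 1, c],
                           matched.loc[matched["empa"] == 0, c])
           for c in covariates}
print(pd.Series(balance).sort_values(ascending=False).head(10))
```

Production analyses typically match without replacement within a caliper and handle categorical covariates explicitly; this sketch only shows the core idea behind the ≥140-covariate balance assessment.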


2019 ◽  
Vol 90 (4) ◽  
pp. 469-473 ◽  
Author(s):  
Kevin K Kumar ◽  
Geoffrey Appelboom ◽  
Layton Lamsam ◽  
Arthur L Caplan ◽  
Nolan R Williams ◽  
...  

Background: The safety and efficacy of neuroablation (ABL) and deep brain stimulation (DBS) for treatment-refractory obsessive-compulsive disorder (OCD) have not been directly compared. This study sought to generate a definitive comparative effectiveness model of these therapies.
Methods: An EMBASE/PubMed search of English-language, peer-reviewed articles reporting ABL and DBS for OCD was performed in January 2018. Change in quality of life (QOL) was quantified based on the Yale-Brown Obsessive Compulsive Scale (Y-BOCS), and the impact of complications on QOL was assessed. Mean Y-BOCS response was determined using random-effects, inverse-variance weighted meta-analysis of observational data.
Findings: Across 56 studies totalling 681 cases (367 ABL; 314 DBS), ABL exhibited greater overall utility than DBS. The pooled reduction in Y-BOCS scores was 50.4% (±22.7%) for ABL and 40.9% (±13.7%) for DBS. Meta-regression revealed no significant change in per cent improvement in Y-BOCS scores over the length of follow-up for either ABL or DBS. Adverse events occurred in 43.6% (±4.2%) of ABL cases and 64.6% (±4.1%) of DBS cases (p<0.001). Complications reduced ABL utility by 72.6% (±4.0%) and DBS utility by 71.7% (±4.3%). ABL utility (0.189±0.03) was superior to DBS utility (0.167±0.04) (p<0.001).
Interpretation: Overall, ABL utility was greater than that of DBS, with ABL showing a greater per cent improvement in Y-BOCS than DBS. These findings help guide success thresholds in future clinical trials for treatment-refractory OCD.
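A minimal sketch of the pooling step described in the Methods, assuming a DerSimonian-Laird random-effects, inverse-variance weighted model over study-level means and standard errors; the numbers in the example are made up, not the 56 studies analysed here.

```python
# Hedged sketch: DerSimonian-Laird random-effects, inverse-variance pooling of
# per-study mean % Y-BOCS improvement. Inputs are hypothetical placeholders.
import numpy as np

def random_effects_pool(means, ses):
    """Return pooled mean and its standard error under a DL random-effects model."""
    means = np.asarray(means, dtype=float)
    ses = np.asarray(ses, dtype=float)
    w = 1.0 / ses**2                          # fixed-effect (inverse-variance) weights
    fixed = np.sum(w * means) / np.sum(w)
    q = np.sum(w * (means - fixed) ** 2)      # Cochran's Q heterogeneity statistic
    dof = len(means) - 1
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (q - dof) / c)            # between-study variance estimate
    w_re = 1.0 / (ses**2 + tau2)              # random-effects weights
    pooled = np.sum(w_re * means) / np.sum(w_re)
    return pooled, np.sqrt(1.0 / np.sum(w_re))

# Example with invented study-level results (% Y-BOCS improvement, standard error):
pooled, se = random_effects_pool([52.0, 44.5, 38.0], [6.0, 5.0, 7.5])
print(f"pooled improvement = {pooled:.1f}% (SE {se:.1f})")
```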


2019 ◽  
Vol 38 (9) ◽  
pp. 2289-2294
Author(s):  
Clemens M. Rosenbaum ◽  
Tina Pham ◽  
Roland Dahlem ◽  
Valentin Maurer ◽  
Philip Marks ◽  
...  
