The impact of surgical strategy and rifampin on treatment outcome in Cutibacterium periprosthetic joint infections

Author(s):  
Yvonne Achermann ◽  
Katharina Kusejko ◽  
Álvaro Auñón ◽  
Martin Clauss ◽  
Stéphane Corvec ◽  
...  

Abstract Background Cutibacterium species are common pathogens in periprosthetic joint infections (PJI). These infections are often treated with β-lactams or clindamycin as monotherapy, or in combination with rifampin. Clinical evidence supporting the value of adding rifampin for treatment of Cutibacterium PJI is lacking. Materials/methods In this multicenter retrospective study, we evaluated patients with Cutibacterium PJI. The primary endpoint was clinical success, defined by the absence of infection relapse or new infection within a minimum follow-up of 12 months. We used Fisher’s exact tests and Cox proportional hazards models to analyze the effect of rifampin and other factors on clinical success after PJI. Results We included 187 patients (72.2% male, median age 67 years) with a median follow-up of 36 months. The surgical intervention was two-stage exchange in 95 (50.8%), one-stage exchange in 51 (27.3%), debridement and implant retention (DAIR) in 34 (18.2%), and explantation without reimplantation in 7 (3.7%). Rifampin was included in the antibiotic regimen in 81 (43.3%) cases. Infection relapse occurred in 28 (15.0%) and new infection in 13 (7.0%) cases. In the time-to-event analysis, DAIR (adjusted HR=2.15, p=0.03) and antibiotic treatment over 6 weeks (adjusted HR=0.29, p=0.0002) significantly influenced treatment failure. We observed tentative evidence for a beneficial effect of adding rifampin to the antibiotic treatment, though it was not statistically significant for treatment failure (adjusted HR=0.5, p=0.07) or for relapse (adjusted HR=0.5, p=0.10). Conclusions We conclude that a rifampin combination is not markedly superior in Cutibacterium PJI, but a dedicated prospective multicenter study is needed.
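The unadjusted comparisons in this abstract rest on Fisher's exact test for 2×2 tables (e.g., rifampin vs. no rifampin against failure vs. success). A minimal stdlib-only sketch of the common two-sided convention (summing all tables with the same margins that are no more probable than the one observed); the counts in the example are illustrative, not the study's data:

```python
from math import comb

def hypergeom_pmf(k, row1, row2, col1):
    """P(k of the col1 column-1 outcomes fall in row 1), all margins fixed."""
    return comb(row1, k) * comb(row2, col1 - k) / comb(row1 + row2, col1)

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher's exact p-value for the 2x2 table [[a, b], [c, d]]:
    sum the probabilities of every table (same margins) that is no more
    probable than the observed one."""
    row1, row2, col1 = a + b, c + d, a + c
    p_obs = hypergeom_pmf(a, row1, row2, col1)
    lo, hi = max(0, col1 - row2), min(row1, col1)
    return sum(p for k in range(lo, hi + 1)
               if (p := hypergeom_pmf(k, row1, row2, col1)) <= p_obs * (1 + 1e-9))

# Hypothetical counts (NOT the study's data): failures vs. successes,
# with rifampin vs. without rifampin.
p_value = fisher_exact_two_sided(10, 71, 18, 88)
```

Note that two-sided conventions differ between implementations; this one matches the "sum of no-more-probable tables" rule used by most statistics packages.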

RMD Open ◽  
2019 ◽  
Vol 5 (2) ◽  
pp. e001015 ◽  
Author(s):  
Fernando Pérez Ruiz ◽  
Pascal Richette ◽  
Austin G Stack ◽  
Ravichandra Karra Gurunath ◽  
Ma Jesus García de Yébenes ◽  
...  

Objective: To determine the impact of achieving serum uric acid (sUA) of <0.36 mmol/L on overall and cardiovascular (CV) mortality in patients with gout. Methods: Prospective cohort of patients with gout recruited from 1992 to 2017. Exposure was defined as the average sUA recorded during the first year of follow-up, dichotomised as ≤ or >0.36 mmol/L. Bivariate and multivariate Cox proportional hazards models were used to determine mortality risks, expressed as HRs with 95% CIs. Results: Of 1193 patients, 92% were men, with a mean age of 60 years, 6.8 years’ disease duration, an average of three to four flares in the previous year, a mean sUA of 9.1 mg/dL at baseline and a mean follow-up of 48 months; 158 died. Crude mortality rates were significantly higher for an sUA of ≥0.36 mmol/L, 80.9 per 1000 patient-years (95% CI 59.4 to 110.3), than for an sUA of <0.36 mmol/L, 25.7 per 1000 patient-years (95% CI 21.3 to 30.9). After adjustment for age, sex, CV risk factors, previous CV events, observation period and baseline sUA concentration, an sUA of ≥0.36 mmol/L was associated with elevated overall mortality (HR=2.33, 95% CI 1.60 to 3.41) and CV mortality (HR=2.05, 95% CI 1.21 to 3.45). Conclusions: Failure to reach a target sUA level of <0.36 mmol/L in patients with hyperuricaemia of gout is an independent predictor of overall and CV-related mortality. Targeting sUA levels of <0.36 mmol/L should be a principal goal in these high-risk patients in order to reduce CV events and to extend patient survival.
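The crude mortality rates reported here are deaths per 1000 patient-years. A minimal sketch of the computation with a log-normal approximate CI (the exact Poisson limits journals usually report differ slightly); the counts below are made up for illustration, not taken from the cohort:

```python
from math import exp, sqrt

def rate_per_1000_py(events, person_years, z=1.96):
    """Crude event rate per 1000 patient-years with a log-normal
    approximate 95% CI. Assumes the event count is Poisson, so the
    SE of log(rate) is 1/sqrt(events)."""
    rate = 1000.0 * events / person_years
    se_log = 1.0 / sqrt(events)
    return rate, rate * exp(-z * se_log), rate * exp(z * se_log)

# Illustrative counts only: 50 deaths over 618 patient-years.
rate, ci_low, ci_high = rate_per_1000_py(events=50, person_years=618.0)
```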


2020 ◽  
pp. bjophthalmol-2020-316617
Author(s):  
Samuel Berchuck ◽  
Alessandro Jammal ◽  
Sayan Mukherjee ◽  
Tamara Somers ◽  
Felipe A Medeiros

Aims: To assess the impact of anxiety and depression on the risk of converting to glaucoma in a cohort of glaucoma suspects followed over time. Methods: The study included a retrospective cohort of subjects with a diagnosis of glaucoma suspect at baseline, extracted from the Duke Glaucoma Registry. The presence of anxiety and depression was defined based on electronic health record billing codes, medical history and problem list. Univariable and multivariable Cox proportional hazards models were used to obtain HRs for the risk of converting to glaucoma over time. Multivariable models were adjusted for age, gender, race, intraocular pressure measurements over time and disease severity at baseline. Results: A total of 3259 glaucoma suspects followed for a mean (SD) of 3.60 (2.05) years were included in our cohort, of whom 911 (28%) were diagnosed with glaucoma during follow-up. Prevalence of anxiety and depression was 32% and 33%, respectively. Diagnoses of anxiety, or of concomitant anxiety and depression, were significantly associated with the risk of converting to glaucoma over time, with adjusted HRs (95% CI) of 1.16 (1.01, 1.33) and 1.27 (1.07, 1.50), respectively. Conclusion: A history of anxiety, or of both anxiety and depression, in glaucoma suspects was associated with developing glaucoma during follow-up.


2016 ◽  
Vol 43 (2) ◽  
pp. 104-111 ◽  
Author(s):  
Dandara N. Spigolon ◽  
Thyago P. de Moraes ◽  
Ana E. Figueiredo ◽  
Ana Paula Modesto ◽  
Pasqual Barretti ◽  
...  

Background: Structured pre-dialysis care is associated with increased peritoneal dialysis (PD) utilization, but its effect on peritonitis risk, technique survival and patient survival is less clear. This study aimed at analyzing the impact of pre-dialysis care on these outcomes. Methods: All incident patients starting PD between 2004 and 2011 in a Brazilian prospective cohort were included in this analysis. Patients were divided into 2 groups: early pre-dialysis care (at least 90 days of follow-up by a nephrology team) and late pre-dialysis care (absent or less than 90 days of follow-up). The socio-demographic, clinical and biochemical characteristics of the 2 groups were compared. Risk factors for time to the first peritonitis episode, technique failure and mortality were assessed using Cox proportional hazards models. Results: A total of 4,107 patients were included. Compared with late care, patients with early pre-dialysis care differed in gender (female - 47.0 vs. 51.1%, p = 0.01), race (white - 63.8 vs. 71.7%, p < 0.01) and education (<4 years - 61.9 vs. 71.0%, p < 0.01). Patients with early pre-dialysis care presented a higher prevalence of comorbidities and lower levels of creatinine, phosphorus and glucose, with significantly better control of hemoglobin and potassium serum levels. There was no impact of pre-dialysis care on peritonitis rates (hazard ratio (HR) 0.88; 95% CI 0.77-1.01) or technique survival (HR 1.12; 95% CI 0.92-1.36). Patient survival (HR 1.20; 95% CI 1.03-1.41) was better in the early pre-dialysis care group. Conclusion: Earlier pre-dialysis care was associated with improved patient survival, but did not influence time to the first peritonitis or technique survival in this national PD cohort.


2018 ◽  
Vol 2 (20) ◽  
pp. 2681-2690 ◽  
Author(s):  
Nikolai A. Podoltsev ◽  
Mengxin Zhu ◽  
Amer M. Zeidan ◽  
Rong Wang ◽  
Xiaoyi Wang ◽  
...  

Abstract Current guidelines recommend therapeutic phlebotomy for all polycythemia vera (PV) patients and additional cytoreductive therapy (eg, hydroxyurea [HU]) for high-risk PV patients. Little is known about the impact of these therapies in the real-world setting. We conducted a retrospective cohort study of older adults diagnosed with PV from 2007 to 2013 using the linked Surveillance, Epidemiology, and End Results–Medicare database. Multivariable Cox proportional hazards models were used to assess the effect of phlebotomy and HU on overall survival (OS) and the occurrence of thrombotic events. Of 820 PV patients (median age = 77 years), 16.3% received neither phlebotomy nor HU, 23.0% were managed with phlebotomy only, 19.6% with HU only, and 41.1% with both treatments. After a median follow-up of 2.83 years, 37.2% (n = 305) of the patients died. Phlebotomy (yes/no; hazard ratio [HR] = 0.65; 95% confidence interval [CI], 0.51-0.81; P < .01), increasing phlebotomy intensity (HR = 0.71; 95% CI, 0.65-0.79; P < .01), and a higher proportion of days covered (PDC) by HU were all significantly associated with lower mortality. When thrombosis was the outcome of interest, phlebotomy (yes/no; HR = 0.52; 95% CI, 0.42-0.66; P < .01) and increasing phlebotomy intensity (HR = 0.46; 95% CI, 0.29-0.74; P < .01) were significantly associated with a lower risk of thrombotic events, as was a higher HU PDC. In this population-based study of older adults with PV reflecting contemporary clinical practice, phlebotomy and HU were associated with improved OS and a decreased risk of thrombosis. However, both treatment modalities were underused in this cohort of older PV patients.
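The proportion of days covered (PDC) used here to grade HU exposure is conventionally computed from pharmacy claims as the fraction of observation days covered by dispensed supply. A minimal sketch under that common definition (the study's exact claims-handling rules are not reproduced here); the fills in the example are hypothetical:

```python
def proportion_of_days_covered(fills, window_start, window_end):
    """PDC = fraction of days in [window_start, window_end) covered by at
    least one dispensed supply. `fills` is a list of (start_day, days_supply)
    integer tuples; overlapping fills count each day only once."""
    covered = set()
    for start, days_supply in fills:
        for day in range(start, start + days_supply):
            if window_start <= day < window_end:
                covered.add(day)
    return len(covered) / (window_end - window_start)

# Hypothetical HU claims: two 30-day fills with a 10-day gap, 90-day window.
pdc = proportion_of_days_covered([(0, 30), (40, 30)], 0, 90)
```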


2021 ◽  
pp. 1-9
Author(s):  
Leonard Naymagon ◽  
Douglas Tremblay ◽  
John Mascarenhas

Data supporting the use of etoposide-based therapy in hemophagocytic lymphohistiocytosis (HLH) arise largely from pediatric studies. There is a lack of comparable data among adult patients with secondary HLH. We conducted a retrospective study to assess the impact of etoposide-based therapy on outcomes in adult secondary HLH. The primary outcome was overall survival. The log-rank test was used to compare Kaplan-Meier distributions of time-to-event outcomes. Multivariable Cox proportional hazards modeling was used to estimate adjusted hazard ratios (HRs) with 95% confidence intervals (CIs). Ninety adults with secondary HLH seen between January 1, 2009, and January 6, 2020, were included. Forty-two patients (47%) received etoposide-based therapy, while 48 (53%) received treatment only for their inciting proinflammatory condition. Thirty-three patients in the etoposide group (72%) and 32 in the no-etoposide group (67%) died during follow-up. Median survival in the etoposide and no-etoposide groups was 1.04 and 1.39 months, respectively. There was no significant difference in survival between the etoposide and no-etoposide groups (log-rank p = 0.4146). On multivariable analysis, there was no association between treatment with etoposide and survival (HR for death with etoposide = 1.067, 95% CI: 0.633–1.799, p = 0.8084). Use of etoposide-based therapy was not associated with improvement in outcomes in this large cohort of adult secondary HLH patients.
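The survival comparison above is based on Kaplan-Meier estimates. A stdlib-only sketch of the product-limit estimator on toy data (the log-rank test itself is omitted for brevity):

```python
def kaplan_meier(times, events):
    """Product-limit survival estimate. `events[i]` is 1 for death and 0
    for censoring at `times[i]`. Returns (time, S(t)) at each death time."""
    data = sorted(zip(times, events))
    n_at_risk, surv, steps = len(data), 1.0, []
    i = 0
    while i < len(data):
        t, deaths, leaving = data[i][0], 0, 0
        while i < len(data) and data[i][0] == t:  # group ties at time t
            deaths += data[i][1]
            leaving += 1
            i += 1
        if deaths:
            surv *= 1.0 - deaths / n_at_risk
            steps.append((t, surv))
        n_at_risk -= leaving
    return steps

# Toy follow-up in months: deaths at 1.0, 1.4 and 3.0; censored at 2.0 and 5.0.
curve = kaplan_meier([1.0, 1.4, 2.0, 3.0, 5.0], [1, 1, 0, 1, 0])
```

Median survival, as reported in the abstract, is then the first time at which the estimated S(t) drops to 0.5 or below.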


Author(s):  
Ma Cherrysse Ulsa ◽  
Xi Zheng ◽  
Peng Li ◽  
Arlen Gaba ◽  
Patricia M Wong ◽  
...  

Abstract Background Delirium is a distressing neurocognitive disorder recently linked to sleep disturbances. However, the longitudinal relationship between sleep and delirium remains unclear. This study assessed the associations of poor sleep burden, and its trajectory, with delirium risk during hospitalization. Methods 321,818 participants from the UK Biobank (mean age 58±8y [SD]; range 37-74y) reported (2006-2010) sleep traits (sleep duration, excessive daytime sleepiness, insomnia-type complaints, napping, and chronotype – a closely related circadian measure of sleep timing), aggregated into a sleep burden score (0-9). New-onset delirium (n=4,775) was obtained from hospitalization records during a 12y median follow-up. 42,291 participants (mean age 64±8y; range 44-83y) had a repeat sleep assessment on average 8y after their first. Results In the baseline cohort, Cox proportional hazards models showed that the moderate (aggregate scores=4-5) and severe (scores=6-9) poor sleep burden groups were 18% (hazard ratio 1.18 [95% confidence interval 1.08-1.28], p<0.001) and 57% (1.57 [1.38-1.80], p<0.001) more likely to develop delirium, respectively. The latter risk magnitude is equivalent to that of two additional cardiovascular risk factors. These findings appeared robust when restricted to postoperative delirium and after exclusion of underlying dementia. Higher sleep burden was also associated with delirium in the follow-up cohort. A worsening sleep burden (score increase ≥2 vs. no change) further increased the risk for delirium (1.79 [1.23-2.62], p=0.002), independent of baseline sleep score and time lag. The risk was highest in those under 65y at baseline (p for interaction <0.001). Conclusion Poor sleep burden and a worsening trajectory were associated with increased risk for delirium; promotion of sleep health may be important for those at higher risk.
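The study aggregates five self-reported traits into a 0-9 sleep burden score and groups it as moderate (4-5) or severe (6-9). The point values below are assumptions for illustration only; only the 0-9 range and the group cutoffs come from the abstract:

```python
# Assumed per-trait points (hypothetical); the study's actual scoring
# rules are not reproduced here.
POINTS = {
    "short_or_long_sleep": 2,
    "daytime_sleepiness": 2,
    "insomnia_complaints": 2,
    "frequent_napping": 2,
    "late_chronotype": 1,
}

def sleep_burden(reported_traits):
    """Aggregate reported traits into a 0-9 burden score."""
    return sum(POINTS[t] for t in reported_traits)

def burden_group(score):
    """Group a burden score using the abstract's cutoffs (4-5, 6-9)."""
    if score >= 6:
        return "severe"
    if score >= 4:
        return "moderate"
    return "low"
```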


2021 ◽  
Vol 8 ◽  
Author(s):  
Augusto Di Castelnuovo ◽  
Simona Costanzo ◽  
Andrea Antinori ◽  
Nausicaa Berselli ◽  
Lorenzo Blandi ◽  
...  

Background: Protease inhibitors have been considered as possible therapeutic agents for COVID-19 patients. Objectives: To describe the association between lopinavir/ritonavir (LPV/r) or darunavir/cobicistat (DRV/c) use and in-hospital mortality in COVID-19 patients. Study Design: Multicenter observational study of COVID-19 patients admitted to 33 Italian hospitals. Medications, preexisting conditions, clinical measures, and outcomes were extracted from medical records. Patients were retrospectively divided into three groups according to use of LPV/r, DRV/c or neither. The primary outcome in a time-to-event analysis was death. We used Cox proportional hazards models with inverse probability of treatment weighting by multinomial propensity scores. Results: Of 3,451 patients, 33.3% received LPV/r and 13.9% received DRV/c. Patients receiving LPV/r or DRV/c were more likely to be younger and male and to have higher C-reactive protein levels, and less likely to have hypertension, cardiovascular, pulmonary or kidney disease. After adjustment for propensity scores, LPV/r use was not associated with mortality (HR = 0.94, 95% CI 0.78 to 1.13), whereas treatment with DRV/c was associated with a higher death risk (HR = 1.89, 1.53 to 2.34, E-value = 2.43). This increased risk was more marked in women, in the elderly, in patients with greater COVID-19 severity and in patients receiving other COVID-19 drugs. Conclusions: In a large cohort of Italian patients hospitalized for COVID-19 in a real-life setting, LPV/r treatment did not change the death rate, while DRV/c was associated with increased mortality. Within the limits of an observational study, these data do not support the use of LPV/r or DRV/c in COVID-19 patients.
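Inverse probability of treatment weighting with multinomial propensity scores, as used here, weights each patient by the inverse of the estimated probability of the arm actually received. A minimal sketch that takes the propensities as given (in practice they would come from a multinomial logistic model fitted on the covariates); the toy propensities are assumptions:

```python
def iptw_weights(arms_received, propensities, stabilized=True):
    """Inverse-probability-of-treatment weights for a multi-arm comparison.
    `propensities[i]` maps arm -> estimated P(arm | covariates) for
    patient i; each weight is 1 / P(arm actually received), optionally
    stabilized by the marginal frequency of that arm."""
    n = len(arms_received)
    marginal = {arm: arms_received.count(arm) / n for arm in set(arms_received)}
    weights = []
    for arm, ps in zip(arms_received, propensities):
        w = 1.0 / ps[arm]
        if stabilized:
            w *= marginal[arm]
        weights.append(w)
    return weights

# Toy 3-arm example with assumed (not fitted) propensities.
arms = ["none", "LPV/r", "DRV/c", "none"]
ps = [{"none": 0.5, "LPV/r": 0.3, "DRV/c": 0.2}] * 4
weights = iptw_weights(arms, ps)
```

Stabilized weights keep the pseudo-population the same size as the original cohort, which is why they are usually preferred over raw inverse-propensity weights.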


2021 ◽  
Vol 39 (15_suppl) ◽  
pp. 4578-4578
Author(s):  
Bradley Alexander McGregor ◽  
Daniel M. Geynisman ◽  
Mauricio Burotto ◽  
Camillo Porta ◽  
Cristina Suarez Rodriguez ◽  
...  

4578 Background: Nivolumab in combination with cabozantinib (N+C) has demonstrated significantly improved progression-free survival (PFS), objective response rate (ORR), and overall survival (OS) compared with sunitinib as a first-line (1L) treatment for aRCC in the phase 3 CheckMate (CM) 9ER trial. As there are no head-to-head trials comparing N+C with pembrolizumab in combination with axitinib (P+A), this study compared the efficacy of N+C with P+A as 1L treatment in aRCC. Methods: An MAIC was conducted using individual patient data on N+C (N = 323) from the CM 9ER trial (median follow-up: 23.5 months) and published data on P+A (N = 432) from the KEYNOTE (KN)-426 trial of P+A (median follow-up: 30.6 months). Individual patients within the CM 9ER trial population were reweighted to match the key patient characteristics published for the KN-426 trial, including age, gender, previous nephrectomy, International Metastatic RCC Database Consortium risk score, and sites of metastasis. After weighting, hazard ratios (HRs) for PFS, duration of response (DoR), and OS comparing N+C vs. P+A were estimated using weighted Cox proportional hazards models, and ORR was compared using a weighted Wald test. All comparisons were conducted using the corresponding sunitinib arms as an anchor. Results: After weighting, patient characteristics in the CM 9ER trial were comparable to those in the KN-426 trial. In the weighted population, N+C had a median PFS of 19.3 months (95% CI: 15.2, 22.4) compared to a median PFS of 15.7 months (95% CI: 13.7, 20.6) for P+A. Using sunitinib as an anchor arm, N+C was associated with a 30% reduction in the risk of progression or death compared to P+A (HR: 0.70, 95% CI: 0.53, 0.93; P = 0.015; table). In addition, N+C was associated with a numerically, although not statistically significantly, higher improvement in ORR vs. sunitinib (difference: 8.4%, 95% CI: -1.7%, 18.4%; P = 0.105) and an improved DoR (HR: 0.79; 95% CI: 0.47, 1.31; P = 0.359).
Similar OS outcomes were observed for N+C and P+A (HR: 0.99; 95% CI: 0.67, 1.44; P = 0.940). Conclusions: After adjusting for cross-trial differences, N+C had a more favorable efficacy profile compared to P+A, including statistically significant PFS benefits, numerically improved ORR and DoR, and similar OS. [Table: see text]
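A matching-adjusted indirect comparison (MAIC), as described in the Methods above, reweights individual patient data so its covariate summaries match the comparator trial's published aggregates, conventionally via exponential-tilting weights exp(a·x). A one-covariate sketch that solves for the tilt by bisection; the age values and target mean are hypothetical, and a real MAIC matches several covariates at once via optimization:

```python
from math import exp

def maic_weights(x, target_mean, lo=-10.0, hi=10.0, tol=1e-12):
    """One-covariate MAIC sketch: weights exp(a * x_i), with `a` found by
    bisection so the weighted mean of x equals the comparator trial's
    published `target_mean`. Values are centered first so exp() stays
    numerically stable."""
    xbar = sum(x) / len(x)
    xc = [xi - xbar for xi in x]
    tc = target_mean - xbar
    assert min(xc) < tc < max(xc), "target must lie inside the IPD range"

    def tilted_mean(a):
        w = [exp(a * v) for v in xc]
        return sum(wi * v for wi, v in zip(w, xc)) / sum(w)

    while hi - lo > tol:               # tilted_mean is increasing in a
        mid = (lo + hi) / 2
        if tilted_mean(mid) < tc:
            lo = mid
        else:
            hi = mid
    a = (lo + hi) / 2
    return [exp(a * v) for v in xc]

# Hypothetical ages in the IPD trial, matched to an assumed published mean.
ages = [50.0, 55.0, 60.0, 65.0, 70.0, 75.0, 80.0]
w = maic_weights(ages, target_mean=62.0)
matched_mean = sum(wi * a for wi, a in zip(w, ages)) / sum(w)
```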


Stroke ◽  
2020 ◽  
Vol 51 (Suppl_1) ◽  
Author(s):  
Adam H de Havenon ◽  
Ka-Ho Wong ◽  
Eva Mistry ◽  
Mohammad Anadani ◽  
Shadi Yaghi ◽  
...  

Background: Increased blood pressure variability (BPV) has been associated with stroke risk, but never specifically in patients with diabetes. Methods: This is a secondary analysis of the Action to Control Cardiovascular Risk in Diabetes Follow-On Study (ACCORDION), the long-term follow-up extension of ACCORD. Visit-to-visit BPV was analyzed using all BP readings during the first 36 months. The primary outcome was incident ischemic or hemorrhagic stroke after 36 months. Differences in mean BPV were tested with Student’s t-test. We fit Cox proportional hazards models to estimate the adjusted risk of stroke across the lowest vs. highest quintile of BPV and report hazard ratios along with 95% confidence intervals (CIs). Results: Our analysis included 9,241 patients with a mean (SD) age of 62.7 (6.6) years; 61.7% were male. Mean (SD) follow-up was 5.7 (2.4) years, and the number of BP readings per patient was 12.0 (4.3). Systolic, but not diastolic, BPV was higher in patients who developed stroke (Table 1). The highest quintile of SBP SD was associated with an increased risk of incident stroke, independent of mean blood pressure and other potential confounders (Table 2, Figure 1). There was no interaction between SBP SD and treatment arm assignment, although the interaction for glucose approached significance (Table 2). Conclusion: Higher systolic BPV was associated with incident stroke in a large cohort of diabetic patients. Future trials of stroke prevention may benefit from interventions targeting BPV reduction.
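Visit-to-visit BPV here is summarized as the standard deviation of each patient's SBP readings over the exposure window, then compared across quintiles. A minimal sketch, assuming a simple rank-based quintile assignment (the trial's exact binning rules are not specified here); the readings are toy values:

```python
from statistics import stdev

def visit_to_visit_bpv(sbp_by_patient):
    """Visit-to-visit BPV: sample SD of each patient's SBP readings
    (patients with fewer than 2 readings are dropped)."""
    return {pid: stdev(r) for pid, r in sbp_by_patient.items() if len(r) >= 2}

def quintile(value, all_values):
    """Rank-based quintile (1-5) of `value` within `all_values`."""
    below = sum(v < value for v in all_values)
    return min(5, below * 5 // len(all_values) + 1)

# Toy SBP readings (mmHg) over the 36-month exposure window.
bpv = visit_to_visit_bpv({"p1": [120, 130, 125], "p2": [150, 118, 142]})
```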


2020 ◽  
Vol 14 (10) ◽  
pp. 1354-1363 ◽  
Author(s):  
Laura E Targownik ◽  
Eric I Benchimol ◽  
Charles N Bernstein ◽  
Harminder Singh ◽  
Aruni Tennakoon ◽  
...  

Abstract Background and Aims The combination of infliximab and azathioprine is more efficacious than either therapy alone for Crohn’s disease [CD] and ulcerative colitis [UC]. However, it is uncertain whether these benefits extend to real-world clinical practice and to other combinations of biologics and immunomodulators. Methods We collected health administrative data from four Canadian provinces representing 78 413 patients with inflammatory bowel disease [IBD] of whom 11 244 were prescribed anti-tumour necrosis factor [anti-TNF] agents. The outcome of interest was the first occurrence of treatment failure: an unplanned IBD-related hospitalization, IBD-related resective surgery, new/recurrent corticosteroid use or anti-TNF switch. Multivariable Cox proportional hazards modelling was used to assess the association between the outcome of interest and receiving combination therapy vs anti-TNF monotherapy. Multivariable regression models were used to assess the impact of choice of immunomodulator or biologic on reaching the composite outcome, and random-effects generic inverse-variance meta-analysis of deterministically linked data was used to pool the results from the four provinces to obtain aggregate estimates of effect. Results In comparison with anti-TNF monotherapy, combination therapy was associated with a significant decrease in treatment ineffectiveness for both CD and UC (CD: adjusted hazard ratio [aHR] 0.77, 95% confidence interval [CI] 0.66–0.90; UC: aHR 0.72, 95% CI 0.62–0.84). Combination therapy was equally effective for adalimumab and infliximab in CD. In UC, azathioprine was superior to methotrexate as the immunomodulatory agent (aHR = 1.52 [95% CI 1.02–2.28]), but not in CD (aHR = 1.22 [95% CI 0.96–1.54]). Conclusion In an analysis of a database of real-world patients with IBD, combination therapy decreased the likelihood of treatment failure in both CD and UC.
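The provincial estimates above were pooled by generic inverse-variance meta-analysis of log hazard ratios. For brevity this sketch shows the fixed-effect version, recovering the SE of log(HR) from the reported CI width; the paper's random-effects pooling would add a between-province variance term (e.g., DerSimonian-Laird). The two inputs in the example are hypothetical province-level numbers, not the study's:

```python
from math import exp, log, sqrt

def pool_inverse_variance(hr_cis, z=1.96):
    """Fixed-effect inverse-variance pooling of hazard ratios. Each input
    is (HR, ci_low, ci_high); the SE of log(HR) is recovered from the CI
    width. Returns (pooled HR, pooled ci_low, pooled ci_high)."""
    num = den = 0.0
    for hr, lo, hi in hr_cis:
        se = (log(hi) - log(lo)) / (2 * z)   # half-width of log-CI / z
        w = 1.0 / se ** 2
        num += w * log(hr)
        den += w
    center = num / den
    half = z / sqrt(den)
    return exp(center), exp(center - half), exp(center + half)

# Illustrative (hypothetical) province-level estimates being pooled.
pooled, lo, hi = pool_inverse_variance([(0.77, 0.66, 0.90), (0.72, 0.62, 0.84)])
```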

