Association between warfarin combined with serotonin-modulating antidepressants and increased case fatality in primary intracerebral hemorrhage: a population-based study

2014 ◽  
Vol 120 (6) ◽  
pp. 1358-1363 ◽  
Author(s):  
Pekka Löppönen ◽  
Sami Tetri ◽  
Seppo Juvela ◽  
Juha Huhtakangas ◽  
Pertti Saloheimo ◽  
...  

Object Patients receiving oral anticoagulants run a higher risk of cerebral hemorrhage with a poor outcome. Serotonin-modulating antidepressants (selective serotonin reuptake inhibitors [SSRIs], serotonin-norepinephrine reuptake inhibitors [SNRIs]) are frequently used in combination with warfarin, but it is unclear whether this combination of drugs influences outcome after primary intracerebral hemorrhage (PICH). The authors investigated case fatality in PICH among patients from a defined population who were receiving warfarin alone, with aspirin, or with serotonin-modulating antidepressants. Methods Nine hundred eighty-two subjects with PICH were derived from the population of Northern Ostrobothnia, Finland, for the years 1993–2008, and those with warfarin-associated PICH were eligible for analysis. Their hospital records were reviewed, and medication data were obtained from the national register of prescribed medicines. Kaplan-Meier survival curves were drawn to illustrate cumulative case fatality, and a Cox proportional-hazards analysis was performed to demonstrate predictors of death. Results Of the 176 patients eligible for analysis, 17 had been taking aspirin and 19 had been taking SSRI/SNRI together with warfarin. The 30-day case fatality rates were 50.7%, 58.8%, and 78.9%, respectively, for those taking warfarin alone, with aspirin, or with SSRI/SNRI (p = 0.033, warfarin plus SSRI/SNRI compared with warfarin alone). Warfarin combined with SSRI/SNRI was a significant independent predictor of case fatality (adjusted HR 2.10, 95% CI 1.13–3.92, p = 0.019). Conclusions Concurrent use of warfarin and a serotonin-modulating antidepressant, relative to warfarin alone, seemed to increase the case fatality rate for PICH. This finding should be taken into account if hematoma evacuation is planned.
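Several abstracts in this listing rely on Kaplan-Meier curves to show cumulative case fatality. As a generic illustration (not the study's analysis, and on made-up follow-up data), the estimator itself is short enough to sketch in pure Python:

```python
# Kaplan-Meier estimator — minimal sketch on illustrative data.
def kaplan_meier(times, events):
    """times: follow-up durations; events: 1 = death observed, 0 = censored.
    Returns [(event time, survival probability)] at each distinct event time."""
    n = len(times)
    order = sorted(range(n), key=lambda i: times[i])
    at_risk = n
    surv = 1.0
    curve = []
    i = 0
    while i < n:
        t = times[order[i]]
        deaths = 0
        removed = 0
        # group all subjects sharing this time (events and censorings)
        while i < n and times[order[i]] == t:
            deaths += events[order[i]]
            removed += 1
            i += 1
        if deaths:
            surv *= 1.0 - deaths / at_risk  # step down only at event times
            curve.append((t, surv))
        at_risk -= removed
    return curve
```

Cumulative case fatality, as reported in the abstract, is the complement of this survival curve (1 − S(t)).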

Author(s):  
Majdi Imterat ◽  
Tamar Wainstock ◽  
Eyal Sheiner ◽  
Gali Pariente

Abstract Recent evidence suggests that a long inter-pregnancy interval (IPI: the time between a live birth and the estimated conception of the subsequent pregnancy) poses a risk for adverse short-term perinatal outcomes. We aimed to study the effect of short (<6 months) and long (>60 months) IPI on long-term cardiovascular morbidity of the offspring. A population-based cohort study was performed in which all singleton live births in parturients with at least one previous birth were included. Hospitalizations of the offspring up to 18 years of age involving cardiovascular disease were evaluated according to IPI length. An intermediate interval, between 6 and 60 months, was considered the reference. Kaplan–Meier survival curves were used to compare the cumulative morbidity incidence between the groups. A Cox proportional hazards model was used to control for confounders. During the study period, 161,793 deliveries met the inclusion criteria. Of them, 14.1% (n = 22,851) occurred in parturients following a short IPI, 78.6% (n = 127,146) following an intermediate IPI, and 7.3% (n = 11,796) following a long IPI. Total hospitalizations of the offspring involving cardiovascular morbidity were comparable between the groups. The Kaplan–Meier survival curves demonstrated similar cumulative incidences of cardiovascular morbidity in all groups. In a Cox proportional hazards model, short and long IPI did not appear as independent risk factors for later pediatric cardiovascular morbidity of the offspring (adjusted HR 0.97, 95% CI 0.80–1.18; adjusted HR 1.01, 95% CI 0.83–1.37, for short and long IPI, respectively). In our population, extreme IPIs do not appear to impact long-term cardiovascular hospitalizations of offspring.
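The exposure in this study is a simple cut-point classification of the IPI (short <6 months, long >60 months, intermediate as the reference); a minimal sketch:

```python
def ipi_group(months):
    """Classify an inter-pregnancy interval (in months) using the
    abstract's cut-offs: <6 short, >60 long, otherwise intermediate."""
    if months < 6:
        return "short"
    if months > 60:
        return "long"
    return "intermediate"
```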


2019 ◽  
Vol 12 ◽  
pp. 117863611982797
Author(s):  
Laura H Thompson ◽  
Zoann Nugent ◽  
John L Wylie ◽  
Carla Loeppky ◽  
Paul Van Caeseele ◽  
...  

Objectives: The purpose of this study was to describe and explore potential driving factors of trends in reported chlamydia infections over time in Manitoba, Canada. Methods: Surveillance and laboratory testing data from Manitoba Health, Seniors and Active Living were analysed using SAS v9.4. Kaplan-Meier plots of time from the first to second chlamydia infection were constructed, and Cox proportional hazards regression was used to estimate the risk of second repeat chlamydia infections in males and females. Results: Overall, the number of reported infections mirrored the number of tests conducted. From 2008 to 2014, the number of first infections found among females decreased as the number of first tests conducted among females also decreased. Between 2008 and 2012, the number of repeat tests among females increased and was accompanied by an increase in the number of repeat positive results from 2009 to 2013. From 2008 to 2016, the number of repeat tests and repeat positive results increased steadily among males. Conclusions: Chlamydia infection rates consistently included a subset composed of repeat infections. The number of cases identified appears to mirror testing volumes, calling into question incidence calculations that do not account for testing volumes. Summary Box: 1) What is the current understanding of this subject? Chlamydia incidence is high in Manitoba, particularly among young women and in northern Manitoba. 2) What does this report add to the literature? This report suggests that incidence calculated using case-based surveillance data alone does not provide an accurate estimate of chlamydia incidence in Manitoba and is heavily influenced by testing patterns. 3) What are the implications for public health practice? In general, improving testing rates in clinical practices as well as through the provision of rapid services in non-clinical venues could result in higher screening and treatment rates.
In turn, this could lead to a better understanding of true disease occurrence.
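The time-from-first-to-second-infection analysis described above rests on computing, per person, the gap between the first two positive tests. A hypothetical sketch (the record layout, person IDs, and dates are illustrative assumptions, not Manitoba's actual schema):

```python
from datetime import date

# Hypothetical test records: (person_id, test_date, positive?).
records = [
    ("A", date(2010, 1, 5), True),
    ("A", date(2010, 6, 1), False),
    ("A", date(2011, 3, 2), True),   # repeat infection
    ("B", date(2012, 4, 9), True),   # no second positive observed (censored)
]

def days_to_reinfection(records):
    """Days from first to second positive test per person; None if no
    second positive was observed (a censored interval)."""
    by_person = {}
    for pid, d, pos in sorted(records, key=lambda r: r[1]):
        if pos:
            by_person.setdefault(pid, []).append(d)
    return {pid: (ds[1] - ds[0]).days if len(ds) > 1 else None
            for pid, ds in by_person.items()}
```

Censored intervals (like person B's) are exactly what the Kaplan-Meier plots and Cox regression in the abstract are designed to handle.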


2006 ◽  
Vol 24 (18_suppl) ◽  
pp. 560-560 ◽  
Author(s):  
D. A. Patt ◽  
Z. Duan ◽  
G. Hortobagyi ◽  
S. H. Giordano

560 Background: Adjuvant chemotherapy for breast cancer is associated with the development of secondary AML, but this risk in an older population has not been previously quantified. Methods: We queried data from the Surveillance, Epidemiology, and End Results-Medicare (SEER-Medicare) database for women who were diagnosed with nonmetastatic breast cancer from 1992–1999. We compared the risk of AML in patients with and without adjuvant chemotherapy (C), and by differing C regimens. The primary endpoint was a claim with an inpatient or outpatient diagnosis of AML (ICD-9 codes 205–208). Risk of AML was estimated using the Kaplan-Meier method. Cox proportional hazards models were used to determine factors independently associated with AML. Results: 36,904 patients were included in this observational study, 4,572 who had received adjuvant C and 32,332 who had not. The median patient age was 75.3 years (range 66.0–103.3). The median follow-up was 63 months (range 13–132). Patients who received C were significantly younger, had more advanced stage disease, and had lower comorbidity scores (p<0.001). The unadjusted risk of developing AML at 10 years after any adjuvant C for breast cancer was 1.6% versus 1.1% for women who had not received C. The adjusted HR for AML with adjuvant C was 1.72 (1.16–2.54) compared to women who did not receive C. The HR for radiation was 1.21 (0.86–1.70). The HR was higher with increasing age, but p>0.05. An analysis was performed among women who received C. When compared to other C regimens, anthracycline-based therapy (A) conveyed a significantly higher hazard for AML (HR 2.17, 1.08–4.38), while patients who received A plus taxanes (T) did not have a significant increase in risk (HR 1.29, 0.44–3.82), nor did patients who received T with some other C (HR 1.50, 0.34–6.67). Another significant independent predictor of AML was GCSF use (HR 2.21, 1.14–4.25). In addition, increasing A dose was associated with a higher risk of AML (p<0.05).
Conclusions: There is a small but real increase in AML after adjuvant chemotherapy for breast cancer in older women. The risk appears to be highest from A-based regimens, most of which also contained cyclophosphamide, and may be dose-dependent. T do not appear to increase risk. The role of GCSF should be further explored. No significant financial relationships to disclose.
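The hazard ratios above come from Cox proportional hazards models. For a single binary covariate (e.g. "received adjuvant C" vs not), the model reduces to maximizing a partial likelihood, which can be sketched with Newton-Raphson in pure Python; this is a generic illustration on toy data (all event times distinct, so tie handling does not arise), not the authors' SEER-Medicare analysis:

```python
from math import exp

def cox_hr_binary(times, events, x, iters=25):
    """Newton-Raphson fit of a one-covariate Cox model.
    times: follow-up; events: 1 = event, 0 = censored; x: 0/1 covariate.
    Returns the hazard ratio exp(beta)."""
    n = len(times)
    b = 0.0
    for _ in range(iters):
        grad = hess = 0.0
        for i in range(n):
            if not events[i]:
                continue
            # risk set: everyone still under observation at times[i]
            s0 = s1 = s2 = 0.0
            for j in range(n):
                if times[j] >= times[i]:
                    w = exp(b * x[j])
                    s0 += w
                    s1 += w * x[j]
                    s2 += w * x[j] * x[j]
            grad += x[i] - s1 / s0            # score contribution
            hess += s2 / s0 - (s1 / s0) ** 2  # information contribution
        b += grad / hess                      # Newton step (concave likelihood)
    return exp(b)
```

With exposed and unexposed subjects interleaved in event order, the fitted hazard ratio is finite and greater than 1 when exposed subjects tend to fail earlier.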


2016 ◽  
Vol 26 (6) ◽  
pp. 664-671 ◽  
Author(s):  
T-C. Shen ◽  
C-L. Lin ◽  
C.H. Liao ◽  
C-C. Wei ◽  
F-C. Sung ◽  
...  

Aim. To examine the incidence of asthma in adult patients with major depressive disorder (MDD). Methods. From the National Health Insurance database of Taiwan, we identified 30 169 adult patients who were newly diagnosed with MDD between 2000 and 2010. Individuals without depression were randomly selected at a 4:1 ratio and frequency matched for sex, age and year of diagnosis. Both cohorts were followed up for the occurrence of asthma up to the end of 2011. Adjusted hazard ratios (aHRs) of asthma were estimated using the Cox proportional hazards method. Results. The overall incidence of asthma was 1.91-fold higher in the MDD cohort than in the non-depression cohort (7.55 v. 3.96 per 1000 person-years), with an aHR of 1.66 (95% confidence interval (CI) 1.55–1.78). In both cohorts, the incidence of asthma was higher among patients and controls who were female, older, had comorbidities, or used aspirin or beta-adrenergic receptor blockers. No significant difference was observed in the occurrence of asthma between patients with MDD treated with selective serotonin reuptake inhibitors (SSRIs) and those treated with non-SSRIs (SSRIs to non-SSRIs aHR = 1.03, 95% CI 0.91–1.17). Conclusion. Adult patients with MDD are at a higher risk of asthma than those without depression.
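The 1.91-fold figure is simple rate arithmetic: incidence per 1,000 person-years in each cohort, then their ratio. A sketch with assumed case counts and person-time chosen only to reproduce the reported rates (the study's actual denominators are not given in the abstract):

```python
def incidence_per_1000_py(cases, person_years):
    """Crude incidence rate per 1,000 person-years."""
    return 1000.0 * cases / person_years

# Assumed counts (illustrative), scaled to match the abstract's rates.
mdd_rate = incidence_per_1000_py(755, 100_000)   # 7.55 per 1,000 PY
ref_rate = incidence_per_1000_py(396, 100_000)   # 3.96 per 1,000 PY
crude_irr = mdd_rate / ref_rate                  # crude rate ratio
```

The crude ratio (≈1.91) exceeds the adjusted HR of 1.66 because the Cox model accounts for covariate differences between the cohorts.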


BMJ Open ◽  
2017 ◽  
Vol 7 (9) ◽  
pp. e015101 ◽  
Author(s):  
Hsien-Feng Lin ◽  
Kuan-Fu Liao ◽  
Ching-Mei Chang ◽  
Cheng-Li Lin ◽  
Shih-Wei Lai

Objective. This study aimed to investigate the association between splenectomy and empyema in Taiwan. Methods. A population-based cohort study was conducted using the hospitalisation dataset of the Taiwan National Health Insurance Program. A total of 13 193 subjects aged 20–84 years who newly underwent splenectomy from 2000 to 2010 were enrolled in the splenectomy group, and 52 464 randomly selected subjects without splenectomy were enrolled in the non-splenectomy group. Both groups were matched by sex, age, comorbidities and the index year of undergoing splenectomy. The incidence of empyema at the end of 2011 was calculated. A multivariable Cox proportional hazards regression model was used to estimate the HR with 95% CI of empyema associated with splenectomy and other comorbidities. Results. The overall incidence rate of empyema was 2.56-fold higher in the splenectomy group than in the non-splenectomy group (8.85 vs 3.46 per 1000 person-years). The Kaplan-Meier analysis revealed a higher cumulative incidence of empyema in the splenectomy group than in the non-splenectomy group (6.99% vs 3.37% at the end of follow-up). After adjusting for confounding variables, the adjusted HR of empyema was 2.89 for the splenectomy group compared with the non-splenectomy group. Further analysis revealed that the HR of empyema was 4.52 for subjects with splenectomy alone. Conclusion. The incidence rate ratio between the splenectomy and non-splenectomy groups decreased from 2.87 in the first 5 years of follow-up to 1.73 in the period thereafter. Future studies are required to confirm whether a longer follow-up period would further reduce this average ratio. For the splenectomy group, the overall HR of developing empyema was 2.89 after adjusting for age, sex and comorbidities identified from the previous literature. The risk of empyema following splenectomy remains high even in the absence of these comorbidities.


2018 ◽  
Vol 14 (1) ◽  
pp. 61-68 ◽  
Author(s):  
Maria Carlsson ◽  
Tom Wilsgaard ◽  
Stein Harald Johnsen ◽  
Liv-Hege Johnsen ◽  
Maja-Lisa Løchen ◽  
...  

Background Studies on the relationship between temporal trends in risk factors and incidence rates of intracerebral hemorrhage are scarce. Aims To analyze temporal trends in risk factors and incidence rates of intracerebral hemorrhage using individual data from a population-based study. Methods We included 28,167 participants of the Tromsø Study enrolled between 1994 and 2008. First-ever intracerebral hemorrhages were registered through 31 December 2013. Hazard ratios (HRs) for intracerebral hemorrhage were analyzed by Cox proportional hazards models, risk factor levels over time by generalized estimating equations, and incidence rate ratios (IRR) by Poisson regression. Results We registered 219 intracerebral hemorrhages. Age, male sex, systolic blood pressure (BP), diastolic BP, and hypertension were associated with intracerebral hemorrhage. Hypertension was more strongly associated with non-lobar intracerebral hemorrhage (HR 5.08, 95% CI 2.86–9.01) than lobar intracerebral hemorrhage (HR 1.91, 95% CI 1.12–3.25). In women, incidence decreased significantly (IRR 0.46, 95% CI 0.23–0.90), driven by a decrease in non-lobar intracerebral hemorrhage. Incidence rates in men remained stable (IRR 1.27, 95% CI 0.69–2.31). BP levels were lower and decreased more steeply in women than in men. The majority with hypertension were untreated, and a high proportion of those treated did not reach treatment goals. Conclusions We observed a significant decrease in intracerebral hemorrhage incidence in women, but not in men. A steeper BP decrease in women may have contributed to the diverging trends. The high proportion of untreated and sub-optimally treated hypertension calls for improved strategies for prevention of intracerebral hemorrhage.
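The incidence rate ratios here are estimated by Poisson regression; in the crude two-group case this is equivalent to the ratio of rates, with a Wald confidence interval on the log scale. A generic sketch with illustrative counts (not the Tromsø data):

```python
from math import exp, log, sqrt

def poisson_irr(cases1, py1, cases0, py0):
    """Crude incidence rate ratio (group 1 vs group 0) with a 95% Wald CI.
    cases*: event counts; py*: person-years at risk."""
    irr = (cases1 / py1) / (cases0 / py0)
    se = sqrt(1 / cases1 + 1 / cases0)  # SE of log(IRR)
    lo = exp(log(irr) - 1.96 * se)
    hi = exp(log(irr) + 1.96 * se)
    return irr, lo, hi
```

A CI that spans 1 (as for the men's IRR of 1.27, 95% CI 0.69–2.31 above) indicates no statistically significant trend.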


10.2196/15911 ◽  
2020 ◽  
Vol 9 (3) ◽  
pp. e15911
Author(s):  
Ahmed Abdulaal ◽  
Chanpreet Arhi ◽  
Paul Ziprin

Background The United Kingdom has lower survival figures for all types of cancers compared to many European countries despite similar national expenditures on health. This discrepancy may be linked to long diagnostic and treatment delays. Objective The aim of this study was to determine whether delays experienced by patients with colorectal cancer (CRC) affect their survival. Methods This observational study utilized the Somerset Cancer Register to identify patients with CRC who were diagnosed on the basis of positive histology findings. The effects of diagnostic and treatment delays and their subdivisions on outcomes were investigated using Cox proportional hazards regression. Kaplan-Meier plots were used to illustrate group differences. Results A total of 648 patients (375 males, 57.9%) were included in this study. We found that neither diagnostic delay nor treatment delay had an effect on the overall survival in patients with CRC (χ²(3)=1.5, P=.68; χ²(3)=0.6, P=.90, respectively). Similarly, treatment delays did not affect the outcomes in patients with CRC (χ²(3)=5.5, P=.14). The initial Cox regression analysis showed that patients with CRC who had short diagnostic delays were less likely to die than those experiencing long delays (hazard ratio 0.165, 95% CI 0.044-0.616; P=.007). However, this result was nonsignificant following sensitivity analysis. Conclusions Diagnostic and treatment delays had no effect on the survival of this cohort of patients with CRC. The utility of the 2-week wait referral system is therefore questioned. Timely screening with subsequent early referral and access to diagnostics may have a more beneficial effect.
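Chi-square statistics like those above come from log-rank-style comparisons of survival between groups. A minimal two-group log-rank test in pure Python (toy data, not the Somerset register; the study's tests compare more than two delay groups, hence their 3 degrees of freedom):

```python
def logrank_chi2(times, events, group):
    """Two-group log-rank statistic (1 df). group: 0/1 labels."""
    data = sorted(zip(times, events, group))
    event_times = sorted({t for t, e, g in data if e})
    O1 = E1 = V = 0.0
    for t in event_times:
        at_risk = [(ti, ei, gi) for ti, ei, gi in data if ti >= t]
        n = len(at_risk)
        n1 = sum(1 for ti, ei, gi in at_risk if gi == 1)
        d = sum(ei for ti, ei, gi in at_risk if ti == t)       # total events at t
        d1 = sum(ei for ti, ei, gi in at_risk if ti == t and gi == 1)
        O1 += d1                       # observed events in group 1
        E1 += d * n1 / n               # expected under no group difference
        if n > 1:
            V += d * (n1 / n) * (1 - n1 / n) * (n - d) / (n - 1)
    return (O1 - E1) ** 2 / V
```

Compared against the χ² distribution with 1 df, a statistic above 3.84 is significant at the 0.05 level.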


2021 ◽  
Author(s):  
Yuxin Ding ◽  
Runyi Jiang ◽  
Yuhong Chen ◽  
Jing Jing ◽  
Xiaoshuang Yang ◽  
...  

Abstract Background Previous studies have reported poorer survival in head and neck melanoma (HNM) than in body melanoma (BM). Individualized tools to predict the prognosis for patients with HNM or BM remain insufficient. Objectives To compare the characteristics of HNM and BM, and to establish and validate nomograms for predicting the 3-, 5- and 10-year survival of patients with HNM or BM. Methods We studied patients with HNM or BM from 2004 to 2015 in the Surveillance, Epidemiology, and End Results (SEER) database. The HNM group and BM group were randomly divided into training and validation cohorts. We performed the Kaplan-Meier method for survival analysis, and used multivariate Cox proportional hazards models to identify independent prognostic factors. Nomograms for HNM patients or BM patients were developed via the rms package, and were evaluated by the concordance index (C-index), the area under the receiver operating characteristic (ROC) curve (AUC) and calibration plots. Results Of the 70,605 patients acquired, 21% (n = 15,071) had HNM and 79% (n = 55,534) had BM. The HNM group contained more older patients, male patients, and lentigo maligna melanoma, and more frequently had thicker tumors and metastases than the BM group. The 5-year cancer-specific survival (CSS) and overall survival (OS) rates were 88.1 ± 0.3% and 74.4 ± 0.4% in the HNM group and 92.5 ± 0.1% and 85.8 ± 0.2% in the BM group, respectively. Eight independent prognostic factors (age, sex, histology, thickness, ulceration, stage, metastases, and surgery) were identified and used to construct nomograms for HNM patients or BM patients.
The performance of the nomograms was excellent: the C-indices of the CSS prediction for HNM patients and BM patients in the training cohort were 0.839 and 0.895, respectively; in the validation cohort, they were 0.848 and 0.888, respectively; the AUCs for the 3-, 5- and 10-year CSS rates of HNM were 0.871, 0.865 and 0.854 (training) and 0.881, 0.879 and 0.861 (validation), respectively; for BM, the AUCs were 0.924, 0.918 and 0.901 (training) and 0.916, 0.908 and 0.893 (validation), respectively; and the calibration plots showed great consistency. Conclusions The characteristics of HNM and BM are heterogeneous, and we constructed and validated specific nomograms as practical prognostic tools for patients with HNM or BM.
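The concordance index used to validate these nomograms is the fraction of usable patient pairs in which the model assigns the higher risk score to the patient who fails earlier (ties in score count half). A minimal sketch of Harrell's C on toy data, not the SEER cohorts:

```python
def c_index(times, events, risk_scores):
    """Harrell's concordance index for right-censored survival data.
    A pair (i, j) is usable when i has an observed event before j's time."""
    concordant = ties = usable = 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            if events[i] and times[i] < times[j]:
                usable += 1
                if risk_scores[i] > risk_scores[j]:
                    concordant += 1      # model ranks earlier failure as riskier
                elif risk_scores[i] == risk_scores[j]:
                    ties += 1
    return (concordant + 0.5 * ties) / usable
```

A C-index of 0.5 corresponds to random ranking and 1.0 to perfect ranking, so the 0.84-0.90 values reported above indicate strong discrimination.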


Perfusion ◽  
2020 ◽  
Vol 35 (8) ◽  
pp. 847-852
Author(s):  
Wei-Syun Hu ◽  
Cheng-Li Lin

Objective: We sought to characterize the association between atrial fibrillation and irritable bowel syndrome. Methods: We identified 11,642 cases (atrial fibrillation) and 46,487 sex-, age-, and index year–matched controls (non-atrial fibrillation) from the Longitudinal Health Insurance Database. Kaplan–Meier, Cox proportional hazards regression and competing risk analysis methods were used to assess the association of atrial fibrillation with the outcome of irritable bowel syndrome. Results: After adjustment for gender, age, comorbidities and medications, patients with atrial fibrillation had a significantly higher risk (adjusted hazard ratio = 1.12, p < 0.01) of developing irritable bowel syndrome than patients without atrial fibrillation. Compared to participants without atrial fibrillation, those with atrial fibrillation had 1.13-fold (p < 0.05) and 1.11-fold (p < 0.05) risks of irritable bowel syndrome in the female and male subgroups, respectively. Among subjects aged ≥65 years, those with atrial fibrillation had a 1.11-fold risk of irritable bowel syndrome relative to the non-atrial fibrillation cohort (p < 0.01). Among participants with any one of the comorbidities, those with atrial fibrillation had a 1.10-fold risk of irritable bowel syndrome relative to the non-atrial fibrillation cohort (p < 0.05). Conclusion: We report that the presence of atrial fibrillation is associated with a greater incidence of irritable bowel syndrome, and the association is stronger among females, those aged 65 years or above, and those with comorbidities.


2018 ◽  
Vol 36 (4_suppl) ◽  
pp. 791-791
Author(s):  
Rahul Neal Prasad ◽  
Joshua Elson ◽  
Jordan Kharofa

791 Background: Chemoradiation allows for organ preservation in patients with anal cancer, but patients with large tumors (T3/T4) continue to have high rates of locoregional recurrence. With conformal radiation techniques, there is interest in dose escalation to improve local control for large tumors. Methods: The National Cancer Database (NCDB) was used to identify patients with anal cancer from 2004-2013 with tumors > 5 cm in size. Adult patients with T3 or T4 squamous cell carcinoma who received definitive chemoradiation were included. Patients with prior resection were excluded. Higher dose was defined as ≥ 5940 cGy. Statistical analyses were performed using logistic regression, Kaplan-Meier, and Cox proportional hazards for overall survival (OS). Results: In total, 1349 patients were analyzed, with 412 (30.5%) receiving higher dose radiation therapy (RT). Dose in the higher group ranged from 5940 to 7000 cGy. 5-year OS was 58% and 60% for higher and lower dose RT, respectively. On univariate analysis, higher dose RT (HR 0.998, CI 0.805 - 1.239, p = 0.9887) was not associated with a change in OS, but older age (HR 1.484, CI 1.193 - 1.844, p = 0.0004), male sex (HR 1.660, CI 1.355 - 2.033, p < 0.0001), comorbidities (HR 1.496, CI 1.183 - 1.893, p = 0.0008), and longer RT duration (HR 1.248, CI 1.016 - 1.533, p = 0.0347) were significantly associated with decreased OS. The results of multivariate analysis are shown in the Table. Conclusions: There was no observed difference in OS for dose escalation of anal cancers > 5 cm in this population-based analysis, but differences in local control cannot be assessed through the NCDB. Male, elderly, and comorbid patients were particularly high-risk populations with poor chemoradiation survival outcomes. Reducing treatment breaks is important for improving outcomes.
Whether dose escalation of large tumors may improve local control and colostomy-free survival remains an important question and is the subject of ongoing trials. [Table: see text]

