Fecal Diversion for Perianal Crohn Disease in the Era of Biologic Therapies: A Multicenter Study

Author(s): Jeffrey D McCurdy, Jacqueline Reid, Russell Yanofsky, Vigigah Sinnathamby, Edgar Medawar, ...

Abstract Background The natural history of perianal Crohn disease (PCD) after fecal diversion in the era of biologics is poorly understood. We assessed clinical and surgical outcomes after fecal diversion for medically refractory PCD and determined the impact of biologics. Methods We performed a retrospective, multicenter study from 1999 to 2020. Patients who underwent fecal diversion for refractory PCD were stratified by diversion type (ostomy with or without proctectomy). Times to clinical and surgical outcomes were estimated using Kaplan-Meier methods, and the association with biologics was assessed using multivariable Cox proportional hazards models. Results Eighty-two patients from 3 academic institutions underwent a total of 97 fecal diversions: 68 without proctectomy and 29 with proctectomy. Perianal healing occurred more commonly after diversion with proctectomy than after diversion without proctectomy (83% vs 53%; P = 0.021). Among the 68 diversions without proctectomy, with a median post-diversion follow-up of 4.9 years (interquartile range, 1.66-10.19), 37% resulted in sustained healing, 31% were followed by surgery to restore bowel continuity, and 22% were followed by proctectomy. Ostomy-free survival was achieved in 21% of patients. Biologic therapy was independently associated with a lower hazard of proctectomy (hazard ratio, 0.32; 95% confidence interval, 0.11-0.98) and a higher likelihood of surgery to restore bowel continuity (hazard ratio, 3.10; 95% confidence interval, 1.02-9.37), but not with fistula healing. Conclusions In this multicenter study, biologics were associated with bowel restoration and avoidance of proctectomy after fecal diversion without proctectomy for PCD; however, only a minority of patients achieved sustained fistula healing after initial fecal diversion or after bowel restoration. These results highlight the refractory nature of PCD.
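As a rough illustration of the time-to-event machinery this abstract describes (Kaplan-Meier estimates plus a multivariable Cox model), here is a minimal Python sketch using the lifelines library. The toy data and column names (years_to_proctectomy, proctectomy, biologic) are invented for illustration and are not from the study.

```python
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter

# Hypothetical data: one row per diversion without proctectomy
df = pd.DataFrame({
    "years_to_proctectomy": [1.2, 4.9, 10.1, 2.5, 7.3, 0.8, 6.4, 3.1],
    "proctectomy": [1, 0, 0, 1, 1, 1, 0, 0],  # 1 = event, 0 = censored
    "biologic":    [0, 1, 0, 0, 1, 0, 1, 1],  # post-diversion biologic exposure
})

# Kaplan-Meier estimate of proctectomy-free survival
kmf = KaplanMeierFitter()
kmf.fit(df["years_to_proctectomy"], event_observed=df["proctectomy"])
print(kmf.median_survival_time_)

# Cox model: hazard of proctectomy by biologic exposure
cph = CoxPHFitter()
cph.fit(df, duration_col="years_to_proctectomy", event_col="proctectomy")
print(cph.summary[["exp(coef)", "exp(coef) lower 95%", "exp(coef) upper 95%"]])
```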

2021, pp. 000486742110096
Author(s): Oleguer Plana-Ripoll, Patsy Di Prinzio, John J McGrath, Preben B Mortensen, Vera A Morgan

Introduction: An association between schizophrenia and urbanicity has long been observed, with studies in many countries, including several from Denmark, reporting that individuals born/raised in densely populated urban settings have an increased risk of developing schizophrenia compared to those born/raised in rural settings. However, these findings have not been replicated in all studies. In particular, a Western Australian study showed a gradient in the opposite direction which disappeared after adjustment for covariates. Given the different findings for Denmark and Western Australia, our aim was to investigate the relationship between schizophrenia and urbanicity in these two regions to determine which factors may be influencing the relationship. Methods: We used population-based cohorts of children born alive between 1980 and 2001 in Western Australia (N = 428,784) and Denmark (N = 1,357,874). Children were categorised according to the level of urbanicity of their mother's residence at time of birth and followed up through 30 June 2015. Linkage to State-based registers provided information on schizophrenia diagnosis and a range of covariates. Rates of being diagnosed with schizophrenia for each category of urbanicity were estimated using Cox proportional hazards models adjusted for covariates. Results: During follow-up, 1618 (0.4%) children in Western Australia and 11,875 (0.9%) children in Denmark were diagnosed with schizophrenia. In Western Australia, those born in the most remote areas did not experience lower rates of schizophrenia than those born in the most urban areas (hazard ratio = 1.02 [95% confidence interval: 0.81, 1.29]), unlike their Danish counterparts (hazard ratio = 0.62 [95% confidence interval: 0.58, 0.66]). However, when the Western Australian cohort was restricted to children of non-Aboriginal status, results were consistent with the Danish findings (hazard ratio = 0.46 [95% confidence interval: 0.29, 0.72]). Discussion: Our study highlights the potential for disadvantaged subgroups to mask the contribution of urban-related risk factors to risk of schizophrenia and the importance of stratified analysis in such cases.
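The masking effect described in the discussion can be illustrated with a small simulation: a protective "remote birth" gradient holds in the majority subgroup but is hidden in the pooled cohort when a high-risk subgroup is concentrated in remote areas. This is a hypothetical sketch using lifelines; all prevalences and effect sizes are invented.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 20_000
remote = rng.binomial(1, 0.30, n)          # 1 = born in most remote category
p_sub = 0.02 + 0.28 * remote               # high-risk subgroup concentrated remotely
subgroup = rng.binomial(1, p_sub)

# Simulated hazard: remote birth protective, subgroup membership adverse
hazard = 0.005 * np.exp(-0.8 * remote + 1.5 * subgroup)
time = rng.exponential(1.0 / hazard)
cohort = pd.DataFrame({
    "years": np.minimum(time, 35.0),               # administrative censoring
    "schizophrenia": (time < 35.0).astype(int),
    "remote": remote,
})

def remote_hr(df):
    cph = CoxPHFitter()
    cph.fit(df, duration_col="years", event_col="schizophrenia")
    return cph.hazard_ratios_["remote"]

print(remote_hr(cohort))                  # pooled: attenuated toward 1
print(remote_hr(cohort[subgroup == 0]))   # subgroup excluded: near exp(-0.8)
```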


2021, pp. 219256822199630
Author(s): Narihito Nagoshi, Kota Watanabe, Masaya Nakamura, Morio Matsumoto, Nan Li, ...

Study Design: Retrospective multicenter study. Objectives: To evaluate the surgical outcomes of cervical ossification of the posterior longitudinal ligament (OPLL) in diabetes mellitus (DM) patients. Methods: A total of 253 cervical OPLL patients who underwent surgical decompression with or without fixation were registered at 4 institutions in 3 Asian countries. They were followed up for at least 2 years. Demographics, imaging, and surgical information were collected, and cervical Japanese Orthopaedic Association (JOA) scores and the visual analog scale (VAS) for the neck were used for evaluation. Results: Forty-seven patients had DM and showed a higher prevalence of hypertension and cardiovascular disease. Although they presented worse preoperative JOA scores than non-DM patients (10.5 ± 3.1 vs. 11.8 ± 3.2; P = 0.01), they showed comparable neurologic recovery at the final follow-up (13.9 ± 2.9 vs. 14.2 ± 2.6; P = 0.41). No correlation was noted between the hemoglobin A1c level in the DM group and the pre- and postoperative JOA scores. No significant difference was noted in VAS scores between the groups before or after surgery. Regarding perioperative complications, DM patients presented a higher frequency of C5 palsy (14.9% vs. 5.8%; P = 0.04). A similar trend was observed when the surgical procedure was limited to laminoplasty. Conclusions: This is the first multicenter Asian study to evaluate the impact of DM on cervical OPLL patients. Surgical results were favorable even in DM cases, regardless of preoperative hemoglobin A1c levels or operative procedures. However, caution is warranted for the occurrence of C5 palsy after surgery.


Neurosurgery, 2015, Vol 77 (6), pp. 880-887
Author(s): Eric J. Heyer, Joanna L. Mergeche, Shuang Wang, John G. Gaudet, E. Sander Connolly

BACKGROUND: Early cognitive dysfunction (eCD) is a subtle form of neurological injury observed in ∼25% of carotid endarterectomy (CEA) patients. Statin use is associated with a lower incidence of eCD in asymptomatic patients having CEA. OBJECTIVE: To determine whether eCD status is associated with worse long-term survival in patients taking and not taking statins. METHODS: This is a post hoc analysis of a prospective observational study of 585 CEA patients. Patients were evaluated with a battery of neuropsychometric tests before and after surgery. Survival was compared for patients with and without eCD, stratifying by statin use. At enrollment, 366 patients were on statins and 219 were not. Survival was assessed using Kaplan-Meier methods and multivariable Cox proportional hazards models. RESULTS: Age ≥75 years (P = .003), diabetes mellitus (P < .001), cardiac disease (P = .02), and statin use (P = .014) were each significantly associated with survival in univariate log-rank tests. In Cox proportional hazards models adjusting for these univariate factors, eCD was associated with significantly shorter survival among patients not taking statins (hazard ratio, 1.61; 95% confidence interval, 1.09-2.40; P = .018), but not among patients taking statins (hazard ratio, 0.98; 95% confidence interval, 0.59-1.66; P = .95). CONCLUSION: eCD is associated with shorter survival in patients not taking statins. This finding validates eCD as an important neurological outcome and suggests that eCD is a surrogate measure for overall health, comorbidity, and vulnerability to neurological insult.
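A minimal sketch of the univariate screening step this abstract mentions: a log-rank test comparing survival between two groups, via lifelines. The data are synthetic; only the group sizes mirror the abstract.

```python
import numpy as np
from lifelines.statistics import logrank_test

rng = np.random.default_rng(0)
statin = np.r_[np.ones(366, dtype=int), np.zeros(219, dtype=int)]  # 366 vs 219
T = rng.exponential(8.0, 585)      # years of follow-up (synthetic)
E = rng.binomial(1, 0.3, 585)      # 1 = died during follow-up (synthetic)

res = logrank_test(T[statin == 1], T[statin == 0],
                   event_observed_A=E[statin == 1],
                   event_observed_B=E[statin == 0])
print(res.p_value)  # factors passing this screen enter the Cox model
```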


2019, Vol 26 (14), pp. 1510-1518
Author(s): Claudia T Lissåker, Fredrika Norlund, John Wallert, Claes Held, Erik MG Olsson

Background Patients with symptoms of depression and/or anxiety – emotional distress – after a myocardial infarction (MI) have been shown to have worse prognosis and increased healthcare costs. However, whether specific subgroups of patients with emotional distress are more vulnerable is less well established. The purpose of this study was to identify the association between different patterns of emotional distress over time and late cardiovascular and non-cardiovascular mortality among first-MI patients aged <75 years in Sweden. Methods We utilized data on 57,602 consecutive patients with a first-time MI from the national SWEDEHEART registers. Emotional distress was assessed using the anxiety/depression dimension of the European Quality of Life Five Dimensions questionnaire 2 and 12 months after the MI, combined into persistent (emotional distress at both time-points), remittent (emotional distress at the first follow-up only), new (emotional distress at the second follow-up only) or no distress. Data on cardiovascular and non-cardiovascular mortality were obtained through the end of the study. We used multiple imputation to create complete datasets and adjusted Cox proportional hazards models to estimate hazard ratios. Results Patients with persistent emotional distress were more likely to die from cardiovascular (hazard ratio: 1.46, 95% confidence interval: 1.16, 1.84) and non-cardiovascular causes (hazard ratio: 1.54, 95% confidence interval: 1.30, 1.82) than those with no distress. Those with remittent emotional distress were not statistically significantly more likely to die from any cause than those without emotional distress. Discussion Among patients who survive 12 months, persistent, but not remittent, emotional distress was associated with increased cardiovascular and non-cardiovascular mortality. This indicates a need to identify subgroups of individuals with emotional distress who may benefit from further assessment and specific treatment.
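The four distress patterns are a straightforward combination of the two follow-up assessments. A minimal sketch; the function and argument names are illustrative, not from the study's code.

```python
def distress_pattern(at_2_months: bool, at_12_months: bool) -> str:
    """Classify EQ-5D anxiety/depression responses at 2 and 12 months post-MI."""
    if at_2_months and at_12_months:
        return "persistent"
    if at_2_months:
        return "remittent"
    if at_12_months:
        return "new"
    return "no distress"

assert distress_pattern(True, False) == "remittent"
assert distress_pattern(False, True) == "new"
```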


2019, Vol 40 (Supplement_1)
Author(s): K K Lee, A V Ferry, A Anand, F E Strachan, A R Chapman, ...

Abstract Background/Introduction Major disparities between women and men in the diagnosis, management and outcome of acute coronary syndrome are well recognised. Whether sex-specific diagnostic thresholds for myocardial infarction will address these differences is uncertain. Purpose To evaluate the impact of implementing a high-sensitivity cardiac troponin I (hs-cTnI) assay with sex-specific diagnostic thresholds for myocardial infarction in women and men with suspected acute coronary syndrome. Methods In a stepped-wedge, cluster-randomized controlled trial across ten hospitals we evaluated the implementation of an hs-cTnI assay in 48,282 (47% women) consecutive patients with suspected acute coronary syndrome. During a validation phase the hs-cTnI assay results were suppressed and a contemporary cTnI assay with a single threshold was used to guide care. Myocardial injury was defined as any hs-cTnI concentration above the sex-specific 99th centile of 16 ng/L in women and 34 ng/L in men. The primary outcome was myocardial infarction after the initial presentation or cardiovascular death at 1 year. In this prespecified analysis, we evaluated outcomes in men and women before and after implementation of the hs-cTnI assay. Results Use of the hs-cTnI assay with sex-specific thresholds increased the identification of myocardial injury by 42% in women (from 3,521 (16%) to 4,991 (22%)) and by 6% in men (from 5,068 (20%) to 5,369 (21%)). Whilst treatment increased in both sexes, women with myocardial injury remained less likely than men to undergo coronary revascularisation (15% versus 34%), or to receive dual anti-platelet (26% versus 43%), statin (16% versus 26%) or other preventative therapies (P<0.001 for all). The primary outcome occurred in 18% (369/2,072) and 17% (488/2,919) of women with myocardial injury during the validation and implementation phases, respectively (adjusted hazard ratio 1.11, 95% confidence interval 0.92 to 1.33), compared with 18% (370/2,044) and 15% (513/3,325) of men (adjusted hazard ratio 0.85, 95% confidence interval 0.71 to 1.01). Conclusion Use of sex-specific thresholds identified five times as many additional women as men with myocardial injury, such that the proportions of women and men with myocardial injury are now similar. Despite this increase, women received approximately half the number of treatments for coronary artery disease as men, and their outcomes were not improved. Acknowledgement/Funding The British Heart Foundation
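The sex-specific rule in the methods reduces to a simple threshold comparison. A minimal sketch, with the 16 ng/L and 34 ng/L 99th-centile cut-offs taken from the abstract; the function name is invented.

```python
def myocardial_injury(hs_ctni_ng_per_l: float, sex: str) -> bool:
    """Flag myocardial injury using sex-specific 99th-centile hs-cTnI thresholds."""
    threshold = 16.0 if sex == "female" else 34.0   # ng/L, per the abstract
    return hs_ctni_ng_per_l > threshold

# A concentration of 20 ng/L exceeds the female threshold but not the male one,
# which is how the sex-specific assay reclassifies more women than men.
assert myocardial_injury(20.0, "female") and not myocardial_injury(20.0, "male")
```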


2010, Vol 2010, pp. 1-9
Author(s): Jason D. Pole, Cameron A. Mustard, Teresa To, Joseph Beyene, Alexander C. Allen

This study was designed to test the hypothesis that fetal exposure to corticosteroids in the antenatal period is an independent risk factor for the development of asthma in early childhood, with little or no effect in later childhood. A population-based cohort study of all pregnant women who resided in Nova Scotia, Canada, and gave birth to a singleton fetus between 1989 and 1998 was undertaken. After a priori specified exclusions, 80,448 infants were available for analysis. Using linked health care utilization records, incident asthma cases developed after 36 months of age were identified. Extended Cox proportional hazards models were used to estimate hazard ratios while controlling for confounders. Exposure to corticosteroids during pregnancy was associated with a risk of asthma in childhood between 3 and 5 years of age: adjusted hazard ratio of 1.19 (95% confidence interval: 1.03, 1.39), with no association noted after 5 years of age: the adjusted hazard ratio for 5-7 years was 1.06 (95% confidence interval: 0.86, 1.30) and for 8 years or older was 0.74 (95% confidence interval: 0.54, 1.03). Antenatal steroid therapy appears to be an independent risk factor for the development of asthma between 3 and 5 years of age.
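The "extended Cox" analysis, with age-band-specific hazard ratios, can be approximated by episode splitting: each child contributes one row per age band, and exposure enters as band-specific covariates. Below is a hypothetical sketch with simulated data using lifelines' CoxTimeVaryingFitter; for brevity it uses two bands rather than the study's three, and no true effect is simulated, so the fitted hazard ratios land near 1.

```python
import numpy as np
import pandas as pd
from lifelines import CoxTimeVaryingFitter

rng = np.random.default_rng(1)
n = 500
steroid = rng.binomial(1, 0.2, n)         # antenatal corticosteroid exposure
t = 3 + rng.exponential(3.0, n)           # age at asthma onset (synthetic)
event = (t < 10).astype(int)              # censor follow-up at age 10
t = np.minimum(t, 10.0)

# Episode splitting: one row per subject per age band, so the steroid
# effect can differ between the 3-5 and 5+ year bands.
rows = []
for i in range(n):
    for lo, hi in [(3.0, 5.0), (5.0, 10.0)]:
        if t[i] <= lo:
            continue
        stop = min(t[i], hi)
        rows.append({"id": i, "start": lo, "stop": stop,
                     "event": int(event[i] and stop == t[i]),
                     "steroid_3to5": steroid[i] * (lo == 3.0),
                     "steroid_5plus": steroid[i] * (lo == 5.0)})
long_df = pd.DataFrame(rows)

ctv = CoxTimeVaryingFitter()
ctv.fit(long_df, id_col="id", event_col="event",
        start_col="start", stop_col="stop")
print(ctv.summary["exp(coef)"])   # one hazard ratio per age band
```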


2020, Vol 189 (10), pp. 1096-1113
Author(s): Shawn A Zamani, Kathleen M McClain, Barry I Graubard, Linda M Liao, Christian C Abnet, ...

Abstract Recent epidemiologic studies have examined the association of fish consumption with upper gastrointestinal cancer risk, but the associations with n-3 and n-6 polyunsaturated fatty acid (PUFA) subtypes remain unclear. Using the National Institutes of Health–AARP Diet and Health Study (United States, 1995–2011), we prospectively investigated the associations of PUFA subtypes, ratios, and fish with the incidence of head and neck cancer (HNC; n = 2,453), esophageal adenocarcinoma (EA; n = 855), esophageal squamous cell carcinoma (n = 267), and gastric cancer (cardia: n = 603; noncardia: n = 631) among 468,952 participants (median follow-up, 15.5 years). A food frequency questionnaire assessed diet. Multivariable-adjusted hazard ratios were estimated using Cox proportional hazards regression. A Benjamini-Hochberg (BH) procedure was used for false-discovery control. Long-chain n-3 PUFAs were associated with a 20% decreased HNC and EA risk (for HNC, quintile 5 vs. 1 hazard ratio = 0.81, 95% confidence interval: 0.71, 0.92, and BH-adjusted P-trend = 0.001; and for EA, quintile 5 vs. 1 hazard ratio = 0.79, 95% confidence interval: 0.64, 0.98, and BH-adjusted P-trend = 0.1). Similar associations were observed for nonfried fish, but only at high intake. Further, the ratio of long-chain n-3 to n-6 was associated with a decreased HNC and EA risk. No consistent associations were observed for gastric cancer. Our results indicate that dietary long-chain n-3 PUFA and nonfried fish intake are associated with lower HNC and EA risk.
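The Benjamini-Hochberg step named in the methods is available off the shelf in statsmodels. A minimal sketch; the p-values are made up for illustration, one per exposure-outcome trend test.

```python
from statsmodels.stats.multitest import multipletests

p_trend = [0.001, 0.1, 0.03, 0.52, 0.24]   # hypothetical per-test P-trend values
reject, p_adj, _, _ = multipletests(p_trend, alpha=0.05, method="fdr_bh")

# p_adj holds the BH-adjusted p-values; reject marks tests that survive
# false-discovery control at the 5% level.
print(list(zip(p_adj.round(3), reject)))
```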


2019, Vol 14 (7), pp. 994-1001
Author(s): Eli Farhy, Clarissa Jonas Diamantidis, Rebecca M. Doerfler, Wanda J. Fink, Min Zhan, ...

Background and objectives Poor disease recognition may jeopardize the safety of CKD care. We examined safety events and outcomes in patients with CKD piloting a medical-alert accessory intended to improve disease recognition, alongside an observational subcohort from the same population. Design, setting, participants, & measurements We recruited 350 patients with stage 2–5 predialysis CKD. The first (pilot) 108 participants were given a medical-alert accessory (bracelet or necklace) indicating the diagnosis of CKD and displaying a website with safe CKD practices. The subsequent (observation) subcohort (n=242) received usual care. All participants underwent annual visits with ascertainment of patient-reported events (class 1) and actionable safety findings (class 2). Secondary outcomes included 50% GFR reduction, ESKD, and death. Cox proportional hazards models assessed the association of the medical-alert accessory with outcomes. Results Median follow-up of the pilot and observation subcohorts was 52 (interquartile range, 44–63) and 37 (interquartile range, 27–47) months, respectively. The frequency of class 1 and class 2 safety events reported at annual visits did not differ between the pilot and observation groups, with 108.7 versus 100.6 events per 100 patient-visits (P=0.13) and 38.3 versus 41.2 events per 100 patient-visits (P=0.23), respectively. The medical-alert accessory was associated with lower crude and adjusted rates of ESKD versus the observation group (hazard ratio, 0.42; 95% confidence interval, 0.20 to 0.89; and hazard ratio, 0.38; 95% confidence interval, 0.16 to 0.94, respectively). The association of the medical-alert accessory with the composite endpoint of ESKD or 50% reduction in GFR varied over time but appeared to show an early benefit (up to 23 months) with its use. There was no significant difference in the incidence of hospitalization, death, or a composite of all outcomes between medical-alert accessory users and the observation group. Conclusions The medical-alert accessory was not associated with the incidence of safety events but was associated with a lower rate of ESKD relative to usual care.
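The safety-event comparison above uses a simple exposure-adjusted rate. A minimal sketch of the metric; the counts are illustrative, not the study's.

```python
def rate_per_100_patient_visits(n_events: int, n_visits: int) -> float:
    """Events per 100 patient-visits, the unit used in the Results above."""
    return 100.0 * n_events / n_visits

# e.g. 326 class 1 events over 300 annual visits -> ~108.7 per 100 patient-visits
print(round(rate_per_100_patient_visits(326, 300), 1))
```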


2020, Vol 4 (5)
Author(s): Marianna V Papageorge, Benjamin J Resio, Andres F Monsalve, Maureen Canavan, Ranjan Pathak, ...

Abstract Background The Centers for Medicare and Medicaid Services (CMS) developed risk-adjusted "Star Ratings," which serve as a guide for patients to compare hospital quality (1 star = lowest, 5 stars = highest). Although star ratings are not based on surgical care, for many procedures, surgical outcomes are concordant with star ratings. In an effort to address variability in hospital mortality after complex cancer surgery, the use of CMS Star Ratings to identify the safest hospitals was evaluated. Methods Patients older than 65 years of age who underwent complex cancer surgery (lobectomy, colectomy, gastrectomy, esophagectomy, pancreaticoduodenectomy) were evaluated in CMS Medicare Provider Analysis and Review files (2013-2016). The impact of reassignment was modeled by applying adjusted mortality rates of patients treated at 5-star hospitals to those at 1-star hospitals (Peters-Belson method). Results There were 105,823 patients who underwent surgery at 3,146 hospitals. The 90-day mortality decreased with increasing star rating (1 star = 10.4%, 95% confidence interval [CI] = 9.8% to 11.1%; and 5 stars = 6.4%, 95% CI = 6.0% to 6.8%). Reassignment of patients from 1-star to 5-star hospitals (7.8% of patients) was predicted to save the lives of 84 Medicare beneficiaries each year. This impact varied by procedure (colectomy = 47 lives per year; gastrectomy = 5 lives per year). Overall, 2,189 patients would have to change hospitals each year to improve outcomes (26 patients moved to save 1 life). Conclusions Mortality after complex cancer surgery is associated with CMS Star Rating. However, the use of CMS Star Ratings by patients to identify the safest hospitals for cancer surgery would be relatively inefficient and of only modest impact.
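The headline numbers can be loosely re-derived from the abstract's crude mortality rates. Note the paper's Peters-Belson estimate applies adjusted rates, so this back-of-envelope version only approximates the reported 84 lives per year.

```python
# Crude approximation of the Peters-Belson reassignment impact, using only
# figures quoted in the abstract.
patients_moved_per_year = 2189        # 1-star patients reassigned annually
mortality_1_star = 0.104              # 90-day mortality at 1-star hospitals
mortality_5_star = 0.064              # 90-day mortality at 5-star hospitals

lives_saved = patients_moved_per_year * (mortality_1_star - mortality_5_star)
print(round(lives_saved))                        # ~88 crude vs 84 adjusted
print(round(patients_moved_per_year / 84))       # ~26 moves per life saved
```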


Neurosurgery, 2017, Vol 81 (6), pp. 935-948
Author(s): Joan Margaret O’Donnell, Michael Kerin Morgan, Gillian Z Heller

Abstract BACKGROUND The evidence for the risk of seizures following surgery for brain arteriovenous malformations (bAVM) is limited. OBJECTIVE To determine the risk of seizures after discharge from surgery for supratentorial bAVM. METHODS A prospectively collected cohort database of 559 supratentorial bAVM patients (excluding patients in whom surgery was not performed with the primary intention of treating the bAVM) was analyzed. Cox proportional hazards regression models (Cox regression) were generated to assess risk factors, a receiver operating characteristic curve was generated to identify a cut-point for size, and Kaplan-Meier life-table curves were created to identify the cumulative freedom from postoperative seizure. RESULTS A preoperative history of more than 2 seizures and increasing maximum diameter (size, cm) of the bAVM were significantly (P < .01) associated with the development of postoperative seizures and remained significant in the Cox regression (size as a continuous variable: P = .01; hazard ratio: 1.2; 95% confidence interval: 1.0-1.3; more than 2 seizures: P = .02; hazard ratio: 2.1; 95% confidence interval: 1.1-3.8). The cumulative risk of a first seizure after discharge from hospital following resection surgery for all patients with bAVM was 5.8% at 12 mo and 18% at 7 yr. The 7-yr risk of developing postoperative seizures ranged from 11% for patients with bAVM ≤4 cm and 0 to 2 preoperative seizures, to 59% for patients with bAVM >4 cm and >2 preoperative seizures. CONCLUSION The risk of seizures after discharge from hospital following surgery for bAVM increases with the maximum diameter of the bAVM and a patient history of more than 2 preoperative seizures.
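One common way to pick an ROC cut-point like the size threshold above is Youden's J statistic (maximizing sensitivity + specificity - 1). A sketch on synthetic data using scikit-learn; the size distributions are invented, chosen so the recovered cut-point lands near the ≤4 cm / >4 cm split used in the results.

```python
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(2)
# Synthetic bAVM diameters (cm): no-seizure vs postoperative-seizure groups
size = np.r_[rng.normal(3.0, 1.0, 200), rng.normal(5.0, 1.0, 50)]
seizure = np.r_[np.zeros(200), np.ones(50)]

fpr, tpr, thresholds = roc_curve(seizure, size)
cut = thresholds[np.argmax(tpr - fpr)]   # Youden's J: max(sensitivity - FPR)
print(f"optimal size cut-point ~ {cut:.1f} cm")
```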

