NELA.03 Laparoscopy in Emergency Surgery is Associated with Lower Risk-Adjusted Mortality

2021 ◽  
Vol 108 (Supplement_7) ◽  
Author(s):  
Alexander Darbyshire ◽  
Ina Kostakis ◽  
Phil Pucher ◽  
David Prytherch ◽  
Simon Toh ◽  
...  

Abstract Aims To compare risk-adjusted outcomes after emergency intestinal surgery by operative approach. Methods Data from December 2013 to November 2018 were retrieved from the NELA national database. Complete P-POSSUM data were available for 102,154 patients, and 47,667 also had a NELA score. AUROC curves were calculated to assess model discrimination (c-statistic), and calibration plots were used to visualise agreement between predicted and observed mortality. Standardised Mortality Ratios (SMRs) were calculated for the total cohort and by operative approach. Operative approach was categorised as laparotomy, completed laparoscopically, converted to open, or laparoscopic-assisted. Results Both P-POSSUM and the NELA score displayed good discrimination for the total cohort and by operative approach (P-POSSUM c-statistic = 0.801-0.815; NELA score c-statistic = 0.851-0.880). Calibration plots demonstrated that P-POSSUM was highly accurate up to 20% mortality, after which it substantially over-predicted mortality. The NELA score was highly accurate up to 25% mortality, after which it slightly under-predicted. Using P-POSSUM, the overall SMR of observed vs expected deaths was 0.77, with 0.80 for laparotomy and 0.46 for laparoscopy. Restricting analysis to cases with <10% predicted mortality (n = 65,000), the overall SMR improved (0.90) and was considerably lower for cases completed laparoscopically (0.41) than for open surgery (0.97). Restricting to NELA-predicted mortality of <10% (n = 27,000) gave a similar overall SMR (0.96), with cases completed laparoscopically again displaying a much lower SMR (0.61) than laparotomy (1.0). Conclusions SMRs calculated using P-POSSUM and the NELA score demonstrate that laparoscopy is associated with a significantly lower observed vs expected mortality rate than laparotomy. This raises the questions of why laparoscopy is associated with reduced mortality and whether operative approach should be included in risk models.
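As a point of reference for how such figures are derived, the following is a minimal sketch (not the NELA analysis code) of computing a c-statistic and an SMR from per-patient predicted risks; the arrays stand in for P-POSSUM or NELA predictions and observed deaths.

```python
# Minimal sketch: c-statistic and standardised mortality ratio (SMR)
# from hypothetical per-patient predicted mortality risks.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
predicted_risk = rng.uniform(0.01, 0.40, size=1000)   # model-predicted mortality risks
died = rng.random(1000) < predicted_risk              # simulated observed deaths

c_statistic = roc_auc_score(died, predicted_risk)     # discrimination
smr = died.sum() / predicted_risk.sum()               # observed / expected deaths

print(f"c-statistic = {c_statistic:.3f}, SMR = {smr:.2f}")  # SMR < 1 => fewer deaths than predicted
```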

Author(s):  
Jason H Wasfy ◽  
Kenneth Rosenfield ◽  
Daniel Shivapour ◽  
Katya Zelevinsky ◽  
Rahul Sakhuja ◽  
...  

Background: The Affordable Care Act (ACA) creates incentives within Medicare for hospitals that minimize readmissions shortly after discharge. Percutaneous coronary intervention (PCI) has among the highest rates of 30-day all-cause readmission. We developed and validated a prediction model to assist clinicians in identifying patients at high risk for 30-day readmission after discharge for PCI. Methods: We included all PCI admissions in non-federal hospitals in Massachusetts between October 1, 2005 and September 30, 2008. Readmissions within 30 days of discharge were identified via linkage with Massachusetts inpatient claims files. Within a 2/3 random sample, we developed 2 separate multivariable models to predict all-cause 30-day readmission: one incorporating only variables known prior to cardiac catheterization (pre-PCI model), and a second incorporating variables known at discharge, including PCI-related complications and discharge disposition (discharge model). To facilitate clinical use of the model via a web-based application, less influential variables were eliminated via stepwise selection while retaining 95% of the predicted variability of the complete models. Models were validated within the remaining 1/3 sample, and model discrimination and calibration were assessed. Readmissions for staged PCIs were not counted as readmissions. Results: Of 36,060 PCI patients surviving to discharge, 3,760 (10.4%) were readmitted within 30 days. In the pre-PCI model, significant independent predictors of readmission included history of heart failure as well as heart failure status at the time of PCI, gender, chronic lung disease, worse renal function, insurance status, admission status, previous CABG, peripheral vascular disease, presence of cardiogenic shock, and age. Additional predictors of readmission in the discharge model included length of stay, bleeding or vascular complications, use of drug-eluting stents, previous PCI, diabetes status, race, discharge location, and beta-blocker prescription at discharge (Figure 1). Model discrimination was moderate for the pre-PCI model (C statistic = 0.67) and not substantially improved by the addition of post-PCI variables (C statistic = 0.69). Both models were well calibrated within the validation dataset (Hosmer-Lemeshow goodness of fit P = NS for both). Conclusions: These validated models, developed in a large and broadly generalizable population, can be used to identify patients at high risk for readmission after PCI. Such a model could be used to target high-risk patients for interventions to prevent readmission.
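For orientation, a hedged sketch of the development/validation split described above, using simulated data and placeholder covariates rather than the Massachusetts registry variables:

```python
# Illustrative sketch only: a 2/3 development / 1/3 validation split with a
# logistic readmission model and a validation C-statistic.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 5000
X = pd.DataFrame({
    "age": rng.normal(65, 10, n),                 # placeholder covariates
    "heart_failure": rng.integers(0, 2, n),
    "chronic_lung_disease": rng.integers(0, 2, n),
})
logit = -3.5 + 0.02 * X["age"] + 0.6 * X["heart_failure"] + 0.4 * X["chronic_lung_disease"]
y = rng.random(n) < 1 / (1 + np.exp(-logit))      # simulated 30-day readmissions

X_dev, X_val, y_dev, y_val = train_test_split(X, y, test_size=1/3, random_state=1)
model = LogisticRegression(max_iter=1000).fit(X_dev, y_dev)
print("validation C-statistic:",
      round(roc_auc_score(y_val, model.predict_proba(X_val)[:, 1]), 3))
```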


2017 ◽  
Vol 46 (4) ◽  
pp. 276-284 ◽  
Author(s):  
Pierre-Jean Saulnier ◽  
Brad P. Dieter ◽  
Stephanie K. Tanamas ◽  
Sterling M. McPherson ◽  
Kevin M. Wheelock ◽  
...  

Background: Serum amyloid A (SAA) induces inflammation and apoptosis in kidney cells and is thought to contribute to the pathologic changes associated with diabetic kidney disease (DKD). Higher serum SAA concentrations were previously associated with increased risk of end-stage renal disease (ESRD) and death in persons with type 2 diabetes and advanced DKD. We explored the prognostic value of SAA in American Indians with type 2 diabetes without DKD or with early DKD. Methods: SAA concentration was measured in serum samples obtained at the start of follow-up. Multivariate proportional hazards models were employed to examine the magnitude of the risk of ESRD or death across tertiles of SAA concentration after adjustment for traditional risk factors. The C statistic was used to assess the additional predictive value of SAA relative to traditional risk factors. Results: Of 256 participants (mean ± SD glomerular filtration rate [iothalamate] = 148 ± 45 mL/min; median [interquartile range] urine albumin/creatinine = 39 [14-221] mg/g), 76 developed ESRD and 125 died during median follow-up periods of 15.2 and 15.7 years, respectively. In multivariable proportional hazards regression, participants in the 2 highest SAA tertiles together exhibited a 53% lower risk of ESRD (hazard ratio [HR] 0.47, 95% CI 0.29-0.78) and a 30% lower risk of death (HR 0.70, 95% CI 0.48-1.02) compared with participants in the lowest SAA tertile, although the lower risk of death was not statistically significant. Addition of SAA to the ESRD model increased the C statistic from 0.814 to 0.815 (p = 0.005). Conclusions: Higher circulating SAA concentration is associated with a reduced risk of ESRD in American Indians with type 2 diabetes.
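A rough illustration of tertile-based proportional hazards modelling of this kind, using simulated data and the lifelines library rather than the study's code, is shown below; the concentrations, follow-up times, and covariates are invented.

```python
# Hedged sketch: hazard ratios for the middle/upper tertiles of a biomarker
# versus the lowest tertile, with a concordance (C) statistic.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(2)
n = 300
saa = rng.lognormal(mean=1.0, sigma=0.8, size=n)          # simulated SAA concentrations
tertile = pd.qcut(saa, 3, labels=["T1", "T2", "T3"])

# simulate longer event-free times (lower hazard) in the upper tertiles,
# loosely mirroring the direction of the reported association
time = rng.exponential(15, n) * np.where(tertile == "T1", 1.0, 1.6)

df = pd.DataFrame({
    "time": time,                                          # follow-up in years
    "event": rng.integers(0, 2, n),                        # ESRD or death indicator
    "T2": (tertile == "T2").astype(int),                   # lowest tertile is reference
    "T3": (tertile == "T3").astype(int),
    "age": rng.normal(45, 12, n),                          # example adjustment covariate
})

cph = CoxPHFitter().fit(df, duration_col="time", event_col="event")
print(cph.summary[["exp(coef)", "exp(coef) lower 95%", "exp(coef) upper 95%"]])
print("C statistic:", round(cph.concordance_index_, 3))
```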


Author(s):  
Theodoros Evgeniou ◽  
Mathilde Fekom ◽  
Anton Ovchinnikov ◽  
Raphael Porcher ◽  
Camille Pouchol ◽  
...  

Background: In early May 2020, following social distancing measures taken in response to COVID-19, governments considered relaxing lockdowns. We combined individual clinical risk predictions with epidemic modelling to examine simulations of risk-based differential isolation and exit policies. Methods: We extended a standard susceptible-exposed-infected-removed (SEIR) model to account for personalised predictions of severity, defined by the risk of an individual needing intensive care if infected, and simulated differential isolation policies using COVID-19 data and estimates for France as of early May 2020. We also performed sensitivity analyses. The framework may be used with other epidemic models, with other risk predictions, and for other epidemic outbreaks. Findings: Simulations indicated that, all else equal, an exit policy informed by clinical risk predictions starting on May 11, as planned by the French government, could allow restrictions to be relaxed immediately for an extra 10% (6,700,000 people) or more of the lowest-risk population, and consequently allow restrictions on the remaining population to be relaxed significantly faster, while staying within current ICU capacity. Similar exit policies without risk predictions would exceed ICU capacity severalfold. Sensitivity analyses showed that as the assumed percentage of severe patients in the population decreased, the prediction model's discrimination improved, or ICU capacity increased, policies based on risk models had a greater impact on the results of the epidemic simulations. At the same time, sensitivity analyses also showed that differential isolation policies require the higher-risk individuals to comply with the recommended restrictions. In general, our simulations demonstrated that risk prediction models could improve policy effectiveness, keeping everything else constant. Interpretation: Clinical risk prediction models can inform new personalised isolation and exit policies, which may lead to both safer and faster outcomes than can be achieved without such prediction models.
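A heavily simplified two-group SEIR sketch in the spirit of the framework, not the authors' model: the low-risk group exits lockdown while the high-risk group keeps a reduced contact rate, and ICU demand is approximated from assumed group-specific severities. All parameter values are assumptions for illustration.

```python
# Simplified two-group SEIR sketch with differential contact rates.
import numpy as np
from scipy.integrate import odeint

beta, sigma, gamma = 0.3, 1/5, 1/10        # assumed transmission, incubation, recovery rates
N_low, N_high = 55e6, 12e6                 # hypothetical population split by predicted risk
contact = {"low": 1.0, "high": 0.3}        # high-risk group stays largely isolated
p_icu = {"low": 0.005, "high": 0.05}       # assumed probability of needing ICU if infected

def deriv(y, t):
    S1, E1, I1, R1, S2, E2, I2, R2 = y
    force = beta * (contact["low"] * I1 + contact["high"] * I2) / (N_low + N_high)
    dS1, dS2 = -contact["low"] * force * S1, -contact["high"] * force * S2
    dE1, dE2 = -dS1 - sigma * E1, -dS2 - sigma * E2
    dI1, dI2 = sigma * E1 - gamma * I1, sigma * E2 - gamma * I2
    dR1, dR2 = gamma * I1, gamma * I2
    return [dS1, dE1, dI1, dR1, dS2, dE2, dI2, dR2]

y0 = [N_low - 1e4, 5e3, 5e3, 0, N_high - 1e3, 5e2, 5e2, 0]
t = np.linspace(0, 365, 366)
sol = odeint(deriv, y0, t)

# approximate ICU demand as severity-weighted prevalent infections
icu_demand = p_icu["low"] * sol[:, 2] + p_icu["high"] * sol[:, 6]
print("peak ICU-level infections:", int(icu_demand.max()))
```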


2020 ◽  
Vol 23 (5) ◽  
pp. E668-E672
Author(s):  
Tiao Lv ◽  
Yinghong Zhang ◽  
Wen Zhang ◽  
Liu Hu ◽  
Guozhen Liu ◽  
...  

Objective: To explore the value of a rapid risk predictive model for readmission of patients after CABG in China. Methods: The rapid predictive model of readmission risk was translated into Chinese and then validated with data from 758 patients who underwent CABG at Wuhan Asian Heart Hospital from January 2018 to June 2019. Discrimination was assessed by the area under the ROC curve (AUC), and calibration by the Hosmer-Lemeshow test. Results: The rapid risk predictive model for readmission showed good discrimination and calibration in Chinese CABG patients (AUC/c-statistic: 0.704, 95% CI: 0.614-0.794; Hosmer-Lemeshow test: P = .955). Conclusion: The rapid readmission risk predictive model can be used in Chinese CABG patients soon after admission.
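For readers unfamiliar with the Hosmer-Lemeshow procedure, a minimal sketch on simulated data (not the validation dataset) follows; it groups patients into deciles of predicted risk and compares observed with expected readmissions.

```python
# Sketch of a Hosmer-Lemeshow calibration test on hypothetical data.
import numpy as np
import pandas as pd
from scipy.stats import chi2

rng = np.random.default_rng(3)
pred = rng.uniform(0.01, 0.5, 758)                 # predicted readmission risks
obs = (rng.random(758) < pred).astype(int)         # simulated observed readmissions

df = pd.DataFrame({"pred": pred, "obs": obs})
df["decile"] = pd.qcut(df["pred"], 10, labels=False)
g = df.groupby("decile").agg(n=("obs", "size"), o=("obs", "sum"), e=("pred", "sum"))

# Hosmer-Lemeshow statistic: sum over groups of (O - E)^2 / (E * (1 - E/n))
hl = (((g["o"] - g["e"]) ** 2) / (g["e"] * (1 - g["e"] / g["n"]))).sum()
p_value = chi2.sf(hl, df=len(g) - 2)               # large p suggests adequate calibration
print(f"Hosmer-Lemeshow chi2 = {hl:.2f}, p = {p_value:.3f}")
```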


Author(s):  
Berend R. Beumer ◽  
Kosei Takagi ◽  
Bastiaan Vervoort ◽  
Stefan Buettner ◽  
Yuzo Umeda ◽  
...  

Abstract Background This study aimed to assess the performance of the pre- and postoperative early recurrence after surgery for liver tumor (ERASL) models at external validation. Prediction of early hepatocellular carcinoma (HCC) recurrence after resection is important for individualized surgical management. Recently, the preoperative (ERASL-pre) and postoperative (ERASL-post) risk models were proposed based on patients from Hong Kong. These models showed good performance but have not yet been validated by an independent research group. Methods This international cohort study included 279 patients from the Netherlands and 392 patients from Japan. All patients underwent first-time resection and had pathologically confirmed HCC. Performance was assessed in terms of discrimination (concordance [C] statistic) and calibration (correspondence between observed and predicted risk), with recalibration in a Weibull model. Results The discriminatory power of both models was lower in the Netherlands than in Japan (C statistic, 0.57 [95% confidence interval {CI} 0.52–0.62] vs 0.69 [95% CI 0.65–0.73] for the ERASL-pre model and 0.62 [95% CI 0.57–0.67] vs 0.70 [95% CI 0.66–0.74] for the ERASL-post model), whereas the prognostic profiles of the two cohorts were similar. The predictions of the ERASL models were systematically too optimistic in both cohorts. Recalibrated ERASL models improved local applicability for both cohorts. Conclusions Discrimination of the ERASL models was poorer in the Western cohort than in the Japanese cohort, where performance was good. Recalibration improved the accuracy of predictions. However, a model that explains the East–West difference, or one tailored to Western patients, still needs to be developed.
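As a hedged illustration of what external recalibration involves, the sketch below uses a simplified logistic analogue (regressing the outcome on the original model's linear predictor) rather than the Weibull recalibration actually performed in the study; all inputs are simulated.

```python
# Simplified logistic analogue of recalibration at external validation:
# estimate a calibration intercept and slope, then recalibrate predictions.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 400
lp = rng.normal(-1.0, 1.0, n)                      # linear predictor from the original model
true_lp = 0.7 * lp - 0.8                           # external cohort: slope < 1, lower baseline risk
y = (rng.random(n) < 1 / (1 + np.exp(-true_lp))).astype(int)

fit = sm.GLM(y, sm.add_constant(lp), family=sm.families.Binomial()).fit()
intercept, slope = fit.params                      # calibration-in-the-large and calibration slope
print(f"calibration intercept = {intercept:.2f}, slope = {slope:.2f}")

# recalibrated predictions for local use
recalibrated_risk = 1 / (1 + np.exp(-(intercept + slope * lp)))
```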


2020 ◽  
Vol 7 ◽  
pp. 205435812095328
Author(s):  
Paul E. Ronksley ◽  
James P. Wick ◽  
Meghan J. Elliott ◽  
Robert G. Weaver ◽  
Brenda R. Hemmelgarn ◽  
...  

Background: Approximately 10% of emergency department (ED) visits among dialysis patients are for conditions that could potentially be managed in outpatient settings, such as hyperkalemia. Objective: Using population-based data, we derived and internally validated a risk score to identify hemodialysis patients at increased risk of hyperkalemia-related ED events. Design: Retrospective cohort study. Setting: Ten in-center hemodialysis sites in southern Alberta, Canada. Patients: All maintenance hemodialysis patients (≥18 years) between March 2009 and March 2017. Measurements: Predictors of hyperkalemia-related ED events included patient demographics, comorbidities, health-system use, laboratory measurements, and dialysis information. The outcome of interest (hyperkalemia-related ED events) was defined by International Classification of Diseases (10th Revision; ICD-10) codes and/or serum potassium [K+] ≥6 mmol/L. Methods: Bootstrapped logistic regression was used to derive and internally validate a model of important predictors of hyperkalemia-related ED events. A point system was created based on regression coefficients. Model discrimination was assessed by an optimism-adjusted C-statistic and calibration by deciles of risk and calibration slope. Results: Of the 1533 maintenance hemodialysis patients in our cohort, 331 (21.6%) presented to the ED with 615 hyperkalemia-related ED events. A 9-point scale for risk of a hyperkalemia-related ED event was created with points assigned to 5 strong predictors based on their regression coefficients: ≥1 laboratory measurement of serum K+ ≥6 mmol/L in the prior 6 months (3 points); ≥1 Hemoglobin A1C [HbA1C] measurement ≥8% in the prior 12 months (1 point); mean ultrafiltration of ≥10 mL/kg/h over the preceding 2 weeks (2 points); ≥25 hours of cumulative time dialyzing over the preceding 2 weeks (1 point); and dialysis vintage of ≥2 years (2 points). Model discrimination (C-statistic: 0.75) and calibration were good. Limitations: Measures related to health behaviors, social determinants of health, and residual kidney function were not available for inclusion as potential predictors. Conclusions: While this tool requires external validation, it may help identify high-risk patients and allow for preventative strategies to avoid unnecessary ED visits and improve patient quality of life. Trial registration: Not applicable—observational study design.
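A small sketch of how regression coefficients can be converted into such an integer point score; the coefficients below are invented for illustration and are not the published model's estimates.

```python
# Illustrative point-score construction: scale each log-odds coefficient by
# the smallest coefficient and round to the nearest integer.
coefficients = {                       # hypothetical log-odds ratios
    "K_ge_6_last_6mo": 1.6,
    "HbA1c_ge_8_last_12mo": 0.5,
    "UF_ge_10_mL_kg_h": 1.1,
    "dialysis_ge_25h_2wk": 0.6,
    "vintage_ge_2y": 1.0,
}

reference = min(abs(b) for b in coefficients.values())     # smallest effect = 1 point
points = {k: int(round(b / reference)) for k, b in coefficients.items()}
print(points)                                              # e.g. {'K_ge_6_last_6mo': 3, ...}

def risk_score(patient):
    """Sum the points for the binary predictors present in `patient`."""
    return sum(points[k] for k, present in patient.items() if present)

print(risk_score({"K_ge_6_last_6mo": True, "vintage_ge_2y": True,
                  "HbA1c_ge_8_last_12mo": False, "UF_ge_10_mL_kg_h": False,
                  "dialysis_ge_25h_2wk": False}))          # -> 5
```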


Circulation ◽  
2017 ◽  
Vol 135 (suppl_1) ◽  
Author(s):  
Veerle Dam ◽  
N. C Onland-Moret ◽  
W. M Verschuren ◽  
Jolanda M Boer ◽  
Karel G Moons ◽  
...  

Introduction: The AHA guidelines for the Prevention of Cardiovascular Disease (CVD) in Women describe hypertensive disorders of pregnancy (HDP) as a failed stress test that might unmask early CVD. An abundance of prediction models for CVD risk is available for the general population, but their validity in women with HDP is not established. Hypothesis: The prognostic performance of the Pooled Cohort Equations (PCE) is lower in women with HDP than in women without HDP, and recalibrating and refitting the model will improve prognostic performance. Methods: Data were used from 27,339 women in the MORGEN and PROSPECT cohorts; we excluded those who had never been pregnant. In total, 5,358 answered the question 'Did you suffer from high blood pressure during pregnancy?' with 'Yes' and 15,266 with 'No'. The outcome definition matched that of the original PCE model. MORGEN and PROSPECT were analyzed separately because of differences in characteristics (e.g., the MORGEN cohort is younger and includes more current smokers) and in observed risks. First, we calculated the 10-year predicted risk and compared it with the observed risk. Subsequently, the model was updated in three steps: by recalibrating the mean linear predictor, by additionally updating the baseline hazard, and by refitting the full model. The performance of all models was quantified by calibration (calibration plot, expected:observed ratio) and discrimination (c-statistic). Results: The Table shows that the original model over-predicts risk in all women, but more so in women without HDP. Calibration plots improved most after refitting, as confirmed by the expected:observed ratio, although the model still over-predicts. Refitting improved discrimination only in women with HDP, not in women without HDP. Conclusion: The PCE over-predicts risk in women with and without HDP, even after refitting the model. Discrimination is overall quite good, except for MORGEN women without HDP. The model's discrimination benefits from refitting especially in women with HDP.
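One simplified way to carry out a first recalibration step, assuming proportional hazards and using the observed:expected event ratio to shift predicted 10-year risks (an illustrative assumption, not the authors' exact procedure):

```python
# Minimal sketch: calibration-in-the-large by scaling the cumulative hazard.
# Under proportional hazards, S_new(t) = S_old(t)^c, so risk_new = 1 - (1 - risk_old)^c,
# with c taken as the observed:expected event ratio in the local cohort.
import numpy as np

rng = np.random.default_rng(5)
predicted_risk = rng.uniform(0.01, 0.30, 5000)        # original 10-year PCE-style predictions
observed = rng.random(5000) < 0.6 * predicted_risk    # cohort with systematic over-prediction

eo_ratio = observed.sum() / predicted_risk.sum()      # observed:expected events
recalibrated_risk = 1 - (1 - predicted_risk) ** eo_ratio

print(f"E:O before = {predicted_risk.sum() / observed.sum():.2f}, "
      f"after = {recalibrated_risk.sum() / observed.sum():.2f}")
```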


2019 ◽  
Vol 40 (Supplement_1) ◽  
Author(s):  
J C L Himmelreich ◽  
L Veelers ◽  
W A M Lucassen ◽  
H C P M Van Weert ◽  
R E Harskamp

Abstract Background Atrial fibrillation (AF) presents a considerable burden on our health care systems. Early detection of AF may prevent AF-associated complications, such as stroke and heart failure. Given our aging populations, the number of new AF cases is expected to double over the next decades. As such, there is renewed interest to screen for AF in the community. To optimise screening efforts, risk prediction models may help us identify at-risk patients. Purpose To identify and evaluate the performance of prediction models for AF that may be applicable for screening in community settings. Methods We searched PubMed, Embase, and CINAHL databases for studies that derived and/or validated AF risk models from population-based cohorts. Three investigators independently assessed risk of bias (CHARMS checklist), and performed data extraction and evidence synthesis. The primary expression of associations in meta-analysis was the C-statistic for discrimination between AF and non-AF cases during follow-up, using a random effects model. We calculated 95% prediction intervals (PI) due to high heterogeneity (I2 >30%) in all analyses. Results We identified 23 studies that presented data on 8 risk models derived from 18 cohorts with a total of 1.4 million participants from across the globe. Average age in these cohorts ranged from 43–76 years and follow-up ranged from 3 to 20 years. Two of the 8 risk models had a sufficient number of validation studies to be included in the meta-analysis. The CHARGE-AF (Cohorts for Heart and Aging Research in Genomic Epidemiology) score had a summary C-statistic of 0.72 (95%-PI: 0.67–0.77; n=7 cohorts, n=53,040 patients). The FHS (Framingham Heart Study) score for AF had a summary C-statistic of 0.71 (95%-PI: 0.59–0.83; n=4 cohorts, n=19,300 patients). Both models include age, height and weight, blood pressure, prevalent heart failure, and antihypertensive medication use as variables. CHARGE-AF additionally includes race, current smoking, and history of diabetes and myocardial infarction. FHS additionally includes sex, PR interval, and significant murmur. Conclusions Currently two risk scores, CHARGE-AF and FHS, have been rigorously tested for predicting atrial fibrillation in general populations. The CHARGE-AF score may present the more promising, user-friendly score for future community screening efforts, as it solely relies on readily available clinical parameters. Acknowledgement/Funding This work was supported by the Netherlands Organisation for Health Research and Development (ZonMw) [80-83910-98-13046]
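For context, a rough sketch of DerSimonian-Laird random-effects pooling of C-statistics with a Higgins-style 95% prediction interval; the per-cohort inputs are made up, not the review's data.

```python
# Sketch: random-effects summary C-statistic and 95% prediction interval.
import numpy as np
from scipy.stats import t

c_stats = np.array([0.70, 0.74, 0.68, 0.73, 0.76, 0.71, 0.72])   # per-cohort C-statistics (invented)
se = np.array([0.02, 0.03, 0.02, 0.04, 0.03, 0.02, 0.03])        # their standard errors (invented)
k, v, w = len(c_stats), se ** 2, 1 / se ** 2

fixed = np.sum(w * c_stats) / np.sum(w)
Q = np.sum(w * (c_stats - fixed) ** 2)
tau2 = max(0.0, (Q - (k - 1)) / (np.sum(w) - np.sum(w ** 2) / np.sum(w)))  # DerSimonian-Laird

w_star = 1 / (v + tau2)
pooled = np.sum(w_star * c_stats) / np.sum(w_star)
se_pooled = np.sqrt(1 / np.sum(w_star))

half = t.ppf(0.975, df=k - 2) * np.sqrt(tau2 + se_pooled ** 2)    # 95% prediction interval half-width
print(f"summary C = {pooled:.2f}, 95% PI {pooled - half:.2f} to {pooled + half:.2f}")
```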


Cardiology ◽  
2021 ◽  
pp. 1-9
Author(s):  
Diego Carlo Castini ◽  
Simone Persampieri ◽  
Ludovico Sabatelli ◽  
Federica Valli ◽  
Giulia Ferrante ◽  
...  

Introduction: This study analyzes the usefulness of the CHA2DS2-VASc score for mortality prediction in patients with acute coronary syndromes (ACSs) and evaluates whether the addition of renal functional status could improve its predictive accuracy. Methods: The CHA2DS2-VASc score was calculated using both the original scoring system and by adding renal functional status under 3 alternative renal dysfunction definitions (CHA2DS2-VASc-R1: eGFR <60 mL/min/1.73 m² = 1 point; CHA2DS2-VASc-R2: eGFR <60 mL/min/1.73 m² = 2 points; and CHA2DS2-VASc-R3: eGFR <60 mL/min/1.73 m² = 1 point, <30 mL/min/1.73 m² = 2 points). In-hospital mortality (IHM) and post-discharge mortality (PDM) were recorded, and the discrimination of the various risk models was evaluated. Finally, the net reclassification index (NRI) was calculated to compare the mortality risk classification of the modified risk models with that of the original score. Results: Nine hundred and eight ACS patients (median age 68 years, 30% female, 51% ST-elevation) composed the study population. Of the 871 patients discharged, 865 (99%) completed a 12-month follow-up. The IHM rate was 4.1%. The CHA2DS2-VASc score demonstrated good discriminative performance for IHM (C-statistic 0.75). Although all the eGFR-modified risk models showed higher C-statistics than the original model, a statistically significant difference was observed only for CHA2DS2-VASc-R3. The PDM rate was 4.5%. The CHA2DS2-VASc C-statistic for PDM was 0.75, and all the modified risk models showed significantly higher C-statistic values than the original model. The NRI analysis showed similar results. Conclusions: The CHA2DS2-VASc score demonstrated good predictive accuracy for IHM and PDM in ACS patients. The addition of renal dysfunction to the original score has the potential to improve identification of patients at risk of death.
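A hedged sketch of how the base score and the three eGFR modifications could be computed; the scoring follows the standard CHA2DS2-VASc definition plus the renal add-ons described above, and the example patient is hypothetical.

```python
# Sketch: CHA2DS2-VASc score with the three eGFR-based modifications (R1-R3).
def cha2ds2_vasc(age, female, chf, hypertension, diabetes, stroke_tia, vascular):
    score = chf + hypertension + diabetes + vascular + 2 * stroke_tia
    score += 2 if age >= 75 else (1 if 65 <= age <= 74 else 0)
    score += 1 if female else 0
    return score

def renal_points(egfr, variant):
    """Extra points for renal dysfunction under the three definitions above."""
    if variant == "R1":
        return 1 if egfr < 60 else 0
    if variant == "R2":
        return 2 if egfr < 60 else 0
    if variant == "R3":
        return 2 if egfr < 30 else (1 if egfr < 60 else 0)
    return 0

# hypothetical 68-year-old male with heart failure, hypertension, vascular disease
base = cha2ds2_vasc(age=68, female=False, chf=1, hypertension=1,
                    diabetes=0, stroke_tia=0, vascular=1)
for variant in ("R1", "R2", "R3"):
    print(variant, base + renal_points(egfr=45, variant=variant))
```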


Blood ◽  
2015 ◽  
Vol 126 (23) ◽  
pp. 1014-1014
Author(s):  
Anthony Q Pham ◽  
Megan M. O'Byrne ◽  
Prashant Kapoor ◽  
Mithun Vinod Shah ◽  
Roshini S Abraham ◽  
...  

Abstract INTRODUCTION: Hemophagocytic lymphohistiocytosis (HLH) is a rare disorder caused by pathologic activation of the immune system. In children, either a molecular diagnosis consistent with HLH or five of the following eight criteria are considered necessary for a diagnosis of HLH (HLH-04 criteria): 1) fever; 2) splenomegaly; 3) cytopenia in two or more cell lines; 4) hypertriglyceridemia (≥265 mg/dL) or hypofibrinogenemia (≤150 mg/dL); 5) hemophagocytosis in the bone marrow, spleen, or lymph nodes; 6) hyperferritinemia (≥500 mcg/L); 7) impaired NK cell function; and 8) elevated soluble CD25 (sCD25). These criteria have been extrapolated to diagnose HLH in adults; however, it is unclear whether the same criteria are applicable in the adult population. METHODS: We reviewed the Mayo Clinic electronic medical record for all adult (≥18 years) hospitalized patients with an admission serum ferritin of ≥500 mcg/L from January 2012 through December 2014. Patients' charts were reviewed, and those who met the HLH-04 criteria were considered to have HLH. For the remainder of the patients, the etiology of hyperferritinemia was determined based on chart review and discharge diagnoses. Logistic regression models were used to assess the ability of these values to predict a diagnosis of HLH. The Mayo Clinic IRB approved this study. RESULTS: We identified 1,329 patients with a serum ferritin ≥500 mcg/L. Of these, HLH was diagnosed in 28 (2.1%) patients (malignancy-associated HLH in 11 patients, infection-associated HLH in 4 patients, autoimmune-associated HLH in 7 patients, and idiopathic HLH in 6 patients). Table 1 describes the etiology of hyperferritinemia in the remaining 1,301 patients. In contrast to pediatric hospitalized patients (Allen, Pediatr Blood Cancer, 2008), adults are more likely to have malignancy (28.1% vs. 7%; p<0.05), bacterial infection (21% vs. 13%; p=0.001), liver disease (9.9% vs. 2.7%; p<0.05), and cardiac disease (7.2% vs. 1.5%; p=0.0001) as the etiology of hyperferritinemia during their hospitalization. Among all patients in the study, the following variables were associated with higher odds of having HLH compared with hyperferritinemia due to an alternate cause: elevated ferritin (odds ratio [OR] 35.97, p<0.01); thrombocytopenia (OR 15.22, p<0.01); cytopenias as defined by the HLH-04 criteria (OR 8.04, p<0.01); elevated admission serum bilirubin (OR 1.05, p=0.02); and peak serum bilirubin (OR 1.05, p<0.01). After stepwise selection in multivariate analysis, serum ferritin ≥2,600 mcg/L and platelets ≤100 × 10⁹/L were independently associated with HLH diagnosis (OR 24.9 and 7.8, respectively; p<0.01 for both). The area under the curve (AUC or c-statistic) ranges from 0.5 for no ability to discriminate to 1.0 for perfect discrimination; this model had an AUC of 0.91, indicating very good discrimination between cases and controls. An adult hospitalized patient with both serum ferritin ≥2,600 mcg/L and platelets ≤100 × 10⁹/L is approximately 200-fold more likely to be diagnosed with HLH than a hospitalized adult meeting neither criterion (Figures 1 and 2). CONCLUSION: In contrast to hospitalized pediatric patients, the etiology of hyperferritinemia in adults is more likely to be malignancy, bacterial infection, liver disease, or cardiac disease. We conclude that the combination of serum ferritin ≥2,600 mcg/L and platelet count ≤100 × 10⁹/L can be used as screening criteria to help identify adult patients most likely to have HLH.
The traditional 5/8 criteria used to diagnose HLH in pediatric patients do not appear necessary for establishing a diagnosis of HLH in adult patients. These observations need replication in an independent data set prior to broader applicability.
Table 1. Patients with ferritin >500 mcg/L, by underlying cause of elevation (number diagnosed, %):
Cardiac disease: 95 (7.2%)
Liver disease: 131 (9.9%)
Renal disease: 87 (6.6%)
Infectious: 343 (25.8%)
Malignancy: 373 (28.1%)
Autoimmune: 118 (8.9%)
Solid organ transplant: 34 (2.6%)
Stem cell transplant: 16 (1.2%)
Bone marrow failure: 9 (0.7%)
Shock: 39 (2.9%)
Idiopathic: 53 (4.0%)
Hemoglobinopathies: 3 (0.2%)
Figure 1. ROC curve for the multivariable model with ferritin >2,600 (yes/no) and platelets below 100,000 (yes/no); c-statistic 0.91 (very good discrimination).
Figure 2.
Disclosures: No relevant conflicts of interest to declare.
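A toy sketch of a two-threshold screening model of this form, fitted to simulated data (not the Mayo cohort), showing how the odds ratios and AUC would be obtained:

```python
# Sketch: logistic model with thresholded ferritin and platelet indicators.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(6)
n = 1329
df = pd.DataFrame({
    "ferritin_ge_2600": rng.integers(0, 2, n),     # indicator: ferritin >= 2,600 mcg/L
    "platelets_le_100": rng.integers(0, 2, n),     # indicator: platelets <= 100 x 10^9/L
})
logit = -4.5 + 3.2 * df["ferritin_ge_2600"] + 2.1 * df["platelets_le_100"]
df["hlh"] = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)   # simulated HLH status

fit = sm.Logit(df["hlh"],
               sm.add_constant(df[["ferritin_ge_2600", "platelets_le_100"]])).fit(disp=0)
print(np.exp(fit.params))                          # odds ratios for the two thresholds
print("AUC:", round(roc_auc_score(df["hlh"], fit.predict()), 2))
```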

