The impact of lactate clearance on outcomes according to infection sites in patients with sepsis: a retrospective observational study

2021 · Vol 11 (1) · Author(s): Momoko Sugimoto, Wataru Takayama, Kiyoshi Murata, Yasuhiro Otomo

Abstract: Whether lactate clearance (LC) influences outcomes differently depending on the infection site in sepsis cases has not been fully elucidated. Herein, we analyzed LC's clinical utility as a predictor of patient outcomes according to infection site. This retrospective study, conducted at two tertiary emergency critical care medical centers in Japan, included patients with sepsis or septic shock. The associations between infection site (lungs vs. other organs) and in-hospital mortality and ventilator-free days (VFDs) were evaluated using univariable and multivariable analyses. We assessed LC's ability to predict in-hospital mortality using the area under the receiver operating characteristic curve. Among 369 patients with sepsis, infection sites were as follows: lungs, 186 (50.4%); urinary tract, 45 (12.2%); abdomen, 102 (27.6%); and other, 36 (9.8%). Patients were divided into a pneumonia group or a non-pneumonia group depending on their infection site. The pneumonia group displayed higher in-hospital mortality than the non-pneumonia group (24.2% vs. 15.8%, p = 0.051). In the multivariable analysis, lower LC was associated with higher in-hospital mortality [adjusted odds ratio (AOR) 0.97; 95% confidence interval (CI) 0.96-0.98; p < 0.001] and fewer VFDs [adjusted difference (AD) -1.23; 95% CI -2.42 to -0.09; p = 0.025] in the non-pneumonia group. Conversely, LC did not affect in-hospital mortality (AOR 0.99; 95% CI 0.99-1.00; p = 0.134) or VFDs (AD -0.08; 95% CI -2.06 to 1.91; p = 0.854) in the pneumonia group. Given the differences in the impact of LC on outcomes between the pneumonia and non-pneumonia groups, this study suggests that treatment strategies tailored to the infection site might improve outcomes. Further studies are warranted to validate our results and develop optimal therapeutic strategies for sepsis patients.
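
As a minimal sketch of the kind of analysis this abstract describes, the snippet below computes lactate clearance from two serial measurements and estimates its adjusted odds ratio for in-hospital mortality separately in pneumonia and non-pneumonia patients. The file name and column names (lactate_0h, lactate_6h, death, age, sofa, pneumonia) are illustrative assumptions, not the study's actual variables or adjustment set.

```python
# Sketch only: LC computation and a stratified multivariable logistic model.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("sepsis_cohort.csv")  # hypothetical dataset

# Lactate clearance (%) between admission and a later measurement
df["lc"] = (df["lactate_0h"] - df["lactate_6h"]) / df["lactate_0h"] * 100

# Adjusted odds ratio for in-hospital mortality per 1% increase in LC,
# fitted separately for the pneumonia (1) and non-pneumonia (0) groups
for group, sub in df.groupby("pneumonia"):
    fit = smf.logit("death ~ lc + age + sofa", data=sub).fit(disp=False)
    aor = np.exp(fit.params["lc"])
    ci = np.exp(fit.conf_int().loc["lc"])
    print(f"pneumonia={group}: AOR {aor:.3f}, 95% CI {ci[0]:.3f}-{ci[1]:.3f}")
```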

2016 · Vol 32 (8) · pp. 473-479 · Author(s): Christine A. Motzkus, Roger Luckmann

Purpose: Sepsis treatment protocols emphasize source control with empiric antibiotics and fluid resuscitation. Previous reviews have examined the impact of infection site and specific pathogens on mortality from sepsis; however, no recent review has focused specifically on infection site. This review focuses on the impact of infection site on hospital mortality among patients with sepsis. Methods: The PubMed database was searched for articles published from 2001 to 2014. Studies were eligible if they included (1) one or more statistical models with hospital mortality as the outcome that considered infection site for inclusion in the model and (2) adult patients with sepsis, severe sepsis, or septic shock. Data abstracted included stage of sepsis, infection site, and raw and adjusted effect estimates. Results: Nineteen studies were included. The infection sites studied most often were respiratory (n = 19), abdominal (n = 19), genitourinary (n = 18), and skin and soft tissue infections (n = 11). Several studies found a statistically significantly lower hospital mortality risk for genitourinary infections compared with respiratory infections. Conclusion: Based on the studies included in this review, the impact of infection site on hospital mortality in patients with sepsis could not be reliably estimated. Misclassification among infections and disease states remains a serious possibility in studies on this topic.


2021 · Vol 21 (1) · Author(s): Yan Luo, Zhiyu Wang, Cong Wang

Abstract Background: Prognostication is an essential tool for risk adjustment and decision making in intensive care units (ICUs). To improve patient outcomes, we have been trying to develop a more effective model than the Acute Physiology and Chronic Health Evaluation (APACHE) II score to measure the severity of illness of ICU patients. The aim of the present study was to provide a mortality prediction model for ICU patients and to assess its performance relative to prediction based on the APACHE II scoring system. Methods: We used the Medical Information Mart for Intensive Care version III (MIMIC-III) database to build our model. After comparing APACHE II with six typical machine learning (ML) methods, the best performing model was selected for external validation on another independent dataset. Performance measures were calculated using cross-validation to avoid biased assessments. The primary outcome was hospital mortality. Finally, we used the TreeSHAP algorithm to explain the variable relationships in the extreme gradient boosting (XGBoost) model. Results: We selected 14 variables and 24,777 cases to form our basic dataset. When the variables were the same as those contained in APACHE II, the accuracy of XGBoost (0.858) was higher than that of APACHE II (0.742) and the other algorithms, and it exhibited better calibration properties than the other methods, with an area under the ROC curve (AUC) of 0.76. We then expanded the variable set by adding five new variables to improve the performance of our model. The accuracy, precision, recall, F1 score, and AUC of the XGBoost model increased and remained higher than those of the other models (0.866, 0.853, 0.870, 0.845, and 0.81, respectively). On the external validation dataset, the AUC was 0.79 and calibration properties were good. Conclusions: Compared with the conventional APACHE II severity score, our XGBoost model offers improved performance for predicting hospital mortality in ICU patients. Furthermore, TreeSHAP can enhance the understanding of our model by providing detailed insights into the impact of different features on disease risk. In sum, our model could help clinicians determine prognosis and improve patient outcomes.
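
An illustrative sketch of the pipeline outlined above: an XGBoost classifier evaluated with cross-validation and explained with TreeSHAP. The file name, feature columns, and hyperparameters are assumptions for illustration and do not reproduce the authors' MIMIC-III extraction or tuning.

```python
# Sketch only: cross-validated XGBoost for hospital mortality, explained by TreeSHAP.
import pandas as pd
import shap
from sklearn.model_selection import cross_val_score
from xgboost import XGBClassifier

df = pd.read_csv("mimic_iii_features.csv")  # hypothetical preprocessed extract
X = df.drop(columns=["hospital_mortality"])
y = df["hospital_mortality"]

model = XGBClassifier(n_estimators=300, max_depth=4, learning_rate=0.05,
                      eval_metric="logloss")
aucs = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
print("cross-validated AUC:", aucs.mean())

# Fit on the full data and use TreeSHAP to attribute predictions to features
model.fit(X, y)
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)
shap.summary_plot(shap_values, X, show=False)  # feature-impact overview
```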


2020 · Author(s): Jun Ke, Yiwei Chen, Xiaoping Wang, Zhiyong Wu, Qiongyao Zhang, et al.

Abstract Background: The purpose of this study was to identify risk factors for in-hospital mortality in patients with acute coronary syndrome (ACS) and to evaluate the performance of traditional regression and machine learning prediction models. Methods: Data on ACS patients who presented to the emergency department of Fujian Provincial Hospital with chest pain from January 1, 2017 to March 31, 2020 were retrospectively collected. Univariate and multivariate logistic regression analyses were used to identify risk factors for in-hospital mortality of ACS patients. Traditional regression and machine learning algorithms were used to develop predictive models, and sensitivity, specificity, and the receiver operating characteristic curve were used to evaluate the performance of each model. Results: A total of 7810 ACS patients were included in the study, and the in-hospital mortality rate was 1.75%. Multivariate logistic regression analysis found that age, D-dimer, cardiac troponin I, N-terminal pro-B-type natriuretic peptide (NT-proBNP), lactate dehydrogenase (LDH), high-density lipoprotein (HDL) cholesterol, and calcium channel blockers were independent predictors of in-hospital mortality. The areas under the receiver operating characteristic curve of the models developed by logistic regression, gradient boosting decision tree (GBDT), random forest, and support vector machine (SVM) for predicting the risk of in-hospital mortality were 0.963, 0.960, 0.963, and 0.959, respectively. Feature importance evaluation found that NT-proBNP, LDH, and HDL cholesterol were the top three variables contributing most to the prediction performance of the GBDT and random forest models. Conclusions: The predictive models developed using logistic regression, GBDT, random forest, and SVM algorithms can be used to predict the risk of in-hospital death of ACS patients. Based on our findings, we recommend that clinicians focus on monitoring changes in NT-proBNP, LDH, and HDL cholesterol, as this may improve the clinical outcomes of ACS patients.
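
A hedged sketch of the four-model comparison by area under the ROC curve. The predictor list and file name below are placeholders drawn from the variables named in the abstract; the study's preprocessing, imputation, and hyperparameter tuning are not reproduced.

```python
# Sketch only: compare logistic regression, GBDT, random forest and SVM by AUC.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

df = pd.read_csv("acs_patients.csv")  # hypothetical dataset
features = ["age", "d_dimer", "troponin_i", "nt_probnp", "ldh", "hdl_c"]
X, y = df[features], df["in_hospital_death"]

models = {
    "logistic regression": make_pipeline(StandardScaler(),
                                         LogisticRegression(max_iter=1000)),
    "GBDT": GradientBoostingClassifier(),
    "random forest": RandomForestClassifier(n_estimators=500),
    "SVM": make_pipeline(StandardScaler(), SVC(probability=True)),
}
for name, model in models.items():
    auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
    print(f"{name}: AUC = {auc:.3f}")
```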


2021 · Vol 9 (B) · pp. 1561-1564 · Author(s): Ngakan Ketut Wira Suastika, Ketut Suega

Introduction: Coronavirus disease 2019 (Covid-19) can cause abnormalities in coagulation parameters, such as increased D-dimer levels, especially in severe cases. The purpose of this study was to determine the difference in D-dimer levels between survivors and non-survivors of severe Covid-19 and to determine the optimal cut-off value of the D-dimer level for predicting in-hospital mortality. Methods: Data were obtained from confirmed Covid-19 patients treated from June to September 2020. The Mann-Whitney U test was used to compare D-dimer levels between surviving and non-surviving patients. The optimal cut-off value and area under the curve (AUC) of the D-dimer level for predicting mortality were obtained by the receiver operating characteristic (ROC) curve method. Results: A total of 80 patients were recruited. D-dimer levels were significantly higher in non-surviving patients (median 3.346 mg/ml; range 0.939-50.000 mg/ml) than in surviving patients (median 1.201 mg/ml; range 0.302-29.425 mg/ml), p = 0.012. A D-dimer level higher than 1.500 mg/ml was the optimal cut-off value for predicting mortality in severe Covid-19, with a sensitivity of 80.0%, a specificity of 64.3%, and an area under the curve of 0.754 (95% CI 0.586-0.921; p = 0.010). Conclusions: D-dimer levels can be used as a predictor of mortality in severe cases of Covid-19.
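
A minimal sketch of the two analysis steps described: a Mann-Whitney U test comparing D-dimer between non-survivors and survivors, and an ROC-derived cut-off (here chosen with the Youden index, one common criterion). Column and file names are placeholders.

```python
# Sketch only: group comparison and ROC cut-off for D-dimer.
import numpy as np
import pandas as pd
from scipy.stats import mannwhitneyu
from sklearn.metrics import roc_auc_score, roc_curve

df = pd.read_csv("covid19_severe.csv")  # hypothetical dataset
died = df["died"] == 1

# Compare D-dimer distributions between non-survivors and survivors
stat, p = mannwhitneyu(df.loc[died, "d_dimer"], df.loc[~died, "d_dimer"],
                       alternative="two-sided")
print("Mann-Whitney U p-value:", p)

# ROC curve; pick the threshold maximizing sensitivity + specificity - 1
fpr, tpr, thresholds = roc_curve(df["died"], df["d_dimer"])
best = np.argmax(tpr - fpr)
print("AUC:", roc_auc_score(df["died"], df["d_dimer"]))
print("cut-off:", thresholds[best],
      "sensitivity:", tpr[best], "specificity:", 1 - fpr[best])
```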


Perfusion · 2019 · Vol 34 (7) · pp. 568-577 · Author(s): Giuseppe Gatti, Elisabetta Rauber, Gabriella Forti, Bernardo Benussi, Marco Gabrielli, et al.

Introduction: The safe cross-clamp time with single-dose Custodiol® histidine-tryptophan-ketoglutarate (HTK) cardioplegia has not been established conclusively. Methods: Immediate post-operative outcomes of 1,420 non-consecutive cardiac surgery patients were reviewed retrospectively. Predictors of a combined endpoint of in-hospital mortality and any major post-operative complication were identified using multivariable analysis. Analysis of variance was used to evaluate the impact of cross-clamp time on the most relevant complications. The discriminatory power and cut-off value of cross-clamp time were established for in-hospital mortality and for each of the major complications (receiver operating characteristic curve analysis). A propensity-matched comparison with multidose cold blood cardioplegia on in-hospital mortality was performed in non-coronary surgery patients. Results: Coronary, aortic valve, and mitral valve surgery and surgery on the thoracic aorta were performed in 45.4%, 41.9%, 49.5%, and 20.6% of cases, respectively. In-hospital mortality and the rate of any major post-operative complication were 6.5% and 41.9%, respectively. Cross-clamp time had a significant impact on in-hospital mortality and almost all major post-operative complications, except neurological dysfunction (p = 0.084), myocardial infarction (p = 0.12), and mesenteric ischaemia (p = 0.85). The areas under the receiver operating characteristic curve for in-hospital mortality and any major complication were 0.657 and 0.594, with optimal cut-off values of >140 and >127 minutes, respectively. The comorbidity-adjusted odds ratio for any major complication with a cross-clamp time >127 minutes was 1.86 (p < 0.0001). Despite similar in-hospital mortality (p = 0.57), mortality increased significantly earlier in the Custodiol-HTK group than in the propensity-matched multidose cold blood group among non-coronary surgery patients. Conclusions: The use of Custodiol-HTK cardioplegia is associated with a low risk of serious post-operative complications provided that the cross-clamp time is 2 hours or less.
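
One way to set up the propensity-matched comparison mentioned above is 1:1 nearest-neighbour matching on a logistic propensity score, sketched below with hypothetical covariate and column names; this is not the authors' matching procedure, and matching is done with replacement for simplicity.

```python
# Sketch only: propensity-score matching of Custodiol-HTK vs cold blood patients.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

df = pd.read_csv("cardioplegia_cohort.csv")  # hypothetical dataset
covariates = ["age", "ejection_fraction", "euroscore", "diabetes"]

# Propensity of receiving Custodiol-HTK rather than multidose cold blood cardioplegia
ps_model = LogisticRegression(max_iter=1000).fit(df[covariates], df["custodiol"])
df["ps"] = ps_model.predict_proba(df[covariates])[:, 1]

treated = df[df["custodiol"] == 1]
control = df[df["custodiol"] == 0]

# Match each Custodiol-HTK patient to the control with the closest propensity
# score (with replacement, for simplicity)
nn = NearestNeighbors(n_neighbors=1).fit(control[["ps"]])
_, idx = nn.kneighbors(treated[["ps"]])
matched_control = control.iloc[idx.ravel()]

print("in-hospital mortality, Custodiol-HTK vs matched controls:",
      treated["death"].mean(), matched_control["death"].mean())
```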


2021 · Vol 8 · Author(s): Chengyu Liu, Mingwei Zhu, Xin Yang, Hongyuan Cui, Zijian Li, et al.

The controlling nutritional status (CONUT) score assesses nutritional status and is associated with short- and long-term prognoses in some diseases, but the significance of the CONUT score for predicting in-hospital mortality in older adults is unknown. The purpose of this study was to determine the value of the CONUT score for predicting in-hospital mortality, short-term complications, length of hospital stay, and hospital costs in older adults. Our retrospective cohort study analyzed data on 11,795 older adult patients from two multicenter cohort studies. We performed receiver operating characteristic curve analysis using in-hospital mortality as the endpoint and determined the appropriate CONUT score cut-off by the Youden index. The patients were divided into high and low CONUT groups according to this cut-off value, and differences in clinical characteristics and in-hospital clinical outcomes between the two groups were compared. We compared the accuracy of the CONUT score and other nutrition-related tools in predicting in-hospital mortality by calculating the area under the receiver operating characteristic curve and performed univariate and multivariate analyses of predictors of in-hospital mortality. Among all patients, 178 (1.5%) experienced in-hospital death. The optimal cut-off value for the CONUT score was 5.5. The high CONUT group (CONUT score ≥ 6) had a higher incidence of short-term complications and a longer hospital stay than the low CONUT group (CONUT score < 6), but hospital costs were not significantly higher. The CONUT score had the highest predictive ability for in-hospital mortality among the five nutrition-related parameters compared. Multivariate analysis showed that a high CONUT score was an independent predictor of in-hospital mortality. In conclusion, the present study demonstrated that the CONUT score can be used to predict in-hospital mortality in older adults.
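
A short sketch of choosing the CONUT cut-off by the Youden index and comparing discrimination across several nutrition-related parameters by AUC. The abstract does not list the five tools compared, so the score names, sign conventions, and file name below are assumptions.

```python
# Sketch only: Youden-index cut-off for CONUT and AUC comparison of nutrition markers.
import numpy as np
import pandas as pd
from sklearn.metrics import roc_auc_score, roc_curve

df = pd.read_csv("older_adults_cohort.csv")  # hypothetical dataset
y = df["in_hospital_death"]

# Youden-index-optimal CONUT cut-off for in-hospital mortality
fpr, tpr, thr = roc_curve(y, df["conut"])
print("optimal CONUT cut-off:", thr[np.argmax(tpr - fpr)])

# AUC comparison; markers where lower values indicate higher risk are negated
# so that larger values always point toward death
candidates = {
    "CONUT": df["conut"],
    "NRS-2002": df["nrs_2002"],
    "GNRI": -df["gnri"],
    "albumin": -df["albumin"],
    "BMI": -df["bmi"],
}
for name, score in candidates.items():
    print(name, "AUC:", roc_auc_score(y, score))
```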


Bioanalysis · 2019 · Vol 11 (17) · pp. 1593-1604 · Author(s): Meenu Wadhwa, Robin Thorpe

Understanding of the determinants of immunogenicity, the testing paradigm, the impact of antibody attributes on clinical outcomes and regulatory guidance is leading to harmonized practices for immunogenicity assessment of biotherapeutics. However, generation of robust immunogenicity data for inclusion in product labels to support clinical practice continues to be a challenge. Assays, protocols and antibody positive controls/standards need to be developed in sufficient time to allow assessment of clinical immunogenicity using validated methods and optimized protocols. Standardization and harmonization play a significant role in achieving acceptable results. Harmonization in the postapproval setting is crucial for a valid interpretation of the product's immunogenicity and its clinical effects. Efforts are ongoing to standardize assays where possible for antibody measurement and for measuring product/drug levels by producing reference standards. Provision of such standards will help toward personalized treatment strategies with better patient outcomes.


2018 · Vol 36 (4_suppl) · pp. 502-502 · Author(s): Kiwoon Joshua Baeg, Cynthia Harris, Mi Ri Lee, Jacob Andrew Martin, Sheila Rustgi, et al.

502 Background: Gastroenteropancreatic neuroendocrine tumors (GEP-NETs) are relatively rare tumors, and patients seek care at medical centers with varying levels of expertise. While treatment center volume is associated with better survival in multiple cancers, it remains unknown whether the same applies to GEP-NETs. The objective of this study was to assess the impact of center volume on GEP-NET treatment outcomes. Methods: We used the Surveillance, Epidemiology, and End Results (SEER) registry linked to Medicare claims data. We included patients diagnosed between 1995 and 2010 who had no HMO coverage, participated in Medicare parts A and B, were older than 65 at diagnosis, had full tumor grade information, and had no secondary cancer. We used Medicare claims to identify the medical centers at which patients received GEP-NET treatment (surgery, chemotherapy, somatostatin analogues, or radiation therapy). Center volume was divided into tiers (low, medium, and high) based on the number of unique GEP-NET patients treated by a medical center over two years. Kaplan-Meier curves and Cox regression were used to assess the association between volume and disease-specific survival (DSS). Results: We identified 1025 GEP-NET patients, of whom 65%, 28%, and 7% received treatment at low, medium, and high volume centers, respectively. Surgery was the most common first treatment (84-90%). Comorbidity and tumor stage distributions were similar across tiers, but the distribution of patients with poorly differentiated tumors differed significantly (p < 0.001). Median DSS was 3.7 years for patients at low volume centers and 6.6 years at medium volume centers, but was not reached for patients at high volume centers. After adjusting for confounders, patients treated at high volume centers had better survival than those treated at low volume centers (HR: 0.55, 95% CI: 0.30-0.99). However, no difference in survival was noted for medium volume centers (HR: 0.98, 95% CI: 0.78-1.22). Conclusions: Our results suggest that centers with expertise in GEP-NET treatment have better patient outcomes. Thus, centralization of care, particularly of more difficult cases, may lead to improved patient outcomes.
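
A sketch of the survival analysis described above using the lifelines library: Kaplan-Meier curves by center-volume tier and a confounder-adjusted Cox model. The data layout, column names, and the particular confounders shown are assumptions for illustration, not the SEER-Medicare variables.

```python
# Sketch only: Kaplan-Meier and Cox regression for disease-specific survival by volume tier.
import pandas as pd
from lifelines import CoxPHFitter, KaplanMeierFitter

df = pd.read_csv("gepnet_seer_medicare.csv")  # hypothetical extract
# Assumed columns: years (follow-up), dss_event (1 = died of disease),
# volume_tier ("low"/"medium"/"high"), stage, grade.

# Kaplan-Meier estimates of disease-specific survival per volume tier
kmf = KaplanMeierFitter()
for tier, sub in df.groupby("volume_tier"):
    kmf.fit(sub["years"], sub["dss_event"], label=tier)
    print(tier, "median DSS (years):", kmf.median_survival_time_)

# Cox model with the "low" volume tier as reference, adjusting for stage and grade
df["volume_tier"] = pd.Categorical(df["volume_tier"],
                                   categories=["low", "medium", "high"])
cox_df = pd.get_dummies(df[["years", "dss_event", "volume_tier", "stage", "grade"]],
                        columns=["volume_tier", "stage", "grade"],
                        drop_first=True, dtype=float)
cph = CoxPHFitter().fit(cox_df, duration_col="years", event_col="dss_event")
print(cph.summary[["exp(coef)", "exp(coef) lower 95%", "exp(coef) upper 95%"]])
```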


2018 · Vol 26 (1) · pp. 34-44 · Author(s): Muhammad Faisal, Andy Scally, Robin Howes, Kevin Beatson, Donald Richardson, et al.

We compared the performance of logistic regression with several alternative machine learning methods for estimating the risk of death following an emergency admission to hospital, based on patients' first blood test results and physiological measurements, using an external validation approach. We trained and tested each model using data from one hospital (n = 24,696) and compared the performance of these models on data from another hospital (n = 13,477). We used two performance measures: the calibration slope and the area under the receiver operating characteristic curve. The logistic regression model performed reasonably well (calibration slope 0.90, area under the receiver operating characteristic curve 0.847) compared with the other machine learning methods. Given the complexity of choosing the tuning parameters of these methods, the performance of logistic regression with transformations for in-hospital mortality prediction was competitive with the best performing alternative machine learning methods, with no evidence of overfitting.
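
A minimal sketch of this external-validation setup with the two performance measures used above; the calibration slope is estimated by regressing the observed outcome on the logit of the predicted risk. The file names, outcome column, and the plain logistic model (without the transformations the authors mention) are assumptions.

```python
# Sketch only: train in one hospital, externally validate in another.
import pandas as pd
import statsmodels.api as sm
from scipy.special import logit
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

train = pd.read_csv("hospital_a.csv")  # development data (hypothetical)
test = pd.read_csv("hospital_b.csv")   # external validation data (hypothetical)
features = [c for c in train.columns if c != "died"]

model = LogisticRegression(max_iter=1000).fit(train[features], train["died"])
pred = model.predict_proba(test[features])[:, 1]

print("AUROC:", roc_auc_score(test["died"], pred))

# Calibration slope: coefficient from regressing outcomes on the linear predictor;
# a slope near 1 indicates predictions are neither too extreme nor too modest
lp = logit(pred.clip(1e-6, 1 - 1e-6))
fit = sm.Logit(test["died"].to_numpy(), sm.add_constant(lp)).fit(disp=False)
print("calibration slope:", fit.params[1])
```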


2011 · Vol 2011 · pp. 1-8 · Author(s): João M. Silva, Amanda M. Ribas R. Oliveira, Juliano Lopes Segura, Marcel Henrique Ribeiro, Carolina Nacevicius Sposito, et al.

Background. This study evaluated whether a large preoperative venous-arterial CO2 gap [P(v-a)CO2 gap] is associated with poor outcomes. Methods. This prospective study included adult high-risk surgical patients. The patients were divided into two groups: wide versus narrow P(v-a)CO2 gap. To determine the value that best discriminated hospital mortality, a ROC (receiver operating characteristic) curve was constructed for the P(v-a)CO2 values collected preoperatively, and the most accurate value was chosen as the cut-off to define the groups. Results. The study included 66 patients. The preoperative P(v-a)CO2 value that best discriminated hospital mortality was 5.0 mmHg (area under the curve = 0.73). Patients with a preoperative P(v-a)CO2 gap of more than 5.0 mmHg had higher hospital mortality (36.4% versus 4.5%, P = 0.004), a higher prevalence of circulatory shock (56.8% versus 22.7%, P = 0.01) and postoperative acute renal failure (27.3% versus 4.5%, P = 0.02), and a longer hospital length of stay [20.0 (14.0-30.0) versus 13.5 (9.0-25.0) days, P = 0.01]. Conclusions. A preoperative P(v-a)CO2 gap of more than 5.0 mmHg was associated with worse postoperative outcomes.
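
A brief sketch of the ROC-based cut-off selection and group comparison described above, with the Youden index standing in for the "most accurate value" criterion and Fisher's exact test as one reasonable choice for comparing mortality between the wide- and narrow-gap groups; column and file names are assumptions.

```python
# Sketch only: preoperative P(v-a)CO2 gap cut-off and mortality comparison.
import numpy as np
import pandas as pd
from scipy.stats import fisher_exact
from sklearn.metrics import roc_auc_score, roc_curve

df = pd.read_csv("high_risk_surgery.csv")  # hypothetical dataset
gap, died = df["pv_a_co2_gap"], df["hospital_death"]

# Cut-off that best discriminates hospital mortality (ROC analysis)
fpr, tpr, thr = roc_curve(died, gap)
cutoff = thr[np.argmax(tpr - fpr)]
print("AUC:", roc_auc_score(died, gap), "cut-off (mmHg):", cutoff)

# Compare hospital mortality between wide- and narrow-gap groups
wide = gap > cutoff
print("mortality wide vs narrow:", died[wide].mean(), died[~wide].mean())
_, p_value = fisher_exact(pd.crosstab(wide, died).values)
print("Fisher exact p-value:", p_value)
```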

