Introduction: Identifying SARS-CoV-2 patients at higher risk of mortality is crucial in the management of a pandemic. Artificial intelligence techniques allow the analysis of large amounts of data to uncover hidden patterns. We aimed to develop and validate a mortality score at admission for COVID-19 based on high-level machine learning.
Material and methods: We conducted a retrospective cohort study on hospitalized adult COVID-19 patients between March and December 2020. The primary outcome was in-hospital mortality. A machine learning approach using vital parameters, laboratory values, and demographic features was applied to develop different models. A feature importance analysis was then performed to reduce the number of variables included in the model and to develop a risk score with good overall performance, which was finally evaluated in terms of discrimination and calibration. All results underwent cross-validation.
Results: 1,135 consecutive patients (median age 70 years, 64% males) were enrolled; 48 patients were excluded, and the remaining cohort was randomly divided into training (n = 760) and test (n = 327) sets. During hospitalization, 251 (22%) patients died. After feature selection, the best performing classifier was a random forest (AUC 0.88 ± 0.03). Based on the relative importance of each variable, a pragmatic score was developed, showing good performance (AUC 0.85 ± 0.025), and three risk levels were defined that correlated well with in-hospital mortality.
Conclusions: Machine learning techniques were applied to develop an accurate in-hospital mortality risk score for COVID-19 based on ten variables. The proposed score has utility in clinical settings to guide the management and prognostication of COVID-19 patients.
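Both the full classifier and the pragmatic score above are evaluated by their AUC. As a reminder of what that discrimination metric measures, the AUC equals the Mann-Whitney probability that a randomly chosen non-survivor receives a higher score than a randomly chosen survivor. A minimal stdlib sketch (the toy scores and labels are illustrative, not study data):

```python
def auc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney U statistic:
    the probability that a randomly chosen positive case (label 1)
    scores higher than a randomly chosen negative case (label 0);
    ties count as 0.5."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy example: higher predicted risk should track mortality (label 1)
toy_scores = [0.9, 0.8, 0.4, 0.3, 0.2]
toy_labels = [1, 1, 0, 1, 0]
print(auc(toy_scores, toy_labels))
```

An AUC of 0.5 corresponds to chance-level discrimination and 1.0 to perfect separation, which is why the reported 0.85–0.88 values indicate good discriminative ability.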
Background: Patients who are identified as being at higher risk of mortality from COVID-19 should receive better treatment and monitoring. This study aimed to propose a simple yet accurate risk assessment tool to help decision-making in the management of the COVID-19 pandemic.
Methods: From July to November 2020, 5,454 patients from Fars Province, Iran, diagnosed with COVID-19 were enrolled. A multiple logistic regression model was trained on one dataset (training set: n=4,183) and its prediction performance was assessed on another (testing set: n=1,271). This model was used to develop the COVID-19 risk score in Fars (CRSF).
Results: Five independent risk factors were significantly associated with in-hospital mortality: gender (male: OR=1.37), age (60-80: OR=2.67; >80: OR=3.91), SpO2 (≤85%: OR=7.02), underlying disease (yes: OR=1.25), and pulse rate (<60: OR=2.01; >120: OR=1.60). The CRSF formula was obtained from the estimated regression coefficients of these factors. The point values for the risk factors ranged from 2 to 19 and the total CRSF from 0 to 45. ROC analysis showed that CRSF values ≥15 (high-risk patients) had a specificity of 73.5%, sensitivity of 76.5%, positive predictive value of 23.2%, and negative predictive value (NPV) of 96.8% for the prediction of death (AUC=0.824, P<0.0001).
Conclusion: This simple CRSF system, which has a high NPV, can be useful for predicting the risk of mortality in COVID-19 patients. It can also be used as a disease severity indicator to determine the triage level for hospitalization.
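A point-based score like the CRSF is applied by summing the points of whichever risk factors a patient presents and comparing the total to the published cut-off of 15. The sketch below illustrates that mechanics only: the per-factor point values are hypothetical placeholders (the abstract reports only that individual points range from 2 to 19, with SpO2 ≤85% carrying the largest odds ratio, and that the total spans 0 to 45), not the actual CRSF weights.

```python
# HYPOTHETICAL point values chosen so the maximum total is 45, as in the
# abstract; the real CRSF weights are derived from the regression
# coefficients and are not reproduced here.
POINTS = {
    "male": 4,                 # hypothetical
    "age_60_80": 8,            # hypothetical (mutually exclusive with below)
    "age_over_80": 12,         # hypothetical
    "spo2_le_85": 19,          # hypothetical (largest OR, 7.02, in the abstract)
    "underlying_disease": 3,   # hypothetical
    "pulse_lt_60": 7,          # hypothetical (mutually exclusive with below)
    "pulse_gt_120": 5,         # hypothetical
}

def crsf_score(patient: dict) -> int:
    """Sum the points of every risk factor flagged True in the record."""
    return sum(pts for factor, pts in POINTS.items() if patient.get(factor))

def risk_level(score: int) -> str:
    """Dichotomize at the published cut-off of 15 points."""
    return "high" if score >= 15 else "low"

patient = {"male": True, "age_over_80": True, "spo2_le_85": True}
score = crsf_score(patient)
print(score, risk_level(score))
```

The high NPV at the ≥15 cut-off means a low total is chiefly useful for ruling out death, consistent with the triage use suggested in the conclusion.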
Osteoporosis is a major health concern in aging populations: 54% of the U.S. population aged 50 and older have low bone mineral density (BMD). Increases in inflammation and oxidative stress play a major role in the development of osteoporosis. Men are at a greater risk of mortality due to osteoporosis-related fractures. Our earlier findings in rodent male and female models of osteoporosis, as well as in postmenopausal women, strongly suggest the efficacy of prunes (dried plums) in reducing inflammation and preventing/reversing bone loss. The objective of this study was to examine the effects of two daily doses of prunes on biomarkers of inflammation and bone metabolism over three months in men with some degree of bone loss (BMD t-score between −0.1 and −2.5 SD). Thirty-five men between the ages of 55 and 80 years were randomized into one of three groups: 100 g prunes, 50 g prunes, or control. Consumption of 100 g prunes led to a significant decrease in serum osteocalcin (p < 0.001). Consumption of 50 g prunes led to significant decreases in serum osteoprotegerin (OPG) (p = 0.003) and serum osteocalcin (p = 0.040), and an increase in the OPG:RANKL ratio (p = 0.041). Regular consumption of either 100 g or 50 g prunes for three months may positively affect bone turnover.
Women have a longer life expectancy than men in the general population. However, it has remained unclear whether this advantage is maintained in patients undergoing maintenance hemodialysis. The aim of this study was to compare the risk of mortality, especially infection-related mortality, between male and female hemodialysis patients. A total of 3065 Japanese hemodialysis patients aged ≥ 18 years were followed up for 10 years. The primary outcomes were all-cause and infection-related mortality. The associations between sex and these outcomes were examined using Cox proportional hazards models. During the median follow-up of 8.8 years, 1498 patients died of any cause, 387 of whom died of infection. Compared with men, the multivariable-adjusted hazard ratios (95% confidence interval) for all-cause and infection-related mortality in women were 0.51 (0.45–0.58, P < 0.05) and 0.36 (0.27–0.47, P < 0.05), respectively. These findings remained significant even when propensity score-matching or inverse probability of treatment weighting adjustment methods were employed. Furthermore, even when non-infection-related mortality was considered a competing risk, the infection-related mortality rate in women was still significantly lower than that in men. Regarding all-cause and infection-related deaths, women have a survival advantage over men among Japanese patients undergoing maintenance hemodialysis.
We evaluated whether the time between first respiratory support and intubation of patients receiving invasive mechanical ventilation (IMV) due to COVID-19 was associated with mortality or pulmonary sequelae.
Materials and methods
Prospective cohort of critical COVID-19 patients on IMV. Patients were classified as early intubation if they were intubated within the first 48 h from the first respiratory support or delayed intubation if they were intubated later. Surviving patients were evaluated after hospital discharge.
We included 205 patients (140 with early IMV and 65 with delayed IMV). The median [p25; p75] age was 63 [56.0; 70.0] years, and 74.1% were male. The survival analysis showed a significant increase in the risk of mortality in the delayed group, with an adjusted hazard ratio (HR) of 2.45 (95% CI 1.29–4.65). The continuous predictor time to IMV showed a nonlinear association with the risk of in-hospital mortality. A multivariate mortality model showed that delay of IMV was a factor associated with mortality (HR 2.40; 95% CI 1.42–4.1). During follow-up, patients in the delayed group showed a worse DLCO (mean difference −10.77; 95% CI −18.40 to −3.15), a greater number of affected lobes (+1.51 [95% CI 0.89–2.13]), and a greater TSS (+4.35 [95% CI 2.41–6.27]) on the chest CT scan.
Among critically ill patients with COVID-19 who required IMV, the delay in intubation from the first respiratory support was associated with an increase in hospital mortality and worse pulmonary sequelae during follow-up.
Viral hepatitis E clinically ranges from self-limiting hepatitis to lethal liver failure. Oxidative stress has been shown to mediate hepatic inflammation during HBV-induced liver failure. We investigated whether a biomarker of oxidative stress may be helpful in assessing severity and disease outcomes of patients with HEV-induced liver failure.
Clinical data were obtained from patients with HEV-induced acute viral hepatitis (AVH, n = 30), acute liver failure (ALF, n = 17), and acute-on-chronic liver failure (ACLF, n = 36), as well as from healthy controls (HC, n = 30). The SOD and HMGB1 levels were measured in serum by ELISA. HL-7702 cells were cultured and stimulated by serum from HEV-infected patients or by HMGB1; oxidative status was investigated by CellROX and apoptosis was investigated by flow cytometry.
Patients with HEV-induced liver failure (including ALF and ACLF) showed increased SOD levels compared with HEV-AVH patients and healthy controls. SOD levels > 400 U/mL were associated with a significantly higher risk of mortality in HEV-ALF and HEV-ACLF patients. Serum from HEV-infected patients led to ROS accumulation, HMGB1 secretion, and apoptosis in HL-7702 cells. Antioxidant treatment successfully inhibited HEV-induced HMGB1 secretion, and HMGB1 promoted apoptosis in HL-7702 cells.
HEV infection increases oxidative stress in the pathogenesis of HEV-induced hepatic diseases. Early testing of serum SOD may serve as a predictor of outcomes in both HEV-ALF and HEV-ACLF. Moreover, modulating oxidative stress might be a potential therapeutic strategy for patients with HEV-induced liver failure.
Background: Few studies have characterized electrocardiography (ECG) patterns correlated with left ventricular (LV) systolic dysfunction in patients with non-ST segment elevation acute coronary syndrome (NSTE-ACS).
Objectives: This study aimed to develop ECG pattern-derived scores to predict LV systolic dysfunction in NSTE-ACS patients.
Methods: A total of 466 patients with NSTE-ACS were retrospectively enrolled. LV ejection fraction (LVEF) was assessed by echocardiography within 72 h after the first triage ECG acquisition, with no coronary intervention in between. An ECG score was developed to predict LVEF < 40%. The performance of the LVEF, Global Registry of Acute Coronary Events (GRACE), Thrombolysis in Myocardial Infarction (TIMI), and ECG scores in predicting 24-month all-cause mortality was analyzed. Subgroups with varying LVEF, GRACE, and TIMI scores were stratified by ECG score to identify patients at high risk of mortality.
Results: LVEF < 40% was present in 20% of patients. We developed the PQRST score by multivariate logistic regression, including poor R wave progression, QRS duration > 110 ms, heart rate > 100 beats per min, and ST-segment depression ≥ 1 mm in ≥ 2 contiguous leads, ranging from 0 to 6.5. The score had an area under the curve (AUC) of 0.824 in the derivation cohort and 0.899 in the validation cohort for discriminating LVEF < 40%. A PQRST score ≥ 3 could stratify high-risk patients with LVEF ≥ 40%, GRACE score > 140, or TIMI score ≥ 3 regarding 24-month all-cause mortality.
Conclusions: The PQRST score could predict LVEF < 40% in NSTE-ACS patients and identify patients at high risk of mortality in the subgroups of patients with LVEF ≥ 40%, GRACE score > 140, or TIMI score ≥ 3.
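Like the CRSF above, the PQRST score is applied by summing weights for the ECG findings present and comparing the total to the ≥3 cut-off. The weights in this sketch are hypothetical, chosen only so the total spans 0 to 6.5 as reported; the abstract lists the four component findings but not their individual regression-derived weights.

```python
# HYPOTHETICAL component weights summing to the reported maximum of 6.5;
# the actual PQRST weights come from the multivariate logistic regression
# and are not reproduced in the abstract.
WEIGHTS = {
    "poor_r_wave_progression": 2.0,  # hypothetical
    "qrs_duration_gt_110ms": 1.5,    # hypothetical
    "heart_rate_gt_100bpm": 1.5,     # hypothetical
    "st_depression_2_leads": 1.5,    # hypothetical (>= 1 mm, >= 2 contiguous leads)
}

def pqrst_score(ecg_findings: dict) -> float:
    """Sum the weights of every ECG finding flagged True."""
    return sum(w for finding, w in WEIGHTS.items() if ecg_findings.get(finding))

def high_risk(score: float) -> bool:
    """Apply the published cut-off: PQRST >= 3 flags high-risk patients."""
    return score >= 3

findings = {"qrs_duration_gt_110ms": True, "heart_rate_gt_100bpm": True}
score = pqrst_score(findings)
print(score, high_risk(score))
```

Because all four components are routine triage-ECG readings, such a score can be computed at the bedside before echocardiography is available, which is the practical appeal noted in the conclusions.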