Understanding the complexity of sepsis mortality prediction via rule discovery and analysis: a pilot study

2021 ◽  
Vol 21 (1) ◽  
Author(s):  
Ying Wu ◽  
Shuai Huang ◽  
Xiangyu Chang

Abstract Background Sepsis, defined as life-threatening organ dysfunction caused by a dysregulated host response to infection, has become one of the major causes of death in Intensive Care Units (ICUs). The heterogeneity and complexity of this syndrome mean there is no gold standard for its diagnosis, treatment, and prognosis. Early prediction of in-hospital mortality for sepsis patients is not only meaningful for medical decision making but, more importantly, relates to the well-being of patients. Methods In this paper, a rule discovery and analysis (rule-based) method is used to predict the in-hospital death events of 2,021 ICU patients diagnosed with sepsis using the MIMIC-III database. The method comprises two phases: a rule discovery phase and a rule analysis phase. In the rule discovery phase, the RuleFit method is employed to mine multiple hidden rules that are capable of predicting individual in-hospital death events. In the rule analysis phase, survival analysis and decomposition analysis are carried out to test and justify the risk prediction ability of these rules. By leveraging a subset of these rules, we then establish a prediction model that is both more accurate at the in-hospital death prediction task and more interpretable than most comparable methods. Results In our experiment, RuleFit generates 77 risk prediction rules, and the average area under the curve (AUC) of the prediction model based on 62 of these rules reaches 0.781 (±0.018), which is comparable to or even better than the AUC of existing methods (i.e., commonly used medical scoring systems and benchmark machine learning models). External validation of the prediction power of these 62 rules on another 1,468 ICU sepsis patients not included in MIMIC-III provides further supporting evidence for the superiority of the rule-based method. In addition, we discuss and explain in detail the rules with better risk prediction ability. The Glasgow Coma Scale (GCS), serum potassium, and serum bilirubin are found to be the most important risk factors for predicting patient death. Conclusion Our study demonstrates that, with the rule-based method, we can not only accurately predict in-hospital death events of sepsis patients but also reveal the complex relationships among sepsis-related risk factors through the rules themselves, thereby improving our understanding of the complexity of sepsis and of the sepsis patient population.
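As a concrete illustration of the two-phase rule discovery and selection workflow described above, the sketch below uses scikit-learn only: leaf-membership indicators from a shallow boosted tree ensemble stand in for explicit rules, and an L1-penalised logistic model then selects a sparse subset of them. It is a simplified proxy for RuleFit on synthetic data, not the authors' pipeline, and all names and parameters are illustrative assumptions.

```python
# Simplified RuleFit-style sketch: tree-derived rule indicators + sparse linear selection.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import OneHotEncoder

# Synthetic stand-in for the ICU cohort: 20 numeric features, binary in-hospital death label.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Phase 1 (rule discovery): shallow boosted trees generate candidate rules (here, leaf regions).
gbm = GradientBoostingClassifier(n_estimators=100, max_depth=3, random_state=0)
gbm.fit(X_tr, y_tr)
enc = OneHotEncoder(handle_unknown="ignore")
R_tr = enc.fit_transform(gbm.apply(X_tr)[:, :, 0])   # binary "rule fired" indicator matrix
R_te = enc.transform(gbm.apply(X_te)[:, :, 0])

# Phase 2 (rule analysis/selection): an L1 penalty keeps only the predictive rules.
selector = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
selector.fit(R_tr, y_tr)
n_rules_kept = int(np.sum(selector.coef_ != 0))
auc = roc_auc_score(y_te, selector.predict_proba(R_te)[:, 1])
print(f"rules kept: {n_rules_kept}, hold-out AUC: {auc:.3f}")
```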

BMC Cancer ◽  
2022 ◽  
Vol 22 (1) ◽  
Author(s):  
Michele Sassano ◽  
Marco Mariani ◽  
Gianluigi Quaranta ◽  
Roberta Pastorino ◽  
Stefania Boccia

Abstract Background Risk prediction models incorporating single nucleotide polymorphisms (SNPs) could lead to individualized prevention of colorectal cancer (CRC). However, the added value of incorporating SNPs into models with only traditional risk factors is still not clear. Hence, our primary aim was to summarize the literature on risk prediction models including genetic variants for CRC, while our secondary aim was to evaluate the improvement in discriminatory accuracy when adding SNPs to a prediction model with only traditional risk factors. Methods We conducted a systematic review of prediction models incorporating multiple SNPs for CRC risk prediction. We tested whether a significant trend in the increase of the area under the curve (AUC) according to the number of SNPs could be observed, and estimated the correlation between AUC improvement and the number of SNPs. We estimated the pooled AUC improvement for SNP-enhanced models compared with non-SNP-enhanced models using random-effects meta-analysis, and conducted meta-regression to investigate the association of specific factors with AUC improvement. Results We included 33 studies, 78.79% of which used genetic risk scores to combine genetic data. We found no significant trend in AUC improvement according to the number of SNPs (p for trend = 0.774), and no correlation between the number of SNPs and AUC improvement (p = 0.695). The pooled AUC improvement was 0.040 (95% CI: 0.035, 0.045), and the number of cases in the study and the AUC of the starting model were inversely associated with the AUC improvement obtained when adding SNPs to a prediction model. In addition, models constructed in Asian individuals achieved better AUC improvement with the incorporation of SNPs than those developed among individuals of European ancestry. Conclusions Though not conclusive, our results provide insights into factors influencing the discriminatory accuracy of SNP-enhanced models. Genetic variants might be useful to inform stratified CRC screening in the future, but further research is needed.
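The pooled AUC improvement reported above is the kind of quantity a random-effects meta-analysis produces. The sketch below shows a minimal DerSimonian-Laird pooling of per-study AUC gains; the deltas and standard errors are invented purely for illustration and are not data from this review.

```python
# Minimal DerSimonian-Laird random-effects pooling of per-study AUC improvements.
import numpy as np

delta_auc = np.array([0.02, 0.05, 0.04, 0.03, 0.06])   # hypothetical per-study AUC gains
se = np.array([0.010, 0.015, 0.012, 0.008, 0.020])      # hypothetical standard errors

w_fixed = 1.0 / se**2
pooled_fixed = np.sum(w_fixed * delta_auc) / np.sum(w_fixed)

# Between-study heterogeneity (tau^2) via the DerSimonian-Laird estimator.
q = np.sum(w_fixed * (delta_auc - pooled_fixed) ** 2)
df = len(delta_auc) - 1
c = np.sum(w_fixed) - np.sum(w_fixed**2) / np.sum(w_fixed)
tau2 = max(0.0, (q - df) / c)

# Random-effects weights, pooled estimate, and 95% confidence interval.
w_re = 1.0 / (se**2 + tau2)
pooled_re = np.sum(w_re * delta_auc) / np.sum(w_re)
se_re = np.sqrt(1.0 / np.sum(w_re))
ci = (pooled_re - 1.96 * se_re, pooled_re + 1.96 * se_re)
print(f"pooled delta AUC = {pooled_re:.3f}, 95% CI = ({ci[0]:.3f}, {ci[1]:.3f})")
```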


2019 ◽  
Vol 9 (1) ◽  
Author(s):  
Li-Na Liao ◽  
Tsai-Chung Li ◽  
Chia-Ing Li ◽  
Chiu-Shong Liu ◽  
Wen-Yuan Lin ◽  
...  

Abstract We evaluated whether genetic information could improve risk prediction of diabetic nephropathy (DN) when susceptibility variants are added to a risk prediction model with conventional risk factors in Han Chinese type 2 diabetes patients. A total of 995 (including 246 DN cases) and 519 (including 179 DN cases) type 2 diabetes patients were included in the derivation and validation sets, respectively. A genetic risk score (GRS) was constructed from DN susceptibility variants based on the findings of our previous genome-wide association study. In the derivation set, the areas under the receiver operating characteristic (AUROC) curve (95% CI) for the model with clinical risk factors only, the model with GRS only, and the model with clinical risk factors and GRS were 0.75 (0.72–0.78), 0.64 (0.60–0.68), and 0.78 (0.75–0.81), respectively. In the external validation sample, the AUROC for the model combining conventional risk factors and GRS was 0.70 (0.65–0.74). Additionally, the net reclassification improvement was 9.98% (P = 0.001) when the GRS was added to the prediction model with the set of clinical risk factors. This prediction model confirms the importance of combining the GRS with clinical factors in predicting DN risk and enhances the identification of high-risk individuals for appropriate management of DN and intervention.
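A minimal sketch of this kind of modelling comparison is given below: a weighted genetic risk score is added to a clinical logistic model and the AUROC of both models is compared. All data, variant weights, and effect sizes are simulated for illustration and do not reflect the study's cohort or GWAS results.

```python
# Sketch: does adding a weighted genetic risk score (GRS) to clinical predictors raise the AUROC?
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1500
clinical = rng.normal(size=(n, 4))                 # e.g. standardised age, HbA1c, SBP, diabetes duration
genotypes = rng.binomial(2, 0.3, size=(n, 10))     # risk-allele counts at 10 susceptibility variants
log_or = rng.normal(0.15, 0.05, size=10)           # hypothetical per-allele log odds ratios
grs = genotypes @ log_or                           # weighted genetic risk score

# Simulate the DN outcome so both the clinical factors and the GRS carry signal.
logit = 0.8 * clinical[:, 0] + 0.5 * clinical[:, 1] + 0.6 * grs - 1.5
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))

X_clin = clinical
X_both = np.column_stack([clinical, grs])
idx_tr, idx_te = train_test_split(np.arange(n), test_size=0.3, random_state=0)

auroc = {}
for name, X in {"clinical only": X_clin, "clinical + GRS": X_both}.items():
    model = LogisticRegression().fit(X[idx_tr], y[idx_tr])
    auroc[name] = roc_auc_score(y[idx_te], model.predict_proba(X[idx_te])[:, 1])
print(auroc)
```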


2019 ◽  
Vol 2019 ◽  
pp. 1-11 ◽  
Author(s):  
Xiaona Jia ◽  
Mirza Mansoor Baig ◽  
Farhaan Mirza ◽  
Hamid GholamHosseini

Background and Objective. Current cardiovascular disease (CVD) risk models are typically based on traditional laboratory-based predictors. The objective of this research was to identify key risk factors that affect CVD risk prediction and to develop a 10-year CVD risk prediction model using the identified risk factors. Methods. A Cox proportional hazards regression method was applied to generate the proposed risk model. We used the Framingham Original Cohort dataset of 5,079 men and women aged 30-62 years who had no overt symptoms of CVD at baseline; among the selected cohort, 3,189 had a CVD event. Results. A 10-year CVD risk model based on multiple risk factors (such as age, sex, body mass index (BMI), hypertension, systolic blood pressure (SBP), cigarettes per day, pulse rate, and diabetes) was developed, in which heart rate was identified as one of the novel risk factors. The proposed model achieved good discrimination and calibration ability, with a C-index (receiver operating characteristic, ROC) of 0.71 in the validation dataset. We validated the model both statistically and empirically. Conclusion. The proposed CVD risk prediction model is based on standard risk factors, which could help reduce the cost and time required for clinical/laboratory tests. Healthcare providers, clinicians, and patients can use this tool to estimate an individual's 10-year risk of CVD. Heart rate was incorporated as a novel predictor, which extends the predictive ability of existing risk equations.
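A hedged sketch of this kind of Cox proportional-hazards 10-year risk model, fitted with the lifelines package, is shown below; the DataFrame, column names, and follow-up scale (years) are assumptions for illustration, not the Framingham analysis itself. The 10-year absolute risk for an individual is obtained as 1 − S(10 | covariates).

```python
# Sketch of a Cox proportional-hazards model converted to 10-year absolute risk.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(1)
n = 500
df = pd.DataFrame({
    "age": rng.integers(30, 63, n),
    "sex": rng.integers(0, 2, n),
    "bmi": rng.normal(26, 4, n),
    "sbp": rng.normal(130, 18, n),
    "cigs_per_day": rng.poisson(6, n),
    "pulse_rate": rng.normal(72, 10, n),
    "diabetes": rng.binomial(1, 0.08, n),
    "time_years": rng.exponential(15, n).clip(0.1, 24),   # simulated follow-up time
    "cvd_event": rng.binomial(1, 0.4, n),                 # simulated event indicator
})

cph = CoxPHFitter()
cph.fit(df, duration_col="time_years", event_col="cvd_event")
print("C-index:", round(cph.concordance_index_, 3))

# 10-year absolute risk = 1 - S(10 | covariates) for each individual.
covariates = df.drop(columns=["time_years", "cvd_event"])
surv_10y = cph.predict_survival_function(covariates, times=[10.0])
risk_10y = 1.0 - surv_10y.loc[10.0]
print(risk_10y.head())
```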


2018 ◽  
Author(s):  
Anabela Correia Martins ◽  
Juliana Moreira ◽  
Catarina Silva ◽  
Joana Silva ◽  
Cláudia Tonelo ◽  
...  

BACKGROUND Falls are a major health problem among older adults. The risk of falling can be increased by polypharmacy, vision impairment, high blood pressure, environmental home hazards, fear of falling, and age-related changes in the function of the musculoskeletal and sensory systems. Moreover, individuals who have experienced previous falls are at higher risk. Nevertheless, falls can be prevented by screening for known risk factors. OBJECTIVE The objective of our study was to develop a multifactorial, instrumented screening tool for fall risk, based on the key risk factors for falls, among Portuguese community-dwelling adults aged 50 years or over, and to prospectively validate a risk prediction model for the risk of falling. METHODS This prospective study, following a convenience sampling method, will recruit community-dwelling adults aged 50 years or over who stand and walk independently with or without walking aids, in parish councils, physical therapy clinics, seniors' universities, and other facilities in different regions of continental Portugal. The FallSensing screening tool is a technological solution for fall risk screening that includes software, a pressure platform, and 2 inertial sensors. The screening includes questions about demographic and anthropometric data and health and lifestyle behaviors; a detailed explanation of the procedures for 6 functional tests (grip strength, Timed Up and Go, 30-second sit to stand, step test, modified 4-Stage Balance test, and 10-meter walking speed); and 3 questionnaires covering environmental home hazards, an activity and participation profile related to mobility, and self-efficacy for exercise. RESULTS Enrollment began in June 2016, and we anticipate study completion by the end of 2018. CONCLUSIONS The FallSensing screening tool is a multifactorial, evidence-based assessment that identifies factors contributing to fall risk. Establishing a risk prediction model will allow preventive strategies to be implemented, potentially decreasing the fall rate. REGISTERED REPORT IDENTIFIER RR1-10.2196/10304


2021 ◽  
Author(s):  
Xue Wang ◽  
Xiao-hui Wang

Abstract Objective To investigate the factors influencing venous thromboembolism (VTE) after ovarian cancer surgery and to construct a prediction model for it. Methods A total of 67 patients with ovarian cancer who developed VTE after surgery between October 2008 and June 2020 in the Department of Obstetrics and Gynecology, First Hospital of Lanzhou University, were selected, and a retrospective study was conducted together with 100 patients from the same period in whom the absence of postoperative VTE was confirmed by imaging. The clinical data of the two groups were analyzed and compared, a risk prediction model was established, and an ROC curve was drawn to evaluate the model's predictive performance. Results Univariate analysis showed statistically significant differences in age, menopausal status, hypertension, neoadjuvant chemotherapy, FIGO stage, lymph node metastasis, operation time, and preoperative plasma FIB and D-dimer between the thrombosis group and the non-thrombosis group. Multivariate analysis showed that older age, neoadjuvant chemotherapy, advanced FIGO stage, and high preoperative plasma FIB and D-dimer levels were independent risk factors for VTE after ovarian cancer surgery. A prediction model was constructed from the multivariate regression results: Logit(P) = 0.053 × age + 0.917 × neoadjuvant chemotherapy + 0.956 × FIGO stage + 0.398 × preoperative plasma FIB + 0.531 × preoperative D-dimer − 7.679 (neoadjuvant chemotherapy: yes = 1, no = 0; FIGO stage Ⅰ+Ⅱ = 1, Ⅲ+Ⅳ = 2; age, preoperative plasma FIB, and D-dimer entered as actual values). ROC curve analysis showed that the AUC of the model was 0.773, the sensitivity 74.6%, the specificity 71.0%, and the overall prediction accuracy (78+39)/167 = 0.701. Conclusions Age, neoadjuvant chemotherapy, FIGO stage, and preoperative plasma FIB and D-dimer can be used as reliable indicators to predict the occurrence of postoperative VTE in patients with ovarian cancer. The constructed prediction model has good risk prediction ability and clinical application value.
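The logistic model above can be applied directly. The sketch below evaluates the reported equation for one hypothetical patient; the coefficients and codings are taken from the abstract, while the patient's values and their units are illustrative assumptions.

```python
# Worked example of the reported logistic prediction model for postoperative VTE.
import math

def vte_probability(age, neoadjuvant_chemo, figo_stage_group, fib, d_dimer):
    """Predicted probability of postoperative VTE.

    neoadjuvant_chemo: 1 = yes, 0 = no
    figo_stage_group:  1 = FIGO I/II, 2 = FIGO III/IV
    fib, d_dimer:      preoperative plasma values (same units as in the study)
    """
    logit_p = (0.053 * age + 0.917 * neoadjuvant_chemo + 0.956 * figo_stage_group
               + 0.398 * fib + 0.531 * d_dimer - 7.679)
    return 1.0 / (1.0 + math.exp(-logit_p))

# Hypothetical 65-year-old after neoadjuvant chemotherapy, FIGO III/IV, FIB 4.2, D-dimer 1.8.
print(round(vte_probability(65, 1, 2, 4.2, 1.8), 3))
```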


PLoS Medicine ◽  
2021 ◽  
Vol 18 (1) ◽  
pp. e1003498
Author(s):  
Luanluan Sun ◽  
Lisa Pennells ◽  
Stephen Kaptoge ◽  
Christopher P. Nelson ◽  
Scott C. Ritchie ◽  
...  

Background Polygenic risk scores (PRSs) can stratify populations into cardiovascular disease (CVD) risk groups. We aimed to quantify the potential advantage of adding information on PRSs to conventional risk factors in the primary prevention of CVD. Methods and findings Using data from UK Biobank on 306,654 individuals without a history of CVD and not on lipid-lowering treatments (mean age [SD]: 56.0 [8.0] years; females: 57%; median follow-up: 8.1 years), we calculated measures of risk discrimination and reclassification upon addition of PRSs to risk factors in a conventional risk prediction model (i.e., age, sex, systolic blood pressure, smoking status, history of diabetes, and total and high-density lipoprotein cholesterol). We then modelled the implications of initiating guideline-recommended statin therapy in a primary care setting using incidence rates from 2.1 million individuals from the Clinical Practice Research Datalink. The C-index, a measure of risk discrimination, was 0.710 (95% CI 0.703–0.717) for a CVD prediction model containing conventional risk predictors alone. Addition of information on PRSs increased the C-index by 0.012 (95% CI 0.009–0.015), and resulted in continuous net reclassification improvements of about 10% and 12% in cases and non-cases, respectively. If a PRS were assessed in the entire UK primary care population aged 40–75 years, assuming that statin therapy would be initiated in accordance with the UK National Institute for Health and Care Excellence guidelines (i.e., for persons with a predicted risk of ≥10% and for those with certain other risk factors, such as diabetes, irrespective of their 10-year predicted risk), then it could help prevent 1 additional CVD event for approximately every 5,750 individuals screened. By contrast, targeted assessment only among people at intermediate (i.e., 5% to <10%) 10-year CVD risk could help prevent 1 additional CVD event for approximately every 340 individuals screened. Such a targeted strategy could help prevent 7% more CVD events than conventional risk prediction alone. Potential gains afforded by assessment of PRSs on top of conventional risk factors would be about 1.5-fold greater than those provided by assessment of C-reactive protein, a plasma biomarker included in some risk prediction guidelines. Potential limitations of this study include its restriction to European ancestry participants and a lack of health economic evaluation. Conclusions Our results suggest that addition of PRSs to conventional risk factors can modestly enhance prediction of first-onset CVD and could translate into population health benefits if used at scale.
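A minimal sketch of how the C-index gain from adding a PRS to a conventional linear predictor can be quantified is shown below, using lifelines' concordance_index on simulated survival data; the effect sizes, censoring, and weighting of the PRS are assumptions, not UK Biobank results.

```python
# Sketch: C-index of a conventional risk score with and without a polygenic risk score (PRS).
import numpy as np
from lifelines.utils import concordance_index

rng = np.random.default_rng(7)
n = 5000
conventional_lp = rng.normal(size=n)   # linear predictor from conventional risk factors
prs = rng.normal(size=n)               # standardised polygenic risk score

# Simulate event and censoring times so that both components carry real signal.
hazard = np.exp(0.8 * conventional_lp + 0.3 * prs)
event_time = rng.exponential(1.0 / hazard)
censor_time = rng.exponential(8.0, size=n)
observed_time = np.minimum(event_time, censor_time)
event = (event_time <= censor_time).astype(int)

# concordance_index expects scores where larger = longer survival, so negate the risk score.
c_conventional = concordance_index(observed_time, -conventional_lp, event)
c_with_prs = concordance_index(observed_time, -(conventional_lp + 0.3 * prs), event)
print(f"C-index conventional: {c_conventional:.3f}; with PRS: {c_with_prs:.3f}; "
      f"gain: {c_with_prs - c_conventional:.3f}")
```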


Author(s):  
Masaru Samura ◽  
Naoki Hirose ◽  
Takenori Kurata ◽  
Keisuke Takada ◽  
Fumio Nagumo ◽  
...  

Abstract Background In this study, we investigated the risk factors for daptomycin-associated creatine phosphokinase (CPK) elevation and established a risk score for CPK elevation. Methods Patients who received daptomycin at our hospital were classified into the normal or elevated CPK group based on their peak CPK levels during daptomycin therapy. Univariable and multivariable analyses were performed, and a risk score and prediction model for the incidence probability of CPK elevation were derived from logistic regression analysis. Results The normal and elevated CPK groups included 181 and 17 patients, respectively. Logistic regression analysis revealed that concomitant statin use (odds ratio [OR] 4.45, 95% confidence interval [CI] 1.40–14.47, risk score 4), concomitant antihistamine use (OR 5.66, 95% CI 1.58–20.75, risk score 4), and a trough concentration (Cmin) between 20 and <30 µg/mL (OR 14.48, 95% CI 2.90–87.13, risk score 5) or ≥30.0 µg/mL (OR 24.64, 95% CI 3.21–204.53, risk score 5) were risk factors for daptomycin-associated CPK elevation. The predicted incidence probabilities of CPK elevation were <10% (low risk), 10% to <25% (moderate risk), and ≥25% (high risk) for total risk scores of ≤4, 5–6, and ≥8, respectively. The risk prediction model exhibited a good fit (area under the receiver-operating characteristic curve 0.85, 95% CI 0.74–0.95). Conclusions These results suggest that concomitant use of statins or antihistamines and a Cmin of ≥20 µg/mL are risk factors for daptomycin-associated CPK elevation. Our prediction model might aid in reducing the incidence of daptomycin-associated CPK elevation.
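The scoring rule above maps directly onto a small decision aid. The sketch below totals the published item weights and assigns the corresponding risk band; the weights and band cut-offs follow the abstract, while the handling of a total score of 7 (which falls between the published bands) is an assumption.

```python
# Sketch of the reported daptomycin CPK risk score: item weights and risk bands
# follow the abstract; treating a total score of 7 as "high" is an assumption,
# since the published bands are <=4, 5-6, and >=8.
def daptomycin_cpk_risk(statin: bool, antihistamine: bool, cmin_ug_ml: float):
    score = 0
    if statin:
        score += 4          # concomitant statin use
    if antihistamine:
        score += 4          # concomitant antihistamine use
    if cmin_ug_ml >= 20:
        score += 5          # both Cmin bands (20-<30 and >=30 ug/mL) carry a weight of 5
    if score <= 4:
        band = "low risk (<10% predicted incidence of CPK elevation)"
    elif score <= 6:
        band = "moderate risk (10% to <25%)"
    else:
        band = "high risk (>=25%)"
    return score, band

# Hypothetical patient on a statin with a daptomycin trough of 22 ug/mL.
print(daptomycin_cpk_risk(statin=True, antihistamine=False, cmin_ug_ml=22.0))
```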


Circulation ◽  
2020 ◽  
Vol 142 (Suppl_3) ◽  
Author(s):  
Rachel P Dreyer ◽  
Terrence E Murphy ◽  
Valeria Raparelli ◽  
Sui Tsang ◽  
Gail D'Onofrio ◽  
...  

Introduction: Although readmission over the first year following hospitalization for acute myocardial infarction (AMI) is common among younger adults (18-55 yrs), there is no available risk prediction model for this age group. Existing risk models have been developed in older populations, have modest predictive ability, and exhibit methodological drawbacks. We developed a risk prediction model that considered a broad range of demographic, clinical, and psychosocial factors for readmission within 1 year of hospitalization for AMI among young adults. Methods: Young adults with AMI (18-55 yrs) were enrolled in the prospective observational VIRGO study (2008-2012) of 3,572 patients. Data were obtained from medical record abstraction, interviews, and adjudicated hospitalization records. The outcome was all-cause readmission within 1 year. We used a two-stage selection process (LASSO followed by Bayesian Model Averaging) to develop a risk model. Results: The median age was 48 years (IQR: 44-52), 67.1% were women, and 20.1% were non-white or Hispanic. Within 1 year, 906 patients (25.3%) were readmitted. Patients who were readmitted were more likely to be female and Black, and had a clustering of adverse risk factors and comorbidities. From the 61 variables originally considered, the final multivariable model of readmission within 1 year of discharge consisted of 14 predictors (Figure). The model was well calibrated (Hosmer-Lemeshow P > 0.05) with moderate discrimination (C-statistic over 33 imputations: 0.69 in the development cohort). Conclusion: Adverse clinical risk factors such as diabetes, hypertension, and prior AMI, but also female sex, access to specialist care, and major depression, were associated with a higher risk of readmission at 1 year post AMI. This information can inform the development of interventions to reduce readmissions in young patients with AMI.
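A minimal sketch of the first (LASSO) stage of the two-stage selection described above is given below, using cross-validated L1-penalised logistic regression on synthetic data; the feature names are placeholders, and the Bayesian Model Averaging second stage is not shown.

```python
# Sketch of LASSO-based candidate-predictor selection for a binary readmission outcome.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegressionCV

X, y = make_classification(n_samples=3000, n_features=61, n_informative=14, random_state=0)
feature_names = [f"x{i}" for i in range(X.shape[1])]   # placeholders for the 61 candidate variables

lasso = LogisticRegressionCV(penalty="l1", solver="saga", Cs=20, cv=5,
                             scoring="roc_auc", max_iter=5000, random_state=0)
lasso.fit(X, y)

selected = [name for name, coef in zip(feature_names, lasso.coef_.ravel()) if coef != 0]
print(f"{len(selected)} candidate predictors retained for the BMA stage:", selected[:10], "...")
```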


2015 ◽  
Vol 16 (8) ◽  
pp. 786-791 ◽  
Author(s):  
Ali Pirdavani ◽  
Ellen De Pauw ◽  
Tom Brijs ◽  
Stijn Daniels ◽  
Maarten Magis ◽  
...  
