Machine learning-based diagnosis for disseminated intravascular coagulation (DIC): Development, external validation, and comparison to scoring systems

PLoS ONE ◽  
2018 ◽  
Vol 13 (5) ◽  
pp. e0195861 ◽  
Author(s):  
Jihoon G. Yoon ◽  
JoonNyung Heo ◽  
Minkyu Kim ◽  
Yu Jin Park ◽  
Min Hyuk Choi ◽  
...  
2019 ◽  
Vol 25 ◽  
pp. 107602961983505 ◽  
Author(s):  
Kazuma Yamakawa ◽  
Yutaka Umemura ◽  
Shuhei Murao ◽  
Mineji Hayakawa ◽  
Satoshi Fujimi

Optimizing diagnostic criteria to identify patients likely to benefit from anticoagulants is warranted. A cutoff of 5 points for the International Society on Thrombosis and Haemostasis overt disseminated intravascular coagulation (DIC) scoring system was set in the original article, but its validity has not been evaluated. This study aimed to explore the optimal cutoff points of DIC scoring systems and to evaluate the effectiveness of early intervention with anticoagulants. We used a nationwide retrospective registry of consecutive adult patients with sepsis in Japan to develop simulated survival data, assuming anticoagulant therapy was administered strictly according to each cutoff point. Estimated treatment effects of anticoagulants on in-hospital mortality and bleeding risk were calculated by logistic regression with inverse probability of treatment weighting (IPTW) based on propensity scores. Of 2663 patients with sepsis, 1247 received anticoagulants and 1416 did not. The simulation model showed no increase in estimated mortality for cutoff points of 0 to 3, whereas mortality increased linearly for cutoff points of 4 or higher. Estimated bleeding risk tended to decrease as the cutoff point increased. The optimal cutoff for initiating anticoagulant therapy may therefore be 3 points, minimizing mortality while keeping bleeding complications acceptable. These findings suggest a beneficial association between early anticoagulant therapy and survival in patients with sepsis-induced DIC. Current cutoff points of DIC scoring systems may be suboptimal for determining when to start anticoagulant therapy and may delay its initiation.
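
As a rough illustration of the estimation step described above, the sketch below pairs a propensity-score model with IPTW and a weighted logistic regression. The synthetic data and variable names are assumptions for illustration, not the registry analysis itself.

```python
# Hypothetical sketch: propensity-score IPTW + weighted logistic regression.
import numpy as np
import statsmodels.api as sm
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2663                                  # cohort size from the abstract
X = rng.normal(size=(n, 5))               # stand-in baseline covariates
treated = rng.integers(0, 2, size=n)      # 1 = received anticoagulants
died = rng.integers(0, 2, size=n)         # 1 = in-hospital death

# Step 1: propensity score = P(treatment | baseline covariates).
ps = LogisticRegression(max_iter=1000).fit(X, treated).predict_proba(X)[:, 1]

# Step 2: stabilized inverse-probability-of-treatment weights.
p_treat = treated.mean()
weights = np.where(treated == 1, p_treat / ps, (1 - p_treat) / (1 - ps))

# Step 3: weighted logistic regression of mortality on treatment; the
# coefficient on the treatment column is the estimated effect (log odds).
design = sm.add_constant(treated.astype(float))
fit = sm.GLM(died, design, family=sm.families.Binomial(),
             freq_weights=weights).fit()
print(fit.params)
```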


2022 ◽  
Vol 8 ◽  
Author(s):  
Jinzhang Li ◽  
Ming Gong ◽  
Yashutosh Joshi ◽  
Lizhong Sun ◽  
Lianjun Huang ◽  
...  

Background: Acute renal failure (ARF) is the most common major complication following cardiac surgery for acute aortic syndrome (AAS) and worsens the postoperative prognosis. Our aim was to establish a machine learning prediction model for ARF occurrence in AAS patients.
Methods: We included AAS patient data from nine medical centers (n = 1,637) and analyzed the incidence of ARF and the risk factors for postoperative ARF. We used data from six medical centers to compare the performance of four machine learning models and performed internal validation to identify AAS patients who developed postoperative ARF. The area under the curve (AUC) of the receiver operating characteristic (ROC) curve was used to compare the performance of the predictive models. We compared the performance of the optimal machine learning prediction model with that of traditional prediction models. Data from three medical centers were used for external validation.
Results: The eXtreme Gradient Boosting (XGBoost) algorithm performed best in the internal validation process (AUC = 0.82), outperforming both the logistic regression (LR) prediction model (AUC = 0.77, p < 0.001) and the traditional scoring systems. Upon external validation, the XGBoost prediction model (AUC = 0.81) also performed better than both the LR prediction model (AUC = 0.75, p = 0.03) and the traditional scoring systems. We created an online application based on the XGBoost prediction model.
Conclusions: We have developed a machine learning model with better predictive performance for postoperative ARF than traditional LR prediction models and existing risk scoring systems. The model can provide early warnings when high-risk patients are identified, enabling clinicians to take prompt measures.
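
A minimal sketch of the comparison this abstract describes, XGBoost versus logistic regression scored by ROC AUC on a held-out validation split, might look as follows. The synthetic data, feature count, and hyperparameters are placeholder assumptions, not the study's pipeline.

```python
# Hypothetical sketch: XGBoost vs. logistic regression, compared by ROC AUC.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

# Stand-in cohort (n = 1,637 as in the abstract) with 20 synthetic features.
X, y = make_classification(n_samples=1637, n_features=20, random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.3,
                                            stratify=y, random_state=0)

xgb = XGBClassifier(n_estimators=200, max_depth=4, learning_rate=0.1,
                    eval_metric="logloss").fit(X_tr, y_tr)
lr = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

auc_xgb = roc_auc_score(y_val, xgb.predict_proba(X_val)[:, 1])
auc_lr = roc_auc_score(y_val, lr.predict_proba(X_val)[:, 1])
print(f"XGBoost AUC: {auc_xgb:.2f}  |  LR AUC: {auc_lr:.2f}")
```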


Author(s):  
Lusha W. Liang ◽  
Michael A. Fifer ◽  
Kohei Hasegawa ◽  
Mathew S. Maurer ◽  
Muredach P. Reilly ◽  
...  

Background: Genetic testing can determine family screening strategies and has prognostic and diagnostic value in hypertrophic cardiomyopathy (HCM). However, it can also pose a significant psychosocial burden. Conventional scoring systems offer modest ability to predict genotype positivity. The aim of our study was to develop a novel prediction model for genotype positivity in patients with HCM by applying machine learning (ML) algorithms.
Methods: We constructed three ML models using readily available clinical and cardiac imaging data of 102 patients from Columbia University with HCM who had undergone genetic testing (the training set). We validated model performance on 76 patients with HCM from Massachusetts General Hospital (the test set). Within the test set, we compared the areas under the receiver operating characteristic curves (AUCs) of the ML models against the AUCs generated by the Toronto HCM Genotype Score ("the Toronto score") and the Mayo HCM Genotype Predictor ("the Mayo score") using the DeLong test and net reclassification improvement (NRI).
Results: Overall, 63 of the 178 patients (35%) were genotype positive. The random forest ML model developed in the training set demonstrated an AUC of 0.92 (95% CI 0.85-0.99) in predicting genotype positivity in the test set, significantly outperforming the Toronto score (AUC 0.77, 95% CI 0.65-0.90, p=0.004; NRI: p<0.001) and the Mayo score (AUC 0.79, 95% CI 0.67-0.92, p=0.01; NRI: p=0.001). The gradient boosted decision tree ML model also achieved significant NRI over the Toronto score (p<0.001) and the Mayo score (p=0.03), with an AUC of 0.87 (95% CI 0.75-0.99). Compared with the Toronto and Mayo scores, all three ML models had higher sensitivity, positive predictive value, and negative predictive value.
Conclusions: Our ML models demonstrated a superior ability to predict genotype positivity in patients with HCM compared with conventional scoring systems in an external validation test set.
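
For orientation, a random-forest classifier evaluated on a held-out test set with the metrics reported above (AUC, sensitivity, PPV, NPV) can be sketched as below. The synthetic data, features, and 0.5 probability threshold are illustrative assumptions, not the authors' pipeline.

```python
# Hypothetical sketch: random-forest genotype-positivity classifier.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import confusion_matrix, roc_auc_score

# Stand-ins for the 102-patient training set and 76-patient test set,
# with roughly 35% genotype-positive cases as in the abstract.
X, y = make_classification(n_samples=178, n_features=12, weights=[0.65],
                           random_state=1)
X_train, y_train = X[:102], y[:102]
X_test, y_test = X[102:], y[102:]

rf = RandomForestClassifier(n_estimators=500, random_state=1)
prob = rf.fit(X_train, y_train).predict_proba(X_test)[:, 1]
print(f"test AUC: {roc_auc_score(y_test, prob):.2f}")

# Sensitivity, PPV, and NPV at an assumed 0.5 probability threshold.
tn, fp, fn, tp = confusion_matrix(y_test, (prob >= 0.5).astype(int)).ravel()
print(f"sensitivity={tp / (tp + fn):.2f}  "
      f"PPV={tp / (tp + fp):.2f}  NPV={tn / (tn + fn):.2f}")
```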


2020 ◽  
Vol 10 (1) ◽  
Author(s):  
Julie Helms ◽  
François Severac ◽  
Hamid Merdji ◽  
Raphaël Clere-Jehl ◽  
...  

2020 ◽  
Vol 9 (7) ◽  
pp. 2113
Author(s):  
Daisuke Hasegawa ◽  
Kazuma Yamakawa ◽  
Kazuki Nishida ◽  
Naoki Okada ◽  
Shuhei Murao ◽  
...  

Sepsis-induced coagulopathy carries a poor prognosis; however, there is no established tool for predicting its progression. We aimed to create predictive models for coagulopathy progression using machine-learning techniques and to compare their predictive accuracy with that of a conventional technique. A post-hoc subgroup analysis was conducted based on the Japan Septic Disseminated Intravascular Coagulation retrospective study. We used the International Society on Thrombosis and Haemostasis disseminated intravascular coagulation (DIC) score to calculate the ΔDIC score as (DIC score on Day 3) − (DIC score on Day 1). The primary outcome was the accuracy of predicting whether ΔDIC was more than 0, that is, whether the DIC score worsened. The secondary outcome was the gap between predicted and actual ΔDIC (predicted ΔDIC − real ΔDIC). We used machine-learning methods, namely random forests (RF), support vector machines (SVM), and neural networks (NN), and compared their predictive accuracy with that of multiple linear regression. In total, 1017 patients were included. For DIC progression, the predictive accuracy of the multiple linear regression, RF, SVM, and NN models was 63.7%, 67.0%, 64.4%, and 59.8%, respectively. The gap between predicted and real ΔDIC was 2.05, 1.54, 2.24, and 1.77 for the multiple linear regression, RF, SVM, and NN models, respectively. RF had the highest predictive accuracy.
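
The comparison described here, four regression models predicting ΔDIC scored on both progression accuracy and the gap between predicted and real ΔDIC, can be sketched as follows. The synthetic features and model settings are assumptions for illustration, not the registry analysis.

```python
# Hypothetical sketch: four regressors predicting a continuous ΔDIC target.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.svm import SVR

rng = np.random.default_rng(42)
X = rng.normal(size=(1017, 8))                    # stand-in Day-1 features
delta_dic = X @ rng.normal(size=8) + rng.normal(size=1017)  # stand-in ΔDIC

X_tr, X_te, y_tr, y_te = train_test_split(X, delta_dic, random_state=42)
models = {
    "linear regression": LinearRegression(),
    "RF": RandomForestRegressor(random_state=42),
    "SVM": SVR(),
    "NN": MLPRegressor(max_iter=2000, random_state=42),
}
for name, model in models.items():
    pred = model.fit(X_tr, y_tr).predict(X_te)
    sign_acc = ((pred > 0) == (y_te > 0)).mean()   # DIC-progression accuracy
    mae = mean_absolute_error(y_te, pred)          # |predicted - real| ΔDIC
    print(f"{name}: progression accuracy={sign_acc:.1%}, MAE={mae:.2f}")
```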


2019 ◽  
Vol 50 ◽  
pp. 23-30 ◽  
Author(s):  
Shinjiro Saito ◽  
Shigehiko Uchino ◽  
Mineji Hayakawa ◽  
Kazuma Yamakawa ◽  
Daisuke Kudo ◽  
...  
