Machine Learning Can Predict Deaths in Patients with Diverticulitis During Their Hospital Stay

Author(s):  
Fahad Shabbir Ahmed ◽  
Raza-Ul-Mustafa ◽  
Liaqat Ali ◽  
Imad-ud-Deen ◽  
Tahir Hameed ◽  
...  

Abstract Introduction Diverticulitis is the inflammation and/or infection of small pouches, known as diverticula, that develop along the walls of the intestines. Patients with diverticulitis face mortality as high as 17% with abscess formation and 45% with secondary perforation; patients admitted to inpatient services are at particular risk of complications, including death. We developed a deep neural network (DNN)-based machine learning framework that predicts premature death in patients admitted with diverticulitis, using electronic health records (EHR) first to identify statistically significant risk factors and then to train the deep neural network. Methods Our proposed framework (Deep FLAIM) is a two-phase hybrid framework. In the first phase, we used the National Inpatient Sample 2014 dataset to extract patients with diverticulitis, without and with hemorrhage (ICD-9 codes 562.11 and 562.13, respectively), and analyzed these patients for different risk factors using univariate and multivariate analyses to generate hazard ratios and rank the diverticulitis-associated risk factors. In the second phase, we applied a deep neural network model to predict death. Additionally, we compared the performance of the proposed system against popular machine learning models such as logistic regression (LR). Results A total of 128,258 patients were included. We tested 64 variables using univariate and multivariate (age, gender, and ethnicity) Cox regression; only 16 factors were statistically significant in both univariate and multivariate analyses.
The mortality prediction of our DNN outperformed conventional machine learning (logistic regression) in terms of AUC (0.977 vs 0.904), training accuracy (0.931 vs 0.900), testing accuracy (0.930 vs 0.910), sensitivity (90% vs 88%), and specificity (95% vs 93%). Conclusion Our Deep FLAIM framework can predict mortality in patients admitted to the hospital with diverticulitis with high accuracy. The proposed framework can be expanded to predict premature death for other diseases.
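The two-phase "screen, then classify" idea above can be sketched as a pipeline: a univariate statistical filter keeps only significant risk factors, and a small neural network is trained on the survivors. This is a minimal illustration, not the authors' implementation: the data are synthetic, the F-test stands in for Cox regression, and the feature counts (64 candidates, 16 kept) merely echo the abstract.

```python
# Hedged sketch of a two-phase screen-then-classify pipeline in the
# spirit of Deep FLAIM. Phase 1: univariate significance screen.
# Phase 2: neural network on the surviving features.
# All data and thresholds here are synthetic placeholders.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 64))          # 64 candidate risk factors
y = (X[:, 0] + X[:, 1] + rng.normal(size=500) > 0).astype(int)

pipeline = make_pipeline(
    SelectKBest(f_classif, k=16),       # phase 1: keep 16 significant factors
    MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0),
)
pipeline.fit(X, y)
preds = pipeline.predict(X)
print(preds.shape)  # (500,)
```

In the paper the screening step is Cox regression with hazard ratios; swapping in a survival model would change only phase 1 of this sketch.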

2020 ◽  
Vol 20 (1) ◽  
Author(s):  
Divneet Mandair ◽  
Premanand Tiwari ◽  
Steven Simon ◽  
Kathryn L. Colborn ◽  
Michael A. Rosenberg

Abstract Background With cardiovascular disease increasing, substantial research has focused on the development of prediction tools. We compare deep learning and machine learning models to a baseline logistic regression using only ‘known’ risk factors in predicting incident myocardial infarction (MI) from harmonized EHR data. Methods Large-scale case-control study with an outcome of 6-month incident MI, conducted using the top 800 (from an initial 52,000) procedures, diagnoses, and medications within the UCHealth system, harmonized to the Observational Medical Outcomes Partnership common data model, performed on 2.27 million patients. We compared several over- and under-sampling techniques to address the imbalance in the dataset. We compared regularized logistic regression, random forest, gradient boosting machines, and shallow and deep neural networks. A baseline model for comparison was a logistic regression using a limited set of ‘known’ risk factors for MI. Hyper-parameters were identified using 10-fold cross-validation. Results 20,591 patients were diagnosed with MI compared with 2.25 million who were not. A deep neural network with random undersampling provided superior classification compared with other methods. However, the benefit of the deep neural network was only moderate, showing an F1 score of 0.092 and AUC of 0.835, compared to a logistic regression model using only ‘known’ risk factors. Calibration for all models was poor despite adequate discrimination, due to overfitting from the low frequency of the event of interest. Conclusions Our study suggests that DNNs may not offer substantial benefit when trained on harmonized data, compared to traditional methods using established risk factors for MI.
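The random undersampling that paired best with the deep network above is simple to state: subsample majority-class rows down to the minority-class count before training. A minimal sketch, on synthetic data with a rare event rate like the MI cohort's:

```python
# Minimal random-undersampling sketch (one of the resampling strategies
# the study compares). Each class is subsampled to the size of the
# smallest class; the data below are synthetic stand-ins.
import numpy as np

def undersample(X, y, seed=0):
    rng = np.random.default_rng(seed)
    classes, counts = np.unique(y, return_counts=True)
    n_min = counts.min()
    keep = np.concatenate([
        rng.choice(np.flatnonzero(y == c), size=n_min, replace=False)
        for c in classes
    ])
    return X[keep], y[keep]

X = np.arange(2000).reshape(1000, 2)
y = np.array([0] * 990 + [1] * 10)      # 1% event rate, like rare MI
Xb, yb = undersample(X, y)
print(np.bincount(yb))  # [10 10]
```

Discarding majority-class data this way balances the classes but distorts the base rate, which is one reason calibration can suffer even when discrimination is adequate.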


Mathematics ◽  
2020 ◽  
Vol 8 (9) ◽  
pp. 1620 ◽  
Author(s):  
Ganjar Alfian ◽  
Muhammad Syafrudin ◽  
Norma Latif Fitriyani ◽  
Muhammad Anshari ◽  
Pavel Stasa ◽  
...  

Extracting information from individual risk factors provides an effective way to identify diabetes risk and associated complications, such as retinopathy, at an early stage. Deep learning and machine learning algorithms are being utilized to extract information from individual risk factors to improve early-stage diagnosis. This study proposes a deep neural network (DNN) combined with recursive feature elimination (RFE) to provide early prediction of diabetic retinopathy (DR) based on individual risk factors. The proposed model uses RFE to remove irrelevant features and a DNN to classify the disease. A publicly available dataset was used to evaluate early-stage DR prediction for the proposed model and several current best-practice models. The proposed model achieved 82.033% prediction accuracy, significantly better than the current models, showing that important risk factors for retinopathy can be successfully extracted using RFE. In addition, to evaluate the proposed prediction model's robustness and generalization, we compared it with other machine learning models and datasets (nephropathy and hypertension–diabetes). The proposed prediction model will help improve early-stage retinopathy diagnosis based on individual risk factors.
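Recursive feature elimination, the screening step named above, repeatedly fits a model and drops the weakest features until a target count remains. A hedged sketch on synthetic data (the paper pairs RFE with a DNN; here a linear model drives the elimination, since RFE needs an estimator exposing coefficients or importances):

```python
# Sketch of recursive feature elimination ahead of a classifier.
# Synthetic data; the feature counts are illustrative, not the paper's.
import numpy as np
from sklearn.feature_selection import RFE
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 20))
y = (X[:, 2] - X[:, 7] > 0).astype(int)

# RFE requires an estimator with coef_ or feature_importances_, so a
# logistic regression ranks features here; the retained subset would
# then be fed to the downstream classifier (a DNN in the study).
rfe = RFE(LogisticRegression(max_iter=1000), n_features_to_select=8)
rfe.fit(X, y)
print(rfe.support_.sum())  # 8
```

`rfe.support_` is a boolean mask over the original columns, so the reduced design matrix is simply `X[:, rfe.support_]`.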


Author(s):  
Nuriel S. Mor ◽  
Kathryn L. Dardeck

Forecasting the risk for mental disorders from early ecological information holds benefits for the individual and society. Computational models used in psychological research, however, are barriers to making such predictions at the individual level. Preexposure identification of people at risk for posttraumatic stress disorder (PTSD), such as future soldiers, humanitarian aid workers, and journalists likely to be exposed to traumatic events, is important for guiding decisions about exposure. The purpose of the present study was to evaluate a machine learning approach to identify individuals at risk for PTSD using readily collected ecological risk factors, which makes scanning a large population possible. An exhaustive literature review was conducted to identify multiple ecological risk factors for PTSD. A questionnaire assessing these factors was designed and distributed among residents of southern Israel who have been exposed to terror attacks; data were collected from 1,290 residents. A neural network classification algorithm was used to predict the likelihood of a PTSD diagnosis. Assessed by cross-validation, the prediction of PTSD diagnostic status yielded a mean area under the receiver operating characteristic curve of .91 (F score = .83). This study is a novel attempt to implement a neural network classification algorithm using ecological risk factors to predict potential risk for PTSD. Preexposure identification of future soldiers and other individuals at risk for PTSD from a large population of candidates is feasible using machine learning methods and readily collected ecological factors.
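The AUC of .91 reported above has a direct probabilistic reading: the chance that a randomly chosen positive case receives a higher score than a randomly chosen negative one. A self-contained sketch of that rank-based computation, on toy scores:

```python
# AUC as the probability a random positive outranks a random negative
# (ties count half). Toy scores and labels only, for illustration.
def auc(scores, labels):
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

print(auc([0.9, 0.8, 0.3, 0.1], [1, 1, 0, 0]))  # 1.0  (perfect ranking)
print(auc([0.9, 0.2, 0.8, 0.1], [1, 1, 0, 0]))  # 0.75 (one inversion)
```

This pairwise form is equivalent to the Mann-Whitney U statistic normalized by the number of positive-negative pairs.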


PLoS ONE ◽  
2021 ◽  
Vol 16 (4) ◽  
pp. e0249856
Author(s):  
Scott R. Shuldiner ◽  
Michael V. Boland ◽  
Pradeep Y. Ramulu ◽  
C. Gustavo De Moraes ◽  
Tobias Elze ◽  
...  

Objective To assess whether machine learning algorithms (MLA) can predict eyes that will undergo rapid glaucoma progression based on an initial visual field (VF) test. Design Retrospective analysis of longitudinal data. Subjects 175,786 VFs (22,925 initial VFs) from 14,217 patients who completed ≥5 reliable VFs at academic glaucoma centers were included. Methods Summary measures and reliability metrics from the initial VF and age were used to train MLA designed to predict the likelihood of rapid progression. The neural network model was additionally trained with point-wise threshold data alongside the summary measures, reliability metrics, and age. 80% of eyes were used as a training set and 20% as a test set. MLA test-set performance was assessed using the area under the receiver operating curve (AUC). Performance of models trained on initial VF data alone was compared to that of models trained on data from the first two VFs. Main outcome measures Accuracy in predicting future rapid progression, defined as MD worsening by more than 1 dB/year. Results 1,968 eyes (8.6%) underwent rapid progression. The support vector machine model (AUC 0.72 [95% CI 0.70–0.75]) most accurately predicted rapid progression when trained on initial VF data. Artificial neural network, random forest, logistic regression, and naïve Bayes classifiers produced AUCs of 0.72, 0.70, 0.69, and 0.68, respectively. Models trained on data from the first two VFs performed no better than the top models trained on the initial VF alone. Based on the odds ratio (OR) from logistic regression and variable importance plots from the random forest model, older age (OR: 1.41 per 10-year increment [95% CI: 1.34 to 1.08]) and higher pattern standard deviation (OR: 1.31 per 5-dB increment [95% CI: 1.18 to 1.46]) were the variables in the initial VF most strongly associated with rapid progression.
Conclusions MLA can be used to predict eyes at risk for rapid progression with modest accuracy based on an initial VF test. Incorporating additional clinical data into the current model may offer opportunities to predict the patients most likely to progress rapidly with even greater accuracy.
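Per-increment odds ratios like "1.41 per 10-year increment" come directly from a logistic-regression coefficient: OR = exp(beta × step). A one-line illustration, where the coefficient is a made-up value chosen only to reproduce the quoted OR:

```python
# Reading a per-increment odds ratio off a logistic-regression
# coefficient: OR = exp(beta * increment).
# beta_age is hypothetical, chosen for illustration.
import math

beta_age = 0.0344            # hypothetical log-odds change per year of age
or_per_10y = math.exp(beta_age * 10)
print(round(or_per_10y, 2))  # 1.41
```

The same identity explains the 5-dB increment for pattern standard deviation: rescaling the increment rescales the exponent, not the underlying coefficient.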


2020 ◽  
Author(s):  
Maleeha Naseem ◽  
Hajra Arshad ◽  
Syeda Amrah Hashimi ◽  
Furqan Irfan ◽  
Fahad Shabbir Ahmed

Abstract Background The second wave of the COVID-19 pandemic is anticipated to be worse than the first and will strain healthcare systems even more during the winter months. Our aim was to develop a machine learning-based model to predict mortality using the deep learning Neo-V framework. We hypothesized that this novel machine learning approach could be applied to COVID-19 patients to predict mortality with high accuracy. Methods The current Deep-Neo-V model builds on our previous statistically rigorous machine learning framework (the Fahad-Liaqat-Ahmad Intensive Machine, FLAIM, framework): it evaluates statistically significant risk factors, generates new combined variables, and then supplies these risk factors to a deep neural network to predict mortality in RT-PCR-positive COVID-19 patients in the inpatient setting. We analyzed adult patients (≥18 years) admitted to the Aga Khan University Hospital, Pakistan, with a working diagnosis of COVID-19 infection (n=1228). We excluded patients who were negative for COVID-19 on RT-PCR or had incomplete or missing health records. First-phase selection of risk factors used univariate and multivariate Cox regression analyses. In the second phase, we generated new variables and tested those statistically significant for mortality, and in the third and final phase we applied deep neural networks and other traditional machine learning models, such as decision tree and k-nearest neighbor models. Results A total of 1228 cases were diagnosed as COVID-19 infection; we excluded 14 patients per the exclusion criteria, and 1214 patients were analyzed. Several clinical and laboratory-based variables were statistically significant in both univariate and multivariate analyses, while others were not.
The most significant were septic shock (hazard ratio [HR], 4.30; 95% confidence interval [CI], 2.91-6.37), supportive treatment (HR, 3.51; 95% CI, 2.01-6.14), abnormal international normalized ratio (INR) (HR, 3.24; 95% CI, 2.28-4.63), admission to the intensive care unit (ICU) (HR, 3.24; 95% CI, 2.22-4.74), treatment with invasive ventilation (HR, 3.21; 95% CI, 2.15-4.79), and lymphocytic derangement on laboratory testing (HR, 2.79; 95% CI, 1.6-4.86). Our DNN (Neo-V) model outperformed all conventional machine learning models, with test-set accuracy of 99.53%, sensitivity of 89.87%, specificity of 95.63%, positive predictive value of 50.00%, negative predictive value of 91.05%, and area under the receiver-operator curve of 88.5. Conclusion Our novel Deep-Neo-V model outperformed all other machine learning models. The model is easy to implement, user-friendly, and highly accurate.
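The sensitivity, specificity, PPV, and NPV quoted above all derive from the same 2x2 confusion matrix. A small self-contained helper, with invented counts purely for illustration (not the study's data):

```python
# Diagnostic metrics from a 2x2 confusion matrix. The counts below are
# invented for illustration and do not reproduce the study's results.
def diagnostics(tp, fp, fn, tn):
    return {
        "sensitivity": tp / (tp + fn),  # true positive rate
        "specificity": tn / (tn + fp),  # true negative rate
        "ppv": tp / (tp + fp),          # positive predictive value
        "npv": tn / (tn + fn),          # negative predictive value
    }

m = diagnostics(tp=45, fp=5, fn=5, tn=95)
print(m["sensitivity"], m["specificity"])  # 0.9 0.95
```

Note that PPV and NPV, unlike sensitivity and specificity, depend on the event prevalence in the cohort, which is why a model can report high sensitivity alongside a PPV of 50% when the event is rare.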


2021 ◽  
Vol 11 (6) ◽  
pp. 541
Author(s):  
Jin-Woo Kim ◽  
Jeong Yee ◽  
Sang-Hyeon Oh ◽  
Sun-Hyun Kim ◽  
Sun-Jong Kim ◽  
...  

Objective: This nested case–control study aimed to investigate the effects of VEGFA polymorphisms on the development of bisphosphonate-related osteonecrosis of the jaw (BRONJ) in women with osteoporosis. Methods: Eleven single nucleotide polymorphisms (SNPs) of the VEGFA gene were assessed in a total of 125 patients. Logistic regression was performed for multivariable analysis. Machine learning algorithms, namely fivefold cross-validated multivariate logistic regression, elastic net, random forest, and support vector machine, were developed to predict risk factors for BRONJ occurrence. Area under the receiver-operating curve (AUROC) analysis was conducted to assess clinical performance. Results: VEGFA rs881858 was significantly associated with BRONJ development. The odds of BRONJ development were 6.45 times (95% CI, 1.69–24.65) higher among carriers of the wild-type rs881858 allele compared with variant homozygote carriers after adjusting for covariates. Additionally, variant homozygote (GG) carriers of rs10434 had higher odds than wild-type allele carriers (OR, 3.16). Age ≥ 65 years (OR, 16.05) and bisphosphonate exposure ≥ 36 months (OR, 3.67) were also significant risk factors for BRONJ occurrence. AUROC values were higher than 0.78 for all machine learning methods employed in this study. Conclusion: Our study showed that BRONJ occurrence was associated with VEGFA polymorphisms in osteoporotic women.


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Chayakrit Krittanawong ◽  
Hafeez Ul Hassan Virk ◽  
Anirudh Kumar ◽  
Mehmet Aydar ◽  
Zhen Wang ◽  
...  

Abstract Machine learning (ML) and deep learning (DL) can successfully predict high-prevalence events in very large databases (big data), but the value of this methodology for risk prediction in smaller cohorts with uncommon diseases and infrequent events is uncertain. The clinical course of spontaneous coronary artery dissection (SCAD) is variable, and no reliable methods are available to predict mortality. Based on the hypothesis that ML and DL techniques could enhance the identification of patients at risk, we applied a deep neural network to information available in electronic health records (EHR) to predict in-hospital mortality in patients with SCAD. We extracted patient data from the EHR of an extensive urban health system and applied several ML and DL models using candidate clinical variables potentially associated with mortality. We partitioned the data into training and evaluation sets with cross-validation. We estimated model performance based on the area under the receiver-operator characteristic curve (AUC) and balanced accuracy. As sensitivity analyses, we examined results limited to cases with complete clinical information available. We identified 375 SCAD patients, among whom mortality during the index hospitalization was 11.5%. The best-performing DL algorithm identified in-hospital mortality with an AUC of 0.98 (95% CI 0.97–0.99), compared to other ML models (P < 0.0001). For prediction of mortality using ML models in patients with SCAD, the AUC ranged from 0.50 with the random forest method (95% CI 0.41–0.58) to 0.95 with the AdaBoost model (95% CI 0.93–0.96), with intermediate performance using logistic regression, decision tree, support vector machine, K-nearest neighbors, and extreme gradient boosting methods.
A deep neural network model was associated with higher predictive accuracy and discriminative power than logistic regression or other ML models for identifying patients with acute coronary syndrome (ACS) due to SCAD who are prone to early mortality.


Circulation ◽  
2019 ◽  
Vol 140 (Suppl_2) ◽  
Author(s):  
Tomohisa Seki ◽  
Tomoyoshi Tamura ◽  
Kazuhiko Ohe ◽  
Masaru Suzuki

Background: Outcome prediction for patients with out-of-hospital cardiac arrest (OHCA) using prehospital information has been one of the major challenges in resuscitation medicine. Recently, machine learning techniques have been shown to be highly effective in predicting outcomes using clinical registries. In this study, we aimed to establish a prediction model for outcomes of OHCA of presumed cardiac cause using machine learning techniques. Methods: We analyzed data from the All-Japan Utstein Registry of the Fire and Disaster Management Agency between 2005 and 2016. Of 1,423,338 cases, data for OHCA patients aged ≥18 years with presumed cardiac etiology were retrieved and divided into two groups: a training set, n = 584,748 (2005 to 2013), and a test set, n = 223,314 (2014 to 2016). The endpoints were neurologic outcome at 1 month and survival at 1 month. Of 47 variables evaluated during the prehospital course, 19 variables (e.g., sex, age, ECG waveform, and performance of bystander CPR) were used for outcome prediction. The performance of logistic regression, random forests, and a deep neural network was examined in this study. Results: For prediction of neurologic outcomes (cerebral performance category 1 or 2) using the test set, the generated models showed area under the receiver operating characteristic curve (AUROC) values of 0.942 (95% confidence interval [CI] 0.941-0.943), 0.947 (95% CI 0.946-0.948), and 0.948 (95% CI 0.948-0.950) for logistic regression, random forest, and deep neural network, respectively. For survival prediction, the generated models showed AUROC values of 0.901 (95% CI 0.900-0.902), 0.913 (95% CI 0.912-0.914), and 0.912 (95% CI 0.911-0.913) for logistic regression, random forest, and deep neural network, respectively. Conclusions: Machine learning techniques using prehospital variables showed favorable prediction capability for 1-month neurologic outcome and survival in OHCA of presumed cardiac cause.
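The registry split above is temporal rather than random: earlier years train the model, later years test it, which guards against leakage of future information. A minimal sketch on synthetic (year, outcome) records mirroring the 2005-2013 / 2014-2016 boundary:

```python
# Temporal train/test split as in the Utstein analysis: records from
# 2005-2013 train, 2014-2016 test. Records here are synthetic
# (year, outcome) pairs, three per year, for illustration only.
records = [(year, year % 2) for year in range(2005, 2017) for _ in range(3)]

train = [r for r in records if r[0] <= 2013]
test = [r for r in records if r[0] >= 2014]
print(len(train), len(test))  # 27 9
```

Unlike k-fold cross-validation, a temporal split also exposes the model to distribution shift between eras, which is closer to how such a tool would be deployed.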


2020 ◽  
Vol 8 (10) ◽  
pp. 766
Author(s):  
Dohan Oh ◽  
Julia Race ◽  
Selda Oterkus ◽  
Bonguk Koo

Mechanical damage is recognized as a problem that reduces the performance of oil and gas pipelines and has been the subject of continuous research. Artificial neural networks, which have recently attracted wide attention, are expected to offer another solution to pipeline problems. This study applies a deep neural network, a machine learning method built on the artificial neural network algorithm, to investigate the applicability of such techniques to the prediction of burst pressure in dented API 5L X-grade pipelines. To this end, supervised learning is employed; the deep neural network model has four layers, three of them hidden, all fully connected. The burst pressure computed by the deep neural network model was compared with the results of a finite element analysis-based parametric study and with burst pressures calculated from experimental results, and showed good agreement with both. It is therefore concluded that deep neural networks can be another solution for predicting the burst pressure of dented API 5L X-grade pipelines.
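The architecture described, a fully connected network with three hidden layers regressing a continuous burst pressure, can be sketched as follows. This is a hedged illustration only: the input features and the toy pressure relation are invented, and the layer widths are placeholders, not the paper's configuration.

```python
# Sketch of a fully connected regressor with three hidden layers,
# matching the "four layers, three hidden" description. The features
# and target below are synthetic stand-ins for dent/pipe parameters.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(2)
X = rng.uniform(size=(200, 4))            # e.g. dent depth, wall thickness...
y = 10.0 + 3.0 * X[:, 0] - 2.0 * X[:, 1]  # toy burst-pressure relation

model = MLPRegressor(hidden_layer_sizes=(16, 16, 16),  # three hidden layers
                     max_iter=2000, random_state=0)
model.fit(X, y)
print(model.predict(X[:1]).shape)  # (1,)
```

In practice such a surrogate would be trained on finite-element or experimental burst pressures and validated against held-out cases, as the study does.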


2019 ◽  
Vol 10 (36) ◽  
pp. 8374-8383 ◽  
Author(s):  
Mohammad Atif Faiz Afzal ◽  
Aditya Sonpal ◽  
Mojtaba Haghighatlari ◽  
Andrew J. Schultz ◽  
Johannes Hachmann

Computational pipeline for the accelerated discovery of organic materials with high refractive index via high-throughput screening and machine learning.

