Using Item Response Theory for Explainable Machine Learning in Predicting Mortality in the Intensive Care Unit: Case-Based Approach

10.2196/20268 ◽  
2020 ◽  
Vol 22 (9) ◽  
pp. e20268
Author(s):  
Adrienne Kline ◽  
Theresa Kline ◽  
Zahra Shakeri Hossein Abad ◽  
Joon Lee

Background Supervised machine learning (ML) is featured prominently in the health care literature, with study results frequently reported using metrics such as accuracy, sensitivity, specificity, recall, or F1 score. Although each metric provides a different perspective on performance, all are overall measures for the whole sample, discounting the uniqueness of each case or patient. Intuitively, we know that all cases are not equal, but present evaluative approaches do not take case difficulty into account. Objective A more case-based, comprehensive approach is warranted to assess supervised ML outcomes and forms the rationale for this study. This study aims to demonstrate how item response theory (IRT) can be used to stratify the data based on how difficult each case is to classify, independent of the outcome measure of interest (eg, accuracy). This stratification allows the evaluation of ML classifiers to take the form of a distribution rather than a single scalar value. Methods Two large, public intensive care unit data sets, Medical Information Mart for Intensive Care III and the electronic intensive care unit database, were used to showcase this method in predicting mortality. For each data set, a balanced sample (n=8078 and n=21,940, respectively) and an imbalanced sample (n=12,117 and n=32,910, respectively) were drawn. A 2-parameter logistic model was used to provide difficulty scores for each case. Several ML algorithms were used in the demonstration to classify cases based on their health-related features: logistic regression, linear discriminant analysis, K-nearest neighbors, decision tree, naive Bayes, and a neural network. Generalized linear mixed model analyses were used to assess the effects of case difficulty strata, ML algorithm, and the interaction between them in predicting accuracy.
Results The results showed significant effects (P<.001) for case difficulty strata, ML algorithm, and their interaction in predicting accuracy, illustrating that all classifiers performed better on easier-to-classify cases and that, overall, the neural network performed best. Significant interactions suggest that cases in the most difficult strata should be handled by logistic regression, linear discriminant analysis, decision tree, or neural network, but not by naive Bayes or K-nearest neighbors. Conventional metrics for ML classification are also reported for methodological comparison. Conclusions This demonstration shows that IRT is a viable method for understanding the data provided to ML algorithms, independent of outcome measures, and highlights how well classifiers differentiate cases of varying difficulty. The method explains which features are indicative of healthy states and why, and it enables end users to choose the classifier appropriate to the difficulty level of the patient for personalized medicine.
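The 2-parameter logistic model at the core of this approach can be sketched in a few lines of Python; the ability, discrimination, and difficulty values below are invented for demonstration, not taken from the study:

```python
import numpy as np

def irt_2pl(theta, a, b):
    """2-parameter logistic IRT model: probability of a correct
    response for ability theta, given an item with discrimination a
    and difficulty b."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

# Illustrative difficulty scores for six cases, binned into strata
# so that classifier accuracy can later be reported per stratum.
difficulty = np.array([-2.0, -1.1, -0.2, 0.4, 1.3, 2.1])
strata = np.digitize(difficulty, bins=[-0.5, 0.5])  # 0=easy, 1=medium, 2=hard

print(irt_2pl(theta=0.0, a=1.0, b=0.0))  # 0.5 when theta == b
print(strata)
```

Reporting accuracy within each stratum, rather than one overall number, is what turns the evaluation into a distribution.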


2020 ◽  
Vol 20 (1) ◽  
Author(s):  
Matthijs Blankers ◽  
Louk F. M. van der Post ◽  
Jack J. M. Dekker

Abstract Background Accurate models for predicting whether patients on the verge of a psychiatric crisis need hospitalization are lacking, and machine learning methods may help improve the accuracy of psychiatric hospitalization prediction models. In this paper we evaluate the accuracy of ten machine learning algorithms, including the generalized linear model (GLM/logistic regression), for predicting psychiatric hospitalization in the first 12 months after a psychiatric crisis care contact. We also evaluate an ensemble model to optimize accuracy, and we explore individual predictors of hospitalization. Methods Data from 2084 patients included in the longitudinal Amsterdam Study of Acute Psychiatry with at least one reported psychiatric crisis care contact were included. The target variable for the prediction models was whether the patient was hospitalized in the 12 months following inclusion. The predictive power of 39 variables related to patients' socio-demographics, clinical characteristics, and previous mental health care contacts was evaluated. The accuracy and area under the receiver operating characteristic curve (AUC) of the machine learning algorithms were compared, and we also estimated the relative importance of each predictor variable. The best and worst performing algorithms were compared with GLM/logistic regression using net reclassification improvement analysis, and the five best performing algorithms were combined in an ensemble model using stacking. Results All models performed above chance level. Gradient Boosting was the best performing algorithm (AUC = 0.774) and K-Nearest Neighbors the worst (AUC = 0.702). The performance of GLM/logistic regression (AUC = 0.76) was slightly above average among the tested algorithms. In a net reclassification improvement analysis, Gradient Boosting outperformed GLM/logistic regression by 2.9% and K-Nearest Neighbors by 11.3%, and GLM/logistic regression outperformed K-Nearest Neighbors by 8.7%.
Nine of the top-10 most important predictor variables were related to previous mental health care use. Conclusions Gradient Boosting led to the highest predictive accuracy and AUC while GLM/logistic regression performed average among the tested algorithms. Although statistically significant, the magnitude of the differences between the machine learning algorithms was in most cases modest. The results show that a predictive accuracy similar to the best performing model can be achieved when combining multiple algorithms in an ensemble model.
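The stacking approach described above can be sketched with scikit-learn; this is a generic illustration on synthetic data, and the choice of base estimators, the 39-feature setup, and the logistic-regression meta-learner are stand-ins rather than the authors' exact configuration:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import (GradientBoostingClassifier,
                              RandomForestClassifier, StackingClassifier)
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# Synthetic stand-in for the 39 predictor variables.
X, y = make_classification(n_samples=1000, n_features=39,
                           n_informative=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

stack = StackingClassifier(
    estimators=[
        ("gb", GradientBoostingClassifier(n_estimators=50, random_state=0)),
        ("rf", RandomForestClassifier(random_state=0)),
        ("knn", KNeighborsClassifier()),
    ],
    final_estimator=LogisticRegression(max_iter=1000),  # meta-learner
)
stack.fit(X_tr, y_tr)
auc = roc_auc_score(y_te, stack.predict_proba(X_te)[:, 1])
print(f"ensemble AUC: {auc:.3f}")
```

The meta-learner is trained on out-of-fold predictions of the base models, which is why stacking tends to match or exceed the best single algorithm.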


2019 ◽  
Author(s):  
Matthijs Blankers ◽  
Louk F. M. van der Post ◽  
Jack J. M. Dekker

Abstract Background: It is difficult to accurately predict whether a patient on the verge of a potential psychiatric crisis will need to be hospitalized. Machine learning may help improve the accuracy of psychiatric hospitalization prediction models. In this paper we evaluate and compare the accuracy of ten machine learning algorithms, including the commonly used generalized linear model (GLM/logistic regression), for predicting psychiatric hospitalization in the first 12 months after a psychiatric crisis care contact, and we explore the most important predictor variables of hospitalization. Methods: Data from 2,084 patients with at least one reported psychiatric crisis care contact included in the longitudinal Amsterdam Study of Acute Psychiatry were used. The accuracy and area under the receiver operating characteristic curve (AUC) of the machine learning algorithms were compared, and we also estimated the relative importance of each predictor variable. The best and worst performing algorithms were compared with GLM/logistic regression using net reclassification improvement analysis. The target variable for the prediction models was whether or not the patient was hospitalized in the 12 months following inclusion in the study. The 39 predictor variables were related to patients' socio-demographics, clinical characteristics, and previous mental health care contacts. Results: We found Gradient Boosting to perform best (AUC = 0.774) and K-Nearest Neighbors to perform worst (AUC = 0.702). The performance of GLM/logistic regression (AUC = 0.76) was above average among the tested algorithms. Gradient Boosting outperformed GLM/logistic regression and K-Nearest Neighbors, and GLM outperformed K-Nearest Neighbors, in a net reclassification improvement analysis, although the differences between Gradient Boosting and GLM/logistic regression were small. Nine of the top-10 most important predictor variables were related to previous mental health care use.
Conclusions: Gradient Boosting led to the highest predictive accuracy and AUC, while GLM/logistic regression performed average among the tested algorithms. Although statistically significant, the magnitude of the differences between the machine learning algorithms was modest. Future studies may consider combining multiple algorithms in an ensemble model for optimal performance and to mitigate the risk of choosing a suboptimally performing algorithm.
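The net reclassification improvement used to compare these algorithms can be computed, in its category-free form, as the net proportion of events whose predicted risk moves up plus the net proportion of non-events whose risk moves down. A small numpy sketch with toy probabilities (not the study's data):

```python
import numpy as np

def continuous_nri(y, p_new, p_old):
    """Category-free net reclassification improvement of a new model
    over an old one for a binary outcome y (1 = event)."""
    y, p_new, p_old = (np.asarray(v, dtype=float) for v in (y, p_new, p_old))
    up, down = p_new > p_old, p_new < p_old
    event, nonevent = y == 1, y == 0
    nri_events = up[event].mean() - down[event].mean()
    nri_nonevents = down[nonevent].mean() - up[nonevent].mean()
    return nri_events + nri_nonevents

# Toy probabilities: the new model moves both events up and
# two of the three non-events down.
y     = [1, 1, 0, 0, 0]
p_old = [0.6, 0.4, 0.5, 0.5, 0.2]
p_new = [0.8, 0.7, 0.3, 0.4, 0.3]
print(continuous_nri(y, p_new, p_old))  # 1.0 + 1/3
```

A positive value means the new model reclassifies cases in the right direction on balance; the percentage differences quoted above are of this kind.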


2019 ◽  
Author(s):  
Longxiang Su ◽  
Chun Liu ◽  
Dongkai Li ◽  
Jie He ◽  
Fanglan Zheng ◽  
...  

BACKGROUND Heparin is one of the most commonly used medications in intensive care units. In clinical practice, a weight-based heparin dosing nomogram is standard practice for the treatment of thrombosis. Recently, machine learning techniques have dramatically improved the ability of computers to provide clinical decision support and have made possible computer-generated, algorithm-based heparin dosing recommendations. OBJECTIVE The objective of this study was to predict the effects of heparin treatment using machine learning methods and to optimize heparin dosing in intensive care units based on the predictions. Patient state predictions were based on activated partial thromboplastin time (aPTT) in 3 ranges: subtherapeutic, normal therapeutic, and supratherapeutic. METHODS Retrospective data from 2 intensive care unit research databases (Multiparameter Intelligent Monitoring in Intensive Care III, MIMIC-III; e-Intensive Care Unit Collaborative Research Database, eICU) were used for the analysis. Candidate machine learning models (random forest, support vector machine, adaptive boosting, extreme gradient boosting, and shallow neural network) were compared in 3 patient groups to evaluate classification performance for predicting the subtherapeutic, normal therapeutic, and supratherapeutic patient states. Model results were evaluated using precision, recall, F1 score, and accuracy. RESULTS Data from the MIMIC-III database (n=2789 patients) and from the eICU database (n=575 patients) were used. In 3-class classification, the shallow neural network algorithm performed best (F1 scores of 87.26%, 85.98%, and 87.55% for data sets 1, 2, and 3, respectively).
The shallow neural network algorithm also achieved the highest F1 scores within each therapeutic state group: subtherapeutic (data set 1: 79.35%; data set 2: 83.67%; data set 3: 83.33%), normal therapeutic (data set 1: 93.15%; data set 2: 87.76%; data set 3: 84.62%), and supratherapeutic (data set 1: 88.00%; data set 2: 86.54%; data set 3: 95.45%). CONCLUSIONS The most appropriate model for predicting the effects of heparin treatment was found by comparing multiple machine learning models and can be used to further guide optimal heparin dosing. Using multicenter intensive care unit data, our study demonstrates the feasibility of predicting the outcomes of heparin treatment using data-driven methods, and thus shows how machine learning-based models can be used to optimize and personalize heparin dosing to improve patient safety. Manual analysis and validation suggested that the model outperformed standard-practice heparin dosing.
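Mapping aPTT values to the three therapeutic states and scoring a classifier per state can be sketched as follows. The 60-100 s window is a common illustrative range, not necessarily the cutoffs used in the study, and the values are toy data:

```python
import numpy as np
from sklearn.metrics import f1_score

def aptt_state(aptt, lo=60.0, hi=100.0):
    """Map aPTT values (seconds) to a therapeutic state.
    The 60-100 s window is illustrative, not the study's cutoffs."""
    return np.digitize(aptt, bins=[lo, hi])  # 0=sub, 1=normal, 2=supra

aptt_true = np.array([45.0, 72.0, 130.0, 88.0, 55.0, 110.0])
y_true = aptt_state(aptt_true)                # -> [0, 1, 2, 1, 0, 2]
y_pred = np.array([0, 1, 2, 1, 1, 2])         # e.g. a classifier's output

# Per-class F1, matching how the abstract reports sub/normal/supra scores.
print(f1_score(y_true, y_pred, average=None))
```

Reporting F1 per state, rather than only overall accuracy, exposes whether a dosing model fails specifically on the dangerous sub- or supratherapeutic cases.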


2021 ◽  
Vol 8 (2) ◽  
pp. 311
Author(s):  
Mohammad Farid Naufal

Weather is an important factor considered in many kinds of decision making. Manual weather classification by humans is time-consuming and inconsistent. Computer vision is a branch of science that enables computers to recognize and classify images; it can support self-autonomous machines that do not depend on an internet connection and can perform their own calculations in real time. Three popular image classification algorithms are K-Nearest Neighbors (KNN), Support Vector Machine (SVM), and Convolutional Neural Network (CNN). KNN and SVM are classical machine learning classification algorithms, while CNN is a deep neural network algorithm. This study compares the performance of the three algorithms to quantify the performance gap between them. The experiments use 5-fold cross-validation, with several parameter settings for the KNN, SVM, and CNN algorithms. In the tests, CNN had the best performance, with accuracy 0.942, precision 0.943, recall 0.942, and F1 score 0.942.
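The 5-fold cross-validation comparison of KNN and SVM can be sketched with scikit-learn; the built-in digits images stand in for the weather photos, and the CNN branch is omitted here because it needs a deep learning framework:

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# scikit-learn's digits images stand in for the weather image data.
X, y = load_digits(return_X_y=True)

models = {
    "KNN": KNeighborsClassifier(n_neighbors=5),
    "SVM": make_pipeline(StandardScaler(), SVC(kernel="rbf")),
}
results = {}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5)  # 5-fold cross-validation
    results[name] = scores.mean()
    print(f"{name}: mean accuracy {results[name]:.3f}")
```

Each model is scored on five held-out folds, so the reported numbers are averages rather than a single lucky split.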


2021 ◽  
Vol 12 ◽  
Author(s):  
Mary M. Maleckar ◽  
Lena Myklebust ◽  
Julie Uv ◽  
Per Magne Florvaag ◽  
Vilde Strøm ◽  
...  

Background: Remodeling due to myocardial infarction (MI) significantly increases patient arrhythmic risk. Simulations using patient-specific models have shown promise in predicting personalized risk for arrhythmia. However, these are computationally and time intensive, hindering translation to clinical practice. Classical machine learning (ML) algorithms (such as K-nearest neighbors, Gaussian support vector machines, and decision trees), as well as neural network techniques shown to increase prediction accuracy, can be used to predict the occurrence of arrhythmia, as predicted by simulations, based solely on infarct and ventricular geometry. We present an initial combined image-based, patient-specific in silico and machine learning methodology to assess the risk of dangerous arrhythmia in post-infarct patients. Furthermore, we aim to demonstrate that simulation-supported data augmentation improves prediction models, combining patient data, computational simulation, and advanced statistical modeling to improve overall accuracy for arrhythmia risk assessment. Methods: MRI-based computational models were constructed from 30 patients 5 days post-MI (the "baseline" population). To assess the utility of biophysical model-supported data augmentation for improving arrhythmia prediction, we augmented the virtual baseline patient population: each patient's ventricular and ischemic geometry in the baseline population was used to create a subfamily of geometric models, resulting in an expanded set of patient models (the "augmented" population). Arrhythmia induction was attempted via programmed stimulation at 17 sites per virtual patient, corresponding to AHA LV segments, and the simulation outcome, "arrhythmia" or "no arrhythmia," was used as ground truth for subsequent statistical prediction (machine learning, ML) models.
For each patient geometric model, we measured and used selected data features: the myocardial volume and ischemic volume, as well as the segment-specific myocardial volume and ischemia percentage, as input to ML algorithms. For classical ML techniques, we trained k-nearest neighbors, support vector machine, logistic regression, XGBoost, and decision tree models to predict the simulation outcome from these geometric features alone. To explore neural network techniques, we trained both three- and four-hidden-layer multilayer perceptron feedforward neural networks (NN), again predicting simulation outcomes from these geometric features alone. ML and NN models were trained on 70% of randomly selected segments, and the remaining 30% was used for validation, for both the baseline and augmented populations. Results: Stimulation in the baseline population (30 patient models) resulted in reentry at 21.8% of sites tested; in the augmented population (129 total patient models), reentry occurred at 13.0% of sites tested. ML and NN models ranged in mean accuracy from 0.83 to 0.86 for the baseline population, improving to 0.88 to 0.89 in all cases with augmentation. Conclusion: Machine learning techniques, combined with patient-specific, image-based computational simulations, can rapidly and efficiently provide key clinical insights with high accuracy. In the case of sparse or missing patient data, simulation-supported data augmentation can be employed to further improve predictive results for patient benefit. This work paves the way for using data-driven simulations to predict dangerous arrhythmia in MI patients.
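The 70/30 train/validation protocol on per-segment geometric features can be sketched as below. The features, the labels, and the assumed link between ischemic burden and reentry are synthetic stand-ins for illustration, not the study's data:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 1000
# Synthetic stand-ins for four per-segment geometric features:
# myocardial volume, ischemic volume, segment volume, ischemia percentage.
X = rng.normal(size=(n, 4))
# Hypothetical rule for illustration only: more ischemic burden -> reentry.
y = (X[:, 1] + X[:, 3] + rng.normal(scale=0.5, size=n) > 0.5).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=0.7, random_state=0)
nn = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(16, 16, 16),  # three hidden layers
                  max_iter=2000, random_state=0),
)
nn.fit(X_tr, y_tr)
acc = nn.score(X_te, y_te)
print(f"validation accuracy: {acc:.3f}")
```

Because simulation outcomes supply the labels, the expensive biophysical model only needs to run once per geometry; the cheap statistical model then answers new queries.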


Author(s):  
EL Ghouch Nihad ◽  
En-Naimi El Mokhtar ◽  
Zouhair Abdelhamid ◽  
Al Achhab Mohammed

The goal of adaptive learning systems is to help learners achieve their goals and guide their learning. These systems adapt the presentation of learning resources to learners' needs, characteristics, and learning styles by offering them personalized courses. We propose an approach to an adaptive learning system that takes into account both the initial learning profile, based on the Felder-Silverman learning style model, to propose an initial learning path, and the dynamic change of the learner's behavior during the learning process, using the Incremental Dynamic Case-Based Reasoning approach to monitor and control that behavior in real time, based on the successful experiences of other learners, in order to personalize learning. These learner experiences are grouped into behaviorally homogeneous classes using the Fuzzy C-Means unsupervised machine learning method, which facilitates the search for learners with similar behaviors using the supervised machine learning method K-Nearest Neighbors.
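The Fuzzy C-Means grouping followed by a K-Nearest Neighbors lookup can be sketched in plain numpy; the two-dimensional "behavior" vectors, the cluster count, and the neighbor count are invented for illustration:

```python
import numpy as np

def fuzzy_c_means(X, c=2, m=2.0, n_iter=100, seed=0):
    """Minimal Fuzzy C-Means: returns cluster centers and the
    membership matrix U (n_samples x c), rows summing to 1."""
    rng = np.random.default_rng(seed)
    U = rng.dirichlet(np.ones(c), size=len(X))  # random fuzzy memberships
    for _ in range(n_iter):
        w = U ** m
        centers = (w.T @ X) / w.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        inv = d ** (-2.0 / (m - 1.0))      # standard FCM membership update
        U = inv / inv.sum(axis=1, keepdims=True)
    return centers, U

# Two well-separated groups of "learner behavior" feature vectors.
rng = np.random.default_rng(42)
X = np.vstack([rng.normal(0.0, 0.3, (20, 2)),
               rng.normal(3.0, 0.3, (20, 2))])
centers, U = fuzzy_c_means(X, c=2)
labels = U.argmax(axis=1)  # hard class for each learner

# A new learner is matched to similar peers with plain KNN on
# Euclidean distance (majority class among the 3 nearest).
new = np.array([2.9, 3.1])
nearest = np.argsort(np.linalg.norm(X - new, axis=1))[:3]
print(np.bincount(labels[nearest]).argmax())
```

Clustering first keeps the KNN search within a small pool of behaviorally similar learners instead of the whole population.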


2021 ◽  
Vol 1 (1) ◽  
Author(s):  
H. L. Gururaj ◽  
Francesco Flammini ◽  
H. A. Chaya Kumari ◽  
G. R. Puneeth ◽  
B. R. Sunil Kumar

Abstract The mechanism of action is an important aspect of drug development and can help scientists in the process of drug discovery. This paper provides machine learning models to predict the mechanism of action of a drug. The models used are Binary Relevance K-Nearest Neighbors (Type A and Type B), Multi-label K-Nearest Neighbors, and a custom neural network, all evaluated using the mean column-wise log loss. The custom neural network performed best, with a log loss of 0.01706. This neural network model is integrated into a web application using the Flask framework. A user can upload a custom test feature data set containing gene expression and cell viability levels, and the web application outputs the top classes of drugs, along with scatter plots for each drug.
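The evaluation metric named above, mean column-wise log loss, is the binary cross-entropy averaged first over samples within each target column and then over columns. A small numpy sketch with toy values:

```python
import numpy as np

def mean_columnwise_log_loss(y_true, y_pred, eps=1e-15):
    """Mean of the binary cross-entropy computed per target column,
    as used for multi-label mechanism-of-action prediction."""
    p = np.clip(np.asarray(y_pred, dtype=float), eps, 1 - eps)
    y = np.asarray(y_true, dtype=float)
    col_loss = -(y * np.log(p) + (1 - y) * np.log(1 - p)).mean(axis=0)
    return col_loss.mean()

# Two samples, three drug-mechanism targets (toy values).
y_true = [[1, 0, 0], [0, 1, 0]]
y_pred = [[0.9, 0.1, 0.2], [0.2, 0.8, 0.1]]
print(round(mean_columnwise_log_loss(y_true, y_pred), 4))
```

Clipping the probabilities away from 0 and 1 prevents infinite penalties for fully confident wrong predictions.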


2021 ◽  
Vol 9 ◽  
Author(s):  
Mingyue Lu ◽  
Yadong Zhang ◽  
Zaiyang Ma ◽  
Manzhu Yu ◽  
Min Chen ◽  
...  

Lightning is an instantaneous, intense, convective weather phenomenon that can produce great destructive power and easily cause serious economic losses and casualties. It always occurs in convective storms with small spatial scales and short life cycles. Weather radar is one of the best operational instruments for monitoring the detailed 3D structure of convective storms at high spatial and temporal resolution. Automatically extracting lightning-related features from 3D weather radar data to identify lightning strike locations would therefore significantly benefit future lightning prediction. This article applies three-dimensional radar data to identify lightning strike locations, laying the foundation for subsequent accurate, real-time prediction of lightning locations. First, the problem is transformed into a binary classification task. Then, a suitable data set for recognizing lightning strike locations from 3D radar data is constructed for training and evaluation. Four machine learning methods, a convolutional neural network, logistic regression, a random forest, and k-nearest neighbors, are used in the experiments. The results show that the convolutional neural network performs best at identifying lightning strike locations, followed by the random forest and k-nearest neighbors, with logistic regression performing worst.


2020 ◽  
pp. 1028-1041
Author(s):  
Junjie Bai ◽  
Kan Luo ◽  
Jun Peng ◽  
Jinliang Shi ◽  
Ying Wu ◽  
...  

Music emotion recognition (MER) is a challenging field addressed in multiple disciplines such as musicology, cognitive science, physiology, psychology, the arts, and affective computing. In this article, music emotions are classified into four types: pleasing, angry, sad, and relaxing. MER is formulated as a classification problem in cognitive computing in which 548 dimensions of music features are extracted and modeled. A set of classification and machine learning algorithms is explored and comparatively studied for MER, including Support Vector Machine (SVM), k-Nearest Neighbors (KNN), Neuro-Fuzzy Network Classification (NFNC), Fuzzy KNN (FKNN), the Bayes classifier, and Linear Discriminant Analysis (LDA). Experimental results show that the SVM, FKNN, and LDA algorithms are the most effective, obtaining more than 80% accuracy for MER.
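Of the listed algorithms, fuzzy KNN is the least standard; in its usual Keller-style form, neighbors vote with inverse-distance weights and the output is a per-class membership rather than a hard label. A minimal numpy sketch with invented 2D "music feature" vectors standing in for the 548-dimensional ones:

```python
import numpy as np

def fuzzy_knn_predict(X_train, y_train, x, k=3, m=2.0):
    """Minimal Keller-style fuzzy KNN: the k nearest neighbors vote
    with weights 1/d^(2/(m-1)); returns normalized class memberships."""
    d = np.linalg.norm(X_train - x, axis=1) + 1e-12
    idx = np.argsort(d)[:k]
    w = 1.0 / d[idx] ** (2.0 / (m - 1.0))
    classes = np.unique(y_train)
    memberships = np.array([w[y_train[idx] == c].sum() for c in classes])
    return memberships / memberships.sum()

# Toy 2D feature vectors for two emotion classes.
X = np.array([[0.0, 0.0], [0.1, 0.2], [1.0, 1.0], [0.9, 1.1]])
y = np.array([0, 0, 1, 1])
print(fuzzy_knn_predict(X, y, np.array([0.05, 0.1]), k=3))
```

The soft memberships are useful for emotion labels, which are inherently graded rather than mutually exclusive.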

