Routine Laboratory Blood Tests Predict SARS-CoV-2 Infection Using Machine Learning

2020 ◽  
Vol 66 (11) ◽  
pp. 1396-1404 ◽  
Author(s):  
He S Yang ◽  
Yu Hou ◽  
Ljiljana V Vasovic ◽  
Peter A D Steel ◽  
Amy Chadburn ◽  
...  

Abstract Background Accurate diagnostic strategies to rapidly identify SARS-CoV-2-positive individuals, for management of patient care and protection of health care personnel, are urgently needed. The predominant diagnostic test is viral RNA detection by RT-PCR from nasopharyngeal swab specimens; however, the results are not promptly obtainable in all patient care locations. Routine laboratory testing, in contrast, is readily available with a turnaround time (TAT) usually within 1-2 hours. Method We developed a machine learning model incorporating patient demographic features (age, sex, race) with 27 routine laboratory tests to predict an individual’s SARS-CoV-2 infection status. Laboratory test results obtained within 2 days before the release of the SARS-CoV-2 RT-PCR result were used to train a gradient boosting decision tree (GBDT) model on 3,356 SARS-CoV-2 RT-PCR-tested patients (1,402 positive and 1,954 negative) evaluated at a metropolitan hospital. Results The model achieved an area under the receiver operating characteristic curve (AUC) of 0.854 (95% CI: 0.829-0.878). Application of this model to an independent patient dataset from a separate hospital resulted in a comparable AUC (0.838), validating its generalizability. Moreover, our model predicted initial SARS-CoV-2 RT-PCR positivity in 66% of individuals whose RT-PCR result changed from negative to positive within 2 days. Conclusion This model, employing routine laboratory test results, offers opportunities for early and rapid identification of high-risk SARS-CoV-2-infected patients before their RT-PCR results are available. It may play an important role in assisting the identification of SARS-CoV-2-infected patients in areas where RT-PCR testing is not accessible due to financial or supply constraints.
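The headline metric here, AUC, has a concrete interpretation: the probability that a randomly chosen RT-PCR-positive patient receives a higher model score than a randomly chosen negative one. A dependency-free sketch of that computation (illustrative only; not the authors' code, and the toy scores below are invented):

```python
# AUC as the Mann-Whitney pairwise-win probability (ties count half).
# The study's GBDT model reported an AUC of 0.854 on real patient data;
# the scores below are a toy example.

def auc(pos_scores, neg_scores):
    """Fraction of (positive, negative) score pairs ranked correctly."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

# Hypothetical model scores for RT-PCR-positive and -negative patients.
positives = [0.9, 0.8, 0.4]
negatives = [0.5, 0.3, 0.2]
print(auc(positives, negatives))  # 8 of 9 pairs correct -> 0.888...
```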

Author(s):  
He Sarina Yang ◽  
Ljiljana V. Vasovic ◽  
Peter Steel ◽  
Amy Chadburn ◽  
Yu Hou ◽  
...  

Abstract Background Accurate diagnostic strategies to rapidly identify SARS-CoV-2-positive individuals for management of patient care and protection of health care personnel are urgently needed. The predominant diagnostic test is viral RNA detection by RT-PCR from nasopharyngeal swab specimens; however, the results of this test are not promptly obtainable in all patient care locations. Routine laboratory testing, in contrast, is readily available with a turnaround time (TAT) usually within 1-2 hours. Method We developed a machine learning model incorporating patient demographic features (age, sex, race) with 27 routine laboratory tests to predict an individual’s SARS-CoV-2 infection status. Laboratory test results obtained within two days before the release of the SARS-CoV-2 RT-PCR result were used to train a gradient boosted decision tree (GBDT) model on 3,346 SARS-CoV-2 RT-PCR-tested patients (1,394 positive and 1,952 negative) evaluated at a large metropolitan hospital. Results The model achieved an area under the receiver operating characteristic curve (AUC) of 0.853 (95% CI: 0.829-0.878). Application of this model to an independent patient dataset from a separate hospital resulted in a comparable AUC (0.838), validating its generalizability. Moreover, our model predicted initial SARS-CoV-2 RT-PCR positivity in 66% of individuals whose RT-PCR result changed from negative to positive within two days. Conclusion This model, employing routine laboratory test results, offers opportunities for early and rapid identification of high-risk SARS-CoV-2-infected patients before their RT-PCR results are available. This may facilitate patient care and quarantine, indicate who requires retesting, and direct personal protective equipment use while awaiting definitive RT-PCR results.


2020 ◽  
Author(s):  
Thomas Tschoellitsch ◽  
Martin Dünser ◽  
Carl Böck ◽  
Karin Schwarzbauer ◽  
Jens Meier

Abstract Objective The diagnosis of COVID-19 is based on the detection of SARS-CoV-2 in respiratory secretions, blood, or stool. Currently, reverse transcription polymerase chain reaction (RT-PCR) is the most commonly used method to test for SARS-CoV-2. Methods In this retrospective cohort analysis, we evaluated whether machine learning could exclude SARS-CoV-2 infection using routinely available laboratory values. A random forest algorithm with 1353 unique features was trained to predict the RT-PCR results. Results Out of 12,848 patients undergoing SARS-CoV-2 testing, routine blood tests were simultaneously performed in 1528 patients. The machine learning model could predict SARS-CoV-2 test results with an accuracy of 86% and an area under the receiver operating characteristic curve of 0.90. Conclusion Machine learning methods can reliably predict a negative SARS-CoV-2 RT-PCR test result using standard blood tests.
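Because this study's goal is to *exclude* infection, the accuracy figure matters less than how trustworthy a negative prediction is. A stdlib sketch of the relevant rule-out metrics (the confusion-matrix counts below are hypothetical, not the paper's data):

```python
# Rule-out screening quality from a confusion matrix. For excluding
# infection, negative predictive value (NPV) and sensitivity matter most:
# a missed positive (false negative) is the costly error.
# Counts are invented placeholders for illustration.

def rule_out_metrics(tp, fp, tn, fn):
    sensitivity = tp / (tp + fn)              # infected patients caught
    specificity = tn / (tn + fp)              # uninfected patients cleared
    npv = tn / (tn + fn)                      # trust in a "negative" call
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return {"sensitivity": sensitivity, "specificity": specificity,
            "npv": npv, "accuracy": accuracy}

m = rule_out_metrics(tp=90, fp=110, tn=1300, fn=10)
print(m)
```

Note how a high NPV can coexist with a modest overall accuracy when positives are rare, which is why abstracts like this one report AUC alongside accuracy.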


2021 ◽  
Vol 8 ◽  
Author(s):  
Ruixia Cui ◽  
Wenbo Hua ◽  
Kai Qu ◽  
Heran Yang ◽  
Yingmu Tong ◽  
...  

Sepsis-associated coagulation dysfunction greatly increases the mortality of sepsis. Irregular clinical time-series data remain a major challenge for AI medical applications. To enable early detection and management of sepsis-induced coagulopathy (SIC) and sepsis-associated disseminated intravascular coagulation (DIC), we developed an interpretable real-time sequential warning model for real-world irregular data. Eight machine learning models, including novel algorithms, were devised to detect SIC and sepsis-associated DIC 8n (1 ≤ n ≤ 6) hours prior to onset. Models were developed on data from Xi'an Jiaotong University Medical College (XJTUMC) and verified on data from Beth Israel Deaconess Medical Center (BIDMC). A total of 12,154 SIC and 7,878 International Society on Thrombosis and Haemostasis (ISTH) overt-DIC labels were annotated in the training set according to the SIC and ISTH overt-DIC scoring systems. The area under the receiver operating characteristic curve (AUROC) was used as the model evaluation metric. The eXtreme Gradient Boosting (XGBoost) model can predict SIC and sepsis-associated DIC events up to 48 h in advance with AUROCs of 0.929 and 0.910, respectively, and even reached 0.973 and 0.955 at 8 h before onset, the highest performance to date. The novel ODE-RNN model achieved continuous prediction at arbitrary time points, with AUROCs of 0.962 and 0.936 for SIC and DIC predicted 8 h in advance, respectively. In conclusion, our model can predict sepsis-associated SIC and DIC onset up to 48 h in advance, which helps maximize the time window for early management by physicians.
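The "8n hours prior to onset" setup implies a family of labeling tasks, one per prediction horizon: a record is a positive example for horizon h if onset falls within the next h hours. A minimal sketch of that labeling step (timestamps invented for illustration; not the study's pipeline):

```python
# Lead-time labels for an early-warning model: a record taken at time t is
# positive for horizon h if onset falls within (t, t + h]. Horizons of
# 8n hours (n = 1..6) mirror the abstract's setup.
from datetime import datetime, timedelta

def label_for_horizon(record_time, onset_time, horizon_hours):
    """1 if onset occurs within `horizon_hours` after the record, else 0."""
    delta = onset_time - record_time
    return int(timedelta(0) < delta <= timedelta(hours=horizon_hours))

onset = datetime(2021, 3, 1, 20, 0)
record = datetime(2021, 3, 1, 2, 0)          # taken 18 h before onset
labels = {8 * n: label_for_horizon(record, onset, 8 * n) for n in range(1, 7)}
print(labels)  # positive once the horizon reaches 24 h
```

Training one classifier per horizon on labels like these is one common way to realize "up to 48 h in advance" prediction; the continuous ODE-RNN variant described in the abstract removes the need for fixed horizons.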


2020 ◽  
Vol 10 (1) ◽  
Author(s):  
Elza Rechtman ◽  
Paul Curtin ◽  
Esmeralda Navarro ◽  
Sharon Nirenberg ◽  
Megan K. Horton

Abstract Timely and effective clinical decision-making for COVID-19 requires rapid identification of risk factors for disease outcomes. Our objective was to identify characteristics available immediately upon first clinical evaluation that are related to COVID-19 mortality. We conducted a retrospective study of 8770 laboratory-confirmed cases of SARS-CoV-2 from a network of 53 facilities in New York City. We analysed 3 classes of variables (demographic, clinical, and comorbid factors) in a two-tiered analysis that included traditional regression strategies and machine learning. COVID-19 mortality was 12.7%. Logistic regression identified older age (OR, 1.69 [95% CI 1.66–1.92]), male sex (OR, 1.57 [95% CI 1.30–1.90]), higher BMI (OR, 1.03 [95% CI 1.02–1.05]), higher heart rate (OR, 1.01 [95% CI 1.00–1.01]), higher respiratory rate (OR, 1.05 [95% CI 1.03–1.07]), lower oxygen saturation (OR, 0.94 [95% CI 0.93–0.96]), and chronic kidney disease (OR, 1.53 [95% CI 1.20–1.95]) as associated with COVID-19 mortality. Using gradient-boosting machine learning, these factors predicted COVID-19-related mortality (AUC = 0.86) following cross-validation in a training set. Immediate, objective, and culturally generalizable measures accessible upon clinical presentation are effective predictors of COVID-19 outcome. These findings may inform rapid response strategies to optimize health care delivery in parts of the world that have not yet confronted this epidemic, as well as in those forecasting a possible second outbreak.
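The odds ratios reported above come directly from logistic-regression coefficients: OR = exp(beta), with a 95% CI of exp(beta ± 1.96·SE). A stdlib illustration of that arithmetic; the beta and SE below are invented values chosen only so the output has the same shape as the male-sex estimate in the abstract, not the study's actual fit:

```python
# Converting a logistic-regression coefficient into an odds ratio with a
# 95% Wald confidence interval. beta and se here are hypothetical.
import math

def odds_ratio(beta, se):
    """Return (OR, CI lower, CI upper) for a logistic coefficient."""
    return (math.exp(beta),
            math.exp(beta - 1.96 * se),
            math.exp(beta + 1.96 * se))

or_, lo, hi = odds_ratio(beta=0.451, se=0.098)   # hypothetical inputs
print(f"OR {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")  # OR 1.57 (95% CI 1.30-1.90)
```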


2018 ◽  
Vol 129 (4) ◽  
pp. 675-688 ◽  
Author(s):  
Samir Kendale ◽  
Prathamesh Kulkarni ◽  
Andrew D. Rosenberg ◽  
Jing Wang

Abstract Background Hypotension is a risk factor for adverse perioperative outcomes. Machine-learning methods allow large amounts of data to be used for development of robust predictive analytics. The authors hypothesized that machine-learning methods can provide prediction of the risk of postinduction hypotension. Methods Data were extracted from the electronic health record of a single quaternary care center from November 2015 to May 2016 for patients over age 12 who underwent general anesthesia, without procedure exclusions. Multiple supervised machine-learning classification techniques were attempted, with postinduction hypotension (mean arterial pressure less than 55 mmHg within 10 min of induction by any measurement) as the primary outcome, and preoperative medications, medical comorbidities, induction medications, and intraoperative vital signs as features. Discrimination was assessed using cross-validated area under the receiver operating characteristic curve. The best-performing model was tuned and final performance assessed using split-set validation. Results Out of 13,323 cases, 1,185 (8.9%) experienced postinduction hypotension. The area under the receiver operating characteristic curve using logistic regression was 0.71 (95% CI, 0.70 to 0.72), support vector machines was 0.63 (95% CI, 0.58 to 0.60), naive Bayes was 0.69 (95% CI, 0.67 to 0.69), k-nearest neighbor was 0.64 (95% CI, 0.63 to 0.65), linear discriminant analysis was 0.72 (95% CI, 0.71 to 0.73), random forest was 0.74 (95% CI, 0.73 to 0.75), neural nets was 0.71 (95% CI, 0.69 to 0.71), and gradient boosting machine was 0.76 (95% CI, 0.75 to 0.77).
Test-set area under the curve for the gradient boosting machine was 0.74 (95% CI, 0.72 to 0.77). Conclusions The success of this technique in predicting postinduction hypotension demonstrates the feasibility of machine-learning models for predictive analytics in the field of anesthesiology, with performance dependent on model selection and appropriate tuning.
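The comparison above rests on cross-validated AUC: the data are split into k folds, each model is trained on k-1 folds and scored on the held-out fold, and the fold scores are averaged. A stdlib sketch of just the fold machinery (the study presumably used a library for this; the sketch is illustrative, not the authors' code):

```python
# Minimal k-fold split generator: shuffle indices once, deal them into k
# folds, and yield (train, test) index lists with each fold held out once.
import random

def k_fold_indices(n_samples, k, seed=0):
    """Yield (train_idx, test_idx) pairs for k shuffled folds."""
    idx = list(range(n_samples))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::k] for i in range(k)]
    for i in range(k):
        test = folds[i]
        train = [j for fold in folds[:i] + folds[i + 1:] for j in fold]
        yield train, test

splits = list(k_fold_indices(10, 5))
print([len(test) for _, test in splits])  # [2, 2, 2, 2, 2]
```

Averaging a model's AUC over these held-out folds gives the cross-validated estimate; the abstract's final numbers then come from a separate split-set (train/test) validation of the tuned model.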


Author(s):  
Steef Kurstjens ◽  
Armando van der Horst ◽  
Robert Herpers ◽  
Mick W.L. Geerits ◽  
Yvette C.M. Kluiters-de Hingh ◽  
...  

Abstract Background The novel coronavirus disease 2019 (COVID-19), caused by SARS-CoV-2, is spreading rapidly across the world. The exponential increase in the number of cases has resulted in overcrowding of emergency departments (ED). Detection of SARS-CoV-2 is based on an RT-PCR of nasopharyngeal swab material. However, RT-PCR testing is time-consuming and many hospitals deal with a shortage of testing materials. Therefore, we aimed to develop an algorithm to rapidly evaluate an individual’s risk of SARS-CoV-2 infection at the ED. Methods In this multicenter retrospective study, routine laboratory parameters (C-reactive protein, lactate dehydrogenase, ferritin, absolute neutrophil and lymphocyte counts), demographic data and the chest X-ray/CT result from 967 patients entering the ED with respiratory symptoms were collected. Using these parameters, an easy-to-use point-based algorithm, called the corona-score, was developed to discriminate between patients that tested positive for SARS-CoV-2 by RT-PCR and those testing negative. Computational sampling was used to optimize the corona-score. Validation of the model was performed using data from 592 patients. Results The corona-score model yielded an area under the receiver operating characteristic curve of 0.91 in the validation population. Patients testing negative for SARS-CoV-2 showed a median corona-score of 3, versus 11 (scale 0-14) in patients testing positive (p<0.001). Using cut-off values of 4 and 11, the model has a sensitivity and specificity of 96% and 95%, respectively. Conclusion The corona-score effectively predicts SARS-CoV-2 RT-PCR outcome based on routine parameters. This algorithm provides the means for medical professionals to rapidly evaluate the SARS-CoV-2 infection status of patients presenting at the ED with respiratory symptoms.
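The general shape of a point-based score like the corona-score: each routine parameter contributes points when it crosses a threshold, the points are summed onto a fixed scale (0-14 here), and the total is compared against rule-out and rule-in cut-offs. The thresholds and point weights below are invented placeholders for illustration only; they are not the published scoring rules:

```python
# Sketch of a threshold-and-points risk score. Every threshold and weight
# here is hypothetical; only the 0-14 scale and the 4/11 cut-offs come
# from the abstract.

SCORE_RULES = [
    ("crp", lambda v: v > 50, 3),            # mg/L, hypothetical cut-off
    ("ldh", lambda v: v > 350, 3),           # U/L, hypothetical cut-off
    ("ferritin", lambda v: v > 500, 2),      # ug/L, hypothetical cut-off
    ("neutrophils", lambda v: v > 7.5, 2),   # 10^9/L, hypothetical cut-off
    ("lymphocytes", lambda v: v < 1.0, 2),   # 10^9/L, hypothetical cut-off
    ("cxr_abnormal", lambda v: bool(v), 2),  # chest X-ray/CT finding
]

def corona_score(labs):
    """Sum the points of every rule the patient's values trigger."""
    return sum(pts for name, rule, pts in SCORE_RULES if rule(labs[name]))

def triage(score, rule_out=4, rule_in=11):
    if score < rule_out:
        return "low risk"
    if score >= rule_in:
        return "high risk"
    return "indeterminate"

patient = {"crp": 80, "ldh": 400, "ferritin": 700,
           "neutrophils": 6.0, "lymphocytes": 0.7, "cxr_abnormal": True}
s = corona_score(patient)
print(s, triage(s))  # 12 high risk
```

The appeal of this design, as the abstract notes, is that it can be computed by hand at the bedside from a handful of routine results, with no model runtime required.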


2021 ◽  
Author(s):  
Naveena Yanamala ◽  
Nanda H. Krishna ◽  
Quincy A. Hathaway ◽  
Aditya Radhakrishnan ◽  
Srinidhi Sunkara ◽  
...  

Abstract Patients with influenza and SARS-CoV-2/coronavirus disease 2019 (COVID-19) infections have different clinical courses and outcomes. We developed and validated a supervised machine learning pipeline to distinguish the two viral infections using the available vital signs and demographic data from the first hospital/emergency room encounters of 3,883 patients who had confirmed diagnoses of influenza A/B, COVID-19 or negative laboratory test results. The models were able to achieve an area under the receiver operating characteristic curve (ROC AUC) of at least 97% using our multiclass classifier. The predictive models were externally validated on 15,697 encounters of 3,125 patients available in the TriNetX database, which contains patient-level data from different healthcare organizations. The influenza vs. COVID-19-positive model had an AUC of 98%, and 92% on the internal and external test sets, respectively. Our study illustrates the potential of machine-learning models for accurately distinguishing the two viral infections. The code is made available at https://github.com/ynaveena/COVID-19-vs-Influenza and may have utility as a frontline diagnostic tool to aid healthcare workers in triaging patients once the two viral infections start cocirculating in the communities.


2021 ◽  
Author(s):  
Shreyash Sonthalia ◽  
Muhammad Aji Muharrom ◽  
Levana L. Sani ◽  
Olivia Herlinda ◽  
Adrianna Bella ◽  
...  

The COVID-19 pandemic poses a heightened risk to health workers, especially in low- and middle-income countries such as Indonesia. Due to the limitations to implementing mass RT-PCR testing for health workers, high-performing and cost-effective methodologies must be developed to help identify COVID-19-positive health workers and protect the spearhead of the battle against the pandemic. This study aimed to investigate the application of machine learning classifiers to predict the risk of COVID-19 positivity (by RT-PCR) using data obtained from a survey specific to health workers. Machine learning tools can enhance COVID-19 screening capacity in high-risk populations such as health workers in environments where cost is a barrier to accessibility of adequate testing and screening supplies. We built two sets of COVID-19 Likelihood Meter (CLM) models: one trained on data from a broad population of health workers in Jakarta and Semarang (full model) and tested on the same, and one trained on health workers from Jakarta only (Jakarta model) and tested on an independent population of Semarang health workers. The area under the receiver-operating-characteristic curve (AUC), average precision (AP), and the Brier score (BS) were used to assess model performance. Shapley additive explanations (SHAP) were used to analyze feature importance. The final dataset for the study included 3979 health workers. For the full model, the random forest was selected as the algorithm of choice. It achieved a cross-validation mean AUC of 0.818 ± 0.022 and AP of 0.449 ± 0.028 and was high performing during testing, with an AUC and AP of 0.831 and 0.428, respectively. The random forest model was well calibrated, with a low mean Brier score of 0.122 ± 0.004.
A random forest classifier was the best-performing model during cross-validation for the Jakarta dataset, with an AUC of 0.824 ± 0.008, AP of 0.397 ± 0.019, and BS of 0.102 ± 0.007, but the extra-trees classifier was selected as the model of choice due to better generalizability to the test set. The performance of the extra-trees model, when tested on the independent set of Semarang health workers, was an AUC of 0.672 and an AP of 0.508. Our models yielded high predictive performance and may have the potential to be utilized both as a COVID-19 screening tool and as a method to identify health workers at greatest risk of COVID-19 positivity, and therefore most in need of testing.
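The study explains its predictions with SHAP values; a lighter-weight cousin of the same idea is permutation importance: shuffle one feature's column and measure how much the model's accuracy drops. A stdlib sketch with a fabricated toy "model" and data (not the study's pipeline or features):

```python
# Permutation importance: a feature the model relies on loses predictive
# value when its column is shuffled; an ignored feature loses nothing.
import random

def model(row):
    # Toy classifier that only truly uses feature 0.
    return int(row[0] > 0.5)

def accuracy(rows, labels, predict):
    return sum(predict(r) == y for r, y in zip(rows, labels)) / len(rows)

def permutation_importance(rows, labels, predict, feature, seed=0):
    """Accuracy drop after shuffling one feature column."""
    rng = random.Random(seed)
    col = [r[feature] for r in rows]
    rng.shuffle(col)
    shuffled = [list(r) for r in rows]
    for r, v in zip(shuffled, col):
        r[feature] = v
    return accuracy(rows, labels, predict) - accuracy(shuffled, labels, predict)

rows = [(0.9, 0.1), (0.8, 0.9), (0.2, 0.8), (0.1, 0.3)] * 5
labels = [1, 1, 0, 0] * 5
print(permutation_importance(rows, labels, model, feature=0),
      permutation_importance(rows, labels, model, feature=1))
```

SHAP goes further by attributing each individual prediction to features with consistency guarantees, but the ranking intuition is the same: feature 1 here scores exactly zero because the toy model never reads it.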


2019 ◽  
Vol 58 (01) ◽  
pp. 031-041 ◽  
Author(s):  
Sara Rabhi ◽  
Jérémie Jakubowicz ◽  
Marie-Helene Metzger

Objective The objective of this article was to compare the performance of health care-associated infection (HAI) detection between deep learning and conventional machine learning (ML) methods in French medical reports. Methods The corpus consisted of different types of medical reports (discharge summaries, surgery reports, consultation reports, etc.). A total of 1,531 medical text documents were extracted and deidentified in three French university hospitals. Each of them was labeled as presence (1) or absence (0) of HAI. We started by normalizing the records using a list of preprocessing techniques. We calculated an overall performance metric, the F1 score, to compare a deep learning method (convolutional neural network [CNN]) with the most popular conventional ML models (Bernoulli and multinomial naïve Bayes, k-nearest neighbors, logistic regression, random forests, extra trees, gradient boosting, support vector machines). We applied Bayesian hyperparameter optimization for each model based on its HAI identification performance. We included the text representation as an additional hyperparameter for each model, using four different representations (bag of words, term frequency–inverse document frequency, word2vec, and GloVe). Results The CNN outperformed all the conventional ML algorithms for HAI classification. The best F1 score of 97.7% ± 3.6% and best area under the curve score of 99.8% ± 0.41% were achieved when the CNN was applied directly to the processed clinical notes without a pretrained word2vec embedding. Through receiver operating characteristic curve analysis, we could achieve a good balance between false notifications (specificity of 0.937) and system detection capability (sensitivity of 0.962) using the Youden index. Conclusions The main drawback of CNNs is their opacity. To address this issue, we investigated the activation values of the CNN's inner layers to visualize the most meaningful phrases in a document.
This method could be used to build a phrase-based medical assistant algorithm to help the infection control practitioner select relevant medical records. Our study demonstrated that the deep learning approach outperforms other classification algorithms for automatically identifying HAIs in medical reports.
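The headline metric in this comparison is the F1 score, the harmonic mean of precision and recall on the "HAI present" class. Its arithmetic, with made-up prediction counts (not the study's results):

```python
# F1 = 2PR / (P + R), equivalently 2*TP / (2*TP + FP + FN). It penalizes
# both false alerts (FP) and missed infections (FN), which is why it is
# preferred over accuracy when positives are rare.

def f1_score(tp, fp, fn):
    precision = tp / (tp + fp)   # alerts that were real HAIs
    recall = tp / (tp + fn)      # HAIs that were caught
    return 2 * precision * recall / (precision + recall)

# e.g. 95 true alerts, 3 false alerts, 5 missed infections (hypothetical)
print(round(f1_score(tp=95, fp=3, fn=5), 3))  # 0.96
```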


2021 ◽  
Vol 4 (1) ◽  
Author(s):  
Naveena Yanamala ◽  
Nanda H. Krishna ◽  
Quincy A. Hathaway ◽  
Aditya Radhakrishnan ◽  
Srinidhi Sunkara ◽  
...  

Abstract Patients with influenza and SARS-CoV-2/coronavirus disease 2019 (COVID-19) infections have different clinical courses and outcomes. We developed and validated a supervised machine learning pipeline to distinguish the two viral infections using the available vital signs and demographic data from the first hospital/emergency room encounters of 3883 patients who had confirmed diagnoses of influenza A/B, COVID-19 or negative laboratory test results. The models were able to achieve an area under the receiver operating characteristic curve (ROC AUC) of at least 97% using our multiclass classifier. The predictive models were externally validated on 15,697 encounters of 3125 patients available in the TriNetX database, which contains patient-level data from different healthcare organizations. The influenza vs COVID-19-positive model had an AUC of 98.8%, and 92.8% on the internal and external test sets, respectively. Our study illustrates the potential of machine-learning models for accurately distinguishing the two viral infections. The code is made available at https://github.com/ynaveena/COVID-19-vs-Influenza and may have utility as a frontline diagnostic tool to aid healthcare workers in triaging patients once the two viral infections start cocirculating in the communities.
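A three-way classifier (influenza vs. COVID-19 vs. negative) is commonly scored with one-vs-rest AUC averaged over the classes, which is one standard way to obtain a single multiclass ROC AUC figure like the one reported above. A dependency-free sketch with invented probability outputs (not the study's model; the study may have used a different averaging scheme):

```python
# Macro-averaged one-vs-rest AUC: for each class, treat that class as
# "positive", every other class as "negative", compute a binary AUC from
# the class's predicted probabilities, then average the per-class AUCs.

def binary_auc(pos, neg):
    """Pairwise-win probability of positive over negative scores."""
    pairs = [(p, n) for p in pos for n in neg]
    return sum((p > n) + 0.5 * (p == n) for p, n in pairs) / len(pairs)

def macro_ovr_auc(probs, labels, classes):
    aucs = []
    for c in classes:
        pos = [p[c] for p, y in zip(probs, labels) if y == c]
        neg = [p[c] for p, y in zip(probs, labels) if y != c]
        aucs.append(binary_auc(pos, neg))
    return sum(aucs) / len(aucs)

# Hypothetical per-class probabilities for four encounters.
probs = [{"flu": 0.7, "covid": 0.2, "neg": 0.1},
         {"flu": 0.1, "covid": 0.8, "neg": 0.1},
         {"flu": 0.2, "covid": 0.3, "neg": 0.5},
         {"flu": 0.6, "covid": 0.3, "neg": 0.1}]
labels = ["flu", "covid", "neg", "flu"]
print(macro_ovr_auc(probs, labels, ["flu", "covid", "neg"]))  # 1.0 (perfectly separated toy set)
```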

