Applications of Artificial Intelligence and Machine Learning in Diagnosis and Prognosis of COVID-19 infection: A systematic review

2021
Vol 10 (1)
pp. 93
Author(s):
Mahdieh Montazeri
Ali Afraz
Mitra Montazeri
Sadegh Nejatzadeh
Fatemeh Rahimi
...  

Introduction: Our aim in this study was to summarize information on the use of intelligent models for predicting and diagnosing Coronavirus disease 2019 (COVID-19), in order to support early and timely diagnosis of the disease. Material and Methods: A systematic literature search covered articles published until 20 April 2020 in the PubMed, Web of Science, IEEE, ProQuest, Scopus, bioRxiv, and medRxiv databases. The search strategy consisted of two groups of keywords: (A) novel coronavirus and (B) machine learning. Two reviewers independently assessed original papers to determine eligibility for inclusion in this review. Studies were critically appraised for risk of bias using the prediction model risk of bias assessment tool (PROBAST). Results: We gathered 1,650 articles through database searches. After full-text assessment, 31 articles were included. Neural networks and deep neural network variants were the most popular machine learning methods. Of the five models that the authors claimed were externally validated, we judged only four to have undergone genuine external validation. The area under the curve (AUC) in internal validation of prognostic models varied from 0.94 to 0.97. The AUC in diagnostic models varied from 0.84 to 0.99, and the AUC in external validation of diagnostic models varied from 0.73 to 0.94. All but two studies were at high risk of bias, chiefly because of small numbers of participants and lack of external validation. Conclusion: Diagnostic and prognostic models for COVID-19 show good to excellent discriminative performance, but most are at high risk of bias for the reasons above, and future studies should address these concerns. Sharing data and experience for the development, validation, and updating of COVID-19 related prediction models is needed.

2022
pp. 1-11
Author(s):
Andrew S. Moriarty
Nicholas Meader
Kym I. E. Snell
Richard D. Riley
Lewis W. Paton
...  

Background Relapse and recurrence of depression are common, contributing to the overall burden of depression globally. Accurate prediction of relapse or recurrence while patients are well would allow the identification of high-risk individuals and may effectively guide the allocation of interventions to prevent relapse and recurrence. Aims To review prognostic models developed to predict the risk of relapse, recurrence, sustained remission, or recovery in adults with remitted major depressive disorder. Method We searched the Cochrane Library (current issue), Ovid MEDLINE (1946 onwards), Ovid Embase (1980 onwards), Ovid PsycINFO (1806 onwards), and Web of Science (1900 onwards) up to May 2021. We included development and external validation studies of multivariable prognostic models. We assessed the risk of bias of included studies using the Prediction model risk of bias assessment tool (PROBAST). Results We identified 12 eligible prognostic model studies (11 unique prognostic models): 8 model development-only studies, 3 model development and external validation studies, and 1 external validation-only study. Multiple estimates of performance measures were not available, so meta-analysis was not possible. Eleven of the 12 included studies were assessed as being at high overall risk of bias, and none examined clinical utility. Conclusions Owing to the high risk of bias of the included studies, poor predictive performance, and limited external validation of the models identified, currently available clinical prediction models for relapse and recurrence of depression are not yet sufficiently developed for deployment in clinical settings. There is a need for improved prognosis research in this clinical area, and future studies should conform to best practice methodological and reporting guidelines.


BMJ
2020
pp. m1328
Author(s):
Laure Wynants
Ben Van Calster
Gary S Collins
Richard D Riley
Georg Heinze
...  

Abstract Objective To review and appraise the validity and usefulness of published and preprint reports of prediction models for diagnosing coronavirus disease 2019 (covid-19) in patients with suspected infection, for prognosis of patients with covid-19, and for detecting people in the general population at increased risk of becoming infected with covid-19 or being admitted to hospital with the disease. Design Living systematic review and critical appraisal by the COVID-PRECISE (Precise Risk Estimation to optimise covid-19 Care for Infected or Suspected patients in diverse sEttings) group. Data sources PubMed and Embase through Ovid, arXiv, medRxiv, and bioRxiv up to 5 May 2020. Study selection Studies that developed or validated a multivariable covid-19 related prediction model. Data extraction At least two authors independently extracted data using the CHARMS (critical appraisal and data extraction for systematic reviews of prediction modelling studies) checklist; risk of bias was assessed using PROBAST (prediction model risk of bias assessment tool). Results 14 217 titles were screened, and 107 studies describing 145 prediction models were included. The review identified four models for identifying people at risk in the general population; 91 diagnostic models for detecting covid-19 (60 were based on medical imaging, nine to diagnose disease severity); and 50 prognostic models for predicting mortality risk, progression to severe disease, intensive care unit admission, ventilation, intubation, or length of hospital stay. The most frequently reported predictors of diagnosis and prognosis of covid-19 are age, body temperature, lymphocyte count, and lung imaging features. Flu-like symptoms and neutrophil count are frequently predictive in diagnostic models, while comorbidities, sex, C reactive protein, and creatinine are frequent prognostic factors. C index estimates ranged from 0.73 to 0.81 in prediction models for the general population, from 0.65 to more than 0.99 in diagnostic models, and from 0.68 to 0.99 in prognostic models. All models were rated at high risk of bias, mostly because of non-representative selection of control patients, exclusion of patients who had not experienced the event of interest by the end of the study, high risk of model overfitting, and vague reporting. Most reports did not include any description of the study population or intended use of the models, and calibration of the model predictions was rarely assessed. Conclusion Prediction models for covid-19 are quickly entering the academic literature to support medical decision making at a time when they are urgently needed. This review indicates that proposed models are poorly reported, at high risk of bias, and their reported performance is probably optimistic. Hence, we do not recommend any of these reported prediction models for use in current practice. Immediate sharing of well documented individual participant data from covid-19 studies and collaboration are urgently needed to develop more rigorous prediction models, and validate promising ones. The predictors identified in included models should be considered as candidate predictors for new models. Methodological guidance should be followed because unreliable predictions could cause more harm than benefit in guiding clinical decisions. Finally, studies should adhere to the TRIPOD (transparent reporting of a multivariable prediction model for individual prognosis or diagnosis) reporting guideline. 
Systematic review registration Protocol https://osf.io/ehc47/ , registration https://osf.io/wy245 . Readers’ note This article is a living systematic review that will be updated to reflect emerging evidence. Updates may occur for up to two years from the date of original publication. This version is update 2 of the original article published on 7 April 2020 ( BMJ 2020;369:m1328), and previous updates can be found as data supplements ( https://www.bmj.com/content/369/bmj.m1328/related#datasupp ).


Author(s):  
Laure Wynants
Ben Van Calster
Marc MJ Bonten
Gary S Collins
Thomas PA Debray
...  

Abstract Objective To review and critically appraise published and preprint reports of models that aim to predict either (i) presence of existing COVID-19 infection, (ii) future complications in individuals already diagnosed with COVID-19, or (iii) individuals at high risk of COVID-19 in the general population. Design Rapid systematic review and critical appraisal of prediction models for diagnosis or prognosis of COVID-19 infection. Data sources PubMed, EMBASE via Ovid, arXiv, medRxiv, and bioRxiv until 24 March 2020. Study selection Studies that developed or validated a multivariable COVID-19 related prediction model. Two authors independently screened titles, abstracts, and full texts. Data extraction Data from included studies were extracted independently by at least two authors based on the CHARMS checklist, and risk of bias was assessed using PROBAST. Data were extracted on participants, predictors, outcomes, data analysis, and prediction model performance. Results 2,696 titles were screened. Of these, 27 studies describing 31 prediction models were included for data extraction and critical appraisal. We identified three models to predict hospital admission from pneumonia and other events (as a proxy for COVID-19 pneumonia) in the general population; 18 diagnostic models to detect COVID-19 infection in symptomatic individuals (13 of which were machine learning models using computed tomography (CT) results); and ten prognostic models for predicting mortality risk, progression to a severe state, or length of hospital stay. Only one of these studies used data on COVID-19 cases from outside China. The most frequently reported predictors of the presence of COVID-19 in suspected patients included age, body temperature, and signs and symptoms. The most frequently reported predictors of severe prognosis in infected patients included age, sex, features derived from CT, C-reactive protein, lactate dehydrogenase, and lymphocyte count. C-index estimates for the prediction models ranged from 0.73 to 0.81 in those for the general population (reported for all three general population models), from 0.81 to more than 0.99 in those for diagnosis (reported for 13 of the 18 diagnostic models), and from 0.85 to 0.98 in those for prognosis (reported for six of the ten prognostic models). All studies were rated at high risk of bias, mostly because of non-representative selection of control patients, exclusion of patients who had not experienced the event of interest by the end of the study, and poor statistical analysis, including high risk of model overfitting. Reporting quality varied substantially between studies. A description of the study population and intended use of the models was absent in almost all reports, and calibration of predictions was rarely assessed. Conclusion COVID-19 related prediction models are quickly entering the academic literature to support medical decision making at a time when this is urgently needed. Our review indicates that proposed models are poorly reported and at high risk of bias. Thus, their reported performance is likely optimistic, and using them to support medical decision making is not advised. We call for immediate sharing of the individual participant data from COVID-19 studies to support collaborative efforts in building more rigorously developed prediction models and validating (evaluating) existing models. The predictors identified in multiple included studies could be considered as candidate predictors for new models.
We also stress the need to follow methodological guidance when developing and validating prediction models, as unreliable predictions may cause more harm than benefit when used to guide clinical decisions. Finally, studies should adhere to the TRIPOD statement to facilitate validating, appraising, advocating for, and clinically using the reported models. Systematic review registration Protocol: osf.io/ehc47/; registration: osf.io/wy245. Summary boxes
What is already known on this topic:
- The sharp recent increase in COVID-19 infections has put a strain on healthcare systems worldwide, necessitating efficient early detection, diagnosis of patients suspected of infection, and prognostication of confirmed COVID-19 cases.
- Viral nucleic acid testing and chest CT are standard methods for diagnosing COVID-19, but are time-consuming.
- Earlier reports suggest that the elderly, patients with comorbidity (COPD, cardiovascular disease, hypertension), and patients presenting with dyspnoea are vulnerable to more severe morbidity and mortality after COVID-19 infection.
What this study adds:
- We identified three models to predict hospital admission from pneumonia and other events (as a proxy for COVID-19 pneumonia) in the general population.
- We identified 18 diagnostic models for COVID-19 detection in symptomatic patients; 13 of these were machine learning models based on CT images.
- We identified ten prognostic models for COVID-19 infected patients, of which six aimed to predict mortality risk in confirmed or suspected COVID-19 patients, two aimed to predict progression to a severe or critical state, and two aimed to predict a hospital stay of more than 10 days from admission.
- Included studies were poorly reported, compromising their subsequent appraisal and recommendation for use in daily practice. All studies were appraised at high risk of bias, raising concern that the models may be flawed and perform poorly when applied in practice, such that their predictions may be unreliable.


BMJ Open
2021
Vol 11 (5)
pp. e044170
Author(s):
Gustav Valentin Gade
Martin Grønbech Jørgensen
Jesper Ryg
Johannes Riis
Katja Thomsen
...  

Objective To systematically review and critically appraise prognostic models for falls in community-dwelling older adults. Eligibility criteria Prospective cohort studies with any follow-up period. Studies had to develop or validate multifactorial prognostic models for falls in community-dwelling older adults (60+ years). Models had to be applicable for screening in a general population setting. Information sources MEDLINE, EMBASE, CINAHL, The Cochrane Library, PsycINFO, and Web of Science for studies published in English, Danish, Norwegian, or Swedish until January 2020. Sources also included trial registries, clinical guidelines, and reference lists of included papers, along with contacting clinical experts to locate published studies. Data extraction and risk of bias Two authors performed all review stages independently. Data extraction followed the Critical Appraisal and Data Extraction for Systematic Reviews of Prediction Modelling Studies (CHARMS) checklist. Risk of bias assessments on participants, predictors, outcomes, and analysis methods followed the Prediction study Risk Of Bias Assessment Tool (PROBAST). Results After screening 11,789 studies, 30 were eligible for inclusion (n = 86,369 participants). The median age of participants ranged from 67.5 to 83.0 years. Falls incidence varied from 5.9% to 59%. Included studies reported 69 developed and three validated prediction models. The most frequent falls predictors were prior falls, age, sex, measures of gait, balance and strength, and vision and disability. The area under the curve (AUC) was available for 40 (55.6%) models, ranging from 0.49 to 0.87. The AUC of the validated models ranged from 0.62 to 0.69. All models had a high risk of bias, mostly due to limitations in statistical methods, outcome assessments, and restrictive eligibility criteria. Conclusions An abundance of prognostic models for falls risk has been developed, but with a wide range of discriminatory performance. All models exhibited a high risk of bias, rendering them unreliable for prediction in clinical practice. Future prognostic prediction models should comply with recent recommendations such as the Transparent Reporting of a multivariable prediction model for Individual Prognosis Or Diagnosis (TRIPOD) statement. PROSPERO registration number CRD42019124021.


Author(s):  
Jacopo Burrello
Martina Amongero
Fabrizio Buffolo
Elisa Sconfienza
Vittorio Forestiero
...  

Abstract Context The diagnostic work-up of primary aldosteronism (PA) includes screening and confirmation steps. Case confirmation is time-consuming and expensive, and there is no consensus on the tests and thresholds to be used. Diagnostic algorithms that avoid confirmatory testing may therefore be useful for the management of patients with PA. Objective Development and validation of diagnostic models to confirm or exclude a diagnosis of PA in patients with a positive screening test. Design, Patients and Setting We evaluated 1,024 patients who underwent confirmatory testing for PA. The diagnostic models were developed in a training cohort (n = 522) and then tested on an internal validation cohort (n = 174) and on an independent external prospective cohort (n = 328). Main outcome measure Different diagnostic models and a 16-point score were developed by machine learning and regression analysis to discriminate patients with a confirmed diagnosis of PA. Results Male sex, antihypertensive medication, plasma renin activity, aldosterone, potassium levels, and presence of organ damage were associated with a confirmed diagnosis of PA. Machine learning-based models displayed an accuracy of 72.9-83.9%. The Primary Aldosteronism Confirmatory Testing (PACT) score correctly classified 84.1% of patients at training and 83.9% and 81.1% at internal and external validation, respectively. A flow chart employing the PACT score to select patients for confirmatory testing correctly managed all patients and resulted in a 22.8% reduction in the number of confirmatory tests. Conclusions The integration of diagnostic modelling algorithms into clinical practice may improve the management of patients with PA by circumventing unnecessary confirmatory testing.
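As an illustration of how a points-based confirmatory-testing score can drive a triage flow chart of the kind described in this abstract, the sketch below uses invented items, weights, and cut-offs; it is not the published PACT score, and the threshold values are purely hypothetical.

```python
# Illustrative sketch of a points-based triage score with two decision thresholds,
# in the spirit of the confirmatory-testing flow chart described above. The items,
# weights, and cut-offs below are invented for illustration and are NOT the
# published PACT score.
from dataclasses import dataclass

@dataclass
class Patient:
    male: bool
    on_antihypertensives: bool
    low_potassium: bool
    suppressed_renin: bool
    organ_damage: bool

def toy_score(p: Patient) -> int:
    """Sum integer points over binary items (hypothetical weights)."""
    return (2 * p.male
            + 1 * p.on_antihypertensives
            + 3 * p.low_potassium
            + 4 * p.suppressed_renin
            + 2 * p.organ_damage)

def triage(score: int, exclude_below: int = 3, confirm_at_or_above: int = 8) -> str:
    """Three-way decision: exclude PA, send to confirmatory testing, or confirm PA."""
    if score < exclude_below:
        return "PA unlikely - no confirmatory test"
    if score >= confirm_at_or_above:
        return "PA confirmed without further testing"
    return "intermediate - perform confirmatory test"

print(triage(toy_score(Patient(True, True, False, True, False))))
```

The point of such a scheme is that only patients in the intermediate zone are sent for a confirmatory test, which is how a score-based flow chart can reduce the number of tests performed.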


2022
Vol 8
Author(s):
Jinzhang Li
Ming Gong
Yashutosh Joshi
Lizhong Sun
Lianjun Huang
...  

Background Acute renal failure (ARF) is the most common major complication following cardiac surgery for acute aortic syndrome (AAS) and worsens the postoperative prognosis. Our aim was to establish a machine learning prediction model for the occurrence of ARF in AAS patients. Methods We included AAS patient data from nine medical centers (n = 1,637) and analyzed the incidence of ARF and the risk factors for postoperative ARF. We used data from six medical centers to compare the performance of four machine learning models and performed internal validation to identify AAS patients who developed postoperative ARF. The area under the curve (AUC) of the receiver operating characteristic (ROC) curve was used to compare the performance of the predictive models. We compared the performance of the optimal machine learning prediction model with that of traditional prediction models. Data from three medical centers were used for external validation. Results The eXtreme Gradient Boosting (XGBoost) algorithm performed best in the internal validation process (AUC = 0.82), outperforming both the logistic regression (LR) prediction model (AUC = 0.77, p < 0.001) and the traditional scoring systems. Upon external validation, the XGBoost prediction model (AUC = 0.81) also performed better than both the LR prediction model (AUC = 0.75, p = 0.03) and the traditional scoring systems. We created an online application based on the XGBoost prediction model. Conclusions We have developed a machine learning model with better predictive performance than traditional LR prediction models and other existing risk scoring systems for postoperative ARF. The model can be used to provide early warnings when high-risk patients are identified, enabling clinicians to take prompt measures.
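A minimal sketch of the kind of comparison reported above, assuming synthetic data and the xgboost Python package: an XGBoost classifier and a logistic regression are trained on pooled "development" centers and compared by AUC on an internal hold-out and on a separate "external" cohort. All cohorts, features, and hyperparameters here are stand-ins, not the study's data or settings.

```python
# Sketch: compare XGBoost and logistic regression by AUC on internal and external data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score
from xgboost import XGBClassifier

rng = np.random.default_rng(42)
coef = rng.normal(size=8)  # shared data-generating coefficients

def make_cohort(n, noise=0.0):
    """Simulate patients: 8 preoperative features and a binary ARF label."""
    X = rng.normal(size=(n, 8))
    logits = X @ coef + noise * rng.normal(size=n) - 0.5
    y = rng.binomial(1, 1 / (1 + np.exp(-logits)))
    return X, y

# Six development centers pooled, three external centers pooled (case mix differs slightly).
X_dev, y_dev = make_cohort(1100)
X_ext, y_ext = make_cohort(500, noise=0.5)

X_train, X_val, y_train, y_val = train_test_split(X_dev, y_dev, test_size=0.25, random_state=0)

models = {
    "LR": LogisticRegression(max_iter=1000),
    "XGBoost": XGBClassifier(n_estimators=200, max_depth=3, learning_rate=0.1,
                             eval_metric="logloss"),
}

for name, model in models.items():
    model.fit(X_train, y_train)
    auc_int = roc_auc_score(y_val, model.predict_proba(X_val)[:, 1])
    auc_ext = roc_auc_score(y_ext, model.predict_proba(X_ext)[:, 1])
    print(f"{name}: internal AUC = {auc_int:.2f}, external AUC = {auc_ext:.2f}")
```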


2021
Vol 8
Author(s):
Ming-Hui Hung
Ling-Chieh Shih
Yu-Ching Wang
Hsin-Bang Leu
Po-Hsun Huang
...  

Objective: This study aimed to develop machine learning-based prediction models to predict masked hypertension and masked uncontrolled hypertension using the clinical characteristics of patients at a single outpatient visit. Methods: Data were derived from two cohorts in Taiwan. The first cohort included 970 hypertensive patients recruited from six medical centers between 2004 and 2005, which were split into a training set (n = 679), a validation set (n = 146), and a test set (n = 145) for model development and internal validation. The second cohort included 416 hypertensive patients recruited from a single medical center between 2012 and 2020 and was used for external validation. We used 33 clinical characteristics as candidate variables to develop models based on logistic regression (LR), random forest (RF), eXtreme Gradient Boosting (XGBoost), and an artificial neural network (ANN). Results: The four models featured high sensitivity and high negative predictive value (NPV) in internal validation (sensitivity = 0.914-1.000; NPV = 0.853-1.000) and external validation (sensitivity = 0.950-1.000; NPV = 0.875-1.000). The RF, XGBoost, and ANN models showed a much higher area under the receiver operating characteristic curve (AUC) (0.799-0.851 in internal validation, 0.672-0.837 in external validation) than the LR model. Among the models, the RF model, composed of six predictor variables, had the best overall performance in both internal and external validation (AUC = 0.851 and 0.837; sensitivity = 1.000 and 1.000; specificity = 0.609 and 0.580; NPV = 1.000 and 1.000; accuracy = 0.766 and 0.721, respectively). Conclusion: An effective machine learning-based predictive model that requires data from only a single clinic visit may help to identify masked hypertension and masked uncontrolled hypertension.
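The screening-style metrics quoted above (sensitivity, specificity, NPV, accuracy) can all be read off a confusion matrix at a chosen probability threshold. The sketch below does this for a random forest on synthetic data; the six features, the outcome simulation, and the 0.3 decision threshold are illustrative assumptions, not the study's model.

```python
# Sketch: fit a random forest and report sensitivity, specificity, NPV, and accuracy
# on a held-out set at a fixed probability threshold.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(7)
X = rng.normal(size=(970, 6))                      # six hypothetical predictors
y = rng.binomial(1, 1 / (1 + np.exp(-(X[:, 0] + 0.5 * X[:, 1]))))

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.15, random_state=1)

rf = RandomForestClassifier(n_estimators=300, random_state=1).fit(X_train, y_train)
prob = rf.predict_proba(X_test)[:, 1]
pred = (prob >= 0.3).astype(int)                   # low threshold favours sensitivity and NPV

tn, fp, fn, tp = confusion_matrix(y_test, pred).ravel()
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
npv = tn / (tn + fn)
accuracy = (tp + tn) / (tp + tn + fp + fn)
print(f"sens={sensitivity:.3f} spec={specificity:.3f} NPV={npv:.3f} acc={accuracy:.3f}")
```

Lowering the threshold trades specificity for sensitivity and NPV, which is the usual choice when the model is intended as a rule-out screening tool.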


2021
Author(s):
Jamie L. Miller
Masafumi Tada
Michihiko Goto
Nicholas Mohr
Sangil Lee

ABSTRACT Background Throughout 2020, coronavirus disease 2019 (COVID-19) has been a threat to public health at national and global levels. There has been an immediate need for research to understand the clinical signs and symptoms of COVID-19 that can help predict deterioration, including mechanical ventilation, organ support, and death. Studies thus far have addressed the epidemiology of the disease, common presentations, and susceptibility to acquisition and transmission of the virus; however, an accurate prognostic model for severe manifestations of COVID-19 is still needed because of the limited healthcare resources available. Objective This systematic review aims to evaluate published reports of prediction models for severe illness caused by COVID-19. Methods Searches were developed by the primary author and a medical librarian using an iterative process of gathering and evaluating terms. Comprehensive strategies, including both index and keyword methods, were devised for PubMed and EMBASE. Data on confirmed COVID-19 patients from randomized controlled studies, cohort studies, and case-control studies published between January 2020 and July 2020 were retrieved. Studies were independently assessed for risk of bias and applicability using the Prediction Model Risk Of Bias Assessment Tool (PROBAST). We collected study type, setting, sample size, type of validation, and outcomes including intubation, ventilation, any other type of organ support, or death. The prediction models, scoring systems, performance of the predictive models, and geographic locations were summarized. Results A primary review found 292 articles relevant based on title and abstract. After further review, 246 were excluded based on the defined inclusion and exclusion criteria. Forty-six articles were included in the qualitative analysis. Inter-observer agreement on inclusion was 0.86 (95% confidence interval: 0.79-0.93). When the PROBAST tool was applied, 44 of the 46 articles were identified as having high or unclear risk of bias, or high or unclear concern for applicability. Two studies reported prediction models, the 4C Mortality Score (derived from hospital data) and QCOVID (derived from general population data from the UK), that were rated as having low risk of bias and low concern for applicability. Conclusion Several prognostic models are reported in the literature, but many of them have concerning risk of bias and applicability. For most of these models, caution is needed before use, as many will require external validation before dissemination. However, the two articles found to have low risk of bias and low concern for applicability may be useful tools.


Author(s):  
Jet M. J. Vonk
Jacoba P. Greving
Vilmundur Gudnason
Lenore J. Launer
Mirjam I. Geerlings

Abstract We aimed to evaluate the external performance of prediction models for all-cause dementia or Alzheimer's disease (AD) in the general population, which can aid selection of high-risk individuals for clinical trials and prevention. We identified 17 out of 36 eligible published prognostic models for external validation in the population-based AGES-Reykjavik Study. Predictive performance was assessed with c statistics and calibration plots. All five models with a c statistic greater than 0.75 (0.76-0.81) contained cognitive testing as a predictor, while all models with lower c statistics (0.67-0.75) did not. Calibration ranged from good to poor across all models, including systematic risk overestimation or overestimation particularly for the highest-risk group. Models that overestimate risk may be acceptable for exclusion purposes, but lack the ability to accurately identify individuals at higher dementia risk. Both updating existing models and developing new models aimed at identifying high-risk individuals, as well as more external validation studies of dementia prediction models, are warranted.
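The two checks used in this external validation, discrimination (the c statistic, which equals the AUC for a binary outcome) and calibration plots of predicted versus observed risk, can be reproduced in a few lines. The sketch below uses synthetic predicted risks and outcomes, with deliberate risk overestimation built in to mimic the kind of miscalibration described; it assumes scikit-learn and matplotlib, not any of the reviewed models.

```python
# Sketch: discrimination (c statistic) and a decile-based calibration plot
# for a previously published model applied to a new cohort (synthetic data).
import numpy as np
import matplotlib.pyplot as plt
from sklearn.calibration import calibration_curve
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(3)

# Pretend these are risks produced by an existing model in a new cohort.
predicted_risk = rng.beta(2, 8, size=2000)
# Simulate outcomes from lower true risks, so the model systematically overestimates.
observed = rng.binomial(1, np.clip(predicted_risk * 0.7, 0, 1))

c_statistic = roc_auc_score(observed, predicted_risk)
print(f"c statistic = {c_statistic:.2f}")

# Group predictions into deciles and compare mean predicted risk with observed frequency.
obs_frac, mean_pred = calibration_curve(observed, predicted_risk, n_bins=10, strategy="quantile")
plt.plot(mean_pred, obs_frac, marker="o", label="model")
plt.plot([0, 1], [0, 1], linestyle="--", label="perfect calibration")
plt.xlabel("Mean predicted risk")
plt.ylabel("Observed event frequency")
plt.legend()
plt.show()
```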


2021
Vol 10
Author(s):
Yannan Bai
Yuane Lian
Xiaoping Chen
Jiayi Wu
Jianlin Lai
...  

Hepatocellular carcinoma (HCC) is the third most lethal cancer worldwide; however, accurate prognostic tools are still lacking. We aimed to identify an immunohistochemistry (IHC)-based signature as a prognostic classifier to predict recurrence and survival in patients with Barcelona Clinic Liver Cancer (BCLC) early- and intermediate-stage HCC. In total, 567 patients who underwent curative liver resection at two independent centers were enrolled. The least absolute shrinkage and selection operator (LASSO) regression model was used to identify significant IHC features, and penalized Cox regression was used to further narrow down the features in the training cohort (n = 201). The candidate IHC features were validated in internal (n = 101) and external (n = 265) validation cohorts. Three IHC features, hepatocyte paraffin antigen 1, CD34, and Ki-67, were identified as candidate predictors for recurrence-free survival (RFS) and were used to categorize patients into low- and high-risk recurrence groups in the training cohort (P < 0.001). The discriminative performance of the 3-IHC_based classifier was validated using the internal and external cohorts (P < 0.001). Furthermore, we developed a 3-IHC_based nomogram integrating the BCLC stage, microvascular invasion, and the 3-IHC_based classifier to predict 2- and 5-year RFS in the training cohort; this nomogram exhibited acceptable area under the curve values for the training, internal validation, and external validation cohorts (2-year: 0.817, 0.787, and 0.810; 5-year: 0.726, 0.662, and 0.715, respectively). The newly developed 3-IHC_based classifier can effectively predict recurrence and survival in patients with early- and intermediate-stage HCC after curative liver resection.
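In the spirit of the LASSO and penalized Cox selection step described above, the sketch below fits an L1-penalised Cox model to a hypothetical IHC marker panel and keeps the markers with non-zero coefficients. The marker names, simulated data, and penalty value are illustrative assumptions, and it assumes the lifelines Python package; it is not the study's analysis pipeline.

```python
# Sketch: L1-penalised (LASSO-style) Cox regression to shrink an IHC marker panel
# down to a short signature for recurrence-free survival.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(11)
n = 200
markers = ["HepPar1", "CD34", "Ki67", "markerA", "markerB"]  # hypothetical panel

df = pd.DataFrame(rng.normal(size=(n, len(markers))), columns=markers)
# Simulate survival times driven by the first three markers only (toy model).
hazard = np.exp(0.6 * df["HepPar1"] - 0.5 * df["CD34"] + 0.7 * df["Ki67"])
df["time"] = rng.exponential(1.0 / hazard)
df["recurrence"] = rng.binomial(1, 0.7, size=n)  # 1 = recurrence observed, 0 = censored

# The L1 penalty (l1_ratio=1.0) shrinks uninformative coefficients towards zero.
cph = CoxPHFitter(penalizer=0.1, l1_ratio=1.0)
cph.fit(df, duration_col="time", event_col="recurrence")

selected = cph.params_[cph.params_.abs() > 1e-3].index.tolist()
print("candidate signature markers:", selected)
```

In practice the penalty strength would be chosen by cross-validation, and the retained markers would then feed a simpler classifier or nomogram, as described in the abstract.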

