Review on Diagnosis of COVID-19 from Chest CT Images Using Artificial Intelligence

2020 ◽  
Vol 2020 ◽  
pp. 1-10 ◽  
Author(s):  
Ilker Ozsahin ◽  
Boran Sekeroglu ◽  
Musa Sani Musa ◽  
Mubarak Taiwo Mustapha ◽  
Dilber Uzun Ozsahin

The COVID-19 diagnostic approach is mainly divided into two broad categories: laboratory-based approaches and chest radiography. The last few months have witnessed a rapid increase in the number of studies that use artificial intelligence (AI) techniques to diagnose COVID-19 with chest computed tomography (CT). In this study, we review AI-based approaches to diagnosing COVID-19 from chest CT. We searched ArXiv, MedRxiv, and Google Scholar using the terms “deep learning”, “neural networks”, “COVID-19”, and “chest CT”. At the time of writing (August 24, 2020), nearly 100 studies had been published, of which 30 were selected for this review. We categorized the studies based on the classification tasks: COVID-19/normal, COVID-19/non-COVID-19, COVID-19/non-COVID-19 pneumonia, and severity. The reported sensitivity, specificity, precision, accuracy, area under the curve, and F1 score reached values as high as 100%, 100%, 99.62%, 99.87%, 100%, and 99.5%, respectively. However, the presented results should be compared with care, given the different degrees of difficulty of the different classification tasks.
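The metrics compared across these studies all derive from the same confusion-matrix counts. A minimal sketch of their definitions (the function name and interface are illustrative, not taken from any reviewed paper):

```python
def classification_metrics(tp, fp, tn, fn):
    """Standard diagnostic metrics from confusion-matrix counts.

    tp/fp/tn/fn: true/false positive and negative counts.
    """
    sensitivity = tp / (tp + fn)          # recall: fraction of positives found
    specificity = tn / (tn + fp)          # fraction of negatives correctly ruled out
    precision = tp / (tp + fp)            # fraction of positive calls that are right
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return sensitivity, specificity, precision, accuracy, f1
```

Because each metric weights errors differently, a model can score 100% on one while trailing on another, which is one reason cross-study comparisons need care.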

2020 ◽  
Vol 6 (4) ◽  
pp. 00079-2020
Author(s):  
Masahiro Nemoto ◽  
Kei Nakashima ◽  
Satoshi Noma ◽  
Yuya Matsue ◽  
Kazuki Yoshida ◽  
...  

Background: Chest computed tomography (CT) is commonly used to diagnose pneumonia in Japan, but its usefulness for prognostic prediction is not obvious. We modified the CURB-65 (confusion, urea >7 mmol·L−1, respiratory rate ≥30 breaths·min−1, blood pressure <90 mmHg systolic or ≤60 mmHg diastolic, age ≥65 years) and A-DROP scores with CT information and evaluated their ability to predict mortality in community-acquired pneumonia patients. Methods: This study was conducted using a prospective registry of the Adult Pneumonia Study Group – Japan. Of the 791 registry patients, 265 hospitalised patients with chest CT were evaluated. Chest CT-modified CURB-65 scores were developed with the first 30 study patients. The 30-day mortality predictability of the CT-modified, chest radiography-modified, and original CURB-65 scores was validated. Results: In score development, infiltrates over four lobes and pleural effusion on CT added extra points to the CURB-65 score. The area under the curve for CT-modified CURB-65 scores was significantly higher than that of chest radiography-modified or original CURB-65 scores (both p<0.001). The optimal cut-off CT-modified CURB-65 score was ≥4 (positive predictive value 80.8%; negative predictive value 78.6%, for 30-day mortality). In sensitivity analyses, chest CT-modified A-DROP scores also demonstrated better prognostic value than chest radiography-modified and original A-DROP scores. Poor physical status, chronic heart failure and multiple infiltrates hampered chest radiography evaluation. Conclusion: Chest CT modification of the CURB-65 or A-DROP score improved prognostic predictability relative to the unmodified scores. In particular, in patients with poor physical status or chronic heart failure, CT findings offer a significant advantage. Therefore, CT can be used to enhance prognosis prediction.
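The scoring logic is simple to express in code. A sketch assuming one point per standard CURB-65 criterion, with hypothetical +1 weights for the two CT findings (the study derived its own extra-point values during score development, which are not reproduced here):

```python
def curb65(confusion, urea_mmol_l, resp_rate, sys_bp, dia_bp, age):
    # One point for each positive CURB-65 criterion.
    return sum([
        confusion,                     # new-onset confusion
        urea_mmol_l > 7,               # urea > 7 mmol/L
        resp_rate >= 30,               # respiratory rate >= 30 breaths/min
        sys_bp < 90 or dia_bp <= 60,   # low blood pressure
        age >= 65,                     # age >= 65 years
    ])

def ct_modified_curb65(base_score, infiltrates_over_four_lobes, pleural_effusion):
    # Hypothetical +1 per CT finding; the study assigns its own point values.
    return base_score + int(infiltrates_over_four_lobes) + int(pleural_effusion)
```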


2022 ◽  
Vol 22 (1) ◽  
Author(s):  
Min Liu ◽  
Shimin Wang ◽  
Hu Chen ◽  
Yunsong Liu

Abstract Background: Recently, there has been considerable innovation in artificial intelligence (AI) for healthcare. Convolutional neural networks (CNNs) show excellent object detection and classification performance. This study assessed the accuracy of an AI application for the detection of marginal bone loss on periapical radiographs. Methods: A Faster region-based convolutional neural network (Faster R-CNN) was trained. Overall, 1670 periapical radiographic images were divided into training (n = 1370), validation (n = 150), and test (n = 150) datasets. The system was evaluated in terms of sensitivity, specificity, the mistake diagnostic rate, the omission diagnostic rate, and the positive predictive value. Kappa (κ) statistics were compared between the system and dental clinicians. Results: The evaluation metrics of the AI system were comparable to those of resident dentists. The agreement between the AI system and experts was moderate to substantial (κ = 0.547 and 0.568 for bone loss sites and bone loss implants, respectively) for detecting marginal bone loss around dental implants. Conclusions: This AI system, based on Faster R-CNN analysis of periapical radiographs, is a highly promising auxiliary diagnostic tool for peri-implant bone loss detection.
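Cohen's κ, used here to quantify agreement between the AI system and clinicians, corrects raw agreement for the agreement expected by chance. A minimal sketch (illustrative interface, not the study's actual analysis code):

```python
def cohens_kappa(raters_a, raters_b):
    """Cohen's kappa for two raters labelling the same items."""
    n = len(raters_a)
    # Observed agreement: fraction of items both raters label identically.
    p_observed = sum(a == b for a, b in zip(raters_a, raters_b)) / n
    # Chance agreement: product of marginal label frequencies, summed over labels.
    labels = set(raters_a) | set(raters_b)
    p_chance = sum(
        (raters_a.count(c) / n) * (raters_b.count(c) / n) for c in labels
    )
    return (p_observed - p_chance) / (1 - p_chance)
```

Values of 0.41–0.60 are conventionally read as moderate agreement and 0.61–0.80 as substantial, matching the interpretation given above.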


2020 ◽  
Vol 6 (1) ◽  
Author(s):  
Malte Seemann ◽  
Lennart Bargsten ◽  
Alexander Schlaefer

Abstract Deep learning methods produce promising results when applied to a wide range of medical imaging tasks, including segmentation of the artery lumen in computed tomography angiography (CTA) data. However, to perform well, neural networks have to be trained on large amounts of high-quality annotated data. In the realm of medical imaging, annotations are not only scarce but also often not entirely reliable. To tackle both challenges, we developed a two-step approach for generating realistic synthetic CTA data for the purpose of data augmentation. In the first step, moderately realistic images are generated in a purely numerical fashion. In the second step, these images are improved by applying neural domain adaptation. We evaluated the impact of the synthetic data on lumen segmentation with convolutional neural networks (CNNs) by comparing the resulting performances. Improvements of up to 5% in Dice coefficient and 20% in Hausdorff distance represent a proof of concept that the proposed augmentation procedure can be used to enhance deep learning-based segmentation of the artery lumen in CTA images.
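The two evaluation measures named above can be sketched for binary masks represented as sets of foreground pixel coordinates (a simplified illustration under that representation; the paper's actual evaluation pipeline is not specified here):

```python
import math

def dice_coefficient(mask_a, mask_b):
    """Overlap measure 2|A∩B| / (|A| + |B|); 1.0 means identical masks."""
    return 2 * len(mask_a & mask_b) / (len(mask_a) + len(mask_b))

def hausdorff_distance(points_a, points_b):
    """Largest distance from any point in one set to its nearest point in the other."""
    def directed(src, dst):
        return max(min(math.dist(p, q) for q in dst) for p in src)
    return max(directed(points_a, points_b), directed(points_b, points_a))
```

Dice rewards bulk overlap, while the Hausdorff distance penalizes the single worst boundary error, so improvements in the two need not move in lockstep.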


Endoscopy ◽  
2020 ◽  
Author(s):  
Alanna Ebigbo ◽  
Robert Mendel ◽  
Tobias Rückert ◽  
Laurin Schuster ◽  
Andreas Probst ◽  
...  

Background and aims: The accurate differentiation between T1a and T1b Barrett's cancer has both therapeutic and prognostic implications but is challenging even for experienced physicians. We trained an artificial intelligence (AI) system based on deep artificial neural networks (deep learning) to differentiate between T1a and T1b Barrett's cancer on white-light images. Methods: Endoscopic images from three tertiary care centres in Germany were collected retrospectively. A deep learning system was trained and tested using the principles of cross-validation. A total of 230 white-light endoscopic images (108 T1a and 122 T1b) were evaluated with the AI system. For comparison, the images were also classified by experts specialized in the endoscopic diagnosis and treatment of Barrett's cancer. Results: The sensitivity, specificity, F1 score, and accuracy of the AI system in differentiating between T1a and T1b cancer lesions were 0.77, 0.64, 0.73, and 0.71, respectively. There was no statistically significant difference between the performance of the AI system and that of human experts, whose sensitivity, specificity, F1 score, and accuracy were 0.63, 0.78, 0.67, and 0.70, respectively. Conclusion: This pilot study demonstrates the first multicenter application of an AI-based system for predicting submucosal invasion in endoscopic images of Barrett's cancer. The AI system scored on par with international experts in the field, but more work is necessary to improve the system and apply it to video sequences and real-life settings. Nevertheless, the correct prediction of submucosal invasion in Barrett's cancer remains challenging for both experts and AI.


Author(s):  
Shimaa Farghaly ◽  
Marwa Makboul

Abstract Background: Coronavirus disease 2019 (COVID-19) is the most recent global health emergency, and early diagnosis is very important for rapid clinical intervention and patient isolation. Chest computed tomography (CT) plays an important role in screening, diagnosis, and evaluating the progress of the disease. According to the results of different studies, given the high severity of the disease, clinicians should be aware of the different potential risk factors associated with a fatal outcome. The chest CT severity scoring system was therefore designed for semi-quantitative assessment of the severity of lung disease in COVID-19 patients, ranking pulmonary involvement on a 25-point severity scale according to the extent of lung abnormalities. This study retrospectively evaluates the relationship between age and severity of COVID-19 in both sexes based on the chest CT severity scoring system. Results: Age group C (40–49 years) was the age group most commonly affected by COVID-19, at 21.3%, while the least affected was group F (≥70 years), at only 6.4%. Regarding the COVID-RADS classification, COVID-RADS-3 was the most common category in both sexes across all age groups. The total CT severity lung score had a strong positive significant correlation with patient age (r = 0.64, P < 0.001). A strong positive significant correlation was also observed between CT severity lung score and age in males (r = 0.59, P < 0.001) and in females (r = 0.69, P < 0.001). Conclusion: We conclude that age can be considered a significant risk factor for the severity of COVID-19 in both sexes. In addition, CT can be used as a significant diagnostic tool for the diagnosis of COVID-19 and the evaluation of the progression and severity of the disease.
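The reported r values are Pearson correlation coefficients between age and CT severity score. A minimal sketch of the computation (illustrative only; the statistics software used in the study is not stated):

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    # Covariance numerator and the two standard-deviation terms.
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = math.sqrt(sum((x - mean_x) ** 2 for x in xs))
    sd_y = math.sqrt(sum((y - mean_y) ** 2 for y in ys))
    return cov / (sd_x * sd_y)
```

The coefficient runs from −1 to +1; values around 0.6, as reported above, indicate a strong positive linear association.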


2021 ◽  
Vol 20 ◽  
pp. 153303382110163
Author(s):  
Danju Huang ◽  
Han Bai ◽  
Li Wang ◽  
Yu Hou ◽  
Lan Li ◽  
...  

With the massive use of computers, the growth and explosion of data have greatly promoted the development of artificial intelligence (AI). The rise of deep learning (DL) algorithms, such as convolutional neural networks (CNNs), has provided radiation oncologists with many promising tools that can simplify the complex radiotherapy process in the clinical work of radiation oncology, improve the accuracy and objectivity of diagnosis, and reduce the workload, thus enabling clinicians to spend more time on advanced decision-making tasks. As the development of DL moves closer to clinical practice, radiation oncologists will need to be more familiar with its principles to properly evaluate and use this powerful tool. In this paper, we explain the development and basic concepts of AI and discuss its application in radiation oncology according to the different task categories of DL algorithms. This work clarifies the possibilities for further development of DL in radiation oncology.


2021 ◽  
Vol 4 (1) ◽  
Author(s):  
Andre Esteva ◽  
Katherine Chou ◽  
Serena Yeung ◽  
Nikhil Naik ◽  
Ali Madani ◽  
...  

Abstract A decade of unprecedented progress in artificial intelligence (AI) has demonstrated the potential for many fields—including medicine—to benefit from the insights that AI techniques can extract from data. Here we survey recent progress in the development of modern computer vision techniques—powered by deep learning—for medical applications, focusing on medical imaging, medical video, and clinical deployment. We start by briefly summarizing a decade of progress in convolutional neural networks, including the vision tasks they enable, in the context of healthcare. Next, we discuss several example medical imaging applications that stand to benefit—including cardiology, pathology, dermatology, and ophthalmology—and propose new avenues for continued work. We then expand into general medical video, highlighting ways in which clinical workflows can integrate computer vision to enhance care. Finally, we discuss the challenges and hurdles facing real-world clinical deployment of these technologies.


2021 ◽  
Vol 12 (1) ◽  
Author(s):  
Lara Lloret Iglesias ◽  
Pablo Sanz Bellón ◽  
Amaia Pérez del Barrio ◽  
Pablo Menéndez Fernández-Miranda ◽  
David Rodríguez González ◽  
...  

Abstract Deep learning is nowadays at the forefront of artificial intelligence. More precisely, the use of convolutional neural networks has drastically improved the learning capabilities of computer vision applications, which can directly consider raw data without any prior feature extraction. Advanced methods in the machine learning field, such as adaptive momentum algorithms and dropout regularization, have dramatically improved the predictive ability of convolutional neural networks, which now outperform conventional fully connected neural networks. This work summarizes, in an intentionally didactic way, the main aspects of these cutting-edge techniques from a medical imaging perspective.
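Of the regularization techniques mentioned, dropout is the simplest to illustrate. A toy sketch of inverted dropout applied to a layer's activations (illustrative only, not code from the paper):

```python
import random

def dropout(activations, p_drop, training=True):
    """Inverted dropout: randomly zero units during training.

    Kept units are scaled by 1/(1 - p_drop) so the expected activation
    is unchanged, and no rescaling is needed at inference time.
    """
    if not training or p_drop == 0:
        return list(activations)
    keep = 1.0 - p_drop
    return [a / keep if random.random() < keep else 0.0 for a in activations]
```

Randomly silencing units prevents co-adaptation between them, which is why dropout acts as a regularizer.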


2021 ◽  
Vol 11 ◽  
Author(s):  
He Sui ◽  
Ruhang Ma ◽  
Lin Liu ◽  
Yaozong Gao ◽  
Wenhai Zhang ◽  
...  

Objective: To develop a deep learning-based model using esophageal thickness to detect esophageal cancer from unenhanced chest CT images. Methods: We retrospectively identified 141 patients with esophageal cancer and 273 patients negative for esophageal cancer (at the time of imaging) for model training. Unenhanced chest CT images were collected and used to build a convolutional neural network (CNN) model for diagnosing esophageal cancer. The CNN is a VB-Net segmentation network that segments the esophagus, automatically quantifies the thickness of the esophageal wall, and detects the positions of esophageal lesions. To validate this model, 52 false-negative cases and 48 normal cases were further collected as a second dataset. The average performance of three radiologists and that of the same radiologists aided by the model were compared. Results: The sensitivity and specificity of the esophageal cancer detection model were 88.8% and 90.9%, respectively, on the validation dataset. On the 52 missed esophageal cancer cases and the 48 normal cases, the sensitivity, specificity, and accuracy of the deep learning model were 69%, 61%, and 65%, respectively. The independent readings of the radiologists had sensitivities of 25%, 31%, and 27%; specificities of 78%, 75%, and 75%; and accuracies of 53%, 54%, and 53%. With the aid of the model, the radiologists' results improved to sensitivities of 77%, 81%, and 75%; specificities of 75%, 74%, and 74%; and accuracies of 76%, 77%, and 75%, respectively. Conclusions: A deep learning-based model can effectively detect esophageal cancer in unenhanced chest CT scans, improving the incidental detection of esophageal cancer.

