Detection of Incidental Esophageal Cancers on Chest CT by Deep Learning

2021 · Vol 11
Author(s): He Sui, Ruhang Ma, Lin Liu, Yaozong Gao, Wenhai Zhang, et al.

Objective: To develop a deep learning-based model that uses esophageal wall thickness to detect esophageal cancer on unenhanced chest CT images. Methods: We retrospectively identified 141 patients with esophageal cancer and 273 patients negative for esophageal cancer (at the time of imaging) for model training. Unenhanced chest CT images were collected and used to build a convolutional neural network (CNN) model for diagnosing esophageal cancer. The CNN is a VB-Net segmentation network that segments the esophagus, automatically quantifies the thickness of the esophageal wall, and detects the positions of esophageal lesions. To validate the model, a second dataset of 52 missed (false-negative) esophageal cancer cases and 48 normal cases was collected. The average performance of three radiologists was compared with that of the same radiologists aided by the model. Results: On the validation dataset, the sensitivity and specificity of the esophageal cancer detection model were 88.8% and 90.9%, respectively. On the 52 missed esophageal cancer cases and 48 normal cases, the model's sensitivity, specificity, and accuracy were 69%, 61%, and 65%, respectively. Reading independently, the three radiologists had sensitivities of 25%, 31%, and 27%; specificities of 78%, 75%, and 75%; and accuracies of 53%, 54%, and 53%. With the aid of the model, their results improved to sensitivities of 77%, 81%, and 75%; specificities of 75%, 74%, and 74%; and accuracies of 76%, 77%, and 75%, respectively. Conclusions: A deep learning-based model can effectively detect esophageal cancer on unenhanced chest CT scans and improve the incidental detection of esophageal cancer.
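As an illustration of the post-segmentation step this abstract describes, the sketch below estimates per-slice esophageal wall thickness from a binary segmentation mask and flags slices above a thickness threshold. The distance-transform heuristic, the 5 mm threshold, and the dummy mask are assumptions for illustration only; the VB-Net itself and the authors' thickness measurement are not reproduced here.

```python
# Minimal sketch: per-slice thickness estimate from a binary esophagus mask.
# The heuristic (twice the maximum inscribed-circle radius) and the threshold
# are illustrative assumptions, not the paper's implementation.
import numpy as np
from scipy import ndimage

def slice_thickness_mm(mask_2d, pixel_spacing_mm):
    """Approximate wall thickness on one axial slice as twice the maximum
    inscribed-circle radius of the segmented cross-section."""
    if not mask_2d.any():
        return 0.0
    dist = ndimage.distance_transform_edt(mask_2d)  # distance to background, in pixels
    return 2.0 * float(dist.max()) * pixel_spacing_mm

def flag_thickened_slices(mask_3d, pixel_spacing_mm, threshold_mm=5.0):
    """Return indices of axial slices whose estimated thickness exceeds threshold_mm."""
    return [z for z in range(mask_3d.shape[0])
            if slice_thickness_mm(mask_3d[z], pixel_spacing_mm) > threshold_mm]

# Example on a dummy mask volume (slices, rows, cols):
mask = np.zeros((4, 64, 64), dtype=bool)
mask[2, 20:40, 20:40] = True                 # a thick cross-section on slice 2
print(flag_thickened_slices(mask, pixel_spacing_mm=0.7))   # -> [2]
```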

Sensors · 2021 · Vol 21 (2) · pp. 455
Author(s): Hammam Alshazly, Christoph Linse, Erhardt Barth, Thomas Martinetz

This paper explores how well deep learning models trained on chest CT images can diagnose people infected with COVID-19 in a fast and automated process. To this end, we adopted advanced deep network architectures and proposed a transfer learning strategy using custom-sized inputs tailored to each deep architecture to achieve the best performance. We conducted extensive sets of experiments on two CT image datasets, namely, the SARS-CoV-2 CT-scan and the COVID19-CT. The results show superior performance for our models compared with previous studies. Our best models achieved average accuracy, precision, sensitivity, specificity, and F1-score values of 99.4%, 99.6%, 99.8%, 99.6%, and 99.4% on the SARS-CoV-2 dataset, and 92.9%, 91.3%, 93.7%, 92.2%, and 92.5% on the COVID19-CT dataset, respectively. For better interpretability of the results, we applied visualization techniques to provide visual explanations for the models' predictions. Visualizations of the learned features show well-separated clusters representing CT images of COVID-19 and non-COVID-19 cases. Moreover, the visualizations indicate that our models not only identify COVID-19 cases but also provide accurate localization of the COVID-19-associated regions, as indicated by well-trained radiologists.
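The transfer-learning strategy described above can be illustrated with a short sketch: a pretrained backbone gets a new two-class head, and the input pipeline resizes images to a custom size chosen per architecture. The backbones, input sizes, and normalization values below are illustrative assumptions (using the torchvision API with recent pretrained-weights enums), not the authors' exact configuration.

```python
# Sketch of transfer learning with per-architecture custom input sizes.
# Requires a recent torchvision (>= 0.13) for the weights enums.
import torch
import torch.nn as nn
from torchvision import models, transforms

CUSTOM_INPUT_SIZE = {"resnet50": 256, "densenet121": 224}  # hypothetical per-model sizes

def build_model(name="resnet50", num_classes=2):
    if name == "resnet50":
        model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
        model.fc = nn.Linear(model.fc.in_features, num_classes)       # new 2-class head
    elif name == "densenet121":
        model = models.densenet121(weights=models.DenseNet121_Weights.IMAGENET1K_V1)
        model.classifier = nn.Linear(model.classifier.in_features, num_classes)
    else:
        raise ValueError(f"unknown backbone: {name}")
    return model

def build_transform(name):
    size = CUSTOM_INPUT_SIZE[name]
    return transforms.Compose([
        transforms.Resize((size, size)),        # custom size tailored to the backbone
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406],
                             std=[0.229, 0.224, 0.225]),
    ])

model = build_model("resnet50")
x = torch.randn(1, 3, 256, 256)   # one dummy 3-channel CT slice at the custom size
print(model(x).shape)             # torch.Size([1, 2])
```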


2020
Author(s): Jinseok Lee

BACKGROUND The coronavirus disease (COVID-19) has spread explosively worldwide since the beginning of 2020. According to a multinational consensus statement from the Fleischner Society, computed tomography (CT) can be used as a relevant screening tool owing to its higher sensitivity for detecting early pneumonic changes. However, physicians are extremely busy fighting COVID-19 in this era of worldwide crisis. Thus, it is crucial to accelerate the development of an artificial intelligence (AI) diagnostic tool to support physicians. OBJECTIVE We aimed to rapidly develop an AI technique to diagnose COVID-19 pneumonia and differentiate it from non-COVID pneumonia and non-pneumonia diseases on CT. METHODS A simple 2D deep learning framework, named the fast-track COVID-19 classification network (FCONet), was developed to diagnose COVID-19 pneumonia based on a single chest CT image. FCONet was developed by transfer learning, using one of four state-of-the-art pre-trained deep learning models (VGG16, ResNet50, InceptionV3, or Xception) as a backbone. For training and testing of FCONet, we collected 3,993 chest CT images of patients with COVID-19 pneumonia, other pneumonia, and non-pneumonia diseases from Wonkwang University Hospital, Chonnam National University Hospital, and the Italian Society of Medical and Interventional Radiology public database. These CT images were split into a training set and a testing set at a ratio of 8:2. On the test dataset, the diagnostic performance of the four pre-trained FCONet models in diagnosing COVID-19 pneumonia was compared. In addition, we tested the FCONet models on an external testing dataset extracted from low-quality chest CT images of COVID-19 pneumonia embedded in recently published papers. RESULTS Of the four pre-trained FCONet models, ResNet50 showed excellent diagnostic performance (sensitivity 99.58%, specificity 100%, and accuracy 99.87%) and outperformed the other three pre-trained models on the testing dataset. On the external test dataset of low-quality CT images, the detection accuracy of the ResNet50 model was the highest (96.97%), followed by Xception, InceptionV3, and VGG16 (90.71%, 89.38%, and 87.12%, respectively). CONCLUSIONS FCONet, a simple 2D deep learning framework based on a single chest CT image, provides excellent diagnostic performance in detecting COVID-19 pneumonia. Based on our testing dataset, the ResNet50-based FCONet might be the best model, as it outperformed the other FCONet models based on VGG16, Xception, and InceptionV3.
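A minimal sketch of the recipe described in the METHODS section, under stated assumptions: a stratified 8:2 split of labeled CT slices, a pretrained backbone with its weights frozen, and a newly attached 3-class head (COVID-19 pneumonia, other pneumonia, non-pneumonia). The dummy tensors, backbone choice, and hyperparameters are placeholders, not the FCONet implementation.

```python
# Sketch: stratified 8:2 split plus transfer learning with a frozen backbone.
import numpy as np
import torch
import torch.nn as nn
from sklearn.model_selection import train_test_split
from torchvision import models

# Dummy stand-ins for labeled CT slices (3 classes); real data would come from files.
images = torch.randn(40, 3, 224, 224)
labels = torch.arange(40) % 3

# Stratified 8:2 split, as in the paper.
train_idx, test_idx = train_test_split(
    np.arange(len(labels)), test_size=0.2, stratify=labels.numpy(), random_state=42)
train_idx = torch.as_tensor(train_idx)

# Pretrained backbone (one of the candidate architectures), frozen except for
# a new 3-class classification head -- the basic transfer-learning step.
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
for p in model.parameters():
    p.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, 3)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on the training split.
model.train()
loss = criterion(model(images[train_idx]), labels[train_idx])
loss.backward()
optimizer.step()
print(float(loss))
```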


2021
Author(s): Indrajeet Kumar, Jyoti Rawat

Abstract The manual diagnostic tests performed in laboratories for a pandemic disease such as COVID-19 are time-consuming and require skill and expertise on the part of the performer to yield accurate results. They are also far from cost-effective, as test kits are expensive and well-equipped laboratories are required to conduct them. Thus, other means of diagnosing patients infected with SARS-CoV-2 (the virus responsible for COVID-19) must be explored. A radiographic method such as chest CT imaging is one such means that can be utilized for the diagnosis of COVID-19. The radiographic changes observed in the CT images of COVID-19 patients help in developing a deep learning-based method for the extraction of graphical features, which are then used for automated diagnosis of the disease ahead of laboratory-based testing. The proposed work presents an artificial intelligence (AI) based technique for rapid diagnosis of COVID-19 from volumetric CT images of a patient's chest by extracting visual features and passing them to a deep learning module. The proposed convolutional neural network is deployed to classify SARS-CoV-2-infected and non-infected subjects. The network uses 746 chest CT images, of which 349 belong to COVID-19-positive cases and the remaining 397 to COVID-19-negative cases. The extensive experiments achieved an accuracy of 98.4%, a sensitivity of 98.5%, a specificity of 98.3%, a precision of 97.1%, and an F1-score of 97.8%. The obtained results show outstanding performance in classifying COVID-19-positive and COVID-19-negative cases.
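For readers who want to see how the reported figures relate to one another, the sketch below computes accuracy, sensitivity, specificity, precision, and F1-score from a binary confusion matrix. The prediction arrays are dummy placeholders, not the study's outputs.

```python
# Sketch: the reported metrics derived from a binary confusion matrix.
import numpy as np
from sklearn.metrics import confusion_matrix

y_true = np.array([1, 1, 1, 0, 0, 0, 1, 0, 1, 0])  # 1 = COVID-19 positive, 0 = negative
y_pred = np.array([1, 1, 0, 0, 0, 0, 1, 1, 1, 0])  # hypothetical model outputs

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
accuracy    = (tp + tn) / (tp + tn + fp + fn)
sensitivity = tp / (tp + fn)            # recall on the positive (COVID-19) class
specificity = tn / (tn + fp)
precision   = tp / (tp + fp)
f1_score    = 2 * precision * sensitivity / (precision + sensitivity)

print(f"acc={accuracy:.3f} sens={sensitivity:.3f} spec={specificity:.3f} "
      f"prec={precision:.3f} f1={f1_score:.3f}")
```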


Author(s): Mostafa El Habib Daho, Amin Khouani, Mohammed El Amine Lazouni, Sidi Ahmed Mahmoudi

2020 · Vol 30 (12) · pp. 6517-6527
Author(s): Qianqian Ni, Zhi Yuan Sun, Li Qi, Wen Chen, Yi Yang, et al.

Diagnostics · 2020 · Vol 10 (9) · pp. 608
Author(s): Tomoyuki Fujioka, Marie Takahashi, Mio Mori, Junichi Tsuchiya, Emi Yamaga, et al.

The purpose of this study was to use the Coronavirus Disease 2019 (COVID-19) Reporting and Data System (CO-RADS) to evaluate the chest computed tomography (CT) images of patients suspected of having COVID-19, and to investigate its diagnostic performance and interobserver agreement. The Dutch Radiological Society developed CO-RADS as a diagnostic indicator for assessing suspicion of lung involvement of COVID-19 on a scale of 1 (very low) to 5 (very high). We retrospectively investigated 154 adult patients with clinically suspected COVID-19, between April and June 2020, who underwent chest CT and reverse transcription-polymerase chain reaction (RT-PCR) testing. The patients' average age was 61.3 years (range, 21–93), 101 were male, and 76 were RT-PCR positive. Using CO-RADS, four radiologists evaluated the chest CT images. Sensitivity, specificity, and area under the receiver operating characteristic curve (AUC) were calculated. Interobserver agreement was calculated using the intraclass correlation coefficient (ICC) by comparing each individual reader's score to the median of the remaining three radiologists. The average sensitivity was 87.8% (range, 80.2–93.4%), specificity was 66.4% (range, 51.3–84.5%), and AUC was 0.859 (range, 0.847–0.881); there was no significant difference between the readers (p > 0.200). In 325 (52.8%) of 616 observations, there was absolute agreement among observers. The average ICC of the readers was 0.840 (range, 0.800–0.874; p < 0.001). CO-RADS is a categorical taxonomic evaluation scheme for COVID-19 pneumonia on chest CT images that provides outstanding diagnostic performance and substantial to almost-perfect interobserver agreement for predicting COVID-19.
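A minimal sketch of the per-reader evaluation described above, under stated assumptions: each reader's ordinal CO-RADS score (1 to 5) is used directly as the decision variable for an ROC analysis against the RT-PCR result, and the median of the other three readers' scores is formed as the reference used for agreement. The score matrix is a dummy placeholder, and the ICC computation itself is not reproduced here.

```python
# Sketch: per-reader ROC AUC from ordinal CO-RADS scores, plus the
# median-of-the-other-readers reference used for agreement analysis.
import numpy as np
from sklearn.metrics import roc_auc_score

rt_pcr = np.array([1, 1, 0, 0, 1, 0, 1, 0])   # 1 = RT-PCR positive (dummy data)
# rows = patients, columns = the four readers' CO-RADS scores (1-5)
scores = np.array([
    [5, 4, 5, 5],
    [4, 5, 4, 3],
    [2, 1, 2, 2],
    [1, 1, 2, 1],
    [5, 5, 4, 5],
    [3, 2, 2, 3],
    [4, 4, 5, 4],
    [1, 2, 1, 1],
])

for r in range(scores.shape[1]):
    auc = roc_auc_score(rt_pcr, scores[:, r])                       # ordinal score as decision variable
    others_median = np.median(np.delete(scores, r, axis=1), axis=1) # reference for agreement
    print(f"reader {r + 1}: AUC = {auc:.3f}, median of others = {others_median}")
```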


Author(s): T. Maria Patricia Peeris, Prof. P. Brundha

The lungs are among the most crucial organs in the human body. Since cancer detection began, lung cancer has been the most common terminal disease among all types of cancer. The contribution of deep learning, especially convolutional neural networks, has considerably reduced the mortality resulting from lung cancer. The classification of computed tomography (CT) images has enhanced the early diagnosis of lung cancer, enabling patients to undergo treatment at an early stage. CT images of varying resolution have been used, and the resolution affects the accuracy of the model. Besides, the detection of lumps or anomalies in the images has greatly supported early diagnosis. Classification plays a vital role in deep learning models, sorting the input images into positive and negative cases based on the attributes of the model built. However, the generalisation of classifiers has reduced the accuracy of the corresponding models. To increase the accuracy and efficiency of the deep learning model, an optimised classification technique is used to predict lung cancer from CT images. The purpose of the optimisation is to enable the model to adapt its feature extraction process to the input images fed into the network, so that the model can be trained to make predictions for images of any resolution. KEYWORDS: Lung cancer, CT images, Classification techniques, Optimised Classification, Prediction
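One common way to let a single CNN accept CT images of any resolution, as the abstract above calls for, is to place a global adaptive pooling layer before the fully connected head. The sketch below illustrates this idea; the layer widths and the two output classes (cancer / no cancer) are illustrative assumptions, not the authors' model.

```python
# Sketch: a resolution-agnostic CNN classifier via global adaptive pooling.
import torch
import torch.nn as nn

class ResolutionAgnosticCNN(nn.Module):
    def __init__(self, num_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.pool = nn.AdaptiveAvgPool2d(1)          # collapses any spatial size to 1x1
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, x):
        x = self.features(x)
        x = self.pool(x).flatten(1)
        return self.classifier(x)

model = ResolutionAgnosticCNN()
for size in (128, 256, 512):                          # different CT resolutions
    out = model(torch.randn(1, 1, size, size))
    print(size, out.shape)                            # always torch.Size([1, 2])
```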

