Detection of Pulmonary Nodules in CT Images Using Deep Learning Technique

2020 ◽  
Vol 16 (4) ◽  
pp. 568-575
Author(s):  
Santhi Balachandran ◽  
Divya ◽  
Nithya Rajendran ◽  
Brindha Giri
2020 ◽  
Vol 38 (15_suppl) ◽  
pp. 9037-9037
Author(s):  
Tao Xu ◽  
Chuoji Huang ◽  
Yaoqi Liu ◽  
Jing Gao ◽  
Huan Chang ◽  
...  

9037 Background: Lung cancer is the most common cancer worldwide. Artificial intelligence (AI) platforms using deep learning algorithms have made remarkable progress in improving the diagnostic accuracy of lung cancer, but AI performance in distinguishing benign from malignant pulmonary nodules still needs improvement. We aimed to validate a Pulmonary Nodules Artificial Intelligence Diagnostic System (PNAIDS) by analyzing computed tomography (CT) imaging data. Methods: This real-world, multicentre diagnostic study was done in five different-tier hospitals in China. CT images of patients who were aged over 18 years and had never received anti-cancer treatment were retrieved from the participating hospitals. 534 eligible patients with pulmonary nodules of 5-30 mm diameter identified by CT were scheduled for histopathological confirmation. The performance of PNAIDS was compared with that of respiratory specialists and radiologists of expert or competent degrees of expertise, as well as with the Mayo Clinic model, using the area under the curve (AUC); differences were evaluated by calculating 95% CIs with the Z-test method. 11 selected participants were tested for circulating genetically abnormal cells (CACs) before surgery, as suggested by their doctors. Results: 611 lung CT images from 534 individuals were used to test PNAIDS. The diagnostic accuracy, measured by AUC, in identifying benign and malignant pulmonary nodules was 0.765 (95% CI [0.729 - 0.798]). The diagnostic sensitivity of PNAIDS was 0.630 (0.579 - 0.679) and its specificity 0.753 (0.693 - 0.807). PNAIDS achieved diagnostic accuracy similar to that of the expert respiratory specialists (AUC difference: 0.0036 [-0.0426 - 0.0497]; p = 0.8801) and was superior to the Mayo Clinic model (0.120 [0.0649 - 0.176], p < 0.0001), expert radiologists (0.0620 [0.0124 - 0.112], p = 0.0142) and competent radiologists (0.0751 [0.0248 - 0.125], p = 0.0034).
The 11 selected participants were rated negative by the AI but positive by the respiratory specialists; 8 of them were malignant on histopathological diagnosis, with more than 3 CACs detected in their blood. Conclusions: PNAIDS achieved high diagnostic accuracy in the differential diagnosis between benign and malignant pulmonary nodules, similar to that of expert respiratory specialists and superior to that of the Mayo Clinic model and radiologists. CACs may help CT-based AI improve its effectiveness, but this still needs to be proved with more data. Clinical trial information: ChiCTR1900026233.
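The Z-test comparisons above can be approximately reproduced from the published figures alone: under a normal approximation, the standard error of each AUC difference is recoverable from its 95% CI. A minimal sketch (recovering the SE from the CI half-width is an assumption about how the intervals were computed):

```python
import math

def p_from_diff_ci(diff, ci_lo, ci_hi):
    """Two-sided p-value for an AUC difference, recovering the standard
    error from the reported 95% CI (normal approximation, Z-test)."""
    se = (ci_hi - ci_lo) / (2 * 1.96)  # 95% CI half-width = 1.96 * SE
    z = diff / se
    # Two-sided p from the standard normal CDF, via math.erf
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# Figures from the abstract: PNAIDS vs. expert specialists and the Mayo model
p_specialists = p_from_diff_ci(0.0036, -0.0426, 0.0497)  # ~0.88, not significant
p_mayo = p_from_diff_ci(0.120, 0.0649, 0.176)            # well below 0.0001
```

The recovered p-values land close to the reported 0.8801 and p < 0.0001, which suggests the published intervals and tests are internally consistent.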


2019 ◽  
Vol 37 (15_suppl) ◽  
pp. e13154-e13154
Author(s):  
Li Bai ◽  
Yanqing Zhou ◽  
Yaru Chen ◽  
Quanxing Liu ◽  
Dong Zhou ◽  
...  

e13154 Background: Many people harbor pulmonary nodules. Such nodules can be detected by low-dose computed tomography (LDCT) during regular physical examinations. If a pulmonary nodule is small (i.e., < 10 mm), it is very difficult to diagnose whether it is benign or malignant from CT images alone. To address this problem, we developed a method based on liquid biopsy and deep learning to improve the diagnostic accuracy of pulmonary nodules. Methods: Thirty-eight patients harboring one or more small pulmonary nodules were enrolled in this study. Twenty-nine patients were diagnosed as having cancer (stage I = 21, stage II = 1, stage III = 3, stage IV = 4) by tissue biopsy, while the other 9 patients were diagnosed as having benign tumors or lung diseases other than cancer. For each patient, a blood sample was obtained prior to biopsy, and the cell-free DNA (cfDNA) was sequenced using a 451-gene panel to a depth of 20,000×. The unique molecular identifier (UMI) technique was applied to reduce false positives. Seventeen patients also had full-resolution CT images available. A deep learning system based primarily on deep convolutional neural networks (CNNs) was used to analyze these CT images. Results: Sequence analysis of blood samples revealed that 75.8% (22/29) of cancer patients had detectable cancer-related mutations, while only 1 of 9 (11.1%) non-cancer patients was found to carry a TP53 mutation. The most frequent mutations seen in cancer patients involved the genes TP53 (N = 11), EGFR (N = 7), and KRAS (N = 3), with mutant allele fractions varying from 0.08% to 74.77%. Deep learning analysis of the 17 available CT images correctly identified cancers in 88.2% (15/17) of patients. However, by combining the liquid biopsy and image analysis results, all 17 patients were correctly diagnosed. Conclusions: Deep learning-based analysis of CT images can be applied to the early diagnosis of lung cancers, but the accuracy of image analysis alone is only moderate.
Diagnostic accuracy can be greatly improved by using liquid biopsy as an auxiliary method in patients with pulmonary nodules.
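The combination step in the Results can be as simple as a positive-if-either decision rule. The abstract does not specify how the two readouts were fused, so the OR rule and the probability threshold below are illustrative assumptions:

```python
def combined_call(cfdna_mutation_detected: bool,
                  cnn_prob_malignant: float,
                  threshold: float = 0.5) -> bool:
    """Call a nodule malignant if either modality is positive.
    OR-rule fusion and the 0.5 threshold are assumptions; the abstract
    only reports that combining liquid biopsy with image analysis
    diagnosed all 17 patients correctly."""
    return cfdna_mutation_detected or cnn_prob_malignant >= threshold
```

An OR rule trades specificity for sensitivity, so in practice the threshold and fusion logic would need tuning on a larger cohort; this is only a sketch of how the two readouts might be merged.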


IEEE Access ◽  
2020 ◽  
Vol 8 ◽  
pp. 67300-67309
Author(s):  
Gai Li ◽  
Wei Zhou ◽  
Weibin Chen ◽  
Fengtao Sun ◽  
Yu Fu ◽  
...  

2020 ◽  
Author(s):  
Jinseok Lee

BACKGROUND The coronavirus disease (COVID-19) has spread explosively worldwide since the beginning of 2020. According to a multinational consensus statement from the Fleischner Society, computed tomography (CT) can be used as a relevant screening tool owing to its higher sensitivity for detecting early pneumonic changes. However, physicians are extremely busy fighting COVID-19 in this era of worldwide crisis. Thus, it is crucial to accelerate the development of an artificial intelligence (AI) diagnostic tool to support physicians. OBJECTIVE We aimed to rapidly develop an AI technique to diagnose COVID-19 pneumonia on CT and differentiate it from non-COVID pneumonia and non-pneumonia diseases. METHODS A simple 2D deep learning framework, named the fast-track COVID-19 classification network (FCONet), was developed to diagnose COVID-19 pneumonia from a single chest CT image. FCONet was developed by transfer learning, using one of four state-of-the-art pre-trained deep learning models (VGG16, ResNet50, InceptionV3, or Xception) as a backbone. For training and testing of FCONet, we collected 3,993 chest CT images of patients with COVID-19 pneumonia, other pneumonia, and non-pneumonia diseases from Wonkwang University Hospital, Chonnam National University Hospital, and the Italian Society of Medical and Interventional Radiology public database. These CT images were split into training and testing sets at a ratio of 8:2. On the test dataset, the diagnostic performance in detecting COVID-19 pneumonia was compared among the four pre-trained FCONet models. In addition, we tested the FCONet models on an external testing dataset extracted from the embedded low-quality chest CT images of COVID-19 pneumonia in recently published papers.
RESULTS Of the four pre-trained FCONet models, ResNet50 showed excellent diagnostic performance (sensitivity 99.58%, specificity 100%, accuracy 99.87%) and outperformed the other three pre-trained models on the testing dataset. On the external test dataset of low-quality CT images, the detection accuracy of the ResNet50 model was the highest (96.97%), followed by Xception, InceptionV3, and VGG16 (90.71%, 89.38%, and 87.12%, respectively). CONCLUSIONS FCONet, a simple 2D deep learning framework based on a single chest CT image, provides excellent diagnostic performance in detecting COVID-19 pneumonia. Based on our testing dataset, the ResNet50-based FCONet might be the best model, as it outperformed the other FCONet models based on VGG16, Xception, and InceptionV3.
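A transfer-learning setup of the kind described can be sketched in Keras. The classification head and input size below are assumptions (the abstract names only the four backbones and the three target classes), and `weights=None` keeps the sketch self-contained where the actual work would start from ImageNet weights:

```python
import tensorflow as tf

def build_fconet_sketch(backbone_name: str = "ResNet50",
                        n_classes: int = 3) -> tf.keras.Model:
    """Illustrative FCONet-style classifier: a pre-trained 2D backbone
    plus a small softmax head for {COVID-19 pneumonia, other pneumonia,
    non-pneumonia}. Head layout and 224x224 input are assumptions."""
    backbones = {
        "VGG16": tf.keras.applications.VGG16,
        "ResNet50": tf.keras.applications.ResNet50,
        "InceptionV3": tf.keras.applications.InceptionV3,
        "Xception": tf.keras.applications.Xception,
    }
    base = backbones[backbone_name](include_top=False, weights=None,
                                    input_shape=(224, 224, 3))
    x = tf.keras.layers.GlobalAveragePooling2D()(base.output)
    out = tf.keras.layers.Dense(n_classes, activation="softmax")(x)
    return tf.keras.Model(base.input, out)

model = build_fconet_sketch("ResNet50")
```

Swapping `backbone_name` reproduces the four-way comparison in the study design; in a real run, the backbone would be loaded with `weights="imagenet"` and fine-tuned on the chest CT training split.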


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Jared Hamwood ◽  
Beat Schmutz ◽  
Michael J. Collins ◽  
Mark C. Allenby ◽  
David Alonso-Caneiro

Abstract: This paper proposes a fully automatic method to segment the inner boundary of the bony orbit in two different image modalities: magnetic resonance imaging (MRI) and computed tomography (CT). The method, based on a deep learning architecture, uses two fully convolutional neural networks in series followed by a graph-search method to generate a boundary for the orbit. When compared to human performance for segmentation of both CT and MRI data, the proposed method achieves high Dice coefficients on both orbit and background, with scores of 0.813 and 0.975 in CT images and 0.930 and 0.995 in MRI images, showing a high degree of agreement with a manual segmentation by a human expert. Given the volumetric characteristics of these imaging modalities and the complexity and time-consuming nature of the segmentation of the orbital region in the human skull, it is often impractical to manually segment these images. Thus, the proposed method provides a valid clinical and research tool that performs similarly to the human observer.
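The Dice coefficients reported for orbit and background are the standard overlap metric between a predicted and a manual binary mask; a minimal NumPy implementation:

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks:
    2*|A & B| / (|A| + |B|); 1.0 for identical masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    denom = pred.sum() + truth.sum()
    return 2.0 * np.logical_and(pred, truth).sum() / denom if denom else 1.0

# Example: two 2x2 masks agreeing on one of their two foreground pixels
a = np.array([[1, 1], [0, 0]])
b = np.array([[1, 0], [1, 0]])
score = dice_coefficient(a, b)  # 2*1 / (2 + 2) = 0.5
```

A Dice score of 0.975-0.995 on the large background class is expected; the orbit scores (0.813 CT, 0.930 MRI) are the more informative figures, since small structures are penalized more heavily by this metric.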

