Multiplanar analysis for pulmonary nodule classification in CT images using deep convolutional neural network and generative adversarial networks

Author(s):  
Yuya Onishi ◽  
Atsushi Teramoto ◽  
Masakazu Tsujimoto ◽  
Tetsuya Tsukamoto ◽  
Kuniaki Saito ◽  
...  
2019 ◽  
Vol 2019 ◽  
pp. 1-9

Lung cancer is a leading cause of death worldwide. Although computed tomography (CT) examinations are frequently used for lung cancer diagnosis, it can be difficult to distinguish between benign and malignant pulmonary nodules on the basis of CT images alone. Therefore, a bronchoscopic biopsy may be conducted if malignancy is suspected following CT examinations. However, biopsies are highly invasive, and patients with benign nodules may undergo many unnecessary biopsies. To prevent this, an imaging diagnosis with high classification accuracy is essential. In this study, we investigate the automated classification of pulmonary nodules in CT images using a deep convolutional neural network (DCNN). We use generative adversarial networks (GANs) to generate additional images when only small amounts of data are available, which is a common problem in medical research, and evaluate whether the classification accuracy is improved by generating a large number of new pulmonary nodule images with the GAN. Using the proposed method, CT images of 60 cases with pathological diagnoses confirmed by biopsy are analyzed. The benign nodules assessed in this study are difficult for radiologists to differentiate because they cannot be rejected as being malignant. A volume of interest centered on the pulmonary nodule is extracted from the CT images, and further images are created from the axial sections and through data augmentation. The DCNN is pretrained using nodule images generated by the GAN and then fine-tuned using the actual nodule images so that it can distinguish between benign and malignant nodules. This pretraining and fine-tuning process makes it possible to correctly classify 66.7% of benign nodules and 93.9% of malignant nodules. These results indicate that the proposed method improves the classification accuracy by approximately 20% compared with training using only the original images.
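The abstract does not specify the exact GAN or DCNN architectures, so the following is only a minimal PyTorch sketch of the two-stage idea it describes: pretrain a small classifier on GAN-synthesised nodule patches, then fine-tune it on the far fewer real, pathologically confirmed nodules. The 64x64 patch size, network definitions, hyperparameters, and the random tensors standing in for real CT data are all illustrative assumptions, not the authors' configuration.

```python
# Minimal sketch: pretrain a DCNN on GAN-generated nodule patches, then fine-tune on real ones.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

class Generator(nn.Module):
    """Toy DCGAN-style generator producing 64x64 single-channel nodule patches."""
    def __init__(self, z_dim=100):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(z_dim, 128, 4, 1, 0), nn.BatchNorm2d(128), nn.ReLU(True),  # 4x4
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.BatchNorm2d(64), nn.ReLU(True),      # 8x8
            nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.BatchNorm2d(32), nn.ReLU(True),       # 16x16
            nn.ConvTranspose2d(32, 16, 4, 2, 1), nn.BatchNorm2d(16), nn.ReLU(True),       # 32x32
            nn.ConvTranspose2d(16, 1, 4, 2, 1), nn.Tanh(),                                 # 64x64
        )
    def forward(self, z):
        return self.net(z.view(z.size(0), -1, 1, 1))

class NoduleCNN(nn.Module):
    """Small DCNN for benign-vs-malignant classification of 64x64 patches."""
    def __init__(self, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(True), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(True), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(True), nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, n_classes)
    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

def train(model, loader, epochs, lr):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()

# 1) Pretraining on GAN-synthesised patches. A single untrained generator and random labels
#    stand in here for class-specific GANs trained on the real nodules.
gen = Generator()
with torch.no_grad():
    fake_images = gen(torch.randn(256, 100))
fake_labels = torch.randint(0, 2, (256,))
pretrain_loader = DataLoader(TensorDataset(fake_images, fake_labels), batch_size=32, shuffle=True)
cnn = NoduleCNN()
train(cnn, pretrain_loader, epochs=2, lr=1e-3)

# 2) Fine-tuning on (random stand-ins for) the real, biopsy-confirmed nodules,
#    typically with a smaller learning rate.
real_images = torch.rand(60, 1, 64, 64) * 2 - 1
real_labels = torch.randint(0, 2, (60,))
finetune_loader = DataLoader(TensorDataset(real_images, real_labels), batch_size=8, shuffle=True)
train(cnn, finetune_loader, epochs=2, lr=1e-4)
```

In practice one generator per class (or a conditional GAN) would first be trained on the real nodules so that the synthesised images carry meaningful labels; the random labels above are purely placeholders for the sketch.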


2020 ◽  
Vol 7
Author(s):  
Hayden Gunraj ◽  
Linda Wang ◽  
Alexander Wong

The coronavirus disease 2019 (COVID-19) pandemic continues to have a tremendous impact on patients and healthcare systems around the world. In the fight against this novel disease, there is a pressing need for rapid and effective screening tools to identify patients infected with COVID-19, and to this end CT imaging has been proposed as one of the key screening methods which may be used as a complement to RT-PCR testing, particularly in situations where patients undergo routine CT scans for non-COVID-19-related reasons, have worsening respiratory status or develop complications that require expedited care, or are suspected to be COVID-19-positive but have negative RT-PCR test results. Early studies on CT-based screening have reported abnormalities in chest CT images which are characteristic of COVID-19 infection, but these abnormalities may be difficult to distinguish from abnormalities caused by other lung conditions. Motivated by this, in this study we introduce COVIDNet-CT, a deep convolutional neural network architecture that is tailored for detection of COVID-19 cases from chest CT images via a machine-driven design exploration approach. Additionally, we introduce COVIDx-CT, a benchmark CT image dataset derived from CT imaging data collected by the China National Center for Bioinformation, comprising 104,009 images across 1,489 patient cases. Furthermore, in the interest of reliability and transparency, we leverage an explainability-driven performance validation strategy to investigate the decision-making behavior of COVIDNet-CT and, in doing so, ensure that COVIDNet-CT makes predictions based on relevant indicators in CT images. Both COVIDNet-CT and the COVIDx-CT dataset are available to the general public in an open-source and open-access manner as part of the COVID-Net initiative. While COVIDNet-CT is not yet a production-ready screening solution, we hope that releasing the model and dataset will encourage researchers, clinicians, and citizen data scientists alike to leverage and build upon them.
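The abstract does not detail the explainability method used to validate COVIDNet-CT, so the sketch below illustrates the general idea with a generic Grad-CAM-style saliency map over a toy CT-slice classifier: regions that drive the predicted class score are highlighted so a reader can check whether they fall on clinically relevant lung areas rather than on artefacts or image borders. The network, input size, and class layout are assumptions, not the actual COVIDNet-CT model or its in-house validation strategy.

```python
# Hedged sketch of explainability-driven validation via a Grad-CAM-style heatmap.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CTSliceCNN(nn.Module):
    """Toy 3-class CT-slice classifier (e.g. normal / pneumonia / COVID-19)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(True), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(True), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(True),
        )
        self.head = nn.Linear(64, 3)
    def forward(self, x):
        f = self.features(x)                       # (N, 64, H/4, W/4) feature maps
        return self.head(f.mean(dim=(2, 3))), f    # logits and feature maps

def grad_cam(model, image, target_class):
    """Return a heatmap of the regions that drive the target-class score."""
    model.eval()
    logits, fmaps = model(image)
    fmaps.retain_grad()                            # keep gradients w.r.t. the feature maps
    logits[0, target_class].backward()
    weights = fmaps.grad.mean(dim=(2, 3), keepdim=True)      # per-channel importance
    cam = F.relu((weights * fmaps).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=image.shape[2:], mode="bilinear", align_corners=False)
    return (cam / (cam.max() + 1e-8)).squeeze().detach()

# Usage on a random stand-in for a 512x512 CT slice.
model = CTSliceCNN()
ct_slice = torch.rand(1, 1, 512, 512)
logits, _ = model(ct_slice)
pred = logits.argmax(dim=1).item()
heatmap = grad_cam(model, ct_slice, pred)          # check that high values fall inside the lungs
print(heatmap.shape)                               # torch.Size([512, 512])
```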


2021 ◽  
Vol 6 (1) ◽  
pp. 1-3
Author(s):  
Hayden Gunraj ◽  
Linda Wang ◽  
Alexander Wong

The COVID-19 pandemic continues to have a tremendous impact on patients and healthcare systems around the world. To combat this disease, there is a need for effective screening tools to identify patients infected with COVID-19, and to this end CT imaging has been proposed as a key screening method to complement RT-PCR testing. Early studies have reported abnormalities in chest CT images which are characteristic of COVID-19 infection, but these abnormalities may be difficult to distinguish from abnormalities caused by other lung conditions. Motivated by this, we introduce COVIDNet-CT, a deep convolutional neural network architecture tailored for detection of COVID-19 cases from chest CT images. We also introduce COVIDx-CT, a CT image dataset comprising 104,009 images across 1,489 patient cases. Finally, we leverage explainability to investigate the decision-making behaviour of COVIDNet-CT and ensure that COVIDNet-CT makes predictions based on relevant indicators in CT images.
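Because COVIDx-CT groups its 104,009 slice images into 1,489 patient cases, evaluation should keep all slices from a given patient within a single split so that a model is never tested on slices of a patient it was trained on. The sketch below shows one way to do such a patient-level split; the CSV manifest with path, patient_id, and label columns is a hypothetical stand-in, not the dataset's actual file layout, which should be taken from the COVID-Net release.

```python
# Hedged sketch of a patient-level train/val/test split for a COVIDx-CT-style slice collection.
import csv
import random
from collections import defaultdict

def patient_level_split(manifest_csv, val_fraction=0.1, test_fraction=0.1, seed=0):
    """Group slice records by patient, then assign whole patients to train/val/test."""
    by_patient = defaultdict(list)
    with open(manifest_csv, newline="") as f:
        for row in csv.DictReader(f):              # assumed columns: path, patient_id, label
            by_patient[row["patient_id"]].append((row["path"], int(row["label"])))

    patients = sorted(by_patient)
    random.Random(seed).shuffle(patients)
    n = len(patients)
    n_test, n_val = int(n * test_fraction), int(n * val_fraction)
    test_p = patients[:n_test]
    val_p = patients[n_test:n_test + n_val]
    train_p = patients[n_test + n_val:]

    def flatten(patient_ids):
        return [rec for pid in patient_ids for rec in by_patient[pid]]

    return flatten(train_p), flatten(val_p), flatten(test_p)

# Example usage with a hypothetical manifest file name:
# train_recs, val_recs, test_recs = patient_level_split("covidx_ct_manifest.csv")
# print(len(train_recs), len(val_recs), len(test_recs))
```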

