Performance of Deep Learning Model in Detecting Operable Lung Cancer With Chest Radiographs. 2019; Vol 34(2), pp. 86-91. Author(s): Min Jae Cha, Myung Jin Chung, Jeong Hyun Lee, Kyung Soo Lee.

2021; Vol 39(15_suppl), pp. 8536-8536. Author(s): Gouji Toyokawa, Fahdi Kanavati, Seiya Momosaki, Kengo Tateishi, Hiroaki Takeoka, et al.

8536 Background: Lung cancer is the leading cause of cancer-related death in many countries, and its prognosis remains unsatisfactory. Since treatment approaches differ substantially by subtype, such as adenocarcinoma (ADC), squamous cell carcinoma (SCC), and small cell lung cancer (SCLC), an accurate histopathological diagnosis is of great importance. However, if a specimen is composed solely of poorly differentiated cancer cells, distinguishing between histological subtypes can be difficult. The present study developed a deep learning model to classify lung cancer subtypes from whole slide images (WSIs) of transbronchial lung biopsy (TBLB) specimens, with the particular aim of using the model to evaluate a challenging test set of indeterminate cases. Methods: Our deep learning model consisted of two separately trained components: a convolutional neural network (CNN) tile classifier and a recurrent neural network (RNN) tile aggregator for the WSI-level diagnosis. We used a training set of 638 WSIs of TBLB specimens to train the model to classify lung cancer subtypes (ADC, SCC and SCLC) and non-neoplastic lesions. The training set consisted of 593 WSIs for which the diagnosis had been determined by pathologists based on visual inspection of Hematoxylin-Eosin (HE) slides and 45 WSIs of indeterminate cases. We then evaluated the model using five independent test sets, computing the receiver operating characteristic (ROC) area under the curve (AUC) for each. Results: We applied the model to an indeterminate test set of WSIs (64 ADCs and 19 SCCs) obtained from TBLB specimens that pathologists had not been able to diagnose conclusively from the HE-stained specimens alone. Overall, the model achieved ROC AUCs of 0.993 (confidence interval [CI] 0.971-1.0) and 0.996 (0.981-1.0) for ADC and SCC, respectively. We further evaluated the model using five independent test sets consisting of both TBLB and surgically resected lung specimens (a combined total of 2490 WSIs) and obtained highly promising results, with ROC AUCs ranging from 0.94 to 0.99. Conclusions: In this study, we demonstrated that a deep learning model can be trained to predict lung cancer subtypes in indeterminate TBLB specimens. These highly promising results suggest that, if deployed in clinical practice, a deep learning model capable of aiding pathologists in diagnosing indeterminate cases would be of considerable benefit, allowing a diagnosis to be reached sooner and reducing the costs of further investigations.
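The two-stage architecture described above (a CNN tile classifier followed by an RNN that aggregates tile features into a slide-level prediction) can be sketched in a few lines of PyTorch. This is a minimal illustration only: the ResNet-18 backbone, tile size, feature dimension, and class list are assumptions made for the sketch, not the authors' implementation.

```python
# Hedged sketch of a two-stage WSI classifier: CNN tile classifier + RNN tile
# aggregator. All hyperparameters below are illustrative assumptions.
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 4  # ADC, SCC, SCLC, non-neoplastic

class TileClassifier(nn.Module):
    """CNN that maps a single H&E tile to a feature vector and tile-level logits."""
    def __init__(self, feature_dim=512):
        super().__init__()
        backbone = models.resnet18(weights=None)  # pretrained weights optional
        backbone.fc = nn.Identity()               # keep the 512-d feature vector
        self.backbone = backbone
        self.head = nn.Linear(feature_dim, NUM_CLASSES)

    def forward(self, tiles):                     # tiles: (N, 3, 224, 224)
        feats = self.backbone(tiles)              # (N, 512)
        return feats, self.head(feats)

class SlideAggregator(nn.Module):
    """RNN that aggregates per-tile features into one slide-level prediction."""
    def __init__(self, feature_dim=512, hidden_dim=128):
        super().__init__()
        self.rnn = nn.GRU(feature_dim, hidden_dim, batch_first=True)
        self.fc = nn.Linear(hidden_dim, NUM_CLASSES)

    def forward(self, tile_feats):                # (1, num_tiles, 512)
        _, h = self.rnn(tile_feats)
        return self.fc(h[-1])                     # slide-level logits

if __name__ == "__main__":
    tiles = torch.randn(32, 3, 224, 224)          # 32 tiles from one slide
    cnn, agg = TileClassifier(), SlideAggregator()
    feats, _ = cnn(tiles)
    slide_logits = agg(feats.unsqueeze(0))
    print(slide_logits.shape)                     # torch.Size([1, 4])
```

In practice the two parts would be trained separately, mirroring the "two separately trained components" noted in the abstract: the tile classifier first on labeled tiles, then the aggregator on sequences of tile features extracted from each WSI.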


PLoS Medicine. 2018; Vol 15(11), pp. e1002683. Author(s): John R. Zech, Marcus A. Badgeley, Manway Liu, Anthony B. Costa, Joseph J. Titano, et al.

2021; Vol 32, pp. S926-S927. Author(s): G. Toyokawa, Y. Yamada, N. Haratake, Y. Shiraishi, T. Takenaka, et al.

2020; Vol 11(12), pp. 3615-3622. Author(s): Lei Cong, Wanbing Feng, Zhigang Yao, Xiaoming Zhou, Wei Xiao.

Diagnostics. 2021; Vol 11(7), pp. 1182. Author(s): Cheng-Yi Kao, Chiao-Yun Lin, Cheng-Chen Chao, Han-Sheng Huang, Hsing-Yu Lee, et al.

We aimed to set up an Automated Radiology Alert System (ARAS) for the detection of pneumothorax on chest radiographs by a deep learning model, and to compare its efficiency and diagnostic performance with the existing Manual Radiology Alert System (MRAS) at a tertiary medical center. We retrospectively collected 1235 chest radiographs labeled for pneumothorax from 2013 to 2019 and 337 chest radiographs with negative findings from 2019, which were split into training and validation datasets for the deep learning model of the ARAS. Efficiency before and after using the model was compared in terms of alert time and report time. During parallel running of the two systems from September to October 2020, chest radiographs prospectively acquired in the emergency department from patients older than 6 years served as the testing dataset for the comparison of diagnostic performance. Efficiency improved after deployment of the model, with the mean alert time decreasing from 8.45 min to 0.69 min and the mean report time from 2.81 days to 1.59 days. A comparison of the diagnostic performance of the two systems on 3739 chest radiographs acquired during parallel running showed that the ARAS outperformed the MRAS in sensitivity (recall), area under the receiver operating characteristic curve, and F1 score (0.837 vs. 0.256, 0.914 vs. 0.628, and 0.754 vs. 0.407, respectively), but was worse in positive predictive value (PPV, i.e., precision) (0.686 vs. 1.000). This study successfully designed a deep learning model for pneumothorax detection on chest radiographs and set up an ARAS with improved efficiency and overall diagnostic performance.
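For reference, the head-to-head metrics reported above (sensitivity, PPV, F1 score, ROC AUC) can be computed from per-image labels and model scores as in the short sketch below. The arrays are placeholder values chosen for illustration, not the study's data, and the 0.5 alert threshold is an assumption.

```python
# Hedged sketch: computing sensitivity, PPV, F1, and ROC AUC for a binary
# pneumothorax alert; y_true and y_score are synthetic placeholders.
import numpy as np
from sklearn.metrics import confusion_matrix, f1_score, roc_auc_score

y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])                    # 1 = pneumothorax present
y_score = np.array([0.9, 0.2, 0.7, 0.4, 0.1, 0.6, 0.8, 0.3])   # model probabilities
y_pred = (y_score >= 0.5).astype(int)                          # assumed alert threshold

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)            # recall
ppv = tp / (tp + fp)                    # precision
f1 = f1_score(y_true, y_pred)
auc = roc_auc_score(y_true, y_score)
print(f"sensitivity={sensitivity:.3f} PPV={ppv:.3f} F1={f1:.3f} AUC={auc:.3f}")
```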


2020. Author(s): Charlene Liew, Jessica Quah, Han Leong Goh, Narayan Venkataraman.

Background: Chest radiography may be used together with deep-learning models to prognosticate COVID-19 patient outcomes. Purpose: To evaluate the performance of a deep-learning model for the prediction of severe patient outcomes from COVID-19 pneumonia on chest radiographs. Methods: A deep-learning model (CAPE: Covid-19 AI Predictive Engine) was trained on 2337 CXR images, including 2103 used only for validation during training. The prospective test set consisted of CXR images (n = 70) obtained from RT-PCR-confirmed COVID-19 pneumonia patients between 1 January and 30 April 2020 at a single center. The radiographs were analyzed by the AI model, and model performance was assessed by receiver operating characteristic (ROC) curve analysis. Results: In the prospective test set, the mean age of the patients was 46 (±16.2) years (84.2% male). The deep-learning model predicted ICU admission/mortality from COVID-19 pneumonia with an AUC of 0.79 (95% CI 0.79-0.96). Compared with traditional risk-scoring systems for pneumonia based on laboratory and clinical parameters, the model matched the EWS and MulBTSA risk scores and outperformed CURB-65. Conclusions: A deep-learning model was able to predict severe patient outcomes (ICU admission and mortality) from COVID-19 on chest radiographs. Key Results: The deep-learning model predicted severe patient outcomes (ICU admission and mortality) from COVID-19 chest radiographs with an AUC of 0.79, comparable to traditional risk-scoring systems for pneumonia. Summary Statement: This is a chest radiography-based AI model to prognosticate the risk of severe COVID-19 pneumonia outcomes.
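Abstracts like this one typically attach a confidence interval to the reported AUC by bootstrap resampling of the test set. The sketch below shows one common way to do that; the synthetic labels and scores stand in for the CAPE test set, and the 2000-resample count is an arbitrary choice, not the authors' method.

```python
# Hedged sketch: bootstrap 95% CI for an ROC AUC on a small prospective test set.
# The data below are synthetic placeholders (n = 70 to mirror the abstract).
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=70)                              # 1 = ICU admission/mortality
y_score = np.clip(y_true * 0.3 + rng.normal(0.4, 0.2, size=70), 0, 1)

aucs = []
for _ in range(2000):                                              # bootstrap resamples
    idx = rng.integers(0, len(y_true), len(y_true))
    if len(np.unique(y_true[idx])) < 2:                            # need both classes present
        continue
    aucs.append(roc_auc_score(y_true[idx], y_score[idx]))

point = roc_auc_score(y_true, y_score)
lo, hi = np.percentile(aucs, [2.5, 97.5])
print(f"AUC={point:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```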


2021; Vol 11(1). Author(s): Fahdi Kanavati, Gouji Toyokawa, Seiya Momosaki, Hiroaki Takeoka, Masaki Okamoto, et al.

The differentiation between major histological types of lung cancer, such as adenocarcinoma (ADC), squamous cell carcinoma (SCC), and small-cell lung cancer (SCLC), is of crucial importance for determining the optimum cancer treatment. Hematoxylin and Eosin (H&E)-stained slides of small transbronchial lung biopsies (TBLB) are one of the primary sources for making a diagnosis; however, a subset of cases present a challenge for pathologists to diagnose from H&E-stained slides alone, and these either require further immunohistochemistry or are deferred to surgical resection for a definitive diagnosis. We trained a deep learning model to classify H&E-stained whole slide images (WSIs) of TBLB specimens into ADC, SCC, SCLC, and non-neoplastic using a training set of 579 WSIs. The trained model classified an independent test set of 83 challenging indeterminate cases with a receiver operating characteristic (ROC) area under the curve (AUC) of 0.99. We further evaluated the model on four independent test sets (one TBLB and three surgical, with a combined total of 2407 WSIs), demonstrating highly promising results with AUCs ranging from 0.94 to 0.99.
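Because the model assigns one of four slide-level classes, per-subtype AUCs such as those quoted above are naturally computed one-vs-rest. The sketch below illustrates that calculation; the labels and softmax-style scores are random stand-ins, not the study's data.

```python
# Hedged sketch: one-vs-rest ROC AUC per slide-level class, with synthetic data.
import numpy as np
from sklearn.metrics import roc_auc_score
from sklearn.preprocessing import label_binarize

classes = ["ADC", "SCC", "SCLC", "non-neoplastic"]
rng = np.random.default_rng(1)
y_true = rng.integers(0, 4, size=100)                 # slide-level ground truth (placeholder)
y_prob = rng.dirichlet(np.ones(4), size=100)          # softmax-like slide scores (placeholder)

y_bin = label_binarize(y_true, classes=range(4))      # one-vs-rest targets, shape (100, 4)
for i, name in enumerate(classes):
    print(name, round(roc_auc_score(y_bin[:, i], y_prob[:, i]), 3))
```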

