Opportunistic osteoporosis screening using chest radiographs with deep learning: Development and external validation with a cohort dataset

Author(s):  
Miso Jang ◽  
Mingyu Kim ◽  
Sung Jin Bae ◽  
Seung Hun Lee ◽  
Jung‐Min Koh ◽  
...  
2020 ◽  
pp. 2003061


Author(s):  
Ju Gang Nam ◽  
Minchul Kim ◽  
Jongchan Park ◽  
Eui Jin Hwang ◽  
Jong Hyuk Lee ◽  
...  

We aimed to develop a deep-learning algorithm detecting 10 common abnormalities (DLAD-10) on chest radiographs and to evaluate its impact on diagnostic accuracy, timeliness of reporting, and workflow efficacy. DLAD-10 was trained with 146 717 radiographs from 108 053 patients using a ResNet34-based neural network with lesion-specific channels for 10 common radiologic abnormalities (pneumothorax, mediastinal widening, pneumoperitoneum, nodule/mass, consolidation, pleural effusion, linear atelectasis, fibrosis, calcification, and cardiomegaly). For external validation, the performance of DLAD-10 on a same-day CT-confirmed dataset (normal:abnormal, 53:147) and an open-source dataset (PadChest; normal:abnormal, 339:334) was compared with that of three radiologists. Separate simulated reading tests were conducted on another dataset adjusted to real-world disease prevalence in the emergency department, consisting of four critical, 52 urgent, and 146 non-urgent cases. Six radiologists participated in the simulated reading sessions with and without DLAD-10. DLAD-10 exhibited areas under the receiver-operating characteristic curve (AUROCs) of 0.895–1.00 in the CT-confirmed dataset and 0.913–0.997 in the PadChest dataset. DLAD-10 correctly classified significantly more critical abnormalities (95.0% [57/60]) than the pooled radiologists (84.4% [152/180]; p=0.01). In simulated reading tests for emergency department patients, pooled readers detected significantly more critical (70.8% [17/24] versus 29.2% [7/24]; p=0.006) and urgent (82.7% [258/312] versus 78.2% [244/312]; p=0.04) abnormalities when aided by DLAD-10. DLAD-10 assistance shortened the mean time to report critical and urgent radiographs (640.5±466.3 versus 3371.0±1352.5 s and 1840.3±1141.1 versus 2127.1±1468.2 s, respectively; p-values<0.01) and reduced the mean interpretation time (20.5±22.8 versus 23.5±23.7 s; p<0.001). DLAD-10 showed excellent performance, improved radiologists' detection, and shortened the reporting time for critical and urgent cases.
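
As a concrete illustration of the architecture described above, the following minimal PyTorch sketch replaces the final layer of a ResNet34 backbone with ten output channels, one per abnormality, trained with a per-label sigmoid/binary cross-entropy objective. It is an assumption-laden sketch, not the published DLAD-10 implementation; the label ordering, input size, and loss choice are illustrative.

# Sketch of a ResNet34-based multi-label chest radiograph classifier.
# Illustrative only; not the published DLAD-10 code.
import torch
import torch.nn as nn
from torchvision import models

ABNORMALITIES = [
    "pneumothorax", "mediastinal_widening", "pneumoperitoneum",
    "nodule_mass", "consolidation", "pleural_effusion",
    "linear_atelectasis", "fibrosis", "calcification", "cardiomegaly",
]

class ChestRadiographClassifier(nn.Module):
    def __init__(self, num_labels: int = len(ABNORMALITIES)):
        super().__init__()
        backbone = models.resnet34(weights=None)  # random init; pretrained weights optional
        backbone.fc = nn.Linear(backbone.fc.in_features, num_labels)  # one channel per abnormality
        self.backbone = backbone

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.backbone(x)  # raw logits, one per abnormality

model = ChestRadiographClassifier()
criterion = nn.BCEWithLogitsLoss()           # independent per-label probabilities
logits = model(torch.randn(2, 3, 512, 512))  # batch of radiographs as 3-channel tensors
probs = torch.sigmoid(logits)                # per-abnormality probabilities in [0, 1]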


2021 ◽  
pp. e200190
Author(s):  
Yee Liang Thian ◽  
Dian Wen Ng ◽  
James Thomas Patrick Decourcy Hallinan ◽  
Pooja Jagmohan ◽  
David Soon Yiew Sia ◽  
...  

Diagnostics ◽  
2021 ◽  
Vol 11 (6) ◽  
pp. 1127
Author(s):  
Ji Hyung Nam ◽  
Dong Jun Oh ◽  
Sumin Lee ◽  
Hyun Joo Song ◽  
Yun Jeong Lim

Capsule endoscopy (CE) quality control requires an objective scoring system to evaluate the preparation of the small bowel (SB). We propose a deep learning algorithm to calculate SB cleansing scores and verify the algorithm's performance. A 5-point scoring system based on clarity of mucosal visualization was used to develop the deep learning algorithm (400,000 frames; 280,000 for training and 120,000 for testing). External validation was performed using additional CE cases (n = 50), and the average cleansing scores (1.0 to 5.0) calculated using the algorithm were compared to clinical grades (A to C) assigned by clinicians. Test results obtained using the 120,000 frames exhibited 93% accuracy. The separate CE cases exhibited substantial agreement between the deep learning algorithm scores and the clinicians' assessments (Cohen's kappa: 0.672). In the external validation, the cleansing score decreased with worsening clinical grade (scores of 3.9, 3.2, and 2.5 for grades A, B, and C, respectively; p < 0.001). Receiver operating characteristic curve analysis revealed that a cleansing score cut-off of 2.95 indicated clinically adequate preparation. This algorithm provides an objective and automated cleansing score for evaluating SB preparation for CE. The results of this study serve as clinical evidence supporting the practical use of deep learning algorithms for evaluating SB preparation quality.
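
A minimal sketch of how per-frame predictions on the 5-point scale might be aggregated into a case-level cleansing score (1.0 to 5.0) and compared against the reported 2.95 cut-off. The expected-score aggregation and the function names are assumptions for illustration, not the authors' implementation; the frame-level CNN itself is taken as given.

# Sketch: aggregate per-frame 5-point predictions into a case-level score
# and flag clinically adequate preparation using the reported 2.95 cut-off.
import numpy as np

def case_cleansing_score(frame_probs: np.ndarray) -> float:
    """frame_probs: array of shape (n_frames, 5) with softmax outputs for scores 1-5."""
    scores = np.arange(1, 6)            # the 5-point scale
    per_frame = frame_probs @ scores    # expected score for each frame
    return float(per_frame.mean())      # case-level average in [1.0, 5.0]

def adequate_preparation(score: float, cutoff: float = 2.95) -> bool:
    return score >= cutoff

# Toy usage with random predictions standing in for the CNN output.
rng = np.random.default_rng(0)
frame_probs = rng.dirichlet(np.ones(5), size=1000)
score = case_cleansing_score(frame_probs)
print(round(score, 2), adequate_preparation(score))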


2020 ◽  
Vol 10 (1) ◽  
Author(s):  
Young-Gon Kim ◽  
Sungchul Kim ◽  
Cristina Eunbee Cho ◽  
In Hye Song ◽  
Hee Jin Lee ◽  
...  

Fast and accurate confirmation of metastasis on the frozen tissue section of an intraoperative sentinel lymph node biopsy is essential for critical surgical decisions. However, accurate diagnosis by pathologists is difficult within the time limitations. Training a robust and accurate deep learning model is also difficult owing to the limited number of frozen datasets with high-quality labels. To overcome these issues, we validated the effectiveness of transfer learning from CAMELYON16 to improve the performance of a convolutional neural network (CNN)-based classification model on our frozen dataset (N = 297) from Asan Medical Center (AMC). Among the 297 whole slide images (WSIs), 157 and 40 WSIs were used to train deep learning models with different training-dataset ratios of 2, 4, 8, 20, 40, and 100%. The remaining 100 WSIs were used to validate model performance in terms of patch- and slide-level classification. An additional 228 WSIs from Seoul National University Bundang Hospital (SNUBH) were used for external validation. Three initial weights, i.e., scratch-based (random initialization), ImageNet-based, and CAMELYON16-based models, were compared in the external validation. In the patch-level classification results on the AMC dataset, CAMELYON16-based models trained with a small dataset (up to 40%, i.e., 62 WSIs) showed a significantly higher area under the curve (AUC) of 0.929 than the scratch- and ImageNet-based models at 0.897 and 0.919, respectively, while CAMELYON16-based and ImageNet-based models trained with 100% of the training dataset showed comparable AUCs of 0.944 and 0.943, respectively. In the external validation, CAMELYON16-based models also showed higher AUCs than the scratch- and ImageNet-based models. These results support the feasibility of transfer learning to enhance model performance on frozen-section datasets with limited numbers.
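
The comparison of the three initializations can be pictured with a sketch like the one below, which builds the same classifier from scratch-based, ImageNet-based, or CAMELYON16-based starting weights before fine-tuning on the frozen-section patches. The ResNet-50 backbone and the checkpoint filename are assumptions made for illustration; the study only specifies a CNN-based classification model.

# Sketch: one classifier, three initializations (scratch, ImageNet, CAMELYON16).
# Hypothetical backbone and checkpoint path; shown only to illustrate the setup.
import torch
import torch.nn as nn
from torchvision import models

def build_model(init: str, num_classes: int = 2,
                camelyon16_ckpt: str = "camelyon16_pretrained.pth") -> nn.Module:
    if init == "scratch":
        model = models.resnet50(weights=None)  # random initialization
    elif init == "imagenet":
        model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
    elif init == "camelyon16":
        model = models.resnet50(weights=None)
        state = torch.load(camelyon16_ckpt, map_location="cpu")  # weights pre-trained on CAMELYON16 patches
        model.load_state_dict(state, strict=False)               # tolerate the replaced head
    else:
        raise ValueError(f"unknown initialization: {init}")
    model.fc = nn.Linear(model.fc.in_features, num_classes)      # metastasis vs. normal patch
    return model

model = build_model("camelyon16")  # then fine-tune on the AMC frozen-section patches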


Cancers ◽  
2021 ◽  
Vol 13 (12) ◽  
pp. 2866
Author(s):  
Fernando Navarro ◽  
Hendrik Dapper ◽  
Rebecca Asadpour ◽  
Carolin Knebel ◽  
Matthew B. Spraker ◽  
...  

Background: In patients with soft-tissue sarcomas (STS), tumor grading is a decisive factor in choosing the best treatment. Tumor grading is obtained by pathological work-up after focal biopsies. Deep learning (DL)-based imaging analysis may offer an alternative, non-invasive way to characterize STS tissue. In this work, we sought to non-invasively differentiate tumor grading into low-grade (G1) and high-grade (G2/G3) STS using DL techniques based on MR imaging. Methods: Contrast-enhanced T1-weighted fat-saturated (T1FSGd) and fat-saturated T2-weighted (T2FS) MRI sequences were collected from two independent retrospective cohorts (training: 148 patients; testing: 158 patients). Tumor grading was determined according to the French Federation of Cancer Centers Sarcoma Group system in pre-therapeutic biopsies. DL models were developed using transfer learning based on the DenseNet-161 architecture. Results: The T1FSGd- and T2FS-based DL models achieved areas under the receiver operating characteristic curve (AUC) of 0.75 and 0.76 on the test cohort, respectively. The T1FSGd-based model achieved the best F1-score of all models (0.90). The T2FS-based DL model was able to significantly risk-stratify for overall survival. Attention maps revealed relevant features within the tumor volume and in border regions. Conclusions: MRI-based DL models are capable of predicting tumor grading with good reproducibility in external validation.
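
A minimal PyTorch sketch of transfer learning with the DenseNet-161 architecture for the binary G1 versus G2/G3 task. The input handling (2D slices replicated to three channels) and training details are assumptions for illustration, not the authors' pipeline.

# Sketch: DenseNet-161 transfer learning for low-grade vs. high-grade STS.
import torch
import torch.nn as nn
from torchvision import models

def build_grading_model() -> nn.Module:
    # Start from ImageNet weights and replace the classifier head with two outputs.
    model = models.densenet161(weights=models.DenseNet161_Weights.IMAGENET1K_V1)
    model.classifier = nn.Linear(model.classifier.in_features, 2)  # G1 vs. G2/G3
    return model

model = build_grading_model()
criterion = nn.CrossEntropyLoss()
slices = torch.randn(4, 3, 224, 224)  # e.g., MRI slices replicated to 3 channels
loss = criterion(model(slices), torch.tensor([0, 1, 1, 0]))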


Author(s):  
Paul H. Yi ◽  
Jinchi Wei ◽  
Tae Kyung Kim ◽  
Jiwon Shin ◽  
Haris I. Sair ◽  
...  
