External Validation of a Deep Learning Model for Predicting Mammographic Breast Density in Routine Clinical Practice

Author(s):  
Brian N. Dontchos ◽  
Adam Yala ◽  
Regina Barzilay ◽  
Justin Xiang ◽  
Constance D. Lehman
2021 ◽  
Vol 39 (15_suppl) ◽  
pp. 1550-1550
Author(s):  
Katherine Cavallo Hom ◽  
Brian Nicholas Dontchos ◽  
Sarah Mercaldo ◽  
Pragya Dang ◽  
Leslie Lamb ◽  
...  

Background: Dense breast tissue is an independent risk factor for malignancy and can mask cancers on mammography. Yet radiologist-assessed mammographic breast density is subjective and varies widely both between and within radiologists. Our deep learning (DL) model was implemented into routine clinical practice at an academic breast imaging center and externally validated at a separate community practice, with both sites demonstrating high clinical acceptance of the model's density predictions. The aim of this study was to demonstrate the influence of our DL model on prospective radiologist density assessments in routine clinical practice.

Methods: This IRB-approved, HIPAA-compliant retrospective study identified consecutive screening mammograms, without exclusion, performed across three clinical sites over two time periods: pre-DL-model implementation (January 1, 2017 through September 30, 2017) and post-DL-model implementation (January 1, 2019 through September 30, 2019). The clinical sites were: Site A, the academic practice where the DL model was developed and implemented in late 2017; Site B, an affiliated community practice that implemented the DL model in late 2017 and was used for external validation; and Site C, an affiliated community practice that was never exposed to the DL model. Patient demographics and radiologist-assessed mammographic breast densities were compared over time and across sites. Patient characteristics were evaluated using the Wilcoxon test and Pearson's chi-squared test. Multivariable logistic regression models evaluated the odds of a dense-breast classification as a function of time period (pre-DL vs post-DL), race (White vs non-White), and site.

Results: A total of 85,865 consecutive screening mammograms across the three clinical sites were identified. After controlling for age and race, the adjusted odds ratio (aOR) of a mammogram being classified as dense at Site C compared with Site B was 2.01 (95% CI 1.873-2.157, p<0.001) before DL model implementation, increasing to 2.827 (95% CI 2.636-3.032, p<0.001) after implementation. The aOR of a mammogram being classified as dense at Site A after implementation compared with before implementation was 0.924 (95% CI 0.885-0.964, p<0.001).

Conclusions: Our findings suggest that implementation of the DL model influences radiologists' prospective density assessments in routine clinical practice by reducing the odds of a screening exam being categorized as dense. As a result, clinical use of our model could reduce the downstream costs of supplemental screening tests and limit unnecessary high-risk clinic evaluations.
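As a minimal sketch of the kind of multivariable logistic regression described above, the following uses statsmodels to estimate adjusted odds ratios for a dense classification; the column names (dense, period, race, site, age) and input file are illustrative assumptions, not the authors' code, and the paper's site-by-period comparisons would further require fitting within each period or adding an interaction term.

```python
# Hypothetical sketch: adjusted odds ratios (aORs) for a "dense" breast
# classification via multivariable logistic regression (statsmodels).
# All column names and the input file are illustrative assumptions.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("screening_mammograms.csv")  # assumed input file

# dense: 1 if BI-RADS c/d, 0 if a/b; period: "pre" or "post" DL model;
# race: "White" or "non-White"; site: "A", "B", or "C"; age in years.
model = smf.logit("dense ~ C(period) + C(race) + C(site) + age", data=df).fit()

# Exponentiated coefficients are adjusted odds ratios; exponentiated
# confidence limits give their 95% CIs.
aor = np.exp(model.params)
ci = np.exp(model.conf_int().rename(columns={0: "2.5%", 1: "97.5%"}))
print(pd.concat([aor.rename("aOR"), ci], axis=1))
```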


2021 ◽  
Vol 3 (1) ◽  
pp. e200015
Author(s):  
Thomas P. Matthews ◽  
Sadanand Singh ◽  
Brent Mombourquette ◽  
Jason Su ◽  
Meet P. Shah ◽  
...  

2021 ◽  
Vol 4 ◽  
pp. 4-4
Author(s):  
Matías N. Tajerian ◽  
Karina Pesce ◽  
Julia Frangella ◽  
Ezequiel Quiroga ◽  
Bruno Boietti ◽  
...  

2021 ◽  
Vol 11 ◽  
Author(s):  
Bing Kang ◽  
Xianshun Yuan ◽  
Hexiang Wang ◽  
Songnan Qin ◽  
Xuelin Song ◽  
...  

Objective: To develop and evaluate a deep learning model (DLM) for predicting the risk stratification of gastrointestinal stromal tumors (GISTs).

Methods: Preoperative contrast-enhanced CT images of 733 patients with GISTs were retrospectively obtained from two centers between January 2011 and June 2020. The datasets were split into training (n = 241), testing (n = 104), and external validation (n = 388) cohorts. A DLM for predicting the risk stratification of GISTs was developed using a convolutional neural network and evaluated on the testing and external validation cohorts. The performance of the DLM was compared with that of a radiomics model using areas under the receiver operating characteristic curve (AUROCs) and the Obuchowski index. The attention area of the DLM was visualized as a heatmap by gradient-weighted class activation mapping (Grad-CAM).

Results: In the testing cohort, the DLM had AUROCs of 0.90 (95% confidence interval [CI]: 0.84, 0.96), 0.80 (95% CI: 0.72, 0.88), and 0.89 (95% CI: 0.83, 0.95) for low-malignant, intermediate-malignant, and high-malignant GISTs, respectively. In the external validation cohort, the corresponding AUROCs were 0.87 (95% CI: 0.83, 0.91), 0.64 (95% CI: 0.60, 0.68), and 0.85 (95% CI: 0.81, 0.89). The DLM (Obuchowski index: training, 0.84; external validation, 0.79) outperformed the radiomics model (Obuchowski index: training, 0.77; external validation, 0.77) for predicting the risk stratification of GISTs. The relevant subregions were successfully highlighted with attention heatmaps on the CT images for further clinical review.

Conclusion: The DLM showed good performance for predicting the risk stratification of GISTs from CT images, and its performance exceeded that of the radiomics model.
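As a minimal sketch of how per-class AUROCs like those reported could be computed one-vs-rest from a three-class model's softmax outputs, the following uses scikit-learn; the array contents and class ordering are toy assumptions, not the authors' pipeline.

```python
# Hypothetical sketch: one-vs-rest AUROC per risk stratum from a
# 3-class model's softmax outputs (scikit-learn). Arrays are toy data.
import numpy as np
from sklearn.metrics import roc_auc_score
from sklearn.preprocessing import label_binarize

classes = ["low", "intermediate", "high"]  # malignant-potential strata
y_true = np.array(["low", "high", "intermediate", "low", "high"])  # toy labels
y_prob = np.array([            # toy softmax outputs, one row per case
    [0.7, 0.2, 0.1],
    [0.1, 0.2, 0.7],
    [0.2, 0.5, 0.3],
    [0.6, 0.3, 0.1],
    [0.2, 0.2, 0.6],
])

y_bin = label_binarize(y_true, classes=classes)  # shape (n, 3), one column per class
for i, name in enumerate(classes):
    auc = roc_auc_score(y_bin[:, i], y_prob[:, i])  # one-vs-rest AUROC
    print(f"{name}-malignant AUROC: {auc:.2f}")
```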


2019 ◽  
Author(s):  
Livia Faes ◽  
Siegfried K. Wagner ◽  
Dun Jack Fu ◽  
Xiaoxuan Liu ◽  
Edward Korot ◽  
...  

Abstract: Deep learning has huge potential to transform healthcare. However, significant expertise is required to train such models, and this is a major blocker to their translation into clinical practice. In this study, we therefore sought to evaluate the use of automated deep learning software by healthcare professionals with limited coding expertise and no deep learning expertise to develop medical image diagnostic classifiers.

We used five publicly available open-source datasets: (i) retinal fundus images (MESSIDOR); (ii) optical coherence tomography (OCT) images (Guangzhou Medical University/Shiley Eye Institute, Version 3); (iii) images of skin lesions (Human Against Machine (HAM)10000); and (iv) both paediatric and adult chest X-ray (CXR) images (Guangzhou Medical University/Shiley Eye Institute, Version 3, and the National Institutes of Health (NIH)14 dataset, respectively). Each dataset was fed separately into a neural architecture search framework that automatically developed a deep learning architecture to classify common diseases. Sensitivity (recall), specificity, and positive predictive value (precision) were used to evaluate the diagnostic properties of the models, and discriminative performance was assessed using the area under the precision-recall curve (AUPRC). For the deep learning model developed on a subset of the HAM10000 dataset, we performed external validation using the Edinburgh Dermofit Library dataset.

Diagnostic properties and discriminative performance from internal validation were high in the binary classification tasks (range: sensitivity 73.3-97.0%, specificity 67-100%, AUPRC 0.87-1). In the multiple classification tasks, sensitivity ranged from 38-100% and specificity from 67-100%, and AUPRC ranged from 0.57 to 1 across the five automated deep learning models. In the external validation using the Edinburgh Dermofit Library dataset, the automated deep learning model showed an AUPRC of 0.47, with a sensitivity of 49% and a positive predictive value of 52%. The quality of the open-access datasets used in this study (including the lack of information about patient flow and demographics) and the absence of measures of precision, such as confidence intervals, were the major limitations of this study.

All models, except the automated deep learning model trained on the multi-label classification task of the NIH CXR14 dataset, showed discriminative performance and diagnostic properties comparable to state-of-the-art deep learning algorithms, although performance in the external validation study was low. The availability of automated deep learning may become a cornerstone of the democratization of sophisticated algorithmic modelling in healthcare, as it allows classification models to be derived without a deep understanding of the underlying mathematical, statistical, and programming principles. Future studies should compare several application programming interfaces on thoroughly curated datasets.
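As a minimal sketch of how the diagnostic metrics reported above (sensitivity/recall, specificity, PPV/precision, AUPRC) could be computed for a binary classifier, the following uses scikit-learn; the arrays and the 0.5 operating point are assumptions for illustration, not study data.

```python
# Hypothetical sketch: diagnostic metrics for a binary classifier
# (scikit-learn). y_true/y_prob are toy placeholders, not study data.
import numpy as np
from sklearn.metrics import average_precision_score, confusion_matrix

y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])                   # toy ground truth
y_prob = np.array([0.9, 0.2, 0.7, 0.4, 0.3, 0.6, 0.8, 0.1])   # toy scores
y_pred = (y_prob >= 0.5).astype(int)                          # assumed threshold

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)                                  # recall
specificity = tn / (tn + fp)
ppv = tp / (tp + fp)                                          # precision
auprc = average_precision_score(y_true, y_prob)               # area under PR curve

print(f"sens={sensitivity:.2f} spec={specificity:.2f} "
      f"ppv={ppv:.2f} AUPRC={auprc:.2f}")
```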


2020 ◽  
Author(s):  
Wenying Zhou ◽  
Yang Yang ◽  
Cheng Yu ◽  
Juxian Liu ◽  
Xingxing Duan ◽  
...  

Abstract: It is still difficult to make an accurate diagnosis of biliary atresia (BA) from sonographic gallbladder images, particularly in rural areas lacking relevant expertise. To provide an artificial intelligence solution to help diagnose BA from sonographic gallbladder images, an ensembled deep learning model was developed from a small set of sonographic images. The model yielded a patient-level sensitivity of 93.1% and specificity of 93.9% (AUROC 0.956) on the multi-center external validation dataset, superior to that of human experts. With the help of the model, the performance of human experts at various experience levels improved further. Moreover, diagnosis by the model based on smartphone photos of sonographic gallbladder images (through a smartphone app) and on video sequences still yielded expert-level performance. Our study provides a deep learning solution to help radiologists improve BA diagnosis in various clinical application scenarios, particularly in rural and undeveloped regions with limited expertise.
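As a minimal sketch of one common way to ensemble member networks and reduce image-level scores to a patient-level prediction, the following averages probabilities across members and then across a patient's images; the mean-probability rule and the 0.5 threshold are assumptions, not necessarily the authors' aggregation scheme.

```python
# Hypothetical sketch: ensembling image-level BA probabilities from
# several member models, then aggregating to a patient-level call.
# The mean-probability rule and threshold are assumptions.
import numpy as np

# member_probs[m][i] = P(BA) from member model m for image i of one patient
member_probs = np.array([
    [0.92, 0.88, 0.95],   # member model 1
    [0.85, 0.90, 0.80],   # member model 2
    [0.91, 0.86, 0.89],   # member model 3
])

image_probs = member_probs.mean(axis=0)   # ensemble: average over members
patient_prob = image_probs.mean()         # aggregate over the patient's images
patient_pred = patient_prob >= 0.5        # assumed operating threshold
print(f"patient P(BA)={patient_prob:.3f}, predicted BA: {patient_pred}")
```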


2021 ◽  
Vol 12 (1) ◽  
Author(s):  
Wenying Zhou ◽  
Yang Yang ◽  
Cheng Yu ◽  
Juxian Liu ◽  
Xingxing Duan ◽  
...  

Abstract: It is still challenging to make an accurate diagnosis of biliary atresia (BA) from sonographic gallbladder images, particularly in rural areas without relevant expertise. To help diagnose BA from sonographic gallbladder images, an ensembled deep learning model is developed. The model yields a patient-level sensitivity of 93.1% and specificity of 93.9% (area under the receiver operating characteristic curve 0.956; 95% confidence interval: 0.928-0.977) on the multi-center external validation dataset, superior to that of human experts. With the help of the model, the performance of human experts at various experience levels improves. Moreover, diagnosis by the model based on smartphone photos of sonographic gallbladder images (through a smartphone app) and on video sequences still yields expert-level performance. The ensembled deep learning model in this study provides a solution to help radiologists improve the diagnosis of BA in various clinical application scenarios, particularly in rural and undeveloped regions with limited expertise.
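As a minimal sketch of how a 95% confidence interval for the AUROC, like the 0.956 (0.928-0.977) reported above, could be estimated, the following applies a standard percentile bootstrap; this is a generic recipe on toy data, not necessarily the authors' method.

```python
# Hypothetical sketch: percentile-bootstrap 95% CI for a patient-level
# AUROC (scikit-learn + NumPy). Labels and scores are toy placeholders.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=200)                          # toy labels
y_prob = np.clip(y_true * 0.6 + rng.normal(0.2, 0.2, 200), 0, 1)  # toy scores

aucs = []
for _ in range(2000):                     # bootstrap replicates
    idx = rng.integers(0, len(y_true), size=len(y_true))
    if y_true[idx].min() == y_true[idx].max():
        continue                          # resample lacks both classes; skip
    aucs.append(roc_auc_score(y_true[idx], y_prob[idx]))

lo, hi = np.percentile(aucs, [2.5, 97.5])
print(f"AUROC={roc_auc_score(y_true, y_prob):.3f} (95% CI: {lo:.3f}-{hi:.3f})")
```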

