Development and validation of a deep learning system for comprehensive imaging quality check to classify body parts and contrast enhancement (Preprint)

2021 ◽  
Author(s):  
Seongwon Na ◽  
Yusub Sung ◽  
Yousun Ko ◽  
Youngbin Shin ◽  
Junghyun Lee ◽  
...  

BACKGROUND Despite the dramatic increase in the use of medical imaging across the therapeutic fields of clinical trials, image quality check is still performed manually by image analysts, which requires substantial manpower and time.
OBJECTIVE This study aimed to develop a deep learning model that simultaneously identifies anatomical locations and contrast enhancement on medical images, and to validate its accuracy and clinical effectiveness in supporting an automated image quality check.
METHODS In this retrospective study, 1,669 computed tomography (CT) images covering five specific anatomical locations were collected from Asan Medical Center and Kangdong Sacred Heart Hospital. To generate the ground truth, two radiologists reviewed the anatomical locations and the presence of contrast enhancement in the collected data. A deep learning framework called ImageQC-net (Image Quality Check-network) was developed with transfer learning from an InceptionResNetV2 model. To evaluate clinical effectiveness, the overall accuracy and the time spent on image quality check were compared between a conventional model and ImageQC-net.
RESULTS The ImageQC-net body-part classification showed excellent performance in both the internal (precision, 100%; recall, 100%; accuracy, 100%) and external validation sets (precision, 99.34%; recall, 99.33%; accuracy, 99.33%). The contrast-enhancement classification likewise achieved 100% precision, recall, and accuracy in the internal validation set and near-100% performance in the external dataset (precision, 99.76%; recall, 99.79%; accuracy, 99.78%). When the best-performing models were integrated, the overall accuracy was 99.1%. For clinical effectiveness, the time reduction with artificial intelligence (AI)-aided quality check was statistically significant for both analysts 1 and 2 (49.7% and 48.3% decrease, respectively; p < 0.001).
CONCLUSIONS Comprehensive AI techniques to identify body parts and contrast enhancement on CT images are highly accurate and can significantly reduce the time spent on image quality checks.
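Precision, recall, and accuracy figures like those reported above are typically derived from a confusion matrix over the classes (here, the five body parts). A minimal sketch in plain Python with macro-averaging; the matrix values are hypothetical, not the study's data:

```python
def classification_metrics(cm):
    """Macro-averaged precision/recall and overall accuracy
    from a square confusion matrix cm[true][predicted]."""
    n = len(cm)
    total = sum(sum(row) for row in cm)
    correct = sum(cm[i][i] for i in range(n))
    precisions, recalls = [], []
    for i in range(n):
        pred_i = sum(cm[r][i] for r in range(n))  # column sum: predicted as class i
        true_i = sum(cm[i])                       # row sum: actually class i
        precisions.append(cm[i][i] / pred_i if pred_i else 0.0)
        recalls.append(cm[i][i] / true_i if true_i else 0.0)
    return sum(precisions) / n, sum(recalls) / n, correct / total

# Hypothetical 3-class example: 2 misclassifications in 30 cases
cm = [[10, 0, 0],
      [1, 9, 0],
      [0, 1, 9]]
precision, recall, accuracy = classification_metrics(cm)
```

Macro-averaging weights each class equally, which is the usual convention when per-class precision and recall are reported alongside overall accuracy, as in the abstract above.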

2020 ◽  
Vol 10 (1) ◽  
Author(s):  
Young-Gon Kim ◽  
Sungchul Kim ◽  
Cristina Eunbee Cho ◽  
In Hye Song ◽  
Hee Jin Lee ◽  
...  

Abstract Fast and accurate confirmation of metastasis on the frozen tissue section of intraoperative sentinel lymph node biopsy is an essential tool for critical surgical decisions. However, accurate diagnosis by pathologists is difficult within the time limitations. Training a robust and accurate deep learning model is also difficult owing to the limited number of frozen datasets with high-quality labels. To overcome these issues, we validated the effectiveness of transfer learning from CAMELYON16 to improve the performance of a convolutional neural network (CNN)-based classification model on our frozen dataset (N = 297) from Asan Medical Center (AMC). Among the 297 whole slide images (WSIs), 157 and 40 WSIs were used to train deep learning models with different dataset ratios of 2, 4, 8, 20, 40, and 100%. The remaining 100 WSIs were used to validate model performance in terms of patch- and slide-level classification. An additional 228 WSIs from Seoul National University Bundang Hospital (SNUBH) were used as an external validation. Three initial weights, i.e., scratch-based (random initialization), ImageNet-based, and CAMELYON16-based models, were used to assess their effectiveness in external validation. In the patch-level classification results on the AMC dataset, CAMELYON16-based models trained with a small dataset (up to 40%, i.e., 62 WSIs) showed a significantly higher area under the curve (AUC) of 0.929 than the scratch- and ImageNet-based models at 0.897 and 0.919, respectively, while CAMELYON16-based and ImageNet-based models trained with 100% of the training dataset showed comparable AUCs at 0.944 and 0.943, respectively. In the external validation, CAMELYON16-based models showed higher AUCs than the scratch- and ImageNet-based models. At the slide level as well, the feasibility of transfer learning to enhance model performance was validated for frozen-section datasets with limited numbers.
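The patch-level AUCs compared above can be computed directly from classifier scores via the Mann-Whitney formulation: AUC is the probability that a randomly chosen positive patch scores higher than a randomly chosen negative one, with ties counted as one half. A small illustration (the scores are invented):

```python
def auc(pos_scores, neg_scores):
    """AUC as P(positive score > negative score), ties counted 0.5."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

pos = [0.9, 0.8, 0.75, 0.4]  # hypothetical scores for metastasis patches
neg = [0.7, 0.3, 0.2, 0.1]   # hypothetical scores for normal patches
patch_auc = auc(pos, neg)
```

The quadratic pairwise loop is fine for an illustration; production code would sort once and use ranks for O(n log n) cost, which yields the identical value.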


2018 ◽  
Vol 211 (6) ◽  
pp. 1184-1193 ◽  
Author(s):  
Kenneth A. Philbrick ◽  
Kotaro Yoshida ◽  
Dai Inoue ◽  
Zeynettin Akkus ◽  
Timothy L. Kline ◽  
...  

2021 ◽  
Vol 94 (1117) ◽  
pp. 20200677
Author(s):  
Andrea Steuwe ◽  
Marie Weber ◽  
Oliver Thomas Bethge ◽  
Christin Rademacher ◽  
Matthias Boschheidgen ◽  
...  

Objectives: Modern reconstruction and post-processing software aims at reducing image noise in CT images, potentially allowing for a reduction of the employed radiation exposure. This study aimed to assess the influence of a novel deep-learning based software on subjective and objective image quality compared with two traditional methods [filtered back-projection (FBP) and iterative reconstruction (IR)].
Methods: In this institutional review board-approved retrospective study, abdominal low-dose CT images of 27 patients (mean age 38 ± 12 years, volumetric CT dose index 2.9 ± 1.8 mGy) were reconstructed with IR and FBP and, furthermore, post-processed using the novel software. For the three reconstructions, qualitative and quantitative image quality was evaluated by means of CT numbers, noise, signal-to-noise ratio (SNR), and contrast-to-noise ratio (CNR) in six different ROIs. Additionally, the reconstructions were compared using SNR, peak SNR, root mean square error, and mean absolute error to assess structural differences.
Results: On average, CT numbers varied within 1 Hounsfield unit (HU) across the three methods in the assessed ROIs. In soft tissue, image noise with the novel software was up to 42% lower than with FBP and up to 27% lower than with IR. Consequently, SNR and CNR were highest with the novel software. Subjective image quality was equal for IR and the novel software, and both were rated higher than FBP images.
Conclusion: The assessed software reduces image noise while maintaining image information, even in comparison to IR, allowing for a potential dose reduction of approximately 20% in abdominal CT imaging.
Advances in knowledge: The assessed software reduces image noise by up to 27% compared with IR and 48% compared with FBP while maintaining image information. The reduced image noise allows for a potential dose reduction of approximately 20% in abdominal imaging.
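The quantitative measures above are straightforward to compute from ROI pixel values in HU. A sketch with invented numbers, assuming the common conventions that noise is the standard deviation within an ROI, SNR is mean/SD, and CNR is the absolute mean difference between a tissue ROI and a reference ROI divided by the reference noise (conventions vary between studies):

```python
from math import sqrt
from statistics import mean, stdev

def roi_stats(hu):
    """Mean HU, noise (SD), and SNR for one ROI's pixel samples."""
    m = mean(hu)
    sd = stdev(hu)
    return m, sd, m / sd

def cnr(tissue_mean, ref_mean, ref_noise):
    return abs(tissue_mean - ref_mean) / ref_noise

def rmse(img_a, img_b):
    """Root mean square error between two flattened images."""
    return sqrt(mean((a - b) ** 2 for a, b in zip(img_a, img_b)))

def mae(img_a, img_b):
    """Mean absolute error between two flattened images."""
    return mean(abs(a - b) for a, b in zip(img_a, img_b))

liver_roi = [58, 62, 60, 61, 59]        # hypothetical HU samples
fat_roi = [-102, -98, -100, -101, -99]  # hypothetical HU samples
```

RMSE and MAE compare two reconstructions of the same slice pixel-by-pixel, which is how structural differences between FBP, IR, and the post-processed images can be quantified.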


2021 ◽  
Vol 11 ◽  
Author(s):  
Bing Kang ◽  
Xianshun Yuan ◽  
Hexiang Wang ◽  
Songnan Qin ◽  
Xuelin Song ◽  
...  

Objective: To develop and evaluate a deep learning model (DLM) for predicting the risk stratification of gastrointestinal stromal tumors (GISTs).
Methods: Preoperative contrast-enhanced CT images of 733 patients with GISTs were retrospectively obtained from two centers between January 2011 and June 2020. The datasets were split into training (n = 241), testing (n = 104), and external validation cohorts (n = 388). A DLM for predicting the risk stratification of GISTs was developed using a convolutional neural network and evaluated in the testing and external validation cohorts. The performance of the DLM was compared with that of a radiomics model using the area under the receiver operating characteristic curve (AUROC) and the Obuchowski index. The attention area of the DLM was visualized as a heatmap by gradient-weighted class activation mapping.
Results: In the testing cohort, the DLM had AUROCs of 0.90 (95% confidence interval [CI]: 0.84, 0.96), 0.80 (95% CI: 0.72, 0.88), and 0.89 (95% CI: 0.83, 0.95) for low-malignant, intermediate-malignant, and high-malignant GISTs, respectively. In the external validation cohort, the AUROCs of the DLM were 0.87 (95% CI: 0.83, 0.91), 0.64 (95% CI: 0.60, 0.68), and 0.85 (95% CI: 0.81, 0.89) for low-malignant, intermediate-malignant, and high-malignant GISTs, respectively. The DLM (Obuchowski index: training, 0.84; external validation, 0.79) outperformed the radiomics model (Obuchowski index: training, 0.77; external validation, 0.77) for predicting the risk stratification of GISTs. The relevant subregions were successfully highlighted with attention heatmaps on the CT images for further clinical review.
Conclusion: The DLM showed good performance for predicting the risk stratification of GISTs from CT images and achieved better performance than the radiomics model.
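The attention heatmaps mentioned above come from gradient-weighted class activation mapping (Grad-CAM): gradients of the class score with respect to the final convolutional feature maps are global-average-pooled into per-channel weights, and the weighted, ReLU-clipped sum of the feature maps becomes the heatmap. A framework-agnostic NumPy sketch of just that arithmetic; the feature maps and gradients here are random stand-ins, not model outputs:

```python
import numpy as np

def grad_cam(feature_maps, gradients):
    """feature_maps, gradients: arrays of shape (C, H, W).
    Returns a heatmap of shape (H, W) normalized to [0, 1]."""
    weights = gradients.mean(axis=(1, 2))              # pool gradients -> (C,)
    cam = np.tensordot(weights, feature_maps, axes=1)  # weighted channel sum -> (H, W)
    cam = np.maximum(cam, 0)                           # ReLU: keep positive influence only
    if cam.max() > 0:
        cam = cam / cam.max()                          # normalize for display
    return cam

rng = np.random.default_rng(0)
fmaps = rng.random((8, 7, 7))  # stand-in for the last conv layer's activations
grads = rng.random((8, 7, 7))  # stand-in for d(class score)/d(activations)
heatmap = grad_cam(fmaps, grads)
```

In practice the gradients come from the framework's autodiff, and the low-resolution heatmap is upsampled onto the CT slice for review, as described in the abstract.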


2021 ◽  
Author(s):  
Haesung Yoon ◽  
Jisoo Kim ◽  
Hyun Ji Lim ◽  
Mi-Jung Lee

Abstract Background Efforts to reduce the radiation dose have continued steadily with new reconstruction techniques. Recently, image-denoising algorithms using artificial neural networks, termed deep learning reconstruction (DLR), have been applied to CT image reconstruction to overcome the drawbacks of iterative reconstruction (IR). The purpose of our study was to compare the objective and subjective image quality of DLR and IR on pediatric abdomen and chest CT images.
Methods This retrospective study included pediatric body CT images from February 2020 to October 2020, performed on 51 patients (34 boys and 17 girls; age 1–18 years). Non-contrast chest CT (n = 16), contrast-enhanced chest CT (n = 12), and contrast-enhanced abdomen CT (n = 23) images were included. Standard 50% adaptive statistical iterative reconstruction V (ASIR-V) images were compared to images with 100% ASIR-V and DLR at medium and high strengths. Attenuation, noise, contrast-to-noise ratio (CNR), and signal-to-noise ratio (SNR) measurements were performed. Overall image quality, artifacts, and noise were subjectively assessed by two radiologists using a four-point scale (superior, average, suboptimal, and unacceptable). Quantitative and qualitative parameters were compared using repeated-measures analysis of variance (ANOVA) with Bonferroni correction and Wilcoxon signed-rank tests.
Results DLR had better CNR and SNR than 50% ASIR-V in both pediatric chest and abdomen CT images. Compared with 50% ASIR-V, high-strength DLR reduced noise in non-contrast chest CT (33.0%), contrast-enhanced chest CT (39.6%), and contrast-enhanced abdomen CT (38.7%), with increases in CNR of 149.1%, 105.8%, and 53.1%, respectively. The subjective assessment of overall image quality and noise was also better on DLR images (p < 0.001). However, there was no significant difference in artifacts between reconstruction methods.
Conclusion Compared with 50% ASIR-V, DLR improved pediatric body CT images with significant noise reduction. However, artifacts were not improved by DLR, regardless of strength.
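The percentage figures above are simple relative changes between paired ROI measurements on the two reconstructions. A trivial helper makes the sign conventions explicit (the HU noise values below are invented, chosen only to reproduce a ~33% reduction like the one reported):

```python
def pct_change(baseline, value):
    """Relative change vs. baseline, in percent (negative = reduction)."""
    return 100.0 * (value - baseline) / baseline

# Hypothetical ROI noise (HU SD) for 50% ASIR-V vs. high-strength DLR
noise_asirv, noise_dlr = 14.2, 9.5
noise_reduction = -pct_change(noise_asirv, noise_dlr)  # percent noise reduction
```

The same helper applied to CNR values gives the CNR increase percentages; the only subtlety is that noise improvements are quoted as positive reductions while CNR improvements are quoted as positive increases.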


2021 ◽  
Author(s):  
Wendy A Cooper ◽  
Laveniya Satgunaseelan ◽  
Ruta Gupta

In a recent study published in Nature Communications by Jiao W et al., a deep learning classifier was trained to predict cancer type based on somatic passenger mutations identified using whole genome sequencing (WGS) as part of the ICGC/TCGA Pan-Cancer Analysis of Whole Genomes (PCAWG) Consortium. The data show that patterns of somatic passenger mutations differ between tumours with different cells of origin. Overall, the system had an accuracy of 91% in a cross-validation setting on the training set, and 88% and 83% on external validation sets of primary and metastatic tumours, respectively. Surprisingly, this is claimed to be twice as accurate as trained pathologists, based on a 27-year-old reference from 1993, prior to the availability and routine utilisation of immunohistochemistry (IHC) in diagnostic pathology, and so not a reflection of current diagnostic standards. We discuss the vital role of pathology in patient care and the importance of using international standards if deep learning methods are to be used in the clinical setting.


2020 ◽  
Author(s):  
Ying Zhu ◽  
Zhen-guo Liu ◽  
Lei Yang ◽  
Kefeng Wang ◽  
Ming-Hui Wang ◽  
...  

Abstract Objectives: Thymoma-associated myasthenia gravis (TAMG) is the most common paraneoplastic syndrome of thymoma. Screening for TAMG before thymoma resection is required to avoid severe perioperative complications, especially respiratory failure. Herein, we developed a 3D DenseNet deep learning (DL) model based on preoperative computed tomography (CT) to detect TAMG in thymoma patients.
Methods: A large cohort of 230 thymoma patients was enrolled. Of these, 182 thymoma patients (81 with TAMG, 101 without TAMG) were used for training and model building, and 48 cases from another hospital were used for external validation. A 3D-DenseNet-DL model and five machine learning models with radiomics features were evaluated for detecting TAMG in thymoma patients. A comprehensive analysis integrating the 3D-DenseNet-DL model and general CT image features, named the 3D-DenseNet-DL-based multi-model, was also performed to establish a more effective prediction model.
Results: On detailed comparison of prediction efficacy, the 3D-DenseNet-DL model effectively identified TAMG patients, with a mean area under the ROC curve (AUC), accuracy, sensitivity, and specificity of 0.734, 0.724, 0.787, and 0.672, respectively. The 3D-DenseNet-DL-based multi-model further improved on these metrics: AUC 0.766, accuracy 0.790, sensitivity 0.739, and specificity 0.801. External verification confirmed the feasibility of this DL-based multi-model, with AUC 0.730, accuracy 0.732, sensitivity 0.700, and specificity 0.690.
Conclusions: Our 3D-DenseNet-DL model can effectively detect TAMG in patients with thymoma based on preoperative CT images. This model may serve as a non-invasive screening method or as a supplement to the conventional diagnostic criteria for identifying TAMG.
Key points: Thymoma-associated myasthenia gravis (TAMG) is a common paraneoplastic syndrome. The 3D-DenseNet-DL model can effectively detect TAMG based on preoperative CT images. This model may serve as a supplement for identifying TAMG.
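The sensitivity and specificity reported above follow from the binary confusion counts of the TAMG-vs-no-TAMG decision. A minimal sketch (the counts are hypothetical, not the study's validation data):

```python
def binary_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity, and accuracy from confusion counts."""
    sensitivity = tp / (tp + fn)  # fraction of TAMG patients detected
    specificity = tn / (tn + fp)  # fraction of non-TAMG patients cleared
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return sensitivity, specificity, accuracy

# Hypothetical counts: 20 TAMG-positive and 28 TAMG-negative patients
sens, spec, acc = binary_metrics(tp=14, fp=9, tn=19, fn=6)
```

For a screening use case like the one proposed, sensitivity is usually the metric to prioritize, since a missed TAMG case carries the perioperative risk the screening is meant to avoid.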


2021 ◽  
Vol 11 ◽  
Author(s):  
Ge Ren ◽  
Sai-kit Lam ◽  
Jiang Zhang ◽  
Haonan Xiao ◽  
Andy Lai-yin Cheung ◽  
...  

Functional lung avoidance radiation therapy aims to minimize dose delivery to normal lung tissue while favoring dose deposition in defective lung tissue, based on regional function information. However, the clinical acquisition of pulmonary functional images is resource-demanding, inconvenient, and technically challenging. This study investigates deep learning-based synthesis of lung functional images from the CT domain. Forty-two pulmonary macro-aggregated albumin SPECT/CT perfusion scans were retrospectively collected from the hospital. A deep learning-based framework (comprising image preparation, image processing, and a proposed convolutional neural network) was adopted to extract features from 3D CT images and synthesize perfusion as an estimate of regional lung function. Ablation experiments assessed the effect of each framework component by removing it and analyzing the testing performance. Removal of the CT contrast enhancement step in the image processing caused the largest drop in framework performance, ~12% below the optimal performance. In the CNN, the three components (residual module, ROI attention, and skip attention) were approximately equally important to the framework performance; removing any one of them reduced performance by 3–5%. Compared with the U-Net model, the proposed CNN improved overall performance by ~4% and computational efficiency by ~350%. The deep convolutional neural network, in conjunction with image processing for feature enhancement, is capable of extracting features from CT images for pulmonary perfusion synthesis. In the proposed framework, image processing, especially CT contrast enhancement, plays a crucial role in the perfusion synthesis. This CTPM framework provides insights for future research and a basis that other researchers can leverage to develop optimized CNN models for functional lung avoidance radiation therapy.
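The residual module in the ablation above is the standard identity-skip pattern: the block's output is added back to its input, so the block only needs to learn a residual correction. A toy NumPy sketch of the forward pass; the "conv" here is a stand-in 1x1 channel mix, not the paper's architecture:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0)

def residual_block(x, w):
    """x: (C, H, W) feature maps; w: (C, C) channel-mixing weights.
    Output = ReLU(conv(x)) + x, i.e., an identity skip connection."""
    mixed = np.tensordot(w, x, axes=1)  # 1x1-conv-like channel mix -> (C, H, W)
    return relu(mixed) + x              # skip connection preserves the input signal

rng = np.random.default_rng(1)
x = rng.standard_normal((4, 5, 5))
w = np.zeros((4, 4))  # zero weights: the block reduces to the identity
y = residual_block(x, w)
```

The zero-weight case shows the design rationale: even an untrained (or unhelpful) block cannot destroy the signal flowing through the skip path, which is what makes deep stacks of such blocks trainable.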


2019 ◽  
Vol 37 (15_suppl) ◽  
pp. e13154-e13154
Author(s):  
Li Bai ◽  
Yanqing Zhou ◽  
Yaru Chen ◽  
Quanxing Liu ◽  
Dong Zhou ◽  
...  

e13154 Background: Many people harbor pulmonary nodules. Such nodules can be detected by low-dose computed tomography (LDCT) during regular physical examinations. If a pulmonary nodule is small (i.e., <10 mm), it is very difficult to determine whether it is benign or malignant from CT images alone. To address this problem, we developed a method based on liquid biopsy and deep learning to improve the diagnostic accuracy for pulmonary nodules. Methods: Thirty-eight patients harboring one or more small pulmonary nodules were enrolled in this study. Twenty-nine patients were diagnosed with cancer (stage I = 21, stage II = 1, stage III = 3, stage IV = 4) by tissue biopsy, while the other 9 patients were diagnosed with benign tumors or lung diseases other than cancer. For each patient, a blood sample was obtained prior to biopsy, and the cell-free DNA (cfDNA) was sequenced using a 451-gene panel to a depth of 20,000×. The unique molecular identifier (UMI) technique was applied to reduce false positives. Seventeen patients also had full-resolution CT images available. A deep learning system based primarily on deep convolutional neural networks (CNNs) was used to analyze these CT images. Results: Sequence analysis of blood samples revealed that 75.8% (22/29) of cancer patients had detectable cancer-related mutations, while only 1 of 9 (11.1%) non-cancer patients was found to carry a TP53 mutation. The most frequent mutations in cancer patients involved the genes TP53 (N = 11), EGFR (N = 7), and KRAS (N = 3), with mutant allele fractions varying from 0.08% to 74.77%. Deep learning analysis of the 17 available CT image sets correctly identified cancers in 88.2% (15/17) of patients. However, by combining the liquid biopsy and image analysis results, all 17 patients were correctly diagnosed. Conclusions: Deep learning-based analysis of CT images can be applied to the early diagnosis of lung cancers, but the accuracy of image analysis alone is only moderate. Diagnostic accuracy can be greatly improved by using liquid biopsy as an auxiliary method in patients with pulmonary nodules.
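The combined diagnosis described above amounts to a parallel (either-positive) decision rule: call cancer if either the liquid biopsy or the CT model is positive. A sketch of that rule and its accuracy; the labels and predictions below are invented for illustration, not the study's 17 patients:

```python
def combined_or(pred_biopsy, pred_imaging):
    """Parallel combination: positive if either test is positive."""
    return [b or i for b, i in zip(pred_biopsy, pred_imaging)]

def accuracy(pred, truth):
    return sum(p == t for p, t in zip(pred, truth)) / len(truth)

truth   = [1, 1, 1, 1, 0, 0]  # hypothetical ground truth (1 = cancer)
biopsy  = [1, 0, 1, 1, 0, 0]  # misses one cancer
imaging = [0, 1, 1, 1, 0, 0]  # misses a different cancer
combined = combined_or(biopsy, imaging)
```

The either-positive rule raises sensitivity whenever the two tests miss different patients, as in this toy example, but it can only lower specificity; the abstract's result suggests the two modalities' errors were indeed complementary in this cohort.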

