Deep Learning for Computer-aided Diagnosis of Pneumoconiosis

Author(s):  
Zheng Wang ◽  
Qingjun Qian ◽  
Jianfang Zhang ◽  
Caihong Duo ◽  
Wen He ◽  
...  

Abstract Background: The diagnosis of pneumoconiosis relies primarily on chest radiographs and exhibits significant variability between physicians. Computer-aided diagnosis (CAD) can improve the accuracy and consistency of these diagnoses. However, CAD based on traditional machine learning requires extensive human intervention and time-consuming training. As such, deep learning has become a popular tool for the development of CAD models. In this study, the clinical applicability of CAD based on deep learning was verified for pneumoconiosis patients. Methods: Chest radiographs were collected from 5424 occupational health examinees who met the inclusion criteria. The data were divided into training, validation, and test sets. The CAD algorithm was trained and applied to the validation set, while the test set was used to evaluate diagnostic efficacy. Three junior and three senior physicians provided independent diagnoses using images from the test set and a comprehensive diagnosis for comparison with the CAD results. A receiver operating characteristic (ROC) curve was used to evaluate the diagnostic efficiency of the proposed CAD system. A McNemar test was used to evaluate diagnostic sensitivity and specificity for pneumoconiosis, both before and after the use of CAD. A kappa consistency test was used to evaluate diagnostic consistency for both the algorithm and the clinicians. Results: ROC results suggested the proposed CAD model achieved high accuracy in the diagnosis of pneumoconiosis, with a kappa value of 0.90. The sensitivity, specificity, and kappa values for the junior doctors increased from 0.86 to 0.98, 0.68 to 0.86, and 0.54 to 0.84, respectively (p < 0.05), when CAD was applied. However, metrics for the senior doctors were not significantly different. Conclusion: Deep learning-based CAD can improve the sensitivity, specificity, and consistency of pneumoconiosis diagnoses, particularly for junior physicians.
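The kappa consistency statistic used in this study can be computed directly from paired reads. A minimal sketch for two binary raters (the function name and toy labels below are illustrative, not from the study's data):

```python
def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two binary raters (e.g., 1 = pneumoconiosis, 0 = normal).

    kappa = (po - pe) / (1 - pe), where po is observed agreement and pe is
    the agreement expected by chance from each rater's marginal frequencies.
    (Undefined when pe == 1, i.e., both raters always give the same label.)
    """
    n = len(rater_a)
    po = sum(a == b for a, b in zip(rater_a, rater_b)) / n  # observed agreement
    p_a1 = sum(rater_a) / n  # rater A's positive rate
    p_b1 = sum(rater_b) / n  # rater B's positive rate
    pe = p_a1 * p_b1 + (1 - p_a1) * (1 - p_b1)  # chance agreement
    return (po - pe) / (1 - pe)
```

Perfect agreement yields kappa = 1, while agreement no better than chance yields kappa = 0, which is why kappa rather than raw accuracy is used to compare readers.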

2021 ◽  
Author(s):  
Hongtao Ji ◽  
Qiang Zhu ◽  
Teng Ma ◽  
Yun Cheng ◽  
Shuai Zhou ◽  
...  

Abstract Background: Significant differences exist in classification outcomes for radiologists using the ultrasonography-based breast imaging-reporting and data system for diagnosing category 3–5 (BI-RADS-US 3–5) breast nodules, due to a lack of clear and distinguishing image features. As such, this study investigates the use of a transformer-based computer-aided diagnosis (CAD) model for improved BI-RADS-US 3–5 classification consistency. Methods: Five radiologists independently performed BI-RADS-US annotations on a breast ultrasonography image set collected from 20 hospitals in China. The data were divided into training, validation, testing, and sampling sets. The trained transformer-based CAD model was then used to classify test images, for which sensitivity, specificity, and accuracy were calculated. Variations in these metrics among the five radiologists were analyzed by referencing the BI-RADS-US classification results for the sampling test set, provided by CAD, to determine whether classification consistency (the kappa value), sensitivity, specificity, and accuracy had improved. Results: Classification accuracy for the CAD model applied to the test set was 95.7% for category 3 nodules, 97.6% for category 4A nodules, 95.6% for category 4B nodules, 94.2% for category 4C nodules, and 97.5% for category 5 nodules. Adjustments were made to 1,583 nodules in the sampling test set, with 905 reclassified to a higher category and 678 to a lower category. As a result, the accuracy, sensitivity, and specificity of classification by each radiologist improved, with consistency (kappa values) for all radiologists increasing to >0.60. Conclusions: The proposed transformer-based CAD model improved BI-RADS-US 3–5 nodule classification by individual radiologists and increased diagnostic consistency.


2019 ◽  
Vol 16 (10) ◽  
pp. 4202-4213
Author(s):  
Priyanka Malhotra ◽  
Sheifali Gupta ◽  
Deepika Koundal

Pneumonia is a deadly chest disease and a major culprit behind numerous deaths every year. Chest radiographs (CXRs) are commonly used for quick and inexpensive diagnosis of chest diseases, but interpreting CXRs to diagnose pneumonia is difficult. This has created interest in computer-aided diagnosis (CAD) for CXR images. In this study, a brief review of the literature on computer-aided analysis of chest radiographs for pneumonia identification using different machine learning and deep learning models is presented, together with a comparison of these techniques. In addition, the study presents various publicly available chest X-ray datasets for training, testing, and validation of deep learning models.


2019 ◽  
Vol 5 (1) ◽  
pp. 223-226
Author(s):  
Max-Heinrich Laves ◽  
Sontje Ihler ◽  
Tobias Ortmaier ◽  
Lüder A. Kahrs

Abstract In this work, we discuss epistemic uncertainty estimation obtained by Bayesian inference in diagnostic classifiers and show that prediction uncertainty correlates strongly with goodness of prediction. We train the ResNet-18 image classifier on a dataset of 84,484 optical coherence tomography scans showing four different retinal conditions. Dropout is added before every building block of ResNet, creating an approximation to a Bayesian classifier. Monte Carlo sampling with dropout is applied at test time for uncertainty estimation: multiple forward passes are performed to obtain a distribution over the class labels, and the variance and entropy of this distribution are used as uncertainty metrics. Our results show a strong correlation of ρ = 0.99 between prediction uncertainty and prediction error. The mean uncertainty of incorrectly diagnosed cases was significantly higher than that of correctly diagnosed cases. Modeling prediction uncertainty in computer-aided diagnosis with deep learning yields more reliable results and is therefore expected to increase patient safety. This will help to transfer such systems into clinical routine and to increase the acceptance of machine learning in diagnosis from the standpoint of physicians and patients.
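The variance and entropy metrics described above can be sketched without any deep learning framework: given the softmax outputs of T stochastic forward passes (dropout left on at test time), the predictive distribution is their mean, and uncertainty is read off as that mean's entropy plus the per-class variance across passes. This is a toy sketch of the metrics only; the study itself uses ResNet-18 with dropout before each building block:

```python
import math

def mc_dropout_uncertainty(softmax_samples):
    """Summarize T Monte Carlo forward passes.

    softmax_samples: list of T probability vectors, one per stochastic pass.
    Returns (mean prediction, predictive entropy, per-class variance).
    """
    T = len(softmax_samples)
    k = len(softmax_samples[0])
    # Predictive distribution: average the sampled softmax outputs.
    mean = [sum(s[c] for s in softmax_samples) / T for c in range(k)]
    # Predictive entropy: high when the averaged prediction is spread out.
    entropy = -sum(p * math.log(p) for p in mean if p > 0)
    # Per-class variance: high when the passes disagree with each other.
    var = [sum((s[c] - mean[c]) ** 2 for s in softmax_samples) / T
           for c in range(k)]
    return mean, entropy, var
```

When every pass returns the same confident vector, both entropy and variance are zero; disagreeing passes drive both metrics up, which is the signal the paper correlates with prediction error.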


Diagnostics ◽  
2021 ◽  
Vol 11 (6) ◽  
pp. 973
Author(s):  
Valentina Giannini ◽  
Simone Mazzetti ◽  
Giovanni Cappello ◽  
Valeria Maria Doronzio ◽  
Lorenzo Vassallo ◽  
...  

Recently, Computer Aided Diagnosis (CAD) systems have been proposed to help radiologists detect and characterize Prostate Cancer (PCa). However, few studies have evaluated the performance of these systems in a clinical setting, especially when used by non-experienced readers. The main aim of this study is to assess the diagnostic performance of non-experienced readers when reporting assisted by the likelihood map generated by a CAD system, and to compare the results with unassisted interpretation. Three resident radiologists were asked to review multiparametric MRI of patients with and without PCa, both unassisted and assisted by a CAD system. In both reading sessions, residents recorded all positive cases, and sensitivity, specificity, and negative and positive predictive values were computed and compared. The dataset comprised 90 patients (45 with at least one clinically significant biopsy-confirmed PCa). Sensitivity increased significantly in the CAD-assisted mode for patients with at least one clinically significant lesion (GS > 6) (68.7% vs. 78.1%, p = 0.018). Overall specificity was not statistically different between unassisted and assisted sessions (94.8% vs. 89.6%, p = 0.072). The use of the CAD system significantly increases the per-patient sensitivity of inexperienced readers in the detection of clinically significant PCa, without negatively affecting specificity, while significantly reducing overall reporting time.
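The four per-patient metrics compared between reading sessions follow directly from confusion-matrix counts. A minimal sketch (the counts in the usage line are hypothetical, not the study's data):

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Per-patient diagnostic metrics from confusion-matrix counts.

    tp/fn: patients with clinically significant disease, flagged / missed.
    tn/fp: disease-free patients, correctly cleared / falsely flagged.
    """
    return {
        "sensitivity": tp / (tp + fn),  # fraction of diseased patients detected
        "specificity": tn / (tn + fp),  # fraction of healthy patients cleared
        "ppv": tp / (tp + fp),          # positive predictive value
        "npv": tn / (tn + fn),          # negative predictive value
    }

# Hypothetical reading session: 9 true positives, 1 miss, 8 true
# negatives, 2 false alarms.
metrics = diagnostic_metrics(tp=9, fp=2, tn=8, fn=1)
```

Comparing these dictionaries for the unassisted and CAD-assisted sessions is exactly the per-reader comparison the study reports.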


Diagnostics ◽  
2021 ◽  
Vol 11 (4) ◽  
pp. 694
Author(s):  
Xuejiao Pang ◽  
Zijian Zhao ◽  
Ying Weng

At present, the application of artificial intelligence (AI) based on deep learning in the medical field has become more extensive and better suited to clinical practice than traditional machine learning. Applying traditional machine learning approaches to clinical practice is very challenging because medical data are usually unstructured and lack characteristic, hand-craftable features. In contrast, deep learning methods with self-learning abilities can effectively exploit powerful computing resources to learn intricate and abstract features. They are therefore promising for the classification and detection of lesions in gastrointestinal endoscopy using a computer-aided diagnosis (CAD) system based on deep learning. This study reviewed the development of CAD systems based on deep learning for assisting doctors in classifying and detecting lesions in the stomach, intestines, and esophagus. It also summarized the limitations of current methods and presented a prospect for future research.


2021 ◽  
Vol 11 (2) ◽  
pp. 760
Author(s):  
Yun-ji Kim ◽  
Hyun Chin Cho ◽  
Hyun-chong Cho

Gastric cancer has a high mortality rate worldwide, but it can be prevented through early detection during regular gastroscopy. Herein, we propose a deep learning-based computer-aided diagnosis (CADx) system applying data augmentation to help doctors classify gastroscopy images as normal or abnormal. To improve the performance of deep learning, a large amount of training data is required. However, the collection of medical data, owing to its nature, is highly expensive and time consuming. Therefore, additional data were generated through deep convolutional generative adversarial networks (DCGAN), and 25 augmentation policies optimized for the CIFAR-10 dataset were applied through AutoAugment. Each gastroscopy image was augmented, only high-quality images were selected through an image quality-measurement method, and gastroscopy images were classified as normal or abnormal by the Xception network. We compared the performance of the original (unaugmented) training dataset, the dataset generated through the DCGAN, the dataset augmented through the CIFAR-10 augmentation policies, and the dataset combining the two methods. The combined dataset delivered the best accuracy (0.851), an improvement of 0.06 over the original training dataset. We confirmed that augmenting data through the DCGAN and the CIFAR-10 augmentation policies is the most suitable approach for this classification model for normal and abnormal gastric endoscopy images. The proposed method not only alleviates the medical-data scarcity problem but also improves the accuracy of gastric disease diagnosis.
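AutoAugment policies like those mentioned above are lists of (operation, probability, magnitude) sub-policies applied stochastically per image. A toy sketch of how such a policy is applied; the operations and the nested-list "image" here are stand-ins, while the real CIFAR-10 policies use rotations, shears, and color transforms on pixel data:

```python
import random

def flip_horizontal(img, _magnitude):
    """Mirror a nested-list image left-to-right (magnitude unused)."""
    return [row[::-1] for row in img]

def identity(img, _magnitude):
    """No-op placeholder standing in for a richer transform."""
    return img

# Each sub-policy: (operation, application probability, magnitude).
POLICY = [(flip_horizontal, 0.5, 0), (identity, 1.0, 0)]

def apply_policy(img, policy, rng=random):
    """Apply each sub-policy stochastically, as AutoAugment does per image."""
    for op, prob, magnitude in policy:
        if rng.random() < prob:
            img = op(img, magnitude)
    return img
```

Running `apply_policy` once per training image per epoch yields a stream of varied samples without collecting new data, which is the role the CIFAR-10 policies play in this pipeline.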


Author(s):  
Kamyab Keshtkar

As a relatively high percentage of adenoma polyps are missed, a computer-aided diagnosis (CAD) tool based on deep learning can aid the endoscopist in diagnosing colorectal polyps or colorectal cancer, in order to decrease the polyp miss rate and prevent colorectal cancer mortality. The Convolutional Neural Network (CNN) is a deep learning method that, over the last decade, has achieved better results in detecting and segmenting specific objects in images than conventional models such as regression, support vector machines, or artificial neural networks. In recent years, across studies in medical imaging, CNN models have achieved promising results in detecting masses and lesions in various body organs, including colorectal polyps. In this review, the structure and architecture of CNN models, and how colonoscopy images are processed as input and converted to output, are explained in detail. In most primary studies in the colorectal polyp detection and classification field, the CNN model has been regarded as a black box, since the calculations performed at the different layers during training have not been clarified precisely. Furthermore, I discuss the differences between CNNs and conventional models, inspect how to train a CNN model for diagnosing colorectal polyps or cancer, and evaluate model performance after training.
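The core operation a CNN layer applies to a colonoscopy frame is a sliding-window convolution (strictly, cross-correlation) of the image with a learned kernel. A minimal valid-mode sketch on a single-channel nested-list image, purely illustrative of the mechanism; real models stack many such kernels per layer and run on optimized tensor libraries:

```python
def conv2d(image, kernel):
    """Valid-mode 2D cross-correlation of a nested-list image with a kernel.

    The kernel slides over every position where it fits entirely inside the
    image; each output value is the sum of elementwise products, i.e. a
    single learned feature response at that location.
    """
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(ih - kh + 1):
        row = []
        for j in range(iw - kw + 1):
            row.append(sum(image[i + di][j + dj] * kernel[di][dj]
                           for di in range(kh) for dj in range(kw)))
        out.append(row)
    return out
```

Stacking such layers, each followed by a nonlinearity and pooling, is what lets a CNN turn raw colonoscopy pixels into the abstract features used for polyp detection.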

