Deep Learning Models for Poorly Differentiated Colorectal Adenocarcinoma Classification in Whole Slide Images Using Transfer Learning

Diagnostics ◽  
2021 ◽  
Vol 11 (11) ◽  
pp. 2074
Author(s):  
Masayuki Tsuneki ◽  
Fahdi Kanavati

Colorectal poorly differentiated adenocarcinoma (ADC) is known to have a poor prognosis compared with well-to-moderately differentiated ADC. The frequency of poorly differentiated ADC is relatively low (usually less than 5% of colorectal carcinomas). Histopathological diagnosis based on endoscopic biopsy specimens is currently the most cost-effective method to perform as part of colonoscopic screening in average-risk patients, and it is an area that could benefit from AI-based tools to aid pathologists in their clinical workflows. In this study, we trained deep learning models to classify poorly differentiated colorectal ADC from Whole Slide Images (WSIs) using a simple transfer learning method. We evaluated the models on a combination of test sets obtained from five distinct sources, achieving receiver operating characteristic (ROC) area under the curve (AUC) values of up to 0.95 on 1799 test cases.
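Although the abstract does not include code, the "simple transfer learning method" it describes follows a common pattern: fine-tune an ImageNet-pretrained CNN on tiles extracted from the WSIs. A minimal sketch of that pattern is below; the backbone choice (ResNet50), directory layout, and hyperparameters are illustrative assumptions, not the authors' actual setup.

```python
# Hypothetical sketch of tile-level transfer learning for WSI classification.
# Assumes tiles have already been extracted from the WSIs into class folders;
# paths and hyperparameters are illustrative, not the authors' actual setup.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# ImageNet-pretrained backbone; only the classification head is replaced.
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
model.fc = nn.Linear(model.fc.in_features, 2)  # poorly diff. ADC vs. other
model = model.to(device)

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

# "tiles/train" is a placeholder directory of pre-extracted WSI tiles.
train_set = datasets.ImageFolder("tiles/train", transform=preprocess)
loader = DataLoader(train_set, batch_size=32, shuffle=True, num_workers=4)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

model.train()
for epoch in range(5):
    for tiles, labels in loader:
        tiles, labels = tiles.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(tiles), labels)
        loss.backward()
        optimizer.step()
```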

2021 ◽  
Author(s):  
Masayuki Tsuneki ◽  
Fahdi Kanavati

Colorectal poorly differentiated adenocarcinoma (ADC) is known to have a poor prognosis compared with well-to-moderately differentiated ADC. The frequency of poorly differentiated ADC is relatively low (usually less than 5% of colorectal carcinomas). Histopathological diagnosis based on endoscopic biopsy specimens is currently the most cost-effective method to perform as part of colonoscopic screening in average-risk patients, and it is an area that could benefit from AI-based tools to aid pathologists in their clinical workflows. In this study, we trained deep learning models to classify poorly differentiated colorectal ADC from Whole Slide Images (WSIs) using a simple transfer learning method. We evaluated the models on a combination of test sets obtained from five distinct sources, achieving receiver operating characteristic (ROC) area under the curve (AUC) values in the range of 0.94-0.98.


2022 ◽  
Vol 2161 (1) ◽  
pp. 012078
Author(s):  
Pallavi R Mane ◽  
Rajat Shenoy ◽  
Ghanashyama Prabhu

Abstract COVID-19 is a deadly, dangerous, and contagious disease caused by the novel coronavirus. It is very important to detect COVID-19 infection accurately and as quickly as possible to prevent its spread. Deep learning methods can significantly improve the efficiency and accuracy of reading chest X-rays (CXRs). Existing deep learning models, with further fine-tuning, provide cost-effective, rapid, and better classification results. This paper deploys well-studied AI tools, with modifications, on X-ray images to classify COVID-19. This research performs five experiments to classify COVID-19 CXRs from normal and viral pneumonia CXRs using convolutional neural networks (CNNs). Four experiments were performed on state-of-the-art pre-trained models using transfer learning, and one experiment was performed using a CNN designed from scratch. The dataset used for the experiments consists of chest X-ray images from the Kaggle dataset and other publicly accessible sources. The data was split into three parts: 90% was retained for training the models, and 5% each was used for validation and testing of the constructed models. The four transfer learning models used were Inception, Xception, ResNet, and VGG19, which achieved test accuracies of 93.07%, 94.8%, 67.5%, and 91.1%, respectively, while our CNN model achieved 94.6%.
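As a rough illustration of one such experiment, the sketch below fine-tunes a frozen Xception backbone (the best-performing transfer model above) for the three-class CXR task. Paths, image size, and training settings are assumptions, not the paper's exact configuration.

```python
# Illustrative sketch of one transfer-learning experiment (Xception backbone);
# directory names, image size, and training settings are assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models

IMG_SIZE = (299, 299)  # Xception's native input resolution

# Placeholder directories produced by a 90/5/5 train/validation/test split.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "cxr/train", image_size=IMG_SIZE, batch_size=32)
val_ds = tf.keras.utils.image_dataset_from_directory(
    "cxr/val", image_size=IMG_SIZE, batch_size=32)

base = tf.keras.applications.Xception(
    include_top=False, weights="imagenet", pooling="avg")
base.trainable = False  # freeze the pretrained features, train the head only

model = models.Sequential([
    layers.Rescaling(1.0 / 127.5, offset=-1),  # Xception expects [-1, 1]
    base,
    layers.Dropout(0.3),
    layers.Dense(3, activation="softmax"),  # COVID-19 / normal / viral pneumonia
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, validation_data=val_ds, epochs=10)
```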


2021 ◽  
Author(s):  
kaiwen wu ◽  
Bo Xu ◽  
Ying Wu

Abstract Manual interpretation of breast ultrasound images imposes a heavy workload on radiologists and is prone to misdiagnosis. Traditional machine learning methods and deep learning methods require huge datasets and a lot of time for training. To solve these problems, this paper proposed a deep transfer learning method, comparing ResNet18 and ResNet50 models pre-trained on the ImageNet dataset against the same ResNet18 and ResNet50 models without pre-training. The dataset consists of 131 breast ultrasound images (109 benign and 22 malignant), all of which were collected, labeled, and provided by the UDIAT Diagnostic Center. The experimental results showed that the pre-trained ResNet18 model has the best classification performance on breast ultrasound images: it achieved an accuracy of 93.9%, an F1-score of 0.94, and an area under the receiver operating characteristic curve (AUC) of 0.944. Compared with ordinary deep learning models, its classification performance was greatly improved, demonstrating the significant advantages of deep transfer learning in the classification of small samples of medical images.
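The comparison described above amounts to four model variants, sketched below with torchvision; the specific weight versions and the binary head are assumptions consistent with the abstract, not the authors' code.

```python
# Sketch of the pretrained-vs-scratch comparison: ResNet18 and ResNet50,
# each built with and without ImageNet weights, for benign/malignant
# classification. Dataset loading and training are omitted.
import torch.nn as nn
from torchvision import models

def build_model(arch: str, pretrained: bool) -> nn.Module:
    if arch == "resnet18":
        weights = models.ResNet18_Weights.IMAGENET1K_V1 if pretrained else None
        model = models.resnet18(weights=weights)
    else:
        weights = models.ResNet50_Weights.IMAGENET1K_V2 if pretrained else None
        model = models.resnet50(weights=weights)
    model.fc = nn.Linear(model.fc.in_features, 2)  # benign vs. malignant
    return model

# The four configurations compared in the study.
variants = {(arch, pre): build_model(arch, pre)
            for arch in ("resnet18", "resnet50") for pre in (True, False)}
```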


2021 ◽  
Vol 11 (9) ◽  
pp. 4233
Author(s):  
Biprodip Pal ◽  
Debashis Gupta ◽  
Md. Rashed-Al-Mahfuz ◽  
Salem A. Alyami ◽  
Mohammad Ali Moni

The COVID-19 pandemic requires the rapid isolation of infected patients. Thus, high-sensitivity radiology images could be a key technique for diagnosing patients alongside the polymerase chain reaction approach. Deep learning algorithms have been proposed in several studies to detect COVID-19 symptoms due to their success in chest radiography image classification, their cost efficiency, the lack of expert radiologists, and the need for faster processing during the pandemic. Most of the promising algorithms proposed in different studies are based on pre-trained deep learning models. Such open-source models and the lack of variation in the radiology image-capturing environment make the diagnosis system vulnerable to adversarial attacks such as the fast gradient sign method (FGSM) attack. This study therefore explored the potential vulnerability of pre-trained convolutional neural network algorithms to the FGSM attack in terms of two frequently used models, VGG16 and Inception-v3. Firstly, we developed two transfer learning models for X-ray and CT image-based COVID-19 classification and analyzed their performance extensively in terms of accuracy, precision, recall, and AUC. Secondly, our study illustrates that misclassification can occur with a very minor perturbation magnitude, such as 0.009 and 0.003 for the FGSM attack on these models for X-ray and CT images, respectively, without any effect on the visual perceptibility of the perturbation. In addition, we demonstrated that a successful FGSM attack can decrease the classification performance to 16.67% and 55.56% for X-ray images, as well as to 36% and 40% in the case of CT images, for VGG16 and Inception-v3, respectively, without any human-recognizable perturbation effects in the adversarial images. Finally, we showed that the correct class probability of any test image, which is supposed to be 1, can drop for both considered models; with increased perturbation, it can drop to 0.24 and 0.17 for the VGG16 model in the cases of X-ray and CT images, respectively. Thus, despite the need for data sharing and automated diagnosis, practical deployment of such programs requires greater robustness.
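FGSM itself is well defined: each input pixel is shifted by epsilon in the direction of the sign of the loss gradient. A minimal PyTorch sketch follows, with the epsilon values quoted above used purely for illustration.

```python
# Minimal FGSM sketch: perturb each pixel by epsilon in the direction of
# the loss gradient's sign, then clamp back to the valid pixel range.
import torch
import torch.nn.functional as F

def fgsm_attack(model, images, labels, epsilon):
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    loss.backward()
    # One signed-gradient step; assumes inputs are scaled to [0, 1].
    adv = images + epsilon * images.grad.sign()
    return adv.clamp(0.0, 1.0).detach()

# e.g. adv_xray = fgsm_attack(vgg16_model, xray_batch, labels, epsilon=0.009)
# e.g. adv_ct   = fgsm_attack(vgg16_model, ct_batch,   labels, epsilon=0.003)
```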


Cancers ◽  
2021 ◽  
Vol 13 (4) ◽  
pp. 786
Author(s):  
Daniel M. Lang ◽  
Jan C. Peeken ◽  
Stephanie E. Combs ◽  
Jan J. Wilkens ◽  
Stefan Bartzsch

Infection with the human papillomavirus (HPV) has been identified as a major risk factor for oropharyngeal cancer (OPC). HPV-related OPCs have been shown to be more radiosensitive and to have a reduced risk of cancer-related death. Hence, the histological determination of a cancer patient's HPV status represents an essential diagnostic factor. We investigated the ability of deep learning models to perform imaging-based HPV status detection. To overcome the problem of small medical datasets, we used a transfer learning approach. A 3D convolutional network pre-trained on sports video clips was fine-tuned such that the full 3D information in the CT images could be exploited. The video pre-trained model was able to differentiate HPV-positive from HPV-negative cases, with an area under the receiver operating characteristic curve (AUC) of 0.81 on an external test set. In comparison to a 3D convolutional neural network (CNN) trained from scratch and a 2D architecture pre-trained on ImageNet, the video pre-trained model performed best. Deep learning models are thus capable of CT image-based HPV status determination. Video-based pre-training has the ability to improve training on 3D medical data, but further studies are needed for verification.
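As a hedged sketch of video-based pre-training, the snippet below fine-tunes torchvision's r3d_18 (pretrained on Kinetics-400 video clips) as a stand-in for the paper's sports-video-pretrained 3D network; the study's actual backbone and preprocessing may differ.

```python
# Sketch of fine-tuning a video-pretrained 3D CNN for binary HPV status
# classification. CT volumes are assumed to be preprocessed into
# (batch, 3, frames, H, W) tensors; the backbone is an assumption.
import torch.nn as nn
from torchvision.models.video import r3d_18, R3D_18_Weights

model = r3d_18(weights=R3D_18_Weights.KINETICS400_V1)
model.fc = nn.Linear(model.fc.in_features, 2)  # HPV-positive vs. HPV-negative
# From here, training proceeds as ordinary fine-tuning on the CT volumes.
```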


Sensors ◽  
2021 ◽  
Vol 21 (8) ◽  
pp. 2760
Author(s):  
Seungmin Oh ◽  
Akm Ashiquzzaman ◽  
Dongsu Lee ◽  
Yeonggwang Kim ◽  
Jinsul Kim

In recent years, various studies have begun to use deep learning models to conduct research in the field of human activity recognition (HAR). However, the development of such models has lagged, since training deep learning models requires a lot of labeled data. In fields such as HAR, it is difficult to collect data, and manual labeling involves high costs and effort. The existing methods rely heavily on manual data collection and proper labeling of the data by human administrators, which often makes the data-gathering process slow and prone to human-biased labeling. To address these problems, we proposed a new solution that improves on existing data-gathering methods by reducing the labeling tasks conducted on new data, reusing what the model has already learned through a semi-supervised active transfer learning method. This method achieved 95.9% performance while also reducing labeling effort compared with the random sampling or active transfer learning methods.
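The abstract does not spell out the selection criterion, but a common building block of active transfer learning is an uncertainty-based query step; the sketch below uses least-confidence sampling purely as an illustration, not as the paper's method.

```python
# Sketch of an uncertainty-based query step, one common building block of
# active transfer learning; least-confidence sampling is an illustrative
# choice, not necessarily the criterion used in the paper.
import numpy as np

def least_confident(probs: np.ndarray, budget: int) -> np.ndarray:
    """Return indices of the `budget` unlabeled samples the model is
    least confident about; probs has shape (n_samples, n_classes)."""
    confidence = probs.max(axis=1)
    return np.argsort(confidence)[:budget]

# Selected samples go to human annotators; the remainder keep
# model-assigned pseudo-labels, reducing total labeling effort.
```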


2021 ◽  
Vol 39 (15_suppl) ◽  
pp. 8536-8536
Author(s):  
Gouji Toyokawa ◽  
Fahdi Kanavati ◽  
Seiya Momosaki ◽  
Kengo Tateishi ◽  
Hiroaki Takeoka ◽  
...  

8536 Background: Lung cancer is the leading cause of cancer-related death in many countries, and its prognosis remains unsatisfactory. Since treatment approaches differ substantially based on the subtype, such as adenocarcinoma (ADC), squamous cell carcinoma (SCC) and small cell lung cancer (SCLC), an accurate histopathological diagnosis is of great importance. However, if the specimen is solely composed of poorly differentiated cancer cells, distinguishing between histological subtypes can be difficult. The present study developed a deep learning model to classify lung cancer subtypes from whole slide images (WSIs) of transbronchial lung biopsy (TBLB) specimens, in particular with the aim of using this model to evaluate a challenging test set of indeterminate cases. Methods: Our deep learning model consisted of two separately trained components: a convolutional neural network tile classifier and a recurrent neural network tile aggregator for the WSI diagnosis. We used a training set consisting of 638 WSIs of TBLB specimens to train a deep learning model to classify lung cancer subtypes (ADC, SCC and SCLC) and non-neoplastic lesions. The training set consisted of 593 WSIs for which the diagnosis had been determined by pathologists based on the visual inspection of Hematoxylin-Eosin (HE) slides and of 45 WSIs of indeterminate cases (64 ADCs and 19 SCCs). We then evaluated the models using five independent test sets. For each test set, we computed the receiver operating characteristic (ROC) area under the curve (AUC). Results: We applied the model to an indeterminate test set of WSIs obtained from TBLB specimens that pathologists had not been able to conclusively diagnose by examining the HE-stained specimens alone. Overall, the model achieved ROC AUCs of 0.993 (confidence interval [CI] 0.971-1.0) and 0.996 (0.981-1.0) for ADC and SCC, respectively. We further evaluated the model using five independent test sets consisting of both TBLB and surgically resected lung specimens (combined total of 2490 WSIs) and obtained highly promising results, with ROC AUCs ranging from 0.94 to 0.99. Conclusions: In this study, we demonstrated that a deep learning model could be trained to predict lung cancer subtypes in indeterminate TBLB specimens. The extremely promising results show that, if deployed in clinical practice, a deep learning model capable of aiding pathologists in diagnosing indeterminate cases would be extremely beneficial, as it would allow a diagnosis to be obtained sooner and reduce the costs that would result from further investigations.
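The two-stage architecture described in the Methods (a CNN tile classifier feeding a recurrent aggregator for the slide-level diagnosis) can be sketched structurally as follows; the backbone, feature width, and use of a GRU are illustrative assumptions, not the authors' exact design.

```python
# Structural sketch of the two-stage design: a CNN encodes tiles, and an RNN
# aggregates the tile sequence into a slide-level prediction. All dimensions
# and the choice of GRU are illustrative assumptions.
import torch
import torch.nn as nn
from torchvision import models

class WSIAggregator(nn.Module):
    def __init__(self, n_classes: int = 4, hidden: int = 128):
        super().__init__()
        cnn = models.resnet34(weights=models.ResNet34_Weights.IMAGENET1K_V1)
        self.encoder = nn.Sequential(*list(cnn.children())[:-1])  # 512-d features
        self.rnn = nn.GRU(512, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)  # ADC / SCC / SCLC / non-neoplastic

    def forward(self, tiles):              # tiles: (batch, n_tiles, 3, 224, 224)
        b, t = tiles.shape[:2]
        feats = self.encoder(tiles.flatten(0, 1)).flatten(1)  # (b*t, 512)
        _, h = self.rnn(feats.view(b, t, -1))                 # final hidden state
        return self.head(h[-1])                               # (b, n_classes)
```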


2021 ◽  
Vol 27 ◽  
Author(s):  
Qi Zhou ◽  
Wenjie Zhu ◽  
Fuchen Li ◽  
Mingqing Yuan ◽  
Linfeng Zheng ◽  
...  

Objective: To verify the ability of deep learning models to identify five subtypes of intracranial hemorrhage and normal images on noncontrast CT. Method: A total of 351 patients (39 in the normal group, 312 in the intracranial hemorrhage group) who underwent noncontrast CT for intracranial hemorrhage were selected, with 2768 images in total (514 images for the normal group, 398 for the epidural hemorrhage group, 501 for the subdural hemorrhage group, 497 for the intraventricular hemorrhage group, 415 for the cerebral parenchymal hemorrhage group, and 443 for the subarachnoid hemorrhage group). Based on the diagnostic reports of two radiologists with more than 10 years of experience, the ResNet-18 and DenseNet-121 deep learning models were selected, and transfer learning was used. 80% of the data was used for training the models, 10% for validating model performance against overfitting, and the last 10% for the final evaluation of the models. Assessment indicators included accuracy, sensitivity, specificity, and AUC values. Results: The overall accuracies of the ResNet-18 and DenseNet-121 models were 89.64% and 82.5%, respectively. The sensitivity and specificity for identifying the five subtypes and normal images were above 0.80, except that the sensitivity of the DenseNet-121 model for intraventricular hemorrhage and cerebral parenchymal hemorrhage fell below 0.80, at 0.73 and 0.76, respectively. The AUC values of both deep learning models were above 0.9. Conclusion: Deep learning models can accurately identify the five subtypes of intracranial hemorrhage and normal images, and they could serve as a new tool for clinical diagnosis in the future.
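A minimal sketch of the 80/10/10 split and the DenseNet-121 head replacement described above; the split indices, random seed, and lack of patient-level stratification are simplifying assumptions.

```python
# Sketch of the 80/10/10 train/validation/test split and the six-way
# DenseNet-121 head (five hemorrhage subtypes plus normal); indices and
# seed are illustrative, and a real study would split by patient.
import torch.nn as nn
from torchvision import models
from sklearn.model_selection import train_test_split

indices = list(range(2768))  # one index per CT image
train_idx, rest = train_test_split(indices, test_size=0.2, random_state=0)
val_idx, test_idx = train_test_split(rest, test_size=0.5, random_state=0)

model = models.densenet121(weights=models.DenseNet121_Weights.IMAGENET1K_V1)
model.classifier = nn.Linear(model.classifier.in_features, 6)
```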


2021 ◽  
Vol 11 (23) ◽  
pp. 11423
Author(s):  
Chandrakanta Mahanty ◽  
Raghvendra Kumar ◽  
Panagiotis G. Asteris ◽  
Amir H. Gandomi

The COVID-19 pandemic has claimed the lives of millions of people and put a significant strain on healthcare facilities. To combat this disease, it is necessary to monitor affected patients in a timely and cost-effective manner. In this work, CXR images were used to identify COVID-19 patients. We compiled a CXR dataset with an equal number (2313 each) of COVID-positive, pneumonia, and normal CXR images and utilized various transfer learning models as base classifiers, including VGG16, GoogleNet, and Xception. The proposed methodology applies fuzzy ensemble techniques, namely Majority Voting, the Sugeno integral, and the Choquet fuzzy integral, to adaptively combine the decision scores of the transfer learning models and identify coronavirus infection from CXR images. The proposed fuzzy ensemble methods outperformed each individual transfer learning technique and several state-of-the-art ensemble techniques in terms of accuracy. Specifically, VGG16 + Choquet Fuzzy, GoogleNet + Choquet Fuzzy, and Xception + Choquet Fuzzy achieved accuracies of 97.04%, 98.48%, and 99.57%, respectively. The results of this work are intended to help medical practitioners detect coronavirus earlier than with other detection strategies, which can further save millions of lives and benefit society.
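The Choquet and Sugeno integrals aggregate classifier scores via a fuzzy measure defined over subsets of models, which takes more machinery than fits here; as a simpler illustration of the ensemble idea, the sketch below implements the Majority Voting combiner also evaluated in the paper.

```python
# Majority-voting combiner over the decision scores of several base models
# (e.g. VGG16, GoogleNet, Xception); a simple stand-in for the fuzzy
# ensembles, shown here only to illustrate score-level combination.
import numpy as np

def majority_vote(score_list):
    """score_list: list of (n_samples, n_classes) score arrays,
    one per base model. Returns the per-sample majority class."""
    votes = np.stack([s.argmax(axis=1) for s in score_list])  # (n_models, n)
    # Per-sample mode across models; ties fall to the lowest class index.
    return np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, votes)
```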

