Design of Optimal Deep Learning-Based Pancreatic Tumor and Nontumor Classification Model Using Computed Tomography Scans

2022 ◽  
Vol 2022 ◽  
pp. 1-15
Author(s):  
Maha M. Althobaiti ◽  
Ahmed Almulihi ◽  
Amal Adnan Ashour ◽  
Romany F. Mansour ◽  
Deepak Gupta

Pancreatic tumor is a lethal kind of tumor, and its prognosis remains poor in the current scenario. Automated pancreatic tumor classification using a computer-aided diagnosis (CAD) model is necessary to track, predict, and classify the existence of pancreatic tumors. Artificial intelligence (AI) can offer extensive diagnostic expertise and accurate interventional image interpretation. With this motivation, this study designs an optimal deep learning-based pancreatic tumor and nontumor classification (ODL-PTNTC) model using CT images. The goal of the ODL-PTNTC technique is to detect and classify the presence or absence of pancreatic tumors. The proposed ODL-PTNTC technique includes an adaptive window filtering (AWF) technique to remove the noise present in the CT images. In addition, a sailfish optimizer-based Kapur's thresholding (SFO-KT) technique is employed for the image segmentation process. Moreover, feature extraction using a Capsule Network (CapsNet) is applied to generate a set of feature vectors. Furthermore, a Political Optimizer (PO)-tuned Cascade Forward Neural Network (CFNN) is employed for classification. To validate the enhanced performance of the ODL-PTNTC technique, a series of simulations was carried out and the results were investigated from several aspects. A comprehensive comparative analysis demonstrated the promising performance of the ODL-PTNTC technique over recent approaches.
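Below is a minimal sketch of the Kapur's entropy criterion that the SFO-KT segmentation step optimizes. For illustration, an exhaustive search over grey levels stands in for the sailfish optimizer, and all function and variable names are assumptions rather than taken from the original work.

```python
import numpy as np

def kapur_entropy(hist, t):
    """Sum of background and foreground entropies for candidate threshold t."""
    p = hist / hist.sum()
    w0, w1 = p[:t].sum(), p[t:].sum()
    if w0 == 0 or w1 == 0:
        return -np.inf
    p0, p1 = p[:t] / w0, p[t:] / w1
    h0 = -np.sum(p0[p0 > 0] * np.log(p0[p0 > 0]))
    h1 = -np.sum(p1[p1 > 0] * np.log(p1[p1 > 0]))
    return h0 + h1

def kapur_threshold(image):
    """Grey level maximizing Kapur's criterion (exhaustive search stands in
    for the sailfish optimizer used in the paper)."""
    hist, _ = np.histogram(image, bins=256, range=(0, 256))
    scores = [kapur_entropy(hist, t) for t in range(1, 256)]
    return 1 + int(np.argmax(scores))

# Usage on an 8-bit CT slice: mask = ct_slice >= kapur_threshold(ct_slice)
```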

2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Ajanthaa Lakkshmanan ◽  
C. Anbu Ananth ◽  
S. Tiroumalmouroughane

Purpose: The advancements of deep learning (DL) models demonstrate significant performance on accurate pancreatic tumor segmentation and classification. Design/methodology/approach: The presented model involves different stages of operation, namely preprocessing, image segmentation, feature extraction and image classification. Primarily, the bilateral filtering (BF) technique is applied for image preprocessing to eradicate the noise present in the CT pancreatic image. Besides, the noninteractive GrabCut (NIGC) algorithm is applied for the image segmentation process. Subsequently, the residual network 152 (ResNet152) model is utilized as a feature extractor to produce a valuable set of feature vectors. At last, the red deer optimization algorithm (RDA)-tuned backpropagation neural network (BPNN), called the RDA-BPNN model, is employed as the classifier to determine the existence of pancreatic tumor. Findings: The experimental results are validated in terms of different performance measures, and a detailed comparative analysis confirmed the superior performance of the RDA-BPNN model, with a sensitivity of 98.54%, specificity of 98.46%, accuracy of 98.51% and F-score of 98.23%. Originality/value: The study assesses the performance of the RDA-BPNN model against several existing automated deep learning-based approaches on a benchmark dataset and analyzes the results in terms of several measures.
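As a rough illustration of the ResNet152 feature-extraction stage, the sketch below uses torchvision's ImageNet-pretrained weights; the classifier head is stripped so that each preprocessed CT image yields a 2048-dimensional feature vector, which would then feed the RDA-tuned BPNN. The preprocessing parameters are assumptions, not values reported in the paper.

```python
import torch
from torchvision import models, transforms
from PIL import Image

# ImageNet-pretrained ResNet152 with the classification head removed.
resnet = models.resnet152(weights=models.ResNet152_Weights.IMAGENET1K_V1)
resnet.fc = torch.nn.Identity()
resnet.eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def extract_features(path):
    """Return a 2048-dimensional feature vector for one segmented CT image."""
    img = Image.open(path).convert("RGB")
    with torch.no_grad():
        return resnet(preprocess(img).unsqueeze(0)).squeeze(0)
```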


2021 ◽  
Vol 11 (2) ◽  
pp. 760
Author(s):  
Yun-ji Kim ◽  
Hyun Chin Cho ◽  
Hyun-chong Cho

Gastric cancer has a high mortality rate worldwide, but it can be prevented with early detection through regular gastroscopy. Herein, we propose a deep learning-based computer-aided diagnosis (CADx) system applying data augmentation to help doctors classify gastroscopy images as normal or abnormal. To improve the performance of deep learning, a large amount of training data is required. However, the collection of medical data, owing to its nature, is highly expensive and time consuming. Therefore, data were generated through deep convolutional generative adversarial networks (DCGAN), and 25 augmentation policies optimized for the CIFAR-10 dataset were implemented through AutoAugment to augment the data. Accordingly, the gastroscopy images were augmented, only high-quality images were selected through an image quality-measurement method, and the gastroscopy images were classified as normal or abnormal through the Xception network. We compared the performance of the original (unaugmented) training dataset, the dataset generated through the DCGAN, the dataset augmented through the CIFAR-10 augmentation policies, and the dataset combining the two methods. The dataset combining the two methods delivered the best performance in terms of accuracy (0.851), an improvement of 0.06 over the original training dataset. We confirmed that augmenting data through the DCGAN and the CIFAR-10 augmentation policies is most suitable for the classification model for normal and abnormal gastric endoscopy images. The proposed method not only alleviates the medical-data scarcity problem but also improves the accuracy of gastric disease diagnosis.
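A hedged sketch of the augmentation step follows: torchvision ships the 25 CIFAR-10 AutoAugment sub-policies as a built-in transform, which can be applied to gastroscopy frames before training. The DCGAN generator, the image-quality filter, and the Xception classifier from the study are not reproduced here; the 299x299 resize merely reflects Xception's usual input size and is an assumption.

```python
import torchvision
from torchvision import transforms

# The CIFAR-10 AutoAugment policy (25 sub-policies) applied to gastroscopy frames.
cifar10_augment = transforms.Compose([
    transforms.Resize((299, 299)),  # Xception's usual input size (assumption)
    transforms.AutoAugment(transforms.AutoAugmentPolicy.CIFAR10),
    transforms.ToTensor(),
])

# Hypothetical usage with a folder of normal/abnormal gastroscopy images:
# dataset = torchvision.datasets.ImageFolder("gastroscopy/train", transform=cifar10_augment)
```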


Author(s):  
Amel Imene Hadj Bouzid ◽  
Said Yahiaoui ◽  
Anis Lounis ◽  
Sid-Ahmed Berrani ◽  
Hacène Belbachir ◽  
...  

Coronavirus disease is a pandemic that has infected millions of people around the world. Lung CT scans are effective diagnostic tools, but radiologists can quickly become overwhelmed by the flow of infected patients. Therefore, automated image interpretation needs to be achieved. Deep learning (DL) can support critical medical tasks including diagnostics, and DL algorithms have successfully been applied to the classification and detection of many diseases. This work aims to use deep learning methods to classify patients as Covid-19 positive or healthy. We collected 4 available datasets and tested our convolutional neural networks (CNNs) on different distributions to investigate the generalizability of our models. In order to clearly explain the predictions, the Grad-CAM and Fast-CAM visualization methods were used. Our approach reaches more than 92% accuracy on 2 different distributions. In addition, we propose a computer-aided diagnosis web application for Covid-19 diagnosis. The results suggest that our proposed deep learning tool can be integrated into the Covid-19 detection process and be useful for rapid patient management.
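The following is a minimal Grad-CAM sketch of the kind of explanation method mentioned above. The ResNet-18 backbone and the choice of target layer are illustrative assumptions; the paper's exact architectures and its Fast-CAM variant are not reproduced.

```python
import torch
from torchvision import models

# Illustrative 2-class backbone; the paper's trained CNN would replace this.
model = models.resnet18(weights=None, num_classes=2).eval()
target_layer = model.layer4  # last convolutional block

activations, gradients = {}, {}
target_layer.register_forward_hook(lambda m, i, o: activations.update(a=o))
target_layer.register_full_backward_hook(lambda m, gi, go: gradients.update(g=go[0]))

def grad_cam(x, class_idx):
    """Heatmap of the regions driving the score of class_idx for input batch x."""
    score = model(x)[0, class_idx]
    model.zero_grad()
    score.backward()
    weights = gradients["g"].mean(dim=(2, 3), keepdim=True)    # pooled gradients
    cam = torch.relu((weights * activations["a"]).sum(dim=1))  # weighted activations
    return cam / (cam.max() + 1e-8)                            # normalize to [0, 1]

# heatmap = grad_cam(ct_tensor.unsqueeze(0), class_idx=1)  # e.g. 1 = "Covid-19 positive"
```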


2021 ◽  
Author(s):  
Thanakorn Poomkur

The coronavirus disease of 2019 (COVID-19) has been declared a pandemic and has raised worldwide concern. Lung inflammation and respiratory failure are commonly observed in moderate-to-severe cases. Radiography or chest X-ray imaging is compulsory for diagnosis, and interpretation is commonly performed by skilled medical specialists. In this study, we propose a new computer-aided diagnosis (CADx) tool for identifying chest X-ray images of COVID-19 infection using a multi-layer hybrid classification model (MLHC). The MLHC-COVID-19 consists of two layers, Layer I: Healthy and non-Healthy; Layer II: COVID-19 and non-COVID-19. The MLHC-COVID-19 was evaluated on real COVID-19 cases. The classification results showed promising performance comparable with other existing techniques, with an accuracy, sensitivity, and specificity of 96.20%, 96.20%, and 97.10%, respectively. This demonstrates the effectiveness of the MLHC-COVID-19 in classifying chest X-ray images, improving the accuracy of chest X-ray interpretation while reducing interpretation time. Furthermore, a detailed comparison of the MLHC-COVID-19 with other techniques is presented.
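The two-layer decision flow of the MLHC can be summarized by the sketch below. The two binary classifiers are placeholders exposing a hypothetical predict() interface; the study's actual feature extraction and model choices are not reproduced.

```python
def mlhc_predict(image, layer1_model, layer2_model):
    """Layer I screens Healthy vs. non-Healthy; only non-Healthy images reach
    Layer II, which separates COVID-19 from other abnormalities."""
    if layer1_model.predict(image) == "healthy":
        return "healthy"
    if layer2_model.predict(image) == "covid-19":
        return "covid-19"
    return "non-covid-19 abnormality"
```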


2021 ◽  
Vol 21 (1) ◽  
Author(s):  
Fan Yang ◽  
Zhi-Ri Tang ◽  
Jing Chen ◽  
Min Tang ◽  
Shengchun Wang ◽  
...  

Purpose: The objective of this study is to construct a computer-aided diagnosis system for normal people and pneumoconiosis patients using X-rays and deep learning algorithms. Materials and methods: 1760 anonymous digital X-ray images of real patients, collected between January 2017 and June 2020, were used for this experiment. In order to concentrate the feature extraction ability of the model on the lung region and restrain the influence of external background factors, a two-stage coarse-to-fine pipeline was established. First, the U-Net model was used to extract the lung regions on each side of the collected images. Second, the ResNet-34 model with a transfer learning strategy was implemented to learn the image features of the extracted lung regions to achieve accurate classification of pneumoconiosis patients and normal people. Results: Among the 1760 cases collected, the accuracy and the area under the curve of the classification model were 92.46% and 89%, respectively. Conclusion: The successful application of deep learning in the diagnosis of pneumoconiosis further demonstrates the potential of medical artificial intelligence and proves the effectiveness of the proposed algorithm. However, when we further classified pneumoconiosis patients and normal subjects into four categories, the overall accuracy decreased to 70.1%. We will use the CT modality in future studies to provide more details of the lung regions.
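A hedged sketch of the second (fine) stage is given below: ResNet-34 initialized with ImageNet weights and fine-tuned on the U-Net-cropped lung regions for the two-class task. The optimizer, learning rate, and data handling are illustrative assumptions, not values reported in the paper.

```python
import torch
from torchvision import models

# ResNet-34 initialized from ImageNet, head replaced for the 2-class task.
model = models.resnet34(weights=models.ResNet34_Weights.IMAGENET1K_V1)
model.fc = torch.nn.Linear(model.fc.in_features, 2)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # assumed hyperparameters
criterion = torch.nn.CrossEntropyLoss()

def train_step(cropped_lungs, labels):
    """One update on a batch of U-Net-cropped lung regions."""
    optimizer.zero_grad()
    loss = criterion(model(cropped_lungs), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```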


IEEE Access ◽  
2021 ◽  
pp. 1-1
Author(s):  
Samira Masoudi ◽  
Sherif Mehralivand ◽  
Stephanie A. Harmon ◽  
Nathan Lay ◽  
Liza Lindenberg ◽  
...  

2020 ◽  
Vol 10 (1) ◽  
Author(s):  
Young-Gon Kim ◽  
Sungchul Kim ◽  
Cristina Eunbee Cho ◽  
In Hye Song ◽  
Hee Jin Lee ◽  
...  

Fast and accurate confirmation of metastasis on the frozen tissue section of an intraoperative sentinel lymph node biopsy is an essential tool for critical surgical decisions. However, accurate diagnosis by pathologists is difficult within the time limitations. Training a robust and accurate deep learning model is also difficult owing to the limited number of frozen datasets with high-quality labels. To overcome these issues, we validated the effectiveness of transfer learning from CAMELYON16 to improve the performance of a convolutional neural network (CNN)-based classification model on our frozen dataset (N = 297) from Asan Medical Center (AMC). Among the 297 whole slide images (WSIs), 157 and 40 WSIs were used to train deep learning models with different dataset ratios of 2, 4, 8, 20, 40, and 100%. The remaining 100 WSIs were used to validate model performance in terms of patch- and slide-level classification. An additional 228 WSIs from Seoul National University Bundang Hospital (SNUBH) were used as an external validation. Three initial weights, i.e., scratch-based (random initialization), ImageNet-based, and CAMELYON16-based models, were used to validate their effectiveness in external validation. In the patch-level classification results on the AMC dataset, CAMELYON16-based models trained with a small dataset (up to 40%, i.e., 62 WSIs) showed a significantly higher area under the curve (AUC) of 0.929 than the scratch- and ImageNet-based models at 0.897 and 0.919, respectively, while CAMELYON16-based and ImageNet-based models trained with 100% of the training dataset showed comparable AUCs of 0.944 and 0.943, respectively. For the external validation, CAMELYON16-based models showed higher AUCs than the scratch- and ImageNet-based models. The feasibility of transfer learning to enhance model performance was thus validated for frozen section datasets with limited numbers of samples.
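The comparison of the three weight initializations can be sketched as follows. The ResNet-50 backbone and the CAMELYON16 checkpoint path are assumptions made for illustration, since the abstract does not name the exact CNN; the checkpoint is taken to be a two-class (tumor vs. normal) patch classifier pretrained on CAMELYON16.

```python
import torch
from torchvision import models

def build_model(init="camelyon16"):
    """Patch-level tumor/normal classifier with one of the three initializations."""
    if init == "imagenet":
        net = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
    else:
        net = models.resnet50(weights=None)          # random ("scratch") initialization
    net.fc = torch.nn.Linear(net.fc.in_features, 2)  # tumor vs. normal patch
    if init == "camelyon16":
        # Hypothetical checkpoint: a 2-class patch classifier pretrained on CAMELYON16.
        net.load_state_dict(torch.load("camelyon16_pretrained.pth"), strict=False)
    return net
```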


2019 ◽  
Vol 5 (1) ◽  
pp. 223-226
Author(s):  
Max-Heinrich Laves ◽  
Sontje Ihler ◽  
Tobias Ortmaier ◽  
Lüder A. Kahrs

In this work, we discuss epistemic uncertainty estimation obtained by Bayesian inference in diagnostic classifiers and show that the prediction uncertainty correlates highly with goodness of prediction. We train the ResNet-18 image classifier on a dataset of 84,484 optical coherence tomography scans showing four different retinal conditions. Dropout is added before every building block of ResNet, creating an approximation to a Bayesian classifier. Monte Carlo sampling with dropout is applied at test time for uncertainty estimation. In the Monte Carlo experiments, multiple forward passes are performed to obtain a distribution of the class labels. The variance and the entropy of the distribution are used as metrics for uncertainty. Our results show a strong correlation (ρ = 0.99) between prediction uncertainty and prediction error. The mean uncertainty of incorrectly diagnosed cases was significantly higher than the mean uncertainty of correctly diagnosed cases. Modeling the prediction uncertainty in computer-aided diagnosis with deep learning yields more reliable results and is therefore expected to increase patient safety. This will help to transfer such systems into clinical routine and to increase the acceptance of machine learning in diagnosis from the standpoint of physicians and patients.
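A minimal sketch of Monte Carlo dropout at test time, as described above: keep only the dropout layers active during inference, collect several stochastic forward passes, and use the predictive entropy (or variance) as the uncertainty score. The sample count is an illustrative assumption; the paper inserts dropout before every ResNet-18 building block.

```python
import torch

def mc_dropout_predict(model, x, n_samples=50):
    """Mean class probabilities and predictive entropy from stochastic forward passes."""
    model.eval()
    for m in model.modules():          # keep only the dropout layers stochastic
        if isinstance(m, torch.nn.Dropout):
            m.train()
    with torch.no_grad():
        probs = torch.stack([torch.softmax(model(x), dim=1) for _ in range(n_samples)])
    mean = probs.mean(dim=0)           # shape: (batch, classes)
    entropy = -(mean * mean.clamp_min(1e-12).log()).sum(dim=1)
    return mean, entropy
```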

