Vulnerability in Deep Transfer Learning Models to Adversarial Fast Gradient Sign Attack for COVID-19 Prediction from Chest Radiography Images

2021 ◽  
Vol 11 (9) ◽  
pp. 4233
Author(s):  
Biprodip Pal ◽  
Debashis Gupta ◽  
Md. Rashed-Al-Mahfuz ◽  
Salem A. Alyami ◽  
Mohammad Ali Moni

The COVID-19 pandemic requires the rapid isolation of infected patients. High-sensitivity radiology imaging could therefore be a key technique for diagnosing patients alongside the polymerase chain reaction (PCR) approach. Several studies have proposed deep learning algorithms to detect COVID-19 symptoms, motivated by their success in chest radiography image classification, their cost efficiency, the shortage of expert radiologists, and the need for faster processing during the pandemic. Most of the promising algorithms proposed in these studies are based on pre-trained deep learning models. The open-source nature of such models, together with the lack of variation in the radiology image-capturing environment, makes the diagnosis system vulnerable to adversarial attacks such as the fast gradient sign method (FGSM) attack. This study therefore explored the potential vulnerability of pre-trained convolutional neural network algorithms to the FGSM attack in terms of two frequently used models, VGG16 and Inception-v3. Firstly, we developed two transfer learning models for X-ray and CT image-based COVID-19 classification and analyzed their performance extensively in terms of accuracy, precision, recall, and AUC. Secondly, our study illustrates that misclassification can occur with very minor perturbation magnitudes, such as 0.009 and 0.003 for the FGSM attack in these models for X-ray and CT images, respectively, without any effect on the visual perceptibility of the perturbation. In addition, we demonstrated that a successful FGSM attack can decrease the classification performance to 16.67% and 55.56% for X-ray images, and to 36% and 40% in the case of CT images, for VGG16 and Inception-v3, respectively, without any human-recognizable perturbation effects in the adversarial images.
Finally, we showed that the correct-class probability of a test image, which should ideally be 1, can drop for both models as the perturbation increases; it can fall to 0.24 and 0.17 for the VGG16 model in the cases of X-ray and CT images, respectively. Thus, despite the need for data sharing and automated diagnosis, the practical deployment of such programs requires greater robustness.
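The single-step FGSM perturbation discussed in this abstract, x' = x + ε·sign(∇ₓL), can be sketched without any deep learning framework by using a model whose input gradient has a closed form. Below is a minimal NumPy illustration on a logistic-regression stand-in; the toy "image", weights, and ε = 0.009 are illustrative assumptions, not the paper's actual models or data:

```python
import numpy as np

def fgsm_perturb(x, w, b, y, eps):
    """One-step FGSM: move x in the sign of the input gradient of the loss.

    For a logistic-regression "model" the gradient of the cross-entropy
    loss with respect to x has the closed form (sigmoid(w.x + b) - y) * w,
    so no automatic differentiation is needed.
    """
    p = 1.0 / (1.0 + np.exp(-(x @ w + b)))   # predicted probability
    grad_x = (p - y) * w                     # dL/dx for cross-entropy loss
    return x + eps * np.sign(grad_x)         # epsilon-bounded perturbation

def xent_loss(x, w, b):
    """Cross-entropy loss for the true class y = 1."""
    p = 1.0 / (1.0 + np.exp(-(x @ w + b)))
    return -np.log(p)

rng = np.random.default_rng(0)
x = rng.random(64)             # a flattened toy 8x8 "image" in [0, 1]
w = rng.standard_normal(64)    # fixed model weights
b = 0.0
x_adv = fgsm_perturb(x, w, b, y=1.0, eps=0.009)

# each pixel moves by exactly eps, yet the true-class loss increases
print(xent_loss(x, w, b), xent_loss(x_adv, w, b))
```

The same sign-of-gradient step applied through a CNN's backpropagated input gradient is what degrades VGG16 and Inception-v3 in the study.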

2020 ◽  
Author(s):  
Biprodip Pal ◽  
Debashis Gupta ◽  
Md Rashed-Al Mahfuz ◽  
Mohammad Ali Moni ◽  
Salem A. Alyami

BACKGROUND The COVID-19 pandemic requires the quick isolation of infected patients. High-sensitivity radiology imaging could therefore be a key diagnostic technique alongside the PCR approach. Several studies have proposed pre-trained deep learning algorithms to detect COVID-19 symptoms, motivated by their success in radiology image classification, their cost efficiency, the shortage of expert radiologists, and the need for faster processing during the pandemic. Such open-source models and parameters, data sharing to build large repositories for rare diseases, and the lack of variation in the radiology image-capturing environment make the diagnosis system vulnerable to adversarial attacks such as the Fast Gradient Sign Method (FGSM) attack. OBJECTIVE This study aims to explore the potential vulnerability of state-of-the-art deep transfer learning models for COVID-19 classification from chest radiography images to the FGSM adversarial attack. METHODS Firstly, we developed two transfer learning models for X-ray and CT image-based COVID-19 classification from the frequently used VGG16 and InceptionV3 convolutional neural network architectures. We analyzed their performance extensively in terms of accuracy, precision, recall, and AUC. Secondly, we crafted the FGSM attack for these prediction models and visualized how varying the adversarial perturbation affects the visual perceptibility of the radiography images. Thirdly, we computed the decrease in overall accuracy, the correct-classification probability score, and the total number of misclassified samples to quantify the performance drop of these models. The experiments were validated using publicly available COVID-19 patient data. RESULTS We collected 268 publicly available, labeled X-ray images and 746 CT images. Before the attack, the developed transfer learning models reached above 95% accuracy, with F1 and AUC scores close to 1, for both X-ray and CT image-based COVID-19 classification.
Our study then illustrates that misclassification can occur with very minor perturbations of 0.009 and 0.003 for the FGSM attack in these models for X-ray and CT images, respectively, without any effect on the visual perceptibility of the images. In addition, we demonstrated that a successful FGSM attack can decrease the accuracy by 16.67% and 55% for X-ray images, and by 70% and 40% for CT images, when classifying with VGG16 and InceptionV3, respectively. Finally, the correct-class probability of a test image was found to drop from 1 to 0.24 and 0.17 for the VGG16 model on X-ray and CT images, respectively. CONCLUSIONS Frequently used chest-radiology-based COVID-19 detection models such as VGG16 and InceptionV3 can suffer significantly from the FGSM attack. Extensive analysis of probability scores, misclassifications, and the perturbation's effect on visual perception clearly illustrates this vulnerability. The InceptionV3 model was found to be more robust than VGG16, although FGSM can compromise both. Thus, despite the need for data sharing and automated diagnosis, the practical deployment of such programs demands greater robustness.
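The trade-off between perturbation magnitude and visual perceptibility examined in this abstract is commonly quantified with peak signal-to-noise ratio (PSNR): the smaller ε is, the higher the PSNR and the less visible the noise. A minimal NumPy sketch follows; the random sign pattern merely stands in for actual FGSM gradient signs, and the image is synthetic:

```python
import numpy as np

def psnr(clean, perturbed, data_range=1.0):
    """Peak signal-to-noise ratio (dB) between a clean and a perturbed image."""
    mse = np.mean((clean - perturbed) ** 2)
    if mse == 0.0:
        return float("inf")
    return 10.0 * np.log10(data_range ** 2 / mse)

rng = np.random.default_rng(0)
img = rng.random((224, 224))                     # toy grayscale radiograph
signs = np.sign(rng.standard_normal(img.shape))  # stand-in for FGSM signs

# PSNR for increasing perturbation budgets (values clipped back to [0, 1])
scores = {eps: psnr(img, np.clip(img + eps * signs, 0.0, 1.0))
          for eps in (0.003, 0.009, 0.05)}
for eps, score in scores.items():
    print(f"eps={eps}: PSNR={score:.1f} dB")
```

At the paper's ε values the PSNR stays above roughly 40 dB, which is consistent with the claim that the perturbation is not visually perceptible.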


2020 ◽  
Vol 28 (5) ◽  
pp. 841-850
Author(s):  
Saleh Albahli ◽  
Waleed Albattah

OBJECTIVE: This study aims to employ the advantages of computer vision and medical image analysis to develop an automated model with the clinical potential for early detection of novel coronavirus (COVID-19) disease. METHOD: This study applied the transfer learning method to develop deep learning models for detecting COVID-19. Three existing state-of-the-art deep learning models, namely Inception ResNetV2, InceptionNetV3, and NASNetLarge, were selected and fine-tuned to automatically detect and diagnose COVID-19 from chest X-ray images. A dataset of 850 images with confirmed COVID-19, 500 images of community-acquired (non-COVID-19) pneumonia cases, and 915 normal chest X-ray images was used in this study. RESULTS: Among the three models, InceptionNetV3 yielded the best performance, with accuracy levels of 98.63% and 99.02% with and without data augmentation in model training, respectively. All of the networks tend to overfit (with high training accuracy) when data augmentation is not used; this is due to the limited amount of image data available for training and validation. CONCLUSION: This study demonstrated that deep transfer learning is feasible for detecting COVID-19 automatically from chest X-rays by training the model on images from COVID-19 patients, patients with other pneumonias, and people with healthy lungs, which may help doctors make their clinical decisions more effectively. The study also gives insight into how transfer learning can be used to automatically detect COVID-19. In future studies, as the amount of available data increases, different convolutional neural network models could be designed to achieve the goal more efficiently.
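The data augmentation credited above with curbing overfitting can be as simple as random flips, shifts, and brightness jitter applied on the fly during training. A minimal NumPy sketch; the specific transforms and ranges here are illustrative assumptions, not the study's exact pipeline:

```python
import numpy as np

def augment(image, rng):
    """Return a randomly augmented copy of a 2-D grayscale image in [0, 1]."""
    out = image
    if rng.random() < 0.5:                 # random horizontal flip
        out = np.fliplr(out)
    shift = int(rng.integers(-10, 11))     # small horizontal translation
    out = np.roll(out, shift, axis=1)
    out = out * rng.uniform(0.9, 1.1)      # mild brightness jitter
    return np.clip(out, 0.0, 1.0)

rng = np.random.default_rng(0)
x = rng.random((224, 224))                 # toy chest X-ray stand-in
batch = np.stack([augment(x, rng) for _ in range(8)])
print(batch.shape)
```

Each pass over a small dataset then presents the network with slightly different views of the same images, which is what raises effective sample diversity.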


2022 ◽  
Vol 2161 (1) ◽  
pp. 012078
Author(s):  
Pallavi R Mane ◽  
Rajat Shenoy ◽  
Ghanashyama Prabhu

Abstract COVID-19 is a deadly and highly contagious disease caused by the novel coronavirus. It is very important to detect COVID-19 infection accurately and as quickly as possible to limit its spread. Deep learning methods can significantly improve the efficiency and accuracy of reading chest X-rays (CXRs), and existing deep learning models, with further fine-tuning, provide cost-effective, rapid, and better classification results. This paper deploys well-studied AI tools, with modifications, on X-ray images to classify COVID-19. This research performs five experiments to classify COVID-19 CXRs against normal and viral pneumonia CXRs using convolutional neural networks (CNNs). Four experiments were performed on state-of-the-art pre-trained models using transfer learning, and one experiment used a CNN designed from scratch. The dataset consists of chest X-ray images from the Kaggle dataset and other publicly accessible sources. The data was split into three parts: 90% was retained for training the models, and 5% each was used for validating and testing the constructed models. The four transfer learning models were Inception, Xception, ResNet, and VGG19, which achieved test accuracies of 93.07%, 94.8%, 67.5%, and 91.1%, respectively, while our CNN model achieved 94.6%.
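The 90/5/5 train/validation/test split described above amounts to a shuffled index partition. A generic sketch, not the authors' exact code:

```python
import numpy as np

def split_indices(n, rng, train_frac=0.90, val_frac=0.05):
    """Shuffle n sample indices and cut them into train/val/test partitions."""
    idx = rng.permutation(n)
    n_train = int(n * train_frac)
    n_val = int(n * val_frac)
    return (idx[:n_train],
            idx[n_train:n_train + n_val],
            idx[n_train + n_val:])

rng = np.random.default_rng(0)
train_idx, val_idx, test_idx = split_indices(1000, rng)
print(len(train_idx), len(val_idx), len(test_idx))
```

Shuffling before cutting keeps class composition roughly uniform across the three partitions; a stratified split would enforce it exactly.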


2021 ◽  
Author(s):  
Kaiwen Wu ◽  
Bo Xu ◽  
Ying Wu

Abstract Manual interpretation of breast ultrasound images is a heavy workload for radiologists and is prone to misdiagnosis, while traditional machine learning and deep learning methods require huge datasets and long training times. To address these problems, this paper proposes a deep transfer learning method. It compares the ResNet18 and ResNet50 models pre-trained on the ImageNet dataset with the same ResNet18 and ResNet50 models trained without pre-training. The dataset consists of 131 breast ultrasound images (109 benign and 22 malignant), all collected, labeled, and provided by the UDIAT Diagnostic Centre. The experimental results show that the pre-trained ResNet18 model has the best classification performance on breast ultrasound images, achieving an accuracy of 93.9%, an F1 score of 0.94, and an area under the receiver operating characteristic curve (AUC) of 0.944. Compared with ordinary deep learning models, its classification performance is greatly improved, demonstrating the significant advantage of deep transfer learning in classifying small samples of medical images.
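On an imbalanced dataset like the one above (109 benign vs. 22 malignant), the F1 score and AUC are more informative than raw accuracy. Both can be computed from first principles; a minimal NumPy sketch with toy labels and scores, not the study's data:

```python
import numpy as np

def f1_score(y_true, y_pred):
    """F1 = harmonic mean of precision and recall for the positive class."""
    tp = np.sum((y_pred == 1) & (y_true == 1))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

def auc_score(y_true, scores):
    """AUC via the Mann-Whitney statistic: P(positive outranks negative)."""
    pos, neg = scores[y_true == 1], scores[y_true == 0]
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (len(pos) * len(neg))

y_true = np.array([1, 1, 1, 0, 0])
scores = np.array([0.9, 0.8, 0.3, 0.4, 0.2])
y_pred = (scores > 0.5).astype(int)
print(f1_score(y_true, y_pred), auc_score(y_true, scores))  # 0.8, 5/6
```

Unlike accuracy, both metrics penalize a model that simply predicts the majority (benign) class for everything.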


2021 ◽  
Vol 2071 (1) ◽  
pp. 012003
Author(s):  
M A Markom ◽  
S Mohd Taha ◽  
A H Adom ◽  
A S Abdull Sukor ◽  
A S Abdul Nasir ◽  
...  

Abstract COVID-19 chest X-rays have been used as a supplementary tool to support diagnosis of COVID-19 severity levels. However, researchers around the world face challenges in making these chest X-ray samples genuinely helpful for detecting the disease. This paper presents a review of COVID-19 chest X-ray classification using deep learning approaches. The study discusses the sources of images and the deep learning models used, as well as their performance. Finally, the challenges and future work on COVID-19 chest X-ray classification are discussed.


Author(s):  
Halgurd Maghdid ◽  
Aras T. Asaad ◽  
Kayhan Zrar Ghafoor ◽  
Ali S. Sadiq ◽  
Seyedali Mirjalili ◽  
...  

Author(s):  
Yuejun Liu ◽  
Yifei Xu ◽  
Xiangzheng Meng ◽  
Xuguang Wang ◽  
Tianxu Bai

Background: Medical imaging plays an important role in the diagnosis of thyroid diseases. In machine learning, multiple-dimensional deep learning algorithms are widely used in image classification and recognition and have achieved great success. Objective: A method based on multiple-dimensional deep learning is employed for the auxiliary diagnosis of thyroid diseases from SPECT images, and the performances of different deep learning models are evaluated and compared. Methods: Thyroid SPECT images of three types were collected: hyperthyroidism, normal, and hypothyroidism. In pre-processing, the thyroid region of interest was segmented and the number of data samples was expanded. Four models, namely a standard CNN, Inception, VGG16, and an RNN, were used to evaluate deep learning methods. Results: The deep learning-based methods show good classification performance, with accuracy of 92.9%-96.2% and AUC of 97.8%-99.6%. The VGG16 model performs best, with an accuracy of 96.2% and an AUC of 99.6%; in particular, the VGG16 model with a changing learning rate works best. Conclusion: The standard CNN, Inception, VGG16, and RNN models are efficient for classifying thyroid diseases from SPECT images. The accuracy of this deep learning-based assisted diagnostic method is higher than that of other methods reported in the literature.
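The "changing learning rate" credited above with the best VGG16 results is not specified in the abstract; one common choice is a step-decay schedule. A sketch with purely illustrative hyperparameters (initial rate, drop factor, and drop interval are assumptions):

```python
def step_decay_lr(initial_lr, epoch, drop=0.5, epochs_per_drop=10):
    """Step-decay schedule: multiply the rate by `drop` every 10 epochs."""
    return initial_lr * drop ** (epoch // epochs_per_drop)

# learning rate over a hypothetical training run
lrs = [step_decay_lr(1e-3, e) for e in (0, 9, 10, 25, 40)]
print(lrs)
```

Decaying the rate lets training take large steps early and settle into a sharper minimum later, which often improves final accuracy over a fixed rate.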


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Shan Guleria ◽  
Tilak U. Shah ◽  
J. Vincent Pulido ◽  
Matthew Fasullo ◽  
Lubaina Ehsan ◽  
...  

Abstract Probe-based confocal laser endomicroscopy (pCLE) allows for real-time diagnosis of dysplasia and cancer in Barrett’s esophagus (BE) but is limited by low sensitivity. Even the gold standard of histopathology is hindered by poor agreement between pathologists. We deployed deep-learning-based image and video analysis in order to improve the diagnostic accuracy of pCLE videos and biopsy images. Blinded experts categorized biopsies and pCLE videos as squamous, non-dysplastic BE, or dysplasia/cancer, and deep learning models were trained to classify the data into these three categories. Biopsy classification was conducted using two distinct approaches: a patch-level model and a whole-slide-image-level model. Gradient-weighted class activation maps (Grad-CAMs) were extracted from pCLE and biopsy models in order to determine tissue structures deemed relevant by the models. A total of 1970 pCLE videos, 897,931 biopsy patches, and 387 whole-slide images were used to train, test, and validate the models. In pCLE analysis, models achieved a high sensitivity for dysplasia (71%) and an overall accuracy of 90% for all classes. For biopsies at the patch level, the model achieved a sensitivity of 72% for dysplasia and an overall accuracy of 90%. The whole-slide-image-level model achieved a sensitivity of 90% for dysplasia and 94% overall accuracy. Grad-CAMs for all models showed activation in medically relevant tissue regions. Our deep learning models achieved high diagnostic accuracy for both pCLE-based and histopathologic diagnosis of esophageal dysplasia and its precursors, similar to human accuracy in prior studies. These machine learning approaches may improve the accuracy and efficiency of current screening protocols.
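The Grad-CAM maps described above weight each channel of a convolutional layer's activations by the spatial average of the class-score gradient, then keep only the positive evidence. A minimal NumPy sketch of that arithmetic; the activation and gradient arrays are random placeholders for what a real network's forward and backward passes would supply:

```python
import numpy as np

def grad_cam(activations, gradients):
    """Grad-CAM heat map from one conv layer's activations and gradients.

    activations, gradients: arrays of shape (H, W, C) for a single image.
    """
    weights = gradients.mean(axis=(0, 1))       # GAP of dScore/dA per channel
    cam = np.tensordot(activations, weights, axes=([2], [0]))
    cam = np.maximum(cam, 0.0)                  # ReLU: keep positive evidence
    if cam.max() > 0:
        cam = cam / cam.max()                   # normalize to [0, 1]
    return cam

rng = np.random.default_rng(0)
acts = rng.random((14, 14, 256))                # placeholder conv activations
grads = rng.standard_normal((14, 14, 256))      # placeholder class gradients
heat = grad_cam(acts, grads)
print(heat.shape, float(heat.min()), float(heat.max()))
```

Upsampling the resulting H×W map to the input resolution and overlaying it on the image gives the familiar heat-map visualization of relevant tissue regions.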


Energies ◽  
2021 ◽  
Vol 14 (15) ◽  
pp. 4595
Author(s):  
Parisa Asadi ◽  
Lauren E. Beckingham

X-ray CT imaging provides a 3D view of a sample and is a powerful tool for investigating the internal features of porous rock. Reliable phase segmentation in these images is highly necessary but, like any other digital rock imaging technique, is time-consuming, labor-intensive, and subjective. Combining 3D X-ray CT imaging with machine learning methods that can simultaneously consider several extracted features in addition to color attenuation is a promising and powerful approach to reliable phase segmentation. Machine learning-based phase segmentation of X-ray CT images enables faster data collection and interpretation than traditional methods. This study investigates the performance of several filtering techniques with three machine learning methods and a deep learning method to assess the potential for reliable feature extraction and pixel-level phase segmentation of X-ray CT images. Features were first extracted from images using well-known filters and from the second convolutional layer of the pre-trained VGG16 architecture. Then, K-means clustering, Random Forest, and feed-forward artificial neural network methods, as well as a modified U-Net model, were applied to the extracted input features. The models’ performances were then compared and contrasted to determine the influence of the machine learning method and input features on reliable phase segmentation. The results showed that considering more feature dimensions is promising, with all classification algorithms achieving high accuracy, ranging from 0.87 to 0.94. The feature-based Random Forest demonstrated the best performance among the machine learning models, with an accuracy of 0.88 for Mancos and 0.94 for Marcellus. The U-Net model with a linear combination of focal and dice loss also performed well, with accuracies of 0.91 and 0.93 for Mancos and Marcellus, respectively.
In general, considering more features provided promising and reliable segmentation results that are valuable for analyzing the composition of dense samples, such as shales, which are significant unconventional reservoirs in oil recovery.
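Pixel-level phase segmentation from extracted features, as evaluated above with K-means clustering among other methods, can be sketched end to end in NumPy on a synthetic two-phase image. The features here are just raw intensity plus a crude 3x3 box blur, a stand-in for the paper's filter bank and VGG16 features:

```python
import numpy as np

def kmeans_segment(features, k, iters=20):
    """Lloyd's K-means over per-pixel feature vectors -> integer phase labels."""
    # deterministic init: spread centers along the first feature axis
    order = np.argsort(features[:, 0])
    centers = features[order[np.linspace(0, len(order) - 1, k).astype(int)]].copy()
    for _ in range(iters):
        dists = np.linalg.norm(features[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)          # assign each pixel to nearest center
        for j in range(k):
            if np.any(labels == j):            # recompute each cluster mean
                centers[j] = features[labels == j].mean(axis=0)
    return labels

rng = np.random.default_rng(0)
# synthetic two-phase "CT slice": dark matrix with a bright square inclusion
img = 0.2 + 0.02 * rng.standard_normal((32, 32))
img[8:24, 8:24] = 0.8 + 0.02 * rng.standard_normal((16, 16))

# per-pixel features: raw intensity plus a crude 3x3 box-blurred intensity
pad = np.pad(img, 1, mode="edge")
blur = sum(pad[i:i + 32, j:j + 32] for i in range(3) for j in range(3)) / 9.0
features = np.stack([img.ravel(), blur.ravel()], axis=1)

labels = kmeans_segment(features, k=2).reshape(32, 32)
print(np.unique(labels), labels[0, 0], labels[16, 16])
```

The blurred channel makes each pixel's feature vector reflect its neighborhood, which is the same rationale as the study's multi-feature inputs: more dimensions give the clusterer more to separate phases on than raw attenuation alone.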

