Hybrid Machine and Deep Transfer Learning-Based Classification Models for COVID-19 and Pneumonia Diagnosis Using X-ray Images

Author(s):  
Alassane Bonkano Abdoul-Razak ◽  
Mounia Mikram ◽  
Maryem Rhanoui ◽  
Sanaa Ghouzali


2020 ◽  
Author(s):  
Iason Katsamenis ◽  
Eftychios Protopapadakis ◽  
Athanasios Voulodimos ◽  
Anastasios Doulamis ◽  
Nikolaos Doulamis

We introduce a deep learning framework that can detect COVID-19 pneumonia in thoracic radiographs and differentiate it from bacterial pneumonia. Deep classification models, such as convolutional neural networks (CNNs), require large-scale datasets to train and perform properly. Since the number of X-ray samples related to COVID-19 is limited, transfer learning (TL) is the go-to method for alleviating the demand for training data and developing accurate automated diagnosis models. In this context, networks gain knowledge from networks pretrained on large-scale image datasets or on alternative data-rich sources (e.g., bacterial and viral pneumonia radiographs). The experimental results indicate that the TL approach outperforms training without TL on the COVID-19 classification task in chest X-ray images.
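The workflow this abstract describes — reusing knowledge from a network pretrained on a data-rich source and training only a small task-specific part on scarce COVID-19 samples — can be sketched as follows. This is an illustrative sketch, not the authors' code: the frozen "backbone" is a fixed random projection standing in for a pretrained CNN, and the two-class dataset is synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen "pretrained backbone": a fixed random projection standing in
# for CNN layers pretrained on a large image corpus (illustrative only).
W_backbone = rng.standard_normal((32 * 32, 64))

def extract_features(images):
    # Backbone weights are never updated: this is the "transfer" part.
    flat = images.reshape(len(images), -1)
    return np.maximum(flat @ W_backbone, 0.0) / flat.shape[1] ** 0.5

# Tiny synthetic two-class dataset (stand-ins for e.g. COVID-19 vs.
# bacterial pneumonia radiographs); class 1 has a brightness offset.
X = rng.standard_normal((200, 32, 32))
X[100:] += 2.0
y = np.repeat([0.0, 1.0], 100)

feats = extract_features(X)

# Transfer learning here = fitting only a new logistic-regression head
# on the frozen features, so few labeled samples suffice.
w, b, lr = np.zeros(feats.shape[1]), 0.0, 0.1
for _ in range(1000):
    z = np.clip(feats @ w + b, -30.0, 30.0)
    p = 1.0 / (1.0 + np.exp(-z))          # sigmoid
    grad = p - y                          # cross-entropy gradient
    w -= lr * feats.T @ grad / len(y)
    b -= lr * grad.mean()

accuracy = ((feats @ w + b > 0) == (y == 1)).mean()
```

Only `w` and `b` are learned; everything upstream is reused, which is what lets such models cope with the limited COVID-19 sample count.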


2020 ◽  
Vol 2020 ◽  
pp. 1-10 ◽  
Author(s):  
Arun Sharma ◽  
Sheeba Rani ◽  
Dinesh Gupta

The ongoing pandemic of coronavirus disease 2019 (COVID-19) has led to a global health and healthcare crisis, apart from its tremendous socioeconomic effects. One of the significant challenges in this crisis is to identify and monitor COVID-19 patients quickly and efficiently to facilitate timely decisions about their treatment, monitoring, and management. Research efforts are underway to develop less time-consuming methods to replace or supplement RT-PCR-based methods. The present study aims to create efficient deep learning models, trained with chest X-ray images, for rapid screening of COVID-19 patients. We used publicly available PA chest X-ray images of adult COVID-19 patients to develop Artificial Intelligence (AI)-based classification models for COVID-19 and other major infectious diseases. To increase the dataset size and develop generalized models, we performed 25 different types of augmentation on the original images. Furthermore, we utilized a transfer learning approach for training and testing the classification models. The combination of the two best-performing models (each trained on 286 images rotated through a 120° or 140° angle) displayed the highest prediction accuracy for normal, COVID-19, non-COVID-19, pneumonia, and tuberculosis images. AI-based classification models trained through transfer learning can efficiently classify chest X-ray images representing the studied diseases. Our method is more efficient than previously published methods, and is one step ahead towards implementing AI-based methods for classification problems in biomedical imaging related to COVID-19.
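Rotation augmentations like the 120° and 140° angles mentioned above can be reproduced with an inverse-mapped, nearest-neighbour rotation. A minimal numpy sketch — illustrative only, since the study's actual augmentation pipeline is not specified beyond the rotation angles:

```python
import numpy as np

def rotate_nn(img, degrees):
    """Rotate a 2-D image about its centre by `degrees` using
    nearest-neighbour sampling; corners that fall outside are zero."""
    theta = np.deg2rad(degrees)
    h, w = img.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    ys, xs = np.mgrid[0:h, 0:w]
    dy, dx = ys - cy, xs - cx
    # Inverse mapping: for each output pixel, find its source pixel.
    src_y = cy + dy * np.cos(theta) - dx * np.sin(theta)
    src_x = cx + dy * np.sin(theta) + dx * np.cos(theta)
    sy = np.rint(src_y).astype(int)
    sx = np.rint(src_x).astype(int)
    valid = (sy >= 0) & (sy < h) & (sx >= 0) & (sx < w)
    out = np.zeros_like(img)
    out[valid] = img[sy[valid], sx[valid]]
    return out

# One original "radiograph" yields extra training samples per angle.
img = np.arange(64 * 64, dtype=float).reshape(64, 64)
augmented = [rotate_nn(img, angle) for angle in (120, 140)]
```

Each rotated copy counts as a new training sample, which is how 25 augmentation types multiply a small original dataset into one large enough for generalized models.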


2021 ◽  
pp. 1-14
Author(s):  
Prabira Kumar Sethy ◽  
Santi Kumari Behera ◽  
Komma Anitha ◽  
Chanki Pandey ◽  
M.R. Khan

The objective of this study is to conduct a critical analysis investigating and comparing a group of computer-aided screening methods for COVID-19 using chest X-ray images and computed tomography (CT) images. The computer-aided screening methods include deep feature extraction, transfer learning, and machine learning image classification approaches. The deep feature extraction and transfer learning methods considered 13 pre-trained CNN models. The machine learning approach includes three sets of handcrafted features and three classifiers. The pre-trained CNN models are AlexNet, GoogleNet, VGG16, VGG19, Densenet201, Resnet18, Resnet50, Resnet101, Inceptionv3, Inceptionresnetv2, Xception, MobileNetv2, and ShuffleNet. The handcrafted features are GLCM, LBP, and HOG, and the machine-learning classifiers are KNN, SVM, and Naive Bayes. In addition, the different paradigms of classifiers are also analyzed. Overall, the comparative analysis covers 65 classification models, i.e., 13 in deep feature extraction, 13 in transfer learning, and 39 in the machine learning approaches. All classification models perform better when applied to the chest X-ray image set than to the CT scan image set. Among the 65 classification models, VGG19 with SVM achieved the highest accuracy, 99.81%, when applied to the chest X-ray images. In conclusion, the findings of this analysis are beneficial for researchers working towards designing computer-aided tools for screening for COVID-19 infection.
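The handcrafted-feature pipeline the abstract lists (features such as LBP feeding a conventional classifier) can be illustrated with a minimal sketch. Synthetic textures and a nearest-centroid rule stand in here for the study's radiographs and its KNN/SVM/Naive Bayes classifiers; none of this is the authors' code.

```python
import numpy as np

def lbp_histogram(img):
    """Basic 8-neighbour local binary pattern: each interior pixel gets
    a byte encoding which neighbours are >= the centre pixel; the
    normalized 256-bin histogram is the texture descriptor."""
    c = img[1:-1, 1:-1]
    neighbours = (img[:-2, :-2], img[:-2, 1:-1], img[:-2, 2:],
                  img[1:-1, 2:], img[2:, 2:],   img[2:, 1:-1],
                  img[2:, :-2],  img[1:-1, :-2])
    codes = np.zeros(c.shape, dtype=np.uint8)
    for bit, n in enumerate(neighbours):
        codes |= (n >= c).astype(np.uint8) << bit
    hist = np.bincount(codes.ravel(), minlength=256).astype(float)
    return hist / hist.sum()

rng = np.random.default_rng(1)
ramp = np.tile(np.arange(32.0), (32, 1))             # smooth texture class
make_smooth = lambda: ramp + 0.01 * rng.standard_normal((32, 32))
make_noisy = lambda: rng.standard_normal((32, 32))   # rough texture class

train = [(make_smooth(), 0) for _ in range(5)] + [(make_noisy(), 1) for _ in range(5)]
test = [(make_smooth(), 0) for _ in range(5)] + [(make_noisy(), 1) for _ in range(5)]

# Nearest-centroid classifier on LBP histograms (a stand-in for the
# study's KNN/SVM/Naive Bayes stages).
centroids = [np.mean([lbp_histogram(x) for x, label in train if label == k], axis=0)
             for k in (0, 1)]
predict = lambda x: int(np.argmin(
    [np.abs(lbp_histogram(x) - cen).sum() for cen in centroids]))
accuracy = np.mean([predict(x) == label for x, label in test])
```

GLCM and HOG plug into the same pattern: each maps an image to a fixed-length descriptor, and the classifier only ever sees the descriptors.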


2021 ◽  
Vol 173 ◽  
pp. 114677
Author(s):  
Plácido L. Vidal ◽  
Joaquim de Moura ◽  
Jorge Novo ◽  
Marcos Ortega

2021 ◽  
Vol 11 (9) ◽  
pp. 4233
Author(s):  
Biprodip Pal ◽  
Debashis Gupta ◽  
Md. Rashed-Al-Mahfuz ◽  
Salem A. Alyami ◽  
Mohammad Ali Moni

The COVID-19 pandemic requires the rapid isolation of infected patients. Thus, high-sensitivity radiology images could be a key diagnostic technique alongside the polymerase chain reaction approach. Deep learning algorithms have been proposed in several studies for detecting COVID-19 symptoms, owing to their success in chest radiography image classification, cost efficiency, the lack of expert radiologists, and the need for faster processing during the pandemic. Most of the promising algorithms proposed in different studies are based on pre-trained deep learning models. Such open-source models, together with the lack of variation in the radiology image-capturing environment, make the diagnosis system vulnerable to adversarial attacks such as the fast gradient sign method (FGSM) attack. This study therefore explored the potential vulnerability of pre-trained convolutional neural networks to the FGSM attack in terms of two frequently used models, VGG16 and Inception-v3. Firstly, we developed two transfer learning models for X-ray and CT image-based COVID-19 classification and analyzed their performance extensively in terms of accuracy, precision, recall, and AUC. Secondly, our study illustrates that misclassification can occur with a very minor perturbation magnitude, such as 0.009 and 0.003 for the FGSM attack on these models for X-ray and CT images, respectively, without any effect on the visual perceptibility of the perturbation. In addition, we demonstrated that a successful FGSM attack can decrease the classification performance to 16.67% and 55.56% for X-ray images, and to 36% and 40% for CT images, for VGG16 and Inception-v3, respectively, without any human-recognizable perturbation effects in the adversarial images.
Finally, we showed that the correct-class probability of a test image, which should ideally be 1, can drop for both considered models as the perturbation grows; it can drop to 0.24 and 0.17 for the VGG16 model in the cases of X-ray and CT images, respectively. Thus, despite the need for data sharing and automated diagnosis, practical deployment of such programs requires more robustness.
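The FGSM perturbation the abstract describes adds eps times the sign of the loss gradient with respect to the input pixels. A minimal sketch on a toy linear "model" — illustrative only, with a synthetic input and an arbitrary eps; the input gradient is closed-form here, whereas for VGG16 or Inception-v3 it would come from backpropagation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy logistic-regression "classifier" standing in for VGG16/Inception-v3:
# FGSM only needs the gradient of the loss w.r.t. the input pixels.
d = 32 * 32
w = 0.1 * rng.standard_normal(d)

def predict_proba(x):
    return 1.0 / (1.0 + np.exp(-(x @ w)))

def fgsm(x, y_true, eps):
    """x_adv = x + eps * sign(dL/dx), with L = binary cross-entropy."""
    grad_x = (predict_proba(x) - y_true) * w   # closed-form input gradient
    return x + eps * np.sign(grad_x)

# A clean sample the model assigns to class 1.
x = rng.standard_normal(d)
if x @ w < 0:
    x = -x
clean_prob = predict_proba(x)             # > 0.5 on the true class

# A perturbation that is small relative to the pixel scale (sd = 1)
# flips the prediction: the correct-class probability collapses.
x_adv = fgsm(x, y_true=1.0, eps=0.2)
adv_prob = predict_proba(x_adv)
max_pixel_change = np.abs(x_adv - x).max()  # exactly eps per pixel
```

Because every pixel moves by at most eps, the adversarial image is visually indistinguishable from the clean one, which is exactly the failure mode the study measured on X-ray and CT inputs.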

