Classification of X-ray images into COVID-19, pneumonia, and TB using cGAN and fine-tuned deep transfer learning models

Author(s):  
Tirth Mehta ◽  
Ninad Mehendale


2021 ◽  
Vol 11 (9) ◽  
pp. 4233
Author(s):  
Biprodip Pal ◽  
Debashis Gupta ◽  
Md. Rashed-Al-Mahfuz ◽  
Salem A. Alyami ◽  
Mohammad Ali Moni

The COVID-19 pandemic requires the rapid isolation of infected patients. Thus, high-sensitivity radiology images could be a key technique for diagnosing patients alongside the polymerase chain reaction approach. Deep learning algorithms have been proposed in several studies to detect COVID-19 symptoms, motivated by their success in chest radiography image classification, their cost efficiency, the shortage of expert radiologists, and the need for faster processing during the pandemic. Most of the promising algorithms proposed in different studies are based on pre-trained deep learning models. Such open-source models and the lack of variation in the radiology image-capturing environment make the diagnosis system vulnerable to adversarial attacks such as the fast gradient sign method (FGSM). This study therefore explored the potential vulnerability of pre-trained convolutional neural network algorithms to the FGSM attack in terms of two frequently used models, VGG16 and Inception-v3. Firstly, we developed two transfer learning models for X-ray and CT image-based COVID-19 classification and analyzed their performance extensively in terms of accuracy, precision, recall, and AUC. Secondly, our study illustrates that misclassification can occur with very minor perturbation magnitudes, such as 0.009 and 0.003 for the FGSM attack in these models for X-ray and CT images, respectively, without the perturbation being visually perceptible. In addition, we demonstrated that a successful FGSM attack can decrease the classification performance to 16.67% and 55.56% for X-ray images, as well as 36% and 40% in the case of CT images, for VGG16 and Inception-v3, respectively, without any human-recognizable perturbation effects in the adversarial images. Finally, we showed that the correct-class probability of any test image, which should ideally be 1, drops for both models as the perturbation increases; for the VGG16 model, it can fall to 0.24 and 0.17 in the cases of X-ray and CT images, respectively. Thus, despite the need for data sharing and automated diagnosis, practical deployment of such systems requires greater robustness.
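For illustration, a minimal sketch of the FGSM perturbation described above is given below, assuming a PyTorch classifier; the function name, the use of cross-entropy loss, and the [0, 1] pixel clamp are assumptions rather than details taken from the study. Epsilon values on the order of 0.009 (X-ray) and 0.003 (CT) correspond to the perturbation magnitudes reported above.

    # Hedged FGSM sketch (PyTorch); `model`, the cross-entropy loss, and the
    # [0, 1] clamp are illustrative assumptions, not details from the study.
    import torch
    import torch.nn.functional as F

    def fgsm_attack(model, images, labels, epsilon):
        images = images.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(images), labels)
        loss.backward()
        # Step each pixel by epsilon in the direction that increases the loss.
        adv_images = images + epsilon * images.grad.sign()
        return adv_images.clamp(0.0, 1.0).detach()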


Sensors ◽  
2021 ◽  
Vol 21 (17) ◽  
pp. 5813
Author(s):  
Muhammad Umair ◽  
Muhammad Shahbaz Khan ◽  
Fawad Ahmed ◽  
Fatmah Baothman ◽  
Fehaid Alqahtani ◽  
...  

The COVID-19 outbreak began in December 2019 and has dreadfully affected our lives since then. More than three million lives have been claimed by this newest member of the coronavirus family. With the emergence of continuously mutating variants of this virus, it remains indispensable to diagnose the virus successfully at early stages. Although the primary technique for diagnosis is the PCR test, non-contact methods utilizing chest radiographs and CT scans are often preferred. Artificial intelligence, in this regard, plays an essential role in the early and accurate detection of COVID-19 using pulmonary images. In this research, a transfer learning technique with fine tuning was utilized for the detection and classification of COVID-19. Four pre-trained models, i.e., VGG16, DenseNet-121, ResNet-50, and MobileNet, were used. These deep neural networks were trained using a dataset (available on Kaggle) of 7232 (COVID-19 and normal) chest X-ray images. An indigenous dataset of 450 chest X-ray images of Pakistani patients was collected and used for testing and prediction purposes. Various important metrics, e.g., recall, specificity, F1-score, precision, loss graphs, and confusion matrices, were calculated to validate the accuracy of the models. The achieved accuracies of VGG16, ResNet-50, DenseNet-121, and MobileNet are 83.27%, 92.48%, 96.49%, and 96.48%, respectively. To display feature maps that depict the decomposition of an input image by the various filters, a visualization of the intermediate activations is performed. Finally, the Grad-CAM technique was applied to create class-specific heatmap images that highlight the features extracted from the X-ray images. Various optimizers were used for error minimization. DenseNet-121 outperformed the other three models in terms of both accuracy and prediction performance.
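As a rough illustration of the transfer learning with fine tuning described above, the sketch below loads an ImageNet-pretrained DenseNet-121 in PyTorch, freezes the convolutional base, unfreezes the last dense block, and replaces the classifier head with a two-class (COVID-19 vs. normal) output. Which layers are unfrozen and the learning rate are assumptions, not details from the paper.

    # Hedged transfer-learning sketch (PyTorch/torchvision); the unfrozen
    # layers and learning rate are illustrative assumptions.
    import torch
    import torch.nn as nn
    from torchvision import models

    model = models.densenet121(weights="IMAGENET1K_V1")       # ImageNet-pretrained backbone
    for param in model.parameters():                          # freeze the convolutional base
        param.requires_grad = False
    for param in model.features.denseblock4.parameters():     # fine-tune the last dense block
        param.requires_grad = True
    model.classifier = nn.Linear(model.classifier.in_features, 2)  # COVID-19 vs. normal head

    optimizer = torch.optim.Adam(
        (p for p in model.parameters() if p.requires_grad), lr=1e-4)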


Author(s):  
Muhammad Nur Aiman Shapiee ◽  
Muhammad Ar Rahim Ibrahim ◽  
Mohd Azraai Mohd Razman ◽  
Muhammad Amirul Abdullah ◽  
Rabiu Muazu Musa ◽  
...  

Electronics ◽  
2020 ◽  
Vol 9 (3) ◽  
pp. 427 ◽  
Author(s):  
Laith Alzubaidi ◽  
Mohammed A. Fadhel ◽  
Omran Al-Shamma ◽  
Jinglan Zhang ◽  
Ye Duan

Sickle cell anemia, also called sickle cell disease (SCD), is a hematological disorder that causes occlusion in blood vessels, leading to painful episodes and even death. The key function of red blood cells (erythrocytes) is to supply all parts of the human body with oxygen. Red blood cells (RBCs) form a crescent or sickle shape when sickle cell anemia affects them. This abnormal shape makes it difficult for sickle cells to move through the bloodstream, decreasing the oxygen flow. The precise classification of RBCs is the first step toward accurate diagnosis, which aids in evaluating the severity of sickle cell anemia. Manual classification of erythrocytes requires immense time and is prone to errors throughout the classification stage. Traditional computer-aided techniques for erythrocyte classification are based on handcrafted features, so their performance relies on the selected features; they are also very sensitive to variations in size, color, and shape, whereas microscopy images of erythrocytes show highly complex shapes and widely varying sizes. To this end, this research proposes lightweight deep learning models that classify erythrocytes into three classes: circular (normal), elongated (sickle cells), and other blood content. These models differ in the number of layers and learnable filters. The available datasets of red blood cells with sickle cell disease are very small for training deep learning models, so addressing the lack of training data is the main aim of this paper. To tackle this issue and optimize performance, transfer learning is utilized. Transfer learning does not significantly improve performance on medical imaging tasks when the source domain is completely different from the target domain, and in some cases it can even degrade performance. Hence, we apply same-domain transfer learning, unlike other methods that transfer from the ImageNet dataset. To minimize overfitting, we utilize several data augmentation techniques. Our approach achieved state-of-the-art performance, outperforming the latest methods with an accuracy of 99.54% for our model alone and 99.98% for our model plus a multiclass SVM classifier on the erythrocytesIDB dataset, and 98.87% on the collected dataset.
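The "model plus a multiclass SVM classifier" stage above can be sketched roughly as follows: activations from the lightweight CNN's penultimate layer are fed to a one-vs-rest SVM over the three classes. The random arrays below stand in for real CNN features, and the RBF kernel is an assumption rather than a detail from the paper.

    # Hedged sketch of the CNN-features + multiclass SVM stage; random arrays
    # are placeholders for penultimate-layer features.
    import numpy as np
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    # Placeholder features: (n_samples, n_features); labels: 0 = circular,
    # 1 = elongated (sickle), 2 = other blood content.
    train_x, train_y = rng.normal(size=(300, 256)), rng.integers(0, 3, 300)
    test_x,  test_y  = rng.normal(size=(60, 256)),  rng.integers(0, 3, 60)

    clf = SVC(kernel="rbf", decision_function_shape="ovr")    # one-vs-rest multiclass SVM
    clf.fit(train_x, train_y)
    print("accuracy:", (clf.predict(test_x) == test_y).mean())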


2021 ◽  
Vol 7 ◽  
pp. e680
Author(s):  
Muhammad Amirul Abdullah ◽  
Muhammad Ar Rahim Ibrahim ◽  
Muhammad Nur Aiman Shapiee ◽  
Muhammad Aizzat Zakaria ◽  
Mohd Azraai Mohd Razman ◽  
...  

This study aims at classifying flat ground tricks, namely the Ollie, Kickflip, Shove-it, Nollie, and Frontside 180, through the identification of significant input image transformations fed to different transfer learning models with an optimized Support Vector Machine (SVM) classifier. A total of six amateur skateboarders (20 ± 7 years of age with at least 5.0 years of experience) executed five repetitions of each type of trick on a customized ORY skateboard (with a fused IMU sensor) on cemented ground. From the IMU data, a total of six raw signals were extracted. Two input image types, namely raw data (RAW) and the Continuous Wavelet Transform (CWT), as well as six transfer learning models from three different families, along with a grid-search-optimized SVM, were investigated for their efficacy in classifying the skateboarding tricks. The study showed that both RAW and CWT input images with the MobileNet, MobileNetV2, and ResNet101 transfer learning models demonstrated the best accuracy at 100% on the test dataset. Nonetheless, when computational time was evaluated amongst the best models, the CWT-MobileNet-optimized SVM pipeline was found to be the best. It could be concluded that the proposed method can assist judges as well as coaches in identifying the execution of skateboarding tricks.
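A rough sketch of the CWT input-image step described above is given below, assuming PyWavelets and a Morlet mother wavelet; the sampling rate, scale range, and synthetic signal are placeholders rather than values from the study.

    # Hedged sketch of turning one raw IMU channel into a CWT scalogram image;
    # the sampling rate, scales, and signal are placeholders.
    import numpy as np
    import pywt

    fs = 100                                       # assumed sampling rate (Hz)
    t = np.arange(0.0, 2.0, 1.0 / fs)
    signal = np.sin(2 * np.pi * 5 * t)             # placeholder accelerometer channel
    scales = np.arange(1, 65)
    coeffs, freqs = pywt.cwt(signal, scales, "morl", sampling_period=1.0 / fs)
    scalogram = np.abs(coeffs)                     # (scales, time) array rendered as an image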


Medical imaging plays an important role in the diagnosis of some critical diseases and in the further treatment of patients. The brain is a central and highly complex structure in the human body that works with billions of cells and controls the functioning of all other organs. Brain tumours appear as uncontrolled, abnormal cell growth in brain tissues, and classifying them at an early stage increases the survival rate of the patient. Machine learning algorithms have contributed much to the automation of such tasks, and further improvement in prediction rates is possible through deep learning models. This paper presents experiments with deep transfer learning models on a publicly available dataset for brain tumour classification. Pre-trained plain and residual feed-forward models such as AlexNet, VGG19, ResNet50, ResNet101, and GoogLeNet are used for feature extraction, while fully connected layers and a softmax layer are used in common for classification. The evaluation metrics accuracy, sensitivity, specificity, and F1-score were computed.
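The evaluation metrics listed above can be recovered directly from a confusion matrix; the sketch below shows one way to compute them per class with numpy, using a made-up 3x3 matrix purely as a placeholder.

    # Hedged metric computation from a per-class confusion matrix; the matrix
    # values are made up for illustration only.
    import numpy as np

    cm = np.array([[48, 1, 1],
                   [2, 45, 3],
                   [1, 2, 47]], dtype=float)       # rows: true class, columns: predicted class

    tp = np.diag(cm)
    fp = cm.sum(axis=0) - tp
    fn = cm.sum(axis=1) - tp
    tn = cm.sum() - (tp + fp + fn)

    accuracy = tp.sum() / cm.sum()
    sensitivity = tp / (tp + fn)                   # recall per class
    specificity = tn / (tn + fp)
    precision = tp / (tp + fp)
    f1_score = 2 * precision * sensitivity / (precision + sensitivity)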


2021 ◽  
Vol 2021 ◽  
pp. 1-9
Author(s):  
Manjit Kaur ◽  
Vijay Kumar ◽  
Vaishali Yadav ◽  
Dilbag Singh ◽  
Naresh Kumar ◽  
...  

COVID-19 has affected the whole world drastically, and a huge number of people have lost their lives due to this pandemic. Early detection of COVID-19 infection is helpful for treatment and quarantine. Therefore, many researchers have designed deep learning models for the early diagnosis of COVID-19-infected patients. However, deep learning models suffer from overfitting and hyperparameter-tuning issues. To overcome these issues, in this paper, a metaheuristic-based deep COVID-19 screening model is proposed for X-ray images. A modified AlexNet architecture is used for feature extraction and classification of the input images, and the Strength Pareto Evolutionary Algorithm-II (SPEA-II) is used to tune its hyperparameters. The proposed model is tested on a four-class (i.e., COVID-19, tuberculosis, pneumonia, or healthy) dataset. Finally, comparisons are drawn between the proposed model and existing models.
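To make the hyperparameter-tuning idea concrete, the sketch below shows a deliberately simplified, single-objective genetic search over a (learning rate, batch size) pair; it is not a full SPEA-II implementation, and evaluate() merely stands in for training and validating the modified AlexNet on a candidate configuration.

    # Hedged, simplified evolutionary hyperparameter search; not SPEA-II itself.
    import random

    def evaluate(lr, batch_size):
        # Placeholder fitness; in the real pipeline this would be validation accuracy.
        return -abs(lr - 1e-3) - abs(batch_size - 32) / 100.0

    def random_candidate():
        return (10 ** random.uniform(-5, -1), random.choice([8, 16, 32, 64, 128]))

    population = [random_candidate() for _ in range(20)]
    for generation in range(10):
        population.sort(key=lambda c: evaluate(*c), reverse=True)
        parents = population[:10]                       # keep the fittest half
        children = []
        for _ in range(10):
            a, b = random.sample(parents, 2)
            child = (a[0] if random.random() < 0.5 else b[0],
                     a[1] if random.random() < 0.5 else b[1])   # uniform crossover
            if random.random() < 0.2:                   # mutate the learning rate
                child = (child[0] * random.uniform(0.5, 2.0), child[1])
            children.append(child)
        population = parents + children

    best_lr, best_batch = max(population, key=lambda c: evaluate(*c))
    print("best candidate:", best_lr, best_batch)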

