Coronavirus and Pneumonia Detection from X-Ray Images Harnessing Deep Learning and Transfer Learning Techniques

Author(s):  
Aseem Sangalay ◽  
Natasha Srivastava ◽  
Shweta Meena


Author(s):  
Arshia Rehman ◽  
Saeeda Naz ◽  
Ahmed Khan ◽  
Ahmad Zaib ◽  
Imran Razzak

Abstract
Background: Coronavirus disease (COVID-19) is an infectious disease caused by a newly discovered virus. Its exponential growth is not only threatening lives, but also impacting businesses and disrupting travel around the world.
Aim: The aim of this work is to develop an efficient diagnosis of COVID-19 by differentiating it from viral pneumonia, bacterial pneumonia and healthy cases using deep learning techniques.
Method: In this work, we used pre-trained knowledge to improve diagnostic performance through transfer learning and compared the performance of different CNN architectures.
Results: Evaluation using 10-fold cross-validation showed that we achieved state-of-the-art performance, with an overall accuracy of 98.75% across CT and X-ray cases as a whole.
Conclusion: Quantitative evaluation showed high accuracy for automatic diagnosis of COVID-19. The pre-trained deep learning models developed in this study could be used for early screening of coronavirus; however, extensive CT and X-ray datasets are needed to develop a reliable application.
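The 10-fold evaluation protocol reported above can be sketched as follows. This is a minimal, framework-free illustration of k-fold cross-validation, not the authors' pipeline: the `majority_train` classifier is a hypothetical placeholder standing in for their pre-trained CNN.

```python
import numpy as np

def k_fold_indices(n_samples, k=10, seed=0):
    """Shuffle sample indices and split them into k roughly equal folds."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)
    return np.array_split(idx, k)

def cross_validate(X, y, train_fn, k=10):
    """Train on k-1 folds, test on the held-out fold, average accuracy."""
    folds = k_fold_indices(len(X), k)
    accs = []
    for i in range(k):
        test_idx = folds[i]
        train_idx = np.concatenate([folds[j] for j in range(k) if j != i])
        model = train_fn(X[train_idx], y[train_idx])
        preds = model(X[test_idx])
        accs.append(np.mean(preds == y[test_idx]))
    return float(np.mean(accs))

# Toy stand-in for a classifier: always predicts the training majority class.
def majority_train(X_train, y_train):
    majority = int(np.round(y_train.mean()))
    return lambda X_test: np.full(len(X_test), majority)

X = np.zeros((100, 4))             # placeholder "features"
y = np.array([0] * 70 + [1] * 30)  # imbalanced labels
mean_acc = cross_validate(X, y, majority_train, k=10)
```

The per-fold accuracies are averaged at the end, which is how a single headline figure such as the 98.75% above is typically obtained from a 10-fold run.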


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Dandi Yang ◽  
Cristhian Martinez ◽  
Lara Visuña ◽  
Hardev Khandhar ◽  
Chintan Bhatt ◽  
...  

Abstract
The main purpose of this work is to investigate and compare several deep learning enhanced techniques applied to X-ray and CT-scan medical images for the detection of COVID-19. In this paper, we used four powerful pre-trained CNN models, VGG16, DenseNet121, ResNet50, and ResNet152, for the COVID-19 CT-scan binary classification task. The proposed Fast.AI ResNet framework was designed to identify the best architecture, pre-processing, and training parameters for the models largely automatically. The accuracy and F1-score were both above 96% in the diagnosis of COVID-19 using CT-scan images. In addition, we applied transfer learning techniques to overcome the data shortage and to reduce the training time. The binary and multi-class classification of X-ray images was performed using an enhanced VGG16 deep transfer learning architecture, which achieved a high accuracy of 99% in detecting COVID-19 and pneumonia from X-ray images. The accuracy and validity of the algorithms were assessed on well-known public X-ray and CT-scan datasets. The proposed methods achieve better results for COVID-19 diagnosis than other related works in the literature. In our opinion, our work can help virologists and radiologists to make a better and faster diagnosis in the struggle against the outbreak of COVID-19.
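The core idea of the transfer learning described above is to reuse a frozen pre-trained feature extractor and train only a small classification head on top. The following is a framework-free sketch of that idea, not the paper's Fast.AI/VGG16 code: the fixed random projection `W_frozen` is a hypothetical stand-in for pre-trained backbone features, and the data is a separable toy set.

```python
import numpy as np

rng = np.random.default_rng(42)

# Stand-in for a frozen pre-trained backbone: a fixed random ReLU projection.
# In the paper this role is played by VGG16/ResNet features.
W_frozen = rng.normal(size=(8, 4))

def extract_features(X):
    """'Pre-trained' feature extractor; its weights are never updated."""
    return np.maximum(X @ W_frozen, 0.0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_head(X, y, lr=0.5, epochs=500):
    """Train only the small binary classification head on frozen features."""
    F = extract_features(X)
    w = np.zeros(F.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = sigmoid(F @ w + b)
        w -= lr * F.T @ (p - y) / len(y)
        b -= lr * np.mean(p - y)
    return w, b

def predict(X, w, b):
    return (sigmoid(extract_features(X) @ w + b) > 0.5).astype(int)

# Toy "images" flattened to 8-dim vectors, two well-separated classes.
X = np.vstack([rng.normal(-2.0, 1.0, (50, 8)), rng.normal(2.0, 1.0, (50, 8))])
y = np.array([0] * 50 + [1] * 50)

w, b = train_head(X, y)
acc = np.mean(predict(X, w, b) == y)
```

Because only `w` and `b` are updated, training is fast even when the backbone is large — the main practical benefit the abstract attributes to transfer learning on limited data.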


2021 ◽  
Vol 11 (9) ◽  
pp. 4233
Author(s):  
Biprodip Pal ◽  
Debashis Gupta ◽  
Md. Rashed-Al-Mahfuz ◽  
Salem A. Alyami ◽  
Mohammad Ali Moni

The COVID-19 pandemic requires the rapid isolation of infected patients. Thus, high-sensitivity radiology imaging could be a key technique for diagnosing patients alongside the polymerase chain reaction approach. Deep learning algorithms have been proposed in several studies to detect COVID-19 symptoms, owing to their success in chest radiography image classification, their cost efficiency, the shortage of expert radiologists, and the need for faster processing during the pandemic. Most of the promising algorithms proposed in different studies are based on pre-trained deep learning models. Such open-source models, together with the lack of variation in the radiology image-capturing environment, make the diagnosis system vulnerable to adversarial attacks such as the fast gradient sign method (FGSM) attack. This study therefore explored the potential vulnerability of pre-trained convolutional neural network algorithms to the FGSM attack using two frequently used models, VGG16 and Inception-v3. Firstly, we developed two transfer learning models for X-ray and CT image-based COVID-19 classification and analyzed their performance extensively in terms of accuracy, precision, recall, and AUC. Secondly, our study illustrates that misclassification can occur with a very minor perturbation magnitude, such as 0.009 and 0.003 for the FGSM attack on X-ray and CT images, respectively, without any effect on the visual perceptibility of the perturbation. In addition, we demonstrated that a successful FGSM attack can decrease the classification performance to 16.67% and 55.56% for X-ray images, and to 36% and 40% for CT images, for VGG16 and Inception-v3, respectively, without any human-recognizable perturbation effects in the adversarial images.
Finally, we showed that the correct-class probability of a test image, which should ideally be 1, drops for both models as the perturbation increases; it can fall to 0.24 and 0.17 for the VGG16 model on X-ray and CT images, respectively. Thus, despite the need for data sharing and automated diagnosis, practical deployment of such programs requires more robustness.
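FGSM itself is a one-step attack: it perturbs the input by a small amount eps in the direction of the sign of the loss gradient with respect to the input. The sketch below demonstrates the mechanism on a tiny logistic model with hand-picked weights, not on the VGG16/Inception-v3 models of the study; the weights, input, and eps are all illustrative assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, y, w, b, eps):
    """FGSM: step of size eps in the sign of the input gradient of the loss.

    For logistic loss L = -y*log(p) - (1-y)*log(1-p) with p = sigmoid(w.x + b),
    the input gradient is dL/dx = (p - y) * w.
    """
    p = sigmoid(np.dot(w, x) + b)
    grad_x = (p - y) * w
    return x + eps * np.sign(grad_x)

# Hypothetical trained weights of a tiny linear "classifier".
w = np.array([1.0, -2.0, 0.5, 1.5])
b = 0.0

x = np.array([0.3, -0.2, 0.1, 0.2])  # clean input, true label 1
y = 1

clean_p = sigmoid(np.dot(w, x) + b)      # correctly classified as class 1
x_adv = fgsm_perturb(x, y, w, b, eps=0.3)
adv_p = sigmoid(np.dot(w, x_adv) + b)    # probability pushed below 0.5
```

The perturbation is bounded by eps in every coordinate (an L-infinity budget), which is why the study's tiny magnitudes of 0.009 and 0.003 leave no visible trace in the adversarial images while still flipping the prediction.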


Measurement ◽  
2021 ◽  
pp. 109953
Author(s):  
Adhiyaman Manickam ◽  
Jianmin Jiang ◽  
Yu Zhou ◽  
Abhinav Sagar ◽  
Rajkumar Soundrapandiyan ◽  
...  

2022 ◽  
pp. 1-12
Author(s):  
Amin Ul Haq ◽  
Jian Ping Li ◽  
Samad Wali ◽  
Sultan Ahmad ◽  
Zafar Ali ◽  
...  

Artificial intelligence (AI) based computer-aided diagnostic (CAD) systems can effectively diagnose critical diseases. AI-based detection of breast cancer (BC) from image data is more efficient and accurate than professional radiologists. However, existing AI-based BC diagnosis methods suffer from low prediction accuracy and high computation time. For these reasons, medical professionals are not employing the currently proposed techniques in E-Healthcare to diagnose BC effectively. Effective breast cancer diagnosis requires incorporating advanced AI techniques into the diagnostic process. In this work, we proposed a deep learning based diagnosis method (StackBC) to detect breast cancer at an early stage for effective treatment and recovery. In particular, we incorporated deep learning models including a convolutional neural network (CNN), long short-term memory (LSTM), and gated recurrent unit (GRU) for the classification of invasive ductal carcinoma (IDC). Additionally, data augmentation and transfer learning techniques were incorporated for data set balancing and effective model training. To further improve the predictive performance of the model, we used a stacking technique. Among the three base classifiers (CNN, LSTM, GRU), the GRU achieved the best predictive performance of any individual model and was therefore selected as the meta-classifier to distinguish between non-IDC and IDC breast images. A hold-out strategy was used, splitting the data set into 90% for training and 10% for testing. Model evaluation metrics were computed to assess performance, and a breast histology image data set was used to analyze the efficacy of the model.
Our experimental results demonstrated that the proposed StackBC method achieved improved performance, with 99.02% accuracy and a 100% area under the receiver operating characteristic curve (AUC-ROC), compared to state-of-the-art methods. Given its high performance, we recommend the proposed method for early recognition of breast cancer in E-Healthcare.
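Stacking, the ensemble technique named above, feeds the outputs of several base classifiers into a meta-classifier that learns how to combine them. The sketch below illustrates that structure only: the three toy per-feature scorers are hypothetical stand-ins for the paper's CNN/LSTM/GRU base models, and a simple logistic meta-classifier replaces their GRU meta-model.

```python
import numpy as np

rng = np.random.default_rng(7)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Three weak "base classifiers", each scoring one input feature.
# (In the paper these are trained CNN, LSTM, and GRU models.)
def base_scores(X):
    return sigmoid(X[:, :3] * np.array([2.0, 1.5, 1.0]))  # (n, 3) probabilities

def train_meta(P, y, lr=0.5, epochs=1000):
    """Logistic meta-classifier stacked on the base models' outputs."""
    w = np.zeros(P.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = sigmoid(P @ w + b)
        w -= lr * P.T @ (p - y) / len(y)
        b -= lr * np.mean(p - y)
    return w, b

# Toy data: the label depends on the sum of the first three features,
# so no single base classifier sees the whole signal.
X = rng.normal(size=(200, 5))
y = (X[:, :3].sum(axis=1) > 0).astype(int)

P = base_scores(X)            # stack the base predictions as meta-features
w, b = train_meta(P, y)
acc = np.mean((sigmoid(P @ w + b) > 0.5).astype(int) == y)
```

Each base model alone sees only part of the signal, so the meta-classifier's combined decision outperforms any individual scorer — the motivation for stacking CNN, LSTM, and GRU in StackBC.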


Author(s):  
Akshay Raina ◽  
Shubham Mahajan ◽  
Ch. Vanipriya ◽  
Anil Bhardwaj ◽  
Amit Kant Pandit

2020 ◽  
Vol 10 (4) ◽  
pp. 213 ◽  
Author(s):  
Ki-Sun Lee ◽  
Jae Young Kim ◽  
Eun-tae Jeon ◽  
Won Suk Choi ◽  
Nan Hee Kim ◽  
...  

According to recent studies, patients with COVID-19 show different feature characteristics on chest X-ray (CXR) than those with other lung diseases. This study aimed to evaluate the effect of layer depth and degree of fine-tuning in transfer learning for deep convolutional neural network (CNN)-based COVID-19 screening on CXR, in order to identify efficient transfer learning strategies. The CXR images used in this study were collected from publicly available repositories and classified into three classes: COVID-19, pneumonia, and normal. To evaluate the effect of layer depth within the same CNN architecture family, VGG-16 and VGG-19 were used as backbone networks. Each backbone network was then trained with different degrees of fine-tuning and comparatively evaluated. The experimental results showed the highest AUC value for COVID-19 classification, 0.950, in the model fine-tuned with only 2/5 blocks of the VGG-16 backbone network. In conclusion, when classifying medical images with a limited amount of data, a deeper network may not guarantee better results. In addition, even when the same pre-trained CNN architecture is used, an appropriate degree of fine-tuning can help to build an efficient deep learning model.
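The "degree of fine-tuning" compared above amounts to choosing which blocks of a pre-trained network receive gradient updates and which stay frozen. The following is a minimal sketch of that mechanism on a toy two-layer network, not the study's VGG-16/VGG-19 setup: the "pre-trained" weights, data, and hyper-parameters are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fine_tune(X, y, W1, w2, b2, freeze_first_block, lr=0.5, epochs=300):
    """Fine-tune a toy 2-layer net, optionally freezing the first 'block'.

    Mirrors block-wise fine-tuning of a CNN backbone: gradients exist for
    every layer, but updates are applied only to the unfrozen layers.
    """
    W1, w2 = W1.copy(), w2.copy()
    for _ in range(epochs):
        H = np.tanh(X @ W1)          # "early block" features
        p = sigmoid(H @ w2 + b2)     # classification head
        d = (p - y) / len(y)
        if not freeze_first_block:
            dH = np.outer(d, w2) * (1.0 - H ** 2)
            W1 -= lr * X.T @ dH      # update the early block too
        w2 -= lr * H.T @ d
        b2 -= lr * d.sum()
    return W1, w2, b2

# "Pre-trained" first-block weights and a fresh, untrained head.
W1_pre = rng.normal(size=(6, 4))
w2_init = np.zeros(4)

X = np.vstack([rng.normal(-1.5, 1.0, (40, 6)), rng.normal(1.5, 1.0, (40, 6))])
y = np.array([0] * 40 + [1] * 40)

W1_f, w2_f, b2_f = fine_tune(X, y, W1_pre, w2_init, 0.0,
                             freeze_first_block=True)
frozen_unchanged = np.array_equal(W1_f, W1_pre)  # early block untouched
acc = np.mean((sigmoid(np.tanh(X @ W1_f) @ w2_f + b2_f) > 0.5) == y)
```

Freezing early blocks shrinks the number of trainable parameters, which is one plausible reason the partially fine-tuned VGG-16 generalized best on the study's limited data: fewer free parameters means less overfitting.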

