The Effectiveness of Image Augmentation in Deep Learning Networks for Detecting COVID-19: A Geometric Transformation Perspective

2021 ◽  
Vol 8 ◽  
Author(s):  
Mohamed Elgendi ◽  
Muhammad Umer Nasir ◽  
Qunfeng Tang ◽  
David Smith ◽  
John-Paul Grenier ◽  
...  

Chest X-ray imaging technology used for the early detection and screening of COVID-19 pneumonia is both accessible worldwide and affordable compared to other non-invasive technologies. Additionally, deep learning methods have recently shown remarkable results in detecting COVID-19 on chest X-rays, making it a promising screening technology for COVID-19. Deep learning relies on a large amount of data to avoid overfitting. While an overfitted model may fit the original training dataset perfectly, it can fail to achieve high accuracy on a new testing dataset. In the image processing field, an image augmentation step (i.e., adding more training data) is often used to reduce overfitting on the training dataset and improve prediction accuracy on the testing dataset. In this paper, we examined the impact of geometric augmentations as implemented in several recent publications for detecting COVID-19. We compared the performance of 17 deep learning algorithms with and without different geometric augmentations. We empirically examined the influence of augmentation with respect to detection accuracy, dataset diversity, augmentation methodology, and network size. Contrary to expectation, our results show that removing the recently used geometric augmentation steps actually improved the Matthews correlation coefficient (MCC) of the 17 models. The MCC without augmentation (MCC = 0.51) outperformed four recent geometric augmentations (MCC = 0.47 for Data Augmentation 1, MCC = 0.44 for Data Augmentation 2, MCC = 0.48 for Data Augmentation 3, and MCC = 0.49 for Data Augmentation 4). When we retrained a recently published deep learning model without augmentation on the same dataset, the detection accuracy increased significantly, with a McNemar's test statistic of χ² = 163.2 and a p-value of 2.23 × 10⁻³⁷. This is an interesting finding that may improve current deep learning algorithms that use geometric augmentations for detecting COVID-19.
We also provide clinical perspectives on geometric augmentation to consider regarding the development of a robust COVID-19 X-ray-based detector.
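As a concrete illustration of the geometric augmentations studied above, the sketch below applies flips, rotations, and translations to a toy image stored as a list of rows. It is a minimal pure-Python stand-in for what frameworks such as torchvision or Keras preprocessing layers do, not the paper's actual pipeline.

```python
# Minimal sketch of common geometric augmentations on a 2D image
# represented as a list of rows (pure Python, illustrative only).

def hflip(img):
    """Horizontal flip: mirror each row."""
    return [row[::-1] for row in img]

def rotate90(img):
    """Rotate 90 degrees clockwise: reverse rows, then transpose."""
    return [list(row) for row in zip(*img[::-1])]

def translate(img, dx, fill=0):
    """Shift each row right by dx pixels, padding with `fill`."""
    return [[fill] * dx + row[:len(row) - dx] for row in img]

def augment(img):
    """Return the original image plus three geometric variants."""
    return [img, hflip(img), rotate90(img), translate(img, 1)]

image = [[1, 2],
         [3, 4]]
variants = augment(image)
```

Each variant keeps the image content but changes its geometry, which is exactly the class of transformation whose removal the study found to improve the MCC.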

2021 ◽  
Vol 2021 ◽  
pp. 1-9
Author(s):  
Mundher Mohammed Taresh ◽  
Ningbo Zhu ◽  
Talal Ahmed Ali Ali ◽  
Asaad Shakir Hameed ◽  
Modhi Lafta Mutar

The novel coronavirus disease 2019 (COVID-19) is a contagious disease that has caused thousands of deaths and infected millions worldwide. Thus, technologies that allow for the fast detection of COVID-19 infections with high accuracy can offer healthcare professionals much-needed help. This study is aimed at evaluating the effectiveness of state-of-the-art pretrained Convolutional Neural Networks (CNNs) for the automatic diagnosis of COVID-19 from chest X-rays (CXRs). The dataset used in the experiments consists of 1200 CXR images from individuals with COVID-19, 1345 CXR images from individuals with viral pneumonia, and 1341 CXR images from healthy individuals. In this paper, the effectiveness of artificial intelligence (AI) in the rapid and precise identification of COVID-19 from CXR images is explored using different pretrained deep learning algorithms, each fine-tuned to maximise detection accuracy and thereby identify the best algorithm. The results showed that deep learning with X-ray imaging is useful in capturing critical biological markers associated with COVID-19 infections. VGG16 and MobileNet obtained the highest accuracy of 98.28%. However, VGG16 outperformed all other models in COVID-19 detection, with an accuracy, F1 score, precision, specificity, and sensitivity of 98.72%, 97.59%, 96.43%, 98.70%, and 98.78%, respectively. The outstanding performance of these pretrained models can significantly improve the speed and accuracy of COVID-19 diagnosis. However, a larger dataset of COVID-19 X-ray images is required for more accurate and reliable identification of COVID-19 infections when using deep transfer learning. This would be extremely beneficial in this pandemic, when the disease burden and the need for preventive measures conflict with the currently available resources.
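The metrics reported above (accuracy, F1 score, precision, specificity, sensitivity) all derive from the binary confusion matrix. The sketch below computes them from illustrative counts; these counts are hypothetical, not the study's actual confusion matrix.

```python
# Standard binary classification metrics from confusion counts:
# tp = true positives, tn = true negatives,
# fp = false positives, fn = false negatives.

def metrics(tp, tn, fp, fn):
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp)
    sensitivity = tp / (tp + fn)     # also called recall
    specificity = tn / (tn + fp)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return {"accuracy": accuracy, "precision": precision,
            "sensitivity": sensitivity, "specificity": specificity,
            "f1": f1}

# Hypothetical counts for a COVID-19 vs. non-COVID split:
m = metrics(tp=90, tn=85, fp=5, fn=10)
```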


Diagnostics ◽  
2020 ◽  
Vol 10 (6) ◽  
pp. 417 ◽  
Author(s):  
Mohammad Farukh Hashmi ◽  
Satyarth Katiyar ◽  
Avinash G Keskar ◽  
Neeraj Dhanraj Bokde ◽  
Zong Woo Geem

Pneumonia causes the death of around 700,000 children every year and affects 7% of the global population. Chest X-rays are primarily used for the diagnosis of this disease. However, even for a trained radiologist, examining chest X-rays is a challenging task, and there is a need to improve diagnostic accuracy. In this work, an efficient model for the detection of pneumonia, trained on digital chest X-ray images, is proposed to aid radiologists in their decision-making process. A novel approach based on a weighted classifier is introduced, which combines the weighted predictions of state-of-the-art deep learning models such as ResNet18, Xception, InceptionV3, DenseNet121, and MobileNetV3 in an optimal way. This is a supervised learning approach in which the network's predictions depend on the quality of the dataset used. Transfer learning is used to fine-tune the deep learning models to obtain higher training and validation accuracy. Partial data augmentation techniques are employed to increase the training dataset in a balanced way. The proposed weighted classifier is able to outperform all of the individual models. Finally, the model is evaluated not only in terms of test accuracy but also in terms of the AUC score. The final weighted classifier model achieves a test accuracy of 98.43% and an AUC score of 99.76 on unseen data from the Guangzhou Women and Children’s Medical Center pneumonia dataset. Hence, the proposed model can be used for a quick diagnosis of pneumonia and can aid radiologists in the diagnosis process.
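The weighted-classifier idea above can be sketched as a weighted average of per-model class probabilities followed by an arg-max. The model outputs and weights below are hypothetical; in the paper, the weights combining ResNet18, Xception, InceptionV3, DenseNet121, and MobileNetV3 are chosen optimally.

```python
# Weighted ensemble of classifiers: combine each model's class
# probabilities with a scalar weight, then pick the arg-max class.

def weighted_predict(prob_lists, weights):
    """prob_lists[i][c] = model i's probability for class c."""
    n_classes = len(prob_lists[0])
    combined = [
        sum(w * probs[c] for probs, w in zip(prob_lists, weights))
        for c in range(n_classes)
    ]
    return max(range(n_classes), key=lambda c: combined[c]), combined

# Two hypothetical models, two classes (normal vs. pneumonia):
label, scores = weighted_predict(
    [[0.3, 0.7], [0.6, 0.4]],   # per-model softmax outputs
    [0.8, 0.2],                 # ensemble weights
)
```

With these weights the first model dominates, so the ensemble follows its "pneumonia" vote even though the second model disagrees.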


2021 ◽  
pp. 1-11
Author(s):  
Sunil Rao ◽  
Vivek Narayanaswamy ◽  
Michael Esposito ◽  
Jayaraman J. Thiagarajan ◽  
Andreas Spanias

Reliable and rapid non-invasive testing has become essential for COVID-19 diagnosis and tracking statistics. Recent studies motivate the use of modern machine learning (ML) and deep learning (DL) tools that utilize features of coughing sounds for COVID-19 diagnosis. In this paper, we describe system designs that we developed for COVID-19 cough detection with the long-term objective of embedding them in a testing device. More specifically, we use log-mel spectrogram features extracted from the coughing audio signal and design a series of customized deep learning algorithms to develop fast and automated diagnosis tools for COVID-19 detection. We first explore the use of a deep neural network with fully connected layers. Additionally, we investigate prospects of efficient implementation by examining the impact on detection performance of pruning the fully connected neural network based on the Lottery Ticket Hypothesis (LTH) optimization process. In general, pruned neural networks have been shown to match the performance of unpruned networks at reduced computational complexity in a variety of signal processing applications. Finally, we investigate the use of convolutional neural network architectures, in particular the VGG-13 architecture, which we tune specifically for this application. Our results show that a unique ensembling of the VGG-13 architecture, trained using a combination of binary cross entropy and focal losses with data augmentation, significantly outperforms the fully connected networks and other recently proposed baselines on the DiCOVA 2021 COVID-19 cough audio dataset. Our customized VGG-13 model achieves an average validation AUROC of 82.23% and a test AUROC of 78.3% at a sensitivity of 80.49%.
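The LTH-style pruning step above can be illustrated with one round of magnitude pruning: zero the smallest-magnitude fraction of weights and keep a binary mask so surviving weights can be retrained from their original initialization. This pure-Python sketch operates on a flat weight list; real implementations prune per-layer tensors.

```python
# One round of magnitude pruning, as used in Lottery Ticket
# Hypothesis experiments (illustrative pure-Python version).

def prune_by_magnitude(weights, fraction):
    """Return (pruned_weights, mask) with `fraction` of entries zeroed."""
    k = int(len(weights) * fraction)
    # Indices of the k smallest-magnitude weights.
    order = sorted(range(len(weights)), key=lambda i: abs(weights[i]))
    drop = set(order[:k])
    mask = [0 if i in drop else 1 for i in range(len(weights))]
    pruned = [w * m for w, m in zip(weights, mask)]
    return pruned, mask

w = [0.5, -0.1, 0.05, -0.8, 0.2]
pruned, mask = prune_by_magnitude(w, 0.4)   # remove 40% -> 2 weights
```

Iterating this prune/retrain cycle is what yields the sparse "winning ticket" subnetworks with reduced computational complexity.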


2020 ◽  
Vol 41 (Supplement_2) ◽  
Author(s):  
T Matsumoto ◽  
S Kodera ◽  
H Shinohara ◽  
A Kiyosue ◽  
Y Higashikuni ◽  
...  

Abstract The development of deep learning technology has enabled machines to achieve high-level accuracy in interpreting medical images. While many previous studies have examined the detection of pulmonary nodules and cardiomegaly in chest X-rays using deep learning, the application of this technology to heart failure remains rare. In this study, we investigated the performance of a deep learning algorithm in diagnosing heart failure from chest X-ray images. We used 952 chest X-ray images from a labeled database published by the National Institutes of Health. Two cardiologists verified and relabeled these images, yielding a total of 260 “normal” and 378 “heart failure” images; the remainder were discarded because they had been incorrectly labeled. In this study, “heart failure” was defined as “cardiomegaly or congestion”, i.e., a chest X-ray with a cardiothoracic ratio (CTR) over 50% or radiographic presence of pulmonary edema. To enable the machine to extract a sufficient number of features from the images, we used the general machine learning approaches of data augmentation and transfer learning. Owing mostly to these techniques and the adequate relabeling process, we established a model to detect heart failure in chest X-rays by applying deep learning, and obtained an accuracy of 82%. Sensitivity and specificity for heart failure were 75% and 94.4%, respectively. Furthermore, heatmap imaging allowed us to visualize the decisions made by the machine. The figure shows randomly selected examples of the prediction probabilities and heatmaps of the chest X-rays from the dataset. The original image is on the left and its heatmap is on the right, with its prediction probability written below. The red areas on the heatmaps show the important regions according to which the machine determined the classification.
While some images with ambiguous radiolucency, such as (e) and (f), were prone to misdiagnosis by this model, most images, like (a)–(d), were diagnosed correctly. Deep learning can thus help support the diagnosis of heart failure using chest X-ray images.
Figure: Heatmaps and probabilities of prediction.
Funding Acknowledgement: Public grant(s) – National budget only. Main funding source(s): JSPS KAKENHI.
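The labeling rule stated above (heart failure when the cardiothoracic ratio exceeds 50% or pulmonary edema is present) can be written out directly. The widths below are illustrative pixel measurements, not values from the study.

```python
# The study's "heart failure" labeling rule:
# CTR = cardiac width / thoracic width; positive if CTR > 0.5
# or radiographic pulmonary edema is present.

def ctr(cardiac_width, thoracic_width):
    return cardiac_width / thoracic_width

def label_heart_failure(cardiac_width, thoracic_width, has_edema):
    return ctr(cardiac_width, thoracic_width) > 0.5 or has_edema

case_a = label_heart_failure(160, 300, has_edema=False)  # CTR ~ 0.53
case_b = label_heart_failure(130, 300, has_edema=False)  # CTR ~ 0.43
case_c = label_heart_failure(130, 300, has_edema=True)   # edema alone
```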


2021 ◽  
Author(s):  
Amran Hossain ◽  
Mohammad Tariqul Islam ◽  
Ali F. Almutairi

Abstract Automated classification and detection of brain abnormalities such as tumors from microwave head images is essential for investigating and monitoring disease progression. This paper presents the automatic classification and detection of human brain abnormalities in microwave head images through the deep learning-based YOLOv5 model. YOLOv5 is a fast object detection model with a computationally light architecture and high accuracy. First, backscattered signals are collected from the implemented 3D wideband nine-antenna array-based microwave head imaging (MWHI) system, where one antenna operates as a transmitter and the remaining eight operate as receivers. In this research, a fabricated tissue-mimicking head phantom with benign and malignant tumors as brain abnormalities is utilized in the MWHI system. Afterwards, the modified delay-multiply-and-sum (M-DMAS) imaging algorithm is applied to the post-processed scattering parameters to reconstruct head-region images of 640×640 pixels. Three hundred sample images, including benign and malignant tumors at various locations in the head region, are collected with the MWHI system. The images are then preprocessed and augmented to create a final dataset of 3600 images, which is used for training, validating, and testing the YOLOv5 model. Of these images, 80% are used for training and 20% for testing; from the training portion, 20% is in turn held out for validation to avoid overfitting. The brain abnormality classification and detection performance on the various datasets is investigated with the YOLOv5s, YOLOv5m, and YOLOv5l variants of YOLOv5, and the YOLOv5l model shows the best classification and detection results compared to the other models.
The YOLOv5l model achieved a training accuracy, validation loss, precision, recall, F1-score, training classification loss, validation classification loss, and mean average precision (mAP) of 99.84%, 9.38%, 93.20%, 94.80%, 94.01%, 0.004, 0.0133, and 96.20%, respectively, confirming its better classification and detection accuracy. Finally, a testing dataset covering different scenarios is evaluated with the three versions of the YOLOv5 model, and we conclude that brain abnormalities are successfully classified and detected along with their locations. The deep model is thus applicable in a portable MWHI system.
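The nested split described above (80/20 train/test, then 20% of the training portion for validation) can be sketched as simple arithmetic over the 3600-image dataset:

```python
# Nested dataset split: hold out test_frac of the total for testing,
# then val_frac of the remaining training pool for validation.

def split_counts(n_total, test_frac=0.2, val_frac=0.2):
    n_test = round(n_total * test_frac)
    n_train_full = n_total - n_test
    n_val = round(n_train_full * val_frac)
    n_train = n_train_full - n_val
    return n_train, n_val, n_test

n_train, n_val, n_test = split_counts(3600)
```

For 3600 images this yields 2304 training, 576 validation, and 720 test images.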


2021 ◽  
Author(s):  
Debmitra Ghosh

Abstract SARS-CoV-2, or severe acute respiratory syndrome coronavirus 2, is the cause of coronavirus disease (COVID-19), a viral disease. The rapid spread of COVID-19 is having a detrimental effect on the global economy and health. A chest X-ray of infected patients can be considered a crucial step in the battle against COVID-19. In retrospect, abnormalities have been found in chest X-rays of patients suggestive of COVID-19. This sparked the introduction of a variety of deep learning systems, and studies have shown that the accuracy of COVID-19 patient detection from chest X-rays is strongly promising. There are, however, certain shortcomings: deep learning networks such as convolutional neural networks (CNNs) need a substantial amount of training data, and because the outbreak is recent, large datasets of radiographic images of COVID-19-infected patients are not available on such short notice. Here, in this research, we present a method to generate synthetic chest X-ray (CXR) images by developing a Deep Convolutional Generative Adversarial Network (DCGAN)-based model. In addition, we demonstrate that the synthetic images produced by the DCGAN can be utilized to enhance the performance of a CNN for COVID-19 detection. Classification using the CNN alone yielded 85% accuracy. Although several models are available, we chose MobileNet, as it is a lightweight deep neural network with fewer parameters and higher classification accuracy. We use a deep neural network-based model to diagnose COVID-19-infected patients through radiological imaging of 5,859 chest X-ray images, employing a deep convolutional neural network and the pre-trained model DenseNet 121 for two new label classes (COVID-19 and Normal). To improve classification accuracy, we further reduced the number of network parameters by introducing the dense blocks proposed in DenseNets into MobileNet.
By adding the synthetic images produced by the DCGAN, the accuracy increased to 97%. Our goal is to use this method to speed up COVID-19 detection and lead to more robust systems for radiology.
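The augmentation step above amounts to mixing GAN-generated samples into the real training pool while tracking their origin. The sketch below is a schematic of that bookkeeping only; the image values and the DCGAN generator itself are stand-ins, not the paper's code.

```python
# Combine real and DCGAN-generated images into one labeled training
# pool, tagging each sample's origin so the mix ratio can be audited.

def combine(real, synthetic, label):
    """Tag each image with its class label and its origin."""
    return ([(img, label, "real") for img in real] +
            [(img, label, "synthetic") for img in synthetic])

real_covid = ["cxr_001", "cxr_002"]                 # stand-ins for arrays
synthetic_covid = ["gan_001", "gan_002", "gan_003"]  # generator outputs
train_set = combine(real_covid, synthetic_covid, label="COVID-19")
```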


Diagnostics ◽  
2021 ◽  
Vol 11 (11) ◽  
pp. 1972
Author(s):  
Abul Bashar ◽  
Ghazanfar Latif ◽  
Ghassen Ben Brahim ◽  
Nazeeruddin Mohammad ◽  
Jaafar Alghazo

It became apparent that mankind has to learn to live with and adapt to COVID-19, especially because the vaccines developed thus far do not prevent the infection but rather just reduce the severity of the symptoms. The manual classification and diagnosis of COVID-19 pneumonia requires specialized personnel and is time consuming and very costly. On the other hand, automatic diagnosis would allow for real-time diagnosis without human intervention, resulting in reduced costs. Therefore, the objective of this research is to propose a novel optimized Deep Learning (DL) approach for the automatic classification and diagnosis of COVID-19 pneumonia using X-ray images. For this purpose, a publicly available dataset of chest X-rays on Kaggle was used in this study. The dataset was developed over three stages in a quest to have a unified COVID-19 entities dataset available for researchers. The dataset consists of 21,165 anterior-to-posterior and posterior-to-anterior chest X-ray images classified as: Normal (48%), COVID-19 (17%), Lung Opacity (28%), and Viral Pneumonia (6%). Data Augmentation was also applied to increase the dataset size and enhance the reliability of results by preventing overfitting. An optimized DL approach is implemented in which chest X-ray images go through a three-stage process: Image Enhancement is performed in the first stage, followed by a Data Augmentation stage, and in the final stage the results are fed to the Transfer Learning algorithms (AlexNet, GoogleNet, VGG16, VGG19, and DenseNet), where the images are classified and diagnosed. Extensive experiments were performed under various scenarios, which led to achieving the highest classification accuracy of 95.63% through the application of the VGG16 transfer learning algorithm on the augmented, enhanced dataset with frozen weights. This accuracy was found to be better than the results reported by other methods in the recent literature.
Thus, the proposed approach proved superior in performance as compared with that of other similar approaches in the extant literature, and it made a valuable contribution to the body of knowledge. Although the results achieved so far are promising, further work is planned to correlate the results of the proposed approach with clinical observations to further enhance the efficiency and accuracy of COVID-19 diagnosis.
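The first stage of the pipeline above is image enhancement. As an illustrative stand-in (the paper's exact enhancement method may differ), the sketch below applies a min-max contrast stretch to an 8-bit grayscale image stored as a list of rows:

```python
# Min-max contrast stretch: linearly map the image's intensity range
# [lo, hi] onto [0, out_max], a common simple enhancement step.

def contrast_stretch(img, out_max=255):
    flat = [p for row in img for p in row]
    lo, hi = min(flat), max(flat)
    if hi == lo:                       # flat image: nothing to stretch
        return [row[:] for row in img]
    scale = out_max / (hi - lo)
    return [[round((p - lo) * scale) for p in row] for row in img]

enhanced = contrast_stretch([[50, 100],
                             [150, 200]])
```

The narrow input range 50–200 is expanded to the full 0–255 range, increasing contrast before augmentation and classification.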


2020 ◽  
Author(s):  
Jinseok Lee

BACKGROUND The coronavirus disease (COVID-19) has explosively spread worldwide since the beginning of 2020. According to a multinational consensus statement from the Fleischner Society, computed tomography (CT) can be used as a relevant screening tool owing to its higher sensitivity for detecting early pneumonic changes. However, physicians are extremely busy fighting COVID-19 in this era of worldwide crisis. Thus, it is crucial to accelerate the development of an artificial intelligence (AI) diagnostic tool to support physicians. OBJECTIVE We aimed to quickly develop an AI technique to diagnose COVID-19 pneumonia and differentiate it from non-COVID pneumonia and non-pneumonia diseases on CT. METHODS A simple 2D deep learning framework, named fast-track COVID-19 classification network (FCONet), was developed to diagnose COVID-19 pneumonia based on a single chest CT image. FCONet was developed by transfer learning, using one of four state-of-the-art pre-trained deep learning models (VGG16, ResNet50, InceptionV3, or Xception) as a backbone. For training and testing of FCONet, we collected 3,993 chest CT images of patients with COVID-19 pneumonia, other pneumonia, and non-pneumonia diseases from Wonkwang University Hospital, Chonnam National University Hospital, and the Italian Society of Medical and Interventional Radiology public database. These CT images were split into a training and a testing set at a ratio of 8:2. On the test dataset, the diagnostic performance for COVID-19 pneumonia was compared among the four pre-trained FCONet models. In addition, we tested the FCONet models on an additional external testing dataset extracted from the embedded low-quality chest CT images of COVID-19 pneumonia in recently published papers.
RESULTS Of the four pre-trained models of FCONet, ResNet50 showed excellent diagnostic performance (sensitivity 99.58%, specificity 100%, and accuracy 99.87%) and outperformed the other three pre-trained models on the testing dataset. On the additional external test dataset of low-quality CT images, the detection accuracy of the ResNet50 model was the highest (96.97%), followed by Xception, InceptionV3, and VGG16 (90.71%, 89.38%, and 87.12%, respectively). CONCLUSIONS FCONet, a simple 2D deep learning framework based on a single chest CT image, provides excellent diagnostic performance in detecting COVID-19 pneumonia. Based on our testing dataset, the ResNet50-based FCONet might be the best model, as it outperformed the FCONet models based on VGG16, Xception, and InceptionV3.


Polymers ◽  
2021 ◽  
Vol 13 (13) ◽  
pp. 2212
Author(s):  
Worawat Poltabtim ◽  
Ekachai Wimolmala ◽  
Teerasak Markpin ◽  
Narongrit Sombatsompop ◽  
Vichai Rosarpitak ◽  
...  

The potential utilization of wood/polyvinyl chloride (WPVC) composites containing an X-ray protective filler, namely bismuth oxide (Bi2O3) particles, was investigated as novel, safe, and environmentally friendly X-ray shielding materials. The wood and Bi2O3 contents used in this work varied from 20 to 40 parts per hundred parts of PVC by weight (pph) and were 0, 25, 50, 75, and 100 pph, respectively. The study considered X-ray shielding, mechanical, density, water absorption, and morphological properties. The results showed that the overall X-ray shielding parameters, namely the linear attenuation coefficient (µ), mass attenuation coefficient (µm), and lead equivalent thickness (Pbeq), of the WPVC composites increased with increasing Bi2O3 contents but slightly decreased at higher wood contents (40 pph). Furthermore, comparative Pbeq values between the wood/PVC composites and similar commercial X-ray shielding boards indicated that the recommended Bi2O3 contents for the 20 pph (40 pph) wood/PVC composites were 35, 85, and 40 pph (40, 100, and 45 pph) for the attenuation of 60, 100, and 150-kV X-rays, respectively. In addition, the increased Bi2O3 contents in the WPVC composites enhanced the Izod impact strength, hardness (Shore D), and density, but reduced water absorption. On the other hand, the increased wood contents increased the impact strength, hardness (Shore D), and water absorption but lowered the density of the composites. The overall results suggested that the developed WPVC composites had great potential to be used as effective X-ray shielding materials with Bi2O3 acting as a suitable X-ray protective filler.
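The shielding quantities above relate through the Beer–Lambert law, I = I₀·exp(−µx). The sketch below computes µ from a transmission measurement, the mass attenuation coefficient µm = µ/ρ, and the lead-equivalent thickness (the lead thickness producing the same attenuation, µ·x/µPb). All numeric inputs are illustrative, not measured values from the study.

```python
# X-ray shielding quantities under the Beer-Lambert law I = I0*exp(-mu*x).
import math

def linear_attenuation(i0, i, thickness_cm):
    """mu (1/cm) from incident/transmitted intensities and thickness."""
    return math.log(i0 / i) / thickness_cm

def mass_attenuation(mu, density):
    """mu_m (cm^2/g) = mu / density."""
    return mu / density

def lead_equivalent(mu_material, x_material_cm, mu_lead):
    """Lead thickness (cm) giving the same attenuation as the material."""
    return mu_material * x_material_cm / mu_lead

mu = linear_attenuation(i0=1000.0, i=100.0, thickness_cm=1.0)  # ln(10)
mu_m = mass_attenuation(mu, density=1.4)
pb_eq = lead_equivalent(mu, x_material_cm=1.0, mu_lead=60.0)
```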


2020 ◽  
Vol 6 (1) ◽  
Author(s):  
Malte Seemann ◽  
Lennart Bargsten ◽  
Alexander Schlaefer

Abstract Deep learning methods produce promising results when applied to a wide range of medical imaging tasks, including segmentation of the artery lumen in computed tomography angiography (CTA) data. However, to perform sufficiently well, neural networks have to be trained on large amounts of high-quality annotated data. In the realm of medical imaging, annotations are not only quite scarce but also often not entirely reliable. To tackle both challenges, we developed a two-step approach for generating realistic synthetic CTA data for the purpose of data augmentation. In the first step, moderately realistic images are generated in a purely numerical fashion. In the second step, these images are improved by applying neural domain adaptation. We evaluated the impact of synthetic data on lumen segmentation via convolutional neural networks (CNNs) by comparing the resulting performances. Improvements of up to 5% in the Dice coefficient and 20% in the Hausdorff distance represent a proof of concept that the proposed augmentation procedure can be used to enhance deep learning-based segmentation of the artery lumen in CTA images.
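The Dice coefficient used above to score segmentation overlap is 2|A ∩ B| / (|A| + |B|) over the foreground pixel sets of prediction and ground truth. A minimal sketch on coordinate sets:

```python
# Dice coefficient between two segmentation masks represented as
# sets of foreground pixel coordinates.

def dice(pred, truth):
    pred, truth = set(pred), set(truth)
    if not pred and not truth:   # both empty: perfect agreement
        return 1.0
    return 2 * len(pred & truth) / (len(pred) + len(truth))

prediction = {(0, 0), (0, 1), (1, 1)}
ground_truth = {(0, 1), (1, 1), (1, 0)}
score = dice(prediction, ground_truth)   # 2*2 / (3+3)
```

Here two of three pixels agree, giving a Dice score of 2/3; a 5% improvement in this quantity corresponds to noticeably tighter lumen boundaries.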

