X-Ray Image based COVID-19 Detection using Pre-trained Deep Learning Models

Author(s):  
Michael J Horry ◽  
Subrata Chakraborty ◽  
Manoranjan Paul ◽  
Anwaar Ulhaq ◽  
Biswajeet Pradhan ◽  
...  

Detecting COVID-19 early may help in devising an appropriate treatment plan and disease containment decisions. In this study, we demonstrate how pre-trained deep learning models can be adopted to perform COVID-19 detection using X-ray images. The aim is to provide over-stressed medical professionals with a second pair of eyes through intelligent image classification models. We highlight the challenges (including dataset size and quality) in utilising current publicly available COVID-19 datasets for developing useful deep learning models. We propose a semi-automated image pre-processing model to create a trustworthy image dataset for developing and testing deep learning models. The new approach aims to reduce unwanted noise from X-ray images so that deep learning models can focus on disease-specific features. Next, we devise a deep learning experimental framework in which we utilise the processed dataset to perform comparative testing of several popular and widely available deep learning model families, such as VGG, Inception, Xception, and ResNet. The experimental results highlight the suitability of these models for the currently available dataset and indicate that models with simpler networks, such as VGG19, perform relatively better, with up to 83% precision. This provides a solid pathway for researchers and practitioners to develop improved models in the future.
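
As a rough illustration of the transfer-learning setup described above, the sketch below builds a binary COVID-19/normal classifier on top of an ImageNet-pretrained VGG19 using the Keras API; the input size, head layers, and training settings are assumptions, not the authors' exact configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Frozen VGG19 backbone pre-trained on ImageNet; only the new head is trained.
base = tf.keras.applications.VGG19(
    weights="imagenet", include_top=False, input_shape=(224, 224, 3)
)
base.trainable = False

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(1, activation="sigmoid"),  # COVID-19 vs. normal
])

model.compile(
    optimizer=tf.keras.optimizers.Adam(1e-4),
    loss="binary_crossentropy",
    metrics=["accuracy", tf.keras.metrics.Precision()],
)
# model.fit(train_ds, validation_data=val_ds, epochs=10)  # datasets assumed
```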

2021 ◽  
Vol 2021 ◽  
pp. 1-9
Author(s):  
Manjit Kaur ◽  
Vijay Kumar ◽  
Vaishali Yadav ◽  
Dilbag Singh ◽  
Naresh Kumar ◽  
...  

COVID-19 has affected the whole world drastically. A huge number of people have lost their lives due to this pandemic. Early detection of COVID-19 infection is helpful for treatment and quarantine. Therefore, many researchers have designed deep learning models for the early diagnosis of COVID-19-infected patients. However, deep learning models suffer from overfitting and hyperparameter-tuning issues. To overcome these issues, in this paper, a metaheuristic-based deep COVID-19 screening model is proposed for X-ray images. The modified AlexNet architecture is used for feature extraction and classification of the input images. The strength Pareto evolutionary algorithm II (SPEA-II) is used to tune the hyperparameters of the modified AlexNet. The proposed model is tested on a four-class (i.e., COVID-19, tuberculosis, pneumonia, or healthy) dataset. Finally, comparisons are drawn between the existing models and the proposed model.
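
The sketch below shows, in broad strokes, how an evolutionary search can wrap hyperparameter selection for a network such as the modified AlexNet; it uses a generic single-objective mutation-and-selection loop with a dummy fitness function rather than SPEA-II itself, and the search space is purely illustrative.

```python
import random

# Illustrative hyperparameter space (not the paper's exact encoding).
SPACE = {
    "learning_rate": [1e-4, 5e-4, 1e-3, 5e-3],
    "batch_size": [16, 32, 64],
    "dropout": [0.3, 0.4, 0.5],
}

def sample():
    return {k: random.choice(v) for k, v in SPACE.items()}

def dummy_fitness(params):
    # Stand-in for "train the modified AlexNet and return validation accuracy".
    return -abs(params["learning_rate"] - 1e-3)

def evolve(fitness, pop_size=10, generations=5, mutation_rate=0.2):
    population = [sample() for _ in range(pop_size)]
    for _ in range(generations):
        ranked = sorted(population, key=fitness, reverse=True)
        parents = ranked[: pop_size // 2]        # keep the fitter half
        children = []
        for parent in parents:                   # mutate to refill the pool
            child = dict(parent)
            for key in child:
                if random.random() < mutation_rate:
                    child[key] = random.choice(SPACE[key])
            children.append(child)
        population = parents + children
    return max(population, key=fitness)

best = evolve(dummy_fitness)
```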


Author(s):  
Ishtiaque Ahmed ◽  
◽  
Manan Darda ◽  
Neha Tikyani ◽  
Rachit Agrawal ◽  
...  

The COVID-19 pandemic has caused large-scale outbreaks in more than 150 countries worldwide, causing massive damage to the livelihood of many people. The ability to identify infected patients early and provide targeted treatment is one of the most important steps in the battle against COVID-19. One of the quickest ways to diagnose patients is to use radiography and radiology images to detect the disease. Early studies have shown that chest X-rays of patients infected with COVID-19 have unique abnormalities. To identify COVID-19 patients from chest X-ray images, we used various deep learning models based on previous studies. We first compiled a dataset of 2,815 chest radiographs from public sources. The model produces reliable and stable results with an accuracy of 91.6%, a positive predictive value of 80%, a negative predictive value of 100%, a specificity of 87.50%, and a sensitivity of 100%. It is observed that the CNN-based architecture can diagnose COVID-19 disease. These outcomes can be further improved by increasing the dataset size and by refining the CNN-based architecture used to train the model.
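
The metrics quoted above (accuracy, positive and negative predictive value, specificity, sensitivity) all derive from the binary confusion matrix; the short helper below computes them from raw counts, with the example numbers chosen arbitrarily for illustration.

```python
def binary_metrics(tp, fp, tn, fn):
    """Standard binary-classification metrics from confusion-matrix counts."""
    return {
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
        "ppv": tp / (tp + fp),          # positive predictive value (precision)
        "npv": tn / (tn + fn),          # negative predictive value
        "sensitivity": tp / (tp + fn),  # recall / true-positive rate
        "specificity": tn / (tn + fp),  # true-negative rate
    }

print(binary_metrics(tp=40, fp=10, tn=70, fn=0))  # illustrative counts only
```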


2019 ◽  
Vol 9 (22) ◽  
pp. 4871 ◽  
Author(s):  
Quan Liu ◽  
Chen Feng ◽  
Zida Song ◽  
Joseph Louis ◽  
Jian Zhou

Earthmoving is an integral civil engineering operation of significance, and tracking its productivity requires statistics on the loads moved by dump trucks. Since current methods for truck-load statistics are laborious, costly, and limited in application, this paper presents a framework for novel, automated, non-contact field earthmoving quantity statistics (FEQS) for projects with large earthmoving demands that use uniform and uncovered trucks. The proposed FEQS framework utilizes field surveillance systems and adopts vision-based deep learning for full/empty-load truck classification as its core task. Since the convolutional neural network (CNN) and its transfer learning (TL) variants are popular vision-based deep learning models that come in many forms, a comparison study was conducted to test the feasibility of the framework's core task and to evaluate the performance of different deep learning models in implementation. The comparison study involved 12 CNN or CNN-TL models in full/empty-load truck classification, and the results revealed that while several models performed satisfactorily, the fine-tuned VGG16 (VGG16-FineTune) performed best. This demonstrated the feasibility of the proposed FEQS framework's core task. Further discussion offers model-choice suggestions: CNN-TL models are more feasible than CNN prototypes, and models adopting different TL methods have advantages in either accuracy or speed for different tasks.
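
Fine-tuning, as opposed to pure feature extraction, typically unfreezes the top convolutional layers of the pretrained backbone and retrains them at a small learning rate; the sketch below shows one way to do this for a VGG16 full/empty-load classifier in Keras, with the layer cutoff, head, and learning rate as assumptions rather than the paper's exact setup.

```python
import tensorflow as tf

base = tf.keras.applications.VGG16(
    weights="imagenet", include_top=False, input_shape=(224, 224, 3)
)

# Unfreeze only the last convolutional block (block5_*); earlier layers stay frozen.
for layer in base.layers:
    layer.trainable = layer.name.startswith("block5")

model = tf.keras.Sequential([
    base,
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # full vs. empty load
])

# A small learning rate keeps the pretrained weights from being destroyed.
model.compile(
    optimizer=tf.keras.optimizers.Adam(1e-5),
    loss="binary_crossentropy",
    metrics=["accuracy"],
)
```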


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Makoto Nishimori ◽  
Kunihiko Kiuchi ◽  
Kunihiro Nishimura ◽  
Kengo Kusano ◽  
Akihiro Yoshida ◽  
...  

Abstract Cardiac accessory pathways (APs) in Wolff–Parkinson–White (WPW) syndrome are conventionally diagnosed with decision tree algorithms; however, these have limitations in clinical use. We assessed the efficacy of an artificial intelligence model using electrocardiography (ECG) and chest X-rays to identify the location of APs. We retrospectively used ECG and chest X-rays to analyse 206 patients with WPW syndrome. Each AP location was defined by an electrophysiological study and divided into four classifications. We developed a deep learning model to classify AP locations and compared its accuracy with that of conventional algorithms. Moreover, 1519 chest X-ray samples from other datasets were used for pre-training, and the combined chest X-ray image and ECG data were put into the previous model to evaluate whether the accuracy improved. The convolutional neural network (CNN) model using ECG data was significantly more accurate than the conventional tree algorithm. In the multimodal model, which took the combined ECG and chest X-ray data as input, the accuracy was significantly improved. Deep learning with a combination of ECG and chest X-ray data could effectively identify the AP location, suggesting a novel multimodal deep learning approach.
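
A multimodal model of the kind described can be expressed with two input branches whose features are concatenated before a shared classification head; the sketch below is a minimal Keras functional-API version, with all layer sizes and the ECG input shape assumed for illustration rather than taken from the paper.

```python
import tensorflow as tf
from tensorflow.keras import layers

# Branch 1: 12-lead ECG treated as a 1-D signal (shape is an assumption).
ecg_in = layers.Input(shape=(5000, 12), name="ecg")
x = layers.Conv1D(32, 7, activation="relu")(ecg_in)
x = layers.MaxPooling1D(4)(x)
x = layers.Conv1D(64, 7, activation="relu")(x)
x = layers.GlobalAveragePooling1D()(x)

# Branch 2: chest X-ray image.
cxr_in = layers.Input(shape=(224, 224, 1), name="cxr")
y = layers.Conv2D(32, 3, activation="relu")(cxr_in)
y = layers.MaxPooling2D(2)(y)
y = layers.Conv2D(64, 3, activation="relu")(y)
y = layers.GlobalAveragePooling2D()(y)

# Fuse the two modalities and classify into the four AP location classes.
merged = layers.concatenate([x, y])
out = layers.Dense(64, activation="relu")(merged)
out = layers.Dense(4, activation="softmax", name="ap_location")(out)

model = tf.keras.Model(inputs=[ecg_in, cxr_in], outputs=out)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```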


2021 ◽  
Vol 11 (9) ◽  
pp. 4233
Author(s):  
Biprodip Pal ◽  
Debashis Gupta ◽  
Md. Rashed-Al-Mahfuz ◽  
Salem A. Alyami ◽  
Mohammad Ali Moni

The COVID-19 pandemic requires the rapid isolation of infected patients. Thus, high-sensitivity radiology images could be a key technique for diagnosing patients alongside the polymerase chain reaction approach. Deep learning algorithms have been proposed in several studies to detect COVID-19 symptoms owing to their success in chest radiography image classification, their cost efficiency, the shortage of expert radiologists, and the need for faster processing in pandemic areas. Most of the promising algorithms proposed in different studies are based on pre-trained deep learning models. Such open-source models and the lack of variation in the radiology image-capturing environment make diagnosis systems vulnerable to adversarial attacks such as the fast gradient sign method (FGSM) attack. This study therefore explored the potential vulnerability of pre-trained convolutional neural network algorithms to the FGSM attack in terms of two frequently used models, VGG16 and Inception-v3. Firstly, we developed two transfer learning models for X-ray and CT image-based COVID-19 classification and analyzed the performance extensively in terms of accuracy, precision, recall, and AUC. Secondly, our study illustrates that misclassification can occur with a very minor perturbation magnitude, such as 0.009 and 0.003 for the FGSM attack in these models for X-ray and CT images, respectively, without the perturbation being visually perceptible. In addition, we demonstrated that a successful FGSM attack can decrease the classification performance to 16.67% and 55.56% for X-ray images, as well as 36% and 40% in the case of CT images, for VGG16 and Inception-v3, respectively, without any human-recognizable perturbation effects in the adversarial images. Finally, we showed that the correct-class probability of a test image, which should ideally be 1, drops for both models as the perturbation increases; for the VGG16 model it falls to 0.24 and 0.17 for X-ray and CT images, respectively. Thus, despite the need for data sharing and automated diagnosis, practical deployment of such systems requires greater robustness.
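
FGSM perturbs an input once in the direction of the sign of the loss gradient, scaled by a small epsilon such as the 0.009 and 0.003 magnitudes reported above; the sketch below is a minimal TensorFlow version for a generic Keras classifier, where the one-hot label encoding and [0, 1] pixel range are assumptions.

```python
import tensorflow as tf

def fgsm_perturb(model, image, label, epsilon=0.009):
    """One-step FGSM: move the image along the sign of the loss gradient."""
    image = tf.convert_to_tensor(image, dtype=tf.float32)
    with tf.GradientTape() as tape:
        tape.watch(image)
        prediction = model(image, training=False)
        loss = tf.keras.losses.categorical_crossentropy(label, prediction)
    gradient = tape.gradient(loss, image)
    adversarial = image + epsilon * tf.sign(gradient)
    # Keep pixels in the valid range (assuming inputs scaled to [0, 1]).
    return tf.clip_by_value(adversarial, 0.0, 1.0)
```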


2020 ◽  
Vol 6 (1) ◽  
Author(s):  
Stefan Gerlach ◽  
Christoph Fürweger ◽  
Theresa Hofmann ◽  
Alexander Schlaefer

Abstract Although robotic radiosurgery offers a flexible arrangement of treatment beams, generating treatment plans is computationally challenging and a time-consuming process for the planner. Furthermore, different clinical goals have to be considered during planning, and different sets of beams generally correspond to different clinical goals. Typically, candidate beams sampled from a randomized heuristic form the basis for treatment planning. We propose a new approach to generate candidate beams based on deep learning, using radiological features as well as the desired constraints. We demonstrate that candidate beams generated for specific clinical goals can improve treatment plan quality. Furthermore, we compare two approaches to include information about the constraints in the prediction. Our results show that CNN-generated beams can improve treatment plan quality for different clinical goals, increasing coverage from 91.2% to 96.8% for 3,000 candidate beams on average. When the clinical goal is included in the training, coverage improves by a further 1.1 percentage points.
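
One way to condition beam prediction on the desired clinical goal, in the spirit of the approach above, is to concatenate an encoding of the constraints with image-derived features before the output layer; the sketch below is purely illustrative, and the input shapes, number of candidate beams, and constraint encoding are all assumptions rather than the authors' architecture.

```python
import tensorflow as tf
from tensorflow.keras import layers

N_BEAMS = 3000  # candidate beams to score (assumed)

# Radiological features rendered as a 2-D map (shape is an assumption).
feat_in = layers.Input(shape=(64, 64, 1), name="radiological_features")
x = layers.Conv2D(16, 3, activation="relu")(feat_in)
x = layers.MaxPooling2D(2)(x)
x = layers.Conv2D(32, 3, activation="relu")(x)
x = layers.GlobalAveragePooling2D()(x)

# Encoding of the clinical goal / constraints (assumed 8-dimensional).
goal_in = layers.Input(shape=(8,), name="clinical_goal")

# Condition the prediction on the goal by concatenating it with the features.
z = layers.concatenate([x, goal_in])
z = layers.Dense(128, activation="relu")(z)
beam_scores = layers.Dense(N_BEAMS, activation="sigmoid", name="beam_scores")(z)

model = tf.keras.Model(inputs=[feat_in, goal_in], outputs=beam_scores)
model.compile(optimizer="adam", loss="binary_crossentropy")
```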


Author(s):  
Hsu-Heng Yen ◽  
Ping-Yu Wu ◽  
Pei-Yuan Su ◽  
Chia-Wei Yang ◽  
Yang-Yuan Chen ◽  
...  

Abstract Purpose Management of peptic ulcer bleeding is clinically challenging. Accurate characterization of the bleeding during endoscopy is key for endoscopic therapy. This study aimed to assess whether a deep learning model can aid in the classification of bleeding peptic ulcer disease. Methods Endoscopic still images of patients (n = 1694) with peptic ulcer bleeding over the last 5 years were retrieved and reviewed. Overall, 2289 images were collected for deep learning model training, and 449 images were reserved for the performance test. Two expert endoscopists classified the images into different classes based on their appearance. Four deep learning models, MobileNet V2, VGG16, Inception V4, and ResNet50, were proposed and pre-trained on ImageNet with established convolutional neural network algorithms. A comparison between the endoscopists and the trained deep learning models was performed to evaluate the models’ performance on the dataset of 449 testing images. Results The results first present the performance comparison of the four deep learning models. MobileNet V2 showed the best performance among the proposed models and was chosen for further comparison with the diagnostic results obtained by one senior and one novice endoscopist. The sensitivity and specificity were acceptable for the prediction of “normal” lesions in both 3-class and 4-class classifications. For the 3-class category, the sensitivity and specificity were 94.83% and 92.36%, respectively. For the 4-class category, the sensitivity and specificity were 95.40% and 92.70%, respectively. The interobserver agreement between the model and the senior endoscopist on the testing dataset was moderate to substantial. The deep learning model determined the need for endoscopic therapy and for high-risk endoscopic therapy more accurately than the novice endoscopist. Conclusions In this study, the deep learning model performed better than inexperienced endoscopists. Further improvement of the model may aid in clinical decision-making during clinical practice, especially for trainee endoscopists.
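
A comparison of several ImageNet-pretrained backbones, as described above, can be set up by swapping the base network inside an otherwise identical classification head; the sketch below does this with tf.keras.applications (Inception V4 is not bundled there, so InceptionV3 stands in), and the head, image size, and class count are assumptions.

```python
import tensorflow as tf
from tensorflow.keras import layers

BACKBONES = {
    "MobileNetV2": tf.keras.applications.MobileNetV2,
    "VGG16": tf.keras.applications.VGG16,
    "InceptionV3": tf.keras.applications.InceptionV3,  # stand-in for Inception V4
    "ResNet50": tf.keras.applications.ResNet50,
}

def build_classifier(backbone_fn, n_classes=4, image_size=224):
    base = backbone_fn(weights="imagenet", include_top=False,
                       input_shape=(image_size, image_size, 3))
    base.trainable = False  # feature extraction; fine-tuning would be a later step
    return tf.keras.Sequential([
        base,
        layers.GlobalAveragePooling2D(),
        layers.Dense(128, activation="relu"),
        layers.Dense(n_classes, activation="softmax"),
    ])

models = {name: build_classifier(fn) for name, fn in BACKBONES.items()}
# Each model would then be trained and evaluated on the same train/test split.
```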


2021 ◽  
Vol 5 (1) ◽  
Author(s):  
Isabella Castiglioni ◽  
Davide Ippolito ◽  
Matteo Interlenghi ◽  
Caterina Beatrice Monti ◽  
Christian Salvatore ◽  
...  

Abstract Background We aimed to train and test a deep learning classifier to support the diagnosis of coronavirus disease 2019 (COVID-19) using chest x-ray (CXR) in a cohort of subjects from two hospitals in Lombardy, Italy. Methods For training and validation we used an ensemble of ten convolutional neural networks (CNNs) with mainly bedside CXRs of 250 COVID-19 and 250 non-COVID-19 subjects from two hospitals (Centres 1 and 2). We then tested this system on bedside CXRs of an independent group of 110 patients (74 COVID-19, 36 non-COVID-19) from one of the two hospitals. A retrospective reading was performed by two radiologists in the absence of any clinical information, with the aim of differentiating COVID-19 from non-COVID-19 patients. Real-time polymerase chain reaction served as the reference standard. Results At 10-fold cross-validation, our deep learning model classified COVID-19 and non-COVID-19 patients with 0.78 sensitivity (95% confidence interval [CI] 0.74–0.81), 0.82 specificity (95% CI 0.78–0.85), and 0.89 area under the curve (AUC) (95% CI 0.86–0.91). For the independent dataset, deep learning showed 0.80 sensitivity (95% CI 0.72–0.86) (59/74), 0.81 specificity (29/36) (95% CI 0.73–0.87), and 0.81 AUC (95% CI 0.73–0.87). The radiologists’ reading obtained 0.63 sensitivity (95% CI 0.52–0.74) and 0.78 specificity (95% CI 0.61–0.90) in Centre 1 and 0.64 sensitivity (95% CI 0.52–0.74) and 0.86 specificity (95% CI 0.71–0.95) in Centre 2. Conclusions This preliminary experience, based on ten CNNs trained on a limited training dataset, shows the interesting potential of deep learning for COVID-19 diagnosis. The tool is being trained with new CXRs to further increase its performance.
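
An ensemble of CNNs is commonly evaluated by averaging the per-model predicted probabilities (soft voting) and scoring the result, for example with the area under the ROC curve; the snippet below sketches that step for a set of trained Keras models, where the model list, test images, and binary labels are assumed placeholders.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def ensemble_probabilities(models, images):
    """Soft voting: average the COVID-19 probability across ensemble members."""
    per_model = np.stack([m.predict(images, verbose=0).ravel() for m in models])
    return per_model.mean(axis=0)

# Assuming `trained_cnns` is the list of ten CNNs and (x_test, y_test) is the
# independent test set with binary labels (1 = COVID-19):
# probs = ensemble_probabilities(trained_cnns, x_test)
# print("AUC:", roc_auc_score(y_test, probs))
```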

