A deep learning framework to detect Covid-19 disease via chest X-ray and CT scan images

Author(s):  
Mohammed Y. Kamil

COVID-19 spread rapidly around the world at the beginning of this year. Hospital reports have indicated low sensitivity of RT-PCR tests in the early stage of infection, so a rapid and accurate diagnostic technique is needed to detect COVID-19. CT has been demonstrated to be a successful tool in the diagnosis of the disease. A deep learning framework can be developed to aid in evaluating CT exams and providing a diagnosis, thus saving time for disease control. In this work, a deep learning model was adapted for COVID-19 detection via feature extraction from chest X-ray and CT images. Initially, several transfer-learning models were applied and compared; a VGG-19 model was then tuned to obtain the best results for adoption in disease diagnosis. Diagnostic performance was assessed for all models on a dataset of 1000 images. The VGG-19 model achieved the highest accuracy of 99%, sensitivity of 97.4%, and specificity of 99.4%. Deep learning combined with image processing demonstrated high performance in early COVID-19 detection and shows promise as an auxiliary detection tool for clinical doctors, thus contributing to the control of the pandemic.
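
As a rough illustration of the transfer-learning approach described above, the sketch below fine-tunes an ImageNet-pretrained VGG-19 for binary COVID-19 classification in Keras. The input size, classification head, and optimizer settings are assumptions for illustration, not the authors' exact configuration.

```python
# Hedged sketch: VGG-19 transfer learning for binary COVID-19 classification.
# Input size, head layers, and optimizer settings are assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG19

base = VGG19(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False  # freeze ImageNet features; fine-tune later if needed

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(1, activation="sigmoid"),  # COVID-19 vs. non-COVID-19
])

model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="binary_crossentropy",
              metrics=["accuracy",
                       tf.keras.metrics.Recall(name="sensitivity"),
                       tf.keras.metrics.AUC(name="auc")])

# model.fit(train_ds, validation_data=val_ds, epochs=20)
```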

Sensors ◽  
2022 ◽  
Vol 22 (2) ◽  
pp. 669
Author(s):  
Irfan Ullah Khan ◽  
Nida Aslam ◽  
Talha Anwar ◽  
Hind S. Alsaif ◽  
Sara Mhd. Bachar Chrouf ◽  
...  

The coronavirus pandemic (COVID-19) is disrupting the entire world; its rapid global spread threatens to affect millions of people. Accurate and timely diagnosis of COVID-19 is essential to control the spread and alleviate risk. Motivated by the promising results achieved by integrating machine learning (ML), and particularly deep learning (DL), into automated disease diagnosis, the current study proposes a deep learning model for the automated diagnosis of COVID-19 using chest X-ray (CXR) images and patient clinical data. The aim of this study is to investigate the effect of integrating clinical patient data with the CXR for automated COVID-19 diagnosis. The proposed model used data collected from King Fahad University Hospital, Dammam, KSA, consisting of 270 patient records. The experiments were carried out first with clinical data alone, second with the CXR alone, and finally with clinical data and CXR combined. A fusion technique was used to combine the clinical features with the features extracted from the images. The study found that integrating clinical data with the CXR improves diagnostic accuracy. Using the clinical data and the CXR, the model achieved an accuracy of 0.970, a recall of 0.986, a precision of 0.978, and an F-score of 0.982. Further validation was performed by comparing the performance of the proposed system with the diagnosis of an expert. The results show that the proposed system can be used as a tool to help doctors in COVID-19 diagnosis.
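
A minimal sketch of the feature-level fusion idea, assuming a pretrained CNN backbone for the CXR branch and a small dense encoder for the tabular clinical variables; the backbone choice, branch sizes, and the number of clinical variables (n_clinical) are illustrative and not taken from the paper.

```python
# Hedged sketch of feature-level fusion: CXR image features concatenated with
# tabular clinical features before a joint classifier. All sizes are assumptions.
import tensorflow as tf
from tensorflow.keras import layers, Model
from tensorflow.keras.applications import ResNet50

n_clinical = 12  # hypothetical number of clinical variables per patient

# Image branch: a generic pretrained CNN stands in for the paper's extractor
img_in = layers.Input(shape=(224, 224, 3), name="cxr")
backbone = ResNet50(weights="imagenet", include_top=False, pooling="avg")
img_feat = backbone(img_in)

# Clinical branch: small dense encoder for tabular data
clin_in = layers.Input(shape=(n_clinical,), name="clinical")
clin_feat = layers.Dense(32, activation="relu")(clin_in)

# Fusion: concatenate both feature vectors and classify
fused = layers.Concatenate()([img_feat, clin_feat])
x = layers.Dense(64, activation="relu")(fused)
x = layers.Dropout(0.3)(x)
out = layers.Dense(1, activation="sigmoid")(x)

model = Model(inputs=[img_in, clin_in], outputs=out)
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy", tf.keras.metrics.Precision(),
                       tf.keras.metrics.Recall()])
```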


2021 ◽  
Vol 2071 (1) ◽  
pp. 012002
Author(s):  
K Sato ◽  
N Kanno ◽  
T Ishii ◽  
Y Saijo

Abstract Detecting lung tumors at an early stage from chest X-ray images is important for radical treatment of the disease. To decrease the risk of missed lung tumors, diagnosis support systems that can accurately detect lung tumors are in high demand, and artificial intelligence based on deep learning is one of the promising solutions. In our research, we aim to improve the accuracy of a deep learning-based system for detecting lung tumors by developing a bone suppression algorithm as preprocessing for the machine-learning model. Our bone suppression algorithm was devised for conventional single-shot chest X-ray images and does not rely on a specific type of imaging system. A total of 604 chest X-ray images were processed using the proposed algorithm and evaluated by combining it with a U-net deep learning model. The results showed that the bone suppression algorithm improved the performance of the deep learning model in identifying the location of lung tumors (Intersection over Union) from 0.085 (without bone suppression) to 0.142, as well as its ability to classify lung cancer (area under the curve), which increased from 0.700 to 0.736. The bone suppression algorithm would be useful for improving the accuracy and reliability of deep learning-based diagnosis support systems for detecting lung cancer in mass medical examinations.
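
The sketch below illustrates how a bone-suppression step can be chained as preprocessing ahead of a segmentation model and how Intersection over Union is computed; suppress_bones() is a hypothetical placeholder, as the authors' algorithm is not reproduced here.

```python
# Hedged sketch: bone suppression as preprocessing before a segmentation model,
# with Intersection over Union (IoU) as the evaluation metric.
import numpy as np

def suppress_bones(image: np.ndarray) -> np.ndarray:
    """Placeholder for the bone-suppression algorithm (assumption)."""
    return image  # the real method would attenuate rib/clavicle shadows

def iou(pred_mask: np.ndarray, true_mask: np.ndarray, eps: float = 1e-7) -> float:
    """Intersection over Union between two binary masks."""
    pred = pred_mask.astype(bool)
    true = true_mask.astype(bool)
    intersection = np.logical_and(pred, true).sum()
    union = np.logical_or(pred, true).sum()
    return float(intersection / (union + eps))

# Typical evaluation loop (model is any trained U-net-style segmenter):
# scores = [iou(model.predict(suppress_bones(x)[None])[0] > 0.5, y)
#           for x, y in test_pairs]
# print(np.mean(scores))
```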


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Makoto Nishimori ◽  
Kunihiko Kiuchi ◽  
Kunihiro Nishimura ◽  
Kengo Kusano ◽  
Akihiro Yoshida ◽  
...  

Abstract Cardiac accessory pathways (APs) in Wolff–Parkinson–White (WPW) syndrome are conventionally diagnosed with decision tree algorithms; however, these have limitations in clinical use. We assessed the efficacy of an artificial intelligence model that uses electrocardiography (ECG) and chest X-rays to identify the location of APs. We retrospectively used ECG and chest X-rays to analyse 206 patients with WPW syndrome. Each AP location was defined by an electrophysiological study and divided into four classes. We developed a deep learning model to classify AP locations and compared its accuracy with that of conventional algorithms. Moreover, 1519 chest X-ray samples from other datasets were used for pre-training, and the combined chest X-ray and ECG data were fed into the resulting model to evaluate whether the accuracy improved. The convolutional neural network (CNN) model using ECG data was significantly more accurate than the conventional tree algorithm. In the multimodal model, which took the combined ECG and chest X-ray data as input, the accuracy improved significantly. Deep learning with a combination of ECG and chest X-ray data could effectively identify the AP location, suggesting that a multimodal deep learning model may be a useful novel approach.
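
A hedged sketch of one way such a multimodal classifier could be wired: a 1D convolutional branch for the ECG, a 2D convolutional branch for the chest X-ray, and a fused four-class softmax output for the accessory-pathway locations. All shapes and layer sizes are assumptions, not the authors' architecture.

```python
# Hedged sketch of a multimodal classifier: 1D CNN for ECG, 2D CNN for the
# chest X-ray, fused before a 4-class softmax (the four AP location classes).
from tensorflow.keras import layers, Model

ecg_in = layers.Input(shape=(5000, 12), name="ecg")      # assumed samples x leads
x = layers.Conv1D(32, 7, activation="relu")(ecg_in)
x = layers.MaxPooling1D(4)(x)
x = layers.Conv1D(64, 5, activation="relu")(x)
x = layers.GlobalAveragePooling1D()(x)

cxr_in = layers.Input(shape=(224, 224, 1), name="cxr")
y = layers.Conv2D(32, 3, activation="relu")(cxr_in)
y = layers.MaxPooling2D(2)(y)
y = layers.Conv2D(64, 3, activation="relu")(y)
y = layers.GlobalAveragePooling2D()(y)

fused = layers.Concatenate()([x, y])
z = layers.Dense(64, activation="relu")(fused)
out = layers.Dense(4, activation="softmax")(z)           # four AP location classes

model = Model(inputs=[ecg_in, cxr_in], outputs=out)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```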


2021 ◽  
Vol 5 (1) ◽  
Author(s):  
Isabella Castiglioni ◽  
Davide Ippolito ◽  
Matteo Interlenghi ◽  
Caterina Beatrice Monti ◽  
Christian Salvatore ◽  
...  

Abstract Background We aimed to train and test a deep learning classifier to support the diagnosis of coronavirus disease 2019 (COVID-19) using chest x-ray (CXR) in a cohort of subjects from two hospitals in Lombardy, Italy. Methods We used for training and validation an ensemble of ten convolutional neural networks (CNNs) with mainly bedside CXRs of 250 COVID-19 and 250 non-COVID-19 subjects from two hospitals (Centres 1 and 2). We then tested this system on bedside CXRs of an independent group of 110 patients (74 COVID-19, 36 non-COVID-19) from one of the two hospitals. A retrospective reading was performed by two radiologists in the absence of any clinical information, with the aim of differentiating COVID-19 from non-COVID-19 patients. Real-time polymerase chain reaction served as the reference standard. Results At 10-fold cross-validation, our deep learning model classified COVID-19 and non-COVID-19 patients with 0.78 sensitivity (95% confidence interval [CI] 0.74–0.81), 0.82 specificity (95% CI 0.78–0.85), and 0.89 area under the curve (AUC) (95% CI 0.86–0.91). On the independent dataset, deep learning showed 0.80 sensitivity (95% CI 0.72–0.86) (59/74), 0.81 specificity (95% CI 0.73–0.87) (29/36), and 0.81 AUC (95% CI 0.73–0.87). The radiologists' reading obtained 0.63 sensitivity (95% CI 0.52–0.74) and 0.78 specificity (95% CI 0.61–0.90) in Centre 1, and 0.64 sensitivity (95% CI 0.52–0.74) and 0.86 specificity (95% CI 0.71–0.95) in Centre 2. Conclusions This preliminary experience based on ten CNNs trained on a limited training dataset shows the potential of deep learning for COVID-19 diagnosis. The tool is being trained with new CXRs to further increase its performance.
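
The snippet below sketches, under assumed inputs, how an ensemble of trained CNNs can be averaged at prediction time and evaluated with the sensitivity, specificity, and AUC reported here; it is not the authors' pipeline.

```python
# Hedged sketch: averaging predicted probabilities across an ensemble of CNNs
# and computing sensitivity, specificity, and AUC. `models`, x_test, and
# y_test are assumed to exist.
import numpy as np
from sklearn.metrics import roc_auc_score, confusion_matrix

def ensemble_predict(models, x):
    """Mean of member probabilities; members are any trained Keras models."""
    probs = np.stack([m.predict(x, verbose=0).ravel() for m in models])
    return probs.mean(axis=0)

def evaluate(y_true, y_prob, threshold=0.5):
    y_pred = (y_prob >= threshold).astype(int)
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    auc = roc_auc_score(y_true, y_prob)
    return sensitivity, specificity, auc

# sens, spec, auc = evaluate(y_test, ensemble_predict(cnn_members, x_test))
```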


Mathematics ◽  
2021 ◽  
Vol 9 (9) ◽  
pp. 1002
Author(s):  
Mohammad Khishe ◽  
Fabio Caraffini ◽  
Stefan Kuhn

This article proposes a framework that automatically designs classifiers for the early detection of COVID-19 from chest X-ray images. To do this, our approach repeatedly makes use of a heuristic for optimisation to efficiently find the best combination of the hyperparameters of a convolutional deep learning model. The framework starts with optimising a basic convolutional neural network which represents the starting point for the evolution process. Subsequently, at most two additional convolutional layers are added, at a time, to the previous convolutional structure as a result of a further optimisation phase. Each performed phase maximises the accuracy of the system, thus requiring training and assessment of the new model, which gets gradually deeper, with relevant COVID-19 chest X-ray images. This iterative process ends when no improvement, in terms of accuracy, is recorded. Hence, the proposed method evolves the most performing network with the minimum number of convolutional layers. In this light, we simultaneously achieve high accuracy while minimising the presence of redundant layers to guarantee a fast but reliable model. Our results show that the proposed implementation of such a framework achieves accuracy up to 99.11%, thus being particularly suitable for the early detection of COVID-19.
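
A compact sketch of the grow-until-no-improvement loop described above; build_and_train() is a hypothetical helper standing in for the paper's heuristic optimisation of each candidate network.

```python
# Hedged sketch of the iterative architecture-growth loop: start from a small
# CNN, grow by up to two convolutional layers per step while validation
# accuracy keeps improving, and stop at the first non-improving step.
def evolve_architecture(build_and_train, step=2, max_layers=16):
    """build_and_train(n_conv_layers) -> validation accuracy (hypothetical)."""
    n_layers = 2                          # assumed size of the starting CNN
    best_acc = build_and_train(n_layers)
    while n_layers + step <= max_layers:
        acc = build_and_train(n_layers + step)   # grow by up to two conv layers
        if acc <= best_acc:                      # no accuracy gain: stop
            break
        best_acc, n_layers = acc, n_layers + step
    return n_layers, best_acc
```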


Symmetry ◽  
2020 ◽  
Vol 12 (7) ◽  
pp. 1146 ◽  
Author(s):  
Ahmed T. Sahlol ◽  
Mohamed Abd Elaziz ◽  
Amani Tariq Jamal ◽  
Robertas Damaševičius ◽  
Osama Farouk Hassan

Tuberculosis (TB) is an infectious disease that generally attacks the lungs and causes death for millions of people annually. Chest radiography and deep-learning-based image segmentation techniques can be utilized for TB diagnostics. Convolutional Neural Networks (CNNs) have shown advantages in medical image recognition applications as powerful models to extract informative features from images. Here, we present a novel hybrid method for efficient classification of chest X-ray images. First, the features are extracted from chest X-ray images using MobileNet, a CNN model previously trained on the ImageNet dataset. Then, to determine which of these features are the most relevant, we apply the Artificial Ecosystem-based Optimization (AEO) algorithm as a feature selector. The proposed method is applied to two public benchmark datasets (Shenzhen and Dataset 2) and achieves high performance with reduced computational time. It successfully selected only the best 25 and 19 features (for Shenzhen and Dataset 2, respectively) out of about 50,000 features extracted with MobileNet, while improving the classification accuracy (90.2% for the Shenzhen dataset and 94.1% for Dataset 2). The proposed approach outperforms other deep learning methods, and its results are the best among recently published works on both datasets.
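
A minimal sketch of the two-stage pipeline, assuming MobileNet as a frozen feature extractor; a mutual-information selector stands in for the Artificial Ecosystem-based Optimization (AEO) step, which is not available in standard libraries.

```python
# Hedged sketch: MobileNet as a frozen feature extractor, followed by feature
# selection and a lightweight classifier. SelectKBest replaces AEO here.
import numpy as np
from tensorflow.keras.applications import MobileNet
from tensorflow.keras.applications.mobilenet import preprocess_input
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.neighbors import KNeighborsClassifier

extractor = MobileNet(weights="imagenet", include_top=False,
                      input_shape=(224, 224, 3))

def extract_features(images: np.ndarray) -> np.ndarray:
    """Flatten MobileNet feature maps into roughly 50,000-dimensional vectors."""
    feats = extractor.predict(preprocess_input(images.astype("float32")),
                              verbose=0)
    return feats.reshape(len(images), -1)

# X_train, y_train are chest X-ray arrays and TB/normal labels (assumed given):
# features = extract_features(X_train)
# selector = SelectKBest(mutual_info_classif, k=25)   # 25 features, as for Shenzhen
# selected = selector.fit_transform(features, y_train)
# clf = KNeighborsClassifier().fit(selected, y_train)
```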

