COVID-19 Deep Learning Prediction Model Using Publicly Available Radiologist-Adjudicated Chest X-Ray Images as Training Data: Preliminary Findings

2020 ◽  
Vol 2020 ◽  
pp. 1-7
Author(s):  
Mohd Zulfaezal Che Azemin ◽  
Radhiana Hassan ◽  
Mohd Izzuddin Mohd Tamrin ◽  
Mohd Adli Md Ali

The key component in deep learning research is the availability of training data sets. With a limited number of publicly available COVID-19 chest X-ray images, the generalization and robustness of deep learning models developed from these images to detect COVID-19 cases are questionable. We aimed to use thousands of readily available chest radiograph images with clinical findings associated with COVID-19 as a training data set, mutually exclusive from the images with confirmed COVID-19 cases, which were used as the testing data set. We used a deep learning model based on the ResNet-101 convolutional neural network architecture, which was pretrained to recognize objects from a million images and then retrained to detect abnormality in chest X-ray images. The performance of the model in terms of area under the receiver operating characteristic curve, sensitivity, specificity, and accuracy was 0.82, 77.3%, 71.8%, and 71.9%, respectively. The strength of this study lies in the use of labels that have a strong clinical association with COVID-19 cases and the use of mutually exclusive publicly available data for training, validation, and testing.
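As a rough illustration of the transfer-learning setup described above (an ImageNet-pretrained ResNet-101 whose final layer is replaced for abnormal/normal chest X-ray classification), a minimal PyTorch sketch might look as follows. The directory layout, epoch count, and hyperparameters are placeholders rather than the authors' settings, and the snippet assumes torchvision >= 0.13.

```python
# Minimal sketch (not the authors' exact pipeline): fine-tune an ImageNet-pretrained
# ResNet-101 as a binary abnormal/normal chest X-ray classifier.
import torch
import torch.nn as nn
from torchvision import models, datasets, transforms
from torch.utils.data import DataLoader

device = "cuda" if torch.cuda.is_available() else "cpu"

# ImageNet preprocessing so the pretrained weights see familiar statistics.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.Grayscale(num_output_channels=3),  # chest X-rays are single-channel
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

# Hypothetical directory name; replace with the actual training data location.
train_ds = datasets.ImageFolder("data/train", transform=preprocess)
train_loader = DataLoader(train_ds, batch_size=32, shuffle=True)

# Load ResNet-101 pretrained on ImageNet and replace the classifier head.
model = models.resnet101(weights="IMAGENET1K_V1")
model.fc = nn.Linear(model.fc.in_features, 2)  # abnormal vs. normal
model = model.to(device)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

model.train()
for epoch in range(5):  # epoch count is illustrative only
    for images, labels in train_loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```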

2020 ◽  
Author(s):  
Huseyin Yaşar ◽  
Murat Ceylan

Abstract At the end of 2019, a new type of virus belonging to the Coronaviridae family emerged, and it is considered to be of zoonotic origin. The virus first affected China and then spread worldwide. Pneumonia develops in patients with severe symptoms of COVID-19. Many studies in the literature have demonstrated the effects of the disease-induced pneumonia on the lungs with the help of chest X-ray imaging. In this study, which aims at early diagnosis of COVID-19 using X-ray images, a deep learning approach, a state-of-the-art artificial intelligence method, was used, and automatic classification of images was performed with Convolutional Neural Networks (CNN). The first training-test data set used in the study contained a total of 230 abnormal and 80 normal X-ray images, while the second training-test data set contained 476 X-ray images, of which 150 were abnormal and 326 normal. Thus, classification results are provided for two data sets, one containing predominantly abnormal images and the other predominantly normal images. In the study, a 23-layer CNN architecture was developed. Results were obtained by using the chest X-ray images directly in the training-test procedures and by using the sub-band images obtained by applying the Dual-Tree Complex Wavelet Transform (DT-CWT) to these images. The same experiments were repeated using images obtained by applying the Local Binary Pattern (LBP) to the chest X-ray images. In addition, a new result-generation algorithm was proposed to combine the experimental results and improve the overall performance. The trainings were carried out using the k-fold cross-validation method, with k chosen as 23. Considering the highest results of the tests performed in the study, the values of sensitivity, specificity, accuracy, and AUC for the first training-test data set were 1, 1, 0.9913, and 0.9996, while for the second training-test data set they were 1, 0.9969, 0.9958, and 0.9996, respectively. Considering the average highest results of the experiments, the values of sensitivity, specificity, accuracy, and AUC for the first training-test data set were 0.9933, 0.9725, 0.9843, and 0.9988, while for the second training-test data set they were 0.9813, 0.9908, 0.9857, and 0.9983, respectively.
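Two of the building blocks mentioned above, LBP preprocessing and 23-fold cross-validation, can be sketched as follows. This is an illustrative outline only: the 23-layer CNN and the DT-CWT branch are omitted, and random arrays stand in for the X-ray images.

```python
# Illustrative sketch: Local Binary Pattern (LBP) maps as alternative CNN inputs,
# plus 23-fold cross-validation as used in the study. Random arrays are placeholders.
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)
images = rng.random((310, 128, 128))        # placeholder for 230 abnormal + 80 normal X-rays
labels = np.array([1] * 230 + [0] * 80)     # 1 = abnormal, 0 = normal

# Uniform LBP map of each image.
lbp_images = np.stack([
    local_binary_pattern(img, P=8, R=1, method="uniform") for img in images
])

# 23-fold cross-validation (k chosen as 23); each fold would train a fresh CNN on the
# training indices and evaluate on the held-out indices.
kfold = KFold(n_splits=23, shuffle=True, random_state=0)
for fold, (train_idx, test_idx) in enumerate(kfold.split(lbp_images)):
    x_train, x_test = lbp_images[train_idx], lbp_images[test_idx]
    y_train, y_test = labels[train_idx], labels[test_idx]
    # ... build and train the CNN for this fold here ...
```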


Author(s):  
Sanhita Basu ◽  
Sushmita Mitra ◽  
Nilanjan Saha

Abstract With the ever-increasing demand to screen millions of prospective "novel coronavirus" or COVID-19 cases, and due to the emergence of high false-negative rates in the commonly used PCR tests, the need for an alternative, simple screening mechanism for COVID-19 using radiological images (such as chest X-rays) assumes importance. In this scenario, machine learning (ML) and deep learning (DL) offer fast, automated, effective strategies to detect abnormalities and extract key features of the altered lung parenchyma, which may be related to specific signatures of the COVID-19 virus. However, the available COVID-19 datasets are inadequate to train deep neural networks. Therefore, we propose a new concept called domain extension transfer learning (DETL). We employ DETL, with a pre-trained deep convolutional neural network, on a related large chest X-ray dataset that is tuned to classify between four classes, viz. normal, other_disease, pneumonia, and COVID-19. A 5-fold cross-validation is performed to estimate the feasibility of using chest X-rays to diagnose COVID-19. The initial results show promise, with the possibility of replication on bigger and more diverse data sets. The overall accuracy was measured as 95.3% ± 0.02. In order to assess the transparency of COVID-19 detection, we employed the concept of Gradient-weighted Class Activation Mapping (Grad-CAM) to detect the regions where the model paid more attention during classification. This was found to strongly correlate with clinical findings, as validated by experts.
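A generic Grad-CAM sketch in PyTorch illustrates the kind of class-activation visualization the abstract refers to. It uses a pretrained ResNet-18 and a random input tensor as stand-ins, not the authors' DETL model.

```python
# Rough Grad-CAM sketch: capture the last conv block's activations and gradients via
# hooks, weight the feature maps by pooled gradients, then ReLU and normalize.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights="IMAGENET1K_V1").eval()
target_layer = model.layer4[-1]            # last convolutional block

activations, gradients = {}, {}

def fwd_hook(module, inputs, output):
    activations["value"] = output.detach()

def bwd_hook(module, grad_input, grad_output):
    gradients["value"] = grad_output[0].detach()

target_layer.register_forward_hook(fwd_hook)
target_layer.register_full_backward_hook(bwd_hook)

x = torch.rand(1, 3, 224, 224)             # placeholder for a preprocessed chest X-ray
scores = model(x)
class_idx = scores.argmax(dim=1).item()
scores[0, class_idx].backward()            # gradients of the predicted class score

# Weight each feature map by the mean of its gradients, then ReLU and normalize.
weights = gradients["value"].mean(dim=(2, 3), keepdim=True)
cam = F.relu((weights * activations["value"]).sum(dim=1, keepdim=True))
cam = F.interpolate(cam, size=x.shape[2:], mode="bilinear", align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)   # heatmap in [0, 1]
```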


Author(s):  
Debaditya Shome ◽  
T. Kar ◽  
Sachi Nandan Mohanty ◽  
Prayag Tiwari ◽  
Khan Muhammad ◽  
...  

In the recent pandemic, accurate and rapid testing of patients remained a critical task in the diagnosis and control of COVID-19 disease spread in the healthcare industry. Because of the sudden increase in cases, most countries have faced scarcity and a low rate of testing. Chest X-rays have been shown in the literature to be a potential source of testing for COVID-19 patients, but manually checking X-ray reports is time-consuming and error-prone. Considering these limitations and the advancements in data science, we proposed a Vision Transformer-based deep learning pipeline for COVID-19 detection from chest X-ray imaging. Due to the lack of large data sets, we collected data from three open-source data sets of chest X-ray images and aggregated them to form a 30K-image data set, which is, to our knowledge, the largest publicly available collection of chest X-ray images in this domain. Our proposed transformer model effectively differentiates COVID-19 from normal chest X-rays with an accuracy of 98% and an AUC score of 99% in the binary classification task. It distinguishes among COVID-19, normal, and pneumonia patients' X-rays with an accuracy of 92% and an AUC score of 98% in the multi-class classification task. For evaluation on our data set, we fine-tuned some of the widely used models in the literature, namely EfficientNetB0, InceptionV3, ResNet50, MobileNetV3, Xception, and DenseNet-121, as baselines. Our proposed transformer model outperformed them in terms of all metrics. In addition, a Grad-CAM-based visualization is created that makes our approach interpretable by radiologists and can be used to monitor the progression of the disease in the affected lungs, assisting healthcare providers.
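A minimal Vision Transformer fine-tuning sketch, loosely in the spirit of the pipeline described above: it assumes the timm library's ViT-B/16 as the backbone (not the authors' architecture), and the data loader, class ordering, and hyperparameters are placeholders.

```python
# Fine-tune a pretrained ViT-B/16 with a fresh 3-way head: COVID-19 / normal / pneumonia.
import timm
import torch
import torch.nn as nn

device = "cuda" if torch.cuda.is_available() else "cpu"

model = timm.create_model("vit_base_patch16_224", pretrained=True, num_classes=3)
model = model.to(device)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-5, weight_decay=0.01)

def train_one_epoch(loader):
    """One pass over a DataLoader yielding (image_batch, label_batch)."""
    model.train()
    for images, labels in loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```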


2020 ◽  
Author(s):  
Mundher Taresh ◽  
Ningbo Zhu ◽  
Talal Ahmed Ali Ali

Abstract Novel coronavirus pneumonia (COVID-19) is a contagious disease that has already caused thousands of deaths and infected millions of people worldwide. Thus, any technology that allows fast detection of COVID-19 infection with high accuracy can offer help to healthcare professionals. This study explores the effectiveness of artificial intelligence (AI) in the rapid and reliable detection of COVID-19 based on chest X-ray imaging. Reliable pre-trained deep learning algorithms were applied to achieve automatic detection of COVID-19-induced pneumonia from digital chest X-ray images. Moreover, the study aims to evaluate the performance of advanced neural architectures proposed for the classification of medical images over recent years. The data set used in the experiments involves 274 COVID-19 cases, 380 viral pneumonia cases, and 380 healthy cases, collected from X-ray images available in public medical repositories. The confusion matrix provided the basis for evaluating the models after classification. Furthermore, the open-source library PyCM was used to compute the statistical parameters. The study revealed the superiority of the VGG16 model over the other models applied in this research, performing best in terms of both overall and per-class scores. According to the results, deep learning with X-ray imaging is useful in capturing critical biological markers associated with COVID-19 infection, and the technique can help physicians diagnose COVID-19 infection. Meanwhile, the high accuracy of this computer-aided diagnostic tool can significantly improve the speed and accuracy of COVID-19 diagnosis.
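A rough sketch of this kind of pipeline, assuming a frozen ImageNet-pretrained VGG16 base with a new three-class head and PyCM for the statistics; the head design, hyperparameters, and label lists are illustrative placeholders rather than the study's configuration.

```python
# VGG16 base pretrained on ImageNet, new 3-class head (COVID-19 / viral pneumonia /
# healthy), evaluated with the PyCM library.
from tensorflow.keras.applications import VGG16
from tensorflow.keras import layers, models
from pycm import ConfusionMatrix

# Frozen convolutional base, trainable classification head.
base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False
model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(256, activation="relu"),
    layers.Dense(3, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(x_train, y_train, epochs=10, validation_split=0.1)  # with real data

# PyCM turns predictions into overall and per-class statistics.
y_true = [0, 1, 2, 2, 1]                   # placeholder labels
y_pred = [0, 1, 2, 1, 1]                   # placeholder predictions
cm = ConfusionMatrix(actual_vector=y_true, predict_vector=y_pred)
print(cm.Overall_ACC)                      # overall score
print(cm.TPR)                              # per-class sensitivity, keyed by class label
```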


Author(s):  
Debaraj Rana ◽  
Swarna Prabha Jena ◽  
Subrat Kumar Pradhan

Coronavirus disease (COVID-19), declared a global pandemic, has severely affected human health across the globe. More than 150 million people worldwide have been affected by the novel coronavirus, and the number is rising rapidly. In the health sector in particular, many hospitals are not equipped with diagnostic systems that can detect the disease accurately and quickly. Chest X-ray (CXR) imaging is fast and cost-effective; it can be used to detect COVID-19, and even the severity of the disease can be determined from CXR images. Many researchers are currently focusing on deep learning methods for accurate and rapid detection of COVID-19 that can help radiologists evaluate the disease. In this review, deep learning methodologies proposed in the literature are discussed along with their experimental data sets. This review could help in developing modified architectures that further improve diagnosis in terms of computational complexity and time consumption.


2020 ◽  
pp. 666-679 ◽  
Author(s):  
Xuhong Zhang ◽  
Toby C. Cornish ◽  
Lin Yang ◽  
Tellen D. Bennett ◽  
Debashis Ghosh ◽  
...  

PURPOSE We focus on the problem of scarcity of annotated training data for nucleus recognition in Ki-67 immunohistochemistry (IHC)–stained pancreatic neuroendocrine tumor (NET) images. We hypothesize that deep learning–based domain adaptation is helpful for nucleus recognition when image annotations are unavailable in target data sets. METHODS We considered 2 different institutional pancreatic NET data sets: one (ie, source) containing 38 cases with 114 annotated images and the other (ie, target) containing 72 cases with 20 annotated images. The gold standards were manually annotated by 1 pathologist. We developed a novel deep learning–based domain adaptation framework to count different types of nuclei (ie, immunopositive tumor, immunonegative tumor, nontumor nuclei). We compared the proposed method with several recent fully supervised deep learning models, such as fully convolutional network-8s (FCN-8s), U-Net, fully convolutional regression network A (FCRN-A), FCRN-B, and fully residual convolutional network (FRCN). We also evaluated the proposed method by learning with a mixture of converted source images and real target annotations. RESULTS Our method achieved F1 scores of 81.3% and 62.3% for nucleus detection and classification in the target data set, respectively. Our method outperformed FCN-8s (53.6% and 43.6% for nucleus detection and classification, respectively), U-Net (61.1% and 47.6%), FCRN-A (63.4% and 55.8%), and FCRN-B (68.2% and 60.6%) in terms of F1 score and was competitive with FRCN (81.7% and 70.7%). In addition, learning with a mixture of converted source images and only a small set of real target labels could further boost performance. CONCLUSION This study demonstrates that deep learning–based domain adaptation is helpful for nucleus recognition in Ki-67 IHC–stained images when target data annotations are not available. It would improve the applicability of deep learning models designed for downstream supervised learning tasks on different data sets.
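As a generic illustration of unsupervised domain adaptation (not the authors' framework), the sketch below uses a gradient reversal layer, one common way to encourage domain-invariant features when target annotations are unavailable.

```python
# Adversarial domain adaptation with a gradient reversal layer (GRL) in PyTorch.
# The tiny backbone and head sizes are placeholders for illustration only.
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; flips the sign of gradients on the backward pass."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None

class DomainAdaptiveNet(nn.Module):
    def __init__(self, n_classes=3, lam=1.0):
        super().__init__()
        self.lam = lam
        # Tiny stand-in feature extractor; a real model would be much deeper.
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.classifier = nn.Linear(16, n_classes)      # nucleus classes
        self.domain_head = nn.Linear(16, 2)             # source vs. target

    def forward(self, x):
        feats = self.features(x)
        class_logits = self.classifier(feats)
        domain_logits = self.domain_head(GradReverse.apply(feats, self.lam))
        return class_logits, domain_logits

# The classification loss uses labeled source images only; the domain loss uses both
# domains, and the GRL pushes the shared features toward domain invariance.
```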


2021 ◽  
Vol 10 (2) ◽  
pp. 254
Author(s):  
Che-Yu Su ◽  
Tsung-Yu Tsai ◽  
Cheng-Yen Tseng ◽  
Keng-Hao Liu ◽  
Chi-Wei Lee

Hollow organ perforation can precipitate a life-threatening emergency due to peritonitis followed by fulminant sepsis and fatal circulatory collapse. Pneumoperitoneum is typically detected as subphrenic free air on frontal chest X-ray images; however, treatment is reliant on accurate interpretation of radiographs in a timely manner. Unfortunately, it is not uncommon to have misdiagnoses made by emergency physicians who have insufficient experience or who are too busy and overloaded by multitasking. It is essential to develop an automated method for reviewing frontal chest X-ray images to alert emergency physicians in a timely manner about the life-threatening condition of hollow organ perforation that mandates an immediate second look. In this study, a deep learning-based approach making use of convolutional neural networks for the detection of subphrenic free air is proposed. A total of 667 chest X-ray images were collected at a local hospital, where 587 images (positive/negative: 267/400) were used for training and 80 images (40/40) for testing. This method achieved 0.875, 0.825, and 0.889 in sensitivity, specificity, and AUC score, respectively. It may provide a sensitive adjunctive screening tool to detect pneumoperitoneum on images read by emergency physicians who have insufficient clinical experience or who are too busy and overloaded by multitasking.
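The reported metrics can be computed from a binary classifier's outputs with scikit-learn, as in the sketch below; the ground-truth and score arrays are synthetic placeholders standing in for the 80-image test set, not the study's data.

```python
# Sensitivity, specificity, and AUC for a binary subphrenic-free-air classifier.
import numpy as np
from sklearn.metrics import roc_auc_score, confusion_matrix

y_true = np.array([1] * 40 + [0] * 40)                        # placeholder ground truth
rng = np.random.default_rng(0)
y_prob = np.clip(y_true * 0.6 + rng.random(80) * 0.4, 0, 1)   # placeholder scores

y_pred = (y_prob >= 0.5).astype(int)
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()

sensitivity = tp / (tp + fn)       # true positive rate
specificity = tn / (tn + fp)       # true negative rate
auc = roc_auc_score(y_true, y_prob)
print(f"sensitivity={sensitivity:.3f} specificity={specificity:.3f} AUC={auc:.3f}")
```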


Author(s):  
Jonathan Stubblefield ◽  
Mitchell Hervert ◽  
Jason Causey ◽  
Jake Qualls ◽  
Wei Dong ◽  
...  

Abstract One of the challenges with urgent evaluation of patients with acute respiratory distress syndrome (ARDS) in the emergency room (ER) is distinguishing between cardiac and infectious etiologies for their pulmonary findings. We evaluated ER patient classification for cardiac and infection causes with clinical data and chest X-ray image data. We show that a deep-learning model trained with an external image data set can be used to extract image features and improve the classification accuracy of a data set that does not contain enough image data to train a deep-learning model. We also conducted clinical feature importance analysis and identified the most important clinical features for ER patient classification. This model can be upgraded to include a SARS-CoV-2-specific classification with COVID-19 patient data. The current model is publicly available with an interface at the web link: http://nbttranslationalresearch.org/.

Data statement: The clinical data and chest X-ray image data for this study were collected and prepared by the residents and researchers of the Joint Translational Research Lab of Arkansas State University (A-State) and St. Bernards Medical Center (SBMC) Internal Medicine Residency Program. As data collection is ongoing for stage II of the project's clinical testing, raw data are not currently available for public sharing.

Ethics: This study was approved by the St. Bernards Medical Center's Institutional Review Board (IRB).
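The feature-fusion idea described in the abstract above, a CNN pretrained on external images used as a fixed feature extractor whose outputs are concatenated with tabular clinical features before a classical classifier, might be sketched as follows; the backbone choice (ResNet-18), feature counts, and random arrays are assumptions for illustration, not the study's pipeline.

```python
# Pretrained CNN image features + tabular clinical features -> classical classifier.
import numpy as np
import torch
import torch.nn as nn
from torchvision import models
from sklearn.ensemble import RandomForestClassifier

# Pretrained ResNet-18 with the classification head removed -> 512-d image features.
backbone = models.resnet18(weights="IMAGENET1K_V1")
backbone.fc = nn.Identity()
backbone.eval()

def image_features(batch):
    """batch: float tensor of shape (N, 3, 224, 224), already preprocessed."""
    with torch.no_grad():
        return backbone(batch).numpy()

n = 16                                              # placeholder patient count
xray_batch = torch.rand(n, 3, 224, 224)             # placeholder images
clinical = np.random.rand(n, 10)                    # placeholder clinical features
labels = np.random.randint(0, 2, size=n)            # 0 = cardiac, 1 = infection

features = np.concatenate([image_features(xray_batch), clinical], axis=1)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(features, labels)
# clf.feature_importances_ gives the kind of feature-importance view mentioned above.
```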


Author(s):  
Ishtiaque Ahmed ◽  
Manan Darda ◽  
Neha Tikyani ◽  
Rachit Agrawal ◽  
...  

The COVID-19 pandemic has caused large-scale outbreaks in more than 150 countries worldwide, causing massive damage to the livelihoods of many people. The capacity to identify contaminated patients early and provide appropriate treatment is quite possibly the primary step in the battle against COVID-19. One of the quickest ways to diagnose patients is to use radiography and radiology images to detect the disease. Early studies have shown that chest X-rays of patients infected with COVID-19 have unique abnormalities. To identify COVID-19 patients from chest X-ray images, we used various deep learning models based on previous studies. We first compiled a data set of 2,815 chest radiographs from public sources. The model produces reliable and stable results with an accuracy of 91.6%, a positive predictive value of 80%, a negative predictive value of 100%, a specificity of 87.50%, and a sensitivity of 100%. It is observed that the CNN-based architecture can diagnose COVID-19 disease. The outcomes can be further improved by increasing the data set size and by developing the CNN-based architecture used for training the model.
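The reported figures are all derived from confusion-matrix counts. A minimal reference for the definitions is given below, with illustrative counts chosen to be consistent with the stated percentages but not claimed to be the study's actual confusion matrix.

```python
# Confusion-matrix-derived metrics for a binary COVID-19 / non-COVID classifier.
tp, fp, tn, fn = 20, 5, 35, 0      # illustrative counts only

sensitivity = tp / (tp + fn)       # recall / true positive rate -> 1.00
specificity = tn / (tn + fp)       # -> 0.875
ppv = tp / (tp + fp)               # positive predictive value (precision) -> 0.80
npv = tn / (tn + fn)               # negative predictive value -> 1.00
accuracy = (tp + tn) / (tp + fp + tn + fn)   # -> ~0.916
print(sensitivity, specificity, ppv, npv, accuracy)
```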


Author(s):  
Meenakshi Srivastava

IoT-based communication between medical devices has encouraged the healthcare industry to use automated systems that provide effective insight from the massive amount of gathered data. AI and machine learning have played a major role in the design of such systems. Accuracy and validation are key concerns, since copious training data are required by a neural network (NN)-based deep learning model. This is hardly feasible in medical research, because the size of data sets is constrained by complexity and high-cost experiments. With only limited sample data available, the validation of an NN remains a concern. Predictions from an NN trained on a smaller data set cannot guarantee performance and can exhibit unstable behavior. Surrogate data-based validation of the NN can be viewed as a solution. In the current chapter, the classification of breast tissue data by an NN model is detailed. In the absence of a large data set, a surrogate data-based validation approach has been applied. The discussed approach can be applied to predictive modelling for applications described by small data sets.
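A generic sketch of surrogate-data-based validation for a small-sample classifier: cross-validated accuracy on the real labels is compared with a null distribution obtained from label-permuted surrogates. This illustrates the idea only; it is not the chapter's exact procedure, and the random arrays stand in for the breast tissue features.

```python
# Surrogate-data check: does the small-sample NN beat label-shuffled surrogates?
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.random((106, 9))                    # placeholder: ~106 samples, 9 features
y = rng.integers(0, 2, size=106)            # placeholder binary tissue classes

model = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
real_score = cross_val_score(model, X, y, cv=5).mean()

# Null distribution: repeat the evaluation on label-shuffled surrogate data sets.
null_scores = []
for _ in range(20):
    y_surrogate = rng.permutation(y)
    null_scores.append(cross_val_score(model, X, y_surrogate, cv=5).mean())

# If real_score clearly exceeds the surrogate distribution, the small-sample model is
# unlikely to be fitting noise alone.
p_like = np.mean(np.array(null_scores) >= real_score)
print(real_score, p_like)
```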

