COVID-19, Pneumonia and Other Disease Classification using Chest X-Ray Images

Author(s):  
Abhishek Panwar ◽  
Akshay Dagar ◽  
Vishal Pal ◽  
Vinod Kumar
2019 ◽  
Vol 75 ◽  
pp. 66-73 ◽  
Author(s):  
Han Liu ◽  
Lei Wang ◽  
Yandong Nan ◽  
Faguang Jin ◽  
Qi Wang ◽  
...  

2021 ◽  
Vol 192 ◽  
pp. 658-665
Author(s):  
Guy Caseneuve ◽  
Iren Valova ◽  
Nathan LeBlanc ◽  
Melanie Thibodeau

2021 ◽  
Vol 8 (1) ◽  
Author(s):  
Alexandros Karargyris ◽  
Satyananda Kashyap ◽  
Ismini Lourentzou ◽  
Joy T. Wu ◽  
Arjun Sharma ◽  
...  

Abstract
We developed a rich dataset of Chest X-Ray (CXR) images to assist investigators in artificial intelligence. The data were collected using an eye-tracking system while a radiologist reviewed and reported on 1,083 CXR images. The dataset contains the following aligned data: CXR image, transcribed radiology report text, radiologist’s dictation audio, and eye gaze coordinate data. We hope this dataset can contribute to various areas of research, particularly explainable and multimodal deep learning/machine learning methods. Furthermore, investigators in disease classification and localization, automated radiology report generation, and human-machine interaction can benefit from these data. We report deep learning experiments that use the attention maps produced by the eye gaze data to show the potential utility of this dataset.
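One plausible use of such aligned gaze data is converting fixations into spatial attention maps that can supervise a model. A minimal NumPy sketch, assuming hypothetical (row, col, dwell-time) fixation tuples and a 64×64 grid; these names and sizes are illustrative assumptions, not the dataset's actual schema:

```python
# Hypothetical sketch: turning eye-gaze fixations into an attention map.
# The (row, col, duration) tuple format and grid size are assumptions.
import numpy as np

def gaze_to_attention_map(fixations, shape=(64, 64), sigma=3.0):
    """Accumulate fixations into a dwell-time-weighted, smoothed heatmap."""
    heat = np.zeros(shape, dtype=float)
    for r, c, dur in fixations:
        heat[int(r), int(c)] += dur  # weight each fixation by dwell time
    # Smooth with a separable 1-D Gaussian so nearby pixels share credit.
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    kernel = np.exp(-x**2 / (2 * sigma**2))
    kernel /= kernel.sum()
    heat = np.apply_along_axis(lambda m: np.convolve(m, kernel, mode="same"), 0, heat)
    heat = np.apply_along_axis(lambda m: np.convolve(m, kernel, mode="same"), 1, heat)
    return heat / heat.max() if heat.max() > 0 else heat

fixations = [(20, 30, 0.8), (22, 31, 1.2), (45, 10, 0.5)]
att = gaze_to_attention_map(fixations)
```

The resulting map is normalized to [0, 1] and could serve as a soft target for a model's attention, one of the supervision styles the abstract alludes to.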


2021 ◽  
Vol 7 ◽  
pp. e541
Author(s):  
Jing Xu ◽  
Hui Li ◽  
Xiu Li

The chest X-ray is one of the most common radiological examination types for the diagnosis of chest diseases. Nowadays, automatic classification of radiological images is widely used in clinical diagnosis and treatment planning. However, each disease has its own characteristic receptive-field region of response, which is the main challenge for chest disease classification tasks. Besides, the imbalance of sample categories further increases the difficulty of the task. To solve these problems, we propose a new multi-label chest disease image classification scheme based on a multi-scale attention network. In this scheme, multi-scale information is iteratively fused to focus on regions with a high probability of disease, effectively mining more meaningful information from the data. A novel loss function is also designed to improve the rationality of visual perception and multi-label image classification; it enforces consistency of the attention regions before and after image transformation. Comprehensive experiments were carried out on the ChestX-ray14 and CheXpert datasets, which contain over 100,000 frontal-view and over 200,000 frontal- and lateral-view X-ray images, respectively, labeled with 14 diseases. The mean AUROC is 0.850 and 0.815 on the two datasets, respectively, achieving state-of-the-art results and verifying the effectiveness of this method for chest X-ray image classification. This study has important practical significance for using AI algorithms to assist radiologists in improving work efficiency and diagnostic accuracy.
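The transformation-consistency idea can be illustrated with a minimal NumPy sketch, using a horizontal flip as the transformation and mean squared error as the penalty; the paper's actual loss may take a different form:

```python
# Illustrative sketch of an attention-consistency term, assuming a
# horizontal flip and an MSE penalty; not the paper's exact loss.
import numpy as np

def attention_consistency_loss(att_original, att_from_flipped):
    """Penalize disagreement between attention computed on a flipped
    image and the flipped attention of the original image."""
    realigned = np.flip(att_from_flipped, axis=1)  # undo the flip
    return float(np.mean((att_original - realigned) ** 2))

att = np.arange(12.0).reshape(3, 4)    # attention on the original image
consistent = np.flip(att, axis=1)      # ideal attention on the flipped image
loss = attention_consistency_loss(att, consistent)
```

When the two attention maps agree under the transformation the loss is zero, which is the consistency the scheme enforces during training.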


2021 ◽  
Vol 30 ◽  
pp. 2476-2487
Author(s):  
Qingji Guan ◽  
Yaping Huang ◽  
Yawei Luo ◽  
Ping Liu ◽  
Mingliang Xu ◽  
...  

2021 ◽  
Author(s):  
Soumava Dey ◽  
Gunther Correia Bacellar ◽  
Mallikarjuna Basappa Chandrappa ◽  
Raj Kulkarni

The rise of the coronavirus disease 2019 (COVID-19) pandemic has made it necessary to improve existing medical screening and clinical management of this disease. While COVID-19 patients exhibit a variety of symptoms, the major ones are fever, cough, and fatigue. Since these symptoms also appear in pneumonia patients, they complicate COVID-19 detection, especially during the flu season. Early studies identified abnormalities in chest X-ray images of COVID-19 infected patients that could be beneficial for disease diagnosis. Therefore, chest X-ray image-based disease classification has emerged as an alternative to aid medical diagnosis. However, manual detection of COVID-19 from a set of chest X-ray images comprising both COVID-19 and pneumonia cases is cumbersome and prone to human error. Thus, artificial intelligence techniques powered by deep learning algorithms, which learn from radiography images and predict the presence of COVID-19, have the potential to enhance the current diagnosis process. To this end, we implemented a set of pre-trained deep learning models, such as ResNet, VGG, Inception, and EfficientNet, alongside a computer vision AI system based on our own convolutional neural network (CNN) model: Deep Learning in Healthcare (DLH)-COVID. All of these CNN models are designed for image classification. We used a publicly available dataset of 6,432 images and further strengthened our model by tuning hyperparameters to provide better generalization during the model validation phase. Our final DLH-COVID model yielded the highest accuracy, 96%, in detecting COVID-19 from chest X-ray images when compared against images of both pneumonia-affected and healthy individuals.
Given how readily patients can obtain chest X-ray images, we also developed a web application (link: https://toad.li/xray) based on our model that lets users upload chest X-ray images and detect the presence of COVID-19 within a few seconds. Taken together, we introduce a state-of-the-art artificial-intelligence-based system for efficient COVID-19 detection and a user-friendly application that has the capacity to become a rapid COVID-19 diagnosis method in the near future.
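The final decision stage of such a three-way classifier (COVID-19 / pneumonia / normal) can be sketched as a softmax over class logits; the class order and the abstention threshold below are assumptions for illustration, not the published DLH-COVID configuration:

```python
# Hypothetical final-stage classifier head: softmax over three logits.
# CLASSES order and the 0.5 confidence threshold are assumptions.
import numpy as np

CLASSES = ("covid19", "pneumonia", "normal")

def softmax(logits):
    z = logits - np.max(logits)  # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()

def classify(logits, threshold=0.5):
    """Return the predicted label, or 'uncertain' below the threshold."""
    probs = softmax(np.asarray(logits, dtype=float))
    idx = int(np.argmax(probs))
    label = CLASSES[idx] if probs[idx] >= threshold else "uncertain"
    return label, probs
```

An abstention threshold like this is a common design choice in a web-facing screening tool, deferring low-confidence cases to a clinician rather than emitting a hard label.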


2021 ◽  
Vol 7 ◽  
pp. e738
Author(s):  
Mumtaz Ali ◽  
Riaz Ali

Conventionally, convolutional neural networks (CNNs) have been used to identify and detect thorax diseases in chest X-ray images. To identify thorax diseases, CNNs typically learn two types of information: disease-specific features and generic anatomical features. CNNs focus on disease-specific features while ignoring the rest of the anatomical features during operation. Current research offers no evidence on whether generic anatomical features improve or worsen the performance of CNNs for thorax disease classification. Consequently, this study investigates the relevance of generic anatomical features in boosting the performance of CNNs for thorax disease classification. We employ a dual-stream CNN model to learn anatomical features before training the model for thorax disease classification. The dual-stream technique is used to compel the model to learn structural information, because the initial layers of CNNs often learn edge and boundary features. As a result, a dual-stream model with minimal layers learns structural and anatomical features as a priority. To make the technique more comprehensive, we first train the model to identify gender and age, then classify thorax diseases using the information acquired; the model can detect gender and age only once it has learned the anatomical features. We also use Non-negative Matrix Factorization (NMF) and Contrast Limited Adaptive Histogram Equalization (CLAHE) to pre-process the training data, which suppresses disease-related information while amplifying general anatomical features, allowing the model to acquire anatomical features considerably faster. Finally, the model previously trained for gender and age detection is retrained for thorax disease classification using the original data. The proposed technique improves the performance of convolutional neural networks for thorax disease classification, according to experiments on the ChestX-ray14 dataset.
By visualizing the learned features, we can also see which parts of the image contribute most to gender, age, and a given thorax disease. The proposed study achieves two goals: first, it produces novel gender and age identification results on chest X-ray images that may be used in biometrics, forensics, and anthropology; second, it highlights the importance of general anatomical features in thorax disease classification. The proposed work also produces results competitive with the state of the art.
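The NMF step used for pre-processing can be illustrated with the classic Lee-Seung multiplicative updates; this is a generic sketch on a toy matrix, not the paper's preprocessing pipeline, which would factor flattened image batches:

```python
# Generic sketch of NMF via Lee-Seung multiplicative updates; a real
# pipeline would factor flattened image batches, not this toy matrix.
import numpy as np

def nmf(V, rank, iters=500, seed=0):
    """Approximate nonnegative V as W @ H with nonnegative factors."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, rank)) + 1e-3
    H = rng.random((rank, m)) + 1e-3
    eps = 1e-9  # guard against division by zero
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)  # update coefficients
        W *= (V @ H.T) / (W @ H @ H.T + eps)  # update basis components
    return W, H

# Toy rank-2 nonnegative matrix standing in for flattened image data.
V = np.outer([1.0, 2.0, 3.0], [1.0, 0.0, 2.0]) \
    + np.outer([0.0, 1.0, 1.0], [2.0, 1.0, 0.0])
W, H = nmf(V, rank=2)
```

Because the updates are multiplicative, W and H stay nonnegative throughout, so the learned basis components remain interpretable as additive image parts, which is the property that makes NMF attractive for isolating anatomical structure.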

