Gender and age detection assist convolutional neural networks in classification of thorax diseases

2021 · Vol. 7 · pp. e738
Author(s): Mumtaz Ali, Riaz Ali

Conventionally, convolutional neural networks (CNNs) have been used to identify and detect thorax diseases on chest X-ray images. To identify thorax diseases, CNNs typically learn two types of information: disease-specific features and generic anatomical features. During operation, CNNs focus on the disease-specific features and ignore the remaining anatomical features. Current research provides no evidence on whether generic anatomical features improve or worsen the performance of CNNs for thorax disease classification. This study therefore investigates the relevance of general anatomical features for boosting the performance of CNNs in thorax disease classification. We employ a dual-stream CNN model to learn anatomical features before training the model for thorax disease classification. Because the initial layers of CNNs often learn edge and boundary features, the dual-stream technique is used to compel the model to learn structural information; as a result, a dual-stream model with minimal layers learns structural and anatomical features as a priority. To make the technique more comprehensive, we first train the model to identify gender and age and then classify thorax diseases using the acquired information; the model can detect gender and age only after it has learned the anatomical features. We also use Non-negative Matrix Factorization (NMF) and Contrast Limited Adaptive Histogram Equalization (CLAHE) to pre-process the training data, which suppresses disease-related information while amplifying general anatomical features, allowing the model to acquire anatomical features considerably faster. Finally, the model previously trained for gender and age detection is retrained for thorax disease classification using the original data. Experiments on the ChestX-ray14 dataset show that the proposed technique increases the performance of CNNs for thorax disease classification. By visualizing the learned features, we can also see the image regions that contribute most to the prediction of gender, age, and a given thorax disease. The proposed study achieves two goals: first, it produces novel gender and age identification results on chest X-ray images that may be used in biometrics, forensics, and anthropology; second, it highlights the importance of general anatomical features in thorax disease classification. The proposed work also produces results that are competitive with the state of the art.
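
As a rough illustration of the two-stage idea described above (pre-train on gender and age, then retrain the same backbone for thorax disease classification), the sketch below uses a small Keras model. The layer sizes, head names, and variable names are illustrative assumptions, not the authors' actual dual-stream architecture.

```python
# Hedged sketch: pre-train a small CNN backbone on gender/age labels (ideally on
# CLAHE/NMF pre-processed images), then reuse the same backbone for 14-class thorax
# disease classification on the original images. All sizes/names are assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models

def small_backbone(input_shape=(224, 224, 1)):
    inputs = tf.keras.Input(shape=input_shape)
    x = layers.Conv2D(32, 3, activation="relu")(inputs)
    x = layers.MaxPooling2D()(x)
    x = layers.Conv2D(64, 3, activation="relu")(x)
    x = layers.GlobalAveragePooling2D()(x)
    return models.Model(inputs, x, name="backbone")

backbone = small_backbone()

# Stage 1: learn anatomy-related cues via gender (binary) and age (regression) heads.
gender_head = layers.Dense(1, activation="sigmoid", name="gender")(backbone.output)
age_head = layers.Dense(1, name="age")(backbone.output)
stage1 = models.Model(backbone.input, [gender_head, age_head])
stage1.compile(optimizer="adam",
               loss={"gender": "binary_crossentropy", "age": "mse"})
# stage1.fit(preprocessed_images, {"gender": gender_labels, "age": age_labels}, ...)

# Stage 2: reuse the pre-trained backbone for multi-label thorax disease classification.
disease_head = layers.Dense(14, activation="sigmoid", name="disease")(backbone.output)
stage2 = models.Model(backbone.input, disease_head)
stage2.compile(optimizer="adam", loss="binary_crossentropy")
# stage2.fit(original_images, disease_labels, ...)
```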

Author(s): Puneet Gupta

Pneumonia is a life-threatening infectious disease affecting one or both lungs in humans, commonly caused by the bacterium Streptococcus pneumoniae. One in three deaths in India is caused by pneumonia, as reported by the World Health Organization (WHO). Chest X-rays, which are used to diagnose pneumonia, require expert radiologists for evaluation. Developing an automatic system for detecting pneumonia would therefore help treat the disease without delay, particularly in remote areas. Owing to the success of deep learning algorithms in analyzing medical images, convolutional neural networks (CNNs) have gained much attention for disease classification. In addition, features learned by CNN models pre-trained on large-scale datasets are very useful in image classification tasks. In this work, we evaluate pre-trained CNN models used as feature extractors, followed by different classifiers, for the classification of normal and abnormal chest X-rays, and analytically determine the optimal CNN model for this purpose. The statistical results demonstrate that pre-trained CNN models employed with supervised classifier algorithms can be very beneficial for analyzing chest X-ray images, specifically for detecting pneumonia. In this project, transfer learning and a CNN model are used to detect whether a person has pneumonia from a chest X-ray.
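
The pipeline described above (a pre-trained CNN as a fixed feature extractor, followed by a conventional supervised classifier) can be sketched as follows; the choice of VGG16 and logistic regression, and the placeholder variable names, are illustrative assumptions rather than the paper's exact setup.

```python
# Hedged sketch: ImageNet-pretrained VGG16 as a fixed feature extractor, followed by
# a logistic-regression classifier for normal vs. pneumonia chest X-rays.
from tensorflow.keras.applications import VGG16
from tensorflow.keras.applications.vgg16 import preprocess_input
from sklearn.linear_model import LogisticRegression

# Backbone without its classification head; global average pooling yields one vector per image.
extractor = VGG16(weights="imagenet", include_top=False, pooling="avg")

def extract_features(images):
    """images: float array of shape (n, 224, 224, 3), X-rays replicated to 3 channels."""
    return extractor.predict(preprocess_input(images), verbose=0)

# X_train / X_test / y_train are hypothetical arrays of images and normal/pneumonia labels:
# clf = LogisticRegression(max_iter=1000).fit(extract_features(X_train), y_train)
# y_pred = clf.predict(extract_features(X_test))
```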


2019 · Vol. 38 (5) · pp. 1197-1206
Author(s): Hojjat Salehinejad, Errol Colak, Tim Dowdell, Joseph Barfett, Shahrokh Valaee

Proceedings · 2020 · Vol. 54 (1) · pp. 31
Author(s): Joaquim de Moura, Lucía Ramos, Plácido L. Vidal, Jorge Novo, Marcos Ortega

The new coronavirus disease (COVID-19) is caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). On 11 March 2020, the coronavirus outbreak was declared a global pandemic by the World Health Organization. In this context, chest X-ray imaging has become a remarkably powerful tool for identifying patients with COVID-19 infection at an early stage, when clinical symptoms may be non-specific or sparse. In this work, we propose a complete analysis of the separability of COVID-19 and pneumonia in chest X-ray images by means of convolutional neural networks. Satisfactory results were obtained, demonstrating the suitability of the proposed system for improving the efficiency of the medical screening process in healthcare systems.
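
A minimal sketch of the kind of binary CNN used to study the separability of COVID-19 and pneumonia in chest X-rays is shown below; the architecture, input size, label encoding, and directory layout are illustrative assumptions, not the authors' exact configuration.

```python
# Hedged sketch of a binary CNN separating COVID-19 from pneumonia chest X-rays.
import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(256, 256, 1)),
    layers.Conv2D(16, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.GlobalAveragePooling2D(),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),  # assumed encoding: 1 = COVID-19, 0 = pneumonia
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Hypothetical folder "cxr_train/" with "covid19/" and "pneumonia/" sub-directories:
# train_ds = tf.keras.utils.image_dataset_from_directory(
#     "cxr_train", image_size=(256, 256), color_mode="grayscale", label_mode="binary")
# model.fit(train_ds, epochs=10)
```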


Author(s): René Hosch, Lennard Kroll, Felix Nensa, Sven Koitka

Purpose: Detection and validation of the chest X-ray view position with convolutional neural networks, in order to improve meta-information for data cleaning within a hospital data infrastructure. Materials and Methods: We developed a convolutional neural network that automatically detects the anteroposterior (AP) or posteroanterior (PA) view position of a chest radiograph. We trained two different network architectures (a VGG variant and ResNet-34) on data published by the RSNA (26 684 radiographs, class distribution 46 % AP, 54 % PA) and validated them on a self-compiled dataset from the University Hospital Essen (4507 radiographs, class distribution 55 % PA, 45 % AP) labeled by a human reader. For visualization and a better understanding of the network predictions, a Grad-CAM was generated for each network decision. The network results were evaluated against the human reader labels using accuracy, the area under the curve (AUC), and the F1-score, and a final performance comparison between model predictions and DICOM labels was performed. Results: The ensemble models reached accuracies and F1-scores greater than 95 %, and their AUCs exceeded 0.99. The Grad-CAMs provide insight into which anatomical structures contributed to the networks' decisions; these structures are comparable to the ones a radiologist would use. Furthermore, the trained models were able to generalize over mislabeled examples, which was found by comparing the human reader labels to both the predicted labels and the DICOM labels. Conclusion: The results show that certain incorrectly entered meta-information of radiological images can be effectively corrected by deep learning in order to increase data quality in clinical application as well as in research.
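
Since ResNet-34 is one of the two architectures named above, the sketch below shows how such a binary AP/PA view-position classifier could be set up in PyTorch; the optimizer, learning rate, and variable names are illustrative assumptions, and data loading and the full training loop are omitted.

```python
# Hedged sketch (PyTorch): fine-tuning a ResNet-34 as a binary AP/PA view classifier.
import torch
import torch.nn as nn
from torchvision import models

# ImageNet-pretrained backbone with its final layer replaced by a 2-class head.
model = models.resnet34(weights=models.ResNet34_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, 2)  # two classes: AP, PA

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_step(images, labels):
    """images: (N, 3, H, W) float tensor; labels: (N,) long tensor, assumed 0 = AP, 1 = PA."""
    model.train()
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```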


2020 · Vol. 25 (6) · pp. 553-565
Author(s): Boran Sekeroglu, Ilker Ozsahin

The detection of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), which is responsible for coronavirus disease 2019 (COVID-19), using chest X-ray images has life-saving importance for both patients and doctors. This is even more vital in countries that are unable to purchase laboratory test kits. In this study, we aimed to present the use of deep learning for the high-accuracy detection of COVID-19 using chest X-ray images. Publicly available X-ray images (1583 healthy, 4292 pneumonia, and 225 confirmed COVID-19) were used in experiments involving the training of deep learning and machine learning classifiers. Thirty-eight experiments were performed using convolutional neural networks, 10 experiments were performed using five machine learning models, and 14 experiments were performed using state-of-the-art pre-trained networks for transfer learning. Images and statistical data were considered separately to evaluate the performance of the models, and eightfold cross-validation was used. A mean sensitivity of 93.84%, mean specificity of 99.18%, mean accuracy of 98.50%, and mean receiver operating characteristic area-under-the-curve score of 96.51% were achieved. A convolutional neural network without pre-processing and with a minimal number of layers is capable of detecting COVID-19 from a limited and imbalanced set of chest X-ray images.
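
The eightfold cross-validation protocol and the reported metrics (mean sensitivity, specificity, accuracy, and ROC-AUC) can be reproduced in outline as follows; the simple logistic-regression classifier and the feature matrix stand in for the authors' CNNs and image data and are purely illustrative.

```python
# Hedged sketch of eightfold cross-validation with mean sensitivity, specificity,
# accuracy, and ROC-AUC. X is a (n_samples, n_features) array, y holds binary labels.
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix, roc_auc_score

def crossval_report(X, y, n_splits=8):
    folds = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=0)
    sens, spec, acc, auc = [], [], [], []
    for train_idx, test_idx in folds.split(X, y):
        clf = LogisticRegression(max_iter=1000).fit(X[train_idx], y[train_idx])
        prob = clf.predict_proba(X[test_idx])[:, 1]
        pred = (prob >= 0.5).astype(int)
        tn, fp, fn, tp = confusion_matrix(y[test_idx], pred).ravel()
        sens.append(tp / (tp + fn))   # sensitivity (recall of the positive class)
        spec.append(tn / (tn + fp))   # specificity
        acc.append((tp + tn) / (tp + tn + fp + fn))
        auc.append(roc_auc_score(y[test_idx], prob))
    return {"sensitivity": np.mean(sens), "specificity": np.mean(spec),
            "accuracy": np.mean(acc), "roc_auc": np.mean(auc)}
```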

