Fast and interpretable classification of small X-ray diffraction datasets using data augmentation and deep neural networks

2019 ◽  
Vol 5 (1) ◽  
Author(s):  
Felipe Oviedo ◽  
Zekun Ren ◽  
Shijing Sun ◽  
Charles Settens ◽  
Zhe Liu ◽  
...  
2021 ◽  
Vol 11 (1) ◽  
pp. 28
Author(s):  
Ivan Lorencin ◽  
Sandi Baressi Šegota ◽  
Nikola Anđelić ◽  
Anđela Blagojević ◽  
Tijana Šušteršić ◽  
...  

COVID-19 represents one of the greatest challenges in modern history. Its impact is most noticeable in the health care system, mainly due to the accelerated and increased influx of patients with a more severe clinical picture, which raises the pressure on health systems. For this reason, the aim is to automate the process of diagnosis and treatment. The research presented in this article examined the possibility of classifying a patient's clinical picture using X-ray images and convolutional neural networks (CNNs). The research was conducted on a dataset of 185 images divided into four classes. Because of the small number of images, a data augmentation procedure was performed. In order to identify the CNN architecture with the highest classification performance, multiple CNNs were designed. The results show that the best classification performance is achieved with ResNet152, which reached macro- and micro-averaged AUC values (AUC_macro and AUC_micro) of up to 0.94, suggesting that CNNs can be applied to the classification of the clinical picture of COVID-19 patients from lung X-ray images. Higher AUC_macro and AUC_micro values are achieved when higher layers are frozen during the training procedure: with ResNet152, values of up to 0.96 are reached when all layers except the last 12 are frozen during training.
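As a rough illustration of the layer-freezing strategy described above, the sketch below fine-tunes a pretrained ResNet152 for a four-class X-ray problem in PyTorch, freezing everything except the last 12 parameter tensors. The augmentation choices, learning rate, and the interpretation of "layers" as parameter tensors are assumptions for illustration, not the authors' exact setup.

```python
# Minimal sketch (not the authors' code): fine-tuning a pretrained ResNet152
# for four-class chest X-ray classification, freezing all but the last 12
# parameter tensors, loosely following the abstract above.
import torch
import torch.nn as nn
from torchvision import models, transforms

NUM_CLASSES = 4  # assumed from the four-class dataset described above

model = models.resnet152(weights=models.ResNet152_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)

# Freeze every parameter, then unfreeze only the last 12 parameter tensors
# (an assumed reading of "all layers except the last 12").
params = list(model.parameters())
for p in params:
    p.requires_grad = False
for p in params[-12:]:
    p.requires_grad = True

# Simple data augmentation for a small X-ray dataset (assumed choices).
train_tf = transforms.Compose([
    transforms.RandomRotation(10),
    transforms.RandomHorizontalFlip(),
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4
)
criterion = nn.CrossEntropyLoss()
```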


2020 ◽  
Author(s):  
Elisha Goldstein ◽  
Daphna Keidar ◽  
Daniel Yaron ◽  
Yair Shachar ◽  
Ayelet Blass ◽  
...  

Background: In the midst of the coronavirus disease 2019 (COVID-19) outbreak, chest X-ray (CXR) imaging is playing an important role in the diagnosis and monitoring of patients with COVID-19. Machine learning solutions have been shown to be useful for X-ray analysis and classification in a range of medical contexts.
Purpose: The purpose of this study is to create and evaluate a machine learning model for diagnosis of COVID-19, and to provide a tool for searching for similar patients according to their X-ray scans.
Materials and Methods: In this retrospective study, a classifier was built using a pre-trained deep learning model (ResNet50) and enhanced by data augmentation and lung segmentation to detect COVID-19 in frontal CXR images collected between January 2018 and July 2020 in four hospitals in Israel. A nearest-neighbors algorithm was implemented based on the network results to identify the images most similar to a given image. The model was evaluated using accuracy, sensitivity, and the area under the curve (AUC) of the receiver operating characteristic (ROC) curve and of the precision-recall (P-R) curve.
Results: The dataset sourced for this study includes 2362 CXRs, balanced for positive and negative COVID-19, from 1384 patients (63 +/- 18 years, 552 men). Our model achieved 89.7% (314/350) accuracy and 87.1% (156/179) sensitivity in classification of COVID-19 on a test dataset comprising 15% (350 of 2326) of the original data, with an AUC of the ROC curve of 0.95 and an AUC of the P-R curve of 0.94. For each image we retrieve the images with the most similar DNN-based image embeddings; these can be used for comparison with previous cases.
Conclusion: Deep neural networks can be used to reliably classify CXR images as COVID-19 positive or negative. Moreover, the image embeddings learned by the network can be used to retrieve images with similar lung findings.
Summary: Deep neural networks can be used to reliably predict whether chest X-ray images are positive or negative for coronavirus disease 2019 (COVID-19).
Key Results: A machine learning model was able to detect chest X-ray (CXR) images of patients who tested positive for coronavirus disease 2019 with an accuracy of 89.7%, a sensitivity of 87.1%, and an area under the receiver operating characteristic curve of 0.95. A tool was created for finding existing CXR images with imaging characteristics most similar to a given CXR, according to the model's image embeddings.
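The similar-patient search described above amounts to nearest-neighbor retrieval over the network's image embeddings. The sketch below shows one possible way to do this with scikit-learn; the cosine metric, neighbor count, and 2048-dimensional ResNet50-style features are assumptions for illustration, not the study's actual implementation.

```python
# Minimal sketch (assumed setup, not the study's code): retrieving the chest
# X-rays whose deep-network embeddings are closest to a query image, in the
# spirit of the similar-patient search described above.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def build_index(embeddings: np.ndarray) -> NearestNeighbors:
    """embeddings: (n_images, d) array of penultimate-layer features."""
    index = NearestNeighbors(n_neighbors=5, metric="cosine")
    index.fit(embeddings)
    return index

def most_similar(index: NearestNeighbors, query: np.ndarray):
    """Return indices and distances of the stored images closest to one query embedding."""
    distances, indices = index.kneighbors(query.reshape(1, -1))
    return indices[0], distances[0]

# Example with random placeholders standing in for real ResNet50 embeddings.
emb = np.random.rand(100, 2048).astype(np.float32)
idx = build_index(emb)
neighbors, dists = most_similar(idx, emb[0])
```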


Author(s):  
Daphna Keidar ◽  
Daniel Yaron ◽  
Elisha Goldstein ◽  
Yair Shachar ◽  
Ayelet Blass ◽  
...  

2021 ◽  
Author(s):  
Lucas Ribeiro de Abreu

RoboCup Soccer is one of the largest initiatives in robotics research. It treats a soccer match as a challenge for robots, with the long-term goal of having a robot team win a match against humans by the year 2050. The vision module is a critical system for the robots, because it must quickly locate and classify objects of interest so that the robot can choose its next best action. This work evaluates deep neural networks for the detection of the ball and of other robots. For this task, five convolutional neural network architectures were trained using data augmentation and transfer learning techniques. The models were evaluated on a test set, yielding promising results in precision and frames per second. The best model achieved an mAP of 0.98 at 14.7 frames per second running on a CPU.
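For context, a typical transfer-learning setup for such a two-class (ball/robot) detector, together with a crude CPU frames-per-second estimate, might look like the sketch below. The torchvision detector, label set, and frame size are assumptions for illustration and not the five architectures evaluated in the paper.

```python
# Minimal sketch (assumed setup, not the author's pipeline): transfer learning
# with a pretrained torchvision detector for two RoboCup classes (ball, robot),
# plus a rough CPU frames-per-second measurement.
import time
import torch
from torchvision.models.detection import fasterrcnn_mobilenet_v3_large_fpn
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

NUM_CLASSES = 3  # background + ball + robot (assumed label set)

model = fasterrcnn_mobilenet_v3_large_fpn(weights="DEFAULT")
# Replace the classification head so it predicts the RoboCup classes.
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, NUM_CLASSES)
model.eval()

# Rough FPS estimate on CPU with a dummy 640x480 frame.
frame = [torch.rand(3, 480, 640)]
with torch.no_grad():
    start = time.time()
    for _ in range(10):
        model(frame)
    fps = 10 / (time.time() - start)
print(f"~{fps:.1f} FPS on CPU")
```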


2021 ◽  
Vol 11 (18) ◽  
pp. 8361
Author(s):  
Hye-jin Shim ◽  
Jee-weon Jung ◽  
Ju-ho Kim ◽  
Ha-jin Yu

Acoustic scene classification contains frequently misclassified pairs of classes that share many common acoustic properties. Specific details can provide vital clues for distinguishing such pairs of classes; however, these details are generally not noticeable and are hard to generalize across different data distributions. In this study, we investigate various methods for capturing discriminative information while simultaneously improving generalization ability. We adopt a max feature map method that replaces the conventional non-linear activation functions in deep neural networks, applying an element-wise comparison between the different filters of a convolution layer's output. Two data augmentation methods and two deep architecture modules are further explored to reduce overfitting and sustain the system's discriminative power. Various experiments are conducted on the "Detection and Classification of Acoustic Scenes and Events 2020 Task 1-A" dataset to validate the proposed methods. Our results show that the proposed system consistently outperforms the baseline, achieving an accuracy of 70.4% compared with the baseline's 65.1%.
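The max feature map idea can be made concrete in a few lines of PyTorch: the channels of a convolution output are split into two halves and compared element-wise, keeping the maximum, in place of a pointwise non-linearity. The sketch below is a minimal illustration with assumed channel counts and spectrogram sizes, not the authors' network.

```python
# Minimal sketch of a max feature map (MFM) activation: the convolution's
# output channels are split into two halves and compared element-wise,
# keeping the maximum. Layer sizes here are illustrative assumptions.
import torch
import torch.nn as nn

class MaxFeatureMap(nn.Module):
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, freq, time); channels must be even.
        a, b = torch.chunk(x, 2, dim=1)
        return torch.max(a, b)

block = nn.Sequential(
    nn.Conv2d(1, 64, kernel_size=3, padding=1),  # 64 filters ...
    MaxFeatureMap(),                              # ... reduced to 32 feature maps
)

spec = torch.rand(8, 1, 128, 431)  # dummy log-mel spectrogram batch
out = block(spec)                  # shape: (8, 32, 128, 431)
```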


Author(s):  
Alex Hernández-García ◽  
Johannes Mehrer ◽  
Nikolaus Kriegeskorte ◽  
Peter König ◽  
Tim C. Kietzmann
