Quick and accurate selection of hand images among radiographs from various body parts using deep learning

2020 ◽  
Vol 28 (6) ◽  
pp. 1199-1206
Author(s):  
Kohei Fujiwara ◽  
Wanxuan Fang ◽  
Taichi Okino ◽  
Kenneth Sutherland ◽  
Akira Furusaki ◽  
...  

BACKGROUND: Although rheumatoid arthritis (RA) causes destruction of articular cartilage, early treatment significantly improves symptoms and delays progression. Detecting subtle damage is therefore important for an early diagnosis. Recent software programs are comparable with the conventional human scoring method in detecting the radiographic progression of RA. Thus, automatic and accurate selection of relevant images (e.g. hand images) among radiographic images of various body parts is necessary for serial analysis on a large scale. OBJECTIVE: In this study we examined whether deep learning can select target images from a large number of stored images retrieved from a picture archiving and communication system (PACS) covering miscellaneous body parts. METHODS: We selected 1,047 X-ray images of various body parts and divided them into two groups: 841 images for training and 206 images for testing. The training images were augmented and used to train a convolutional neural network (CNN) consisting of 4 convolution layers, 2 pooling layers, and 2 fully connected layers. After training, we created software to classify the test images and examined its accuracy. RESULTS: The image extraction accuracy was 0.952 and 0.979 for unilateral hand and both hands, respectively. In addition, all 206 test images were perfectly classified into unilateral hand, both hands, and the others. CONCLUSIONS: Deep learning shows promise for the efficient automatic selection of target X-ray images of RA patients.
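
As an illustration only, a minimal tf.keras sketch of a classifier with 4 convolution layers, 2 pooling layers, and 2 fully connected layers of the kind described above could look as follows; the input size, filter counts, and kernel sizes are assumptions, not the authors' configuration.

```python
# Hypothetical sketch of a 4-conv / 2-pool / 2-FC radiograph selector (tf.keras);
# input size and filter counts are assumptions, not the paper's exact setup.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_hand_selector(input_shape=(256, 256, 1), num_classes=3):
    # Three output classes: unilateral hand, both hands, others.
    return models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(16, 3, activation="relu", padding="same"),
        layers.Conv2D(16, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),                             # pooling layer 1
        layers.Conv2D(32, 3, activation="relu", padding="same"),
        layers.Conv2D(32, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),                             # pooling layer 2
        layers.Flatten(),
        layers.Dense(128, activation="relu"),              # fully connected layer 1
        layers.Dense(num_classes, activation="softmax"),   # fully connected layer 2
    ])

model = build_hand_selector()
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
```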

2021 ◽  
Author(s):  
Hieu H. Pham ◽  
Dung V. Do ◽  
Ha Q. Nguyen

Abstract: X-ray imaging in Digital Imaging and Communications in Medicine (DICOM) format is the most commonly used imaging modality in clinical practice, resulting in vast, non-normalized databases. This is an obstacle to deploying artificial intelligence (AI) solutions for analyzing medical images, which often requires identifying the correct body part before feeding the image into a specific AI model. This challenge raises the need for an automated and efficient approach to classifying body parts from X-ray scans. Unfortunately, to the best of our knowledge, there is no open tool or framework for this task to date. To fill this gap, we introduce a DICOM Imaging Router that deploys deep convolutional neural networks (CNNs) for categorizing unknown DICOM X-ray images into five anatomical groups: abdominal, adult chest, pediatric chest, spine, and others. To this end, a large-scale X-ray dataset consisting of 16,093 images was collected and manually classified. We then trained a set of state-of-the-art deep CNNs using a training set of 11,263 images. These networks were evaluated on an independent test set of 2,419 images and showed superior performance in classifying body parts. Specifically, our best-performing model (MobileNet-V1) achieved a recall of 0.982 (95% CI, 0.977–0.988), a precision of 0.985 (95% CI, 0.975–0.989), and an F1-score of 0.981 (95% CI, 0.976–0.987), while requiring less computation for inference (0.0295 seconds per image). External validation on 1,000 X-ray images shows the robustness of the proposed approach across hospitals. These results indicate that deep CNNs can accurately and effectively differentiate human body parts from X-ray scans, thereby providing potential benefits for a wide range of applications in clinical settings. The dataset, codes, and trained deep learning models from this study will be made publicly available on our project website at https://vindr.ai/datasets/bodypartxr.
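
A hedged sketch of such a body-part router built on a MobileNet-V1 backbone is shown below (tf.keras); the head design, input size, and training settings are assumptions rather than the authors' released configuration.

```python
# Hypothetical MobileNet-V1 based DICOM body-part router for five anatomical groups;
# the classification head and hyperparameters are illustrative assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models

CLASSES = ["abdominal", "adult chest", "pediatric chest", "spine", "others"]

base = tf.keras.applications.MobileNet(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet", pooling="avg")

router = models.Sequential([
    base,
    layers.Dropout(0.2),
    layers.Dense(len(CLASSES), activation="softmax"),
])
router.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
               loss="sparse_categorical_crossentropy", metrics=["accuracy"])
# router.fit(train_ds, validation_data=val_ds, epochs=...)  # datasets assumed
```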


2021 ◽  
Vol 11 (6) ◽  
pp. 2723
Author(s):  
Fatih Uysal ◽  
Fırat Hardalaç ◽  
Ozan Peker ◽  
Tolga Tolunay ◽  
Nil Tokgöz

Fractures occur in the shoulder area, which has a wider range of motion than other joints in the body, for various reasons. To diagnose these fractures, data gathered from X-radiation (X-ray), magnetic resonance imaging (MRI), or computed tomography (CT) are used. This study aims to help physicians by using artificial intelligence to classify shoulder images taken with X-ray devices as fracture or non-fracture. For this purpose, the performance of 26 deep learning-based pre-trained models in detecting shoulder fractures was evaluated on the musculoskeletal radiographs (MURA) dataset, and two ensemble learning models (EL1 and EL2) were developed. The pre-trained models used are ResNet, ResNeXt, DenseNet, VGG, Inception, MobileNet, and their spinal fully connected (Spinal FC) versions. For the EL1 and EL2 models, developed from the best-performing pre-trained models, the test accuracy was 0.8455 and 0.8472, Cohen's kappa was 0.6907 and 0.6942, and the area under the receiver operating characteristic (ROC) curve (AUC) for the fracture class was 0.8862 and 0.8695, respectively. Across 28 classifications in total, the highest test accuracy and Cohen's kappa values were obtained with the EL2 model, and the highest AUC value was obtained with the EL1 model.
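
The abstract does not spell out how EL1 and EL2 combine their constituent models, so the following is only a sketch of one common ensembling choice, probability averaging, evaluated with the metrics named above; the inputs are placeholders.

```python
# Illustrative probability-averaging ensemble of two fine-tuned fracture classifiers,
# scored with accuracy, Cohen's kappa, and fracture-class AUC; the paper's exact
# EL1/EL2 construction may differ.
import numpy as np
from sklearn.metrics import accuracy_score, cohen_kappa_score, roc_auc_score

def ensemble_predict(prob_a: np.ndarray, prob_b: np.ndarray) -> np.ndarray:
    """Average class probabilities from two models (shape: [n_samples, n_classes])."""
    return (prob_a + prob_b) / 2.0

def evaluate(prob_a, prob_b, y_true):
    # y_true: ground-truth labels (0 = non-fracture, 1 = fracture)
    prob = ensemble_predict(prob_a, prob_b)
    y_pred = prob.argmax(axis=1)
    return {
        "accuracy": accuracy_score(y_true, y_pred),
        "kappa": cohen_kappa_score(y_true, y_pred),
        "auc_fracture": roc_auc_score(y_true, prob[:, 1]),  # AUC for the fracture class
    }
```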


2020 ◽  
Vol 498 (4) ◽  
pp. 5620-5628
Author(s):  
Y Su ◽  
Y Zhang ◽  
G Liang ◽  
J A ZuHone ◽  
D J Barnes ◽  
...  

ABSTRACT The origin of the diverse population of galaxy clusters remains an unexplained aspect of large-scale structure formation and cluster evolution. We present a novel method of using X-ray images to identify cool core (CC), weak cool core (WCC), and non-cool core (NCC) clusters of galaxies that are defined by their central cooling times. We employ a convolutional neural network, ResNet-18, which is commonly used for image analysis, to classify clusters. We produce mock Chandra X-ray observations for a sample of 318 massive clusters drawn from the IllustrisTNG simulations. The network is trained and tested with low-resolution mock Chandra images covering a central 1 Mpc square for the clusters in our sample. Without any spectral information, the deep learning algorithm is able to identify CC, WCC, and NCC clusters, achieving balanced accuracies (BAcc) of 92 per cent, 81 per cent, and 83 per cent, respectively. The performance is superior to classification by conventional methods using central gas densities, with an average BAcc = 81 per cent, or surface brightness concentrations, giving BAcc = 73 per cent. We use class activation mapping to localize discriminative regions for the classification decision. From this analysis, we observe that the network has utilized regions from cluster centres out to r ≈ 300 kpc and r ≈ 500 kpc to identify CC and NCC clusters, respectively. It may have recognized features in the intracluster medium that are associated with AGN feedback and disruptive major mergers.
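
For reference, one common definition of per-class balanced accuracy is the mean of one-vs-rest sensitivity and specificity; whether this matches the paper's exact BAcc definition is an assumption. A small sketch for the CC / WCC / NCC labels:

```python
# Sketch of per-class balanced accuracy computed from a confusion matrix; the definition
# (mean of one-vs-rest sensitivity and specificity) is an assumption for illustration.
from sklearn.metrics import confusion_matrix

def per_class_balanced_accuracy(y_true, y_pred, labels=("CC", "WCC", "NCC")):
    cm = confusion_matrix(y_true, y_pred, labels=list(labels))
    total = cm.sum()
    results = {}
    for i, name in enumerate(labels):
        tp = cm[i, i]
        fn = cm[i, :].sum() - tp
        fp = cm[:, i].sum() - tp
        tn = total - tp - fn - fp
        sensitivity = tp / (tp + fn)
        specificity = tn / (tn + fp)
        results[name] = (sensitivity + specificity) / 2.0
    return results
```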


2021 ◽  
Vol 4 (2) ◽  
pp. 147-153
Author(s):  
Vina Ayumi ◽  
Ida Nurhaida

Early detection of patients showing symptoms of COVID-19 is needed to reduce the spread of the virus. One way to detect the COVID-19 virus is to study chest X-ray images of patients with COVID-19 symptoms. Chest X-ray images are considered capable of depicting the lung condition of COVID-19 patients and can serve as an aid to clinical diagnosis. This study proposes a deep learning approach based on a convolutional neural network (CNN) for classifying COVID-19 symptoms from chest X-ray images. The performance of the proposed method is evaluated using accuracy, precision, recall, F1-score, and Cohen's kappa. The study uses a CNN model with two convolution and max-pooling layers and a fully connected output layer. The parameters used include batch_size = 32, epochs = 50, and learning_rate = 0.001, with the Adam optimizer. The best validation accuracy (val_acc), 0.9606, was obtained at epoch 49, with a validation loss (val_loss) of 0.1471, a training accuracy (acc) of 0.9405, and a training loss (loss) of 0.2558.
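
A minimal tf.keras sketch of the described network (two convolution + max-pooling blocks and a fully connected output) with the stated hyperparameters follows; filter counts, kernel sizes, and input shape are assumptions.

```python
# Minimal sketch of a 2-conv/2-maxpool CNN with the stated hyperparameters
# (batch_size=32, epochs=50, learning_rate=0.001, Adam); layer widths are assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(224, 224, 1)),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(1, activation="sigmoid"),  # binary output: COVID-19 vs non-COVID-19
])
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
              loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(x_train, y_train, validation_data=(x_val, y_val),
#           batch_size=32, epochs=50)  # training data assumed
```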


2021 ◽  
Vol 45 (1) ◽  
pp. 149-153
Author(s):  
V.G. Efremtsev ◽  
N.G. Efremtsev ◽  
E.P. Teterin ◽  
P.E. Teterin ◽  
E.S. Bazavluk

The use of neural networks to detect differences in radiographic images of patients with pneumonia and COVID-19 is demonstrated. Precision, recall, and F1-score metrics are used for the optimal selection of resize parameters, neural network architecture parameters, hyperparameters, and adaptive image brightness adjustment. The high values of these classification quality metrics (> 0.91) strongly indicate a reliable difference between radiographic images of patients with pneumonia and patients with COVID-19. This opens up the possibility of creating a model with good predictive ability without involving ready-to-use complex models and without pre-training on third-party data, which is promising for the development of sensitive and reliable rapid COVID-19 diagnostic methods.
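
The abstract does not name the specific brightness-adjustment technique, so the preprocessing sketch below uses CLAHE purely as one plausible example of adaptive adjustment alongside resizing; it is not the authors' pipeline.

```python
# Illustrative preprocessing: resize a radiograph and apply a locally adaptive
# brightness/contrast adjustment (CLAHE). The actual method used in the paper is
# not specified; this is an assumption for illustration.
import cv2

def preprocess(path: str, size: int = 224):
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)              # load radiograph as grayscale
    img = cv2.resize(img, (size, size), interpolation=cv2.INTER_AREA)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    return clahe.apply(img)                                   # adaptive histogram equalization
```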


2021 ◽  
pp. 100034
Author(s):  
Adeyinka P. Adedigba ◽  
Steve A. Adeshina ◽  
Oluwatomisin E. Aina ◽  
Abiodun M. Aibinu

Sensors ◽  
2020 ◽  
Vol 20 (12) ◽  
pp. 3482
Author(s):  
Abdullah-Al Nahid ◽  
Niloy Sikder ◽  
Anupam Kumar Bairagi ◽  
Md. Abdur Razzaque ◽  
Mehedi Masud ◽  
...  

Pneumonia is a virulent disease that causes the death of millions of people around the world. Every year it kills more children than malaria, AIDS, and measles combined, and it accounts for approximately one in five child deaths worldwide. The invention of antibiotics and vaccines in the past century has notably increased the survival rate of Pneumonia patients. Currently, the primary challenge is to detect the disease at an early stage and determine its type to initiate the appropriate treatment. Usually, a trained physician or a radiologist undertakes the task of diagnosing Pneumonia by examining the patient's chest X-ray. However, the number of such trained individuals is small compared to the 450 million people affected by Pneumonia every year. Fortunately, this challenge can be met by introducing modern computers and improved Machine Learning techniques into Pneumonia diagnosis. For the past two decades, researchers have been trying to develop methods to automatically detect Pneumonia by analyzing the symptoms of the disease and chest radiographic images of patients. With the development of capable Deep Learning algorithms, building such an automatic system is now well within the realm of possibility. In this paper, a novel diagnostic method is proposed that applies Image Processing and Deep Learning techniques to chest X-ray images to detect Pneumonia. The method has been tested on a widely used chest radiography dataset, and the obtained results indicate that the model is well suited for use in an automatic Pneumonia diagnosis scheme.


Author(s):  
Ishtiaque Ahmed ◽  
◽  
Manan Darda ◽  
Neha Tikyani ◽  
Rachit Agrawal ◽  
...  

The COVID-19 pandemic has caused large-scale outbreaks in more than 150 countries worldwide, causing massive damage to the livelihoods of many people. The ability to identify infected patients early and provide appropriate treatment is one of the most important steps in the battle against COVID-19. One of the quickest ways to diagnose patients is to use radiography and radiology images to detect the disease. Early studies have shown that chest X-rays of patients infected with COVID-19 have unique abnormalities. To identify COVID-19 patients from chest X-ray images, we used various deep learning models based on previous studies. We first compiled a dataset of 2,815 chest radiographs from public sources. The model produces reliable and stable results with an accuracy of 91.6%, a positive predictive value of 80%, a negative predictive value of 100%, a specificity of 87.50%, and a sensitivity of 100%. It is observed that the CNN-based architecture can diagnose COVID-19 disease. These results can be further improved by increasing the dataset size and refining the CNN-based architecture used to train the model.
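
For clarity on the reported metrics, the sketch below shows how accuracy, positive/negative predictive value, specificity, and sensitivity follow from a binary confusion matrix; the counts are placeholders, not the study's data.

```python
# Sketch of the reported diagnostic metrics computed from a binary confusion matrix
# (COVID-19 positive vs negative); the tp/fp/tn/fn counts are hypothetical.
def diagnostic_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    return {
        "accuracy":    (tp + tn) / (tp + tn + fp + fn),
        "ppv":         tp / (tp + fp),   # positive predictive value (precision)
        "npv":         tn / (tn + fn),   # negative predictive value
        "specificity": tn / (tn + fp),
        "sensitivity": tp / (tp + fn),   # recall
    }

# Example with made-up counts:
print(diagnostic_metrics(tp=50, fp=5, tn=60, fn=0))
```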


2020 ◽  
Vol 42 ◽  
Author(s):  
André Dantas de Medeiros ◽  
Maycon Silva Martins ◽  
Laércio Junio da Silva ◽  
Márcio Dias Pereira ◽  
Manuel Jesús Zavala León ◽  
...  

Abstract: Non-destructive, high-throughput methods have been developed for seed quality evaluation. The aim of this study was to relate parameters obtained from free, automated analysis of digital radiographs of hybrid melon seeds to their physiological potential. Seeds of three hybrid melon (Cucumis melo L.) cultivars from commercial lot samples were used. Radiographic images of the seeds were obtained, from which area, perimeter, circularity, relative density, integrated density, and seed filling measurements were generated by means of a macro (PhenoXray) developed for the ImageJ® software. After the X-ray test, seed samples were subjected to a germination test, from which variables related to the physiological quality of the seeds were obtained. Variability between lots was observed for both physical and physiological characteristics. The results showed that the PhenoXray macro allows large-scale phenotyping of seed radiographs in a simple, fast, consistent, and completely free way. The methodology is efficient in obtaining morphometric and tissue-integrity data of melon seeds, and the generated parameters are closely related to physiological attributes of seed quality.
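
PhenoXray itself is an ImageJ macro, so the snippet below is only a rough Python analogue of the morphometric step (area, perimeter, circularity, density measures); the Otsu segmentation and the circularity formula 4πA/P² are assumptions for illustration.

```python
# Rough Python analogue of seed morphometry from a radiograph using scikit-image;
# segmentation and the exact measurement definitions are illustrative assumptions.
import math
from skimage import io, filters, measure

def seed_morphometrics(radiograph_path: str):
    img = io.imread(radiograph_path, as_gray=True)
    mask = img > filters.threshold_otsu(img)           # separate seeds from background
    labels = measure.label(mask)
    rows = []
    for region in measure.regionprops(labels, intensity_image=img):
        circularity = 4 * math.pi * region.area / (region.perimeter ** 2)
        rows.append({
            "area": region.area,
            "perimeter": region.perimeter,
            "circularity": circularity,
            "mean_density": region.mean_intensity,                 # stand-in for relative density
            "integrated_density": region.mean_intensity * region.area,
        })
    return rows
```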


2021 ◽  
Vol 2021 ◽  
pp. 1-12
Author(s):  
Surya Krishnamurthy ◽  
Kathiravan Srinivasan ◽  
Saeed Mian Qaisar ◽  
P. M. Durai Raj Vincent ◽  
Chuan-Yu Chang

Pneumonitis is an infectious disease that causes inflammation of the air sacs. It can be life-threatening to the very young and the elderly. Detecting pneumonitis from X-ray images is a significant challenge, and early detection and assistance with diagnosis can be crucial. Recent developments in the field of deep learning have significantly improved performance in medical image analysis. The superior predictive performance of deep learning methods makes them ideal for pneumonitis classification from chest X-ray images. However, training deep learning models can be cumbersome and resource-intensive. Reusing knowledge representations of public models trained on large-scale datasets through transfer learning can help alleviate these challenges. In this paper, we compare various image classification models based on transfer learning with well-known deep learning architectures. The Kaggle chest X-ray dataset was used to evaluate and compare our models. We apply basic data augmentation and fine-tune a feed-forward classification head on models pretrained on the ImageNet dataset. We observed that the DenseNet201 model outperforms the other models, with an AUROC score of 0.966 and a recall score of 0.99. We also visualize the class activation maps from the DenseNet201 model to interpret the patterns it recognizes for prediction.
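
A hedged tf.keras sketch of this transfer-learning setup is given below: a DenseNet201 backbone pretrained on ImageNet with a new feed-forward classification head, tracking AUROC; the head size and training details are assumptions, not the study's exact configuration.

```python
# Sketch of transfer learning with an ImageNet-pretrained DenseNet201 backbone and a
# new classification head; layer sizes and optimizer settings are illustrative.
import tensorflow as tf
from tensorflow.keras import layers, models

base = tf.keras.applications.DenseNet201(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet", pooling="avg")
base.trainable = False                      # reuse ImageNet features, train only the head

model = models.Sequential([
    base,
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.3),
    layers.Dense(1, activation="sigmoid"),  # pneumonitis vs normal
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC(name="auroc"), "accuracy"])
```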

