Detecting Ankle Fractures in Plain Radiographs Using Deep Learning with Accurately Labeled Datasets Aided by Computed Tomography: A Retrospective Observational Study

2021 ◽  
Vol 11 (19) ◽  
pp. 8791
Author(s):  
Ji-Hun Kim ◽  
Yong-Cheol Mo ◽  
Seung-Myung Choi ◽  
Youk Hyun ◽  
Jung Woo Lee

Ankle fractures are common and, compared to other injuries, tend to be overlooked in the emergency department. We aim to develop a deep learning algorithm that can detect not only definite fractures but also obscure ones. We collected data from 1226 patients with suspected ankle fractures, all of whom underwent both X-ray and CT imaging. A deep learning model was developed using anteroposterior (AP) and lateral ankle X-rays from 1040 patients with fractures and 186 patients without fractures. The training, validation, and test datasets were split in a 3/1/1 ratio. Data augmentation and under-sampling techniques were applied as part of the preprocessing. The Inception V3 model was used for image classification, and performance was evaluated with a confusion matrix and the area under the receiver operating characteristic curve (AUC-ROC). The best accuracy and AUC values were 83% and 0.91 for the AP view and 90% and 0.95 for the lateral view; the mean values were 83% and 0.89 for the AP trials and 83% and 0.90 for the lateral trials. The CT-aided labeling produced a more reliable dataset, allowing the CNN model to achieve higher accuracy than reported in previous studies.
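
The abstract names the architecture (Inception V3), the 3/1/1 split, and augmentation with under-sampling, but not the implementation details. The snippet below is a minimal sketch of such a pipeline in Keras; the input size, frozen-base strategy, directory layout, and training settings are illustrative assumptions, not the authors' configuration.

```python
# Minimal sketch of fine-tuning Inception V3 for fracture vs. normal ankle
# X-rays. Image size, layer choices, augmentation, and epochs are illustrative
# assumptions; the directory layout ("ankle_ap/...") is hypothetical.
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import InceptionV3

base = InceptionV3(weights="imagenet", include_top=False,
                   input_shape=(299, 299, 3))
base.trainable = False  # first stage: train only the new classification head

model = models.Sequential([
    layers.Rescaling(1. / 127.5, offset=-1),   # map pixel values to [-1, 1]
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dropout(0.5),
    layers.Dense(1, activation="sigmoid"),     # fracture vs. normal
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="binary_crossentropy",
              metrics=["accuracy", tf.keras.metrics.AUC(name="auc")])

# Hypothetical folders holding the 3/1/1 split for the AP view.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "ankle_ap/train", image_size=(299, 299), batch_size=32, label_mode="binary")
val_ds = tf.keras.utils.image_dataset_from_directory(
    "ankle_ap/val", image_size=(299, 299), batch_size=32, label_mode="binary")

# Light augmentation in the spirit of the preprocessing described above.
augment = tf.keras.Sequential([layers.RandomFlip("horizontal"),
                               layers.RandomRotation(0.05)])
train_ds = train_ds.map(lambda x, y: (augment(x, training=True), y))

model.fit(train_ds, validation_data=val_ds, epochs=10)
```

The same head could be trained separately on lateral-view images, mirroring the paper's separate AP and lateral trials.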

Diagnostics ◽  
2021 ◽  
Vol 11 (6) ◽  
pp. 1127
Author(s):  
Ji Hyung Nam ◽  
Dong Jun Oh ◽  
Sumin Lee ◽  
Hyun Joo Song ◽  
Yun Jeong Lim

Capsule endoscopy (CE) quality control requires an objective scoring system to evaluate the preparation of the small bowel (SB). We propose a deep learning algorithm to calculate SB cleansing scores and verify the algorithm’s performance. A 5-point scoring system based on clarity of mucosal visualization was used to develop the deep learning algorithm (400,000 frames; 280,000 for training and 120,000 for testing). External validation was performed using additional CE cases (n = 50), and average cleansing scores (1.0 to 5.0) calculated using the algorithm were compared to clinical grades (A to C) assigned by clinicians. Test results obtained using 120,000 frames exhibited 93% accuracy. The separate CE case exhibited substantial agreement between the deep learning algorithm scores and clinicians’ assessments (Cohen’s kappa: 0.672). In the external validation, the cleansing score decreased with worsening clinical grade (scores of 3.9, 3.2, and 2.5 for grades A, B, and C, respectively, p < 0.001). Receiver operating characteristic curve analysis revealed that a cleansing score cut-off of 2.95 indicated clinically adequate preparation. This algorithm provides an objective and automated cleansing score for evaluating SB preparation for CE. The results of this study will serve as clinical evidence supporting the practical use of deep learning algorithms for evaluating SB preparation quality.
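
As a rough illustration of how per-frame 5-point predictions can be aggregated into the case-level cleansing score and compared with clinician grades, the sketch below uses NumPy and scikit-learn; the grade boundaries and the toy validation arrays are hypothetical, and only the 2.95 adequacy cut-off mentioned in the comment comes from the abstract.

```python
# Sketch of aggregating per-frame 5-point predictions into a case-level
# cleansing score and checking agreement with clinician grades. The grade
# cut-offs and the toy validation arrays below are hypothetical.
import numpy as np
from sklearn.metrics import cohen_kappa_score, roc_curve

def case_cleansing_score(frame_probs):
    """frame_probs: array of shape (n_frames, 5), softmax over scores 1-5."""
    frame_scores = frame_probs.argmax(axis=1) + 1   # per-frame score in 1..5
    return frame_scores.mean()                      # case-level score, 1.0-5.0

def score_to_grade(score, cut_ab=3.5, cut_bc=2.5):  # hypothetical boundaries
    return "A" if score >= cut_ab else ("B" if score >= cut_bc else "C")

# Hypothetical external-validation data: one entry per CE case.
algorithm_scores = np.array([3.9, 3.1, 2.4, 4.2, 2.8])
clinician_grades = np.array(["A", "B", "C", "A", "C"])

predicted_grades = np.array([score_to_grade(s) for s in algorithm_scores])
kappa = cohen_kappa_score(clinician_grades, predicted_grades)

# ROC analysis for clinically adequate preparation (grade A/B vs. C); the
# paper derives its 2.95 cleansing-score cut-off from this kind of curve.
adequate = (clinician_grades != "C").astype(int)
fpr, tpr, thresholds = roc_curve(adequate, algorithm_scores)
print(f"kappa = {kappa:.3f}; candidate cut-offs: {thresholds}")
```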


2020 ◽  
Author(s):  
Tuan Pham

Chest X-rays have been found to be very promising for assessing COVID-19 patients, especially for resolving emergency-department and urgent-care-center overcapacity. Deep-learning (DL) methods in artificial intelligence (AI) play a dominant role as high-performance classifiers in the detection of the disease using chest X-rays. While many new DL models have been developed for this purpose, this study investigated the fine-tuning of pretrained convolutional neural networks (CNNs) for the classification of COVID-19 using chest X-rays. Three pretrained CNNs (AlexNet, GoogleNet, and SqueezeNet) were selected and fine-tuned without data augmentation to carry out 2-class and 3-class classification tasks using three public chest X-ray databases. In comparison with other recently developed DL models, the three pretrained CNNs achieved very high classification results in terms of accuracy, sensitivity, specificity, precision, F1 score, and area under the receiver-operating-characteristic curve. AlexNet, GoogleNet, and SqueezeNet require the least training time among pretrained DL models, and with suitable selection of training parameters, excellent classification results can be achieved by these networks without data augmentation. The findings address the urgent need to contain the pandemic by facilitating the deployment of AI tools that are fully automated and readily available in the public domain for rapid implementation.
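
All three networks named in the abstract ship pretrained in torchvision, so a transfer-learning setup along the following lines is plausible; the head-replacement choices, class counts, and hyperparameters are assumptions rather than the paper's exact settings.

```python
# Illustrative head replacement and fine-tuning step for the three pretrained
# CNNs named above, using torchvision; hyperparameters are assumptions.
import torch
import torch.nn as nn
from torchvision import models

def build_finetuned(name, num_classes):
    """Load an ImageNet-pretrained network and replace its classifier head."""
    if name == "alexnet":
        net = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1)
        net.classifier[6] = nn.Linear(4096, num_classes)
    elif name == "googlenet":
        net = models.googlenet(weights=models.GoogLeNet_Weights.IMAGENET1K_V1)
        net.fc = nn.Linear(net.fc.in_features, num_classes)
    elif name == "squeezenet":
        net = models.squeezenet1_1(weights=models.SqueezeNet1_1_Weights.IMAGENET1K_V1)
        net.classifier[1] = nn.Conv2d(512, num_classes, kernel_size=1)
        net.num_classes = num_classes
    return net

def train_one_epoch(model, loader, optimizer, criterion, device="cpu"):
    """One pass of cross-entropy fine-tuning over a DataLoader yielding
    224x224, ImageNet-normalized chest X-ray tensors and class labels."""
    model.train()
    for images, labels in loader:
        optimizer.zero_grad()
        out = model(images.to(device))
        logits = out.logits if hasattr(out, "logits") else out  # GoogLeNet aux heads
        loss = criterion(logits, labels.to(device))
        loss.backward()
        optimizer.step()

# 2-class (COVID-19 vs. normal) or 3-class task, as described in the abstract.
model = build_finetuned("squeezenet", num_classes=3)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()
# train_one_epoch(model, train_loader, optimizer, criterion)  # loader assumed
```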


2021 ◽  
Author(s):  
Tirupathi Karthik ◽  
Vijayalakshmi Kasiraman ◽  
Bhavani Paski ◽  
Kashyap Gurram ◽  
Amit Talwar ◽  
...  

Background and aims: Chest X-rays are widely used, non-invasive, cost-effective imaging tests. However, the complexity of interpretation and a global shortage of radiologists have led to reporting backlogs, delayed diagnosis, and a compromised quality of care. A fully automated, reliable artificial intelligence system that can quickly triage abnormal images for urgent radiologist review would be invaluable in the clinical setting. The aim was to develop and validate a deep learning convolutional neural network (CNN) algorithm to automate the detection of 13 common abnormalities found on chest X-rays. Method: In this retrospective study, a VGG16 deep learning model was trained on images from ChestX-ray14, a large publicly available chest X-ray dataset containing 112,120 annotated images. Images were split into training, validation, and testing sets, and the model was trained to identify 13 specific abnormalities. The primary performance measures were accuracy and precision. Results: The model demonstrated an overall accuracy of 88% in the identification of abnormal X-rays and 87% in the detection of the 13 common chest conditions, with no model bias. Conclusion: This study demonstrates that a well-trained deep learning algorithm can accurately identify multiple abnormalities on X-ray images. As such models are further refined, they can be used to ease radiology workflow bottlenecks and improve reporting efficiency. Napier Healthcare's team that developed this model consists of medical IT professionals who specialize in AI and its practical application in acute and long-term care settings. The model is currently being piloted in a few hospitals and diagnostic labs on a commercial basis.
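
Detecting 13 findings per image implies a multi-label head with one sigmoid output per abnormality. The sketch below shows that structure on a VGG16 backbone in Keras; the dense-layer sizes, input resolution, and training details are assumptions, not Napier Healthcare's production setup.

```python
# Minimal sketch of a VGG16-based multi-label classifier for 13 chest
# findings; label count and layer widths are illustrative assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16

NUM_FINDINGS = 13  # e.g. a subset of the 14 ChestX-ray14 labels

base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False

model = models.Sequential([
    base,
    layers.Flatten(),
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.5),
    # One sigmoid per finding: an image can carry several abnormalities.
    layers.Dense(NUM_FINDINGS, activation="sigmoid"),
])
model.compile(optimizer="adam",
              loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC(multi_label=True,
                                            num_labels=NUM_FINDINGS),
                       tf.keras.metrics.Precision()])
# model.fit(train_ds, validation_data=val_ds, epochs=...) with a dataset that
# yields (image, 13-dim binary label vector) pairs built from the ChestX-ray14
# annotation CSV.
```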


2020 ◽  
Author(s):  
S. Duchesne ◽  
D. Gourdeau ◽  
P. Archambault ◽  
C. Chartrand-Lefebvre ◽  
L. Dieumegarde ◽  
...  

Background: Decision scores and ethically mindful algorithms are being established to adjudicate mechanical ventilation in the context of potential resource shortages due to the current onslaught of COVID-19 cases. There is a need for a reproducible and objective method to provide quantitative information for those scores. Purpose: Towards this goal, we present a retrospective study testing the ability of a deep learning algorithm to extract features from chest X-rays (CXR) to track and predict radiological evolution. Materials and Methods: We trained a repurposed deep learning algorithm on the CheXnet open dataset (224,316 chest X-ray images of 65,240 unique patients) to extract features that mapped to radiological labels. We collected CXRs of COVID-19-positive patients from two open-source datasets (last accessed on April 9, 2020; Italian Society for Medical and Interventional Radiology and MILA). The data comprised 60 pairs of sequential CXRs from 40 COVID-19 patients (mean age ± standard deviation: 56 ± 13 years; 23 men, 10 women, seven not reported), each pair categorized as "Worse", "Stable", or "Improved" on the basis of radiological evolution ascertained from images and reports. Receiver operating characteristic (ROC) analyses and Mann-Whitney tests were performed. Results: On patients from the CheXnet dataset, the area under the ROC curves ranged from 0.71 to 0.93 for seven imaging features and one diagnosis. Deep learning features between "Worse" and "Improved" outcome categories were significantly different for three radiological signs and one diagnosis ("Consolidation", "Lung Lesion", "Pleural effusion", and "Pneumonia"; all P < 0.05). Features from the first CXR of each pair could correctly predict the outcome category between "Worse" and "Improved" cases with 82.7% accuracy. Conclusion: CXR deep learning features show promise for classifying the disease trajectory. Once validated in studies incorporating clinical data and with larger sample sizes, this information may be considered to inform triage decisions.
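
To make the statistical part of the Results concrete, the sketch below shows the kind of analysis described: per-image label probabilities from a CheXnet-style extractor compared between "Worse" and "Improved" pairs with Mann-Whitney tests and ROC analysis. The random arrays stand in for the real features; only the feature names come from the abstract, and the extractor itself is outside this snippet.

```python
# Sketch of the group comparison and ROC analysis described above, using
# placeholder arrays in place of the study's deep learning features.
import numpy as np
from scipy.stats import mannwhitneyu
from sklearn.metrics import roc_auc_score

FEATURES = ["Consolidation", "Lung Lesion", "Pleural effusion", "Pneumonia"]

# Hypothetical shapes: first_cxr[i, j] = probability of feature j on the
# first radiograph of pair i; outcome[i] in {"Worse", "Improved"}.
rng = np.random.default_rng(0)
first_cxr = rng.random((60, len(FEATURES)))
outcome = rng.choice(["Worse", "Improved"], size=60)

worse = outcome == "Worse"
for j, name in enumerate(FEATURES):
    stat, p = mannwhitneyu(first_cxr[worse, j], first_cxr[~worse, j])
    auc = roc_auc_score(worse.astype(int), first_cxr[:, j])
    print(f"{name}: Mann-Whitney p = {p:.3f}, AUC = {auc:.2f}")
```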


2021 ◽  
Vol 8 ◽  
Author(s):  
Castela Forte ◽  
Andrei Voinea ◽  
Malina Chichirau ◽  
Galiya Yeshmagambetova ◽  
Lea M. Albrecht ◽  
...  

Background: The inclusion of facial and bodily cues (clinical gestalt) in machine learning (ML) models improves the assessment of patients' health status, as shown in genetic syndromes and acute coronary syndrome. It is unknown whether the inclusion of clinical gestalt improves ML-based classification of acutely ill patients. As in previous research on ML analysis of medical images, simulated or augmented data may be used to assess the usability of clinical gestalt. Objective: To assess whether a deep learning algorithm trained on a dataset of simulated and augmented facial photographs reflecting acute illness can distinguish between healthy and LPS-infused, acutely ill individuals. Methods: Photographs from twenty-six volunteers whose facial features were manipulated to resemble a state of acute illness were used to extract features of illness and to generate a synthetic dataset of acutely ill photographs, using a neural transfer convolutional neural network (NT-CNN) for data augmentation. Four distinct CNNs were then trained on different parts of the facial photographs and concatenated into one final, stacked CNN that classified individuals as healthy or acutely ill. Finally, the stacked CNN was validated on an external dataset of volunteers injected with lipopolysaccharide (LPS). Results: In the external validation set, the four individual feature models distinguished acutely ill patients with sensitivities ranging from 10.5% (95% CI, 1.3–33.1%, for the skin model) to 89.4% (66.9–98.7%, for the nose model). Specificity ranged from 42.1% (20.3–66.5%) for the nose model to 94.7% (73.9–99.9%) for the skin model. The stacked model combining all four facial features achieved an area under the receiver operating characteristic curve (AUROC) of 0.67 (0.62–0.71) and distinguished acutely ill patients with a sensitivity of 100% (82.35–100.00%) and a specificity of 42.11% (20.25–66.50%). Conclusion: A deep learning algorithm trained on a synthetic, augmented dataset of facial photographs distinguished between healthy and simulated acutely ill individuals, demonstrating that synthetically generated data can be used to develop algorithms for health conditions in which large datasets are difficult to obtain. These results support the potential of facial feature analysis algorithms to support the diagnosis of acute illness.
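
A schematic of the stacked architecture described in the Methods, with one small CNN branch per facial region feeding a shared sigmoid classifier, is sketched below in Keras. The crop size, branch depth, and region names are assumptions; in the study the four branch models were trained separately before stacking, whereas this sketch wires them jointly for brevity.

```python
# Schematic stacked CNN: four region branches concatenated into a single
# healthy vs. acutely-ill classifier. Shapes and widths are illustrative.
import tensorflow as tf
from tensorflow.keras import layers, Model, Input

def branch(name, shape=(64, 64, 3)):
    """A small convolutional branch over one facial-region crop."""
    inp = Input(shape=shape, name=f"{name}_crop")
    x = layers.Conv2D(16, 3, activation="relu")(inp)
    x = layers.MaxPooling2D()(x)
    x = layers.Conv2D(32, 3, activation="relu")(x)
    x = layers.GlobalAveragePooling2D()(x)
    return inp, x

inputs, features = zip(*[branch(n) for n in ("eyes", "nose", "mouth", "skin")])
merged = layers.Concatenate()(list(features))
hidden = layers.Dense(32, activation="relu")(merged)
out = layers.Dense(1, activation="sigmoid", name="acutely_ill")(hidden)

stacked = Model(inputs=list(inputs), outputs=out)
stacked.compile(optimizer="adam", loss="binary_crossentropy",
                metrics=[tf.keras.metrics.AUC(name="auroc"),
                         tf.keras.metrics.SensitivityAtSpecificity(0.5)])
```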


Author(s):  
Mohamed Nadjib Boufenara ◽  
Mahmoud Boufaida ◽  
Mohamed Lamine Berkane

With the exponential growth of biological data, labeling this kind of data becomes difficult and costly. Although unlabeled data are comparatively more plentiful than labeled data, most supervised learning methods are not designed to exploit them. Semi-supervised learning methods are motivated by the availability of large unlabeled datasets alongside only a small number of labeled examples. However, incorporating unlabeled data into learning does not guarantee an improvement in classification performance. This paper introduces an approach based on a semi-supervised learning model, namely self-training with a deep learning algorithm, to predict missing classes from labeled and unlabeled data. To assess the performance of the proposed approach, two datasets are used with four performance measures: precision, recall, F-measure, and area under the ROC curve (AUC).
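
A minimal self-training loop in the spirit of this approach is sketched below: fit on the labeled set, pseudo-label the most confident unlabeled samples, and refit. The confidence threshold and the small scikit-learn neural network used as the base model are illustrative choices, not the paper's; a deep network could be substituted.

```python
# Minimal self-training sketch: fit on labeled data, pseudo-label the most
# confident unlabeled samples, and refit. Threshold and base model are
# illustrative; a deep network could replace the MLP.
import numpy as np
from sklearn.neural_network import MLPClassifier

def self_train(X_lab, y_lab, X_unlab, threshold=0.95, max_rounds=10):
    X_lab, y_lab, X_unlab = X_lab.copy(), np.asarray(y_lab).copy(), X_unlab.copy()
    model = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500)
    for _ in range(max_rounds):
        model.fit(X_lab, y_lab)
        if len(X_unlab) == 0:
            break
        probs = model.predict_proba(X_unlab)
        confident = probs.max(axis=1) >= threshold
        if not confident.any():
            break                      # no pseudo-label clears the threshold
        pseudo = model.classes_[probs[confident].argmax(axis=1)]
        X_lab = np.vstack([X_lab, X_unlab[confident]])   # grow the labeled pool
        y_lab = np.concatenate([y_lab, pseudo])
        X_unlab = X_unlab[~confident]
    return model
```

Performance would then be reported on a held-out labeled test set with precision, recall, F-measure, and AUC, as in the paper.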


2020 ◽  
Vol 4 (12) ◽  
pp. 1197-1207
Author(s):  
Wanshan Ning ◽  
Shijun Lei ◽  
Jingjing Yang ◽  
Yukun Cao ◽  
Peiran Jiang ◽  
...  

Data from patients with coronavirus disease 2019 (COVID-19) are essential for guiding clinical decision making, for furthering the understanding of this viral disease, and for diagnostic modelling. Here, we describe an open resource containing data from 1,521 patients with pneumonia (including COVID-19 pneumonia) consisting of chest computed tomography (CT) images, 130 clinical features (from a range of biochemical and cellular analyses of blood and urine samples) and laboratory-confirmed severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) clinical status. We show the utility of the database for prediction of COVID-19 morbidity and mortality outcomes using a deep learning algorithm trained with data from 1,170 patients and 19,685 manually labelled CT slices. In an independent validation cohort of 351 patients, the algorithm discriminated between negative, mild and severe cases with areas under the receiver operating characteristic curve of 0.944, 0.860 and 0.884, respectively. The open database may have further uses in the diagnosis and management of patients with COVID-19.
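
The per-class AUCs reported for the validation cohort correspond to a one-vs-rest evaluation of a three-class classifier. A sketch of that computation is shown below; the random arrays are placeholders standing in for the cohort's true labels and the model's predicted probabilities.

```python
# One-vs-rest AUCs for the negative / mild / severe classes, computed from
# placeholder labels and probabilities standing in for the 351-patient cohort.
import numpy as np
from sklearn.metrics import roc_auc_score
from sklearn.preprocessing import label_binarize

CLASSES = ["negative", "mild", "severe"]

rng = np.random.default_rng(1)
y_true = rng.choice(CLASSES, size=351)
y_prob = rng.dirichlet(np.ones(3), size=351)      # one probability column per class

y_bin = label_binarize(y_true, classes=CLASSES)   # (351, 3) indicator matrix
for k, name in enumerate(CLASSES):
    auc = roc_auc_score(y_bin[:, k], y_prob[:, k])
    print(f"{name}: AUC = {auc:.3f}")
```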


2017 ◽  
Author(s):  
Jie Xie

Acoustic classification of frogs has received increasing attention for its promising application in ecological studies. Various methods have been proposed for classifying frog species, but most assume that each recording contains only a single species. In this study, a method to classify multiple frog species in an audio clip is presented. Specifically, continuous frog recordings are first cropped into 10-second audio clips. Then, various time-frequency representations are generated for each 10-s clip. Next, instead of using traditional hand-crafted features, a deep learning algorithm is used to learn the most important features. Finally, a binary relevance based multi-label classification approach is proposed to classify simultaneously vocalizing frog species with the proposed features. Experimental results show that the features extracted using deep learning achieve better classification performance than hand-crafted features for frog call classification.
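
Binary relevance reduces the multi-label problem to one independent binary classifier per species. A sketch over clip-level features is shown below using scikit-learn's one-vs-rest wrapper, which implements exactly that reduction for multi-label targets; the feature matrix, species list, and SVM base classifier are placeholders, and the deep feature extractor itself is outside this snippet.

```python
# Binary-relevance multi-label classification: one binary classifier per
# species. Features, labels, and species names below are hypothetical.
import numpy as np
from sklearn.multiclass import OneVsRestClassifier
from sklearn.svm import SVC
from sklearn.metrics import f1_score

species = ["Litoria caerulea", "Litoria fallax", "Litoria nasuta"]

rng = np.random.default_rng(2)
X = rng.random((200, 128))          # deep features, one row per 10-s clip
Y = rng.integers(0, 2, (200, 3))    # multi-label targets: species present/absent

clf = OneVsRestClassifier(SVC(kernel="rbf"))   # one SVM per species
clf.fit(X[:150], Y[:150])
pred = clf.predict(X[150:])
print("micro-F1:", f1_score(Y[150:], pred, average="micro"))
```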

