Using Deep Learning to Classify Burnt Body Parts Images for Better Burns Diagnosis

Author(s):  
Joohi Chauhan ◽  
Rahul Goswami ◽  
Puneet Goyal
Author(s):  
Hrishikesh Deshpande ◽  
Martin Bergtholdt ◽  
Shlomo Gotman ◽  
Axel Saalbach ◽  
Julien Senegas


2021 ◽  
Author(s):  
Hieu H. Pham ◽  
Dung V. Do ◽  
Ha Q. Nguyen

Abstract X-ray imaging in Digital Imaging and Communications in Medicine (DICOM) format is the most commonly used imaging modality in clinical practice, resulting in vast, non-normalized databases. This creates an obstacle to deploying artificial intelligence (AI) solutions for analyzing medical images, which often requires identifying the right body part before feeding the image into a specified AI model. This challenge raises the need for an automated and efficient approach to classifying body parts from X-ray scans. Unfortunately, to the best of our knowledge, there is no open tool or framework for this task to date. To fill this gap, we introduce a DICOM Imaging Router that deploys deep convolutional neural networks (CNNs) for categorizing unknown DICOM X-ray images into five anatomical groups: abdominal, adult chest, pediatric chest, spine, and others. To this end, a large-scale X-ray dataset consisting of 16,093 images was collected and manually classified. We then trained a set of state-of-the-art deep CNNs using a training set of 11,263 images. These networks were evaluated on an independent test set of 2,419 images and showed superior performance in classifying the body parts. Specifically, our best-performing model (MobileNet-V1) achieved a recall of 0.982 (95% CI, 0.977–0.988), a precision of 0.985 (95% CI, 0.975–0.989), and an F1-score of 0.981 (95% CI, 0.976–0.987), while requiring little computation for inference (0.0295 seconds per image). External validation on 1,000 X-ray images shows the robustness of the proposed approach across hospitals. These results indicate that deep CNNs can accurately and effectively differentiate human body parts from X-ray scans, thereby providing potential benefits for a wide range of applications in clinical settings. The dataset, code, and trained deep learning models from this study will be made publicly available on our project website at https://vindr.ai/datasets/bodypartxr.
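The routing step described above can be sketched in a few lines: an upstream classifier assigns each X-ray a probability over the five anatomical groups, and the router dispatches the image accordingly. This is a hypothetical illustration (the group labels follow the abstract; the confidence threshold and the stub probabilities are assumptions, not the VinDr models):

```python
# Hypothetical sketch of a DICOM imaging router: an upstream body-part
# classifier yields per-group probabilities; the router picks the group
# to decide which downstream AI model receives the image.
ANATOMICAL_GROUPS = ["abdominal", "adult_chest", "pediatric_chest", "spine", "others"]

def route(probabilities, threshold=0.5):
    """Pick the most probable anatomical group; fall back to 'others'
    when the classifier is not confident enough (threshold is an assumption)."""
    best = max(range(len(ANATOMICAL_GROUPS)), key=lambda i: probabilities[i])
    if probabilities[best] < threshold:
        return "others"
    return ANATOMICAL_GROUPS[best]

# Example: a confident adult-chest prediction is routed accordingly.
print(route([0.01, 0.95, 0.02, 0.01, 0.01]))  # adult_chest
```

In a real deployment the probabilities would come from the trained CNN and the image would then be forwarded to the body-part-specific model.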


2021 ◽  
Author(s):  
Miniar Ben Gamra ◽  
Moulay A. Akhloufi ◽  
Chunpu Wang ◽  
Shuo Liu

2020 ◽  
Vol 28 (6) ◽  
pp. 1199-1206
Author(s):  
Kohei Fujiwara ◽  
Wanxuan Fang ◽  
Taichi Okino ◽  
Kenneth Sutherland ◽  
Akira Furusaki ◽  
...  

BACKGROUND: Although rheumatoid arthritis (RA) causes destruction of articular cartilage, early treatment significantly improves symptoms and delays progression, so it is important to detect subtle damage for an early diagnosis. Recent software programs are comparable to the conventional human scoring method in detecting the radiographic progression of RA. Automatic and accurate selection of the relevant images (e.g. hand images) from radiographs of various body parts is therefore necessary for large-scale serial analysis. OBJECTIVE: In this study, we examined whether deep learning can select target images from a large number of stored images retrieved from a picture archiving and communication system (PACS) covering miscellaneous body parts. METHODS: We selected 1,047 X-ray images of various body parts and divided them into two groups: 841 images for training and 206 images for testing. The training images were augmented and used to train a convolutional neural network (CNN) consisting of 4 convolution layers, 2 pooling layers, and 2 fully connected layers. After training, we created software to classify the test images and examined its accuracy. RESULTS: The image extraction accuracy was 0.952 for unilateral hands and 0.979 for both hands. In addition, all 206 test images were correctly classified into unilateral hand, both hands, or others. CONCLUSIONS: Deep learning shows promise for efficient, automatic selection of target X-ray images of RA patients.
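The abstract specifies the network only coarsely (4 convolution layers, 2 pooling layers, 2 fully connected layers). As an illustration of how such a stack transforms spatial resolution, the standard conv/pool output-size formula can be applied layer by layer; the input resolution, kernel sizes, strides, and padding below are assumed toy values, not the authors' settings:

```python
def out_size(n, kernel, stride=1, padding=0):
    """Output width/height of a conv or pool layer:
    floor((n + 2*padding - kernel) / stride) + 1."""
    return (n + 2 * padding - kernel) // stride + 1

# Assumed toy configuration: 128x128 input, 3x3 convs (padding 1, stride 1),
# 2x2 max pools (stride 2), interleaved as conv-conv-pool-conv-conv-pool.
n = 128
for layer in ["conv", "conv", "pool", "conv", "conv", "pool"]:
    if layer == "conv":
        n = out_size(n, kernel=3, padding=1)   # 3x3 conv with padding 1 preserves size
    else:
        n = out_size(n, kernel=2, stride=2)    # 2x2 pool halves the size
print(n)  # 32
```

The resulting 32x32 feature maps would then be flattened and passed through the two fully connected layers to produce the class scores.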


2020 ◽  
Vol 45 (11) ◽  
pp. 1942-1952 ◽  
Author(s):  
Oliver Sturman ◽  
Lukas von Ziegler ◽  
Christa Schläppi ◽  
Furkan Akyol ◽  
Mattia Privitera ◽  
...  

Abstract To study brain function, preclinical research heavily relies on animal monitoring and the subsequent analyses of behavior. Commercial platforms have enabled semi high-throughput behavioral analyses by automating animal tracking, yet they poorly recognize ethologically relevant behaviors and lack the flexibility to be employed in variable testing environments. Critical advances based on deep-learning and machine vision over the last couple of years now enable markerless tracking of individual body parts of freely moving rodents with high precision. Here, we compare the performance of commercially available platforms (EthoVision XT14, Noldus; TSE Multi-Conditioning System, TSE Systems) to cross-verified human annotation. We provide a set of videos—carefully annotated by several human raters—of three widely used behavioral tests (open field test, elevated plus maze, forced swim test). Using these data, we then deployed the pose estimation software DeepLabCut to extract skeletal mouse representations. Using simple post-analyses, we were able to track animals based on their skeletal representation in a range of classic behavioral tests at similar or greater accuracy than commercial behavioral tracking systems. We then developed supervised machine learning classifiers that integrate the skeletal representation with the manual annotations. This new combined approach allows us to score ethologically relevant behaviors with similar accuracy to humans, the current gold standard, while outperforming commercial solutions. Finally, we show that the resulting machine learning approach eliminates variation both within and between human annotators. In summary, our approach helps to improve the quality and accuracy of behavioral data, while outperforming commercial systems at a fraction of the cost.
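A "simple post-analysis" on a skeletal representation, of the kind the abstract describes, can be as plain as thresholding keypoint speed; for example, immobility in a forced swim test might be scored from frame-to-frame displacement of a tracked body part. The threshold, frame rate, and trajectory below are hypothetical, not the authors' classifiers:

```python
import numpy as np

def immobility_fraction(keypoints, fps=25, speed_threshold=5.0):
    """Fraction of frames in which a tracked keypoint moves slower than
    speed_threshold (pixels/second). keypoints: (T, 2) array of x, y positions
    as produced by pose estimation software such as DeepLabCut."""
    displacement = np.linalg.norm(np.diff(keypoints, axis=0), axis=1)  # px per frame
    speed = displacement * fps                                         # px per second
    return float(np.mean(speed < speed_threshold))

# Synthetic trajectory: still for the first half, moving diagonally afterwards.
still = np.zeros((10, 2))
moving = np.cumsum(np.ones((10, 2)), axis=0) * 3.0  # 3 px/frame
traj = np.vstack([still, moving])
print(round(immobility_fraction(traj), 2))
```

The supervised classifiers in the paper go further by learning such decision rules from human annotations instead of hand-picked thresholds, but the input (the skeletal time series) is the same.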


2018 ◽  
Vol 226 ◽  
pp. 05003
Author(s):  
Alexander V. Fisunov ◽  
Victoria B. Gnezdilova ◽  
Vladimir I. Marchuk

This paper proposes a method for determining the three-dimensional coordinates of human body parts from an RGB-D stream in real time. The proposed method is a combined solution in which deep learning and depth-map analysis are used together.
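The depth-map half of such a pipeline reduces to back-projecting a detected 2D body-part location through the pinhole camera model to obtain camera-frame 3D coordinates. A minimal sketch, with illustrative intrinsics rather than calibration values from the paper:

```python
def backproject(u, v, depth, fx, fy, cx, cy):
    """Map a pixel (u, v) with a depth value (same unit as the output,
    e.g. metres) to camera-frame 3D coordinates via the pinhole model:
    X = (u - cx) * Z / fx,  Y = (v - cy) * Z / fy,  Z = depth."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return x, y, depth

# Illustrative intrinsics for a 640x480 depth sensor (assumed values):
# a keypoint at the image centre, 2 m away, lies on the optical axis.
print(backproject(u=320, v=240, depth=2.0, fx=525.0, fy=525.0, cx=320.0, cy=240.0))  # (0.0, 0.0, 2.0)
```

In the combined solution, the deep-learning detector would supply (u, v) for each body part and the aligned depth map would supply the corresponding depth sample.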


2018 ◽  
Vol 21 (9) ◽  
pp. 1281-1289 ◽  
Author(s):  
Alexander Mathis ◽  
Pranav Mamidanna ◽  
Kevin M. Cury ◽  
Taiga Abe ◽  
Venkatesh N. Murthy ◽  
...  

eLife ◽  
2019 ◽  
Vol 8 ◽  
Author(s):  
Jacob M Graving ◽  
Daniel Chae ◽  
Hemal Naik ◽  
Liang Li ◽  
Benjamin Koger ◽  
...  

Quantitative behavioral measurements are important for answering questions across scientific disciplines—from neuroscience to ecology. State-of-the-art deep-learning methods offer major advances in data quality and detail by allowing researchers to automatically estimate locations of an animal’s body parts directly from images or videos. However, currently available animal pose estimation methods have limitations in speed and robustness. Here, we introduce a new easy-to-use software toolkit, DeepPoseKit, that addresses these problems using an efficient multi-scale deep-learning model, called Stacked DenseNet, and a fast GPU-based peak-detection algorithm for estimating keypoint locations with subpixel precision. These advances improve processing speed >2x with no loss in accuracy compared to currently available methods. We demonstrate the versatility of our methods with multiple challenging animal pose estimation tasks in laboratory and field settings—including groups of interacting individuals. Our work reduces barriers to using advanced tools for measuring behavior and has broad applicability across the behavioral sciences.
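The peak-detection step can be illustrated with the standard parabolic refinement around a confidence-map argmax, which recovers a subpixel estimate from three samples. This is a generic one-dimensional sketch of subpixel localization, not DeepPoseKit's GPU implementation:

```python
import numpy as np

def subpixel_peak_1d(conf):
    """Integer argmax plus a parabolic offset fitted to the two neighbours,
    giving a subpixel peak estimate for a 1-D confidence profile."""
    i = int(np.argmax(conf))
    if i == 0 or i == len(conf) - 1:
        return float(i)  # no neighbours to fit at the border
    left, centre, right = conf[i - 1], conf[i], conf[i + 1]
    # Vertex of the parabola through (i-1, left), (i, centre), (i+1, right).
    offset = 0.5 * (left - right) / (left - 2.0 * centre + right)
    return i + offset

# A Gaussian bump whose true centre (3.3) falls between grid points:
# plain argmax returns 3, the refined estimate lands close to 3.3.
x = np.arange(7, dtype=float)
conf = np.exp(-0.5 * (x - 3.3) ** 2)
print(subpixel_peak_1d(conf))
```

A 2-D keypoint detector applies the same idea along each image axis around the confidence-map maximum.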

