Deep Learning Based Cardiac MRI Segmentation: Do We Need Experts?

Algorithms ◽  
2021 ◽  
Vol 14 (7) ◽  
pp. 212
Author(s):  
Youssef Skandarani ◽  
Pierre-Marc Jodoin ◽  
Alain Lalande

Deep learning methods are the de facto solutions to a multitude of medical image analysis tasks. Cardiac MRI segmentation is one such application, which, like many others, requires a large amount of annotated data so that a trained network can generalize well. Unfortunately, the process of having a large number of images manually curated by medical experts is both slow and extremely expensive. In this paper, we set out to explore whether expert knowledge is a strict requirement for the creation of annotated data sets on which machine learning can successfully be trained. To do so, we gauged the performance of three segmentation models, namely U-Net, Attention U-Net, and ENet, trained with different loss functions on expert and non-expert ground truth for cardiac cine-MRI segmentation. Evaluation was done with classic segmentation metrics (Dice index and Hausdorff distance) as well as clinical measurements, such as the ventricular ejection fractions and the myocardial mass. The results reveal that the generalization performance of a segmentation neural network trained on non-expert ground truth data is, for all practical purposes, as good as that of a network trained on expert ground truth data, particularly when the non-expert receives a decent level of training, highlighting an opportunity for the efficient and cost-effective creation of annotations for cardiac data sets.
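The two segmentation metrics named in the abstract can be computed directly from a pair of binary masks. The sketch below is illustrative only; the function names and the use of SciPy's `directed_hausdorff` are assumptions of this example, not details from the paper:

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice_index(pred, gt):
    """Dice similarity coefficient between two binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    total = pred.sum() + gt.sum()
    return 2.0 * inter / total if total else 1.0

def hausdorff_distance(pred, gt):
    """Symmetric Hausdorff distance between the two foreground pixel sets."""
    a = np.argwhere(pred)
    b = np.argwhere(gt)
    return max(directed_hausdorff(a, b)[0], directed_hausdorff(b, a)[0])
```

The Dice index rewards overlap (1.0 for identical masks), while the Hausdorff distance penalizes the worst-case boundary disagreement, which is why the two are often reported together.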

2019 ◽  
Vol 13 (1) ◽  
pp. 120-126
Author(s):  
K. Bhavanishankar ◽  
M. V. Sudhamani

Objective: Lung cancer is proving to be one of the deadliest diseases haunting mankind in recent years. Timely detection of lung nodules would surely enhance the survival rate. This paper focuses on the classification of candidate lung nodules into nodules/non-nodules in a CT scan of the patient. A deep learning approach, the autoencoder, is used for the classification. Investigation/Methodology: Candidate lung nodule patches obtained as the results of lung segmentation are considered as input to the autoencoder model. The ground truth data from the LIDC repository is prepared and submitted to the autoencoder training module. After a series of experiments, a 4-stacked autoencoder was chosen. The model is trained on over 600 LIDC cases and the trained module is tested on the remaining data sets. Results: The results of the classification are evaluated with respect to performance measures such as sensitivity, specificity, and accuracy. The results obtained are also compared with other related works, and the proposed approach was found to be better by 6.2% with respect to accuracy. Conclusion: In this paper, a deep learning approach, the autoencoder, has been used for the classification of candidate lung nodules into nodules/non-nodules. The performance of the proposed approach was evaluated with respect to sensitivity, specificity, and accuracy, and the obtained values are 82.6%, 91.3%, and 87.0%, respectively. This result is then compared with existing related works, and an improvement of 6.2% with respect to accuracy has been observed.
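The three performance measures reported above follow from the binary confusion counts. A minimal sketch, with labels 1 = nodule and 0 = non-nodule (the function name and label convention are this example's assumptions):

```python
def classification_metrics(y_true, y_pred):
    """Sensitivity, specificity, and accuracy from binary labels
    (1 = nodule, 0 = non-nodule)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    sensitivity = tp / (tp + fn)   # fraction of true nodules detected
    specificity = tn / (tn + fp)   # fraction of non-nodules rejected
    accuracy = (tp + tn) / len(y_true)
    return sensitivity, specificity, accuracy
```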


Stroke ◽  
2020 ◽  
Vol 51 (Suppl_1) ◽  
Author(s):  
Benjamin Zahneisen ◽  
Matus Straka ◽  
Shalini Bammer ◽  
Greg Albers ◽  
Roland Bammer

Introduction: Ruling out hemorrhage (stroke or traumatic) prior to administration of thrombolytics is critical for Code Strokes. A triage software that identifies hemorrhages on head CTs and alerts radiologists would help to streamline patient care and increase diagnostic confidence and patient safety. ML approach: We trained a deep convolutional network with a hybrid 3D/2D architecture on unenhanced head CTs of 805 patients. Our training dataset comprised 348 positive hemorrhage cases (IPH=245, SAH=67, Sub/Epi-dural=70, IVH=83) (128 female) and 457 normal controls (217 female). Lesion outlines were drawn by experts and stored as binary masks that were used as ground truth data during the training phase (random 80/20 train/test split). Diagnostic sensitivity and specificity were defined on a per-patient study level, i.e. a single, binary decision for the presence/absence of a hemorrhage on a patient's CT scan. Final validation was performed in 380 patients (167 positive). Tool: The hemorrhage detection module was prototyped in Python/Keras. It runs on a local LINUX server (4 CPUs, no GPUs) and is embedded in a larger image processing platform dedicated to stroke. Results: Processing time for a standard whole brain CT study (3-5 mm slices) was around 2 min. Upon completion, an instant notification (by email and/or mobile app) was sent to users to alert them about the suspected presence of a hemorrhage. Relative to neuroradiologist gold-standard reads, the algorithm's sensitivity and specificity were 90.4% and 92.5% (95% CI: 85%-94% for both). Detection of acute intracranial hemorrhage can be automated by deploying deep learning. It yielded very high sensitivity/specificity when compared to gold-standard reads by a neuroradiologist. Volumes as small as 0.5 mL could be detected reliably in the test dataset. The software can be deployed in busy practices to prioritize worklists and alert health care professionals to speed up therapeutic decision processes and interventions.
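Because sensitivity and specificity are defined per study, the voxelwise network output must be reduced to a single binary call per CT scan. One plausible reduction, sketched here under stated assumptions (the probability threshold, the 0.5 mL minimum-volume cutoff echoing the smallest reliably detected volume, and the function name are all this example's choices, not the paper's method):

```python
import numpy as np

def study_level_call(prob_map, voxel_volume_ml, prob_thresh=0.5, min_volume_ml=0.5):
    """Reduce a voxelwise hemorrhage probability map to one binary
    per-study decision: positive if the thresholded lesion volume
    meets a minimum-volume cutoff."""
    lesion_voxels = (prob_map >= prob_thresh).sum()
    return bool(lesion_voxels * voxel_volume_ml >= min_volume_ml)
```

A volume cutoff of this kind suppresses isolated false-positive voxels that would otherwise flip a whole study to positive.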


Author(s):  
N. Soyama ◽  
K. Muramatsu ◽  
M. Daigo ◽  
F. Ochiai ◽  
N. Fujiwara

Validating the accuracy of land cover products using a reliable reference dataset is an important task. A reliable reference dataset is produced with information derived from ground truth data. Recently, the amount of ground truth data derived from information collected by volunteers has been increasing globally. The acquisition of volunteer-based reference data demonstrates great potential. However, the information given by volunteers provides only limited vegetation detail, which is insufficient to produce a complete reference dataset based on plant functional types (PFT) with five specialized forest classes. In this study, we examined the availability and applicability of FLUXNET information to produce reference data with higher levels of reliability. FLUXNET information was especially useful for interpreting the forest classes, in comparison with the reference dataset built from information given by volunteers.


Author(s):  
Johannes Thomsen ◽  
Magnus B. Sletfjerding ◽  
Stefano Stella ◽  
Bijoya Paul ◽  
Simon Bo Jensen ◽  
...  

Abstract Single-molecule Förster resonance energy transfer (smFRET) is a mature and adaptable method for studying the structure of biomolecules and integrating their dynamics into structural biology. The development of high-throughput methodologies and the growth of commercial instrumentation have outpaced the development of rapid, standardized, and fully automated methodologies to objectively analyze the wealth of produced data. Here we present DeepFRET, an automated standalone solution based on deep learning, where the only crucial human intervention in the transition from raw microscope images to histograms of biomolecule behavior is a user-adjustable quality threshold. Integrating all standard features of smFRET analysis, DeepFRET will consequently output common kinetic information metrics for biomolecules. We validated the utility of DeepFRET by performing quantitative analysis on simulated ground truth data and real smFRET data. The accuracy of classification by DeepFRET outperformed human operators and commonly used hard-threshold approaches, reaching >95% precision on ground truth data while requiring only a fraction of the time (<1% compared to human operators). Its flawless and rapid operation on real data demonstrates its wide applicability. This level of classification was achieved without any preprocessing or parameter setting by human operators, demonstrating DeepFRET's capacity to objectively quantify biomolecular dynamics. The provided standalone executable, based on open-source code, capitalises on the widespread adoption of machine learning and may contribute to the effort of benchmarking smFRET for structural biology insights.
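The histograms of biomolecule behavior that smFRET pipelines produce are built from a per-frame FRET efficiency. A minimal sketch of the uncorrected proximity ratio E = I_A / (I_A + I_D), assuming background-subtracted donor and acceptor intensity traces (the function name is this example's own, not part of DeepFRET):

```python
import numpy as np

def fret_efficiency(donor, acceptor):
    """Frame-by-frame FRET proximity ratio E = I_A / (I_A + I_D)
    for background-corrected donor/acceptor intensity traces."""
    donor = np.asarray(donor, dtype=float)
    acceptor = np.asarray(acceptor, dtype=float)
    total = donor + acceptor
    # Guard against frames where both channels are zero.
    return np.divide(acceptor, total, out=np.zeros_like(total), where=total > 0)
```

Pooling these per-frame values across many molecules yields the efficiency histogram from which conformational states are read off.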


2020 ◽  
Vol 12 (1) ◽  
pp. 9-12
Author(s):  
Arjun G. Koppad ◽  
Syeda Sarfin ◽  
Anup Kumar Das

The study has been conducted for land use and land cover classification by using SAR data. The study included examining ALOS-2 PALSAR L-band quad-pol (HH, HV, VH and VV) SAR data for LULC classification. The SAR data was pre-processed first, which included multilooking, radiometric calibration, geometric correction, speckle filtering, SAR polarimetry and decomposition. For land use land cover classification of the ALOS-2 PALSAR data sets, the supervised random forest classifier was used. Training samples were selected with the help of ground truth data. The area was classified under 7 different classes: dense forest, moderate dense forest, scrub/sparse forest, plantation, agriculture, water body, and settlements. Among these, the largest area was covered by dense forest (108647 ha), followed by horticulture plantation (57822 ha) and scrub/sparse forest (49238 ha), while the smallest area was covered by moderate dense forest (11589 ha). Accuracy assessment was performed after classification. The overall accuracy of the SAR data was 80.36% and the Kappa coefficient was 0.76. Different land use classes were identified based on SAR backscatter mechanisms such as single-bounce, double-bounce, and volumetric scattering.
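The Kappa coefficient reported in the accuracy assessment measures agreement between the classified map and the reference data beyond what chance would produce. A hedged sketch of Cohen's kappa from a confusion matrix (the function name and the example matrix are illustrative, not the study's data):

```python
import numpy as np

def cohens_kappa(confusion):
    """Cohen's kappa from a square confusion matrix
    (rows = reference classes, columns = mapped classes)."""
    confusion = np.asarray(confusion, dtype=float)
    n = confusion.sum()
    observed = np.trace(confusion) / n
    # Chance agreement from the row/column marginals.
    expected = (confusion.sum(axis=0) * confusion.sum(axis=1)).sum() / n**2
    return (observed - expected) / (1.0 - expected)
```

A kappa of 0.76 alongside 80.36% overall accuracy indicates that most of the observed agreement is not attributable to chance.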


2016 ◽  
Author(s):  
Roshni Cooper ◽  
Shaul Yogev ◽  
Kang Shen ◽  
Mark Horowitz

Abstract Motivation: Microtubules (MTs) are polarized polymers that are critical for cell structure and axonal transport. They form a bundle in neurons, but beyond that, their organization is relatively unstudied. Results: We present MTQuant, a method for quantifying MT organization using light microscopy, which distills three parameters from MT images: the spacing of MT minus-ends, their average length, and the average number of MTs in a cross-section of the bundle. This method allows for robust and rapid in vivo analysis of MTs, rendering it more practical and more widely applicable than commonly-used electron microscopy reconstructions. MTQuant was successfully validated with three ground truth data sets and applied to over 3000 images of MTs in a C. elegans motor neuron. Availability: MATLAB code is available at http://roscoope.github.io/MTQuant. Contact: [email protected]. Supplementary information: Supplementary data are available at Bioinformatics online.


Author(s):  
Jieyu Li ◽  
Jayaram K. Udupa ◽  
Yubing Tong ◽  
Lisheng Wang ◽  
Drew A. Torigian

2020 ◽  
Vol 36 (12) ◽  
pp. 3863-3870
Author(s):  
Mischa Schwendy ◽  
Ronald E Unger ◽  
Sapun H Parekh

Abstract Motivation: Deep learning use for quantitative image analysis is exponentially increasing. However, training accurate, widely deployable deep learning algorithms requires a plethora of annotated (ground truth) data. Image collections must contain not only thousands of images to provide sufficient example objects (i.e. cells), but also an adequate degree of image heterogeneity. Results: We present a new dataset, EVICAN (Expert Visual Cell Annotation), comprising partially annotated grayscale images of 30 different cell lines from multiple microscopes, contrast mechanisms and magnifications that is readily usable as training data for computer vision applications. With 4600 images and ∼26 000 segmented cells, our collection offers an unparalleled heterogeneous training dataset for cell biology deep learning application development. Availability and implementation: The dataset is freely available (https://edmond.mpdl.mpg.de/imeji/collection/l45s16atmi6Aa4sI?q=). Using a Mask R-CNN implementation, we demonstrate automated segmentation of cells and nuclei from brightfield images with a mean average precision of 61.6% at a Jaccard index above 0.5.
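The Jaccard index (intersection over union) used as the 0.5 match criterion above compares a predicted cell mask against its ground truth annotation. A minimal sketch for two binary masks (the function name is this example's own):

```python
import numpy as np

def jaccard_index(pred, gt):
    """Intersection over union between two binary masks; in
    detection scoring, a prediction typically counts as a match
    when this value exceeds 0.5."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    union = np.logical_or(pred, gt).sum()
    if union == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return np.logical_and(pred, gt).sum() / union
```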

