The Reproducibility of Deep Learning-Based Segmentation of the Prostate Gland and Zones on T2-Weighted MR Images

Diagnostics, 2021, Vol 11 (9), pp. 1690
Author(s): Mohammed R. S. Sunoqrot, Kirsten M. Selnæs, Elise Sandsmark, Sverre Langørgen, Helena Bertilsson, ...

Volume of interest segmentation is an essential step in computer-aided detection and diagnosis (CAD) systems. Deep learning (DL)-based methods provide good performance for prostate segmentation, but little is known about their reproducibility. In this work, an in-house dataset from 244 patients was used to investigate the intra-patient reproducibility of 14 shape features for DL-based segmentation of the whole prostate gland (WP), peripheral zone (PZ), and the remaining prostate zones (non-PZ) on T2-weighted (T2W) magnetic resonance (MR) images, compared to manual segmentations. The DL-based segmentation was performed using three convolutional neural networks (CNNs): V-Net, nnU-Net-2D, and nnU-Net-3D. The two-way random, single-score intra-class correlation coefficient (ICC) was used to measure the inter-scan reproducibility of each feature for each CNN and for the manual segmentation. We found that the reproducibility of the investigated methods is comparable to manual segmentation for all CNNs (14/14 features), except for V-Net in PZ (7/14 features). The ICC score for segmentation volume was 0.888, 0.607, 0.819, and 0.903 in PZ; 0.988, 0.967, 0.986, and 0.983 in non-PZ; and 0.982, 0.975, 0.973, and 0.984 in WP for manual, V-Net, nnU-Net-2D, and nnU-Net-3D, respectively. These results show the feasibility of embedding DL-based segmentation in CAD systems based on multiple T2W MR scans of the prostate, an important step towards clinical implementation.
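The reproducibility metric used here, the two-way random, single-score ICC (often written ICC(2,1)), can be computed directly from its two-way ANOVA mean squares. Below is a minimal NumPy sketch under that standard definition; the toy volume values are illustrative, not study data.

```python
# Minimal sketch of ICC(2,1): two-way random effects, absolute
# agreement, single score. Columns play the role of repeated MR scans.
import numpy as np

def icc_2_1(data):
    """data: array of shape (n_subjects, k_scans) of one shape feature."""
    n, k = data.shape
    grand = data.mean()
    ss_rows = k * ((data.mean(axis=1) - grand) ** 2).sum()
    ss_cols = n * ((data.mean(axis=0) - grand) ** 2).sum()
    ss_err = ((data - grand) ** 2).sum() - ss_rows - ss_cols
    ms_rows = ss_rows / (n - 1)              # between-subject mean square
    ms_cols = ss_cols / (k - 1)              # between-scan mean square
    ms_err = ss_err / ((n - 1) * (k - 1))    # residual mean square
    return (ms_rows - ms_err) / (
        ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n
    )

# Toy example: prostate volumes (ml) from two scans of five patients
volumes = np.array([[41.2, 40.8], [55.0, 56.1], [33.4, 34.0],
                    [62.3, 61.5], [47.8, 48.6]])
print(f"ICC(2,1) = {icc_2_1(volumes):.3f}")
```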

Author(s): Y. Yuan, W. Qin, M.K. Buyyounouski, S.L. Hancock, H.P. Bagshaw, ...

2021, Vol 11 (2), pp. 782
Author(s): Albert Comelli, Navdeep Dahiya, Alessandro Stefano, Federica Vernuccio, Marzia Portoghese, ...

Magnetic Resonance Imaging-based prostate segmentation is an essential task for adaptive radiotherapy and for radiomics studies that aim to identify associations between imaging features and patient outcomes. Because manual delineation is time-consuming, we present three deep learning (DL) approaches, namely UNet, efficient neural network (ENet), and efficient residual factorized convNet (ERFNet), aimed at fully automated, real-time, 3D delineation of the prostate gland on T2-weighted MRI. While UNet is used in many biomedical image delineation applications, ENet and ERFNet are mainly applied in self-driving cars, where they compensate for limited hardware while still achieving accurate segmentation. We apply these models to a limited set of 85 manual prostate segmentations using a k-fold validation strategy and the Tversky loss function, and compare their results. We find that ENet and UNet are more accurate than ERFNet, with ENet much faster than UNet. Specifically, ENet obtains a Dice similarity coefficient of 90.89% and a segmentation time of about 6 s using central processing unit (CPU) hardware, simulating real clinical conditions where a graphics processing unit (GPU) is not always available. In conclusion, ENet could be applied efficiently for prostate delineation even with small training datasets, with potential benefit for personalized patient management.
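The Tversky loss generalizes the Dice loss by weighting false positives and false negatives with parameters alpha and beta. A minimal PyTorch sketch is below; the alpha/beta values and tensor shapes are illustrative, not the paper's settings.

```python
# Minimal Tversky loss sketch for binary segmentation.
# With alpha = beta = 0.5 it reduces to 1 - Dice.
import torch

def tversky_loss(pred, target, alpha=0.3, beta=0.7, eps=1e-6):
    """pred: sigmoid probabilities; target: binary mask; both (N, 1, H, W, D)."""
    pred, target = pred.flatten(1), target.flatten(1)
    tp = (pred * target).sum(dim=1)          # true positives
    fp = (pred * (1 - target)).sum(dim=1)    # false positives
    fn = ((1 - pred) * target).sum(dim=1)    # false negatives
    tversky = (tp + eps) / (tp + alpha * fp + beta * fn + eps)
    return (1 - tversky).mean()

# Toy usage on random tensors standing in for network output and labels
pred = torch.rand(2, 1, 16, 16, 16)
target = (torch.rand(2, 1, 16, 16, 16) > 0.5).float()
print(tversky_loss(pred, target).item())
```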


2021, Vol 3 (3), pp. e200024
Author(s): Michelle Bardis, Roozbeh Houshyar, Chanon Chantaduly, Karen Tran-Harding, Alexander Ushinsky, ...

2021, Vol 11 (10), pp. 4573
Author(s): Mehmet A. Gulum, Christopher M. Trombley, Mehmed Kantardzic

In recent years, deep learning has demonstrated remarkable accuracy in analyzing images for cancer detection. The accuracy achieved rivals that of radiologists and is suitable for clinical use. However, these models are black-box algorithms and therefore intrinsically unexplainable. The resulting lack of trust and transparency creates a barrier to clinical implementation. Additionally, recent regulations restrict the deployment of unexplainable models in clinical settings, further underscoring the need for explainability. Recent studies attempt to address these issues by modifying deep learning architectures or providing post-hoc explanations. A review of the deep learning explanation literature focused on cancer detection using MR images is presented here. The gap between what clinicians deem explainable and what current methods provide is discussed, and suggestions for closing this gap are provided.
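As one concrete instance of the post-hoc explanation methods such a review covers, the sketch below computes a Grad-CAM heatmap for a CNN classifier. The torchvision ResNet-18 model, the choice of target layer, and the random input are illustrative assumptions, not any reviewed author's setup.

```python
# Minimal Grad-CAM sketch: weight the target layer's activations by the
# spatially averaged gradients of the top class score, then upsample.
import torch
import torch.nn.functional as F
from torchvision.models import resnet18  # stand-in classifier (torchvision >= 0.13)

model = resnet18(weights=None).eval()
acts, grads = {}, {}
layer = model.layer4  # last convolutional block, an illustrative choice

layer.register_forward_hook(lambda m, i, o: acts.update(v=o))
layer.register_full_backward_hook(lambda m, gi, go: grads.update(v=go[0]))

x = torch.randn(1, 3, 224, 224)   # random stand-in for an MR slice
score = model(x)[0].max()         # top-class logit
score.backward()

weights = grads["v"].mean(dim=(2, 3), keepdim=True)  # global-average gradients
cam = F.relu((weights * acts["v"]).sum(dim=1))       # weighted activation map
cam = F.interpolate(cam.unsqueeze(1), size=x.shape[2:], mode="bilinear")
print(cam.shape)  # (1, 1, 224, 224) saliency heatmap over the input
```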


2021, Vol 2021, pp. 1-9
Author(s): Huanyu Liu, Jiaqi Liu, Junbao Li, Jeng-Shyang Pan, Xiaqiong Yu

Magnetic resonance imaging (MRI) is widely used in the detection and diagnosis of diseases. High-resolution MR images help clinicians locate lesions and diagnose disease. However, acquiring high-resolution MR images requires high magnetic field strength and long scan times, which cause patient discomfort and readily introduce motion artifacts that degrade image quality. Hardware-based resolution improvements have thus reached a practical limit. Given this situation, a unified framework based on deep learning super resolution is proposed to transfer state-of-the-art deep learning methods from natural images to MRI super resolution. Compared with traditional super-resolution methods, deep learning methods have stronger feature extraction and representation ability, can learn prior knowledge from large amounts of sample data, and reconstruct images more stably and accurately. The proposed unified framework of deep learning-based MRI super resolution incorporates five current deep learning methods with the best super-resolution performance. In addition, a paired high/low-resolution MR image dataset at the ×2, ×3, and ×4 scales was constructed, covering four anatomical regions: skull, knee, breast, and head and neck. Experimental results show that the proposed framework reconstructs these data better than traditional methods and provides a standard dataset and experimental benchmark for deep learning super resolution on MR images.
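Paired high/low-resolution training data for ×2/×3/×4 super resolution are commonly synthesized by downsampling the high-resolution image. The paper does not specify its degradation model, so the bicubic downsampling in this sketch is an assumption, and the input slice is a random stand-in.

```python
# Minimal sketch of building LR/HR training pairs at several scales.
import numpy as np
from PIL import Image

def make_lr_hr_pair(hr, scale):
    """Crop so dimensions divide evenly, then bicubically downsample."""
    w, h = hr.size
    w, h = w - w % scale, h - h % scale
    hr = hr.crop((0, 0, w, h))
    lr = hr.resize((w // scale, h // scale), Image.BICUBIC)
    return lr, hr

# Random stand-in for a high-resolution MR slice
hr_slice = Image.fromarray(np.random.randint(0, 255, (240, 240), np.uint8))
for scale in (2, 3, 4):
    lr, hr = make_lr_hr_pair(hr_slice, scale)
    print(scale, lr.size, hr.size)
```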


2020
Author(s): Sahil S. Nalawade, Fang F. Yu, Chandan Ganesh Bangalore Yogananda, Gowtham K. Murugesan, Bhavya R. Shah, ...

Deep learning has shown promise for predicting glioma molecular profiles using MR images. Before clinical implementation, ensuring robustness to real-world problems, such as patient motion, is crucial. We sought to evaluate the effects of motion artifact on glioma marker classifier performance and develop a deep learning motion correction network to restore classification accuracies. T2w images and molecular information were retrieved from the TCIA and TCGA databases. Three-fold cross-validation was used to train and test the motion correction network on artifact-corrupted images. We then compared the performance of three glioma marker classifiers (IDH mutation, 1p/19q codeletion, and MGMT methylation) using motion-corrupted and motion-corrected images. Glioma marker classifier performance decreased markedly with increasing motion corruption. Applying motion correction effectively restored classification accuracy for even the most motion-corrupted images. For IDH classification, an accuracy of 99% was achieved, representing a new benchmark in non-invasive image-based IDH classification and exceeding the original performance of the network. Robust motion correction can enable high accuracy in deep learning MRI-based molecular marker classification rivaling tissue-based characterization.

Statement of Significance: Deep learning networks have shown promise for predicting molecular profiles of gliomas using MR images. We demonstrate that patient motion artifact, which is frequently encountered in the clinic, can significantly impair the performance of these algorithms. The application of robust motion correction algorithms can restore the performance of these networks, rivaling tissue-based characterization.
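Rigid patient motion shows up in MRI as phase inconsistencies across k-space lines, so a common way to synthesize artifact-corrupted training images is to perturb the phase of a subset of phase-encoding lines. The sketch below applies random in-plane translations via the Fourier shift theorem; this corruption model is an illustrative assumption, not the authors' exact pipeline.

```python
# Minimal sketch of motion-artifact simulation in k-space.
import numpy as np

def corrupt_with_motion(img, frac_lines=0.3, max_shift=4.0, seed=0):
    rng = np.random.default_rng(seed)
    k = np.fft.fftshift(np.fft.fft2(img))            # k-space of the slice
    ny, nx = img.shape
    kx = np.fft.fftshift(np.fft.fftfreq(nx))         # column spatial frequencies
    for row in rng.choice(ny, int(frac_lines * ny), replace=False):
        dx = rng.uniform(-max_shift, max_shift)      # random x-translation (pixels)
        k[row, :] *= np.exp(-2j * np.pi * kx * dx)   # shift-theorem phase ramp
    return np.abs(np.fft.ifft2(np.fft.ifftshift(k)))

slice_t2w = np.random.rand(128, 128)  # random stand-in for a T2w slice
corrupted = corrupt_with_motion(slice_t2w)
print(corrupted.shape)
```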


Author(s): Renato Cuocolo, Albert Comelli, Alessandro Stefano, Viviana Benfante, Navdeep Dahiya, ...

Author(s): Yongfeng Gao, Jiaxing Tan, Zhengrong Liang, Lihong Li, Yumei Huo

Computer-aided detection (CADe) of pulmonary nodules plays an important role in assisting radiologists' diagnosis and alleviating the interpretation burden for lung cancer. Current CADe systems, which aim to simulate radiologists' examination procedure, are built upon computed tomography (CT) images with feature extraction for detection and diagnosis. The CT image viewed by humans is reconstructed from the sinogram, the original raw data acquired by the CT scanner. In this work, unlike conventional image-based CADe systems, we propose a novel sinogram-based CADe system in which the full projection information is used to explore additional effective nodule features in the sinogram domain. Facing the challenges of limited research on this concept and unknown effective features in the sinogram domain, we design a new CADe system that utilizes the self-learning power of the convolutional neural network to learn and extract effective features from the sinogram. The proposed system was validated on 208 patient cases from the publicly available online Lung Image Database Consortium database, with each case having at least one juxtapleural nodule annotation. Experimental results showed that the proposed method achieved an area under the receiver operating characteristic curve (AUC) of 0.91 using the sinogram alone, compared with 0.89 using the CT image alone. Moreover, combining the sinogram and CT image further improved the AUC to 0.92. This study indicates that pulmonary nodule detection in the sinogram domain is feasible with deep learning.
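The sinogram is related to the image by the Radon transform, so when the scanner's raw projections are unavailable, a sinogram can be approximated by re-projecting a reconstructed slice. A minimal scikit-image sketch, using the Shepp-Logan phantom as a stand-in for a CT slice:

```python
# Minimal sketch: compute a sinogram from a reconstructed slice via the
# Radon transform. In the paper the sinogram is the scanner's raw data;
# re-projection, as here, is a common stand-in.
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon

image = shepp_logan_phantom()                     # stand-in for a CT slice
theta = np.linspace(0.0, 180.0, 180, endpoint=False)
sinogram = radon(image, theta=theta)              # rows: detector bins, cols: angles
print(image.shape, "->", sinogram.shape)          # (400, 400) -> (400, 180)
```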

