Deep Learning-Based Methods for Prostate Segmentation in Magnetic Resonance Imaging

2021 ◽  
Vol 11 (2) ◽  
pp. 782 ◽  
Author(s):  
Albert Comelli ◽  
Navdeep Dahiya ◽  
Alessandro Stefano ◽  
Federica Vernuccio ◽  
Marzia Portoghese ◽  
...  

Magnetic Resonance Imaging-based prostate segmentation is an essential task for adaptive radiotherapy and for radiomics studies whose purpose is to identify associations between imaging features and patient outcomes. Because manual delineation is a time-consuming task, we present three deep-learning (DL) approaches, namely UNet, efficient neural network (ENet), and efficient residual factorized convNet (ERFNet), to tackle the fully automated, real-time, 3D delineation of the prostate gland on T2-weighted MRI. While UNet is used in many biomedical image delineation applications, ENet and ERFNet are mainly applied in self-driving cars to compensate for limited hardware availability while still achieving accurate segmentation. We apply these models to a limited set of 85 manual prostate segmentations using a k-fold validation strategy and the Tversky loss function, and we compare their results. We find that ENet and UNet are more accurate than ERFNet, with ENet much faster than UNet. Specifically, ENet obtains a Dice similarity coefficient of 90.89% and a segmentation time of about 6 s using central processing unit (CPU) hardware to simulate real clinical conditions where a graphics processing unit (GPU) is not always available. In conclusion, ENet could be efficiently applied for prostate delineation even with small image training datasets, with potential benefit for personalized patient management.
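The abstract does not give implementation details for the loss; a minimal PyTorch sketch of a binary-segmentation Tversky loss, with illustrative (not paper-specified) alpha/beta weights, might look like this:

```python
import torch

def tversky_loss(pred, target, alpha=0.3, beta=0.7, eps=1e-6):
    """Tversky loss for binary segmentation.

    pred   -- predicted foreground probabilities, shape (N, 1, D, H, W)
    target -- binary ground-truth masks, same shape
    alpha  -- penalty weight for false positives (illustrative value)
    beta   -- penalty weight for false negatives (illustrative value)
    """
    pred = pred.reshape(pred.size(0), -1)
    target = target.reshape(target.size(0), -1)
    tp = (pred * target).sum(dim=1)            # soft true positives
    fp = (pred * (1.0 - target)).sum(dim=1)    # soft false positives
    fn = ((1.0 - pred) * target).sum(dim=1)    # soft false negatives
    tversky = (tp + eps) / (tp + alpha * fp + beta * fn + eps)
    return (1.0 - tversky).mean()              # average over the batch
```

With alpha = beta = 0.5 the Tversky index reduces to the Dice coefficient; the weighting actually used by the study is not stated in the abstract.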

2019 ◽  
Author(s):  
Wei Wang ◽  
Mingang Wang ◽  
Xiaofen Wu ◽  
Xie Ding ◽  
Xuexiang Cao ◽  
...  

Abstract Background: Automatic and detailed segmentation of the prostate using magnetic resonance imaging (MRI) plays an essential role in prostate imaging diagnosis. However, the complexity of the prostate gland hampers accurate segmentation from other tissues. Thus, we propose the automatic prostate segmentation method SegDGAN, which is based on a classic generative adversarial network (GAN) model. Methods: The proposed method comprises a fully convolutional generation network of densely connected blocks and a critic network with multi-scale feature extraction. In these computations, the objective function is optimized using the mean absolute error and the Dice coefficient, leading to improved segmentation accuracy and correspondence with the ground truth. The common and similar medical image segmentation networks U-Net, fully convolutional network (FCN), and SegAN were selected for qualitative and quantitative comparisons with SegDGAN using a 220-patient dataset and the publicly available PROMISE12 dataset. The commonly used segmentation evaluation metrics Dice similarity coefficient (DSC), volumetric overlap error (VOE), average surface distance (ASD), and Hausdorff distance (HD) were used to compare the segmentation accuracy of these methods. Results: SegDGAN achieved the highest DSC value of 91.66%, the lowest VOE value of 23.47%, and the lowest ASD value of 0.46 mm with the clinical dataset. In addition, the highest DSC value of 88.69%, the lowest VOE value of 23.47%, the lowest ASD value of 0.83 mm, and the lowest HD value of 11.40 mm were achieved with the PROMISE12 dataset. Conclusions: Our experimental results show that the SegDGAN model outperforms the other segmentation methods. Keywords: Automatic segmentation, Generative adversarial networks, Magnetic resonance imaging, Prostate
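The abstract only names the two loss terms; a simplified sketch of a combined mean-absolute-error plus Dice objective for the generator (not the exact multi-scale formulation of SegDGAN) could look like the following:

```python
import torch
import torch.nn.functional as F

def dice_coefficient(pred, target, eps=1e-6):
    """Soft Dice coefficient between a predicted probability map and a binary mask."""
    inter = (pred * target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

def generator_objective(pred_mask, true_mask, lam=1.0):
    """Simplified objective combining mean absolute error and a Dice term.

    lam weights the Dice term against the MAE term (illustrative value,
    not taken from the paper).
    """
    mae = F.l1_loss(pred_mask, true_mask)            # mean absolute error term
    dice_loss = 1.0 - dice_coefficient(pred_mask, true_mask)
    return mae + lam * dice_loss
```

Since the described critic performs multi-scale feature extraction, the MAE term in the actual model may be computed on critic features rather than directly on the masks; the sketch above keeps only the two ingredients named in the abstract.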


2021 ◽  
Vol 2021 ◽  
pp. 1-11
Author(s):  
Deqian Xin ◽  
Zhongzhe An ◽  
Juan Ding ◽  
Zhi Li ◽  
Leyan Qiao

This study aimed to explore the value of magnetic resonance imaging (MRI) features based on deep learning super-resolution algorithms in evaluating the brain-protective effect of propofol anesthesia in patients undergoing craniotomy evacuation of hematoma. An optimized super-resolution algorithm was obtained through a multiscale network reconstruction model built on the traditional algorithm. A total of 100 patients undergoing craniotomy evacuation of hematoma were recruited and divided into a sevoflurane control group and a propofol experimental group. Both groups were evaluated using diffusion tensor imaging (DTI) images based on the deep learning super-resolution algorithm. The results showed that the postoperative fractional anisotropy (FA) value of the corticospinal tract in the posterior limb of the internal capsule on the affected side in the experimental group was 0.67 ± 0.28. The National Institutes of Health Stroke Scale (NIHSS) score was 6.14 ± 3.29. The jugular venous oxygen saturation (SjvO2) at T4 and T5 was 61.93 ± 6.58% and 59.38 ± 6.2%, respectively, and the cerebral oxygen extraction rate (CO2ER) was 31.12 ± 6.07% and 35.83 ± 7.91%, respectively. The arterio-jugular venous oxygen content difference (Da-jvO2) at T3, T4, and T5 was 63.28 ± 10.15 mL/dL, 64.89 ± 13.11 mL/dL, and 66.03 ± 11.78 mL/dL, respectively. The neuron-specific enolase (NSE) and central nervous system-specific protein (S100β) levels at T5 were 53.85 ± 12.31 ng/mL and 7.49 ± 3.16 ng/mL, respectively. The experimental group had fewer postoperative complications than the control group under sevoflurane anesthesia, and the differences were statistically significant (P < 0.05). In conclusion, MRI images based on a deep learning super-resolution algorithm have great clinical value in evaluating the degree of brain injury in patients anesthetized with propofol and the protective effect of propofol on brain nerves.
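For reference, the FA value cited above is the standard fractional anisotropy derived from the three eigenvalues λ1, λ2, λ3 of the diffusion tensor (a general DTI definition, not a quantity introduced by this study):

```latex
\[
\mathrm{MD} = \frac{\lambda_1 + \lambda_2 + \lambda_3}{3}, \qquad
\mathrm{FA} = \sqrt{\frac{3}{2}}\;
\sqrt{\frac{(\lambda_1 - \mathrm{MD})^2 + (\lambda_2 - \mathrm{MD})^2 + (\lambda_3 - \mathrm{MD})^2}
           {\lambda_1^{2} + \lambda_2^{2} + \lambda_3^{2}}}
\]
```

FA ranges from 0 (isotropic diffusion) to 1 (diffusion along a single axis), so a lower postoperative FA in the corticospinal tract indicates greater loss of white-matter directional organization.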


2018 ◽  
Vol 2018 ◽  
pp. 1-7 ◽  
Author(s):  
Qiaoliang Li ◽  
Yuzhen Xu ◽  
Zhewei Chen ◽  
Dexiang Liu ◽  
Shi-Ting Feng ◽  
...  

Objectives. To evaluate the application of a deep learning architecture, based on the convolutional neural network (CNN) technique, for automatic tumor segmentation on magnetic resonance imaging (MRI) in nasopharyngeal carcinoma (NPC). Materials and Methods. In this prospective study, 87 MRI scans containing tumor regions were acquired from newly diagnosed NPC patients. These 87 scans were augmented to >60,000 images. The proposed CNN network is composed of two phases: feature representation and score map reconstruction. We designed a stepwise scheme to train our CNN network. To evaluate the performance of our method, we used case-by-case leave-one-out cross-validation (LOOCV). The ground truth of tumor contouring was acquired by the consensus of two experienced radiologists. Results. The mean values of the Dice similarity coefficient, percent match, and corresponding ratio obtained with our method were 0.89±0.05, 0.90±0.04, and 0.84±0.06, respectively, all of which were better than values reported in similar studies. Conclusions. We successfully established a segmentation method for NPC based on deep learning in contrast-enhanced magnetic resonance imaging. Further clinical trials with dedicated algorithms are warranted.
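Case-by-case LOOCV means each patient in turn is held out as the test case while the network is trained on the remaining 86; a minimal sketch using scikit-learn, with hypothetical case identifiers and placeholders for the actual training and evaluation calls:

```python
import numpy as np
from sklearn.model_selection import LeaveOneOut

# Hypothetical patient identifiers; the study used 87 NPC cases.
patient_ids = np.array([f"case_{i:03d}" for i in range(87)])

loo = LeaveOneOut()
scores = []
for train_idx, test_idx in loo.split(patient_ids):
    train_cases = patient_ids[train_idx]   # 86 cases used for training/augmentation
    test_case = patient_ids[test_idx][0]   # the single held-out case
    # model = train_cnn(train_cases)               # placeholder: train on 86 cases
    # scores.append(evaluate(model, test_case))    # placeholder: Dice, percent match, ...

# mean_dice = np.mean(scores)                      # aggregate across the 87 folds
```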


Sensors ◽  
2021 ◽  
Vol 21 (8) ◽  
pp. 2709
Author(s):  
Chih-Ching Lai ◽  
Hsin-Kai Wang ◽  
Fu-Nien Wang ◽  
Yu-Ching Peng ◽  
Tzu-Ping Lin ◽  
...  

The accuracy of diagnosing prostate cancer (PCa) has increased with the development of multiparametric magnetic resonance imaging (mpMRI). Biparametric magnetic resonance imaging (bpMRI) was found to have a diagnostic accuracy comparable to mpMRI in detecting PCa. However, prostate MRI assessment relies on human experts and specialized training, with considerable inter-reader variability. Deep learning may be a more robust approach for prostate MRI assessment. Here we present a method for autosegmenting the prostate zones and cancer region by using SegNet, a deep convolutional neural network (DCNN) model. We used the PROSTATEx dataset to train the model and combined different sequences into the three channels of a single image. For each subject, all slices that contained the transition zone (TZ), peripheral zone (PZ), and PCa region were selected. The datasets were produced using different combinations of images, including T2-weighted (T2W) images, diffusion-weighted images (DWI), and apparent diffusion coefficient (ADC) images. Among these groups, the T2W + DWI + ADC combination exhibited the best performance, with a Dice similarity coefficient of 90.45% for the TZ, 70.04% for the PZ, and 52.73% for the PCa region. Image sequence analysis with a DCNN model has the potential to assist PCa diagnosis.
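The abstract describes feeding three co-registered sequences as the three channels of one input image; a minimal NumPy sketch of that preprocessing step, with per-sequence min-max normalization as an assumed (not paper-specified) choice:

```python
import numpy as np

def stack_sequences(t2w, dwi, adc):
    """Combine co-registered T2W, DWI and ADC slices into one 3-channel image.

    Each input is a 2-D array of the same shape; each sequence is min-max
    normalized independently so the three channels share a [0, 1] range.
    """
    def normalize(img):
        img = img.astype(np.float32)
        return (img - img.min()) / (img.max() - img.min() + 1e-8)

    return np.stack([normalize(t2w), normalize(dwi), normalize(adc)], axis=-1)

# Example with random slices standing in for real, co-registered MRI data.
t2w, dwi, adc = (np.random.rand(256, 256) for _ in range(3))
three_channel = stack_sequences(t2w, dwi, adc)  # shape (256, 256, 3)
```

Stacking the sequences this way lets a standard RGB-input network such as SegNet consume multi-sequence MRI without architectural changes.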


2012 ◽  
Vol 12 (5) ◽  
pp. 331-339 ◽  
Author(s):  
Melania Costantini ◽  
Paolo Belli ◽  
Daniela Distefano ◽  
Enida Bufi ◽  
Marialuisa Di Matteo ◽  
...  
