Generating synthetic CTs from magnetic resonance images using generative adversarial networks

2018 ◽  
Vol 45 (8) ◽  
pp. 3627-3636 ◽  
Author(s):  
Hajar Emami ◽  
Ming Dong ◽  
Siamak P. Nejad-Davarani ◽  
Carri K. Glide-Hurst
2019 ◽  
Author(s):  
Wei Wang ◽  
Mingang Wang ◽  
Xiaofen Wu ◽  
Xie Ding ◽  
Xuexiang Cao ◽  
...  

Abstract Background: Automatic and detailed segmentation of the prostate using magnetic resonance imaging (MRI) plays an essential role in prostate imaging diagnosis. However, the complexity of the prostate gland hampers accurate segmentation from other tissues. Thus, we propose the automatic prostate segmentation method SegDGAN, which is based on a classic generative adversarial network (GAN) model. Methods: The proposed method comprises a fully convolutional generator network of densely connected blocks and a critic network with multi-scale feature extraction. In these computations, the objective function is optimized using the mean absolute error and the Dice coefficient, improving the accuracy of the segmentation results and their correspondence with the ground truth. The common and closely related medical image segmentation networks U-Net, the fully convolutional network, and SegAN were selected for qualitative and quantitative comparisons with SegDGAN using a 220-patient dataset and the publicly available dataset PROMISE12. The commonly used segmentation evaluation metrics Dice similarity coefficient (DSC), volumetric overlap error (VOE), average surface distance (ASD), and Hausdorff distance (HD) were used to compare the segmentation accuracy of these methods. Results: SegDGAN achieved the highest DSC value of 91.66%, the lowest VOE value of 23.47%, and the lowest ASD value of 0.46 mm with the clinical dataset. In addition, the highest DSC value of 88.69%, the lowest VOE value of 23.47%, the lowest ASD value of 0.83 mm, and the lowest HD value of 11.40 mm were achieved with the PROMISE12 dataset. Conclusions: Our experimental results show that the SegDGAN model outperforms other segmentation methods. Keywords: Automatic segmentation, Generative adversarial networks, Magnetic resonance imaging, Prostate
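The DSC and VOE metrics named in the abstract have standard set-overlap definitions; the abstract gives no implementation details, so the following is only a minimal numpy sketch of those standard formulas, with small hypothetical 2D masks standing in for the 3D MRI volumes actually evaluated:

```python
import numpy as np

def dice_coefficient(pred, gt):
    """Dice similarity coefficient: DSC = 2|A ∩ B| / (|A| + |B|) for binary masks."""
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum())

def volumetric_overlap_error(pred, gt):
    """Volumetric overlap error: VOE = 1 - |A ∩ B| / |A ∪ B|, as a percentage."""
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return 100.0 * (1.0 - inter / union)

# Toy example: two overlapping 4x4 squares (16 voxels each, 9 shared).
pred = np.zeros((8, 8), dtype=bool); pred[2:6, 2:6] = True
gt   = np.zeros((8, 8), dtype=bool); gt[3:7, 3:7] = True
print(dice_coefficient(pred, gt))          # 2*9 / (16+16) = 0.5625
print(volumetric_overlap_error(pred, gt))  # 100 * (1 - 9/23)
```

ASD and HD additionally require extracting surface voxels and computing point-set distances, which is omitted here.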


2020 ◽  
Author(s):  
Wei Wang ◽  
Mingang Wang ◽  
Xiaofen Wu ◽  
Xie Ding ◽  
Xuexiang Cao ◽  
...  



2021 ◽  
Vol 70 ◽  
pp. 1-9
Author(s):  
Wei Wang ◽  
Gangmin Wang ◽  
Xiaofen Wu ◽  
Xie Ding ◽  
Xuexiang Cao ◽  
...  

2020 ◽  
Author(s):  
Sean Benson ◽  
Regina Beets-Tan

Abstract Generative adversarial networks (GANs) are a powerful tool for correcting image aberrations and even for generating entirely synthetic images. We describe and demonstrate a method that uses GANs trained on multi-modal magnetic resonance images supplied as a 3-channel input. The generative network was trained using only healthy images together with pseudo-random irregular masks. The dataset consisted of just 20 people. The resulting model was then used to detect anomalies in real patient images, where the anomaly was a tumour. The search was performed with no prior knowledge of the tumour location, or of whether a tumour was present at all. The resulting accuracies are observed to vary significantly with the size of the anomaly. The area under the receiver operating characteristic curve is observed to be greater than 0.75 for anomaly sizes greater than 4 cm².
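The abstract summarizes detection performance as an area under the ROC curve; the scoring pipeline itself is not specified, so the sketch below only illustrates how an AUC is computed from hypothetical per-image anomaly scores (e.g. a masked reconstruction error), using the rank-statistic formulation (the probability that a random positive outscores a random negative):

```python
import numpy as np

def auc_score(scores, labels):
    """AUC via the Mann-Whitney U statistic: fraction of (positive, negative)
    pairs where the positive case receives the higher score; ties count half."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=bool)
    pos, neg = scores[labels], scores[~labels]
    wins = (pos[:, None] > neg[None, :]).sum() + 0.5 * (pos[:, None] == neg[None, :]).sum()
    return wins / (len(pos) * len(neg))

# Hypothetical anomaly scores; label 1 = tumour present.
scores = [0.9, 0.8, 0.4, 0.35, 0.1]
labels = [1, 1, 0, 1, 0]
print(auc_score(scores, labels))  # 5 of 6 pairs ranked correctly -> 5/6
```

An AUC of 0.5 corresponds to chance-level ranking, so the reported values above 0.75 indicate that reconstruction-based scores separate tumour from non-tumour cases well for sufficiently large anomalies.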

