TPSDicyc: Improved Deformation Invariant Cross-domain Medical Image Synthesis

Author(s): Chengjia Wang, Giorgos Papanastasiou, Sotirios Tsaftaris, Guang Yang, Calum Gray, ...
2021, Vol. 67, pp. 147-160

Author(s): Yin Xu, Yan Li, Byeong-Seok Shin

Abstract With recent advances in deep learning research, generative models have achieved great success and play an increasingly important role in industrial applications. At the same time, technologies derived from generative methods, such as style transfer and image synthesis, are being widely discussed by researchers. In this work, we treat generative methods as a possible solution to medical image augmentation. We propose a context-aware generative framework that can change the gray scale of CT scans with almost no semantic loss. By producing target images with a specific style/distribution and adding them to the training set, we greatly increase the robustness of the segmentation model. In addition, we improve pixel-level spine segmentation accuracy by 2-4% over the original U-Net. Lastly, we compare the images generated when using different feature extractors (VGG, ResNet, and DenseNet) and provide a detailed analysis of their style-transfer performance.
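The feature-extractor comparison in the abstract above corresponds to the standard perceptual style-transfer setup, in which style is measured by Gram matrices of CNN feature maps. Below is a minimal PyTorch sketch of a Gram-matrix style loss with a VGG-19 backbone; the tapped layer indices, the equal layer weighting, and the class and helper names are illustrative assumptions, not the authors' exact configuration.

import torch
import torch.nn.functional as F
from torchvision.models import vgg19

# Indices into vgg19().features after which style features are tapped
# (approximately relu1_1, relu2_1, relu3_1, relu4_1). Assumed, not from the paper.
STYLE_LAYERS = (1, 6, 11, 20)

def gram_matrix(feat):
    # feat: (B, C, H, W) -> (B, C, C) channel-correlation ("style") matrix
    b, c, h, w = feat.shape
    f = feat.view(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

class StyleLoss(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.features = vgg19(weights="IMAGENET1K_V1").features.eval()
        for p in self.features.parameters():
            p.requires_grad_(False)

    def forward(self, generated, style_target):
        # Run both images through the VGG conv stack and accumulate the
        # Gram-matrix MSE at each tapped layer, weighted equally.
        loss, x, y = 0.0, generated, style_target
        for i, layer in enumerate(self.features):
            x, y = layer(x), layer(y)
            if i in STYLE_LAYERS:
                loss = loss + F.mse_loss(gram_matrix(x), gram_matrix(y))
        return loss

For single-channel CT slices, inputs would be repeated to three channels (x.repeat(1, 3, 1, 1)) and normalized with ImageNet statistics before being fed to the network; swapping vgg19 for a ResNet or DenseNet extractor, as the abstract describes, changes only which intermediate activations are tapped.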


2018, Vol. 65 (12), pp. 2720-2730
Author(s): Dong Nie, Roger Trullo, Jun Lian, Li Wang, Caroline Petitjean, ...

2020, Vol. 2020, pp. 1-9
Author(s): Yafen Li, Wen Li, Jing Xiong, Jun Xia, Yaoqin Xie

Cross-modality medical image synthesis between magnetic resonance (MR) and computed tomography (CT) images has attracted increasing attention in many medical imaging areas. Many deep learning methods have been used to generate pseudo-MR/CT images from images of the counterpart modality. In this study, we used U-Net and Cycle-Consistent Adversarial Networks (CycleGAN), typical supervised and unsupervised deep learning methods, respectively, to transform MR/CT images to their counterpart modality. Experimental results show that synthetic images predicted by the U-Net method achieved lower mean absolute error (MAE), higher structural similarity index (SSIM), and higher peak signal-to-noise ratio (PSNR) in both directions of CT/MR synthesis, especially for synthetic CT image generation. Although the synthetic images produced by the U-Net method have less contrast information than those produced by CycleGAN, their pixel-value profiles are closer to the ground-truth images. This work demonstrates that the supervised deep learning method outperforms the unsupervised one in accuracy for MR/CT synthesis tasks.
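The MAE/SSIM/PSNR comparison described above is straightforward to reproduce for any paired synthesis result. Below is a small sketch using NumPy and scikit-image; the function name and the data_range convention (taken from the ground-truth intensity span) are assumptions, since the paper does not publish its evaluation code.

import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def synthesis_metrics(pred, gt, data_range=None):
    """MAE, SSIM and PSNR between a synthetic slice and its registered ground truth."""
    pred = pred.astype(np.float64)
    gt = gt.astype(np.float64)
    if data_range is None:
        # Assumed convention: use the ground-truth intensity span.
        data_range = float(gt.max() - gt.min())
    mae = float(np.mean(np.abs(pred - gt)))
    ssim = structural_similarity(gt, pred, data_range=data_range)
    psnr = peak_signal_noise_ratio(gt, pred, data_range=data_range)
    return mae, ssim, psnr

Called slice by slice on registered pseudo-CT/ground-truth pairs (and likewise in the MR direction), this yields the per-image scores that would then be averaged over a test set to compare the U-Net and CycleGAN outputs.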

