Mask Embedding for Realistic High-Resolution Medical Image Synthesis

Author(s): Yinhao Ren ◽ Zhe Zhu ◽ Yingzhou Li ◽ Dehan Kong ◽ Rui Hou ◽ ...
2020 ◽ Vol 30 (4) ◽ pp. 305-314

Author(s): Lukas Fetty ◽ Mikael Bylund ◽ Peter Kuess ◽ Gerd Heilemann ◽ Tufve Nyholm ◽ ...

Author(s): Yin Xu ◽ Yan Li ◽ Byeong-Seok Shin

Abstract With recent advances in deep learning research, generative models have made great strides and play an increasingly important role in current industrial applications. At the same time, technologies derived from generative methods, such as style transfer and image synthesis, are widely discussed in research. In this work, we treat generative methods as a possible solution to medical image augmentation. We propose a context-aware generative framework that can successfully change the gray scale of CT scans with almost no semantic loss. By producing target images with a specific style and distribution, we greatly increase the robustness of the segmentation model after adding the generated images to the training set. Moreover, we improve pixel-level spine segmentation accuracy by 2-4% over the original U-Net. Lastly, we compare the images generated with different feature extractors (VGG, ResNet, and DenseNet) and analyze their style-transfer performance in detail.
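The feature-extractor comparison can be made concrete with a short sketch. The following PyTorch snippet is a minimal illustration assuming torchvision backbones (vgg16, resnet18, densenet121) and a standard Gram-matrix style loss; the layer cutoffs, loss weighting, and function names are illustrative assumptions, not the authors' exact configuration.

import torch
import torch.nn as nn
from torchvision import models   # torchvision >= 0.13 for the weights= argument

def make_extractor(name: str) -> nn.Module:
    """Return a frozen convolutional trunk to use as a feature extractor."""
    if name == "vgg":
        trunk = models.vgg16(weights=None).features[:16]          # up to relu3_3
    elif name == "resnet":
        m = models.resnet18(weights=None)
        trunk = nn.Sequential(m.conv1, m.bn1, m.relu, m.maxpool,
                              m.layer1, m.layer2)
    elif name == "densenet":
        m = models.densenet121(weights=None)
        trunk = nn.Sequential(*list(m.features.children())[:6])   # up to transition1
    else:
        raise ValueError(f"unknown extractor: {name}")
    for p in trunk.parameters():          # the extractor stays fixed during training
        p.requires_grad_(False)
    return trunk.eval()

def gram(feat: torch.Tensor) -> torch.Tensor:
    """Gram matrix of a feature map, the usual style representation."""
    b, c, h, w = feat.shape
    f = feat.reshape(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

def style_content_loss(extractor, generated, content_ref, style_ref, style_weight=1e3):
    """Feature-space content loss plus Gram-based style loss."""
    fg = extractor(generated)
    fc = extractor(content_ref)
    fs = extractor(style_ref)
    content = torch.mean((fg - fc) ** 2)
    style = torch.mean((gram(fg) - gram(fs)) ** 2)
    return content + style_weight * style

# Usage: the same batch of CT slices (grayscale copied to 3 channels),
# scored under each extractor so the comparison varies only in the trunk.
gen, content, style = (torch.rand(2, 3, 224, 224) for _ in range(3))
for name in ("vgg", "resnet", "densenet"):
    print(name, float(style_content_loss(make_extractor(name), gen, content, style)))

Swapping only make_extractor while holding the loss fixed is what allows a fair comparison of VGG, ResNet, and DenseNet, since each trunk then plays the same role in the objective.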


2018 ◽ Vol 468 ◽ pp. 142-154
Author(s): Hui Liu ◽ Jun Xu ◽ Yan Wu ◽ Qiang Guo ◽ Bulat Ibragimov ◽ ...

2020 ◽ Vol 2020 ◽ pp. 1-10
Author(s): Linyan Li ◽ Yu Sun ◽ Fuyuan Hu ◽ Tao Zhou ◽ Xuefeng Xi ◽ ...

In this paper, we propose an Attentional Concatenation Generative Adversarial Network (ACGAN) that aims to generate 1024 × 1024 high-resolution images. First, we propose a multilevel cascade structure for text-to-image synthesis. During training, we gradually add new layers and, at the same time, feed the outputs and word vectors from the previous layer as inputs to the next layer, generating high-resolution images with photo-realistic details. Second, we introduce the deep attentional multimodal similarity model into the network, matching word vectors with images in a common semantic space to compute a fine-grained matching loss for training the generator. In this way, the model attends to fine-grained, word-level semantic information. Finally, a diversity measure is added to the discriminator, which gives the generator more diverse gradient directions and improves the diversity of generated samples. Experimental results show that the inception scores of the proposed model on the CUB and Oxford-102 datasets reach 4.48 and 4.16, improvements of 2.75% and 6.42% over Attentional Generative Adversarial Networks (AttnGAN). The ACGAN model thus performs better at text-to-image generation, producing images closer to real ones.
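The fine-grained matching loss can be illustrated with a short sketch. The PyTorch snippet below is a minimal DAMSM-style word-region matching loss in the spirit described above; the dimensions, the temperature gamma, and the cosine-similarity scoring are illustrative assumptions rather than the paper's exact formulation.

import torch
import torch.nn.functional as F

def word_region_attention(words: torch.Tensor, regions: torch.Tensor) -> torch.Tensor:
    """words: (B, T, D) word embeddings; regions: (B, R, D) image region features.
    Each word attends over the image regions; returns a (B, T, D) region context."""
    attn = torch.bmm(words, regions.transpose(1, 2))   # (B, T, R) word-region scores
    attn = F.softmax(attn, dim=-1)                     # attention over regions
    return torch.bmm(attn, regions)                    # region context per word

def matching_loss(words: torch.Tensor, regions: torch.Tensor, gamma: float = 10.0) -> torch.Tensor:
    """Contrastive matching: each caption should score highest on its own image."""
    B = words.size(0)
    scores = torch.empty(B, B)
    for i in range(B):                                 # caption i scored against image j
        for j in range(B):
            ctx = word_region_attention(words[i:i + 1], regions[j:j + 1])
            # sentence-image score = mean word/context cosine similarity
            scores[i, j] = F.cosine_similarity(words[i:i + 1], ctx, dim=-1).mean()
    labels = torch.arange(B)                           # the diagonal is the true pairing
    return F.cross_entropy(gamma * scores, labels)

# Usage with toy shapes: 4 captions of 12 words, 4 images with 49 regions (7x7 grid).
words = torch.randn(4, 12, 256)
regions = torch.randn(4, 49, 256)
print(float(matching_loss(words, regions)))

Because the loss is contrastive across the batch, each caption is pushed to score highest against its own image, which is what lets word-level semantics guide the generator.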

