Painting style transfer and 3D interaction by model-based image synthesis

Author(s):
Tian-Ding Chen

Author(s):
Yin Xu
Yan Li
Byeong-Seok Shin

Abstract With recent advances in deep learning research, generative models have achieved remarkable results and play an increasingly important role in industrial applications. At the same time, technologies derived from generative methods, such as style transfer and image synthesis, are widely discussed among researchers. In this work, we treat generative methods as a possible solution to medical image augmentation. We propose a context-aware generative framework that can change the gray scale of CT scans with almost no semantic loss. By producing target images with a specific style/distribution and adding the generated images to the training set, we greatly increase the robustness of the segmentation model. We also improve pixel segmentation accuracy by 2–4% over the original U-Net on spine segmentation. Lastly, we compare the images generated when using different feature extractors (VGG, ResNet, and DenseNet) and analyze their style transfer performance in detail.
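Comparisons between feature extractors in style transfer typically come down to how each backbone's activations feed a Gram-matrix style loss. As a minimal NumPy sketch (illustrative function names, not the authors' implementation; the backbone producing the feature maps is assumed to be external):

```python
import numpy as np

def gram_matrix(features):
    """Channel-wise Gram matrix of a (C, H, W) feature map,
    normalized by the number of spatial positions."""
    c, h, w = features.shape
    flat = features.reshape(c, h * w)
    return flat @ flat.T / (h * w)

def style_loss(feats_a, feats_b):
    """Mean squared difference between Gram matrices; the same loss
    applies whichever backbone (VGG, ResNet, DenseNet) produced the
    activations -- only the feature maps differ."""
    ga, gb = gram_matrix(feats_a), gram_matrix(feats_b)
    return float(np.mean((ga - gb) ** 2))
```

Identical feature maps give a loss of zero; comparing extractors then amounts to feeding each backbone's activations through the same loss.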


2020
Vol 2020
pp. 1-12
Author(s):
Hyunhee Lee
Jaechoon Jo
Heuiseok Lim

Due to institutional and privacy issues, medical imaging research is confronted with serious data scarcity. Image synthesis using generative adversarial networks provides a generic solution to the lack of medical imaging data. We synthesize high-quality brain tumor-segmented MR images through two tasks: synthesis and segmentation. We performed experiments with two different generative networks: the first uses the ResNet model, which has significant advantages in style transfer, and the second uses the U-Net model, one of the most powerful models for segmentation. We compare the performance of each model and propose a more robust model for synthesizing brain tumor-segmented MR images. Although ResNet produced better-quality images than U-Net for the same samples, it used a great deal of memory and took much longer to train. U-Net, meanwhile, segmented brain tumors more accurately than ResNet.
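Claims like "segmented more accurately" rest on standard mask metrics. A minimal NumPy sketch of two common ones, pixel accuracy and the Dice coefficient (illustrative names, not the authors' evaluation code):

```python
import numpy as np

def pixel_accuracy(pred, target):
    """Fraction of pixels whose predicted label matches the ground truth."""
    pred, target = np.asarray(pred), np.asarray(target)
    return float((pred == target).mean())

def dice_score(pred, target, eps=1e-7):
    """Dice coefficient for binary masks: 2|A & B| / (|A| + |B|).
    eps guards against division by zero when both masks are empty."""
    pred, target = np.asarray(pred, bool), np.asarray(target, bool)
    inter = np.logical_and(pred, target).sum()
    return float((2 * inter + eps) / (pred.sum() + target.sum() + eps))
```

Dice is usually preferred over raw pixel accuracy for tumor masks, since the foreground class is small and accuracy is dominated by background pixels.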


1991
Vol 22 (13)
pp. 59-69
Author(s):
Shigeo Morishima
Seiji Kobayashi
Hiroshi Harashima

Author(s):
Shugo Yamaguchi
Takuya Kato
Tsukasa Fukusato
Chie Furusawa
Shigeo Morishima

2019
Vol 31 (5)
pp. 808
Author(s):
Yizhen Chen
Yuanyuan Pu
Dan Xu
Wenwu Yang
Wenhua Qian
...
