User-controllable Multi-texture Synthesis with Generative Adversarial Networks

Author(s): Aibek Alanov, Max Kochurov, Denis Volkhonskiy, Daniil Yashkov, Evgeny Burnaev, et al.
2018, Vol 47 (2), pp. 203005
Author(s): Yu Siquan (余思泉), Han Zhi (韩志), Tang Yandong (唐延东), Wu Chengdong (吴成东)
2020, Vol 34 (07), pp. 12894-12901
Author(s): Yicheng Zhang, Lei Li, Li Song, Rong Xie, Wenjun Zhang

Clothing transfer is a challenging computer vision task whose goal is to transfer the clothing style of a person in an input image, conditioned on a given language description. However, existing approaches, which rely on a conventional fully convolutional generator, have limited ability in delicate colorization and texture synthesis. To tackle this problem, we propose a novel semantic-based Fused Attention model for Clothing Transfer (FACT) that enables fine-grained synthesis, high global consistency, and plausible hallucination in images. Towards this end, we incorporate two spatial-level attention modules: (i) soft attention, which searches for the sentence words most relevant to each spatial position, and (ii) self-attention, which models long-range dependencies on feature maps. Furthermore, we develop a stylized channel-wise attention module to capture correlations at the feature level. We effectively fuse these attention modules in the generator and achieve better performance than the state-of-the-art method on the DeepFashion dataset. Qualitative and quantitative comparisons against the baselines demonstrate the effectiveness of our approach.
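To make the three attention mechanisms concrete, the following is a minimal PyTorch sketch of modules of the kind the abstract describes. The class names, tensor shapes, projection sizes, and the residual/gating choices are illustrative assumptions, not the paper's actual FACT architecture; the self-attention variant follows the common SAGAN formulation.

    # Illustrative sketch only; module names and shapes are assumptions,
    # not the published FACT implementation.
    import torch
    import torch.nn as nn

    class SoftWordAttention(nn.Module):
        """Soft attention: each spatial position attends over sentence words."""
        def __init__(self, feat_dim, word_dim):
            super().__init__()
            self.proj = nn.Linear(word_dim, feat_dim)  # map words into feature space

        def forward(self, feat, words):
            # feat: (B, C, H, W) image features; words: (B, T, D) word embeddings
            B, C, H, W = feat.shape
            w = self.proj(words)                        # (B, T, C)
            q = feat.flatten(2).transpose(1, 2)         # (B, HW, C) per-position queries
            attn = torch.softmax(q @ w.transpose(1, 2) / C ** 0.5, dim=-1)  # (B, HW, T)
            ctx = attn @ w                              # (B, HW, C) word context per position
            return ctx.transpose(1, 2).view(B, C, H, W)

    class SelfAttention(nn.Module):
        """Self-attention over feature maps: long-range spatial dependencies."""
        def __init__(self, ch):
            super().__init__()
            self.q = nn.Conv2d(ch, ch // 8, 1)
            self.k = nn.Conv2d(ch, ch // 8, 1)
            self.v = nn.Conv2d(ch, ch, 1)
            self.gamma = nn.Parameter(torch.zeros(1))   # learned residual weight

        def forward(self, x):
            B, C, H, W = x.shape
            q = self.q(x).flatten(2).transpose(1, 2)    # (B, HW, C/8)
            k = self.k(x).flatten(2)                    # (B, C/8, HW)
            v = self.v(x).flatten(2)                    # (B, C, HW)
            attn = torch.softmax(q @ k, dim=-1)         # (B, HW, HW) position-to-position
            out = (v @ attn.transpose(1, 2)).view(B, C, H, W)
            return self.gamma * out + x

    class ChannelAttention(nn.Module):
        """Stylized channel-wise attention: gates channels from a style vector."""
        def __init__(self, ch, style_dim):
            super().__init__()
            self.fc = nn.Sequential(nn.Linear(style_dim, ch), nn.Sigmoid())

        def forward(self, x, style):
            # style: (B, style_dim) pooled sentence/style code (an assumption here)
            gate = self.fc(style).unsqueeze(-1).unsqueeze(-1)  # (B, C, 1, 1)
            return x * gate

    # Usage sketch: fusing the modules inside one generator block.
    feat = torch.randn(2, 64, 32, 32)                   # image features
    words = torch.randn(2, 12, 256)                     # word embeddings
    style = words.mean(dim=1)                           # crude pooled style code
    feat = feat + SoftWordAttention(64, 256)(feat, words)
    feat = SelfAttention(64)(feat)
    feat = ChannelAttention(64, 256)(feat, style)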

