domain adaption
Recently Published Documents

TOTAL DOCUMENTS: 71 (five years: 55)
H-INDEX: 5 (five years: 3)

2021
Author(s): Zehua Xuan, Chang-E Ren, Zhiping Shi, Yong Guan

2021, pp. 115581
Author(s): Yingdong Wang, Jiatong Liu, Qunsheng Ruan, Shuocheng Wang, Chen Wang

Author(s): Amey Thakur, Mega Satish

This paper demonstrates the efficiency of the Adversarial Open Domain Adaption framework for sketch-to-photo synthesis. Unsupervised open-domain adaption for generating realistic photos from hand-drawn sketches is challenging when no sketches of a given class exist in the training data. The absence of learning supervision and the large domain gap between the freehand-drawing and photo domains make the task hard. We present an approach that jointly learns sketch-to-photo and photo-to-sketch generation, so that the missing freehand drawings can be synthesised from photos. Because of the domain gap between these synthetic sketches and genuine ones, a generator trained on fake drawings may produce unsatisfactory results when handling drawings of the missing classes. To address this problem, we offer a simple but effective open-domain sampling and optimisation strategy that "tricks" the generator into treating fake drawings as genuine. Our approach generalises the learnt sketch-to-photo and photo-to-sketch mappings from in-domain inputs to open-domain categories. We compared our technique with the most recent competing methods on the Scribble and SketchyCOCO datasets. For many types of open-domain drawings, our model achieves impressive results, synthesising accurate colour and substance while retaining the structural layout.
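The "trick" described above — randomly relabelling synthesised sketches as genuine so the generator also learns from them — can be illustrated with a minimal toy sketch. This is an assumption-laden simplification, not the paper's actual training code: the function name `open_domain_batch`, the string labels, and the single `trick_prob` mixing parameter are all hypothetical stand-ins for the framework's real sampling and optimisation machinery.

```python
import random

def open_domain_batch(real_sketches, fake_sketches, trick_prob=0.5):
    """Assemble a training batch in which synthesised (fake) sketches are
    randomly relabelled as 'real' with probability trick_prob, so downstream
    training treats them like genuine freehand drawings.

    Illustrative only: stands in for the paper's open-domain sampling step.
    """
    batch = [(s, "real") for s in real_sketches]
    for s in fake_sketches:
        # The "trick": with probability trick_prob, present a fake sketch
        # as if it were a genuine one.
        label = "real" if random.random() < trick_prob else "fake"
        batch.append((s, label))
    random.shuffle(batch)
    return batch
```

With `trick_prob=1.0` every synthetic sketch is presented as genuine; with `trick_prob=0.0` the split is fully supervised. In the actual framework this mixing is what lets the sketch-to-photo mapping generalise to classes whose sketches never appear in the training data.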

