Show, Adapt and Tell: Adversarial Training of Cross-Domain Image Captioner

Author(s):  
Tseng-Hung Chen ◽  
Yuan-Hong Liao ◽  
Ching-Yao Chuang ◽  
Wan-Ting Hsu ◽  
Jianlong Fu ◽  
...  

Author(s):  
Aibo Guo ◽  
Xinyi Li ◽  
Ning Pang ◽  
Xiang Zhao

Community Q&A forums are a special type of social media that provide a platform for participants to raise questions and to answer them, facilitating online information sharing. Currently, community Q&A forums in professional domains have attracted a large number of users by offering professional knowledge. To support information access and save users the effort of raising new questions, these forums usually come with a question retrieval function, which retrieves existing questions (and their answers) similar to a user’s query. However, it is difficult for community Q&A forums to cover all domains, especially those that have emerged recently, with little labeled data but great discrepancy from existing domains. We refer to this scenario as cross-domain question retrieval. To handle its unique challenges, we design a model based on adversarial training, namely X-QR, which consists of two modules: a domain discriminator and a sentence matcher. The domain discriminator aims to align the source and target data distributions and unify the feature space through domain-adversarial training. With the assistance of the domain discriminator, the sentence matcher is able to learn domain-consistent knowledge for the final matching prediction. To the best of our knowledge, this work is among the first to investigate the domain adaptation problem of sentence matching for community Q&A question retrieval. The experimental results suggest that the proposed X-QR model outperforms conventional sentence matching methods on cross-domain community Q&A tasks.
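The abstract leaves implementation details open; purely as a hypothetical sketch of the domain-adversarial idea behind the discriminator module (not the authors’ X-QR code), the discriminator descends its domain-classification loss while the feature side ascends that same loss via gradient reversal, pushing source and target representations toward a unified space. The scalar setup and names below are illustrative assumptions:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def domain_adversarial_step(feat, domain_label, w, lam, lr=0.1):
    """One step of domain-adversarial training on a scalar feature.

    The discriminator weight `w` descends its binary cross-entropy loss,
    while the feature is updated with the *reversed* (ascending) gradient,
    pushing the two domains toward indistinguishable representations.
    `lam` trades off the adversarial term, as in gradient-reversal training.
    """
    p = sigmoid(w * feat)                   # discriminator's P(domain = 1)
    grad_w = (p - domain_label) * feat      # dLoss/dw for BCE on logit w*feat
    grad_feat = (p - domain_label) * w      # dLoss/dfeat
    w_new = w - lr * grad_w                 # discriminator: minimize loss
    feat_new = feat + lr * lam * grad_feat  # feature side: gradient reversal
    return feat_new, w_new
```

In this toy setting, the discriminator update sharpens its domain prediction while the reversed feature update moves the feature toward the decision boundary, i.e. toward domain confusion.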


2020 ◽  
Vol 380 ◽  
pp. 125-132
Author(s):  
Yuhu Shan ◽  
Chee Meng Chew ◽  
Wen Feng Lu

2020 ◽  
Vol 34 (04) ◽  
pp. 4028-4035 ◽  
Author(s):  
Aditya Grover ◽  
Christopher Chute ◽  
Rui Shu ◽  
Zhangjie Cao ◽  
Stefano Ermon

Given datasets from multiple domains, a key challenge is to efficiently exploit these data sources for modeling a target domain. Variants of this problem have been studied in many contexts, such as cross-domain translation and domain adaptation. We propose AlignFlow, a generative modeling framework that models each domain via a normalizing flow. The use of normalizing flows allows for (a) flexibility in specifying learning objectives via adversarial training, maximum likelihood estimation, or a hybrid of the two; and (b) learning and exact inference of a shared representation in the latent space of the generative model. We derive a uniform set of conditions under which AlignFlow is marginally consistent for the different learning objectives. Furthermore, we show that AlignFlow guarantees exact cycle consistency in mapping datapoints from a source domain to a target domain and back. Empirically, AlignFlow outperforms relevant baselines on image-to-image translation and unsupervised domain adaptation, and can be used to simultaneously interpolate across the various domains using the learned representation.
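As a toy illustration of why flow-based modeling yields exact cycle consistency (this is not the AlignFlow implementation; the 1-D affine flows and names below are invented for exposition): each domain is modeled by an invertible map to a shared latent space, and cross-domain translation composes one flow’s inverse with another’s forward map, so a round trip recovers the input exactly.

```python
class AffineFlow:
    """A 1-D affine flow: z = (x - b) / a, with exact inverse x = a*z + b."""

    def __init__(self, a, b):
        assert a != 0.0
        self.a, self.b = a, b

    def to_latent(self, x):
        return (x - self.b) / self.a

    def from_latent(self, z):
        return self.a * z + self.b

def translate(x, src, tgt):
    """Map a source-domain point to the target domain via the shared latent."""
    return tgt.from_latent(src.to_latent(x))

flow_a = AffineFlow(2.0, 1.0)   # toy flow for domain A
flow_b = AffineFlow(0.5, -3.0)  # toy flow for domain B

x = 5.0
y = translate(x, flow_a, flow_b)        # A -> B
x_back = translate(y, flow_b, flow_a)   # B -> A
# exact cycle consistency falls out of exact invertibility
assert abs(x_back - x) < 1e-12
```

Because each flow is exactly invertible (unlike a pair of independently learned encoders in, say, a CycleGAN-style setup), cycle consistency holds by construction rather than being enforced through an auxiliary loss.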


Author(s):  
Hongji Wang ◽  
Heinrich Dinkel ◽  
Shuai Wang ◽  
Yanmin Qian ◽  
Kai Yu

2020 ◽  
Author(s):  
Ning Ding ◽  
Dingkun Long ◽  
Guangwei Xu ◽  
Muhua Zhu ◽  
Pengjun Xie ◽  
...  

Information ◽  
2020 ◽  
Vol 11 (4) ◽  
pp. 224
Author(s):  
Hui Tao ◽  
Jun He ◽  
Quanjie Cao ◽  
Lei Zhang

Domain adaptation is critical for transferring invaluable source-domain knowledge to the target domain. In this paper, for a particular visual attention model, namely hard attention, we consider adapting the learned hard attention to the unlabeled target domain. To tackle this hard attention adaptation, a novel adversarial reward strategy is proposed to train the policy of the target-domain agent. In this adversarial training framework, the target-domain agent competes with a discriminator, which takes the attention features generated by both domain agents as input and tries to distinguish them; the target-domain policy thereby learns to align its local attention features with their source-domain counterparts. We evaluated our model on cross-domain benchmarks, such as the centered digits datasets and the enlarged non-centered digits datasets. The experimental results show that our model outperforms ADDA and other existing methods.
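As a minimal, hypothetical sketch of the adversarial reward idea (not the authors’ released code): the target-domain agent can be rewarded in proportion to how strongly the discriminator believes its attention feature came from the source domain, e.g. log D(feature), so fooling the discriminator yields high reward for the policy. The function name and scalar-logit interface below are assumptions for illustration:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def adversarial_reward(disc_logit):
    """Reward for the target-domain agent under an adversarial critic.

    `disc_logit` is the discriminator's logit for "this attention feature
    came from the source domain". The reward log D(feature) is highest
    when the target agent's feature is mistaken for a source-domain one,
    driving the policy to align local attention features across domains.
    """
    return math.log(sigmoid(disc_logit))
```

This reward would then be fed into a standard policy-gradient update (e.g. REINFORCE) for the hard-attention policy, while the discriminator is trained in alternation to separate the two domains.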


2017 ◽  
Author(s):  
Motoki Sato ◽  
Hitoshi Manabe ◽  
Hiroshi Noji ◽  
Yuji Matsumoto

2020 ◽  
Author(s):  
Qi Peng ◽  
Changmeng Zheng ◽  
Yi Cai ◽  
Tao Wang ◽  
Haoran Xie ◽  
...  
