Triple Adversarial Learning and Multi-view Imaginative Reasoning for Unsupervised Domain Adaptation Person Re-identification

Author(s): Huafeng Li, Neng Dong, Zhengtao Yu, Dapeng Tao, Guanqiu Qi
2020, Vol. 393, pp. 27-37

Author(s): Rongbo Shen, Jianhua Yao, Kezhou Yan, Kuan Tian, Cheng Jiang, ...
2020, Vol. 34 (05), pp. 7480-7487

Author(s): Yu Cao, Meng Fang, Baosheng Yu, Joey Tianyi Zhou

Reading comprehension (RC) has been studied on a variety of datasets, with performance boosted by deep neural networks. However, the generalization capability of these models across domains remains unclear. To alleviate this problem, we investigate unsupervised domain adaptation for RC, in which a model trained on a labeled source domain is applied to a target domain that offers only unlabeled samples. We first show that, even with powerful BERT contextual representations, a model cannot generalize well from one domain to another. To address this, we propose a novel conditional adversarial self-training method (CASe). Specifically, our approach leverages a BERT model fine-tuned on the source dataset, together with confidence filtering, to generate reliable pseudo-labeled samples in the target domain for self-training. It further reduces the domain distribution discrepancy through conditional adversarial learning across domains. Extensive experiments show that our approach achieves performance comparable to supervised models on multiple large-scale benchmark datasets.
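The method described above combines two mechanisms: confidence-filtered pseudo-labeling for self-training, and conditional adversarial alignment of the source and target feature distributions. The PyTorch sketch below illustrates how these pieces could fit together; it is not the authors' released implementation. All names (`DomainDiscriminator`, `pseudo_label`), the 0.9 confidence threshold, and the layer sizes are illustrative assumptions, and the conditioning uses one common construction (a CDAN-style outer product of features and classifier predictions) rather than the paper's exact formulation. A small linear encoder stands in for BERT to keep the example self-contained.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class GradReverse(torch.autograd.Function):
    """Gradient reversal layer: identity forward, negated gradient backward."""

    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None


class DomainDiscriminator(nn.Module):
    """Classifies source vs. target from features conditioned on the
    classifier's predicted distribution (CDAN-style outer product)."""

    def __init__(self, feat_dim, num_classes):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feat_dim * num_classes, 256),
            nn.ReLU(),
            nn.Linear(256, 1),
        )

    def forward(self, feat, probs, lam=1.0):
        # Outer product of predictions and features -> (B, C * D).
        joint = torch.bmm(probs.unsqueeze(2), feat.unsqueeze(1)).flatten(1)
        return self.net(GradReverse.apply(joint, lam))


def pseudo_label(model, target_x, threshold=0.9):
    """Keep target samples whose max class probability exceeds the
    (assumed) confidence threshold; return them with their pseudo-labels."""
    was_training = model.training
    model.eval()
    with torch.no_grad():
        probs = F.softmax(model(target_x), dim=-1)
        conf, labels = probs.max(dim=-1)
        keep = conf > threshold
    model.train(was_training)
    return target_x[keep], labels[keep]


if __name__ == "__main__":
    feat_dim, num_classes, batch = 8, 3, 4
    encoder = nn.Linear(16, feat_dim)        # stand-in for a BERT encoder
    classifier = nn.Linear(feat_dim, num_classes)
    model = nn.Sequential(encoder, classifier)
    disc = DomainDiscriminator(feat_dim, num_classes)

    src_x = torch.randn(batch, 16)
    tgt_x = torch.randn(batch, 16)
    src_y = torch.randint(0, num_classes, (batch,))

    # Supervised loss on the labeled source domain.
    cls_loss = F.cross_entropy(model(src_x), src_y)

    # Self-training loss on confident pseudo-labeled target samples.
    kept_x, kept_y = pseudo_label(model, tgt_x)
    self_loss = F.cross_entropy(model(kept_x), kept_y) if len(kept_x) else 0.0

    # Conditional adversarial loss: the gradient reversal layer pushes the
    # encoder toward domain-invariant features while the discriminator
    # learns to tell the two domains apart.
    feats = torch.cat([encoder(src_x), encoder(tgt_x)])
    probs = F.softmax(classifier(feats), dim=-1).detach()  # condition only
    dom_logits = disc(feats, probs)
    dom_y = torch.cat([torch.zeros(batch, 1), torch.ones(batch, 1)])
    adv_loss = F.binary_cross_entropy_with_logits(dom_logits, dom_y)

    (cls_loss + self_loss + adv_loss).backward()
```

In practice the two parts would run over full data loaders for several epochs, with pseudo-labels regenerated each round as the model improves on the target domain; the single step above only shows how the losses combine.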

