Unsupervised Domain Adaptation for Object Detection Using Distribution Matching in Various Feature Level

Author(s):  
Hyoungwoo Park ◽  
Minjeong Ju ◽  
Sangkeun Moon ◽  
Chang D. Yoo

Author(s):  
Jun Wen ◽  
Risheng Liu ◽  
Nenggan Zheng ◽  
Qian Zheng ◽  
Zhefeng Gong ◽  
...  

Unsupervised domain adaptation methods aim to alleviate the performance degradation caused by domain shift by learning domain-invariant representations. Existing deep domain adaptation methods focus on holistic feature alignment, matching source and target holistic feature distributions without considering local features and their multi-modal statistics. We show that the learned local feature patterns are more generic and transferable, and that additionally matching local feature distributions enables fine-grained feature alignment. In this paper, we present a method for learning domain-invariant local feature patterns and for jointly aligning holistic and local feature statistics. Comparisons with state-of-the-art unsupervised domain adaptation methods on two popular benchmark datasets demonstrate the superiority of our approach and its effectiveness in alleviating negative transfer.
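The joint holistic-and-local alignment described in this abstract can be sketched as below. This is an illustrative reconstruction, not the authors' implementation: the linear-kernel MMD and the `joint_alignment_loss` helper are assumptions, and a real model would compute these terms on deep feature maps inside the training loop.

```python
import numpy as np

def mmd_linear(x, y):
    """Linear-kernel MMD: squared distance between the feature means
    of two sample sets x (n, d) and y (m, d)."""
    return float(np.sum((x.mean(axis=0) - y.mean(axis=0)) ** 2))

def joint_alignment_loss(src_maps, tgt_maps, lam=1.0):
    """Combine holistic and local feature-distribution matching.

    src_maps, tgt_maps: (N, C, H, W) convolutional feature maps.
    Holistic term: MMD on globally average-pooled features.
    Local term: MMD on the pooled set of C-dimensional local
    descriptors, one per spatial position.
    """
    # Holistic: global average pooling over the spatial grid -> (N, C)
    src_hol = src_maps.mean(axis=(2, 3))
    tgt_hol = tgt_maps.mean(axis=(2, 3))
    holistic = mmd_linear(src_hol, tgt_hol)

    # Local: flatten the spatial grid -> (N*H*W, C) local descriptors
    c = src_maps.shape[1]
    src_loc = src_maps.transpose(0, 2, 3, 1).reshape(-1, c)
    tgt_loc = tgt_maps.transpose(0, 2, 3, 1).reshape(-1, c)
    local = mmd_linear(src_loc, tgt_loc)

    return holistic + lam * local
```

Minimizing both terms pushes the network to align feature statistics at two granularities: the holistic term matches image-level representations, while the local term matches the distribution of per-position patterns, which is what enables the fine-grained alignment the abstract refers to.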


Author(s):  
Jun Wen ◽  
Nenggan Zheng ◽  
Junsong Yuan ◽  
Zhefeng Gong ◽  
Changyou Chen

Domain adaptation is an important technique for alleviating the performance degradation caused by domain shift, e.g., when training and test data come from different domains. Because target-domain labels are unavailable, most existing deep adaptation methods focus on reducing domain shift by matching marginal feature distributions through deep transformations of the input features. We show that domain shift may persist as label distribution shift at the classifier, deteriorating model performance. To alleviate this issue, we propose an approximate joint distribution matching scheme that exploits prediction uncertainty. Specifically, we use a Bayesian neural network to quantify the prediction uncertainty of a classifier. By imposing distribution matching on both features and labels (via uncertainty), label distribution mismatch between source and target data is effectively alleviated, encouraging the classifier to produce consistent predictions across domains. We also propose several techniques to improve our method, adaptively reweighting the domain adaptation loss to achieve nontrivial distribution matching and stable training. Comparisons with state-of-the-art unsupervised domain adaptation methods on three popular benchmark datasets demonstrate the superiority of our approach, especially its effectiveness in alleviating negative transfer.
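A minimal sketch of the uncertainty-based label matching idea described above, assuming Monte Carlo dropout as the Bayesian approximation (the paper's exact Bayesian network and adaptive reweighting are not reproduced here; `mc_dropout_predict` and `label_distribution_gap` are hypothetical helpers):

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over the last axis."""
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def mc_dropout_predict(features, weights, n_samples=20, p_drop=0.5, rng=None):
    """Approximate Bayesian prediction: average softmax outputs over
    stochastic forward passes with dropout applied to the features.

    features: (N, D) inputs to a linear classifier with weights (D, K).
    Returns the predictive mean (N, K) and a per-sample uncertainty
    score (N,) given by the total predictive variance across classes.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    probs = []
    for _ in range(n_samples):
        mask = rng.random(features.shape) > p_drop       # drop units at random
        dropped = features * mask / (1.0 - p_drop)       # inverted-dropout scaling
        probs.append(softmax(dropped @ weights))
    probs = np.stack(probs)                              # (S, N, K)
    mean = probs.mean(axis=0)                            # predictive mean
    unc = probs.var(axis=0).sum(axis=-1)                 # predictive variance
    return mean, unc

def label_distribution_gap(src_probs, tgt_probs):
    """Squared distance between the marginal class distributions implied
    by source and target predictions; matching this term (alongside a
    feature-matching term) discourages label distribution shift."""
    return float(np.sum((src_probs.mean(axis=0) - tgt_probs.mean(axis=0)) ** 2))
```

In a full method, `label_distribution_gap` would be one term of the adaptation loss, with the per-sample uncertainty `unc` available to down-weight predictions the classifier is unsure about.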


