Decoupling Deep Domain Adaptation Method for Class-imbalanced Learning with Domain Discrepancy

Author(s):  
Juchuan Guo ◽  
Yichen Liu ◽  
Zhenyu Wu
Author(s):  
Yuhan Zhang ◽  
Lindong Wu ◽  
Weihua He ◽  
Ziyang Zhang ◽  
Chen Yang ◽  
...  

2019 ◽  
Vol 56 (11) ◽  
pp. 112801
Author(s):  
滕文秀 Wenxiu Teng ◽  
王妮 Ni Wang ◽  
陈泰生 Taisheng Chen ◽  
王本林 Benlin Wang ◽  
陈梦琳 Menglin Chen ◽  
...  

2020 ◽  
pp. 1-11
Author(s):  
Shuyang Wang ◽  
Xiaodong Mu ◽  
Hao He ◽  
Dongfang Yang ◽  
Peng Zhao

Information ◽  
2020 ◽  
Vol 11 (2) ◽  
pp. 68
Author(s):  
Liquan Zhao ◽  
Yan Liu

Transfer learning is used to extend an existing model to more difficult scenarios, thereby accelerating training and improving learning performance. The conditional adversarial domain adaptation method proposed in 2018 is a particular type of transfer learning: it uses a domain discriminator to identify which domain the features produced by the feature extraction network belong to. The stability of the domain discriminator directly affects classification accuracy. Here, we propose a new algorithm to improve predictive accuracy. First, we introduce the Lipschitz constraint into domain adaptation; if the constraint is satisfied, the method is stable. Second, we analyze how to make the gradient satisfy this constraint and derive a modified gradient via spectral normalization. The modified gradient is then used to update the parameter matrix. The proposed method is compared with ResNet-50, the deep adaptation network, the domain adversarial neural network, the joint adaptation network, and the conditional domain adversarial network on the Office-31, ImageCLEF-DA, and Office-Home datasets. Experiments demonstrate that the proposed method achieves higher accuracy than the other methods.
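A common way to impose a Lipschitz constraint on the domain discriminator is to rescale each weight matrix by its largest singular value, which is related to the spectral normalization the abstract mentions. The sketch below illustrates this idea in PyTorch using torch.nn.utils.spectral_norm; the layer sizes and module names are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn
from torch.nn.utils import spectral_norm

class DomainDiscriminator(nn.Module):
    """Illustrative domain discriminator for a CDAN-style setup.

    spectral_norm divides each weight matrix by its largest singular
    value, bounding the Lipschitz constant of every linear layer and
    stabilizing adversarial training of the discriminator.
    """
    def __init__(self, in_dim: int, hidden: int = 1024):
        super().__init__()
        self.net = nn.Sequential(
            spectral_norm(nn.Linear(in_dim, hidden)),
            nn.ReLU(inplace=True),
            spectral_norm(nn.Linear(hidden, hidden)),
            nn.ReLU(inplace=True),
            spectral_norm(nn.Linear(hidden, 1)),  # source-vs-target logit
        )

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        return self.net(features)
```

In use, the discriminator receives the (conditioned) features from the backbone and predicts the domain; a binary cross-entropy loss on source/target labels then drives the adversarial objective while the normalized weights keep the discriminator stable.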


2020 ◽  
Vol 27 (4) ◽  
pp. 584-591 ◽  
Author(s):  
Chen Lin ◽  
Steven Bethard ◽  
Dmitriy Dligach ◽  
Farig Sadeque ◽  
Guergana Savova ◽  
...  

Abstract

Introduction: Classifying whether concepts in unstructured clinical text are negated is an important unsolved task. New domain adaptation and transfer learning methods can potentially address this issue.

Objective: We examine neural unsupervised domain adaptation methods, introducing a novel combination of domain adaptation with transformer-based transfer learning methods to improve negation detection. We also aim to better understand the interaction between the widely used bidirectional encoder representations from transformers (BERT) system and domain adaptation methods.

Materials and Methods: We use 4 clinical text datasets that are annotated with negation status. We evaluate a neural unsupervised domain adaptation algorithm and BERT, a transformer-based model that is pretrained on massive general text datasets. We develop an extension to BERT that uses domain adversarial training, a neural domain adaptation method that adds an auxiliary objective to the negation task: the classifier should not be able to distinguish between instances from 2 different domains.

Results: The domain adaptation methods we describe show positive results, but, on average, the best performance is obtained by plain BERT (without the extension). We provide evidence that the gains from BERT are likely not additive with the gains from domain adaptation.

Discussion: Our results suggest that, at least for the task of clinical negation detection, BERT subsumes domain adaptation, implying that BERT already learns very general representations of negation phenomena, such that fine-tuning even on a specific corpus does not lead to much overfitting.

Conclusion: Despite being trained on nonclinical text, the large training sets of models like BERT lead to large gains in performance for the clinical negation detection task.
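Domain adversarial training on top of a transformer encoder is usually implemented with a gradient-reversal layer feeding a small domain classifier. The sketch below, written with PyTorch and the Hugging Face transformers library, shows one way such an extension to BERT could look for negation detection; the model name, head sizes, and lambda value are assumptions, not the authors' code.

```python
import torch
import torch.nn as nn
from transformers import BertModel

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; flips the gradient sign on backward."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

class BertWithDomainAdversary(nn.Module):
    def __init__(self, num_labels: int = 2, num_domains: int = 2, lambd: float = 0.1):
        super().__init__()
        self.bert = BertModel.from_pretrained("bert-base-uncased")
        hidden = self.bert.config.hidden_size
        self.negation_head = nn.Linear(hidden, num_labels)  # task classifier
        self.domain_head = nn.Linear(hidden, num_domains)   # adversary
        self.lambd = lambd

    def forward(self, input_ids, attention_mask):
        pooled = self.bert(input_ids=input_ids,
                           attention_mask=attention_mask).pooler_output
        task_logits = self.negation_head(pooled)
        # The reversed gradient pushes the encoder toward features the
        # domain classifier cannot separate, i.e. domain-invariant features.
        domain_logits = self.domain_head(GradReverse.apply(pooled, self.lambd))
        return task_logits, domain_logits
```

Training would minimize cross-entropy on the negation labels plus cross-entropy on the domain labels; because of the gradient reversal, the encoder is updated to make the domain classifier's job harder while the task head is trained normally.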

