CLDA: an adversarial unsupervised domain adaptation method with classifier-level adaptation

2020 ◽  
Vol 79 (45-46) ◽  
pp. 33973-33991
Author(s):  
Zhihai He ◽  
Bo Yang ◽  
Chaoxian Chen ◽  
Qilin Mu ◽  
Zesong Li
2020 ◽  
Vol 27 (4) ◽  
pp. 584-591 ◽  
Author(s):  
Chen Lin ◽  
Steven Bethard ◽  
Dmitriy Dligach ◽  
Farig Sadeque ◽  
Guergana Savova ◽  
...  

Abstract Introduction: Classifying whether concepts in an unstructured clinical text are negated is an important unsolved task. New domain adaptation and transfer learning methods can potentially address this issue. Objective: We examine neural unsupervised domain adaptation methods, introducing a novel combination of domain adaptation with transformer-based transfer learning methods to improve negation detection. We also want to better understand the interaction between the widely used bidirectional encoder representations from transformers (BERT) system and domain adaptation methods. Materials and Methods: We use 4 clinical text datasets that are annotated with negation status. We evaluate a neural unsupervised domain adaptation algorithm and BERT, a transformer-based model that is pretrained on massive general text datasets. We develop an extension to BERT that uses domain adversarial training, a neural domain adaptation method that adds an objective alongside the negation task: the classifier should not be able to distinguish between instances from 2 different domains. Results: The domain adaptation methods we describe show positive results, but, on average, the best performance is obtained by plain BERT (without the extension). We provide evidence that the gains from BERT are likely not additive with the gains from domain adaptation. Discussion: Our results suggest that, at least for the task of clinical negation detection, BERT subsumes domain adaptation, implying that BERT is already learning very general representations of negation phenomena such that fine-tuning even on a specific corpus does not lead to much overfitting. Conclusion: Despite being pretrained on nonclinical text, models like BERT, by virtue of their large training sets, yield large gains in performance on the clinical negation detection task.
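The domain adversarial objective described in this abstract is typically implemented with a gradient reversal layer: identity in the forward pass, negated (and scaled) gradient in the backward pass, so the shared encoder is pushed to confuse the domain classifier. The authors' exact BERT extension is not reproduced here; the following is a minimal NumPy sketch of the reversal mechanism alone, with `lam` as an assumed trade-off hyperparameter:

```python
import numpy as np

class GradientReversal:
    """Gradient reversal layer (the core of DANN-style adversarial training).

    Forward: identity on the features.
    Backward: multiplies incoming gradients by -lam, so minimizing the
    domain classifier's loss downstream *maximizes* it with respect to
    the feature extractor, encouraging domain-invariant features.
    """
    def __init__(self, lam=1.0):
        self.lam = lam  # trade-off between the task loss and domain confusion

    def forward(self, x):
        return x  # features pass through unchanged

    def backward(self, grad_output):
        return -self.lam * grad_output  # reversed, scaled gradient

grl = GradientReversal(lam=0.5)
features = np.array([1.0, -2.0, 3.0])
out = grl.forward(features)                  # identical to the input
grad = grl.backward(np.ones_like(features))  # each entry becomes -0.5
```

In a full model, this layer would sit between the shared encoder (here, BERT's pooled output) and the domain classifier head, while the negation classifier head bypasses it.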


2021 ◽  
Vol 11 (11) ◽  
pp. 5267
Author(s):  
Zhi-Yong Wang ◽  
Dae-Ki Kang

CORrelation ALignment (CORAL) is an unsupervised domain adaptation method that uses a linear transformation to align the covariances of the source and target domains. Deep CORAL extends CORAL with a nonlinear transformation using a deep neural network and adds a CORAL loss as part of the total loss to align the covariances of the source and target domains. However, two problems remain in Deep CORAL: features extracted from AlexNet are not always a good representation of the original data, and joint training with both the classification and CORAL losses may not be efficient enough to align the source and target distributions. In this paper, we propose two strategies: an attention mechanism to improve the quality of the feature maps, and a p-norm loss function to align the distributions of the source and target features, further reducing the offset caused by the classification loss. Experiments on the Office-31 dataset indicate that our proposed methods improve on Deep CORAL in terms of performance.
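The CORAL loss referenced above has a compact closed form: the squared Frobenius distance between the source and target feature covariance matrices, scaled by 1/(4d²) where d is the feature dimension. A minimal NumPy sketch (the feature matrices and their sizes are illustrative assumptions, not the paper's setup):

```python
import numpy as np

def coral_loss(source, target):
    """CORAL loss between two feature batches of shape (n, d).

    Computes ||C_s - C_t||_F^2 / (4 d^2), where C_s and C_t are the
    feature covariance matrices of the source and target batches,
    following the Deep CORAL formulation.
    """
    d = source.shape[1]
    c_s = np.cov(source, rowvar=False)  # (d, d) source covariance
    c_t = np.cov(target, rowvar=False)  # (d, d) target covariance
    return np.sum((c_s - c_t) ** 2) / (4 * d * d)

rng = np.random.default_rng(0)
xs = rng.normal(0.0, 1.0, size=(100, 8))  # source batch, unit variance
xt = rng.normal(0.0, 2.0, size=(100, 8))  # target batch, larger variance
loss = coral_loss(xs, xt)  # positive, since the covariances differ
```

In Deep CORAL this term is added to the classification loss with a weighting coefficient, so the network learns features that are both discriminative and covariance-aligned across domains.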


Author(s):  
Linlin Wu ◽  
Guohua Peng ◽  
Weidong Yan

To address the low classification accuracy caused by differing distributions of the training and test sets, an unsupervised domain adaptation method based on discriminant sample selection (DSS) is proposed. DSS projects the samples of the different domains onto a common subspace to reduce the distribution discrepancy between the source domain and the target domain, and weights the source-domain instances to make the samples more discriminative. Unlike previous methods based on probability density estimation of the samples, DSS obtains the sample weights by solving a quadratic programming problem, which avoids density estimation altogether and can therefore be applied in any field without suffering from the curse of dimensionality that afflicts high-dimensional density estimation. Finally, DSS draws samples of the same class together by minimizing the intra-class distance. Experimental results show that the proposed method improves classification accuracy and robustness.
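The abstract does not give DSS's exact quadratic program, so the following is only a kernel-mean-matching-style sketch of the general idea it describes: choosing nonnegative source-instance weights so that the weighted source mean matches the target mean, without ever estimating a density. For self-containedness it uses projected gradient descent rather than an off-the-shelf QP solver; the function name, hyperparameters, and data are all assumptions:

```python
import numpy as np

def match_weights(source, target, n_iters=500, lr=0.01):
    """Weight source rows (shape (n, d)) so their weighted mean approaches
    the target mean. Objective: 0.5 * ||source.T @ w - mu_t||^2, with w
    kept nonnegative and summing to one. The clip-then-renormalize step is
    a common heuristic stand-in for an exact simplex projection.
    """
    n = source.shape[0]
    mu_t = target.mean(axis=0)
    w = np.ones(n) / n  # start from uniform weights
    for _ in range(n_iters):
        diff = source.T @ w - mu_t   # residual between weighted and target means
        grad = source @ diff         # gradient of the squared residual w.r.t. w
        w -= lr * grad
        w = np.clip(w, 0.0, None)    # weights stay nonnegative
        w /= w.sum()                 # and sum to one
    return w

# Toy 1-D example: half the source sits at 0, half at 5; the target sits at 5.
src = np.vstack([np.zeros((5, 1)), np.full((5, 1), 5.0)])
tgt = np.full((20, 1), 5.0)
w = match_weights(src, tgt)  # nearly all weight lands on the points near 5
```

The appeal of this family of methods, as the abstract notes, is that the weights come from an optimization over samples directly, so nothing high-dimensional ever needs to be density-estimated.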


2020 ◽  
Vol 155 ◽  
pp. 113404 ◽  
Author(s):  
Peng Liu ◽  
Ting Xiao ◽  
Cangning Fan ◽  
Wei Zhao ◽  
Xianglong Tang ◽  
...  
