Self-adaptive Re-weighted Adversarial Domain Adaptation

Author(s):  
Shanshan Wang ◽  
Lei Zhang

Existing adversarial domain adaptation methods mainly consider the marginal distribution, which may lead to either under-transfer or negative transfer. To address this problem, we present a self-adaptive re-weighted adversarial domain adaptation approach that enhances domain alignment from the perspective of the conditional distribution. To promote positive transfer and combat negative transfer, we reduce the weight of the adversarial loss for well-aligned features while increasing the adversarial force on poorly aligned ones, as measured by conditional entropy. Additionally, a triplet loss leveraging source samples and pseudo-labeled target samples is employed on the confused domain. This metric loss ensures that intra-class sample pairs are closer than inter-class pairs, achieving class-level alignment. In this way, highly accurate pseudo-labeled target samples and semantic alignment can be captured simultaneously during co-training. Our method achieves a low joint error of the ideal source and target hypothesis, so the expected target error can be upper bounded following Ben-David's theorem. Empirical evidence demonstrates that the proposed model outperforms the state of the art on standard domain adaptation datasets.
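The entropy-based re-weighting idea can be sketched as follows: samples whose classifier output has high conditional entropy are treated as poorly aligned and receive a larger adversarial weight. This is a minimal NumPy illustration of the weighting principle, not the authors' exact formulation; the `1 + h` form and normalization are assumptions.

```python
import numpy as np

def entropy_weight(probs, eps=1e-8):
    # Per-sample conditional entropy of the classifier's softmax output;
    # high entropy means the sample is poorly aligned, so it receives a
    # larger weight on the adversarial loss (illustrative form only).
    h = -np.sum(probs * np.log(probs + eps), axis=1)
    h_norm = h / np.log(probs.shape[1])  # normalize to [0, 1]
    return 1.0 + h_norm                  # well-aligned ~1, poorly aligned ~2

confident = np.array([[0.98, 0.01, 0.01]])  # low entropy -> weight near 1
uncertain = np.array([[0.34, 0.33, 0.33]])  # high entropy -> weight near 2
```

In a full model this weight would multiply the per-sample adversarial loss during training, damping the adversarial force on features that are already aligned.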

Sensors ◽  
2020 ◽  
Vol 20 (1) ◽  
pp. 320 ◽  
Author(s):  
Xiaodong Wang ◽  
Feng Liu

Recently, deep learning methods have become increasingly popular in the field of fault diagnosis and have achieved great success. However, since the rotation speeds and load conditions of rotating machines change during operation, the distribution of the labeled training dataset for an intelligent fault diagnosis model differs from the distribution of the unlabeled testing dataset, i.e., domain shift occurs. Fault diagnosis performance may degrade significantly due to this domain shift. Unsupervised domain adaptation has been proposed to alleviate this problem by aligning the distributions of the labeled source domain and the unlabeled target domain. In this paper, we propose a triplet loss guided adversarial domain adaptation method (TLADA) for bearing fault diagnosis that jointly aligns the data-level and class-level distributions. Data-level alignment is achieved using a Wasserstein distance-based adversarial approach, and the discrepancy of distributions in the feature space is further minimized at the class level by the triplet loss. Unlike other center loss-based class-level alignment approaches, which have to compute the class centers for each class and minimize the distance between same-class centers from different domains, the proposed TLADA method concatenates two mini-batches from the source and target domains into a single mini-batch and applies the triplet loss to the whole mini-batch, ignoring domains. The overhead of updating the class centers is therefore eliminated. The effectiveness of the proposed method is validated on the CWRU and Paderborn datasets through extensive transfer fault diagnosis experiments.
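The batch-merging trick described above can be sketched in a few lines: the triplet loss itself is the standard margin hinge, and the only TLADA-specific step is mining triplets over the concatenation of the source and target mini-batches. This NumPy sketch assumes squared Euclidean distances and a margin of 1.0; the paper's exact mining strategy may differ.

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    # Margin-based triplet loss: same-class pairs should end up closer
    # than different-class pairs by at least `margin`.
    d_pos = np.sum((anchor - positive) ** 2, axis=1)
    d_neg = np.sum((anchor - negative) ** 2, axis=1)
    return float(np.maximum(d_pos - d_neg + margin, 0.0).mean())

# Merge a source mini-batch and a pseudo-labeled target mini-batch into one
# batch; triplets are then mined over the merged batch ignoring domain
# membership, so no per-class centers need to be maintained or updated.
src_feats, tgt_feats = np.zeros((8, 16)), np.ones((8, 16))
merged = np.concatenate([src_feats, tgt_feats], axis=0)  # 16-sample batch
```

Because triplets can pair a source anchor with a target positive (same pseudo-label), the loss pulls same-class samples together across domains without any explicit center bookkeeping.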


2020 ◽  
Vol 34 (04) ◽  
pp. 6615-6622 ◽  
Author(s):  
Guanglei Yang ◽  
Haifeng Xia ◽  
Mingli Ding ◽  
Zhengming Ding

Unsupervised domain adaptation facilitates learning on an unlabeled target domain by relying on well-established source domain information. Conventional methods that forcefully reduce the domain discrepancy in the latent space risk destroying the intrinsic data structure. To balance mitigating the domain gap against preserving the inherent structure, we propose a Bi-Directional Generation domain adaptation model with consistent classifiers, interpolating two intermediate domains to bridge the source and target domains. Specifically, two cross-domain generators are employed to synthesize one domain conditioned on the other. The performance of the proposed method is further enhanced by the consistent classifiers and the cross-domain alignment constraints. We also design two classifiers that are jointly optimized to maximize consistency on target sample predictions. Extensive experiments verify that our proposed model outperforms the state of the art on standard cross-domain visual benchmarks.


2021 ◽  
pp. 1-7
Author(s):  
Rong Chen ◽  
Chongguang Ren

Domain adaptation aims to solve the problem of lacking labels in the target domain. Most existing domain adaptation works focus on aligning the feature distributions of the source and target domains. However, in natural language processing, some words convey different sentiments in different domains. Thus not all features of the source domain should be transferred, and aligning the untransferable features causes negative transfer. To address this issue, we propose a Correlation Alignment with Attention mechanism for unsupervised Domain Adaptation (CAADA) model. An attention mechanism is introduced into the transfer process, capturing the positively transferable features of the source and target domains. Moreover, the CORrelation ALignment (CORAL) loss is utilized to minimize the domain discrepancy by aligning the second-order statistics of the positively transferable features extracted by the attention mechanism. Extensive experiments on the Amazon review dataset demonstrate the effectiveness of the CAADA method.
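The CORAL loss used here has a compact closed form: the squared Frobenius distance between the feature covariance matrices of the two domains. A minimal NumPy sketch (the 1/(4d²) scaling follows the standard CORAL formulation; CAADA applies this to attention-selected features rather than raw ones):

```python
import numpy as np

def coral_loss(source, target):
    # CORAL: squared Frobenius distance between the feature covariance
    # matrices of source and target, with the conventional 1/(4 d^2) scale.
    d = source.shape[1]
    cs = np.cov(source, rowvar=False)  # source second-order statistics
    ct = np.cov(target, rowvar=False)  # target second-order statistics
    return float(np.sum((cs - ct) ** 2) / (4 * d * d))

rng = np.random.default_rng(0)
x = rng.standard_normal((200, 4))
```

Identically distributed features give zero loss, while a scale mismatch between domains yields a positive penalty that the network is trained to shrink.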


2010 ◽  
Vol 19 (01) ◽  
pp. 275-296 ◽  
Author(s):  
OLGIERD UNOLD

This article introduces a new kind of self-adaptation in the discovery mechanism of the learning classifier system XCS. Unlike previous approaches, which incorporate self-adaptive parameters into the representation of an individual, the proposed model evolves a competitive population of reduced XCSs that adapt both classifiers and genetic parameters. Experimental comparisons of the self-adaptive mutation rate XCS and the standard XCS interacting with 11-bit, 20-bit, and 37-bit multiplexer environments are provided. Adapting the mutation rate is shown to give performance equivalent or superior to known good fixed parameter settings, especially for computationally complex tasks. Moreover, the self-adaptive XCS is able to solve the problem of parameter settings that are inappropriate for a standard XCS.
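The core self-adaptation idea — a mutation rate that travels with the individual and is itself subject to variation — can be sketched generically. This is a common log-normal self-adaptation scheme in the spirit of the paper, not Unold's exact XCS discovery mechanism; the `tau` parameter and rate bounds are assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)

def self_adapt_mutate(genome, rate, tau=0.5):
    # The individual's own mutation rate is perturbed log-normally first,
    # then the perturbed rate drives bit-flip mutation of the genome.
    new_rate = float(np.clip(rate * np.exp(tau * rng.standard_normal()),
                             1e-4, 0.5))
    mask = rng.random(genome.shape[0]) < new_rate
    return np.where(mask, 1 - genome, genome), new_rate
```

Offspring that inherit a well-tuned rate tend to produce fitter descendants, so good rates spread through the population without any externally fixed setting.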


2021 ◽  
Author(s):  
Sonu Kumar Jha ◽  
Mohit Kumar ◽  
Vipul Arora ◽  
Sachchida Nand Tripathi ◽  
Vidyanand Motiram Motghare ◽  
...  

Air pollution is a severe problem growing over time. A dense air-quality monitoring network is needed to keep people updated on the air pollution status in cities. A dense network based on low-cost sensor devices (LCSDs) is more viable than one based on continuous ambient air quality monitoring stations (CAAQMS), but an in-field calibration approach is needed to improve the agreement of LCSDs with CAAQMS. The present work proposes a calibration method for the PM2.5 levels measured by LCSDs, using a domain adaptation technique to reduce the collocation duration of LCSDs and CAAQMS. The dataset used for experimentation consists of PM2.5 values and other parameters (PM10, temperature, and humidity) at hourly resolution over a period of three months. We propose new features, built by combining PM2.5, PM10, temperature, and humidity, that significantly improve calibration performance. Further, the calibration model is adapted to the target location for a new LCSD with a collocation time of only two days. The proposed model shows higher correlation coefficient values (R2) and significantly lower mean absolute percentage error (MAPE) than the baseline models, thereby reducing collocation time while maintaining high calibration performance.
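The evaluation metric and the feature-combination step can be sketched as follows. MAPE is standard; the specific feature interactions shown are hypothetical examples in the spirit of "combining PM2.5, PM10, temperature, and humidity" — the paper's actual combinations may differ.

```python
import numpy as np

def mape(y_true, y_pred):
    # Mean absolute percentage error, one of the reported calibration metrics.
    return float(100.0 * np.mean(np.abs((y_true - y_pred) / y_true)))

def build_features(pm25, pm10, temp, rh):
    # Hypothetical combined feature matrix: the four raw channels plus
    # simple interaction terms (PM2.5 x humidity, PM2.5/PM10 ratio).
    return np.column_stack([pm25, pm10, temp, rh,
                            pm25 * rh, pm25 / (pm10 + 1e-6)])
```

A calibration model trained on such features at the source CAAQMS site would then be adapted to the target location using the short two-day collocation window.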


Author(s):  
Renjun Xu ◽  
Pelen Liu ◽  
Yin Zhang ◽  
Fang Cai ◽  
Jindong Wang ◽  
...  

Domain adaptation (DA) has achieved resounding success in learning a good classifier by leveraging labeled data from a source domain to adapt to an unlabeled target domain. However, in the general setting where the target domain contains classes never observed in the source domain, namely Open Set Domain Adaptation (OSDA), existing DA methods fail because of interference from the extra unknown classes. This is a much more challenging problem, since the mismatch between the unknown and known classes can easily result in negative transfer. Existing approaches are susceptible to misclassification when unknown target-domain samples lie near the decision boundary learned from the labeled source domain. To overcome this, we propose Joint Partial Optimal Transport (JPOT), which fully utilizes information from not only the labeled source domain but also the discriminative representation of the unknown class in the target domain. The proposed joint discriminative prototypical compactness loss not only achieves intra-class compactness and inter-class separability, but also estimates the mean and variance of the unknown class through backpropagation, which remained intractable for previous methods due to their blindness to the structure of the unknown classes. To the best of our knowledge, this is the first optimal transport model for OSDA. Extensive experiments demonstrate that our proposed model significantly boosts the performance of open set domain adaptation on standard DA datasets.
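Optimal transport is the building block underlying JPOT. A minimal sketch of the entropic-regularized Sinkhorn solver between uniform marginals is shown below; this illustrates plain OT only, not the authors' full joint *partial* OT objective (which relaxes the marginal constraints to leave unknown-class mass untransported).

```python
import numpy as np

def sinkhorn(cost, reg=0.1, iters=500):
    # Entropic-regularized optimal transport between uniform marginals.
    # Alternating scaling (Sinkhorn iterations) converges to a transport
    # plan whose rows/columns match the prescribed marginals.
    n, m = cost.shape
    a, b = np.full(n, 1.0 / n), np.full(m, 1.0 / m)
    K = np.exp(-cost / reg)  # Gibbs kernel of the ground cost
    u = np.ones(n)
    for _ in range(iters):
        v = b / (K.T @ u)
        u = a / (K @ v)
    return u[:, None] * K * v[None, :]  # transport plan coupling the domains
```

In OT-based domain adaptation the cost matrix is typically the pairwise feature distance between source and target samples, and the resulting plan tells each source sample where its mass goes in the target domain.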


Author(s):  
Ziliang Cai ◽  
Lingyue Wang ◽  
Miaomiao Guo ◽  
Guizhi Xu ◽  
Lei Guo ◽  
...  

Emotion plays a significant role in human daily activities, and it can be effectively recognized from EEG signals. However, individual variability limits the generalization of emotion classifiers across subjects. Domain adaptation (DA) is a reliable method to solve this issue. Due to the nonstationarity of EEG, inferior-quality source domain data cause negative transfer in DA procedures. To solve this problem, an auto-augmentation joint distribution adaptation (AA-JDA) method and a burden-lightened and source-preferred JDA (BLSP-JDA) approach are proposed in this paper. Both methods are based on a novel transfer idea: learning the specific knowledge of the target domain from the samples that are appropriate for transfer, which reduces the difficulty of transfer between the two domains. On multiple emotion databases, our model shows state-of-the-art performance.


Author(s):  
Ashwini Rahangdale ◽  
Shital Raut

Learning-to-rank (LTR) is an active research topic in information retrieval (IR). An LTR framework usually learns the ranking function from available training data, which are costly and time-consuming to obtain and often biased. When a sufficient amount of training data is not available, semi-supervised learning is a machine learning paradigm that can derive pseudo-labels from unlabeled data. Cluster-and-label is a basic semi-supervised approach for identifying high-density regions in the data space, mainly used to support supervised learning. However, conventional clustering may lead to prediction performance worse than that of supervised learning algorithms when applied to LTR. Thus, we propose rank preserving clustering (RPC) with PLocalSearch to obtain pseudo-labels for unlabeled data. We present a semi-supervised method that adopts clustering-based transduction and combines it with a non-measure-specific listwise approach to learn the LTR model. Moreover, each cluster follows multi-task learning to avoid optimizing multiple loss functions, which reduces the training complexity of the adopted listwise approach from exponential to polynomial order. Empirical analysis on the standard datasets (LETOR) shows that the proposed model gives better results than other state-of-the-art methods.


Author(s):  
Yingying Zhang ◽  
Qiaoyong Zhong ◽  
Liang Ma ◽  
Di Xie ◽  
Shiliang Pu

Person re-identification (ReID) aims to match people across multiple non-overlapping video cameras deployed at different locations. To address this challenging problem, many metric learning approaches have been proposed, among which triplet loss is one of the state of the art. In this work, we explore the margin between positive and negative pairs of triplets and show that a large margin is beneficial. In particular, we propose a novel multi-stage training strategy that learns an incremental triplet margin and improves the triplet loss effectively. Multiple levels of feature maps are exploited to make the learned features more discriminative. Besides, we introduce a global hard identity searching method to sample hard identities when generating a training batch. Extensive experiments on Market-1501, CUHK03, and DukeMTMC-reID show that our approach yields a performance boost and outperforms most existing state-of-the-art methods.
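The "incremental triplet margin" idea amounts to training in stages with a growing margin, so the embedding is first pulled into a coarse layout and then pushed toward wider class separation. A minimal sketch with hypothetical margin values (the paper's actual schedule and stage count are not given in the abstract):

```python
def incremental_margins(start=0.3, step=0.2, stages=3):
    # Hypothetical multi-stage schedule: each stage trains with a larger
    # triplet margin than the last.
    return [start + i * step for i in range(stages)]

def triplet_hinge(d_pos, d_neg, margin):
    # Hinge form of the triplet loss for given anchor-positive and
    # anchor-negative distances; a larger margin demands wider separation.
    return max(d_pos - d_neg + margin, 0.0)
```

With a fixed pair of distances, a larger margin produces a larger (or newly nonzero) loss, which is what drives the embedding apart stage by stage.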

