Multisource I-Vectors Domain Adaptation Using Maximum Mean Discrepancy Based Autoencoders

2018 ◽  
Vol 26 (12) ◽  
pp. 2412-2422 ◽  
Author(s):  
Wei-wei Lin ◽  
Man-Wai Mak ◽  
Jen-Tzung Chien


2020 ◽  
Vol 319 ◽  
pp. 03001
Author(s):  
Weigui Li ◽  
Zhuqing Yuan ◽  
Wenyu Sun ◽  
Yongpan Liu

Recently, deep learning algorithms have been widely applied to fault diagnosis in the intelligent manufacturing field. To tackle the transfer problem caused by varying working conditions and insufficient labeled samples, a conditional maximum mean discrepancy (CMMD) based domain adaptation method is proposed. Existing transfer approaches mainly focus on aligning a single representation distribution, which contains only partial feature information. Inspired by the Inception module, multi-representation domain adaptation is introduced to improve classification accuracy and generalization ability for cross-domain bearing fault diagnosis, and the CMMD-based method is adopted to minimize the discrepancy between the source and target domains. Finally, unsupervised learning with unlabeled target data promotes the practical application of the proposed algorithm. Experimental results on a standard dataset show that the proposed method can effectively alleviate the domain shift problem.
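
As a rough illustration of the conditional-MMD idea, the discrepancy can be sketched as the average of per-class MMD estimates between source and target features. The helper names below are illustrative assumptions, and a simple biased Gaussian-kernel estimator is used rather than the paper's exact formulation; on the target side the labels would typically be pseudo-labels.

```python
import numpy as np

def _gaussian_gram(X, Y, sigma):
    """Gaussian-kernel Gram matrix: k(x, y) = exp(-||x - y||^2 / (2 sigma^2))."""
    d2 = np.sum(X**2, 1)[:, None] + np.sum(Y**2, 1)[None, :] - 2.0 * X @ Y.T
    return np.exp(-d2 / (2.0 * sigma**2))

def mmd2(X, Y, sigma=1.0):
    """Biased estimate of the squared MMD between samples X and Y."""
    return (_gaussian_gram(X, X, sigma).mean()
            + _gaussian_gram(Y, Y, sigma).mean()
            - 2.0 * _gaussian_gram(X, Y, sigma).mean())

def cmmd2(Xs, ys, Xt, yt, sigma=1.0):
    """Conditional MMD: mean per-class squared MMD (yt may be pseudo-labels)."""
    classes = np.intersect1d(np.unique(ys), np.unique(yt))
    return float(np.mean([mmd2(Xs[ys == c], Xt[yt == c], sigma) for c in classes]))
```

In training, such a term would be added to the classification loss as a regularizer, pulling the class-conditional feature distributions of the two domains together.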


2020 ◽  
pp. 1-18
Author(s):  
Chuangji Meng ◽  
Cunlu Xu ◽  
Qin Lei ◽  
Wei Su ◽  
Jinzhao Wu

Recent studies have revealed that deep networks can learn transferable features that generalize well to novel tasks with little or no labeled data, enabling domain adaptation. However, it remains unclear which components of the feature representations can reason about the original joint distributions using JMMD within a deep architecture. We present a new backpropagation algorithm for JMMD, called Balanced Joint Maximum Mean Discrepancy (B-JMMD), to further reduce the domain discrepancy. B-JMMD achieves balanced distribution adaptation for deep network architectures and can be treated as an improved version of JMMD's backpropagation algorithm. The proposed method adaptively leverages the importance of the marginal and conditional distributions behind multiple domain-specific layers to obtain a good match of the joint distributions in a second-order reproducing kernel Hilbert space. Learning is performed by a special form of stochastic gradient descent, in which the gradient is computed by backpropagation with a balanced distribution adaptation strategy. Theoretical analysis shows that the proposed B-JMMD is superior to the JMMD method, and experiments confirm that it yields state-of-the-art results on standard datasets.
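
The balancing idea can be illustrated, in a simplified non-joint form, as a convex combination of a marginal and a conditional MMD term with an adaptively chosen weight. Everything below is a sketch under that assumption: the weighting rule shown (relative magnitude of the two terms) is one simple heuristic, not the paper's actual balance estimation.

```python
import numpy as np

def _gram(X, Y, sigma):
    d2 = np.sum(X**2, 1)[:, None] + np.sum(Y**2, 1)[None, :] - 2.0 * X @ Y.T
    return np.exp(-d2 / (2.0 * sigma**2))

def mmd2(X, Y, sigma=1.0):
    """Biased squared-MMD estimate with a Gaussian kernel."""
    return (_gram(X, X, sigma).mean() + _gram(Y, Y, sigma).mean()
            - 2.0 * _gram(X, Y, sigma).mean())

def balanced_discrepancy(Xs, ys, Xt, yt, sigma=1.0):
    """Convex combination of marginal and per-class (conditional) MMD.

    The balance factor mu is derived from the relative magnitude of the
    two terms -- an illustrative adaptive rule only.
    """
    marginal = mmd2(Xs, Xt, sigma)
    classes = np.intersect1d(np.unique(ys), np.unique(yt))
    conditional = float(np.mean(
        [mmd2(Xs[ys == c], Xt[yt == c], sigma) for c in classes]))
    mu = marginal / (marginal + conditional + 1e-12)
    return (1.0 - mu) * marginal + mu * conditional
```

When the marginal gap dominates, mu shifts weight toward aligning the conditionals that are already close, and vice versa; in a deep network this scalar would weight the discrepancy terms attached to the domain-specific layers.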


2022 ◽  
Vol 40 (1) ◽  
pp. 1-29
Author(s):  
Hanrui Wu ◽  
Qingyao Wu ◽  
Michael K. Ng

Domain adaptation aims at improving the performance of learning tasks in a target domain by leveraging the knowledge extracted from a source domain. To this end, one can perform knowledge transfer between these two domains. However, this problem becomes extremely challenging when the data of these two domains are characterized by different types of features, i.e., the feature spaces of the source and target domains are different, which is referred to as heterogeneous domain adaptation (HDA). To solve this problem, we propose a novel model called Knowledge Preserving and Distribution Alignment (KPDA), which learns an augmented target space by jointly minimizing information loss and maximizing domain distribution alignment. Specifically, we seek to discover a latent space, where the knowledge is preserved by exploiting the Laplacian graph terms and reconstruction regularizations. Moreover, we adopt the Maximum Mean Discrepancy to align the distributions of the source and target domains in the latent space. Mathematically, KPDA is formulated as a minimization problem with orthogonal constraints, which involves two projection variables. Then, we develop an algorithm based on the Gauss–Seidel iteration scheme and split the problem into two subproblems, which are solved by searching algorithms based on the Barzilai–Borwein (BB) stepsize. Promising results demonstrate the effectiveness of the proposed method.
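
The Barzilai–Borwein stepsize used in the subproblem solvers can be sketched on a generic smooth objective. This is the standard BB1 rule (stepsize from successive iterate and gradient differences), not the paper's full Gauss–Seidel scheme with orthogonality constraints; function and variable names are illustrative.

```python
import numpy as np

def bb_descent(grad, x0, alpha0=0.1, iters=100):
    """Gradient descent with the Barzilai-Borwein (BB1) stepsize.

    alpha_k = (s^T s) / (s^T y), where s = x_k - x_{k-1} and
    y = grad(x_k) - grad(x_{k-1}).
    """
    x_prev = np.asarray(x0, dtype=float)
    g_prev = grad(x_prev)
    x = x_prev - alpha0 * g_prev          # one plain step to initialize BB
    for _ in range(iters):
        g = grad(x)
        s, y = x - x_prev, g - g_prev     # iterate and gradient differences
        denom = s @ y
        alpha = (s @ s) / denom if abs(denom) > 1e-12 else alpha0
        x_prev, g_prev = x, g
        x = x - alpha * g
    return x
```

On a strictly convex quadratic, for example f(x) = 0.5 x^T A x - b^T x with gradient A x - b, the iteration converges to A^{-1} b without any line search, which is what makes the BB stepsize attractive inside iterative schemes like the one described above.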


Author(s):  
Wei Wang ◽  
Haojie Li ◽  
Zhengming Ding ◽  
Feiping Nie ◽  
Junyang Chen ◽  
...  

2020 ◽  
Vol 22 (9) ◽  
pp. 2420-2433
Author(s):  
Hongliang Yan ◽  
Zhetao Li ◽  
Qilong Wang ◽  
Peihua Li ◽  
Yong Xu ◽  
...  

2020 ◽  
Vol 2020 ◽  
pp. 1-11
Author(s):  
Bingru Yang ◽  
Qi Li ◽  
Liang Chen ◽  
Changqing Shen ◽  
Sundararajan Natarajan

Bearing fault diagnosis plays a vitally important role in practical industrial scenarios. Deep learning-based fault diagnosis methods are usually built on the hypothesis that the training set and the test set obey the same probability distribution, which is hard to satisfy under actual working conditions. This paper proposes a novel multilayer domain adaptation (MLDA) method, which can diagnose compound faults and single faults of multiple sizes simultaneously. A specially designed residual network for the fault diagnosis task is pretrained to extract domain-invariant features. Multikernel maximum mean discrepancy (MK-MMD) and pseudo-label learning are adopted in multiple layers to take both marginal and conditional distributions into consideration. A total of 12 transfer tasks in the fault diagnosis problem are conducted to verify the performance of MLDA. Comparisons across different signal processing methods, parameter settings, and models show that the proposed MLDA model can effectively extract domain-invariant features and achieve satisfying results.
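
A multi-kernel MMD can be sketched as a weighted mix of Gaussian-kernel MMD estimates over a bank of bandwidths; uniform kernel weights are used here for simplicity, whereas learned weights, bandwidth selection, and which layers the term is attached to are design choices of the method and are not reproduced here.

```python
import numpy as np

def _gram(X, Y, sigma):
    d2 = np.sum(X**2, 1)[:, None] + np.sum(Y**2, 1)[None, :] - 2.0 * X @ Y.T
    return np.exp(-d2 / (2.0 * sigma**2))

def mk_mmd2(X, Y, sigmas=(0.25, 0.5, 1.0, 2.0, 4.0)):
    """Multi-kernel MMD: uniformly weighted Gaussian-kernel MMD estimates.

    Using several bandwidths makes the discrepancy sensitive to
    distribution differences at multiple scales.
    """
    total = 0.0
    for s in sigmas:
        total += (_gram(X, X, s).mean() + _gram(Y, Y, s).mean()
                  - 2.0 * _gram(X, Y, s).mean())
    return total / len(sigmas)
```

In a multilayer setup, one such term per adapted layer would be summed into the training loss, alongside a pseudo-label-based conditional term for the target data.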


Author(s):  
A. Paul ◽  
K. Vogt ◽  
F. Rottensteiner ◽  
J. Ostermann ◽  
C. Heipke

In this paper we deal with the problem of measuring the similarity between training and test datasets in the context of transfer learning (TL) for image classification. TL tries to transfer knowledge from a source domain, where labelled training samples are abundant but the data may follow a different distribution, to a target domain, where labelled training samples are scarce or even unavailable, assuming that the domains are related. In this way the requirements w.r.t. the availability of labelled training samples in the target domain are reduced. In particular, if no labelled target data are available, it is inherently difficult to find a robust measure of relatedness between the source and target domains. Such a measure is of crucial importance for the performance of TL, because knowledge transfer between unrelated data may lead to negative transfer, i.e. a decrease in classification performance after the transfer. We address the problem of measuring the relatedness between source and target datasets and investigate three different strategies to predict and, consequently, avoid negative transfer. The first strategy is based on circular validation. The second relies on the Maximum Mean Discrepancy (MMD) similarity metric, whereas the third is an extension of MMD which incorporates knowledge about the class labels in the source domain. Our method is evaluated using two different benchmark datasets. The experiments highlight the strengths and weaknesses of the investigated methods. We also show that these strategies make it possible to reduce the amount of negative transfer for a TL method and to generate a consistent performance improvement over the whole dataset.
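
The MMD-based relatedness strategy can be illustrated as a simple guard: estimate the MMD between source and target features and transfer only when it falls below a threshold. The threshold value, the feature representation, and the function names below are application-specific assumptions, not part of the paper's protocol.

```python
import numpy as np

def _gram(X, Y, sigma):
    d2 = np.sum(X**2, 1)[:, None] + np.sum(Y**2, 1)[None, :] - 2.0 * X @ Y.T
    return np.exp(-d2 / (2.0 * sigma**2))

def mmd2(X, Y, sigma=1.0):
    """Biased squared-MMD estimate between two feature samples."""
    return (_gram(X, X, sigma).mean() + _gram(Y, Y, sigma).mean()
            - 2.0 * _gram(X, Y, sigma).mean())

def should_transfer(Xs, Xt, threshold, sigma=1.0):
    """Guard against negative transfer: transfer only if the domains
    look related, i.e. their estimated MMD is below the threshold."""
    return mmd2(Xs, Xt, sigma) < threshold
```

The class-label-aware extension mentioned as the third strategy would replace the single marginal MMD with per-class terms on the source side; the threshold itself would have to be tuned, e.g. via the circular-validation strategy.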

