Cross domain adaptation by learning partially shared classifiers and weighting source data points in the shared subspaces

2016 ◽  
Vol 29 (6) ◽  
pp. 237-248 ◽  
Author(s):  
Hongqi Wang ◽  
Anfeng Xu ◽  
Shanshan Wang ◽  
Sunny Chughtai
2020 ◽  
Vol 39 (6) ◽  
pp. 8149-8159
Author(s):  
Ping Li ◽  
Zhiwei Ni ◽  
Xuhui Zhu ◽  
Juan Song

Domain adaptation (DA) aims to train a robust predictor by transferring rich knowledge from a well-labeled source domain in order to annotate a newly arriving target domain; however, the two domains are usually drawn from very different distributions. Most current methods either learn common features by matching inter-domain feature distributions and then train the classifier separately, or align inter-domain label distributions to obtain an adaptive classifier directly on the original features despite feature distortion. Moreover, intra-domain information may be greatly degraded during the DA process; i.e., source samples from different classes might grow closer together. To this end, this paper proposes a novel DA approach, referred to as inter-class distribution alienation and inter-domain distribution alignment based on manifold embedding (IDAME). Specifically, IDAME adapts the classifier on the Grassmann manifold using structural risk minimization, where inter-domain feature distributions are aligned to mitigate feature distortion and the target pseudo labels are obtained from distances on the Grassmann manifold. During the classifier adaptation process, we simultaneously consider inter-class distribution alienation, inter-domain distribution alignment, and manifold consistency. Extensive experiments validate that IDAME outperforms several comparative state-of-the-art methods on real-world cross-domain image datasets.
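The alignment and pseudo-labeling ingredients can be illustrated with a minimal NumPy sketch. The linear-kernel MMD term and the Euclidean nearest-centroid pseudo labels below are simplified stand-ins (the paper uses distances on the Grassmann manifold), and the function names are illustrative only:

import numpy as np

def mmd_linear(Xs, Xt):
    # Empirical Maximum Mean Discrepancy with a linear kernel: squared
    # Euclidean distance between the source and target feature means.
    return float(np.sum((Xs.mean(axis=0) - Xt.mean(axis=0)) ** 2))

def pseudo_label_by_centroid(Xs, ys, Xt):
    # Assign each target sample the label of its nearest source class
    # centroid; a Euclidean stand-in for the manifold-distance-based
    # pseudo labels described in the abstract.
    classes = np.unique(ys)
    centroids = np.stack([Xs[ys == c].mean(axis=0) for c in classes])
    dists = ((Xt[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
    return classes[dists.argmin(axis=1)]

In a full pipeline such terms would enter the structural-risk-minimization objective as regularizers alongside the classification loss.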


Author(s):  
Si Wu ◽  
Jian Zhong ◽  
Wenming Cao ◽  
Rui Li ◽  
Zhiwen Yu ◽  
...  

For unsupervised domain adaptation, the process of learning domain-invariant representations can be dominated by the labeled source data, so that the specific characteristics of the target domain may be ignored. To improve performance in inferring target labels, we propose a target-specific network that learns collaboratively with a domain adaptation network, instead of directly minimizing domain discrepancy. A clustering regularization is also utilized to improve the generalization capability of the target-specific network by forcing target data points to lie close to accumulated class centers. As this network learns and specializes to the target domain, its performance in inferring target labels improves, which in turn facilitates the learning process of the adaptation network; the two networks are therefore mutually beneficial. We perform extensive experiments on multiple digit and object datasets, and the effectiveness and superiority of the proposed approach are verified on multiple visual adaptation benchmarks, e.g., we improve the state-of-the-art on the task of MNIST→SVHN from 76.5% to 84.9% without specific augmentation.
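A minimal sketch of the clustering regularization described above, assuming target pseudo labels index the rows of a running class-center matrix; the function name and the moving-average update are illustrative, not the authors' exact formulation:

import numpy as np

def clustering_regularizer(feats, pseudo_labels, centers, momentum=0.9):
    # Pull each target feature toward the accumulated center of its pseudo
    # class, then refresh that center with an exponential moving average so
    # the centers accumulate information across mini-batches.
    loss = 0.0
    for c in np.unique(pseudo_labels):
        batch_c = feats[pseudo_labels == c]
        loss += np.sum((batch_c - centers[c]) ** 2)
        centers[c] = momentum * centers[c] + (1 - momentum) * batch_c.mean(axis=0)
    return loss / len(feats), centers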


Sensors ◽  
2021 ◽  
Vol 21 (10) ◽  
pp. 3382
Author(s):  
Zhongwei Zhang ◽  
Mingyu Shao ◽  
Liping Wang ◽  
Sujuan Shao ◽  
Chicheng Ma

Rotating machinery is a key component for transmitting power and torque, so its fault diagnosis is crucial to guarantee the reliable operation of mechanical equipment. Regrettably, sample class imbalance is a common phenomenon in industrial applications; it causes large cross-domain distribution discrepancies for domain adaptation (DA) and degrades the performance of most existing mechanical fault diagnosis approaches. To address this issue, a novel DA approach, termed MRMI, is proposed that simultaneously reduces the cross-domain distribution difference and the geometric difference. This work addresses the sample class imbalance issue in three parts: (1) a novel distance metric method (MVD) is proposed and applied to improve the performance of marginal distribution adaptation; (2) manifold regularization is combined with instance reweighting to simultaneously explore the intrinsic manifold structure and adaptively remove irrelevant source-domain samples; (3) the ℓ2-norm regularization is applied as a data preprocessing tool to improve the model's generalization performance. Gear and rolling bearing datasets with class-imbalanced samples are used to validate the reliability of MRMI. According to the fault diagnosis results, MRMI significantly outperforms competitive approaches under sample class imbalance.
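Two of these building blocks are easy to sketch in NumPy; the binary k-nearest-neighbour affinity below is only one common choice of manifold-regularization graph, and the paper's MVD metric and instance-reweighting scheme are not reproduced here:

import numpy as np

def l2_normalize_rows(X, eps=1e-12):
    # Step (3): scale every sample to unit l2 norm before adaptation.
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    return X / np.maximum(norms, eps)

def knn_graph_laplacian(X, k=5):
    # Unnormalized graph Laplacian L = D - W over a symmetrized k-nearest-
    # neighbour graph, a standard ingredient of the manifold regularization
    # in step (2).
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=2)
    W = np.zeros_like(d2)
    idx = np.argsort(d2, axis=1)[:, 1:k + 1]   # skip column 0 (self)
    for i, nbrs in enumerate(idx):
        W[i, nbrs] = 1.0
    W = np.maximum(W, W.T)                     # symmetrize the adjacency
    return np.diag(W.sum(axis=1)) - W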


Author(s):  
Jiahua Dong ◽  
Yang Cong ◽  
Gan Sun ◽  
Yunsheng Yang ◽  
Xiaowei Xu ◽  
...  

Author(s):  
Sheng-Wei Huang ◽  
Che-Tsung Lin ◽  
Shu-Ping Chen ◽  
Yen-Yi Wu ◽  
Po-Hao Hsu ◽  
...  
