Cross-Domain Recommendation via Deep Domain Adaptation

Author(s):  
Heishiro Kanagawa,
Hayato Kobayashi,
Nobuyuki Shimizu,
Yukihiro Tagami,
Taiji Suzuki
Sensors, 2021, Vol. 21 (10), pp. 3382
Author(s):  
Zhongwei Zhang,
Mingyu Shao,
Liping Wang,
Sujuan Shao,
Chicheng Ma

As rotating machinery is a key component for transmitting power and torque, its fault diagnosis is crucial to guaranteeing the reliable operation of mechanical equipment. Regrettably, sample class imbalance is common in industrial applications; it causes large cross-domain distribution discrepancies for domain adaptation (DA) and degrades the performance of most existing mechanical fault diagnosis approaches. To address this issue, a novel DA approach, termed MRMI, is proposed that simultaneously reduces the cross-domain distribution difference and the geometric difference. This work addresses the class imbalance issue in three parts: (1) A novel distance metric method (MVD) is proposed and applied to improve the performance of marginal distribution adaptation. (2) Manifold regularization is combined with instance reweighting to simultaneously explore the intrinsic manifold structure and adaptively remove irrelevant source-domain samples. (3) ℓ2-norm regularization is applied as a data preprocessing step to improve the model's generalization performance. Gear and rolling bearing datasets with class-imbalanced samples are used to validate the reliability of MRMI. According to the fault diagnosis results, MRMI significantly outperforms competing approaches under sample class imbalance.
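The abstract does not spell out the MVD metric, but the overall recipe it describes (a marginal distribution distance, instance reweighting of source samples, and ℓ2-norm preprocessing) can be illustrated with a generic sketch. The snippet below uses the standard maximum mean discrepancy (MMD) as a stand-in distance metric, not the paper's proposed MVD, whose exact form is not given here; the function and variable names (weighted_mmd, l2_normalize, gamma) are illustrative assumptions.

```python
# Generic sketch of marginal distribution adaptation with instance
# reweighting. Uses MMD as the distance metric, NOT the paper's MVD.
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    """RBF kernel matrix between rows of X and rows of Y."""
    sq = np.sum(X**2, 1)[:, None] + np.sum(Y**2, 1)[None, :] - 2 * X @ Y.T
    return np.exp(-gamma * sq)

def weighted_mmd(Xs, Xt, ws=None, gamma=1.0):
    """Marginal distribution discrepancy between source Xs and target Xt.

    ws are per-sample source weights (e.g., from instance reweighting,
    down-weighting irrelevant source samples); they default to uniform.
    """
    ns, nt = len(Xs), len(Xt)
    ws = np.full(ns, 1.0 / ns) if ws is None else ws / ws.sum()
    wt = np.full(nt, 1.0 / nt)
    return (ws @ rbf_kernel(Xs, Xs, gamma) @ ws
            - 2 * ws @ rbf_kernel(Xs, Xt, gamma) @ wt
            + wt @ rbf_kernel(Xt, Xt, gamma) @ wt)

def l2_normalize(X):
    """l2-normalize each feature vector, as the preprocessing step suggests."""
    return X / np.linalg.norm(X, axis=1, keepdims=True)

rng = np.random.default_rng(0)
Xs = l2_normalize(rng.normal(0.0, 1.0, (100, 16)))  # source-domain features
Xt = l2_normalize(rng.normal(0.5, 1.0, (80, 16)))   # shifted target domain
print(weighted_mmd(Xs, Xt))
```

In practice the weights ws would be learned or updated iteratively so that source samples far from the target distribution contribute less to the adaptation objective.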


Author(s):  
Jiahua Dong,
Yang Cong,
Gan Sun,
Yunsheng Yang,
Xiaowei Xu,
...

Author(s):  
Sheng-Wei Huang,
Che-Tsung Lin,
Shu-Ping Chen,
Yen-Yi Wu,
Po-Hao Hsu,
...

Author(s):  
Shu Jiang,
Zuchao Li,
Hai Zhao,
Bao-Liang Lu,
Rui Wang

In recent years, research on dependency parsing has focused on improving accuracy on domain-specific (in-domain) test datasets and has made remarkable progress. However, the real world contains innumerable scenarios not covered by such datasets, namely out-of-domain data. As a result, parsers that perform well on in-domain data usually suffer significant performance degradation on out-of-domain data. Therefore, to adapt existing high-performance in-domain parsers to new domain scenarios, cross-domain transfer learning methods are essential. This paper examines two cross-domain transfer learning scenarios: semi-supervised and unsupervised. Specifically, we adopt the pre-trained language model BERT for training on the source-domain (in-domain) data at the subword level and introduce self-training methods derived from tri-training for these two scenarios. Evaluation results on the NLPCC-2019 shared task and the universal dependency parsing task indicate the effectiveness of the adopted approaches for cross-domain transfer learning and show the potential of self-training for cross-lingual transfer learning.
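As a rough illustration of the tri-training-style self-training described above, the sketch below uses simple scikit-learn classifiers as stand-ins for the BERT-based parsers. The agreement rule (pseudo-label an unlabeled target sample for one model when the other two models agree on it) is the classic tri-training idea; the paper's actual variants may differ, and all names here are illustrative assumptions.

```python
# Minimal tri-training-style self-training sketch for domain adaptation.
# LogisticRegression stands in for the paper's BERT-based parsers.
import numpy as np
from sklearn.linear_model import LogisticRegression

def tri_train(X_src, y_src, X_tgt, rounds=3, seed=0):
    rng = np.random.default_rng(seed)
    # Bootstrap three diverse models from the labeled source domain.
    models, data = [], []
    for _ in range(3):
        idx = rng.choice(len(X_src), len(X_src), replace=True)
        m = LogisticRegression(max_iter=1000).fit(X_src[idx], y_src[idx])
        models.append(m)
        data.append((X_src[idx], y_src[idx]))
    for _ in range(rounds):
        preds = [m.predict(X_tgt) for m in models]
        for i in range(3):
            j, k = [x for x in range(3) if x != i]
            agree = preds[j] == preds[k]  # the other two models agree
            if not agree.any():
                continue
            # Retrain model i on its source data plus the pseudo-labeled
            # target samples the other two models agree on.
            Xi = np.vstack([data[i][0], X_tgt[agree]])
            yi = np.concatenate([data[i][1], preds[j][agree]])
            models[i] = LogisticRegression(max_iter=1000).fit(Xi, yi)
    return models

# Toy usage: labeled source data and a shifted, unlabeled target domain.
rng = np.random.default_rng(1)
X_src = rng.normal(0.0, 1.0, (200, 8))
y_src = (X_src[:, 0] > 0).astype(int)
X_tgt = rng.normal(0.3, 1.0, (150, 8))
models = tri_train(X_src, y_src, X_tgt)
```

The semi-supervised scenario would additionally mix a small amount of labeled target data into each model's training set; the unsupervised scenario relies on the agreement rule alone.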


2019, Vol. 56 (11), pp. 112801
Author(s):  
Wenxiu Teng,
Ni Wang,
Taisheng Chen,
Benlin Wang,
Menglin Chen,
...
