Fusion topic model transfer learning for cross-domain recommendation

Author(s): Lei Wu, Wensheng Zhang, Jue Wang
Author(s): Supaporn Tantanasiriwong, Sumanta Guha, Paul Janecek, Choochart Haruechaiyasak, Leif Azzopardi
2018, Vol 23 (14), pp. 5431-5442
Author(s): Farhan Hassan Khan, Usman Qamar, Saba Bashir

Author(s): Shu Jiang, Zuchao Li, Hai Zhao, Bao-Liang Lu, Rui Wang

In recent years, research on dependency parsing has focused on improving accuracy on domain-specific (in-domain) test sets and has made remarkable progress. However, real-world text covers innumerable scenarios that such datasets do not, namely out-of-domain data. As a result, parsers that perform well on in-domain data usually suffer significant performance degradation on out-of-domain data. Therefore, to adapt existing high-performing in-domain parsers to a new domain, cross-domain transfer learning methods are essential for solving the domain problem in parsing. This paper examines two scenarios of cross-domain transfer learning: semi-supervised and unsupervised. Specifically, we adopt the pre-trained language model BERT for training on the source-domain (in-domain) data at the subword level and introduce self-training methods derived from tri-training for these two scenarios. Evaluation results on the NLPCC-2019 shared task and the universal dependency parsing task demonstrate the effectiveness of the adopted approaches for cross-domain transfer learning and show the potential of self-training for cross-lingual transfer learning.
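
The tri-training-style self-training described in the abstract can be illustrated with a minimal sketch: several learners are trained on the labeled source-domain data, and an unlabeled target-domain example is pseudo-labeled for one learner only when the other learners agree on its label. The sketch below uses generic scikit-learn classifiers and synthetic data as stand-ins for the BERT-based parser and the NLPCC-2019 corpora; all names, models, and data here are illustrative assumptions, not the authors' implementation.

```python
# Minimal tri-training-style self-training sketch for cross-domain adaptation.
# Generic classifiers stand in for the dependency parser; data is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.utils import resample

rng = np.random.RandomState(0)

# Labeled source-domain (in-domain) data and unlabeled target-domain data.
X_src = rng.normal(0.0, 1.0, size=(300, 20))
y_src = (X_src[:, 0] + X_src[:, 1] > 0).astype(int)
X_tgt = rng.normal(0.5, 1.2, size=(200, 20))   # shifted target distribution

# Train three learners on bootstrap samples of the source data.
learners = []
for seed in range(3):
    Xb, yb = resample(X_src, y_src, random_state=seed)
    learners.append(LogisticRegression(max_iter=1000).fit(Xb, yb))

# Tri-training loop: a target example is pseudo-labeled for learner i
# when the other two learners agree on its label.
for _ in range(3):  # a few self-training rounds
    preds = np.stack([clf.predict(X_tgt) for clf in learners])
    for i in range(3):
        j, k = [m for m in range(3) if m != i]
        agree = preds[j] == preds[k]
        if not agree.any():
            continue
        # Retrain learner i on source data plus agreed pseudo-labels.
        X_new = np.vstack([X_src, X_tgt[agree]])
        y_new = np.concatenate([y_src, preds[j][agree]])
        learners[i] = LogisticRegression(max_iter=1000).fit(X_new, y_new)

# Final target-domain prediction by majority vote of the three learners.
votes = np.stack([clf.predict(X_tgt) for clf in learners])
y_pred = (votes.sum(axis=0) >= 2).astype(int)
print("first ten target-domain predictions:", y_pred[:10])
```

Requiring agreement between two learners before adding a pseudo-label is the key design choice: it filters out noisy labels that a single learner's own confidence would otherwise admit, which matters when the target distribution is shifted from the source.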


2021
Author(s): Mingze Sun, Daiyue Xue, Weipeng Wang, Qifu Hu, Jianping Yu
