Low-Rank and Sparse Cross-Domain Recommendation Algorithm

Author(s):  
Zhi-Lin Zhao ◽  
Ling Huang ◽  
Chang-Dong Wang ◽  
Dong Huang
2019 ◽  
Vol 366 ◽  
pp. 86-96 ◽  
Author(s):  
Ling Huang ◽  
Zhi-Lin Zhao ◽  
Chang-Dong Wang ◽  
Dong Huang ◽  
Hong-Yang Chao
Keyword(s):  
Low Rank ◽  

IEEE Access ◽  
2018 ◽  
Vol 6 ◽  
pp. 62574-62583
Author(s):  
Xu Yu ◽  
Yu Fu ◽  
Lingwei Xu ◽  
Guozhu Liu

2013 ◽  
Vol 2013 ◽  
pp. 1-6
Author(s):  
Long Wang ◽  
Zhiyong Zeng ◽  
Ruizhi Li ◽  
Hua Pang

To address cross-domain personalized learning resource recommendation, this paper presents a new personalized learning resource recommendation method. First, the cross-domain learning resource recommendation model is given. Then, a method for extracting personalized information from web logs is designed, using a mixed interest measure that is also presented in this paper. Finally, a learning resource recommendation algorithm based on transfer learning is presented: a time function and a weight constraint on misclassified samples are added to the classic TrAdaBoost algorithm. The time function distinguishes the importance of samples by their age, and the weight constraint prevents any sample's weight from becoming too large or too small, improving both the accuracy and the efficiency of the algorithm. Experiments on a real-world dataset show that the proposed method effectively improves the quality and efficiency of learning resource recommendation services.
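The abstract does not give the exact form of the time function or the weight constraint. The following is a minimal sketch of how a TrAdaBoost-style weight update might incorporate a time factor and a weight bound; the function name, the exponential decay, and the clipping bounds are assumptions for illustration, not the paper's actual algorithm.

```python
import numpy as np

def updated_weights(w_src, w_tgt, err_src, err_tgt, t_src, beta_src, beta_tgt,
                    decay=0.1, w_min=1e-4, w_max=1.0):
    """One TrAdaBoost-style weight update with a time factor and weight clipping.

    w_src, w_tgt     : current sample weights for source / target domains
    err_src, err_tgt : 0/1 indicators of misclassification per sample
    t_src            : age of each source sample (larger = older)
    beta_src, beta_tgt : TrAdaBoost down-/up-weighting factors for this round
    """
    # Hypothetical time function: exponentially discount older source samples.
    time_factor = np.exp(-decay * t_src)

    # Classic TrAdaBoost step: shrink misclassified source weights,
    # grow misclassified target weights.
    w_src_new = w_src * np.power(beta_src, err_src) * time_factor
    w_tgt_new = w_tgt * np.power(beta_tgt, -err_tgt)

    # Weight constraint: keep weights in [w_min, w_max] to avoid extremes.
    w_src_new = np.clip(w_src_new, w_min, w_max)
    w_tgt_new = np.clip(w_tgt_new, w_min, w_max)

    # Renormalize so all weights again form a distribution.
    total = w_src_new.sum() + w_tgt_new.sum()
    return w_src_new / total, w_tgt_new / total
```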


Author(s):  
Xiangjun Shen ◽  
Jinghui Zhou ◽  
Zhongchen Ma ◽  
Bingkun Bao ◽  
Zhengjun Zha

Cross-domain data has become very popular recently, since multiple viewpoints and different sensors tend to yield better data representations. In this article, we propose a novel cross-domain object representation algorithm (RLRCA) that explores the complex relationships among variables through canonical correlation analysis (CCA) and uses a low-rank model to reduce the effect of noisy data. To the best of our knowledge, this is the first attempt to smoothly integrate CCA with a low-rank model so as to uncover correlated components across different domains while suppressing the effect of noisy or corrupted data. To improve the flexibility of the algorithm for various cross-domain object representation problems, two instantiations of RLRCA are proposed, from the feature space and the sample space, respectively. In this way, a better cross-domain object representation can be achieved by effectively learning the intrinsic CCA features and taking full advantage of cross-domain object alignment information while pursuing low-rank representations. Extensive experimental results on the CMU PIE, Office-Caltech, Pascal VOC 2007, and NUS-WIDE-Object datasets demonstrate that the proposed models outperform several state-of-the-art cross-domain low-rank methods in image clustering and classification tasks at various corruption levels.
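The abstract does not state the RLRCA objective explicitly. One schematic way to couple a CCA correlation term with a low-rank reconstruction and a sparse error term is sketched below; the weights λ and γ, the ℓ2,1 error norm, and the constraint structure are assumptions for illustration, not the authors' exact formulation.

```latex
% Schematic objective coupling a CCA correlation term with a low-rank
% reconstruction Z and a sparse error E (assumed form, not the authors' model).
\[
\begin{aligned}
\min_{W_x,\, W_y,\, Z,\, E}\;
  & -\operatorname{tr}\!\bigl( W_x^{\top} X Y^{\top} W_y \bigr)
    \;+\; \lambda \lVert Z \rVert_{*}
    \;+\; \gamma \lVert E \rVert_{2,1} \\
\text{s.t.}\;\;
  & W_x^{\top} X X^{\top} W_x = I,\qquad
    W_y^{\top} Y Y^{\top} W_y = I, \\
  & W_x^{\top} X = \bigl( W_y^{\top} Y \bigr) Z + E .
\end{aligned}
\]
```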


2021 ◽  
Vol 2021 ◽  
pp. 1-12
Author(s):  
Wenyun Gao ◽  
Sheng Dai ◽  
Stanley Ebhohimhen Abhadiomhen ◽  
Wei He ◽  
Xinghui Yin

Correlation learning is a technique for finding a common representation across cross-domain and multiview datasets. However, most existing methods are not robust enough to handle noisy data, so the learned common representation matrix can easily be influenced by noisy samples in different instances of the data. In this paper, we propose a novel correlation learning method based on low-rank representation, which learns a common representation between two instances of data in a latent subspace. Specifically, we first learn a low-rank representation matrix and an orthogonal rotation matrix to handle the noisy samples in one instance of the data, so that a second instance of the data can linearly reconstruct the low-rank representation. Our method then finds a similarity matrix that better approximates the common low-rank representation, such that a rank constraint on the Laplacian matrix reveals the clustering structure explicitly without any spectral postprocessing. Extensive experimental results on the ORL, Yale, Coil-20, Caltech 101-20, and UCI digits datasets demonstrate that our method outperforms other state-of-the-art methods on six evaluation metrics.
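As a small illustration of the rank constraint mentioned above: for an undirected similarity graph with exactly c connected components, the graph Laplacian has rank n − c, so cluster membership can be read off directly without spectral postprocessing. The toy check below is a hypothetical example of that standard fact, not the authors' code.

```python
import numpy as np

def laplacian_rank_demo():
    # Block-diagonal similarity: two clusters of sizes 3 and 2 (toy example).
    S = np.zeros((5, 5))
    S[:3, :3] = 1.0
    S[3:, 3:] = 1.0
    np.fill_diagonal(S, 0.0)

    D = np.diag(S.sum(axis=1))   # degree matrix
    L = D - S                    # unnormalized graph Laplacian

    n, c = S.shape[0], 2
    rank_L = np.linalg.matrix_rank(L)
    print(f"rank(L) = {rank_L}, n - c = {n - c}")   # both equal 3

laplacian_rank_demo()
```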

