Asymmetric Pairwise Preference Learning for Heterogeneous One-Class Collaborative Filtering

Author(s):  
Yongxin Ni ◽  
Zhuoxin Zhan ◽  
Weike Pan ◽  
Zhong Ming
2022 ◽  
Vol 40 (1) ◽  
pp. 1-22
Author(s):  
Lianghao Xia ◽  
Chao Huang ◽  
Yong Xu ◽  
Huance Xu ◽  
Xiang Li ◽  
...  

As deep learning techniques have expanded to real-world recommendation tasks, many deep neural network based Collaborative Filtering (CF) models have been developed to project user-item interactions into a latent feature space, based on various neural architectures such as the multi-layer perceptron, the autoencoder, and graph neural networks. However, the majority of existing collaborative filtering systems are not well designed to handle missing data. In particular, in order to inject negative signals into the training phase, these solutions largely rely on negative sampling from unobserved user-item interactions, simply treating the sampled pairs as negative instances, which degrades recommendation performance. To address these issues, we develop a Collaborative Reflection-Augmented Autoencoder Network (CRANet), which is capable of exploring transferable knowledge from both observed and unobserved user-item interactions. The network architecture of CRANet is formed of an integrative structure with a reflective receptor network and an information fusion autoencoder module, which endows our recommendation framework with the ability to encode users' implicit pairwise preferences on both interacted and non-interacted items. Additionally, a parametric regularization-based tied-weight scheme is designed to perform robust joint training of the two-stage CRANet model. We finally experimentally validate CRANet on four diverse benchmark datasets corresponding to two recommendation tasks, showing that debiasing the negative signals of user-item interactions improves performance as compared to various state-of-the-art recommendation techniques. Our source code is available at https://github.com/akaxlh/CRANet.
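The negative-sampling heuristic this abstract critiques, drawing unobserved user-item pairs and treating them as negatives, can be sketched as follows. This is a generic illustration of the baseline practice, not CRANet's method; the function name and parameters are hypothetical.

```python
import random

def sample_negatives(interactions, num_items, num_neg=4, seed=0):
    """Uniformly sample unobserved items as 'negatives' for each observed
    (user, item) pair -- the common heuristic the abstract critiques:
    unobserved interactions are treated as true negatives even though
    some may simply be items the user has never been exposed to."""
    rng = random.Random(seed)
    observed = {}  # user -> set of interacted items
    for u, i in interactions:
        observed.setdefault(u, set()).add(i)
    triples = []
    for u, i in interactions:
        for _ in range(num_neg):
            j = rng.randrange(num_items)
            while j in observed[u]:  # resample until the item is unobserved
                j = rng.randrange(num_items)
            triples.append((u, i, j))  # (user, positive item, assumed negative)
    return triples
```

Models trained on such triples implicitly assume every sampled `j` is disliked; CRANet's stated goal is to avoid that assumption by learning from unobserved interactions directly.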


2019 ◽  
Vol 215 ◽  
pp. 767-783 ◽  
Author(s):  
Mehrbakhsh Nilashi ◽  
Ali Ahani ◽  
Mohammad Dalvi Esfahani ◽  
Elaheh Yadegaridehkordi ◽  
Sarminah Samad ◽  
...  

2014 ◽  
Vol 26 (12) ◽  
pp. 2896-2924 ◽  
Author(s):  
Hong Li ◽  
Chuanbao Ren ◽  
Luoqing Li

Preference learning has attracted great attention in machine learning. In this letter we propose a learning framework for pairwise losses based on empirical risk minimization over U-processes via Rademacher complexity. We first establish a uniform version of the Bernstein inequality for U-processes of degree 2 via entropy methods. We then bound the excess risk using the Bernstein inequality and peeling techniques. Finally, we apply the excess risk bound to pairwise preference learning and derive convergence rates for pairwise preference learning algorithms with the squared loss and the indicator loss, using empirical risk minimization with respect to U-processes.
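For orientation, the pairwise empirical risk referred to here can be written as a degree-2 U-statistic over the sample; the notation below is a reconstruction from the abstract's description, not the paper's own.

```latex
% Pairwise empirical risk as a degree-2 U-statistic over a sample
% z_1, \dots, z_n with z_i = (x_i, y_i):
R_n(f) = \frac{2}{n(n-1)} \sum_{1 \le i < j \le n} \ell\big(f; z_i, z_j\big)

% Squared pairwise loss:
\ell\big(f; z_i, z_j\big) = \Big( (y_i - y_j) - \big(f(x_i) - f(x_j)\big) \Big)^2

% Indicator (0-1) pairwise loss:
\ell\big(f; z_i, z_j\big) = \mathbb{1}\Big[ (y_i - y_j)\,\big(f(x_i) - f(x_j)\big) \le 0 \Big]
```

Because each summand depends on a *pair* of samples, the summands are not independent, which is why the letter needs U-process versions of the Bernstein inequality rather than the classical i.i.d. form.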


2019 ◽  
Vol 6 (1) ◽  
pp. 329-354 ◽  
Author(s):  
Qinghua Liu ◽  
Marta Crispino ◽  
Ida Scheel ◽  
Valeria Vitelli ◽  
Arnoldo Frigessi

Preference data occur when assessors express comparative opinions about a set of items, by rating, ranking, pair comparing, liking, or clicking. The purpose of preference learning is to (a) infer the shared consensus preference of a group of users, sometimes called rank aggregation, or (b) estimate each user's individual ranking of the items when the user indicates only incomplete preferences; the latter is an important part of recommender systems. We provide an overview of probabilistic approaches to preference learning, including the Mallows, Plackett–Luce, and Bradley–Terry models and collaborative filtering, and some of their variations. We illustrate, compare, and discuss the use of these methods by means of an experiment in which assessors rank potatoes, and with a simulation. The purpose of this article is not to recommend one best method but to present a palette of different possibilities for different questions and different types of data.
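As a concrete illustration of one of the surveyed models, a minimal Bradley–Terry fit by maximum likelihood might look as follows. The plain gradient-ascent optimizer and function names are illustrative choices, not the article's implementation.

```python
import math

def bt_prob(theta_i, theta_j):
    """Bradley-Terry probability that item i is preferred to item j:
    P(i > j) = exp(theta_i) / (exp(theta_i) + exp(theta_j))."""
    return 1.0 / (1.0 + math.exp(theta_j - theta_i))

def fit_bradley_terry(comparisons, num_items, lr=0.05, epochs=200):
    """Fit item scores theta by maximizing the log-likelihood of observed
    pairwise outcomes with plain gradient ascent.
    `comparisons` is a list of (winner, loser) index pairs."""
    theta = [0.0] * num_items
    for _ in range(epochs):
        grad = [0.0] * num_items
        for w, l in comparisons:
            p = bt_prob(theta[w], theta[l])
            grad[w] += 1.0 - p   # d log P(w > l) / d theta_w
            grad[l] -= 1.0 - p   # d log P(w > l) / d theta_l
        for k in range(num_items):
            theta[k] += lr * grad[k]
    return theta
```

The fitted scores are identifiable only up to an additive constant, so implementations typically anchor one score at zero or center the vector; the Plackett–Luce model extends the same exponential-score idea from pairs to full or partial rankings.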

