Recommender Systems
Recently Published Documents

2022, Vol 40 (2), pp. 1-31
Masoud Mansoury, Himan Abdollahpouri, Mykola Pechenizkiy, Bamshad Mobasher, Robin Burke

Fairness is a critical system-level objective in recommender systems that has been the subject of extensive recent research. A specific form of fairness is supplier exposure fairness, where the objective is to ensure equitable coverage of items across all suppliers in the recommendations provided to users. This is especially important in multistakeholder recommendation scenarios, where it may be important to optimize utilities not just for the end user but also for other stakeholders, such as item sellers or producers, who desire a fair representation of their items. This type of supplier fairness is sometimes accomplished by attempting to increase aggregate diversity in order to mitigate popularity bias and to improve the coverage of long-tail items in recommendations. In this article, we introduce FairMatch, a general graph-based algorithm that works as a post-processing approach after recommendation generation to improve exposure fairness for items and suppliers. The algorithm iteratively adds high-quality items that have low visibility, or items from suppliers with low exposure, to the users' final recommendation lists. A comprehensive set of experiments on two datasets and a comparison with state-of-the-art baselines show that FairMatch significantly improves exposure fairness and aggregate diversity while maintaining an acceptable level of recommendation relevance.
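The iterative post-processing idea can be sketched as a greedy exposure-aware reranker (a hypothetical simplification, not the authors' graph-based FairMatch; the function and parameter names here are invented): each user's candidate list is re-scored by trading predicted relevance against how much exposure each item has already accumulated across users.

```python
from collections import defaultdict

def fairness_rerank(candidates, exposure, k, lam=0.5):
    """Greedily pick k items, trading predicted relevance against how
    often each item has already been recommended to earlier users.

    candidates: list of (item_id, relevance_score) for one user
    exposure:   dict item_id -> times recommended so far (shared state)
    """
    chosen = []
    pool = dict(candidates)
    for _ in range(k):
        # score = relevance minus a penalty that grows with prior exposure
        best = max(pool, key=lambda i: pool[i] - lam * exposure[i])
        chosen.append(best)
        exposure[best] += 1
        del pool[best]
    return chosen

exposure = defaultdict(int)
# Two users with identical candidate lists: the second user's list
# drifts toward the long-tail item the first user already "used up".
print(fairness_rerank([("i1", 0.9), ("i2", 0.8), ("i3", 0.7)], exposure, k=2))
print(fairness_rerank([("i1", 0.9), ("i2", 0.8), ("i3", 0.7)], exposure, k=2))
```

The penalty weight `lam` plays the role of the relevance/fairness trade-off that the abstract describes as maintaining "an acceptable level of relevance".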

2022, Vol 16 (2), pp. 1-37
Hangbin Zhang, Raymond K. Wong, Victor W. Chu

E-commerce platforms rely heavily on automatic personalized recommender systems, e.g., collaborative filtering models, to improve customer experience. Some hybrid models have been proposed recently to address the deficiencies of existing models. However, their performance drops significantly when the dataset is sparse. Most recent works fail to fully address this shortcoming; at best, some alleviate the problem by considering either user-side or item-side content information. In this article, we propose a novel recommender model called Hybrid Variational Autoencoder (HVAE) to improve performance on sparse datasets. Unlike existing approaches, we encode both user and item information into a latent space for semantic relevance measurement. In parallel, we utilize collaborative filtering to find the implicit factors of users and items, and combine the outputs to deliver a hybrid solution. In addition, we compare the performance of the Gaussian distribution and the multinomial distribution in learning representations of the textual data. Our experimental results show that HVAE significantly outperforms state-of-the-art models, with robust performance.
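As a loose illustration of the hybrid idea (a sketch only, not the paper's variational architecture; the function names and the blending weight `alpha` are invented), a scoring rule can back up sparse collaborative-filtering factors with a similarity over content embeddings of the user and item text:

```python
import math

def cosine(u, v):
    """Cosine similarity between two dense content embeddings."""
    num = sum(a * b for a, b in zip(u, v))
    den = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return num / den if den else 0.0

def hybrid_score(cf_user, cf_item, txt_user, txt_item, alpha=0.7):
    """Blend a collaborative-filtering dot product with a content-based
    similarity, so items with few interactions still get a signal from
    their textual side information."""
    cf = sum(a * b for a, b in zip(cf_user, cf_item))
    return alpha * cf + (1 - alpha) * cosine(txt_user, txt_item)
```

When interaction data is sparse, the CF term carries little information and the content term dominates, which is the intuition behind combining the two channels.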

2022, Vol 17 (1), pp. 46-58
Ludovik Coba, Roberto Confalonieri, Markus Zanker

2022, Vol 43, pp. 100439
Yassine Himeur, Aya Sayed, Abdullah Alsalemi, Faycal Bensaali, Abbes Amira

2022, Vol 40 (1), pp. 1-26
Shanlei Mu, Yaliang Li, Wayne Xin Zhao, Siqing Li, Ji-Rong Wen

In recommender systems, it is essential to understand the underlying factors that affect user-item interaction. Recently, several studies have utilized disentangled representation learning to discover such hidden factors from user-item interaction data, with promising results. However, without any external guidance signal, the learned disentangled representations lack clear meanings and are prone to the data sparsity issue. In light of these challenges, we study how to leverage a knowledge graph (KG) to guide disentangled representation learning in recommender systems. The purpose of incorporating a KG is twofold: making the disentangled representations interpretable and mitigating the data sparsity issue. However, it is not straightforward to incorporate a KG to improve disentangled representations, because a KG has very different data characteristics compared with user-item interactions. We propose a novel Knowledge-guided Disentangled Representations (KDR) approach that utilizes a KG to guide disentangled representation learning in recommender systems. The basic idea is to first learn more interpretable disentangled dimensions (explicit disentangled representations) based on the structural KG, and then align the implicit disentangled representations learned from user-item interactions with the explicit ones. We design a novel alignment strategy based on mutual information maximization. It enables the KG information to guide the implicit disentangled representation learning, so that the learned disentangled representations correspond to semantic information derived from the KG. Finally, the fused disentangled representations are optimized to improve recommendation performance. Extensive experiments on three real-world datasets demonstrate the effectiveness of the proposed model in terms of both performance and interpretability.
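The alignment-by-mutual-information idea can be sketched with an InfoNCE-style objective (a minimal illustration with invented names, not necessarily the exact KDR loss): each implicit representation should score its own KG-derived explicit counterpart higher than the other items' counterparts in the batch, which maximizes a lower bound on the mutual information between the two views.

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def mi_alignment_loss(implicit, explicit, temp=0.1):
    """InfoNCE-style alignment: for each implicit vector, treat its
    matching explicit (KG-derived) vector as the positive and all other
    explicit vectors in the batch as negatives."""
    loss = 0.0
    for i, z in enumerate(implicit):
        logits = [dot(z, e) / temp for e in explicit]
        m = max(logits)  # stabilise the log-sum-exp
        log_sum = m + math.log(sum(math.exp(l - m) for l in logits))
        loss += log_sum - logits[i]  # -log softmax of the matching pair
    return loss / len(implicit)
```

Driving this loss down pulls each implicit dimension toward the semantics of its explicit, KG-grounded counterpart, which is what makes the learned factors interpretable.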

2022, Vol 40 (1), pp. 1-22
Amir H. Jadidinejad, Craig Macdonald, Iadh Ounis

Recommendation systems are often evaluated on users' interactions collected from an existing, already deployed recommendation system. In this situation, users only provide feedback on the exposed items and may leave no feedback on other items, since the deployed system never exposed them to those items. As a result, the collected feedback dataset used to evaluate a new model is influenced by the deployed system, as a form of closed-loop feedback. In this article, we show that the typical offline evaluation of recommender systems suffers from the so-called Simpson's paradox. Simpson's paradox is the name given to a phenomenon observed when a significant trend appears in several different sub-populations of observational data but disappears, or is even reversed, when these sub-populations are combined. Our in-depth experiments based on stratified sampling reveal that a very small minority of items that are frequently exposed by the deployed system acts as a confounding factor in the offline evaluation of recommendation systems. In addition, we propose a novel evaluation methodology that takes this confounder, i.e., the deployed system's characteristics, into account. Using the relative comparison of many recommendation models, as in the typical offline evaluation of recommender systems, and based on the Kendall rank correlation coefficient, we show that our proposed evaluation methodology exhibits statistically significant improvements of 14% and 40% on the examined open-loop datasets (Yahoo! and Coat), respectively, in reflecting the true ranking of systems obtained under an open-loop (randomised) evaluation, in comparison to the standard evaluation.
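Simpson's paradox is easy to reproduce with toy numbers (invented here, not taken from the paper): a model can win within every popularity stratum yet lose in the pooled comparison, because the deployed system's exposure skews how many test interactions each model accumulates in each stratum.

```python
# Hypothetical hit counts: (hits, trials) per model, stratified by
# item popularity. Model A beats model B in BOTH strata...
strata = {
    "head": {"A": (90, 100),   "B": (850, 1000)},  # popular items
    "tail": {"A": (300, 1000), "B": (25, 100)},    # long-tail items
}

def hit_rate(hits, trials):
    return hits / trials

for name, models in strata.items():
    a, b = hit_rate(*models["A"]), hit_rate(*models["B"])
    print(f"{name}: A={a:.2f}  B={b:.2f}  -> {'A' if a > b else 'B'} wins")

# ...but pooling the strata reverses the conclusion, because most of
# B's trials fall in the easy "head" stratum and most of A's in the
# hard "tail" stratum.
pooled = {m: hit_rate(sum(strata[s][m][0] for s in strata),
                      sum(strata[s][m][1] for s in strata))
          for m in ("A", "B")}
print(f"pooled: A={pooled['A']:.2f}  B={pooled['B']:.2f}  "
      f"-> {'A' if pooled['A'] > pooled['B'] else 'B'} wins")
```

Stratified sampling, as used in the article's experiments, exposes exactly this kind of reversal that a pooled offline metric hides.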

2022
Pablo Sánchez, Alejandro Bellogín

Point-of-Interest recommendation is a growing research and development area within the widely adopted technologies known as Recommender Systems. Among these, approaches that exploit information from Location-Based Social Networks (LBSNs) are very popular nowadays and can work with different information sources, which poses several challenges and research questions to the community as a whole. We present a systematic review of the research done on this topic in the last 10 years. We discuss and categorize the algorithms and evaluation methodologies used in these works and point out the opportunities and challenges that remain open in the field. More specifically, we report the leading recommendation techniques and the information sources that have been exploited most often (such as the geographical signal and deep learning approaches), while also warning about a lack of reproducibility in the field that may hinder real performance improvements.

AI Magazine, 2022, Vol 42 (3), pp. 43-54
Paolo Cremonesi, Dietmar Jannach

Scholars in algorithmic recommender systems research have developed a largely standardized scientific method, where progress is claimed by showing that a new algorithm outperforms existing ones on one or more accuracy measures. In theory, reproducing and thereby verifying such improvements is easy, as it merely involves the execution of the experiment code on the same data. However, as recent work shows, the reported progress is often only virtual, because of a number of issues related to (i) a lack of reproducibility, (ii) technical and theoretical flaws, and (iii) scholarship practices that are strongly prone to researcher biases. As a result, several recent works have shown that the latest published algorithms actually do not outperform existing methods when evaluated independently. Despite these issues, we currently see no signs of a crisis in which researchers re-think their scientific method, but rather a situation of stagnation in which researchers continue to focus on the same topics. In this paper, we discuss these issues, analyze their potential underlying reasons, and outline a set of guidelines to ensure progress in recommender systems research.
