Relation Representation Learning Via Signed Graph Mutual Information Maximization for Trust Prediction

Symmetry ◽  
2021 ◽  
Vol 13 (1) ◽  
pp. 115
Author(s):  
Yongjun Jing ◽  
Hao Wang ◽  
Kun Shao ◽  
Xing Huo

Trust prediction is essential to enhancing reliability and reducing risk from unreliable nodes, especially for online applications in open network environments. An essential step in trust prediction is to measure the relation between the interacting entities accurately. However, most existing methods infer the trust relation between interacting entities by modeling the similarity between nodes on a graph, ignoring semantic relations and the influence of negative links (e.g., distrust relations). In this paper, we propose a relation representation learning method via signed graph mutual information maximization (called SGMIM). In SGMIM, we incorporate a translation model and positive point-wise mutual information to enhance the relation representations, and we adopt mutual information maximization to align the entity and relation semantic spaces. Moreover, we develop a sign prediction model for making accurate trust predictions, performing link sign prediction in trust networks based on the learned relation representations. Extensive experimental results on four real-world datasets for the trust prediction task show that SGMIM significantly outperforms state-of-the-art baseline methods.
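The abstract names three generic building blocks: positive point-wise mutual information (PPMI), a translation model over (entity, relation, entity) triples, and a mutual information maximization objective aligning the entity and relation spaces. The sketch below illustrates those generic ingredients only; it is not the authors' SGMIM code, and all function names, shapes, and the temperature value are illustrative assumptions.

```python
# Hedged sketch of the generic ingredients named in the abstract (not SGMIM itself):
# a PPMI matrix, a TransE-style translation score for triples, and an InfoNCE-style
# lower bound on mutual information used to align entity and relation representations.
import torch
import torch.nn.functional as F

def ppmi(adj, eps=1e-8):
    """Positive point-wise mutual information of a co-occurrence/adjacency matrix."""
    p_ij = adj / (adj.sum() + eps)
    p_i = p_ij.sum(dim=1, keepdim=True)
    p_j = p_ij.sum(dim=0, keepdim=True)
    pmi = torch.log((p_ij + eps) / (p_i * p_j + eps))
    return torch.clamp(pmi, min=0.0)

def translation_score(h, r, t):
    """TransE-style score: a smaller distance ||h + r - t|| means a more plausible triple."""
    return torch.norm(h + r - t, p=2, dim=-1)

def infonce_mi_loss(entity_emb, relation_emb, temperature=0.1):
    """InfoNCE-style MI maximization: row i of each matrix is a positive pair,
    all other rows in the batch act as negatives."""
    e = F.normalize(entity_emb, dim=-1)
    r = F.normalize(relation_emb, dim=-1)
    logits = e @ r.t() / temperature          # (batch, batch) similarity matrix
    targets = torch.arange(e.size(0))         # positives lie on the diagonal
    return F.cross_entropy(logits, targets)

# Toy usage with random embeddings (batch of 32 triples, 64-dimensional).
h, r, t = (torch.randn(32, 64, requires_grad=True) for _ in range(3))
loss = translation_score(h, r, t).mean() + infonce_mi_loss(h, r)
loss.backward()
```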

2022 ◽  
Vol 40 (1) ◽  
pp. 1-26
Author(s):  
Shanlei Mu ◽  
Yaliang Li ◽  
Wayne Xin Zhao ◽  
Siqing Li ◽  
Ji-Rong Wen

In recommender systems, it is essential to understand the underlying factors that affect user-item interaction. Recently, several studies have utilized disentangled representation learning to discover such hidden factors from user-item interaction data, which shows promising results. However, without any external guidance signal, the learned disentangled representations lack clear meanings and easily suffer from the data sparsity issue. In light of these challenges, we study how to leverage a knowledge graph (KG) to guide the disentangled representation learning in recommender systems. The purpose of incorporating the KG is twofold: making the disentangled representations interpretable and resolving the data sparsity issue. However, it is not straightforward to incorporate a KG for improving disentangled representations, because a KG has very different data characteristics compared with user-item interactions. We propose a novel Knowledge-guided Disentangled Representations approach (KDR) that utilizes the KG to guide the disentangled representation learning in recommender systems. The basic idea is to first learn more interpretable disentangled dimensions (explicit disentangled representations) based on the structural KG, and then align the implicit disentangled representations learned from user-item interactions with the explicit disentangled representations. We design a novel alignment strategy based on mutual information maximization. It enables the KG information to guide the implicit disentangled representation learning, and the learned disentangled representations will correspond to semantic information derived from the KG. Finally, the fused disentangled representations are optimized to improve the recommendation performance. Extensive experiments on three real-world datasets demonstrate the effectiveness of the proposed model in terms of both performance and interpretability.
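As a rough illustration of the alignment strategy described above, the snippet below aligns each implicit factor (learned from interactions) with the corresponding explicit, KG-derived factor using an InfoNCE-style mutual information bound. It is a sketch under assumed names and tensor shapes, not the paper's actual KDR implementation.

```python
# Hedged sketch (assumed shapes/names, not the KDR code): factor-wise alignment of
# implicit and explicit disentangled representations via an InfoNCE-style MI bound.
import torch
import torch.nn.functional as F

def factorwise_mi_alignment(implicit, explicit, temperature=0.2):
    """implicit, explicit: (batch, n_factors, dim) disentangled representations;
    factor k of entry i in `implicit` is the positive pair of factor k in `explicit`."""
    batch, n_factors, _ = implicit.shape
    targets = torch.arange(batch)
    loss = implicit.new_zeros(())
    for k in range(n_factors):
        z_i = F.normalize(implicit[:, k], dim=-1)   # implicit factor k (from interactions)
        z_e = F.normalize(explicit[:, k], dim=-1)   # explicit factor k (from the KG)
        logits = z_i @ z_e.t() / temperature        # in-batch negatives off the diagonal
        loss = loss + F.cross_entropy(logits, targets)
    return loss / n_factors

# Toy usage: 16 users/items, 4 disentangled factors, 32 dimensions each.
impl_repr = torch.randn(16, 4, 32, requires_grad=True)
expl_repr = torch.randn(16, 4, 32)
factorwise_mi_alignment(impl_repr, expl_repr).backward()
```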


2021 ◽  
Vol 2132 (1) ◽  
pp. 012035
Author(s):  
Wujun Tao ◽  
Yu Ye ◽  
Bailin Feng

Abstract There is a growing body of literature that recognizes the importance of network embedding. It intends to encode the graph structure information into a low-dimensional vector for each node in the graph, which benefits the downstream tasks. Most recent works focus on supervised learning, but they are usually not feasible on real-world datasets owing to the high cost of obtaining labels. To address this issue, we design a new unsupervised attributed network embedding method, deep attributed network embedding by mutual information maximization (DMIM). Our method focuses on maximizing the mutual information between the hidden representations of the global topological structure and the node attributes, which allows us to obtain node embeddings without manual labeling. To illustrate the effectiveness of our method, we carry out the node classification task using the learned node embeddings. Compared with state-of-the-art unsupervised methods, our method achieves superior results on various datasets.
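For readers unfamiliar with how such structure-attribute mutual information maximization is commonly realized, the sketch below uses a Deep-Graph-Infomax-style bilinear discriminator that scores whether per-node attribute representations match a global topology summary, contrasting real pairs against corrupted ones. The class and function names are illustrative assumptions, not the DMIM implementation.

```python
# Hedged sketch (illustrative, not the DMIM code): a DGI-style bilinear discriminator
# scoring agreement between per-node attribute representations and a global topology
# summary; the BCE objective below is a standard lower bound on their mutual information.
import torch
import torch.nn as nn
import torch.nn.functional as F

class BilinearMIDiscriminator(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.bilinear = nn.Bilinear(dim, dim, 1)

    def forward(self, node_repr, graph_summary):
        # node_repr: (n_nodes, dim); graph_summary: (dim,), broadcast to every node.
        s = graph_summary.repeat(node_repr.size(0), 1)
        return self.bilinear(node_repr, s).squeeze(-1)   # higher score = likely real pair

def mi_objective(disc, attr_repr, corrupted_attr, struct_summary):
    """Real (attribute, structure-summary) pairs vs. pairs built from shuffled attributes."""
    pos = disc(attr_repr, struct_summary)
    neg = disc(corrupted_attr, struct_summary)
    labels = torch.cat([torch.ones_like(pos), torch.zeros_like(neg)])
    return F.binary_cross_entropy_with_logits(torch.cat([pos, neg]), labels)

# Toy usage: 100 nodes, 64-dim representations; corruption = shuffling attribute rows.
attrs = torch.randn(100, 64)
summary = torch.tanh(attrs.mean(dim=0))   # toy stand-in for a readout of the structure encoder
disc = BilinearMIDiscriminator(64)
loss = mi_objective(disc, attrs, attrs[torch.randperm(100)], summary)
loss.backward()
```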


Author(s):  
Zijing Ou ◽  
Qinliang Su ◽  
Jianxing Yu ◽  
Ruihui Zhao ◽  
Yefeng Zheng ◽  
...  
