Hierarchical Hyperedge Embedding-Based Representation Learning for Group Recommendation

2022
Vol 40 (1)
pp. 1-27
Author(s):  
Lei Guo
Hongzhi Yin
Tong Chen
Xiangliang Zhang
Kai Zheng

Group recommendation aims to recommend items to a group of users. In this work, we study group recommendation in a particular scenario, namely occasional group recommendation, where groups are formed ad hoc and users may have just constituted a group for the first time; that is, the historical group-item interaction records are highly limited. Most state-of-the-art works address this challenge by aggregating group members' personal preferences to learn the group representation. However, representation learning for a group is far more complex than the aggregation or fusion of group member representations, as personal preferences and group preferences may lie in different spaces and may even be orthogonal. In addition, the learned user representations are inaccurate due to the sparsity of users' interaction data. Moreover, the group similarity in terms of common group members has been overlooked, even though it has great potential to improve group representation learning. In this work, we focus on addressing the aforementioned challenges in the group representation learning task and devise a hierarchical hyperedge embedding-based group recommender, namely HyperGroup. Specifically, we propose to leverage user-user interactions to alleviate the sparsity of user-item interactions, and design a graph neural network-based representation learning module that enhances the learning of individuals' preferences from their friends' preferences, which provides a solid foundation for learning groups' preferences. To exploit group similarity (i.e., overlapping relationships among groups) and learn a more accurate group representation from highly limited group-item interactions, we connect all groups as a network of overlapping sets (a.k.a. a hypergraph) and treat group preference learning as embedding hyperedges (i.e., user sets/groups) in this hypergraph, for which an inductive hyperedge embedding method is proposed. To further enhance group-level preference modeling, we develop a joint training strategy that learns both user-item and group-item interactions in the same process. We conduct extensive experiments on two real-world datasets, and the results demonstrate the superiority of the proposed HyperGroup over state-of-the-art baselines.
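The pipeline described above can be summarized in a short sketch: a GNN layer over the user-user (friendship) graph refines member embeddings, each group (hyperedge) embedding is aggregated from its own members and from overlapping neighbour groups, and user-item and group-item objectives are optimized jointly. The PyTorch code below is a minimal illustration under these assumptions, not the authors' HyperGroup implementation; all class and function names are hypothetical, and negative sampling is omitted.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HyperGroupSketch(nn.Module):
    def __init__(self, n_users, n_items, dim=64):
        super().__init__()
        self.user_emb = nn.Embedding(n_users, dim)
        self.item_emb = nn.Embedding(n_items, dim)
        self.gnn_lin = nn.Linear(dim, dim)        # one user-user GNN layer
        self.group_lin = nn.Linear(2 * dim, dim)  # fuse members + neighbour groups

    def user_layer(self, adj):
        # adj: (n_users, n_users) normalized user-user (friendship) adjacency
        h = self.user_emb.weight
        return F.relu(self.gnn_lin(adj @ h)) + h  # friend-enhanced user embeddings

    def group_embed(self, h_user, members, neighbour_members):
        # members[i]: LongTensor of user indices in group i
        # neighbour_members[i]: list of LongTensors, one per overlapping group
        own = torch.stack([h_user[m].mean(0) for m in members])
        nbr = torch.stack([torch.stack([h_user[m].mean(0) for m in nbrs]).mean(0)
                           for nbrs in neighbour_members])
        return torch.tanh(self.group_lin(torch.cat([own, nbr], dim=-1)))

    def joint_loss(self, h_user, g_emb, users, user_items, group_items):
        # positive interactions only; negative sampling is omitted for brevity
        s_u = (h_user[users] * self.item_emb(user_items)).sum(-1)
        s_g = (g_emb * self.item_emb(group_items)).sum(-1)
        return (F.binary_cross_entropy_with_logits(s_u, torch.ones_like(s_u)) +
                F.binary_cross_entropy_with_logits(s_g, torch.ones_like(s_g)))
```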

Author(s):  
Jing Huang ◽  
Jie Yang

The hypergraph, an expressive structure that can flexibly model higher-order correlations among entities, has recently attracted increasing attention from various research domains. Despite the success of Graph Neural Networks (GNNs) for graph representation learning, how to adapt powerful GNN variants directly to hypergraphs remains a challenging problem. In this paper, we propose UniGNN, a unified framework for interpreting the message-passing process in graph and hypergraph neural networks, which can generalize standard GNN models to hypergraphs. In this framework, carefully designed architectures aimed at deepening GNNs can also be incorporated into hypergraphs with minimal effort. Extensive experiments on multiple real-world datasets demonstrate the effectiveness of UniGNN, which outperforms state-of-the-art approaches by a large margin. On the DBLP dataset in particular, we increase the accuracy from 77.4% to 88.8% in the semi-supervised hypernode classification task. We further prove that the proposed message-passing-based UniGNN models are at most as powerful as the 1-dimensional Generalized Weisfeiler-Leman (1-GWL) algorithm in terms of distinguishing non-isomorphic hypergraphs. Our code is available at https://github.com/OneForward/UniGNN.
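The unified message passing referred to above can be read as a two-stage aggregation: node features are first aggregated into hyperedge features, then each node is updated from its incident hyperedges using an arbitrary GNN-style update. The snippet below is a hedged sketch of that idea with mean aggregators in both stages; it is not taken from the UniGNN repository, and the function name and argument layout are assumptions.

```python
import torch

def unignn_layer(x, hyperedges, weight):
    # x: (N, d) node features; hyperedges: list of LongTensors of node indices
    # weight: (d, d_out) learnable matrix supplied by the caller
    e_feats = torch.stack([x[e].mean(0) for e in hyperedges])  # stage 1: node -> hyperedge
    agg = torch.zeros(x.size(0), e_feats.size(1))
    deg = torch.zeros(x.size(0), 1)
    for i, e in enumerate(hyperedges):                         # stage 2: hyperedge -> node
        agg[e] += e_feats[i]
        deg[e] += 1.0
    return torch.relu((agg / deg.clamp(min=1)) @ weight)       # GNN-style node update
```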


Symmetry
2019
Vol 11 (9)
pp. 1149
Author(s):  
Thapana Boonchoo
Xiang Ao
Qing He

With the proliferation of trajectory data produced by advanced GPS-enabled devices, trajectories are growing in complexity and beginning to incorporate additional attributes beyond the coordinates alone. As a consequence, it becomes possible to define the similarity between two attribute-aware trajectories. However, most existing trajectory similarity approaches focus only on location-based proximity and fail to capture the semantic similarities encompassed by these additional asymmetric attributes (aspects) of trajectories. In this paper, we propose multi-aspect embedding for attribute-aware trajectories (MAEAT), a representation learning approach for trajectories that simultaneously models their similarities according to multiple aspects. MAEAT is built upon a sentence embedding algorithm and directly learns whole-trajectory embeddings by predicting the context aspect tokens of a given trajectory. Two kinds of token generation methods are proposed to extract multiple aspects from the raw trajectories, and a regularization term is devised to control the relative importance of the aspects. Extensive experiments on benchmark and real-world datasets show the effectiveness and efficiency of the proposed MAEAT compared to state-of-the-art and baseline methods. The resulting embeddings can well support representative downstream trajectory mining and management tasks, and the algorithm outperforms the compared methods in execution time by at least two orders of magnitude.
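Because MAEAT is described as building on a sentence-embedding algorithm, its core training signal resembles PV-DBOW: a per-trajectory vector is optimized to predict the aspect tokens extracted from that trajectory. The sketch below illustrates only that signal under this assumption; the token generation methods and the aspect-importance regularizer from the paper are omitted, and all names are hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TrajectoryEmbedder(nn.Module):
    def __init__(self, n_traj, vocab_size, dim=128):
        super().__init__()
        self.traj = nn.Embedding(n_traj, dim)     # one vector per whole trajectory
        self.out = nn.Linear(dim, vocab_size)     # softmax over aspect tokens

    def forward(self, traj_ids):
        return self.out(self.traj(traj_ids))      # logits over the token vocabulary

# toy usage: each trajectory embedding is pushed to predict its own aspect tokens
model = TrajectoryEmbedder(n_traj=1000, vocab_size=5000)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
traj_ids = torch.randint(0, 1000, (32,))          # a batch of trajectory ids
tokens = torch.randint(0, 5000, (32,))            # one sampled aspect token each
loss = F.cross_entropy(model(traj_ids), tokens)
loss.backward()
opt.step()
```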


Author(s):  
Zhu Sun
Jie Yang
Jie Zhang
Alessandro Bozzon
Yu Chen
...  

Representation learning (RL) has recently proven effective at capturing local item relationships by modeling item co-occurrence in individual users' interaction records. However, the value of RL for recommendation has not reached its full potential due to two major drawbacks: 1) recommendation is modeled as a rating prediction problem, when it is essentially a personalized ranking one; 2) the multi-level organization of items, which encodes fine-grained item relationships, is neglected. We design a unified Bayesian framework, MRLR, that learns user and item embeddings from a multi-level item organization, thus benefiting from RL while achieving the goal of personalized ranking. Extensive validation on real-world datasets shows that MRLR consistently outperforms state-of-the-art algorithms.
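The two ingredients named above, a personalized-ranking objective and a multi-level item organization, can be illustrated with a BPR-style loss over item embeddings enriched by a coarser-level (e.g. category) embedding. The following sketch is an illustration under those assumptions, not the MRLR model itself; all names are placeholders.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiLevelBPR(nn.Module):
    def __init__(self, n_users, n_items, n_cats, dim=64):
        super().__init__()
        self.user = nn.Embedding(n_users, dim)
        self.item = nn.Embedding(n_items, dim)
        self.cat = nn.Embedding(n_cats, dim)       # one coarser item level

    def item_vec(self, items, cats):
        return self.item(items) + self.cat(cats)   # fuse fine- and coarse-level views

    def bpr_loss(self, users, pos, pos_cat, neg, neg_cat):
        # pairwise ranking: observed item should score above a sampled negative
        pu = self.user(users)
        diff = (pu * self.item_vec(pos, pos_cat)).sum(-1) - \
               (pu * self.item_vec(neg, neg_cat)).sum(-1)
        return -F.logsigmoid(diff).mean()          # ranking, not rating prediction
```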


Author(s):  
Zejian Li
Yongchuan Tang
Yongxing He

Learning disentangled representations of the interpretable generative factors of data is one of the foundations for allowing artificial intelligence to think like people. In this paper, we propose an analogical training strategy for unsupervised disentangled representation learning in generative models. Analogy is a typical cognitive process, and our strategy is based on the observation that sample pairs differing in only one specific generative factor exhibit the same analogical relation. Thus, the generator is trained to generate sample pairs from which a designed classifier can identify the underlying analogical relation. In addition, we propose a disentanglement metric called the subspace score, which is inspired by subspace learning methods and does not require supervised information. Experiments show that our training strategy allows generative models to find the disentangled factors, and that our method achieves competitive performance compared with state-of-the-art methods.
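One way to picture the analogical training signal is the following: two latent codes are drawn that differ in exactly one coordinate, the generator produces the corresponding sample pair, and a classifier is trained to recognize which factor was changed. The code below is a minimal sketch under the assumption that each latent coordinate plays the role of one candidate factor; `generator` and `classifier` are placeholder modules, not the paper's architecture.

```python
import torch
import torch.nn.functional as F

def analogical_step(generator, classifier, batch_size, n_factors, z_dim):
    # assumes n_factors <= z_dim and that coordinate i of z stands for factor i
    z1 = torch.randn(batch_size, z_dim)
    factor = torch.randint(0, n_factors, (batch_size,))             # factor that differs
    z2 = z1.clone()
    z2[torch.arange(batch_size), factor] = torch.randn(batch_size)  # resample that one coordinate
    x1, x2 = generator(z1), generator(z2)                           # analogical sample pair
    logits = classifier(x1, x2)                                     # predict which factor changed
    return F.cross_entropy(logits, factor)
```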


Author(s):  
Ming Jin
Yizhen Zheng
Yuan-Fang Li
Chen Gong
Chuan Zhou
...  

Graph representation learning plays a vital role in processing graph-structured data. However, prior work on graph representation learning relies heavily on labeling information. To overcome this problem, inspired by the recent success of graph contrastive learning and Siamese networks in visual representation learning, we propose a novel self-supervised approach to learn node representations by enhancing Siamese self-distillation with multi-scale contrastive learning. Specifically, we first generate two augmented views of the input graph based on local and global perspectives. Then, we employ two objectives, cross-view and cross-network contrastiveness, to maximize the agreement between node representations across different views and networks. To demonstrate the effectiveness of our approach, we perform empirical experiments on five real-world datasets. Our method not only achieves new state-of-the-art results but also surpasses some semi-supervised counterparts by large margins. Code is made available at https://github.com/GRAND-Lab/MERIT.
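The two objectives mentioned above can be sketched with a standard normalized-temperature cross-entropy term applied twice: once between the two augmented views inside the online network (cross-view) and once between matching views of the online and target networks (cross-network). This is a hedged illustration of that combination, not the MERIT implementation; the GNN encoders, graph augmentations, and momentum update of the target network are assumed to exist elsewhere.

```python
import torch
import torch.nn.functional as F

def nt_xent(a, b, tau=0.5):
    # a, b: (B, d) node representations; matching rows are positive pairs
    a, b = F.normalize(a, dim=-1), F.normalize(b, dim=-1)
    logits = a @ b.t() / tau                       # positives lie on the diagonal
    return F.cross_entropy(logits, torch.arange(a.size(0)))

def merit_style_loss(online_v1, online_v2, target_v1, target_v2):
    cross_view = nt_xent(online_v1, online_v2)     # same network, different views
    cross_net = nt_xent(online_v1, target_v1) + \
                nt_xent(online_v2, target_v2)      # online network vs. target network
    return cross_view + cross_net
```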


Author(s):  
Hu Wang
Guansong Pang
Chunhua Shen
Congbo Ma

Deep neural networks have achieved great success in a broad range of tasks due to their remarkable capability to learn semantically rich features from high-dimensional data. However, they often require large-scale labelled data to learn such features successfully, which significantly hinders their adoption in unsupervised learning tasks, such as anomaly detection and clustering, and limits their application in critical domains where obtaining massive labelled data is prohibitively expensive. To enable unsupervised learning in those domains, in this work we propose to learn features without using any labelled data by training neural networks to predict data distances in a randomly projected space. Random mapping is a theoretically proven approach that approximately preserves distances. To predict these distances well, the representation learner is optimised to learn the genuine class structures that are implicitly embedded in the randomly projected space. Empirical results on 19 real-world datasets show that our learned representations substantially outperform several state-of-the-art methods on both anomaly detection and clustering tasks. Code is available at https://git.io/RDP.
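A compact way to see the objective: a fixed random projection defines target pairwise distances, and the feature network is trained so that distances in its representation space match them. The snippet below is a simplified sketch under that reading, not the released RDP code; the encoder architecture and the squared-error loss are assumptions.

```python
import torch
import torch.nn as nn

def rdp_style_loss(encoder, x, proj):
    # x: (B, d) raw inputs; proj: fixed random (d, k) matrix with no gradients
    with torch.no_grad():
        target = torch.cdist(x @ proj, x @ proj)   # distances in the random space
    z = encoder(x)                                 # learned representations
    return ((torch.cdist(z, z) - target) ** 2).mean()

# toy usage on synthetic data
encoder = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 16))
x = torch.randn(128, 20)
proj = torch.randn(20, 16) / 16 ** 0.5             # random mapping, kept fixed
loss = rdp_style_loss(encoder, x, proj)
loss.backward()
```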


2021
Vol 145
pp. 74-80
Author(s):  
Peipei Wang
Lin Li
Ru Wang
Guandong Xu
Jianwei Zhang
