NANE: Attributed Network Embedding with Local and Global Information

Author(s):  
Jingjie Mo ◽  
Neng Gao ◽  
Yujing Zhou ◽  
Yang Pei ◽  
Jiong Wang

2021 ◽  
Vol 11 (5) ◽  
pp. 2371
Author(s):  
Junjian Zhan ◽  
Feng Li ◽  
Yang Wang ◽  
Daoyu Lin ◽  
Guangluan Xu

As most networks come with some content in each node, attributed network embedding has attracted much research interest. Most existing attributed network embedding methods aim to learn a fixed representation for each node that encodes its local proximity. However, these methods usually neglect both the global information between nodes distant from each other and the distribution of the latent codes. We propose the Structural Adversarial Variational Graph Auto-Encoder (SAVGAE), a novel framework which encodes the network structure and node content into low-dimensional embeddings. On one hand, our model captures the local proximity, as well as proximities between nodes at any distance, by exploiting a high-order proximity indicator named Rooted PageRank. On the other hand, our method learns the data distribution of each node representation while circumventing, through adversarial training, the side effect its sampling process causes on learning a robust embedding. On benchmark datasets, we demonstrate that our method performs competitively compared with state-of-the-art models.
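The Rooted PageRank indicator mentioned above can be sketched as a personalized random walk that restarts at a root node with some probability; the stationary distribution then scores every node's proximity to that root. A minimal NumPy sketch follows — the restart probability, tolerance, and dense-matrix representation are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np

def rooted_pagerank(A, root, beta=0.85, tol=1e-10, max_iter=1000):
    """Rooted (personalized) PageRank: stationary distribution of a random
    walk that, at each step, continues along an edge with probability `beta`
    and jumps back to `root` with probability 1 - beta.
    A: dense adjacency matrix (no dangling nodes assumed in this sketch)."""
    n = A.shape[0]
    deg = A.sum(axis=1, keepdims=True)
    # Row-normalize the adjacency matrix into transition probabilities.
    P = np.divide(A, deg, out=np.zeros_like(A), where=deg > 0)
    e = np.zeros(n)
    e[root] = 1.0                      # restart distribution concentrated on the root
    r = np.full(n, 1.0 / n)            # uniform initial guess
    for _ in range(max_iter):
        r_next = beta * (P.T @ r) + (1 - beta) * e
        if np.abs(r_next - r).sum() < tol:
            break
        r = r_next
    return r_next

# Toy 4-node path graph rooted at node 0: scores reflect proximity to the root.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
scores = rooted_pagerank(A, root=0)
```

Because the restart repeatedly re-injects probability mass at the root, the resulting scores decay with graph distance, which is what makes this a usable high-order proximity signal between nodes that are far apart.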


2021 ◽  
Vol 232 ◽  
pp. 107448
Author(s):  
Darong Lai ◽  
Sheng Wang ◽  
Zhihong Chong ◽  
Weiwei Wu ◽  
Christine Nardini

2018 ◽  
Vol 19 (1) ◽  
Author(s):  
Bo Xu ◽  
Kun Li ◽  
Wei Zheng ◽  
Xiaoxia Liu ◽  
Yijia Zhang ◽  
...  

Author(s):  
Juan-Hui Li ◽  
Chang-Dong Wang ◽  
Ling Huang ◽  
Dong Huang ◽  
Jian-Huang Lai ◽  
...  

2022 ◽  
Vol 40 (3) ◽  
pp. 1-36
Author(s):  
Jinyuan Fang ◽  
Shangsong Liang ◽  
Zaiqiao Meng ◽  
Maarten De Rijke

Network-based information has been widely explored and exploited in the information retrieval literature. Attributed networks, consisting of nodes, edges, and attributes describing properties of nodes, are a basic type of network-based data, and are especially useful for many applications. Examples include user profiling in social networks and item recommendation in user-item purchase networks. Learning useful and expressive representations of entities in attributed networks can provide more effective building blocks for downstream network-based tasks such as link prediction and attribute inference. In practice, the input features of attributed networks are normalized as unit directional vectors. However, most network embedding techniques ignore the spherical nature of the inputs and focus on learning representations in a Gaussian or Euclidean space, which, we hypothesize, might lead to less effective representations. To obtain more effective representations of attributed networks, we investigate the problem of mapping an attributed network with unit-normalized directional features into a non-Gaussian and non-Euclidean space. Specifically, we propose a hyperspherical variational co-embedding for attributed networks (HCAN), which is based on generalized variational auto-encoders for heterogeneous data with multiple types of entities. HCAN jointly learns latent embeddings for both nodes and attributes in a unified hyperspherical space such that the affinities between nodes and attributes can be captured effectively. We argue that this is a crucial feature in many real-world applications of attributed networks. Previous Gaussian network embedding algorithms break the assumption of an uninformative prior, which leads to unstable results and poor performance. In contrast, HCAN embeds nodes and attributes as von Mises-Fisher distributions, and allows one to capture the uncertainty of the inferred representations. Experimental results on eight datasets show that HCAN yields better performance in a number of applications compared with nine state-of-the-art baselines.
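The two ingredients named in the abstract — unit-normalized directional features and von Mises-Fisher (vMF) distributions on the hypersphere — can be illustrated with a small stdlib-only sketch. HCAN itself learns these distributions inside a variational auto-encoder; the snippet below only shows unit normalization and the vMF log-density, restricted to three dimensions where the normalizing constant has the simple closed form C_3(κ) = κ / (4π sinh κ). The function names are illustrative, not from the paper.

```python
import math

def unit_normalize(v, eps=1e-12):
    """Project a feature vector onto the unit hypersphere (directional feature)."""
    n = math.sqrt(sum(x * x for x in v)) or eps
    return [x / n for x in v]

def vmf_log_pdf_3d(x, mu, kappa):
    """Log density of a von Mises-Fisher distribution on the 2-sphere S^2.
    x, mu: unit vectors in R^3; kappa > 0 is the concentration, which plays
    the role of (inverse) uncertainty for an embedded node or attribute."""
    dot = sum(a * b for a, b in zip(x, mu))
    log_c = math.log(kappa) - math.log(4 * math.pi * math.sinh(kappa))
    return log_c + kappa * dot

# A node embedding (mean direction mu) assigns higher likelihood to
# attribute vectors pointing in a similar direction on the sphere.
mu = unit_normalize([1.0, 1.0, 0.0])
x_near = unit_normalize([0.9, 1.1, 0.1])
x_far = unit_normalize([-1.0, -1.0, 0.0])
```

The density depends on the inputs only through the dot product x·mu, i.e. cosine affinity on the sphere, which is exactly the kind of node-attribute affinity a hyperspherical co-embedding is meant to capture; larger κ means a more concentrated, less uncertain representation.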


2020 ◽  
Vol 409 ◽  
pp. 231-243
Author(s):  
Chengbin Hou ◽  
Shan He ◽  
Ke Tang
