Deep attributed network embedding by mutual information maximization

2021 ◽  
Vol 2132 (1) ◽  
pp. 012035
Author(s):  
Wujun Tao ◽  
Yu Ye ◽  
Bailin Feng

There is a growing body of literature that recognizes the importance of network embedding. It aims to encode the graph structure information into a low-dimensional vector for each node in the graph, which benefits downstream tasks. Most recent works focus on supervised learning, but they are usually not feasible on real-world datasets owing to the high cost of obtaining labels. To address this issue, we design a new unsupervised attributed network embedding method, deep attributed network embedding by mutual information maximization (DMIM). Our method focuses on maximizing the mutual information between the hidden representations of the global topological structure and the node attributes, which allows us to obtain node embeddings without manual labeling. To illustrate the effectiveness of our method, we carry out the node classification task using the learned node embeddings. Compared with state-of-the-art unsupervised methods, our method achieves superior results on various datasets.
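A minimal sketch of the core idea, maximizing mutual information between a structural encoding and an attribute encoding of the same node via a discriminator that contrasts matched against shuffled pairs. The linear encoders, the bilinear critic, and the shuffling-based negatives are illustrative assumptions, not the DMIM architecture itself.

```python
import torch
import torch.nn as nn

class MISketch(nn.Module):
    def __init__(self, struct_dim, attr_dim, emb_dim):
        super().__init__()
        self.struct_enc = nn.Linear(struct_dim, emb_dim)   # encodes global topology features
        self.attr_enc = nn.Linear(attr_dim, emb_dim)       # encodes node attributes
        self.critic = nn.Bilinear(emb_dim, emb_dim, 1)     # scores (structure, attribute) pairs

    def forward(self, struct_feat, attr_feat):
        h_s = torch.relu(self.struct_enc(struct_feat))
        h_a = torch.relu(self.attr_enc(attr_feat))
        # Positive pairs: matching rows; negatives: attributes shuffled across nodes.
        neg = h_a[torch.randperm(h_a.size(0))]
        pos_score = self.critic(h_s, h_a)
        neg_score = self.critic(h_s, neg)
        labels = torch.cat([torch.ones_like(pos_score), torch.zeros_like(neg_score)])
        loss = nn.functional.binary_cross_entropy_with_logits(
            torch.cat([pos_score, neg_score]), labels)
        return loss, h_s
```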

Symmetry ◽  
2021 ◽  
Vol 13 (1) ◽  
pp. 115
Author(s):  
Yongjun Jing ◽  
Hao Wang ◽  
Kun Shao ◽  
Xing Huo

Trust prediction is essential to enhancing reliability and reducing risk from unreliable nodes, especially for online applications in open network environments. An essential step in trust prediction is to accurately measure the relation between the interacting entities. However, most existing methods infer the trust relation between interacting entities by modeling the similarity between nodes on a graph, ignoring the semantic relation and the influence of negative links (e.g., distrust relations). In this paper, we propose a relation representation learning method via signed graph mutual information maximization (SGMIM). In SGMIM, we incorporate a translation model and positive point-wise mutual information to enhance the relation representations, and adopt mutual information maximization to align the entity and relation semantic spaces. Moreover, we further develop a sign prediction model for making accurate trust predictions. We conduct link sign prediction in trust networks based on the learned relation representations. Extensive experimental results on four real-world datasets show that SGMIM significantly outperforms state-of-the-art baseline methods on the trust prediction task.
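A minimal sketch of the translation-model component described above: trust and distrust edges are scored TransE-style against per-sign relation vectors with a margin ranking loss. The two-relation setup and the loss are illustrative assumptions, not the full SGMIM model (the point-wise mutual information and mutual-information-maximization terms are omitted).

```python
import torch
import torch.nn as nn

class SignedTransSketch(nn.Module):
    def __init__(self, num_nodes, emb_dim):
        super().__init__()
        self.node_emb = nn.Embedding(num_nodes, emb_dim)
        # One relation vector per edge sign: index 0 = trust, 1 = distrust.
        self.rel_emb = nn.Embedding(2, emb_dim)

    def score(self, head, rel, tail):
        # TransE-style score: a smaller distance means a more plausible triple.
        h = self.node_emb(head)
        r = self.rel_emb(rel)
        t = self.node_emb(tail)
        return (h + r - t).norm(p=2, dim=-1)

    def loss(self, head, rel, tail, neg_tail, margin=1.0):
        pos = self.score(head, rel, tail)
        neg = self.score(head, rel, neg_tail)
        return torch.relu(margin + pos - neg).mean()
```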


Author(s):  
Yang Fang ◽  
Xiang Zhao ◽  
Zhen Tan

Network Embedding (NE) is an important method to learn the representations of a network in a low-dimensional space. Conventional NE models focus on capturing the structure and semantic information of vertices while neglecting such information for edges. In this work, we propose a novel NE model named BimoNet to capture both the structure and semantic information of edges. BimoNet is composed of two parts, i.e., the bi-mode embedding part and the deep neural network part. In the bi-mode embedding part, the first mode, named add-mode, is used to express the entity-shared features of edges, and the second mode, named subtract-mode, is employed to represent the entity-specific features of edges. These features reflect the semantic information. In the deep neural network part, we first regard the edges in a network as nodes and the vertices as links, which does not change the overall structure of the whole network. We then take the nodes' adjacency matrix as the input of the deep neural network, as it can obtain similar representations for nodes with similar structure. Afterwards, by jointly optimizing the objective function of these two parts, BimoNet preserves both the semantic and structure information of edges. In experiments, we evaluate BimoNet on three real-world datasets on the task of relation extraction, and BimoNet is demonstrated to outperform state-of-the-art baseline models consistently and significantly.
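A minimal sketch of the bi-mode edge representation: an add-mode for entity-shared features and a subtract-mode for entity-specific features, realized here as an element-wise sum and difference of the endpoint embeddings. This concrete realization is an illustrative assumption, not BimoNet's exact formulation.

```python
import torch
import torch.nn as nn

class BiModeEdgeSketch(nn.Module):
    def __init__(self, num_nodes, emb_dim):
        super().__init__()
        self.node_emb = nn.Embedding(num_nodes, emb_dim)

    def forward(self, head, tail):
        h, t = self.node_emb(head), self.node_emb(tail)
        add_mode = h + t        # entity-shared features of the edge
        sub_mode = h - t        # entity-specific features of the edge
        # Concatenate both modes as the semantic edge representation.
        return torch.cat([add_mode, sub_mode], dim=-1)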


Author(s):  
Wei Wu ◽  
Bin Li ◽  
Ling Chen ◽  
Chengqi Zhang

Attributed network embedding aims to learn a low-dimensional representation for each node of a network, considering both the attributes and the structure information of the node. However, learning-based methods usually involve a substantial time cost, which makes them impractical without the help of a powerful workhorse. In this paper, we propose a simple yet effective algorithm, named NetHash, to solve this problem with only moderate computing capacity. NetHash employs the randomized hashing technique to encode shallow trees, each of which is rooted at a node of the network. The main idea is to efficiently encode both the attributes and the structure information of each node by recursively sketching the corresponding rooted tree from bottom (i.e., the predefined highest-order neighboring nodes) to top (i.e., the root node), and in particular, to preserve as much information close to the root node as possible. Our extensive experimental results show that the proposed algorithm, which does not need learning, runs significantly faster than state-of-the-art learning-based network embedding methods while achieving competitive or even better accuracy.
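A minimal sketch of recursively sketching a shallow rooted tree from the leaves up to the root with MinHash, in the spirit of NetHash. The hash construction and the way child sketches are folded into the parent's item set are illustrative assumptions, not the authors' exact encoding.

```python
import hashlib

def minhash(items, num_perm=16):
    """One MinHash value per seed over a set of string items."""
    sig = []
    for seed in range(num_perm):
        sig.append(min(
            int(hashlib.md5(f"{seed}:{x}".encode()).hexdigest(), 16)
            for x in items))
    return sig

def sketch_node(graph, attrs, root, depth=2, num_perm=16):
    """Recursively sketch the tree rooted at `root`, from the leaves to the root."""
    items = set(attrs[root])
    if depth > 0:
        for nb in graph[root]:
            child_sig = sketch_node(graph, attrs, nb, depth - 1, num_perm)
            # Fold the child's sketch into the parent's item set.
            items.update(f"h{i}:{v}" for i, v in enumerate(child_sig))
    return minhash(items, num_perm)

# Example: a tiny attributed graph as adjacency lists and attribute lists.
graph = {0: [1, 2], 1: [0], 2: [0]}
attrs = {0: ["a", "b"], 1: ["b"], 2: ["c"]}
print(sketch_node(graph, attrs, root=0))
```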


2020 ◽  
Vol 34 (04) ◽  
pp. 6949-6956
Author(s):  
Sheng Zhou ◽  
Xin Wang ◽  
Jiajun Bu ◽  
Martin Ester ◽  
Pinggang Yu ◽  
...  

Network embedding plays a crucial role in network analysis by providing effective representations for a variety of learning tasks. Existing attributed network embedding methods mainly focus on preserving the observed node attributes and network topology in the latent embedding space, under the assumption that nodes connected by edges will share similar attributes. However, our empirical analysis of real-world datasets shows that there exist both commonality and individuality between node attributes and network topology. On the one hand, similar nodes are expected to share similar attributes and to be connected by edges (commonality). On the other hand, each information source may maintain individual differences as well (individuality). Simultaneously capturing commonality and individuality is very challenging due to their exclusive nature, and existing work fails to do so. In this paper, we propose a deep generative embedding (DGE) framework which simultaneously captures the commonality and individuality between network topology and node attributes in a generative process. Stochastic gradient variational Bayes (SGVB) optimization is employed to infer the model parameters as well as the node embeddings. Extensive experiments on four real-world datasets show the superiority of the proposed DGE framework in various tasks including node classification and link prediction.
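A minimal sketch of the SGVB inference step used by generative embedding frameworks such as DGE: an encoder outputs a Gaussian posterior per node, embeddings are drawn with the reparameterization trick, and a KL term regularizes them toward the prior. The encoder layout is an illustrative assumption, not the DGE model itself.

```python
import torch
import torch.nn as nn

class SGVBEncoderSketch(nn.Module):
    def __init__(self, in_dim, emb_dim):
        super().__init__()
        self.mu = nn.Linear(in_dim, emb_dim)
        self.logvar = nn.Linear(in_dim, emb_dim)

    def forward(self, x):
        mu, logvar = self.mu(x), self.logvar(x)
        # Reparameterization: z = mu + sigma * eps, with eps ~ N(0, I).
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
        # KL(q(z|x) || N(0, I)) regularizes the node embeddings toward the prior.
        kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=-1)
        return z, kl.mean()
```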


Author(s):  
Dongxiao He ◽  
Lu Zhai ◽  
Zhigang Li ◽  
Di Jin ◽  
Liang Yang ◽  
...  

Network embedding, which learns a low-dimensional representation of the nodes in a network, has been used in many network analysis tasks. Some network embedding methods, including those based on generative adversarial networks (GANs), a promising deep learning technique, have been proposed recently. Existing GAN-based methods typically use GAN to learn a Gaussian distribution as a prior for network embedding. However, this strategy makes it difficult to distinguish the node representations from the Gaussian distribution. Moreover, it does not make full use of the essential advantage of GAN, which is to adversarially learn the representation mechanism rather than the representation itself, leading to compromised performance. To address this problem, we propose to apply the adversarial idea to the representation mechanism, i.e., the encoding mechanism under the autoencoder framework. Specifically, we use the mutual information between node attributes and embeddings as a reasonable alternative to this encoding mechanism, which is much easier to track. Additionally, we introduce another mapping mechanism, based on GAN, as a competitor into the adversarial learning system. A range of empirical results demonstrate the effectiveness of the proposed approach.
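A minimal sketch of moving the adversarial game onto the encoding mechanism: a discriminator tries to tell (attribute, embedding) pairs produced by the autoencoder's encoder from pairs produced by a competing GAN-based mapping. All module shapes and the pairing scheme are illustrative assumptions, not the authors' architecture.

```python
import torch
import torch.nn as nn

attr_dim, emb_dim = 64, 16
encoder = nn.Sequential(nn.Linear(attr_dim, emb_dim))          # autoencoder's encoding mechanism
generator = nn.Sequential(nn.Linear(attr_dim, emb_dim))        # competing GAN-based mapping
discriminator = nn.Sequential(nn.Linear(attr_dim + emb_dim, 1))

x = torch.randn(32, attr_dim)                                  # toy batch of node attributes
real_pair = torch.cat([x, encoder(x)], dim=-1)
fake_pair = torch.cat([x, generator(x)], dim=-1)
# Discriminator loss: label encoder pairs as real, competitor pairs as fake.
d_loss = nn.functional.binary_cross_entropy_with_logits(
    torch.cat([discriminator(real_pair), discriminator(fake_pair)]),
    torch.cat([torch.ones(32, 1), torch.zeros(32, 1)]))
print(d_loss.item())
```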


Author(s):  
Hongchang Gao ◽  
Heng Huang

Network embedding has attracted a surge of attention in recent years. It aims to learn low-dimensional representations for the nodes in a network, which benefits downstream tasks such as node classification and link prediction. Most existing approaches learn node representations only from the topological structure, yet nodes are often associated with rich attributes in many real-world applications. Thus, it is important and necessary to learn node representations based on both the topological structure and node attributes. In this paper, we propose a novel deep attributed network embedding approach, which can capture the high non-linearity and preserve various proximities in both the topological structure and node attributes. At the same time, a novel strategy is proposed to guarantee that the learned node representations encode consistent and complementary information from the topological structure and node attributes. Extensive experiments on benchmark datasets have verified the effectiveness of the proposed approach.
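A minimal sketch of encoding topology and attributes with two encoders and adding a consistency term that ties the two views together, while concatenation retains their complementary parts. The linear encoders and the cosine-based consistency loss are illustrative assumptions, not the authors' architecture.

```python
import torch
import torch.nn as nn

class DualViewSketch(nn.Module):
    def __init__(self, topo_dim, attr_dim, emb_dim):
        super().__init__()
        self.topo_enc = nn.Linear(topo_dim, emb_dim)   # e.g., a row of a proximity matrix
        self.attr_enc = nn.Linear(attr_dim, emb_dim)   # node attribute vector

    def forward(self, topo, attr):
        h_t = torch.relu(self.topo_enc(topo))
        h_a = torch.relu(self.attr_enc(attr))
        # Consistency: embeddings from the two views should agree per node.
        consistency = 1 - nn.functional.cosine_similarity(h_t, h_a, dim=-1).mean()
        # Complementary information is kept by concatenating both views.
        return torch.cat([h_t, h_a], dim=-1), consistency
```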

