Enhanced Unsupervised Graph Embedding via Hierarchical Graph Convolution Network

2020 ◽  
Vol 2020 ◽  
pp. 1-9
Author(s):  
H. Zhang ◽  
J. J. Zhou ◽  
R. Li

Graph embedding aims to learn low-dimensional representations of the nodes in a network, and has recently attracted increasing attention in many graph-based tasks. The Graph Convolution Network (GCN) is a typical deep semi-supervised graph embedding model that can acquire node representations from complex networks. However, GCN usually requires a large amount of labeled data and additional expressive features during embedding learning, so it cannot be effectively applied to undirected graphs that carry only network structure information. In this paper, we propose a novel unsupervised graph embedding method via a hierarchical graph convolution network (HGCN). HGCN first builds initial node embeddings and pseudo-labels for the undirected graph, then uses GCNs to refine the node embeddings and update the labels, and finally combines the HGCN output representation with the initial embedding to obtain the graph embedding. Furthermore, we adapt the model to different undirected networks according to the number of node label types. Comprehensive experiments demonstrate that the proposed HGCN and HGCN∗ significantly enhance performance on the node classification task.
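The abstract does not spell out the propagation rule a GCN applies at each layer; a minimal NumPy sketch of the standard symmetrically normalized layer (the Kipf-Welling formulation, with made-up toy inputs) looks like:

```python
import numpy as np

def gcn_layer(A, H, W):
    """One GCN propagation step: H' = ReLU(D^-1/2 (A + I) D^-1/2 H W).

    A: (n, n) adjacency matrix, H: (n, d_in) node features,
    W: (d_in, d_out) weight matrix (trainable in a real model).
    """
    n = A.shape[0]
    A_hat = A + np.eye(n)                    # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))   # symmetric normalization
    return np.maximum(0, D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W)

# toy undirected path graph 0-1-2; one-hot features stand in for the
# "structure information only" setting the abstract targets
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
H = np.eye(3)
W = np.ones((3, 2))
Z = gcn_layer(A, H, W)
print(Z.shape)  # (3, 2)
```

Nodes 0 and 2 are structurally interchangeable in this toy graph, so the layer assigns them identical rows in `Z`.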

2020 ◽  
Vol 8 (2) ◽  
Author(s):  
Leo Torres ◽  
Kevin S Chan ◽  
Tina Eliassi-Rad

Abstract Graph embedding seeks to build a low-dimensional representation of a graph $G$. This low-dimensional representation is then used for various downstream tasks. One popular approach is Laplacian Eigenmaps (LE), which constructs a graph embedding based on the spectral properties of the Laplacian matrix of $G$. The intuition behind this and many other embedding techniques is that the embedding of a graph must respect node similarity: similar nodes must have embeddings that are close to one another. Here, we dispose of this distance-minimization assumption. Instead, we use the Laplacian matrix to find an embedding with geometric rather than spectral properties, by leveraging the so-called simplex geometry of $G$. We introduce a new approach, Geometric Laplacian Eigenmap Embedding, and demonstrate that it outperforms various other techniques (including LE) on the tasks of graph reconstruction and link prediction.
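As a concrete reference point for the baseline being improved upon, classic Laplacian Eigenmaps can be sketched in a few lines of NumPy (the toy graph and target dimension are illustrative; the paper's geometric approach departs from exactly this spectral construction):

```python
import numpy as np

def laplacian_eigenmaps(A, dim=2):
    """Classic LE: embed nodes using the eigenvectors of the unnormalized
    graph Laplacian L = D - A associated with the smallest nonzero
    eigenvalues (the constant eigenvector for eigenvalue 0 is skipped)."""
    D = np.diag(A.sum(axis=1))
    L = D - A
    vals, vecs = np.linalg.eigh(L)   # eigenvalues returned in ascending order
    return vecs[:, 1:dim + 1]

# 4-cycle graph: 0-1-2-3-0
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
Y = laplacian_eigenmaps(A, dim=2)
print(Y.shape)  # (4, 2)
```

Each embedding column sums to (numerically) zero because every retained eigenvector is orthogonal to the constant eigenvector.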


Author(s):  
Jing Qian ◽  
Gangmin Li ◽  
Katie Atkinson ◽  
Yong Yue

Knowledge graph embedding (KGE) projects the entities and relations of a knowledge graph (KG) into a low-dimensional vector space, and has made steady progress in recent years. Conventional KGE methods, especially translational distance-based models, are trained by discriminating positive samples from negative ones. Most KGs store only positive samples for space efficiency, so negative sampling plays a crucial role in encoding the triples of a KG. The quality of the generated negative samples has a direct impact on the performance of the learnt knowledge representation in a myriad of downstream tasks, such as recommendation, link prediction and node classification. We summarize current negative sampling approaches in KGE into three categories: static distribution-based, dynamic distribution-based and custom cluster-based. Based on this categorization, we discuss the most prevalent existing approaches and their characteristics. We hope this review can provide guidelines for new thinking about negative sampling in KGE.
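The static distribution-based category can be illustrated by its simplest member: uniform corruption of the head or tail of a positive triple, with rejection of corruptions that happen to be stored positives (the entity names and relation here are made-up toy data):

```python
import random

def uniform_negative_sample(triple, entities, positives, rng=random):
    """Static distribution-based negative sampling: replace the head or
    tail of (h, r, t) with a uniformly drawn entity, rejecting any
    corrupted triple that is actually a stored positive."""
    h, r, t = triple
    while True:
        e = rng.choice(entities)
        neg = (e, r, t) if rng.random() < 0.5 else (h, r, e)
        if neg not in positives:
            return neg

entities = ["alice", "bob", "carol", "dave"]
positives = {("alice", "knows", "bob"), ("bob", "knows", "carol")}
neg = uniform_negative_sample(("alice", "knows", "bob"), entities, positives)
print(neg)
```

Dynamic distribution-based methods replace the uniform draw with a distribution that adapts during training (e.g., favoring harder negatives), which this sketch deliberately omits.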


2020 ◽  
Vol 12 (11) ◽  
pp. 1738
Author(s):  
Xiayuan Huang ◽  
Xiangli Nie ◽  
Hong Qiao

Dimensionality reduction (DR) methods based on graph embedding are widely used for feature extraction. For these methods, the weighted graph plays a vital role in the DR process because it characterizes the data’s structure information, and the similarity measurement is a crucial factor in constructing a weighted graph. The Wishart distance between covariance matrices and the Euclidean distance between polarimetric features are two important similarity measurements for polarimetric synthetic aperture radar (PolSAR) image classification. To obtain satisfactory PolSAR image classification performance, this paper proposes a co-regularized graph embedding (CRGE) method that combines the two distances for PolSAR image feature extraction. Firstly, two weighted graphs are constructed based on the two distances to represent the data’s local structure information; specifically, neighbouring samples are sought within a local patch to decrease computation cost and exploit spatial information. Next, the DR model is constructed based on the two weighted graphs and co-regularization, which aims to minimize the dissimilarity between the low-dimensional features corresponding to the two weighted graphs. We employ two types of co-regularization and propose the corresponding algorithms. Ultimately, the obtained low-dimensional features are used for PolSAR image classification. Experiments on three PolSAR datasets show that co-regularized graph embedding enhances the performance of PolSAR image classification.
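The weighted-graph construction underlying such DR methods can be sketched generically as a kNN graph with heat-kernel weights; Euclidean distance stands in here for either similarity measurement (the Wishart distance would play the same role for covariance matrices), and the patch-based neighbour search is simplified to a plain kNN over all samples:

```python
import numpy as np

def knn_heat_graph(X, k=2, sigma=1.0):
    """Symmetric weighted kNN graph: W[i, j] = exp(-d(i, j)^2 / sigma^2)
    when j is among i's k nearest neighbours, else 0."""
    n = X.shape[0]
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    W = np.zeros((n, n))
    for i in range(n):
        nbrs = np.argsort(D[i])[1:k + 1]   # skip self at distance 0
        W[i, nbrs] = np.exp(-D[i, nbrs] ** 2 / sigma ** 2)
    return np.maximum(W, W.T)              # symmetrize

# two tight clusters of toy 2-D samples
X = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
W = knn_heat_graph(X, k=1)
print(W[0, 1] > W[0, 2])  # True: nearby samples get positive weight
```

CRGE builds two such graphs (one per distance) and then couples the two resulting low-dimensional feature sets through the co-regularization term.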


Author(s):  
Wei Wu ◽  
Bin Li ◽  
Ling Chen ◽  
Chengqi Zhang

Attributed network embedding aims to learn a low-dimensional representation for each node of a network, considering both the attributes and the structure information of the node. However, learning-based methods usually involve substantial time cost, which makes them impractical without powerful computing resources. In this paper, we propose a simple yet effective algorithm, named NetHash, that solves this problem with only moderate computing capacity. NetHash employs randomized hashing to encode shallow trees, each rooted at a node of the network. The main idea is to efficiently encode both the attributes and the structure information of each node by recursively sketching the corresponding rooted tree from bottom (i.e., the predefined highest-order neighboring nodes) to top (i.e., the root node), preserving as much information close to the root node as possible. Our extensive experimental results show that the proposed algorithm, which requires no learning, runs significantly faster than state-of-the-art learning-based network embedding methods while achieving competitive or even better accuracy.
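The bottom-up recursive sketching idea can be illustrated with a toy MinHash over rooted trees. This is a simplified reading of the approach, not NetHash's exact encoding; the graph, attributes, recursion depth, and hash construction are all made up for illustration:

```python
import hashlib

def h(value, seed):
    """Deterministic 64-bit hash of a string under a given seed."""
    digest = hashlib.md5(f"{seed}:{value}".encode()).digest()
    return int.from_bytes(digest[:8], "big")

def sketch(node, parent, depth, adj, attrs, num_perm=4):
    """MinHash-sketch the shallow tree rooted at `node`, bottom-up: a
    node's item set is its own attributes merged with its children's
    sketches, so information closer to the root dominates the signature."""
    items = set(attrs[node])
    if depth > 0:
        for nb in adj[node]:
            if nb != parent:  # expand the rooted tree, never back up
                items |= set(sketch(nb, node, depth - 1, adj, attrs, num_perm))
    # one min-hash per seeded hash function
    return tuple(str(min(h(x, seed) for x in items)) for seed in range(num_perm))

adj = {0: [1, 2], 1: [0], 2: [0]}
attrs = {0: ["ml"], 1: ["graphs"], 2: ["hashing"]}
sig = sketch(0, None, 1, adj, attrs)
print(len(sig))  # 4
```

Because no weights are learned, producing a node's signature costs only one traversal of its shallow rooted tree, which is the source of the speed advantage claimed above.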


2021 ◽  
Author(s):  
Liangchen Hu

As one of the ways to acquire efficient compact image representations, graph embedding (GE) based manifold learning has been widely developed over the last two decades. Good graph embedding depends on constructing graphs with intra-class compactness and inter-class separability, which are crucial indicators of a model’s effectiveness in generating discriminative features. Unsupervised approaches are intended to reveal the data structure information from a local or global perspective, but the resulting compact representation often has poor inter-class margins due to the lack of label information. Moreover, supervised techniques only enhance the adjacency affinity within each class while ignoring the affinity between different classes, and therefore cannot fully capture the marginal structure between the distributions of different classes. To overcome these issues, we propose a learning framework that implements Category-Oriented Self-Learning Graph Embedding (COSLGE), in which we achieve a flexible low-dimensional compact representation by imposing an adaptive graph learning process across the entire data while examining the inter-class separability of the low-dimensional embedding through jointly learning a linear classifier. Besides, our framework easily extends to the semi-supervised setting. Extensive experiments on several widely used benchmark databases demonstrate the effectiveness of the proposed method compared with state-of-the-art approaches.


2020 ◽  
Author(s):  
Mounir HADDAD ◽  
Cécile BOTHOREL ◽  
Philippe LENCA ◽  
Dominique BEDART

Abstract The goal of graph embedding is to learn a representation of a graph’s vertices in a latent low-dimensional space that encodes the structural information of the graph. While real-world networks evolve over time, the majority of research focuses on static networks, ignoring local and global evolution patterns. A simplistic approach learns node embeddings independently for each time step, which can cause unstable and inefficient representations over time. In this paper, we present TemporalNode2vec, a novel dynamic graph embedding approach that learns continuous time-aware node representations. Overall, we demonstrate that our method improves node classification compared to previous static and dynamic approaches, achieving up to a 14% gain in F1 score. We also show that our model is more data-efficient than several baseline methods, achieving good performance with a limited number of node representation features. Moreover, we develop and evaluate a task-specific variant of our method, called TsTemporalNode2vec, aiming to improve the performance and data-efficiency of node classification.


Electronics ◽  
2021 ◽  
Vol 10 (12) ◽  
pp. 1407
Author(s):  
Peng Wang ◽  
Jing Zhou ◽  
Yuzhang Liu ◽  
Xingchen Zhou

Knowledge graph embedding aims to embed entities and relations into low-dimensional vector spaces. Most existing methods focus only on the triple facts in knowledge graphs, and models based on translation or distance measurement cannot fully represent complex relations. As well-constructed prior knowledge, entity types can be employed to learn the representations of entities and relations. In this paper, we propose a novel knowledge graph embedding model named TransET, which takes advantage of entity types to learn richer semantic features. More specifically, circular convolution over the embeddings of an entity and its types is used to map the head and tail entities to type-specific representations, and a translation-based score function is then used to learn representations of triples. We evaluated our model on real-world datasets with the two benchmark tasks of link prediction and triple classification. Experimental results demonstrate that it outperforms state-of-the-art models in most cases.
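The combination of a type-specific projection via circular convolution with a translational score can be sketched as follows. The dimensions and embeddings are random placeholders and the exact scoring details of TransET may differ; circular convolution is computed via the FFT:

```python
import numpy as np

def circ_conv(a, b):
    """Circular convolution via FFT: (a * b)[k] = sum_i a[i] b[(k - i) mod d]."""
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

def score(h, h_type, r, t, t_type):
    """Type-aware translational score: project each entity by circular
    convolution with its type embedding, then apply the TransE-style
    criterion -||h' + r - t'|| (higher, i.e. closer to 0, is better)."""
    h_proj = circ_conv(h, h_type)
    t_proj = circ_conv(t, t_type)
    return -np.linalg.norm(h_proj + r - t_proj)

rng = np.random.default_rng(0)
d = 8
h, r, t = rng.normal(size=(3, d))          # entity and relation embeddings
person, city = rng.normal(size=(2, d))     # type embeddings (hypothetical)
print(score(h, person, r, t, city) <= 0)   # True: scores are non-positive
```

Convolving with the unit impulse leaves an embedding unchanged, which makes plain TransE a degenerate case of this projection.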

