Semi-Supervised Learning With Graph Learning-Convolutional Networks

Author(s): Bo Jiang, Ziyan Zhang, Doudou Lin, Jin Tang, Bin Luo

Author(s): Bingbing Xu, Huawei Shen, Qi Cao, Keting Cen, Xueqi Cheng

Graph convolutional networks have achieved remarkable success in semi-supervised learning on graph-structured data. The key to graph-based semi-supervised learning is capturing the smoothness of labels or features over nodes exerted by the graph structure. Previous methods, both spectral and spatial, define graph convolution as a weighted average over neighboring nodes and then learn graph convolution kernels that leverage this smoothness to improve the performance of graph-based semi-supervised learning. One open challenge is how to determine an appropriate neighborhood that reflects the relevant smoothness information manifested in the graph structure. In this paper, we propose GraphHeat, which leverages the heat kernel to enhance low-frequency filters and enforce smoothness in the signal variation on the graph. GraphHeat uses the local structure of a target node under heat diffusion to determine its neighboring nodes flexibly, without the constraint on neighborhood order suffered by previous methods. GraphHeat achieves state-of-the-art results on graph-based semi-supervised classification across three benchmark datasets: Cora, Citeseer, and Pubmed.
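As a rough illustration of the heat-kernel filtering idea described above, the sketch below computes exp(-sL) for a small graph using the symmetrically normalized Laplacian and sparsifies it with a threshold before one propagation step. The scaling parameter `s` and the threshold `eps` are illustrative assumptions, not values from the paper.

```python
# A minimal sketch of a heat-kernel graph filter in the spirit of GraphHeat.
import numpy as np
from scipy.linalg import expm

def heat_kernel_filter(adj, s=1.0, eps=1e-4):
    """Return a sparsified heat-kernel matrix exp(-s * L) for adjacency `adj`."""
    deg = adj.sum(axis=1)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(deg, 1e-12)))
    # Symmetrically normalized graph Laplacian L = I - D^{-1/2} A D^{-1/2}
    lap = np.eye(adj.shape[0]) - d_inv_sqrt @ adj @ d_inv_sqrt
    heat = expm(-s * lap)          # low-pass filter: high graph frequencies are damped
    heat[heat < eps] = 0.0         # keep only nodes reached by significant heat
    return heat

# One propagation step: smooth node features over the heat-diffusion neighborhood.
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
features = np.random.randn(4, 8)
smoothed = heat_kernel_filter(adj, s=2.0) @ features
```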


2020, Vol. 34 (04), pp. 4215-4222
Author(s): Binyuan Hui, Pengfei Zhu, Qinghua Hu

Graph convolutional networks (GCNs) have achieved promising performance in attributed graph clustering and semi-supervised node classification because they can model complex graph structure and jointly learn both the features and relations of nodes. Inspired by the success of unsupervised learning in training deep models, we investigate whether graph-based unsupervised learning can collaboratively boost the performance of semi-supervised learning. In this paper, we propose a multi-task graph learning model called collaborative graph convolutional networks (CGCN). CGCN is composed of an attributed graph clustering network and a semi-supervised node classification network. As Gaussian mixture models can effectively discover inherent complex data distributions, a new end-to-end attributed graph clustering network is designed by combining a variational graph auto-encoder with Gaussian mixture models (GMM-VGAE) rather than the classic k-means. If the pseudo-label of an unlabeled sample assigned by GMM-VGAE is consistent with the prediction of the semi-supervised GCN, the sample is selected and its pseudo-label is used to further boost the performance of semi-supervised learning. Extensive experiments on benchmark graph datasets validate the superiority of the proposed GMM-VGAE compared with state-of-the-art attributed graph clustering networks. The performance of node classification is greatly improved by the proposed CGCN, which verifies that graph-based unsupervised learning can be exploited to enhance the performance of semi-supervised learning.
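The pseudo-label agreement step can be illustrated with a short sketch. The names `gmm_vgae_labels` (cluster assignments already mapped onto class indices) and `gcn_probs` (softmax outputs of the semi-supervised GCN) are hypothetical inputs; the selection rule simply keeps the unlabeled nodes on which the two branches agree.

```python
# A minimal sketch of the pseudo-label agreement check, under the assumptions above.
import numpy as np

def select_confident_pseudo_labels(gmm_vgae_labels, gcn_probs, unlabeled_idx):
    """Keep unlabeled nodes where both branches assign the same class."""
    gcn_labels = gcn_probs.argmax(axis=1)
    agree = gmm_vgae_labels[unlabeled_idx] == gcn_labels[unlabeled_idx]
    selected = unlabeled_idx[agree]
    return selected, gcn_labels[selected]   # selected nodes and their pseudo-labels

# Toy usage: the selected nodes would be appended to the labeled set for the
# next round of semi-supervised training.
unlabeled_idx = np.array([3, 4, 5, 6])
gmm_vgae_labels = np.array([0, 0, 1, 1, 2, 2, 0])
gcn_probs = np.random.dirichlet(np.ones(3), size=7)
nodes, pseudo = select_confident_pseudo_labels(gmm_vgae_labels, gcn_probs, unlabeled_idx)
```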


2020, Vol. 34 (04), pp. 5892-5899
Author(s): Ke Sun, Zhouchen Lin, Zhanxing Zhu

Graph Convolutional Networks (GCNs) play a crucial role in graph learning tasks; however, learning graph embeddings with few supervised signals remains a difficult problem. In this paper, we propose a novel training algorithm for graph convolutional networks, the Multi-Stage Self-Supervised (M3S) Training Algorithm, which incorporates a self-supervised learning approach and focuses on improving the generalization performance of GCNs on graphs with few labeled nodes. First, a Multi-Stage Training Framework is presented as the basis of the M3S training method. We then leverage the DeepCluster technique, a popular form of self-supervised learning, and design a corresponding alignment mechanism on the embedding space to refine the Multi-Stage Training Framework, resulting in the M3S Training Algorithm. Finally, extensive experimental results verify the superior performance of our algorithm on graphs with few labeled nodes under different label rates compared with other state-of-the-art approaches.
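A minimal sketch of the multi-stage self-training loop is given below. The hypothetical callables `train_fn`, `predict_fn`, and `cluster_agrees` stand in for the GCN training step, its class-probability outputs, and the DeepCluster-style alignment check; the per-class quota `per_class` is an illustrative assumption.

```python
# A minimal sketch of multi-stage self-training, assuming the callables above.
def multi_stage_self_training(train_fn, predict_fn, cluster_agrees,
                              labels, labeled_idx, unlabeled_idx,
                              num_stages=3, per_class=10):
    labeled = list(labeled_idx)
    unlabeled = set(unlabeled_idx)
    model = None
    for _ in range(num_stages):
        model = train_fn(labels, labeled)     # supervised step on the current labeled set
        probs = predict_fn(model)             # (num_nodes, num_classes) probabilities
        for c in range(probs.shape[1]):
            # Most confident unlabeled candidates for class c at this stage.
            candidates = sorted(unlabeled, key=lambda i: -probs[i, c])[:per_class]
            for i in candidates:
                # Keep the pseudo-label only if the self-supervised clustering
                # on the embedding space agrees with the classifier.
                if probs[i].argmax() == c and cluster_agrees(i, c):
                    labels[i] = c             # `labels` is a mutable array of node labels
                    labeled.append(i)
                    unlabeled.discard(i)
    return model, labels, labeled
```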


Author(s): Sichao Fu, Weifeng Liu, Weili Guan, Yicong Zhou, Dapeng Tao, ...

Over the past few years, graph representation learning (GRL) has received widespread attention for learning feature representations of non-Euclidean data. As a typical GRL model, graph convolutional networks (GCNs) fuse static structural information based on the graph Laplacian, generalizing convolutional neural networks to learn sample representations that incorporate high-order structure. However, most existing GCN-based variants depend on static structural relationships in the data, which causes the extracted features to lack representativeness during the convolution process. To solve this problem, we propose dynamic graph learning convolutional networks (DGLCN) for semi-supervised classification. First, we introduce a dynamic spectral graph convolution operation that continually optimizes the high-order structural relationships between data points according to the training loss, and thus fits the local geometry of the data more exactly. Approximating this operation with a first-order Chebyshev polynomial yields the single-layer convolution rule of DGLCN. Because the optimized structural information is fused into the learning process, a multi-layer DGLCN can extract richer sample features and improve classification performance. Extensive experiments on citation network datasets demonstrate the effectiveness of DGLCN. The results show that DGLCN achieves superior classification performance compared with several existing semi-supervised classification models.
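The sketch below shows one possible single-layer realization of a graph convolution whose adjacency is partly learned from the current features and refined by the task loss, propagated with the first-order rule H' = sigma(A_hat H W). The blending weight `alpha` and the pairwise-affinity parameterization are illustrative assumptions rather than the paper's exact formulation.

```python
# A PyTorch sketch of a graph-convolution layer with a learnable adjacency
# refinement, under the assumptions stated above.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DynamicGraphConv(nn.Module):
    def __init__(self, in_dim, out_dim, alpha=0.5):
        super().__init__()
        self.weight = nn.Linear(in_dim, out_dim, bias=False)
        self.attn = nn.Linear(2 * in_dim, 1)   # scores pairwise feature affinity
        self.alpha = alpha                     # assumed blending weight

    def forward(self, x, static_adj):
        n = x.size(0)
        # Learned affinities between all node pairs (optimized by the task loss).
        xi = x.unsqueeze(1).expand(n, n, -1)
        xj = x.unsqueeze(0).expand(n, n, -1)
        learned = torch.sigmoid(self.attn(torch.cat([xi, xj], dim=-1))).squeeze(-1)
        # Blend the static structure with the learned one, then renormalize.
        adj = self.alpha * static_adj + (1 - self.alpha) * learned
        adj = adj + torch.eye(n, device=x.device)           # add self-loops
        deg_inv_sqrt = adj.sum(dim=1).clamp(min=1e-12).pow(-0.5)
        adj_hat = deg_inv_sqrt.unsqueeze(1) * adj * deg_inv_sqrt.unsqueeze(0)
        # First-order (Chebyshev) propagation rule: H' = sigma(A_hat H W).
        return F.relu(self.weight(adj_hat @ x))

# Example forward pass on a toy graph.
x = torch.randn(5, 16)
static_adj = (torch.rand(5, 5) > 0.6).float()
static_adj = ((static_adj + static_adj.t()) > 0).float()   # symmetrize
layer = DynamicGraphConv(16, 8)
out = layer(x, static_adj)   # (5, 8) node embeddings
```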

