Pseudoinverse graph convolutional networks

Author(s):  
Dominik Alfke ◽  
Martin Stoll

Graph Convolutional Networks (GCNs) have proven to be successful tools for semi-supervised classification on graph-based datasets. We propose a new GCN variant whose three-part filter space is targeted at dense graphs. Our examples include graphs generated from 3D point clouds with an increased focus on non-local information, as well as hypergraphs based on categorical data of real-world problems. These graphs differ from the common sparse benchmark graphs in terms of the spectral properties of their graph Laplacian. Most notably, we observe large eigengaps, which are unfavorable for popular existing GCN architectures. Our method overcomes these issues by utilizing the pseudoinverse of the Laplacian. Another key ingredient is a low-rank approximation of the convolutional matrix, ensuring computational efficiency and increasing accuracy at the same time. We outline how the necessary eigeninformation can be computed efficiently in each application and discuss the appropriate choice of the only metaparameter, the approximation rank. We finally showcase our method’s performance regarding runtime and accuracy in various experiments with real-world datasets.
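
The sketch below illustrates the central idea of this abstract: build a low-rank approximation of the Laplacian pseudoinverse from a few eigenpairs and use it as the convolution matrix of a GCN layer. It is only a minimal illustration of that idea, not the authors' implementation; the function names, the dense eigendecomposition, and the random example graph are our assumptions.

```python
import numpy as np

def lowrank_pseudoinverse_filter(L, rank, tol=1e-10):
    """Rank-`rank` approximation of the Laplacian pseudoinverse L^+.

    Minimal sketch: eigendecompose the (dense, symmetric) Laplacian, skip
    the (near-)zero eigenvalues spanning its kernel, and keep the `rank`
    smallest nonzero eigenpairs, which dominate L^+.
    """
    eigvals, eigvecs = np.linalg.eigh(L)            # eigenvalues in ascending order
    keep = eigvals > tol                            # drop the kernel of L
    vals = eigvals[keep][:rank]
    vecs = eigvecs[:, keep][:, :rank]
    # L^+ ~= U_k diag(1/lambda_k) U_k^T  (rank-k truncation)
    return vecs @ np.diag(1.0 / vals) @ vecs.T

def gcn_layer(C, X, W):
    """One graph convolution step with convolution matrix C: ReLU(C X W)."""
    return np.maximum(C @ X @ W, 0.0)

# Illustrative usage on a random dense graph (not a dataset from the paper):
rng = np.random.default_rng(0)
A = rng.random((50, 50)); A = (A + A.T) / 2; np.fill_diagonal(A, 0)
L = np.diag(A.sum(axis=1)) - A                      # combinatorial Laplacian
C = lowrank_pseudoinverse_filter(L, rank=10)
H = gcn_layer(C, rng.standard_normal((50, 16)), rng.standard_normal((16, 8)))
```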

Author(s):  
Xunpeng Huang ◽  
Le Wu ◽  
Enhong Chen ◽  
Hengshu Zhu ◽  
Qi Liu ◽  
...  

Matrix Factorization (MF) is among the most widely used techniques for collaborative filtering based recommendation. Along this line, a critical demand is to incrementally refine the MF models when new ratings come in an online scenario. However, most existing incremental MF algorithms are limited to specific MF models or subject to strict use restrictions. In this paper, we propose a general incremental MF framework by designing a linear transformation of user and item latent vectors over time. This framework achieves relatively high accuracy with a computation- and space-efficient training process in an online scenario. Meanwhile, we explain the framework from a low-rank approximation perspective and give an upper bound on the training error when the framework is used for incremental learning in some special cases. Finally, extensive experimental results on two real-world datasets clearly validate the effectiveness, efficiency, and storage performance of the proposed framework.
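
To make the incremental idea concrete, the following sketch refines fixed MF factors through learned linear transformation matrices when a batch of new ratings arrives, instead of retraining from scratch. The squared-error objective, the regularization toward the identity, and names such as `incremental_mf_update` are illustrative assumptions, not the paper's framework.

```python
import numpy as np

def incremental_mf_update(U, V, new_ratings, lr=0.01, reg=0.1, epochs=20):
    """Refine MF factors via shared linear maps when new ratings arrive.

    Sketch of the general idea of updating latent vectors with learned
    linear transformations (U <- U T_u, V <- V T_v) rather than retraining;
    the objective and SGD updates here are illustrative.
    """
    d = U.shape[1]
    T_u, T_v = np.eye(d), np.eye(d)                 # start from the identity map
    for _ in range(epochs):
        for u, i, r in new_ratings:                 # (user, item, rating) triples
            pu, qi = U[u] @ T_u, V[i] @ T_v
            err = r - pu @ qi
            # Gradient steps on the transformation matrices only,
            # regularized toward the identity
            T_u -= lr * (-err * np.outer(U[u], qi) + reg * (T_u - np.eye(d)))
            T_v -= lr * (-err * np.outer(V[i], pu) + reg * (T_v - np.eye(d)))
    return U @ T_u, V @ T_v

# Illustrative usage with random factors and a few new (user, item, rating) triples:
rng = np.random.default_rng(1)
U, V = rng.standard_normal((100, 8)), rng.standard_normal((200, 8))
U_new, V_new = incremental_mf_update(U, V, [(3, 17, 4.0), (42, 5, 2.5), (7, 199, 5.0)])
```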


Author(s):  
Bin Zhao ◽  
Johannes R. Sveinsson ◽  
Magnus O. Ulfarsson ◽  
Jocelyn Chanussot

Author(s):  
Yijing Luo ◽  
Bo Han ◽  
Chen Gong

Practically, we often face the dilemma that some of the examples for training a classifier are incorrectly labeled due to various subjective and objective factors. Although intensive efforts have been put into designing classifiers that are robust to label noise, most previous methods have not fully utilized data distribution information. To address this issue, this paper introduces a bi-level learning paradigm termed “Spectral Cluster Discovery” (SCD) for combating noisy labels. Namely, we simultaneously learn a robust classifier (Learning stage) by discovering a low-rank approximation to the ground-truth label matrix and learn an ideal affinity graph (Clustering stage). Specifically, we use the learned classifier to assign examples with similar labels to a mutual cluster. We then utilize the learned affinity graph to explore the noisy examples based on the cluster membership. Both stages reinforce each other iteratively. Experimental results on typical benchmark and real-world datasets verify the superiority of SCD to other label noise learning methods.
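
The sketch below gives a simplified version of such a two-stage alternation: a low-rank approximation of the (graph-smoothed) label matrix followed by cluster-wise majority voting on a spectral clustering of an affinity graph. It only illustrates the learning/clustering interplay described above and is not the authors' SCD algorithm; the fixed RBF affinity, the SVD truncation, and the function name are assumptions.

```python
import numpy as np
from sklearn.cluster import SpectralClustering
from sklearn.metrics.pairwise import rbf_kernel

def scd_style_relabel(X, noisy_labels, n_classes, rank=2, iters=3):
    """Illustrative two-stage alternation (not the authors' SCD algorithm):
    smooth the noisy one-hot labels over an affinity graph, keep a low-rank
    approximation of the smoothed label matrix, then correct labels by
    majority vote within spectral clusters. `noisy_labels` is an int array.
    """
    A = rbf_kernel(X)                                # assumed RBF affinity graph
    A_norm = A / A.sum(axis=1, keepdims=True)        # row-stochastic smoothing
    # Clustering stage (sketch): the affinity is kept fixed here, whereas
    # the paper re-learns the affinity graph in every round.
    clusters = SpectralClustering(n_clusters=n_classes, affinity="precomputed",
                                  random_state=0).fit_predict(A)
    labels = noisy_labels.copy()
    for _ in range(iters):
        Y = np.eye(n_classes)[labels]                # one-hot current labels
        # Learning stage (sketch): low-rank approximation of smoothed labels
        U, s, Vt = np.linalg.svd(A_norm @ Y, full_matrices=False)
        labels = ((U[:, :rank] * s[:rank]) @ Vt[:rank]).argmax(axis=1)
        # Correct labels by majority vote inside each discovered cluster
        for c in range(n_classes):
            m = clusters == c
            if m.any():
                labels[m] = np.bincount(labels[m], minlength=n_classes).argmax()
    return labels
```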


2016 ◽  
Vol 25 (3) ◽  
pp. 1340-1353 ◽  
Author(s):  
Victor May ◽  
Yosi Keller ◽  
Nir Sharon ◽  
Yoel Shkolnisky

2020 ◽  
Vol 14 (12) ◽  
pp. 2791-2798
Author(s):  
Xiaoqun Qiu ◽  
Zhen Chen ◽  
Saifullah Adnan ◽  
Hongwei He
