Blocking-based Neighbor Sampling for Large-scale Graph Neural Networks

Author(s):  
Kai-Lang Yao ◽  
Wu-Jun Li

The exponential increase in computation and memory complexity with network depth has become the main impediment to the successful application of graph neural networks (GNNs) on large-scale graphs, such as graphs with hundreds of millions of nodes. In this paper, we propose a novel neighbor sampling strategy, dubbed blocking-based neighbor sampling (BNS), for efficient training of GNNs on large-scale graphs. Specifically, BNS adopts a policy to stochastically block the ongoing expansion of neighboring nodes, which reduces the rate of the exponential growth in the computation and memory complexity of GNNs. Furthermore, a reweighting policy is applied to graph convolution to adjust the contribution of blocked and non-blocked neighbors to central nodes. We theoretically prove that BNS provides an unbiased estimate of the original graph convolution operation. Extensive experiments on three benchmark datasets show that, on large-scale graphs, BNS is 2x to 5x faster than state-of-the-art methods at the same accuracy. Moreover, even on small-scale graphs, BNS retains its advantage of low time cost.
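A minimal sketch of the blocking idea, assuming an adjacency list `adj` (a dict mapping each node to its neighbor list). The fan-out, blocking probability, and reweighting formula below are illustrative assumptions, not the paper's exact policy or unbiased estimator.

```python
import numpy as np

def bns_sample(adj, seeds, fanout=5, block_prob=0.5, rng=None):
    """Sample one hop of neighbors for `seeds`, stochastically blocking the
    further expansion of some sampled neighbors and reweighting the rest."""
    rng = rng or np.random.default_rng(0)
    edges = []     # (center, neighbor, weight) triples of the sampled subgraph
    frontier = []  # non-blocked neighbors whose own neighborhoods get expanded next
    for v in seeds:
        nbrs = adj.get(v, [])
        if not nbrs:
            continue
        sampled = rng.choice(nbrs, size=min(fanout, len(nbrs)), replace=False)
        blocked = rng.random(len(sampled)) < block_prob
        n_keep = max(int((~blocked).sum()), 1)
        for u, is_blocked in zip(sampled, blocked):
            # Illustrative reweighting: blocked neighbors contribute with a smaller
            # weight, non-blocked neighbors are up-weighted to compensate.
            w = 1.0 / len(sampled) if is_blocked else len(nbrs) / (len(sampled) * n_keep)
            edges.append((v, int(u), w))
            if not is_blocked:
                frontier.append(int(u))
    return edges, frontier

# Toy usage: a 5-node graph, sampling around seed nodes 0 and 1.
adj = {0: [1, 2, 3], 1: [0, 4], 2: [0], 3: [0], 4: [1]}
edges, frontier = bns_sample(adj, seeds=[0, 1], fanout=2, block_prob=0.5)
```

Only the `frontier` nodes are expanded at the next layer, so blocking directly caps the growth of the sampled computation graph.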

2021 ◽  
Vol 40 (3) ◽  
pp. 1-13
Author(s):  
Lumin Yang ◽  
Jiajie Zhuang ◽  
Hongbo Fu ◽  
Xiangzhi Wei ◽  
Kun Zhou ◽  
...  

We introduce SketchGNN, a convolutional graph neural network for semantic segmentation and labeling of freehand vector sketches. We treat an input stroke-based sketch as a graph, with nodes representing the sampled points along the input strokes and edges encoding the stroke structure information. To predict per-node labels, SketchGNN uses graph convolution and a static-dynamic branching network architecture to extract features at three levels, i.e., point-level, stroke-level, and sketch-level. SketchGNN significantly improves the accuracy of the state-of-the-art methods for semantic sketch segmentation (by 11.2% in the pixel-based metric and 18.2% in the component-based metric on the large-scale, challenging SPG dataset) and has orders of magnitude fewer parameters than both image-based and sequence-based methods.
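A minimal sketch of the graph construction step, assuming each stroke is a list of (x, y) points. The sampling stride, the choice of node features, and the consecutive-point edges are illustrative assumptions rather than SketchGNN's exact construction.

```python
import numpy as np

def sketch_to_graph(strokes, stride=4):
    """Turn a stroke-based sketch into (node_features, edge_list)."""
    nodes, edges = [], []
    for stroke_id, stroke in enumerate(strokes):
        sampled = stroke[::stride] or stroke[:1]   # keep at least one point per stroke
        start = len(nodes)
        nodes.extend([(x, y, stroke_id) for (x, y) in sampled])
        # Intra-stroke edges connect consecutive sampled points (both directions).
        for i in range(start, start + len(sampled) - 1):
            edges.append((i, i + 1))
            edges.append((i + 1, i))
    return np.asarray(nodes, dtype=np.float32), np.asarray(edges, dtype=np.int64)

# Toy usage: two strokes of a freehand sketch.
strokes = [[(0, 0), (1, 1), (2, 2), (3, 3), (4, 4)],
           [(5, 0), (5, 1), (5, 2)]]
X, E = sketch_to_graph(strokes, stride=2)
```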


2020 ◽  
Vol 53 (2) ◽  
pp. 2634-2641
Author(s):  
Vinicius Lima ◽  
Mark Eisen ◽  
Konstantinos Gatsis ◽  
Alejandro Ribeiro

2019 ◽  
Vol 10 (15) ◽  
pp. 4129-4140 ◽  
Author(s):  
Kyle Mills ◽  
Kevin Ryczko ◽  
Iryna Luchak ◽  
Adam Domurad ◽  
Chris Beeler ◽  
...  

We present a physically motivated topology of a deep neural network that can efficiently infer extensive parameters (such as energy, entropy, or number of particles) of arbitrarily large systems, doing so with O(N) scaling.
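A minimal sketch of the extensivity idea, assuming the input system can be split into fixed-size tiles that are scored by one shared sub-network and summed, so the cost grows linearly with system size. The tiny network, tile size, and toy input below are illustrative assumptions, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(16, 32)) * 0.1, np.zeros(32)   # weights shared by every tile
w2, b2 = rng.normal(size=32) * 0.1, 0.0

def tile_contribution(tile):
    """Score one fixed-size tile with the shared sub-network."""
    h = np.maximum(tile @ W1 + b1, 0.0)                   # ReLU hidden layer
    return float(h @ w2 + b2)

def extensive_prediction(system, tile_size=16):
    """An extensive quantity is the sum of per-tile contributions: O(N) tiles, O(N) cost."""
    tiles = system.reshape(-1, tile_size)
    return sum(tile_contribution(t) for t in tiles)

print(extensive_prediction(rng.normal(size=64)))          # a 4-tile toy "system"
```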


Electronics ◽  
2019 ◽  
Vol 8 (10) ◽  
pp. 1072 ◽  
Author(s):  
Shifeng Xia ◽  
Jiexian Zeng ◽  
Lu Leng ◽  
Xiang Fu

Recently, convolutional neural networks (CNNs) have achieved great success in scene recognition. Compared with traditional hand-crafted features, CNNs can extract more robust and generalized features for scene recognition. However, existing CNN-based scene recognition methods do not sufficiently take into account the relationship between image regions and categories when choosing local regions, which results in many redundant local regions and degrades recognition accuracy. In this paper, we propose an effective method for exploring the discriminative regions of a scene image. Our method utilizes the gradient-weighted class activation mapping (Grad-CAM) technique and weakly supervised information to generate an attention map (AM) of scene images, dubbed the weakly supervised attention map (WS-AM). Regions of the AM where both the local mean and the local center value are large correspond to discriminative regions that are helpful for scene recognition. We sample discriminative regions at multiple scales and extract the features of large-scale and small-scale regions with two different pre-trained CNNs, respectively. The features from the two scales are aggregated by improved vector of locally aggregated descriptors (VLAD) coding and max pooling, respectively. Finally, a pre-trained CNN is used to extract the global feature of the image in the fully-connected (fc) layer, and the local features are combined with the global feature to obtain the image representation. We validated the effectiveness of our method on three benchmark datasets, MIT Indoor 67, Scene 15, and UIUC Sports, and obtained 85.67%, 94.80%, and 95.12% accuracy, respectively. Compared with some state-of-the-art methods, WS-AM requires fewer local regions and therefore has better real-time performance.
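A minimal sketch of the region-selection rule, assuming `am` is a 2-D Grad-CAM-style attention map normalized to [0, 1]. The window size and the two thresholds are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def select_regions(am, win=7, mean_thr=0.6, center_thr=0.6):
    """Return (row, col) centers of windows whose local mean AND center value are both large."""
    half = win // 2
    picks = []
    for i in range(half, am.shape[0] - half):
        for j in range(half, am.shape[1] - half):
            patch = am[i - half:i + half + 1, j - half:j + half + 1]
            if patch.mean() >= mean_thr and am[i, j] >= center_thr:
                picks.append((i, j))
    return picks

# Toy usage with a stand-in attention map.
am = np.random.default_rng(0).random((28, 28))
regions = select_regions(am)
```

The selected centers would then be mapped back to image coordinates and cropped at several scales before feature extraction.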


2018 ◽  
Vol 2018 ◽  
pp. 1-15 ◽  
Author(s):  
Qi Zhao ◽  
Shuchang Lyu ◽  
Boxue Zhang ◽  
Wenquan Feng

Convolutional neural networks (CNNs) are becoming increasingly popular and now serve as a standard feature extractor in image processing, big data processing, fog computing, and other applications. CNNs usually consist of several basic units, such as convolutional, pooling, and activation units. In CNNs, conventional pooling means 2×2 max-pooling or average-pooling applied after the convolutional or ReLU layers. In this paper, we propose a Multiactivation Pooling (MAP) method to make CNNs more accurate on classification tasks without increasing depth or the number of trainable parameters. We add more convolutional layers before one pooling layer and expand the pooling region to 4×4, 8×8, 16×16, and even larger. When performing large-scale subsampling, we pick the top-k activations, sum them, and constrain the result with a hyperparameter σ. We pick VGG, ALL-CNN, and DenseNets as our baseline models and evaluate the proposed MAP method on benchmark datasets: CIFAR-10, CIFAR-100, SVHN, and ImageNet. The classification results are competitive.
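A minimal sketch of the pooling rule on a single feature map, assuming one plausible reading of the σ constraint (scaling the top-k sum). The window size, k, and σ values are illustrative, not the paper's settings.

```python
import numpy as np

def map_pool(feature_map, window=4, k=3, sigma=0.5):
    """Large-window subsampling: keep the top-k activations per window, sum, scale by sigma."""
    h, w = feature_map.shape
    out = np.zeros((h // window, w // window), dtype=feature_map.dtype)
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = feature_map[i * window:(i + 1) * window,
                                j * window:(j + 1) * window].ravel()
            topk = np.sort(patch)[-k:]        # k largest activations in the window
            out[i, j] = sigma * topk.sum()    # summed and constrained by sigma
    return out

# Toy usage: a 16x16 feature map subsampled with 4x4 windows.
fm = np.random.default_rng(0).random((16, 16))
pooled = map_pool(fm, window=4, k=3, sigma=0.5)   # shape (4, 4)
```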


Author(s):  
Kaidi Xu ◽  
Hongge Chen ◽  
Sijia Liu ◽  
Pin-Yu Chen ◽  
Tsui-Wei Weng ◽  
...  

Graph neural networks (GNNs), which apply deep neural networks to graph data, have achieved significant performance on the task of semi-supervised node classification. However, only a few works have addressed the adversarial robustness of GNNs. In this paper, we first present a novel gradient-based attack method that addresses the difficulty of tackling discrete graph data. Compared with current adversarial attacks on GNNs, the results show that, by perturbing only a small number of edges (through additions and deletions), our optimization-based attack can lead to a noticeable decrease in classification performance. Moreover, leveraging our gradient-based attack, we propose the first optimization-based adversarial training for GNNs. Our method yields higher robustness against both gradient-based and greedy attack methods without sacrificing classification accuracy on the original graph.
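A minimal sketch of a gradient-guided edge perturbation, assuming a precomputed gradient of the classification loss with respect to a dense adjacency matrix. The budget, the flip rule, and the toy inputs are illustrative assumptions, not the paper's exact optimization.

```python
import numpy as np

def attack_edges(adj, loss_grad, budget=5):
    """Flip the `budget` edges whose perturbation most increases the loss (first-order estimate)."""
    # Deleting an existing edge changes it by -1, so a negative gradient helps;
    # adding a missing edge changes it by +1, so a positive gradient helps.
    score = np.where(adj > 0, -loss_grad, loss_grad)
    score = np.triu(score, k=1)                        # undirected graph: upper triangle only
    best = np.argsort(score, axis=None)[::-1][:budget]
    perturbed = adj.copy()
    for idx in best:
        i, j = np.unravel_index(idx, adj.shape)
        perturbed[i, j] = perturbed[j, i] = 1 - perturbed[i, j]   # add or delete the edge
    return perturbed

# Toy usage with a random symmetric 0/1 adjacency and a stand-in gradient.
rng = np.random.default_rng(0)
A = np.triu((rng.random((6, 6)) > 0.7).astype(float), 1)
A = A + A.T
G = rng.normal(size=A.shape)                           # stand-in for dLoss/dA
A_adv = attack_edges(A, G, budget=3)
```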


Author(s):  
Jinlong Du ◽  
Senzhang Wang ◽  
Hao Miao ◽  
Jiaqiang Zhang

Graph pooling is a critical operation for downsampling a graph in graph neural networks. Existing coarsening pooling methods (e.g., DiffPool) mostly focus on capturing the global topology by assigning nodes to several coarse clusters, while dropping pooling methods (e.g., SAGPool) try to preserve the local topology by selecting the top-k representative nodes. However, an effective way to integrate the two types of methods, so that both the local and the global topology of a graph are well captured, is still lacking. To address this issue, we propose a Multi-channel Graph Pooling method named MuchPool, which captures the local structure, the global structure, and node features simultaneously in graph pooling. Specifically, we use two channels to conduct dropping pooling based on the local topology and node features, respectively, and one channel to conduct coarsening pooling. A cross-channel convolution operation is then designed to refine the graph representations of the different channels. Finally, the pooling results are aggregated as the final pooled graph. Extensive experiments on six benchmark datasets demonstrate the superior performance of MuchPool. The code of this work is publicly available on GitHub.
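A minimal sketch of one dropping-pooling channel (top-k node selection), assuming dense node features X and adjacency A. The scoring projection, gating, and keep ratio are illustrative assumptions; the coarsening channel and the cross-channel convolution are omitted.

```python
import numpy as np

def topk_pool(X, A, p, ratio=0.5):
    """Keep the top-ratio nodes by a learned score and return the induced subgraph."""
    score = X @ p                                      # one scalar score per node
    k = max(1, int(ratio * X.shape[0]))
    keep = np.argsort(score)[::-1][:k]                 # indices of the top-k nodes
    X_pool = X[keep] * np.tanh(score[keep])[:, None]   # gate the kept features by their score
    A_pool = A[np.ix_(keep, keep)]                     # adjacency restricted to kept nodes
    return X_pool, A_pool, keep

# Toy usage: 8 nodes with 4-dimensional features.
rng = np.random.default_rng(0)
X = rng.normal(size=(8, 4))
A = (rng.random((8, 8)) > 0.6).astype(float)
Xp, Ap, kept = topk_pool(X, A, p=rng.normal(size=4), ratio=0.5)
```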


Buildings ◽  
2018 ◽  
Vol 8 (11) ◽  
pp. 151 ◽  
Author(s):  
João Estêvão

The selection of a given method for the seismic vulnerability assessment of buildings depends mostly on the scale of the analysis. Results obtained in large-scale studies are usually less accurate than those obtained in small-scale studies. This paper presents a study on the feasibility of using Artificial Neural Networks (ANNs) to carry out fast and accurate large-scale seismic vulnerability studies. In the proposed approach, an ANN is used to obtain a simplified capacity curve for a building typology, which is then fed to the N2 method, as presented in Annex B of Eurocode 8, to assess the structural seismic behaviour. To study the accuracy of the proposed approach, two ANNs with identical architectures were trained with different numbers of training vectors, in order to evaluate their ability to achieve good results in regions of the problem domain that are not well represented by the training data. The case study presented in this work shows that ANN precision depends strongly on the amount of training data, and demonstrates that ANNs can produce simplified capacity curves for seismic assessment purposes with high precision.
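A minimal sketch of the mapping an ANN would learn in this setting: from a few building-typology parameters to the parameters of a simplified (bilinear) capacity curve that can then feed the N2 method. The network size, input features, and output parameterization below are illustrative assumptions, not the trained ANN of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(3, 8)) * 0.1, np.zeros(8)    # stand-in (untrained) weights
W2, b2 = rng.normal(size=(8, 3)) * 0.1, np.zeros(3)

def capacity_curve(params):
    """params = [height_m, period_s, strength_ratio] -> [yield_disp, yield_force, ultimate_disp]."""
    h = np.tanh(params @ W1 + b1)
    return h @ W2 + b2

# Toy query for one hypothetical building typology.
print(capacity_curve(np.array([12.0, 0.6, 0.15])))
```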


2021 ◽  
Author(s):  
Zongtao Liu ◽  
Bin Ma ◽  
Quan Liu ◽  
Jian Xu ◽  
Bo Zheng
