Computing Graph Neural Networks: A Survey from Algorithms to Accelerators

2022 · Vol. 54 (9) · pp. 1-38
Author(s): Sergi Abadal, Akshay Jain, Robert Guirado, Jorge López-Alonso, Eduard Alarcón

Graph Neural Networks (GNNs) have exploded onto the machine learning scene in recent years owing to their capability to model and learn from graph-structured data. Such an ability has strong implications in a wide variety of fields whose data are inherently relational and for which conventional neural networks do not perform well. Indeed, as recent reviews can attest, research in the area of GNNs has grown rapidly and has led to the development of a variety of GNN algorithm variants, as well as to the exploration of ground-breaking applications in chemistry, neurology, electronics, and communication networks, among others. At the current stage of research, however, the efficient processing of GNNs remains an open challenge for several reasons. Beyond their novelty, GNNs are hard to compute due to their dependence on the input graph, their combination of dense and very sparse operations, and the need to scale to huge graphs in some applications. In this context, this article aims to make two main contributions. On the one hand, a review of the field of GNNs is presented from the perspective of computing. This includes a brief tutorial on GNN fundamentals, an overview of the evolution of the field in the last decade, and a summary of the operations carried out in the multiple phases of different GNN algorithm variants. On the other hand, an in-depth analysis of current software and hardware acceleration schemes is provided, from which a hardware-software, graph-aware, and communication-centric vision for GNN accelerators is distilled.
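The mix of dense and very sparse operations the survey highlights can be seen in a single message-passing layer: neighbour aggregation is a sparse matrix product over the adjacency structure, while the feature transformation is a dense GEMM. A minimal sketch (toy graph and weights, not any specific accelerator's kernel):

```python
import numpy as np

def gnn_layer(A, H, W):
    """One message-passing layer: aggregate neighbour features, then combine.

    A: (N, N) adjacency matrix (very sparse in practice, so this is an SpMM)
    H: (N, F) node feature matrix
    W: (F, F') weight matrix (a dense GEMM)
    """
    M = A @ H                   # aggregation phase: graph-dependent, sparse
    Z = M @ W                   # combination phase: graph-independent, dense
    return np.maximum(Z, 0.0)   # ReLU update

# Toy graph: 3 nodes in a path 0-1-2, with self-loops added
A = np.array([[1, 1, 0],
              [1, 1, 1],
              [0, 1, 1]], dtype=float)
H = np.eye(3)        # one-hot input features
W = np.ones((3, 2))  # illustrative weights
out = gnn_layer(A, H, W)
print(out.shape)  # (3, 2)
```

The dependence of the first product on the input graph's sparsity pattern, versus the regular shape of the second, is precisely what makes a hardware-software, graph-aware design attractive.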

2020 · Vol. 34 (04) · pp. 3898-3905
Author(s): Claudio Gallicchio, Alessio Micheli

We address the efficiency issue in the construction of deep graph neural networks (GNNs). The approach exploits the idea of representing each input graph as a fixed point of a dynamical system (implemented through a recurrent neural network) and leverages a deep architectural organization of the recurrent units. Efficiency is gained in several ways, including the use of small and very sparse networks in which the weights of the recurrent units are left untrained under the stability condition introduced in this work. This can be viewed as a way to study the intrinsic power of the architecture of a deep GNN, and also to provide insights for the set-up of more complex, fully trained models. Through experimental results, we show that even without training of the recurrent connections, the architecture of small deep GNNs is surprisingly able to match or improve on the state-of-the-art performance on a significant set of tasks in the field of graph classification.
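The general reservoir-computing idea behind untrained-but-stable recurrent units can be sketched as follows. This is an illustration of the mechanism, not the authors' exact construction or stability condition: sparse random weights are rescaled so the spectral norm stays below 1 (a common sufficient contraction condition), which guarantees the node-state iteration converges to a fixed point without any training of the recurrent connections.

```python
import numpy as np

rng = np.random.default_rng(0)

def untrained_recurrent_weights(n, density=0.5, scale=0.9):
    """Random sparse recurrent weights, left untrained and rescaled so the
    spectral norm is below 1 -- a sufficient contraction (stability) condition."""
    W = rng.standard_normal((n, n)) * (rng.random((n, n)) < density)
    return W * (scale / np.linalg.norm(W, 2))

def graph_fixed_point(A, W, X, iters=300):
    """Encode a graph as the fixed point of H = tanh(A @ H @ W.T + X)."""
    H = np.zeros_like(X)
    for _ in range(iters):
        H = np.tanh(A @ H @ W.T + X)
    return H

A = np.array([[0.0, 1.0], [1.0, 0.0]])  # two mutually connected nodes
X = rng.standard_normal((2, 16))        # per-node input projections
W = untrained_recurrent_weights(16)
H = graph_fixed_point(A, W, X)          # converged node states, shape (2, 16)
```

Only a lightweight readout on top of states like `H` would then be trained, which is where the efficiency gain comes from.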


2021 · Vol. 2 (4) · pp. 1-8
Author(s): Martin Happ, Matthias Herlich, Christian Maier, Jia Lei Du, Peter Dorfinger

Modeling communication networks to predict performance metrics such as delay and jitter is important for evaluating and optimizing them. In recent years, neural networks have been used for this purpose, and they may have advantages over existing models, for example those from queueing theory. One of these neural networks is RouteNet, which is based on graph neural networks. However, it rests on simplified assumptions. One key simplification is the restriction to a single scheduling policy, which describes how packets of different flows are prioritized for transmission. In this paper, we propose a solution that supports multiple scheduling policies (Strict Priority, Deficit Round Robin, Weighted Fair Queueing) and can handle mixed scheduling policies in a single communication network. Our solution is based on the RouteNet architecture and was developed as part of the "Graph Neural Network Challenge". We achieved a mean absolute percentage error under 1% with our extended model on the evaluation data set from the challenge. This takes neural-network-based delay estimation one step closer to practical use.
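The reported evaluation metric, mean absolute percentage error (MAPE), is straightforward to compute; a small sketch with hypothetical per-flow delay values (not data from the challenge):

```python
import numpy as np

def mape(y_true, y_pred):
    """Mean absolute percentage error, in percent."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return 100.0 * np.mean(np.abs((y_true - y_pred) / y_true))

# Hypothetical ground-truth vs. predicted per-flow delays (seconds)
true_delay = [0.10, 0.25, 0.40]
pred_delay = [0.101, 0.248, 0.402]
print(round(mape(true_delay, pred_delay), 3))  # 0.767, i.e. under 1%
```

A model meeting the challenge target would keep this figure below 1% across the whole evaluation set.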


2020 · Vol. 67 · pp. 757-795
Author(s): Dieuwke Hupkes, Verna Dankers, Mathijs Mul, Elia Bruni

Despite a multitude of empirical studies, little consensus exists on whether neural networks are able to generalise compositionally, a controversy that, in part, stems from a lack of agreement about what it means for a neural model to be compositional. As a response to this controversy, we present a set of tests that provide a bridge between, on the one hand, the vast amount of linguistic and philosophical theory about compositionality of language and, on the other, the successful neural models of language. We collect different interpretations of compositionality and translate them into five theoretically grounded tests for models that are formulated on a task-independent level. In particular, we provide tests to investigate (i) if models systematically recombine known parts and rules; (ii) if models can extend their predictions beyond the length they have seen in the training data; (iii) if models' composition operations are local or global; (iv) if models' predictions are robust to synonym substitutions; and (v) if models favour rules or exceptions during training. To demonstrate the usefulness of this evaluation paradigm, we instantiate these five tests on a highly compositional data set which we dub PCFG SET and apply the resulting tests to three popular sequence-to-sequence models: a recurrent, a convolution-based and a transformer model. We provide an in-depth analysis of the results, which uncovers the strengths and weaknesses of these three architectures and points to potential areas of improvement.
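Test (iv), robustness to synonym substitution, can be sketched as a simple consistency check. The `model` function and names below are illustrative, not the paper's API: the test scores what fraction of predictions survive swapping a token for its designated synonym.

```python
def substitution_robustness(model, inputs, synonym_pairs):
    """Fraction of applicable inputs whose prediction is unchanged when a
    token is replaced by its synonym (a sketch of test iv).

    model: maps a token list to an output string (hypothetical interface)
    inputs: list of token lists
    synonym_pairs: list of (token, synonym) pairs to substitute
    """
    consistent, total = 0, 0
    for tokens in inputs:
        for old, new in synonym_pairs:
            if old not in tokens:
                continue
            swapped = [new if t == old else t for t in tokens]
            total += 1
            consistent += model(tokens) == model(swapped)
    return consistent / total if total else 1.0

# Toy model that echoes the first token, so substituting later tokens
# never changes its output: perfectly robust by this measure.
toy = lambda tokens: tokens[0]
score = substitution_robustness(toy, [["copy", "A", "B"]], [("B", "C")])
print(score)  # 1.0
```

A fully compositional model should score 1.0; degradation indicates the model's representations are sensitive to the surface form rather than the meaning.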



2019 · pp. 47-71
Author(s): Petr M. Mozias

China’s Belt and Road Initiative can be viewed in two ways. On the one hand, it is intended to transform the newly acquired economic potential of the country into a higher status in the world. China invites many nations to build gigantic transit corridors through joint efforts, productively applying its capital and technologies in the process. International transactions in RMB are also being expanded. On the other hand, the Belt and Road Initiative is also a necessity for China to cope with evident problems of its current stage of development, such as industrial overcapacity, overdependence on imports of raw materials from a narrow circle of countries, and a subordinate status in global value chains. For Russia, participation in the Belt and Road Initiative may be fruitful, since the very character of the project provides room to manoeuvre. By now, Russian exports to China consist primarily of fuels and other commodities. A more active industrial policy is needed to correct this situation. The flexible framework of the Belt and Road Initiative is more suitable for achieving this objective than traditional forms of regional integration, such as a free trade zone.


2020
Author(s): Artur Schweidtmann, Jan Rittig, Andrea König, Martin Grohe, Alexander Mitsos, ...

Prediction of combustion-related properties of (oxygenated) hydrocarbons is an important and challenging task for which quantitative structure-property relationship (QSPR) models are frequently employed. Recently, a machine learning method, graph neural networks (GNNs), has shown promising results for the prediction of structure-property relationships. GNNs utilize a graph representation of molecules, where atoms correspond to nodes and bonds to edges containing information about the molecular structure. More specifically, GNNs learn physico-chemical properties as a function of the molecular graph in a supervised learning setup using a backpropagation algorithm. This end-to-end learning approach eliminates the need for selection of molecular descriptors or structural groups, as it learns optimal fingerprints through graph convolutions and maps the fingerprints to the physico-chemical properties by deep learning. We develop GNN models for predicting three fuel ignition quality indicators, i.e., the derived cetane number (DCN), the research octane number (RON), and the motor octane number (MON), of oxygenated and non-oxygenated hydrocarbons. In light of limited experimental data on the order of hundreds of samples, we propose a combination of multi-task learning, transfer learning, and ensemble learning. The results show competitive performance of the proposed GNN approach compared to state-of-the-art QSPR models, making it a promising field for future research. The prediction tool is available via a web front-end at www.avt.rwth-aachen.de/gnn.
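The molecular-graph encoding the abstract describes (atoms as nodes, bonds as edges) can be illustrated with a deliberately simplified example. Real QSPR GNNs use far richer atom and bond features than the toy one-hot element encoding below; this only shows the data structure and one neighbour-averaging convolution step:

```python
import numpy as np

# Minimal molecular graph for ethanol (CH3-CH2-OH), heavy atoms only.
atoms = ["C", "C", "O"]
bonds = [(0, 1), (1, 2)]          # C-C and C-O single bonds

elements = ["C", "O"]             # toy one-hot vocabulary
X = np.zeros((len(atoms), len(elements)))
for i, a in enumerate(atoms):
    X[i, elements.index(a)] = 1.0

A = np.zeros((len(atoms), len(atoms)))
for u, v in bonds:
    A[u, v] = A[v, u] = 1.0       # undirected bonds

# One graph convolution: each atom averages its neighbours' features.
# Stacking such layers yields the learned fingerprint that replaces
# hand-selected molecular descriptors or structural groups.
deg = A.sum(axis=1, keepdims=True)
H = (A @ X) / np.maximum(deg, 1.0)
print(H)
```

A readout (e.g. summing node states) would then map the fingerprint to a property such as DCN, RON, or MON.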


2020
Author(s): Zheng Lian, Jianhua Tao, Bin Liu, Jian Huang, Zhanlei Yang, ...

Author(s): Steven J. R. Ellis

This chapter introduces the topic of retailing in the Roman world and outlines some of the important developments in its study. It establishes why the focus of the book zooms in from retailing in general to the retailing of food and drink in particular, and thus from shops to bars. Another aim is to demonstrate the scope of the study, which is an in-depth analysis of specific shops and bars at Pompeii on the one hand, and on the other a broader survey of the retail landscapes of cities throughout the Roman world. Essentially, this chapter provides the theoretical and methodological framework for the book, while also arguing for its value in the first place.

