Message-Passing Neural Networks Learn Little’s Law

2019, Vol. 23(2), pp. 274-277
Author(s): Krzysztof Rusek, Piotr Cholda

2021, Vol. 174, Article 114711
Author(s): Tien Huu Do, Duc Minh Nguyen, Giannis Bekoulis, Adrian Munteanu, Nikos Deligiannis

2011, Vol. 59(3), p. 535
Author(s): David Simchi-Levi, Michael A. Trick

ICANN ’93, 1993, pp. 1054-1057
Author(s): B. Kreimeier, M. Schöne, R. Steiner, R. Eckmiller

2015, Vol. 129(13), pp. 12-15
Author(s): Alap Kango, Shivdatta Patil, Tejas Ghanekar, Tejas Dhawale, S.S. Sambare

Author(s): George Dasoulas, Ludovic Dos Santos, Kevin Scaman, Aladin Virmaux

In this paper, we show that a simple coloring scheme can improve, both theoretically and empirically, the expressive power of Message Passing Neural Networks (MPNNs). More specifically, we introduce a graph neural network called Colored Local Iterative Procedure (CLIP) that uses colors to disambiguate identical node attributes, and we show that this representation is a universal approximator of continuous functions on graphs with node attributes. Our method relies on separability, a key topological characteristic that allows well-chosen neural networks to be extended into universal representations. Finally, we show experimentally that CLIP captures structural characteristics that traditional MPNNs fail to distinguish, while achieving state-of-the-art results on benchmark graph classification datasets.
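To make the mechanism concrete, below is a minimal sketch of the coloring idea in plain NumPy. The helper names (color_node_attributes, mpnn_layer) are ours for illustration and do not come from the paper's implementation: nodes whose attribute vectors collide receive distinct one-hot colors, which are concatenated to the attributes before an otherwise ordinary message-passing layer, so the layer can tell the duplicates apart.

# Minimal sketch of attribute coloring before message passing.
# Hypothetical helper names; not the authors' CLIP code.
import numpy as np

def color_node_attributes(x, k):
    """Append a k-dimensional one-hot color to each node's attributes.

    Within each group of nodes sharing the same attribute vector, colors
    are assigned at random without repetition (wrapping around when a
    group has more than k members, a simplification of the paper's scheme).
    """
    n = x.shape[0]
    colors = np.zeros((n, k))
    groups = {}
    for i in range(n):
        groups.setdefault(x[i].tobytes(), []).append(i)  # group duplicate attributes
    for members in groups.values():
        for slot, i in zip(np.random.permutation(len(members)), members):
            colors[i, slot % k] = 1.0  # duplicates receive different colors
    return np.concatenate([x, colors], axis=1)

def mpnn_layer(adj, h, w):
    """One generic message-passing step: sum neighbor features, then transform."""
    return np.tanh((adj @ h) @ w)

# Nodes 0 and 1 have identical attributes; after coloring, a message-passing
# layer can produce different representations for them.
x = np.array([[1.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
adj = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], dtype=float)
h = color_node_attributes(x, k=2)
w = np.random.randn(h.shape[1], 4)
print(mpnn_layer(adj, h, w))

Because the colors are assigned at random, a single coloring makes the output depend on the particular assignment; the paper addresses this by aggregating the network's output over several independent random colorings, so the learned representation does not hinge on any one of them.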

