Deep Attention Diffusion Graph Neural Networks for Text Classification

Author(s):  
Yonghao Liu ◽  
Renchu Guan ◽  
Fausto Giunchiglia ◽  
Yanchun Liang ◽  
Xiaoyue Feng
2021 ◽  
Author(s):  
Ge Lan ◽  
Ye Li ◽  
Mengting Hu ◽  
Yufei Sun ◽  
Yuzhi Zhang

2020 ◽  
Vol 34 (05) ◽  
pp. 8544-8551 ◽  
Author(s):  
Giannis Nikolentzos ◽  
Antoine Tixier ◽  
Michalis Vazirgiannis

Graph neural networks have recently emerged as a very effective framework for processing graph-structured data. These models have achieved state-of-the-art performance in many tasks. Most graph neural networks can be described in terms of message passing, vertex update, and readout functions. In this paper, we represent documents as word co-occurrence networks and propose an application of the message passing framework to NLP, the Message Passing Attention network for Document understanding (MPAD). We also propose several hierarchical variants of MPAD. Experiments conducted on 10 standard text classification datasets show that our architectures are competitive with the state-of-the-art. Ablation studies reveal further insights about the impact of the different components on performance. Code is publicly available at: https://github.com/giannisnik/mpad.
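
As a rough illustration of the message passing framework described above, the following sketch builds a word co-occurrence network from a toy document and performs one message-passing step followed by a mean readout. The window size, feature dimension, and mean aggregator are illustrative assumptions, not MPAD's actual components; the authors' implementation is at the linked repository.

# Illustrative sketch only: a word co-occurrence graph plus one
# message-passing step, in the spirit of MPAD. Window size, feature
# dimension, and the mean aggregator are assumptions.
import numpy as np

def cooccurrence_graph(tokens, window=2):
    """Build a symmetric adjacency matrix over the document's vocabulary,
    linking words that co-occur within a sliding window."""
    vocab = sorted(set(tokens))
    index = {w: i for i, w in enumerate(vocab)}
    adj = np.zeros((len(vocab), len(vocab)))
    for i, w in enumerate(tokens):
        for j in range(i + 1, min(i + window + 1, len(tokens))):
            u, v = index[w], index[tokens[j]]
            if u != v:
                adj[u, v] = adj[v, u] = 1.0
    return vocab, adj

def message_pass(features, adj):
    """One round of message passing: each node averages its neighbours'
    features and mixes the result with its own representation."""
    deg = adj.sum(axis=1, keepdims=True).clip(min=1.0)
    messages = adj @ features / deg      # mean aggregation over neighbours
    return 0.5 * (features + messages)   # simple vertex update

tokens = "graph neural networks process graph structured data".split()
vocab, adj = cooccurrence_graph(tokens)
feats = np.random.default_rng(0).normal(size=(len(vocab), 8))
updated = message_pass(feats, adj)
doc_vector = updated.mean(axis=0)        # readout: average node states
print(doc_vector.shape)

The readout at the end corresponds to the third component of the message passing / vertex update / readout decomposition the abstract mentions: it collapses node states into a single document representation usable for classification.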


Author(s):  
Yuan Xin ◽  
Linli Xu ◽  
Junliang Guo ◽  
Jiquan Li ◽  
Xin Sheng ◽  
...  

Author(s):  
Pengfei Liu ◽  
Shuaichen Chang ◽  
Xuanjing Huang ◽  
Jian Tang ◽  
Jackie Chi Kit Cheung

Recently, a large number of neural mechanisms and models have been proposed for sequence learning, of which self-attention, as exemplified by the Transformer model, and graph neural networks (GNNs) have attracted much attention. In this paper, we propose an approach that combines and draws on the complementary strengths of these two methods. Specifically, we propose contextualized non-local neural networks (CN3), which can both dynamically construct a task-specific structure of a sentence and leverage rich local dependencies within a particular neighbourhood. Experimental results on ten NLP tasks in text classification, semantic matching, and sequence labelling show that our proposed model outperforms competitive baselines and discovers task-specific dependency structures, thus providing better interpretability to users.
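
To make the combination concrete, here is a hedged numpy sketch of the core idea: self-attention induces a dynamic, sentence-specific graph, a banded window supplies the local dependencies, and the two are fused before aggregation. The mixing weight alpha and the window size are assumptions for illustration, not the paper's actual parameterization.

# Hedged sketch of the idea behind CN3: self-attention builds a dynamic,
# sentence-specific graph, while a banded mask keeps rich local
# dependencies. `alpha` and `window` are illustrative assumptions.
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def contextualized_nonlocal(h, window=2, alpha=0.5):
    """h: (n, d) token representations. Returns updated representations
    fusing a non-local (attention-built) graph with a local window graph."""
    n, d = h.shape
    scores = h @ h.T / np.sqrt(d)           # dynamic, task-specific structure
    nonlocal_adj = softmax(scores, axis=-1)
    local = np.zeros((n, n))
    for i in range(n):                      # neighbours within +/- window
        local[i, max(0, i - window):min(n, i + window + 1)] = 1.0
    local_adj = local / local.sum(axis=-1, keepdims=True)
    adj = alpha * nonlocal_adj + (1 - alpha) * local_adj
    return adj @ h                          # aggregate over the fused graph

h = np.random.default_rng(1).normal(size=(6, 16))
print(contextualized_nonlocal(h).shape)    # (6, 16)

The fused adjacency matrix is also what makes such a model inspectable: its rows can be read off as the dependency structure the network discovered for a given sentence.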


Author(s):  
Pengfei Liu ◽  
Jie Fu ◽  
Yue Dong ◽  
Xipeng Qiu ◽  
Jackie Chi Kit Cheung

We present two architectures for multi-task learning with neural sequence models. Our approach allows the relationships between different tasks to be learned dynamically, rather than using an ad-hoc pre-defined structure as in previous work. We adopt the idea from message-passing graph neural networks, and propose a general graph multi-task learning framework in which different tasks can communicate with each other in an effective and interpretable way. We conduct extensive experiments in text classification and sequence labelling to evaluate our approach on multi-task learning and transfer learning. The empirical results show that our models not only outperform competitive baselines, but also learn interpretable and transferable patterns across tasks.
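
A minimal sketch of the task-communication idea follows: each task holds a state vector, and a softmax over learnable edge logits determines how much each task reads from every other task, so inter-task relationships are learned rather than pre-defined. The residual update and all names here are illustrative assumptions, not the paper's architecture.

# Minimal sketch of graph multi-task learning: tasks are nodes that
# exchange messages through learned edge weights, so task relationships
# emerge instead of being fixed a priori. Names and the single residual
# update are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(2)
n_tasks, dim = 3, 8
task_states = rng.normal(size=(n_tasks, dim))      # one state per task
edge_logits = rng.normal(size=(n_tasks, n_tasks))  # learnable in practice

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def task_message_pass(states, logits):
    """Tasks communicate: edge weights (a softmax over logits) decide how
    much each task reads from every other task's state."""
    weights = softmax(logits, axis=-1)   # interpretable task-task graph
    messages = weights @ states          # weighted sum over tasks
    return states + messages             # residual update

print(task_message_pass(task_states, edge_logits).shape)  # (3, 8)

The learned edge weights double as the "interpretable and transferable patterns" the abstract refers to: they can be inspected to see which tasks inform each other, or reused when transferring to a new task.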


Author(s):  
Yufeng Zhang ◽  
Xueli Yu ◽  
Zeyu Cui ◽  
Shu Wu ◽  
Zhongzhen Wen ◽  
...  

2020 ◽  
Author(s):  
Artur Schweidtmann ◽  
Jan Rittig ◽  
Andrea König ◽  
Martin Grohe ◽  
Alexander Mitsos ◽  
...  

Prediction of combustion-related properties of (oxygenated) hydrocarbons is an important and challenging task for which quantitative structure-property relationship (QSPR) models are frequently employed. Recently, a machine learning method, graph neural networks (GNNs), has shown promising results for the prediction of structure-property relationships. GNNs utilize a graph representation of molecules, where atoms correspond to nodes and bonds to edges containing information about the molecular structure. More specifically, GNNs learn physico-chemical properties as a function of the molecular graph in a supervised learning setup using a backpropagation algorithm. This end-to-end learning approach eliminates the need to select molecular descriptors or structural groups, as it learns optimal fingerprints through graph convolutions and maps the fingerprints to the physico-chemical properties by deep learning. We develop GNN models for predicting three fuel ignition quality indicators, i.e., the derived cetane number (DCN), the research octane number (RON), and the motor octane number (MON), of oxygenated and non-oxygenated hydrocarbons. In light of limited experimental data, on the order of hundreds of measurements, we propose a combination of multi-task learning, transfer learning, and ensemble learning. The results show competitive performance of the proposed GNN approach compared to state-of-the-art QSPR models, making it a promising field for future research. The prediction tool is available via a web front-end at www.avt.rwth-aachen.de/gnn.
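
The molecular-graph setup the abstract describes can be sketched as follows: atoms become nodes, bonds become edges, graph convolutions build a molecular fingerprint, and a readout maps the fingerprint to a scalar property such as DCN, RON, or MON. The toy propane graph, feature sizes, and linear readout are assumptions for illustration; the authors' actual model is behind the web front-end above.

# Illustrative sketch of a GNN for molecular property prediction:
# atoms = nodes, bonds = edges, graph convolutions learn a fingerprint,
# and a readout maps it to one scalar property (e.g. DCN, RON, MON).
# The propane graph, feature sizes, and linear readout are assumptions.
import numpy as np

rng = np.random.default_rng(3)

# Propane (C3H8), heavy atoms only: C-C-C
atom_features = rng.normal(size=(3, 8))    # encoded atom features
bonds = [(0, 1), (1, 2)]                   # edges of the molecular graph
adj = np.zeros((3, 3))
for u, v in bonds:
    adj[u, v] = adj[v, u] = 1.0

W_msg = rng.normal(size=(8, 8)) * 0.1      # graph-convolution weights
w_out = rng.normal(size=8) * 0.1           # readout weights

def predict_property(feats, adj, rounds=2):
    """Graph convolutions build a molecular fingerprint; sum-pooling and
    a linear readout map it to one physico-chemical property."""
    h = feats
    for _ in range(rounds):
        h = np.tanh((adj @ h) @ W_msg + h)  # aggregate bonded neighbours
    fingerprint = h.sum(axis=0)             # permutation-invariant readout
    return float(fingerprint @ w_out)

print(predict_property(atom_features, adj))

In a multi-task variant of this sketch, the fingerprint would be shared and three separate readout heads would predict DCN, RON, and MON jointly, which is one way the paper's combination of multi-task and ensemble learning can help when each property has only hundreds of labelled molecules.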

