CosG: A Graph-Based Contrastive Learning Method for Fact Verification

Sensors ◽  
2021 ◽  
Vol 21 (10) ◽  
pp. 3471
Author(s):  
Chonghao Chen ◽  
Jianming Zheng ◽  
Honghui Chen

Fact verification aims to verify the authenticity of a given claim based on evidence retrieved from Wikipedia articles. Existing works mainly focus on enhancing the semantic representation of evidence, e.g., introducing a graph structure to model the relations among evidence. However, previous methods cannot well distinguish semantically similar claim-evidence pairs with distinct authenticity labels. In addition, the performance of graph-based models is limited by the over-smoothing problem of graph neural networks. To this end, we propose a graph-based contrastive learning method for fact verification, abbreviated as CosG, which introduces a label-supervised contrastive task to help the encoder learn discriminative representations for claim-evidence pairs with different labels, as well as an unsupervised graph-contrast task to alleviate the loss of unique node features during graph propagation. We conduct experiments on FEVER, a large benchmark dataset for fact verification. Experimental results show the superiority of our proposal over comparable baselines, especially for claims that require multiple pieces of evidence to verify. In addition, CosG presents better model robustness in low-resource scenarios.
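
A minimal sketch of the kind of label-supervised contrastive objective described above, written in PyTorch: pairs with the same authenticity label are pulled together, pairs with different labels pushed apart. The function name, tensor shapes, and temperature are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def supervised_contrastive_loss(embeddings, labels, temperature=0.1):
    """embeddings: (N, d) claim-evidence pair representations; labels: (N,) authenticity labels."""
    z = F.normalize(embeddings, dim=1)                      # work in cosine-similarity space
    sim = z @ z.t() / temperature                           # (N, N) similarity logits
    n = z.size(0)
    self_mask = torch.eye(n, dtype=torch.bool)
    pos_mask = labels.unsqueeze(0).eq(labels.unsqueeze(1)) & ~self_mask
    logits = sim.masked_fill(self_mask, float('-inf'))      # exclude self-comparisons
    log_prob = logits - torch.logsumexp(logits, dim=1, keepdim=True)
    log_prob = log_prob.masked_fill(self_mask, 0.0)         # avoid -inf * 0 on the diagonal
    pos_counts = pos_mask.sum(dim=1).clamp(min=1)
    loss = -(log_prob * pos_mask).sum(dim=1) / pos_counts   # average over same-label positives
    return loss.mean()

# Example: four claim-evidence pairs, labels SUPPORTED=1 / REFUTED=0
emb = torch.randn(4, 128)
lab = torch.tensor([1, 0, 1, 0])
print(supervised_contrastive_loss(emb, lab))
```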

2021 ◽  
pp. 1-16
Author(s):  
Hiromi Nakagawa ◽  
Yusuke Iwasawa ◽  
Yutaka Matsuo

Recent advancements in computer-assisted learning systems have led to increased research in knowledge tracing, wherein student performance is predicted over time. Student coursework can potentially be structured as a graph. Incorporating this graph-structured nature into a knowledge tracing model as a relational inductive bias can improve its performance; however, previous methods, such as deep knowledge tracing, did not consider such a latent graph structure. Inspired by the recent successes of graph neural networks (GNNs), we herein propose a GNN-based knowledge tracing method, i.e., graph-based knowledge tracing. Casting the knowledge structure as a graph enabled us to reformulate the knowledge tracing task as a time-series node-level classification problem in the GNN. As the knowledge graph structure is not explicitly provided in most cases, we propose various implementations of the graph structure. Empirical validations on two open datasets indicated that our method could potentially improve the prediction of student performance and produced more interpretable predictions than previous methods, without requiring any additional information.
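
A loose sketch, under simplifying assumptions, of what "knowledge tracing as time-series node-level classification" can look like: concepts are nodes, each propagation step exchanges messages along an assumed prerequisite graph, and a per-node head predicts correctness. The GRU-based update, layer sizes, and chain-shaped example graph are all illustrative, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class GraphKnowledgeTracer(nn.Module):
    def __init__(self, n_concepts, hidden):
        super().__init__()
        self.state = nn.Parameter(torch.zeros(n_concepts, hidden))  # initial concept states
        self.msg = nn.Linear(hidden, hidden)     # neighbour message transform
        self.upd = nn.GRUCell(hidden, hidden)    # node-wise state update
        self.out = nn.Linear(hidden, 1)          # correctness logit per concept node

    def step(self, h, adj):
        """One propagation step: h (n, d) node states, adj (n, n) normalized adjacency."""
        m = adj @ self.msg(h)                    # aggregate messages from neighbours
        return self.upd(m, h)                    # update every node's hidden state

    def forward(self, adj, n_steps=3):
        h = self.state
        for _ in range(n_steps):
            h = self.step(h, adj)
        return torch.sigmoid(self.out(h)).squeeze(-1)   # per-concept prediction

# Example: 5 concepts with a chain-shaped prerequisite graph
adj = torch.eye(5) + torch.diag(torch.ones(4), 1) + torch.diag(torch.ones(4), -1)
adj = adj / adj.sum(dim=1, keepdim=True)
model = GraphKnowledgeTracer(n_concepts=5, hidden=16)
print(model(adj))
```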


Data ◽  
2022 ◽  
Vol 7 (1) ◽  
pp. 10
Author(s):  
Davide Buffelli ◽  
Fabio Vandin

Graph Neural Networks (GNNs) rely on the graph structure to define an aggregation strategy where each node updates its representation by combining information from its neighbours. A known limitation of GNNs is that, as the number of layers increases, information gets smoothed and squashed and node embeddings become indistinguishable, negatively affecting performance. Therefore, practical GNN models employ few layers and only leverage the graph structure in terms of limited, small neighbourhoods around each node. Inevitably, practical GNNs do not capture information depending on the global structure of the graph. While there have been several works studying the limitations and expressivity of GNNs, the question of whether practical applications on graph-structured data require global structural knowledge remains unanswered. In this work, we empirically address this question by giving several GNN models access to global information and observing the impact it has on downstream performance. Our results show that global information can in fact provide significant benefits for common graph-related tasks. We further identify a novel regularization strategy that leads to an average accuracy improvement of more than 5% on all considered tasks.
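
One simple way to give a GNN access to global information, in the spirit of the study above, is to concatenate a graph-level readout to every node's features before classification; a minimal sketch follows. The specific readout (feature mean plus degree statistics) is an assumption for illustration, not the paper's method.

```python
import torch

def add_global_context(node_feats, adj):
    """node_feats: (n, d) node features; adj: (n, n) binary adjacency matrix."""
    degree = adj.sum(dim=1, keepdim=True)                    # per-node degree
    global_feat = torch.cat([node_feats.mean(dim=0),         # graph-level feature mean
                             degree.mean().unsqueeze(0),     # average degree
                             degree.max().unsqueeze(0)])     # maximum degree
    # broadcast the same global vector to every node and concatenate
    g = global_feat.unsqueeze(0).expand(node_feats.size(0), -1)
    return torch.cat([node_feats, g], dim=1)

x = torch.randn(6, 8)
a = (torch.rand(6, 6) > 0.5).float()
a = ((a + a.t()) > 0).float()                                # symmetrize
print(add_global_context(x, a).shape)                        # (6, 8 + 8 + 2)
```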


2021 ◽  
Vol 6 (1) ◽  
Author(s):  
Hussain Hussain ◽  
Tomislav Duricic ◽  
Elisabeth Lex ◽  
Denis Helic ◽  
Roman Kern

Graph Neural Networks (GNNs) are effective in many applications. Still, there is a limited understanding of the effect of common graph structures on the learning process of GNNs. To fill this gap, we study the impact of community structure and homophily on the performance of GNNs in semi-supervised node classification on graphs. Our methodology consists of systematically manipulating the structure of eight datasets and measuring the performance of GNNs on the original graphs, as well as the change in performance in the presence and the absence of community structure and/or homophily. Our results show the major impact of both homophily and communities on the classification accuracy of GNNs, and provide insights into their interplay. In particular, by analyzing community structure and its correlation with node labels, we are able to make informed predictions on the suitability of GNNs for classification on a given graph. Using an information-theoretic metric for community-label correlation, we devise a guideline for model selection based on graph structure. With our work, we provide insights into the abilities of GNNs and the impact of common network phenomena on their performance. Our work improves model selection for node classification in semi-supervised settings.
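
A minimal sketch of the community-label correlation idea: normalized mutual information between community assignments and node labels as an information-theoretic proxy for how informative the graph structure is. The decision threshold below is an illustrative assumption, not the paper's guideline value.

```python
from sklearn.metrics import normalized_mutual_info_score

def community_label_correlation(communities, labels):
    """communities, labels: per-node integer assignments of equal length."""
    return normalized_mutual_info_score(labels, communities)

communities = [0, 0, 0, 1, 1, 1, 2, 2]   # detected community per node
labels      = [0, 0, 1, 1, 1, 1, 2, 2]   # class label per node
nmi = community_label_correlation(communities, labels)
# High correlation suggests the graph structure is informative for a GNN.
verdict = "GNN likely helpful" if nmi > 0.5 else "structure weakly informative"
print(f"community-label NMI = {nmi:.2f} -> {verdict}")
```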


2021 ◽  
Vol 4 ◽  
Author(s):  
Paul Y. Wang ◽  
Sandalika Sapra ◽  
Vivek Kurien George ◽  
Gabriel A. Silva

Although a number of studies have explored deep learning in neuroscience, the application of these algorithms to neural systems on a microscopic scale, i.e., parameters relevant to lower scales of organization, remains relatively novel. Motivated by advances in whole-brain imaging, we examined the performance of deep learning models on microscopic neural dynamics and resulting emergent behaviors using calcium imaging data from the nematode C. elegans. As one of the only species for which neuron-level dynamics can be recorded, C. elegans serves as the ideal organism for designing and testing models bridging recent advances in deep learning and established concepts in neuroscience. We show that neural networks perform remarkably well on both neuron-level dynamics prediction and behavioral state classification. In addition, we compared the performance of structure-agnostic neural networks and graph neural networks to investigate whether graph structure can be exploited as a favourable inductive bias. To perform this experiment, we designed a graph neural network which explicitly infers relations between neurons from neural activity and leverages the inferred graph structure during computations. In our experiments, we found that graph neural networks generally outperformed structure-agnostic models and excelled in generalization on unseen organisms, implying a potential path to generalizable machine learning in neuroscience.
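
A minimal sketch of one way to obtain a relational graph from neural activity traces, here by thresholded pairwise correlation, which a GNN could then consume. The paper's model infers relations within the network itself, so this is only a stand-in to illustrate the idea; the correlation threshold is an assumption.

```python
import torch

def infer_activity_graph(traces, threshold=0.5):
    """traces: (n_neurons, n_timesteps) activity; returns (n, n) binary adjacency."""
    z = (traces - traces.mean(dim=1, keepdim=True)) / traces.std(dim=1, keepdim=True)
    corr = (z @ z.t()) / traces.size(1)            # approximate pairwise Pearson correlation
    adj = (corr.abs() > threshold).float()         # keep strongly correlated neuron pairs
    adj.fill_diagonal_(0)                          # no self-loops
    return adj

traces = torch.randn(10, 500)                      # 10 neurons, 500 time steps
print(infer_activity_graph(traces).sum())          # number of inferred directed edges
```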


2020 ◽  
Vol 2020 ◽  
pp. 1-13
Author(s):  
Pengpeng Zhou ◽  
Yao Luo ◽  
Nianwen Ning ◽  
Zhen Cao ◽  
Bingjing Jia ◽  
...  

In the era of the rapid development of today’s Internet, people often feel overwhelmed by vast official news streams or unofficial self-media tweets. To help people obtain the news topics they care about, there is a growing need for systems that can extract important events from this mass of data and logically organize the evolution of events into a story. Most existing methods treat event detection and evolution as two independent subtasks under an integrated pipeline setting. However, the interdependence between these two subtasks is often ignored, which leads to biased propagation. Besides, due to the limitations of the semantic representation of news documents, the performance of event detection and evolution is still limited. To tackle these problems, in this paper we propose a Joint Event Detection and Evolution (JEDE) model to detect events and discover event evolution relationships from news streams. Specifically, the proposed JEDE model is built upon the Siamese network: it first introduces a bidirectional GRU attention network to learn vector-based semantic representations of news documents, shared across the two subtask networks. Then, two continuous similarity metrics are learned using stacked neural networks to judge whether two news documents are related to the same event or two events are related to the same story. Furthermore, due to the lack of available datasets with ground truth, we construct a new dataset, named EDENS, which contains valid labels for events and stories. The experimental results on this newly created dataset demonstrate that, thanks to the shared representation and joint training, the proposed model consistently achieves significant improvements over the baseline methods.
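
A minimal sketch of the Siamese setup described above: a shared BiGRU encoder produces document vectors, and a stacked MLP scores whether two documents belong to the same event. Dimensions and mean pooling (in place of the attention mechanism) are simplifying assumptions relative to the paper.

```python
import torch
import torch.nn as nn

class SiameseEventMatcher(nn.Module):
    def __init__(self, vocab_size, emb_dim=64, hidden=64):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.gru = nn.GRU(emb_dim, hidden, batch_first=True, bidirectional=True)
        self.score = nn.Sequential(                  # stacked similarity head
            nn.Linear(4 * hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1), nn.Sigmoid())

    def encode(self, tokens):
        out, _ = self.gru(self.emb(tokens))          # (batch, seq, 2 * hidden)
        return out.mean(dim=1)                       # mean-pool to a document vector

    def forward(self, doc_a, doc_b):
        a, b = self.encode(doc_a), self.encode(doc_b)
        return self.score(torch.cat([a, b], dim=1)).squeeze(-1)   # same-event probability

model = SiameseEventMatcher(vocab_size=1000)
doc_a = torch.randint(0, 1000, (2, 20))              # two pairs of 20-token documents
doc_b = torch.randint(0, 1000, (2, 20))
print(model(doc_a, doc_b))
```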


Author(s):  
Xiaobin Tang ◽  
Jing Zhang ◽  
Bo Chen ◽  
Yang Yang ◽  
Hong Chen ◽  
...  

Knowledge graph alignment aims to link equivalent entities across different knowledge graphs. To utilize both the graph structures and the side information such as names, descriptions, and attributes, most works propagate the side information, especially names, through linked entities with graph neural networks. However, due to the heterogeneity of different knowledge graphs, the alignment accuracy suffers from aggregating different neighbors. This work presents an interaction model that leverages only the side information. Instead of aggregating neighbors, we compute the interactions between neighbors, which capture fine-grained matches between neighbors. Similarly, the interactions of attributes are also modeled. Experimental results show that our model significantly outperforms the best state-of-the-art methods by 1.9-9.7% in terms of HitRatio@1 on the DBP15K dataset.
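
A minimal sketch of neighbour interaction rather than neighbour aggregation: every neighbour of one entity is compared with every neighbour of the other via name embeddings, and the best matches are pooled into an alignment score. The max-then-mean pooling is an illustrative choice, not the paper's exact model.

```python
import torch
import torch.nn.functional as F

def neighbour_interaction_score(neigh_a, neigh_b):
    """neigh_a: (na, d), neigh_b: (nb, d) name embeddings of the two neighbourhoods."""
    a = F.normalize(neigh_a, dim=1)
    b = F.normalize(neigh_b, dim=1)
    sim = a @ b.t()                                  # (na, nb) fine-grained match matrix
    # each neighbour of A keeps its best counterpart in B, and vice versa
    return 0.5 * (sim.max(dim=1).values.mean() + sim.max(dim=0).values.mean())

neigh_a = torch.randn(5, 32)                         # 5 neighbours of entity A
neigh_b = torch.randn(7, 32)                         # 7 neighbours of entity B
print(neighbour_interaction_score(neigh_a, neigh_b))
```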


2021 ◽  
Vol 12 ◽  
Author(s):  
Xiao-Meng Zhang ◽  
Li Liang ◽  
Lin Liu ◽  
Ming-Jing Tang

Graph neural networks (GNNs), as a branch of deep learning in non-Euclidean space, perform particularly well in various tasks that process graph-structured data. With the rapid accumulation of biological network data, GNNs have also become an important tool in bioinformatics. In this research, a systematic survey of GNNs and their advances in bioinformatics is presented from multiple perspectives. We first introduce some commonly used GNN models and their basic principles. Then, three representative tasks are proposed based on the three levels of structural information that can be learned by GNNs: node classification, link prediction, and graph generation. Meanwhile, according to the specific applications for various omics data, we categorize and discuss the related studies in three aspects: disease prediction, drug discovery, and biomedical imaging. Based on the analysis, we provide an outlook on the shortcomings of current studies and point out their development prospects. Although GNNs have achieved excellent results in many biological tasks at present, they still face challenges in terms of low-quality data processing, methodology, and interpretability, and have a long road ahead. We believe that GNNs are potentially an excellent method for solving various biological problems in bioinformatics research.


2020 ◽  
Vol 34 (04) ◽  
pp. 3308-3315 ◽  
Author(s):  
Lei Cai ◽  
Shuiwang Ji

Deep models can be made scale-invariant when trained with multi-scale information. Images can easily be made multi-scale, given their grid-like structure. Extending this to generic graphs poses major challenges. For example, in link prediction tasks, inputs are represented as graphs consisting of nodes and edges. Currently, the state-of-the-art model for link prediction uses supervised heuristic learning, which learns graph structure features centered on two target nodes. It then trains graph neural networks to predict the existence of links based on these graph structure features. Thus, the performance of link prediction models highly depends on graph structure features. In this work, we propose a novel node aggregation method that can transform the enclosing subgraph into different scales and preserve the relationship between the two target nodes for link prediction. A theory for analyzing the information loss during the re-scaling procedure is also provided. Graphs at different scales provide scale-invariant information, which enables graph neural networks to learn invariant features and improve link prediction performance. Our experimental results on 14 datasets from different areas demonstrate that our proposed method outperforms the state-of-the-art methods by employing multi-scale graphs without additional parameters.
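
A minimal sketch of extracting the enclosing subgraph around two target nodes at different scales (hop radii), the raw material for the multi-scale link prediction described above. The union-of-ego-graphs construction and the choice of radii are illustrative assumptions.

```python
import networkx as nx

def enclosing_subgraph(G, u, v, radius):
    """Subgraph induced by nodes within `radius` hops of either target node u or v."""
    nodes = set(nx.ego_graph(G, u, radius=radius)) | set(nx.ego_graph(G, v, radius=radius))
    return G.subgraph(nodes).copy()

G = nx.karate_club_graph()
u, v = 0, 33
# Different radii give the same link context at different scales.
scales = {r: enclosing_subgraph(G, u, v, r) for r in (1, 2)}
for r, sg in scales.items():
    print(f"radius {r}: {sg.number_of_nodes()} nodes, {sg.number_of_edges()} edges")
```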

