Unsupervised Coreference Resolution Using a Graph Labeling Approach

Author(s):  
Nafise Sadat Moosavi ◽  
GholamReza GhassemSani
2015 ◽  
Vol 46 (4) ◽  
pp. 493-511 ◽  
Author(s):  
Gianpiero Cabodi ◽  
Paolo Camurati ◽  
Stefano Quer

2014 ◽  
Author(s):  
Mariana S. C. Almeida ◽  
Miguel B. Almeida ◽  
André F. T. Martins

2020 ◽  
Vol 9 (11) ◽  
pp. 9311-9317
Author(s):  
K. Sivaraman ◽  
R.V. Prasad

Equitable edge coloring is a kind of graph labeling with the following restrictions: no two adjacent edges receive the same label (color), and the number of edges in any two color classes differs by at most one. In this work, we present the fuzzy equitable edge coloring of some wheel-related graphs.
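
As a concrete (non-fuzzy) illustration of those two restrictions, the sketch below checks whether a hand-written edge coloring of the wheel W_4 is an equitable edge coloring. It is a minimal Python example written for this summary; the graph construction, function names, and the sample coloring are illustrative assumptions, not material from the paper, and the fuzzy variant is not modeled.

```python
# Minimal sketch (illustrative only) of the two equitable edge coloring conditions,
# checked on the wheel graph W_4 built by hand.
from collections import Counter
from itertools import combinations

def wheel_edges(n):
    """Edges of the wheel W_n: a cycle on vertices 1..n plus a hub vertex 0."""
    cycle = [(i, i % n + 1) for i in range(1, n + 1)]
    spokes = [(0, i) for i in range(1, n + 1)]
    return cycle + spokes

def is_equitable_edge_coloring(edges, coloring):
    """coloring maps each edge tuple to a label (color).

    Condition 1: adjacent edges (sharing a vertex) receive different colors.
    Condition 2: the sizes of any two color classes differ by at most one.
    """
    for e, f in combinations(edges, 2):
        if set(e) & set(f) and coloring[e] == coloring[f]:
            return False
    sizes = Counter(coloring[e] for e in edges).values()
    return max(sizes) - min(sizes) <= 1

edges = wheel_edges(4)  # W_4: 4 rim edges + 4 spokes, hub degree 4
# a hypothetical proper coloring with 4 colors, each class of size 2
coloring = {(0, 1): 1, (0, 2): 2, (0, 3): 3, (0, 4): 4,
            (1, 2): 3, (2, 3): 4, (3, 4): 1, (4, 1): 2}
print(is_equitable_edge_coloring(edges, coloring))  # True
```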


Author(s):  
Abhinav Kumar ◽  
Jillian Aurisano ◽  
Barbara Di Eugenio ◽  
Andrew Johnson ◽  
Abeer Alsaiari ◽  
...  

2021 ◽  
Vol 2 (3) ◽  
pp. 100677
Author(s):  
Emile Alghoul ◽  
Jihane Basbous ◽  
Angelos Constantinou

Author(s):  
Yufei Li ◽  
Xiaoyong Ma ◽  
Xiangyu Zhou ◽  
Pengzhen Cheng ◽  
Kai He ◽  
...  

Abstract Motivation Bio-entity coreference resolution focuses on identifying coreferential links in biomedical texts, which is crucial for completing bio-events' attributes and interconnecting events into bio-networks. Previously, deep neural network-based general-domain systems, among the most powerful tools, have been applied to the biomedical domain with domain-specific information integration. However, such methods may introduce considerable noise because they combine context with complex domain-specific information insufficiently. Results In this paper, we explore how to leverage an external knowledge base in a fine-grained way to better resolve coreference by introducing a knowledge-enhanced Long Short-Term Memory (LSTM) network, which encodes knowledge information inside the LSTM more flexibly. Moreover, we propose a knowledge attention module to extract informative knowledge effectively based on context. Experimental results on the BioNLP and CRAFT datasets show state-of-the-art performance, with gains of 7.5 F1 on BioNLP and 10.6 F1 on CRAFT. Additional experiments also demonstrate superior performance on cross-sentence coreferences. Supplementary information Supplementary data are available at Bioinformatics online.
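
The authors' implementation is not reproduced here; the sketch below only illustrates, under stated assumptions, the general shape of a knowledge attention module as described in the abstract: candidate knowledge-base embeddings are scored against a mention's contextual LSTM state, combined by softmax attention, and concatenated back into the representation. The class name, dimensions, and tensor layout are hypothetical.

```python
# Hypothetical sketch (not the authors' code) of a knowledge attention module:
# score candidate KB embeddings against a mention's contextual state, then mix
# them into a single knowledge vector appended to that state.
import torch
import torch.nn as nn

class KnowledgeAttention(nn.Module):
    def __init__(self, hidden_dim, kb_dim):
        super().__init__()
        self.proj = nn.Linear(kb_dim, hidden_dim)  # map KB entries into context space

    def forward(self, context, kb_entries):
        # context:    (batch, hidden_dim)            e.g. the LSTM state at a mention
        # kb_entries: (batch, n_candidates, kb_dim)  candidate knowledge embeddings
        keys = self.proj(kb_entries)                     # (batch, n, hidden)
        scores = torch.bmm(keys, context.unsqueeze(-1))  # (batch, n, 1)
        weights = torch.softmax(scores, dim=1)           # attention over candidates
        knowledge = (weights * keys).sum(dim=1)          # (batch, hidden)
        return torch.cat([context, knowledge], dim=-1)   # knowledge-enhanced state

# toy usage with random tensors
attn = KnowledgeAttention(hidden_dim=128, kb_dim=50)
ctx = torch.randn(2, 128)
kb = torch.randn(2, 5, 50)
print(attn(ctx, kb).shape)  # torch.Size([2, 256])
```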

