Semantic role labeling for knowledge graph extraction from text

Author(s):  
Mehwish Alam ◽  
Aldo Gangemi ◽  
Valentina Presutti ◽  
Diego Reforgiato Recupero

Abstract
This paper introduces TakeFive, a new semantic role labeling method that transforms a text into a frame-oriented knowledge graph. It performs dependency parsing, identifies the words that evoke lexical frames, locates the roles and fillers for each frame, runs coercion techniques, and formalizes the results as a knowledge graph. This formal representation complies with the frame semantics used in Framester, a factual-linguistic linked data resource. We tested our method on the WSJ section of the Penn Treebank annotated with VerbNet and PropBank labels, and on the Brown corpus. The evaluation was performed according to the CoNLL Shared Task on Joint Parsing of Syntactic and Semantic Dependencies. The obtained precision, recall, and F1 values indicate that TakeFive is competitive with other existing methods such as SEMAFOR, Pikes, PathLSTM, and FRED. Finally, we discuss how to combine TakeFive and FRED, obtaining higher values of precision, recall, and F1.
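To make the described pipeline concrete, below is a minimal, self-contained sketch of the steps the abstract lists: given a dependency parse, find the words that evoke frames, map their dependents to roles, and emit knowledge-graph triples. The toy parse, the FRAME_LEXICON mapping, and the extract_triples helper are hypothetical illustrations only, not the authors' TakeFive implementation (which additionally performs coercion and aligns the output with Framester).

    # Toy dependency parse of "John sold the car": (index, word, lemma, head, relation).
    # In a real system this would come from a dependency parser.
    PARSE = [
        (0, "John", "john", 1, "nsubj"),
        (1, "sold", "sell", -1, "root"),
        (2, "the",  "the",  3, "det"),
        (3, "car",  "car",  1, "dobj"),
    ]

    # Hypothetical frame lexicon: lemma -> (frame, {dependency relation: role}).
    FRAME_LEXICON = {
        "sell": ("Commerce_sell", {"nsubj": "Seller", "dobj": "Goods"}),
    }

    def extract_triples(parse):
        """Yield (frame, role, filler) triples for every frame-evoking word."""
        triples = []
        for idx, word, lemma, head, rel in parse:
            if lemma not in FRAME_LEXICON:
                continue
            frame, role_map = FRAME_LEXICON[lemma]
            # Locate fillers: dependents of the evoking word whose
            # dependency relation maps to a frame role.
            for d_idx, d_word, d_lemma, d_head, d_rel in parse:
                if d_head == idx and d_rel in role_map:
                    triples.append((frame, role_map[d_rel], d_word))
        return triples

    print(extract_triples(PARSE))
    # [('Commerce_sell', 'Seller', 'John'), ('Commerce_sell', 'Goods', 'car')]

The resulting triples are the kind of frame-oriented statements that can then be serialized as a knowledge graph.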

2008 ◽  
Vol 34 (2) ◽  
pp. 161-191 ◽  
Author(s):  
Kristina Toutanova ◽  
Aria Haghighi ◽  
Christopher D. Manning

We present a model for semantic role labeling that effectively captures the linguistic intuition that a semantic argument frame is a joint structure, with strong dependencies among the arguments. We show how to incorporate these strong dependencies in a statistical joint model with a rich set of features over multiple argument phrases. The proposed model substantially outperforms a similar state-of-the-art local model that does not include dependencies among different arguments. We evaluate the gains from incorporating this joint information on the PropBank corpus, both when using correct syntactic parse trees as input and when using automatically derived parse trees. The gains amount to 24.1% error reduction on all arguments and 36.8% on core arguments for gold-standard parse trees on PropBank. For automatic parse trees, the error reductions are 8.3% and 10.3% on all and core arguments, respectively. We also present results on the CoNLL 2005 shared task data set. Additionally, we explore the use of multiple syntactic analyses to cope with parser noise and uncertainty.
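As a rough illustration of the local-versus-joint distinction the abstract draws, the sketch below scores each candidate argument independently and then adds one joint feature over the whole frame: a penalty for assigning the same core role twice. The phrases, scores, and penalty value are invented toy data, not the paper's feature set or model.

    from itertools import product

    # Hypothetical local scores: candidate phrase -> {role label: score}.
    LOCAL_SCORES = {
        "the boy":  {"ARG0": 0.60, "ARG1": 0.40},
        "the ball": {"ARG0": 0.55, "ARG1": 0.45},
    }

    def joint_score(assignment):
        """Sum the local scores, then apply a joint feature over the whole
        argument frame: a penalty when a core role is assigned twice."""
        score = sum(LOCAL_SCORES[phrase][role] for phrase, role in assignment)
        roles = [role for _, role in assignment]
        if len(roles) != len(set(roles)):  # duplicate core role
            score -= 1.0
        return score

    phrases = list(LOCAL_SCORES)
    candidates = [list(zip(phrases, labels))
                  for labels in product(["ARG0", "ARG1"], repeat=len(phrases))]

    # A purely local decision takes each argmax independently (ARG0 for both
    # phrases here); the joint feature flips "the ball" to ARG1, yielding a
    # consistent frame.
    print(max(candidates, key=joint_score))
    # [('the boy', 'ARG0'), ('the ball', 'ARG1')]

This mirrors, in miniature, why modeling dependencies among arguments can correct errors that independent per-argument classifiers make.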


2011 ◽  
Vol 22 (2) ◽  
pp. 222-232 ◽  
Author(s):  
Shi-Qi LI ◽  
Tie-Jun ZHAO ◽  
Han-Jing LI ◽  
Peng-Yuan LIU ◽  
Shui LIU

2011 ◽  
Vol 47 (3) ◽  
pp. 349-362 ◽  
Author(s):  
GuoDong Zhou ◽  
Junhui Li ◽  
Jianxi Fan ◽  
Qiaoming Zhu
