A New Method of Creating Patent Technology-Effect Matrix Based on Semantic Role Labeling

Author(s): Yanqing He ◽ Ying Li ◽ Lingen Meng

2021 ◽ Vol 11 (20) ◽ pp. 9423
Author(s): Algirdas Laukaitis ◽ Egidijus Ostašius ◽ Darius Plikynas

This paper presents a new method for semantic parsing with upper ontologies, using FrameNet annotations and BERT-based distributed representations of sentence context. The proposed method leverages WordNet upper-ontology mapping and PropBank-style semantic role labeling, and it is designed for parsing long texts. Given a corpus labeled with PropBank, FrameNet, and WordNet annotations, a model is proposed that annotates the set of semantic roles with upper-ontology concept names. These annotations are used to identify the predicates and arguments that are relevant for virtual reality simulators in a 3D world with a built-in physics engine. It is shown that state-of-the-art results can be achieved for semantic role labeling with upper-ontology concepts. Additionally, a manually annotated corpus created with this new method is presented in this study and is suggested as a benchmark for future work on semantic parsing.
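Since the method builds on standard lexical resources, the flavor of its inputs can be shown with NLTK's FrameNet and WordNet interfaces. The snippet below is a minimal sketch under stated assumptions, not the authors' implementation: the helper names (frames_for_predicate, hypernym_chain) are hypothetical, and the WordNet hypernym path only gestures at where an upper-ontology mapping would attach.

# A minimal sketch (assumed helper names, not the authors' code) of the
# resource lookups the paper combines: FrameNet frames evoked by a
# predicate, and the WordNet hypernym chain along whose root end an
# upper-ontology concept would be attached.
# Requires: nltk.download('framenet_v17'); nltk.download('wordnet')
from nltk.corpus import framenet as fn
from nltk.corpus import wordnet as wn

def frames_for_predicate(lemma):
    """FrameNet frame names whose lexical units match the lemma."""
    return sorted({f.name for f in fn.frames_by_lemma(r'(?i)^%s\.' % lemma)})

def hypernym_chain(lemma, pos=wn.VERB):
    """Root-to-sense hypernym path of the first (most frequent) sense;
    the root end is where an upper-ontology mapping would hook in."""
    synsets = wn.synsets(lemma, pos=pos)
    if not synsets:
        return []
    return [s.name() for s in synsets[0].hypernym_paths()[0]]

print(frames_for_predicate('move'))  # e.g. ['Cause_motion', 'Motion', ...]
print(hypernym_chain('move'))        # e.g. ['travel.v.01'] for the verb sense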


2011 ◽ Vol 22 (2) ◽ pp. 222-232
Author(s): Shi-Qi LI ◽ Tie-Jun ZHAO ◽ Han-Jing LI ◽ Peng-Yuan LIU ◽ Shui LIU

2011 ◽ Vol 47 (3) ◽ pp. 349-362
Author(s): GuoDong Zhou ◽ Junhui Li ◽ Jianxi Fan ◽ Qiaoming Zhu

2021 ◽ pp. 1-48
Author(s): Zuchao Li ◽ Hai Zhao ◽ Shexia He ◽ Jiaxun Cai

Abstract
Semantic role labeling (SRL) is dedicated to recognizing the semantic predicate-argument structure of a sentence. Previous studies based on traditional models have shown that syntactic information can contribute substantially to SRL performance; however, the necessity of syntactic information has been challenged by several recent neural SRL studies that demonstrate impressive performance without syntactic backbones, suggesting that syntactic information matters much less for neural SRL, especially when paired with deep neural networks and large-scale pre-trained language models. Despite this notion, the neural SRL field still lacks a systematic and complete investigation of the relevance of syntactic information to SRL, for both dependency- and span-based formalisms and in both monolingual and multilingual settings. This paper quantifies the importance of syntactic information for neural SRL in the deep learning framework. We introduce three typical SRL frameworks as baselines (sequence-based, tree-based, and graph-based), each paired with two categories of methods for exploiting syntactic information: syntax pruning-based and syntax feature-based. Experiments are conducted on the CoNLL-2005, 2009, and 2012 benchmarks for all available languages, and the results show that neural SRL models can still benefit from syntactic information under certain conditions. Furthermore, we quantify the significance of syntax for neural SRL models and provide a thorough empirical survey using existing models.
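As a concrete illustration of the syntax pruning-based category, the sketch below implements the classic argument-pruning heuristic in its dependency-tree form (the dependency analog of Xue and Palmer's constituent pruning): walk from the predicate to the root, keeping each visited node's direct dependents as argument candidates. This is a generic rendering of the technique under an assumed input encoding, not code from the paper.

# A minimal sketch of a syntax pruning-based heuristic of the kind the
# survey covers. Candidate arguments for a predicate are its direct
# dependents plus the dependents of each ancestor collected while walking
# from the predicate up to the root. The input format (0-based token
# indices, head == -1 for the root) is an assumption of this sketch.

def prune_argument_candidates(heads, predicate):
    """Return token indices kept as argument candidates for `predicate`."""
    # children[i] = direct dependents of token i
    children = [[] for _ in heads]
    for tok, head in enumerate(heads):
        if head >= 0:
            children[head].append(tok)

    candidates, node = set(), predicate
    while node != -1:                      # walk predicate -> root
        candidates.update(children[node])  # keep this node's dependents
        candidates.add(node)               # and the node itself
        node = heads[node]
    candidates.discard(predicate)          # a predicate is not its own argument
    return sorted(candidates)

# Toy example: "She gave him a book", with "gave" as root.
# heads: she->gave, gave->ROOT, him->gave, a->book, book->gave
heads = [1, -1, 1, 4, 1]
print(prune_argument_candidates(heads, predicate=1))  # [0, 2, 4]

On the toy tree, the determiner "a" is pruned away while the subject, indirect object, and direct object survive, which is exactly the candidate-reduction effect that pruning-based methods rely on.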

