A method based on rules and machine learning for logic form identification in Spanish

2015 · Vol 23 (1) · pp. 131-153
Author(s): F. MARTÍNEZ-SANTIAGO, M. C. DÍAZ-GALIANO, M. Á. GARCÍA-CUMBRERAS, A. MONTEJO-RÁEZ

Abstract: Logic Forms (LF) are simple first-order logic knowledge representations of natural language sentences. Each noun, verb, adjective, adverb, pronoun, preposition, and conjunction generates a predicate. LF systems usually identify syntactic functions by means of syntactic rules, but this approach is difficult to apply to languages with highly flexible and ambiguous syntax, such as Spanish. In this study, we present a mixed method for deriving the LF of Spanish sentences that combines hard-coded rules with a classifier inspired by semantic role labeling. The main novelty of our proposal is the way the classifier is applied to generate the verb predicates, while rules are used to translate the remaining predicates, which are more straightforward and less ambiguous than the verbal ones. The proposed mixed system uses a supervised classifier to integrate syntactic and semantic information, helping to overcome the inherent ambiguity of Spanish syntax; this task is accomplished in a manner similar to semantic role labeling. We train the classifier on features extracted from the AnCora-ES corpus. A rule-based system obtains the LF for the rest of the sentence; its rules are derived by exploring the syntactic tree of the sentence and encoding the syntactic production rules. The LF algorithm has been evaluated using shallow parsing on a set of straightforward Spanish sentences. The verb argument labeling task achieves 84% precision, and the proposed mixed LF identification (LFi) method outperforms a purely rule-based system by 11%.
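To make the LF notation concrete, the sketch below derives a toy logic form from a tagged sentence in the spirit of the mixed method the abstract describes: rules emit the noun predicates, while the verb's argument slots are filled from role labels such as a trained classifier would assign. This is a minimal illustration, not the authors' implementation; the Token structure, the A0/A1 labels, and the example sentence are all assumptions.

```python
# Minimal sketch of Logic Form (LF) generation in the spirit of the mixed
# method above. NOT the authors' system: the Token structure, the A0/A1
# role labels, and the example sentence are illustrative assumptions.
# Notation follows the usual LF convention word:pos(arguments), where nouns
# receive entity variables x_i and verbs an event variable e_i.

from dataclasses import dataclass

@dataclass
class Token:
    lemma: str
    pos: str        # simplified tags: 'n' (noun), 'v' (verb), ...
    role: str = ""  # verb-argument label ('A0', 'A1', ...) as a classifier would emit

def logic_form(tokens: list[Token]) -> str:
    """Translate a tagged sentence into a flat sequence of LF predicates."""
    entity_var: dict[int, str] = {}  # token index -> entity variable
    preds: list[str] = []
    # Rule-based pass: one predicate per noun (other categories are analogous).
    for i, tok in enumerate(tokens):
        if tok.pos == "n":
            entity_var[i] = f"x{len(entity_var) + 1}"
            preds.append(f"{tok.lemma}:n({entity_var[i]})")
    # Verb pass: argument slots come from the role labels (the ML part).
    for tok in tokens:
        if tok.pos == "v":
            slots = []
            for role in ("A0", "A1"):
                var = next((entity_var[j] for j, t in enumerate(tokens)
                            if t.role == role and j in entity_var), "_")
                slots.append(var)
            preds.append(f"{tok.lemma}:v(e1, {', '.join(slots)})")
    return " ".join(preds)

# "Juan ama a María" -> Juan:n(x1) María:n(x2) amar:v(e1, x1, x2)
sentence = [Token("Juan", "n", "A0"), Token("amar", "v"), Token("María", "n", "A1")]
print(logic_form(sentence))
```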

2011 · Vol 22 (2) · pp. 222-232
Author(s): Shi-Qi LI, Tie-Jun ZHAO, Han-Jing LI, Peng-Yuan LIU, Shui LIU

2011 · Vol 47 (3) · pp. 349-362
Author(s): GuoDong Zhou, Junhui Li, Jianxi Fan, Qiaoming Zhu

2021 · pp. 1-48
Author(s): Zuchao Li, Hai Zhao, Shexia He, Jiaxun Cai

Abstract: Semantic role labeling (SRL) is dedicated to recognizing the semantic predicate-argument structure of a sentence. Previous studies of traditional models have shown that syntactic information can make remarkable contributions to SRL performance; however, its necessity has been challenged by several recent neural SRL studies that demonstrate impressive performance without syntactic backbones, suggesting that syntax becomes much less important for neural SRL, especially when paired with deep neural networks and large-scale pre-trained language models. Despite this notion, the neural SRL field still lacks a systematic and full investigation of the relevance of syntactic information, for both dependency-based and span-based SRL and in both monolingual and multilingual settings. This paper intends to quantify the importance of syntactic information for neural SRL in the deep learning framework. We introduce three typical SRL frameworks (baselines), sequence-based, tree-based, and graph-based, accompanied by two categories of methods for exploiting syntactic information: syntax pruning-based and syntax feature-based. Experiments are conducted on the CoNLL-2005, 2009, and 2012 benchmarks for all available languages, and results show that neural SRL models can still benefit from syntactic information under certain conditions. Furthermore, we show the quantitative significance of syntax to neural SRL models together with a thorough empirical survey using existing models.
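The sketch below illustrates syntax pruning, one of the two ways of injecting syntax that the abstract contrasts with syntax features. The k-order pruning rule shown (keep only dependents of the predicate and of its first k tree ancestors as argument candidates) is a common choice in the SRL literature rather than necessarily the paper's exact rule; the toy dependency tree and the value of k are illustrative assumptions.

```python
# Minimal sketch of syntax-based argument pruning for SRL. The k-order rule
# (candidates are direct dependents of the predicate and of its first k
# ancestors in the dependency tree) is a standard pruning heuristic; the toy
# tree and k value below are illustrative assumptions, not the paper's data.

def k_order_prune(heads, predicate, k=1):
    """Return candidate argument positions for `predicate`.

    `heads[i]` is the index of word i's syntactic head (-1 for the root).
    Each of the predicate and its first k ancestors contributes its direct
    dependents, so distant, syntactically unrelated words are pruned away
    before the (neural) argument classifier ever sees them.
    """
    candidates = set()
    node = predicate
    for _ in range(k + 1):          # the predicate itself plus k ancestors
        candidates.update(i for i, h in enumerate(heads) if h == node)
        candidates.add(node)
        if heads[node] == -1:       # reached the root of the tree
            break
        node = heads[node]
    candidates.discard(predicate)   # the predicate is not its own argument
    return sorted(candidates)

# Toy dependency tree for "The cat chased the mouse quickly":
# "chased" (index 2) is the root; "The"->"cat"->"chased",
# "the"->"mouse"->"chased", "quickly"->"chased".
heads = [1, 2, -1, 4, 2, 2]
print(k_order_prune(heads, predicate=2, k=1))  # -> [1, 4, 5]
```

With k=1 here, only the heads of "cat", "mouse", and "quickly" survive as candidates, while the determiners are pruned; raising k widens the candidate set toward the full sentence.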

