Sentential Semantic Dependency Parsing for Vietnamese

2021 ◽  
Vol 2 (4) ◽  
Author(s):  
Tuyen Thi-Thanh Do ◽  
Dang Tuan Nguyen

2013 ◽  
Vol 46 ◽  
pp. 203-233 ◽  
Author(s):  
H. Zhao ◽  
X. Zhang ◽  
C. Kit

Semantic parsing, i.e., the automatic derivation of a meaning representation such as an instantiated predicate-argument structure for a sentence, plays a critical role in the deep processing of natural language. Unlike other top semantic dependency parsing systems, which rely on a pipeline framework that chains together a series of submodels, each specialized for a specific subtask, the system presented in this article integrates everything into one model, aiming for the integrity and practicality needed in real applications while maintaining competitive performance. This integrative approach treats semantic parsing as a word-pair classification problem solved with a maximum entropy classifier. Leveraging adaptive pruning of argument candidates and large-scale feature selection engineering, it accommodates the largest feature space used in this field to date, and it achieves state-of-the-art performance on the CoNLL-2008 shared task evaluation data, outperforming all but one top pipeline system and confirming its feasibility and effectiveness.
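To make the word-pair formulation concrete, the following is a minimal sketch in Python with scikit-learn, casting predicate-argument labeling as word-pair classification with a maximum entropy (multinomial logistic regression) classifier. The feature templates, toy sentence, and role labels below are illustrative assumptions, not the paper's feature set, pruning strategy, or data.

from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression

def pair_features(sent, p, a):
    # Toy feature template for a (predicate, candidate-argument) word pair.
    return {
        "pred_word": sent[p],
        "arg_word": sent[a],
        "pair": sent[p] + "|" + sent[a],
        "distance": str(a - p),
    }

# Hypothetical training examples:
# (sentence, predicate index, candidate index, semantic role or NONE).
train = [
    (["she", "gave", "him", "a", "book"], 1, 0, "A0"),
    (["she", "gave", "him", "a", "book"], 1, 2, "A2"),
    (["she", "gave", "him", "a", "book"], 1, 4, "A1"),
    (["she", "gave", "him", "a", "book"], 1, 3, "NONE"),
]

vec = DictVectorizer()
X = vec.fit_transform([pair_features(s, p, a) for s, p, a, _ in train])
y = [label for _, _, _, label in train]

# Multinomial logistic regression is equivalent to a maximum entropy
# classifier over the word-pair feature space.
clf = LogisticRegression(max_iter=1000).fit(X, y)

test = ["he", "gave", "her", "a", "pen"]
print(clf.predict(vec.transform([pair_features(test, 1, 0)])))

In the full approach, every (predicate, candidate) pair in a sentence would be scored this way, with adaptive pruning discarding unlikely candidates before classification.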


2020 ◽  
Author(s):  
Zixia Jia ◽  
Youmi Ma ◽  
Jiong Cai ◽  
Kewei Tu

2015 ◽  
Vol 3 ◽  
pp. 271-282 ◽  
Author(s):  
Haitong Yang ◽  
Tao Zhuang ◽  
Chengqing Zong

Current systems for syntactic and semantic dependency parsing usually define a very high-dimensional feature space to achieve good performance, but they often suffer severe performance drops on out-of-domain test data because feature distributions differ across domains. This paper focuses on relieving this domain adaptation problem with the help of unlabeled target-domain data. We propose a deep learning method to adapt both syntactic and semantic parsers: given additional unlabeled target-domain data, it learns a latent feature representation (LFR) that benefits both domains. Experiments on English data from the CoNLL 2009 shared task show that our method largely reduces the performance drop on out-of-domain test data. Moreover, it achieves a Macro F1 score 2.32 points higher than the best system of the CoNLL 2009 shared task in out-of-domain tests.
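As an illustration of the general idea (not the architecture from the paper), the sketch below trains a small autoencoder on unlabeled feature vectors and uses its hidden activations as a latent feature representation shared across domains. PyTorch is assumed, and all dimensions and data are hypothetical stand-ins.

import torch
import torch.nn as nn

IN_DIM, HID_DIM = 1000, 100  # hypothetical sparse-feature and latent sizes

# One-hidden-layer autoencoder: the encoder's activations serve as the LFR.
encoder = nn.Sequential(nn.Linear(IN_DIM, HID_DIM), nn.Sigmoid())
decoder = nn.Linear(HID_DIM, IN_DIM)
params = list(encoder.parameters()) + list(decoder.parameters())
opt = torch.optim.Adam(params, lr=1e-3)
loss_fn = nn.MSELoss()

# Stand-in for unlabeled source- and target-domain feature vectors.
unlabeled = torch.rand(64, IN_DIM)

for _ in range(100):  # unsupervised reconstruction training
    opt.zero_grad()
    recon = decoder(encoder(unlabeled))
    loss = loss_fn(recon, unlabeled)
    loss.backward()
    opt.step()

# The trained encoder maps feature vectors from either domain into the
# shared latent space; these activations can augment a parser's features.
lfr = encoder(torch.rand(1, IN_DIM))
print(lfr.shape)  # torch.Size([1, 100])

Because the reconstruction objective needs no labels, the encoder can be trained on pooled source- and target-domain data, which is what lets the learned representation transfer to the out-of-domain setting.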


2017 ◽  
Author(s):  
Weiwei Sun ◽  
Junjie Cao ◽  
Xiaojun Wan
