Using Short Dependency Relations from Auto-Parsed Data for Chinese Dependency Parsing

2009 ◽ Vol 8 (3) ◽ pp. 1-20
Author(s): Wenliang Chen, Daisuke Kawahara, Kiyotaka Uchimoto, Yujie Zhang, Hitoshi Isahara

Dependency Parsing with an Extended Finite-State Approach
2003 ◽ Vol 29 (4) ◽ pp. 515-544
Author(s): Kemal Oflazer

This article presents a dependency parsing scheme using an extended finite-state approach. The parser augments the input representation with “channels” so that links representing syntactic dependency relations among words can be accommodated, and it iterates over the input a number of times to arrive at a fixed point. Intermediate configurations that violate the constraints of projective dependency representations, such as the ban on crossing links and on independent items other than the sentential head, are filtered out by finite-state filters. We have applied the parser to dependency parsing of Turkish.
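To make the fixed-point idea concrete, here is a minimal schematic sketch in Python (not Oflazer's finite-state implementation; the rule set, part-of-speech table, and function names are all hypothetical). It repeatedly applies link-adding rules over the sentence and rejects links that would cross an existing one, stopping when a full pass adds no new link:

```python
# Schematic sketch of fixed-point dependency linking with a no-crossing
# filter. In the actual scheme, both the linking rules and the filters
# are finite-state devices operating on a channel-augmented input.

def crosses(link, links):
    """Return True if `link` would cross any link already present."""
    d1, h1 = sorted(link)
    for other in links:
        d2, h2 = sorted(other)
        if d1 < d2 < h1 < h2 or d2 < d1 < h2 < h1:
            return True
    return False

def parse(words, rules):
    """Iterate link-adding rules over the sentence until a fixed point."""
    links = set()  # each link is (dependent_index, head_index)
    changed = True
    while changed:
        changed = False
        for i, dep in enumerate(words):
            for j, head in enumerate(words):
                if i == j or any(d == i for d, _ in links):
                    continue  # each word takes at most one head
                if rules(dep, head, i, j) and not crosses((i, j), links):
                    links.add((i, j))
                    changed = True
    return links

# Toy rule set (purely illustrative): determiners and adjectives attach
# to a following noun; a noun attaches to a following verb.
POS = {"the": "DET", "old": "ADJ", "dog": "NOUN", "barked": "VERB"}

def toy_rules(dep, head, i, j):
    dp, hp = POS[dep], POS[head]
    return j > i and (
        (dp in ("DET", "ADJ") and hp == "NOUN")
        or (dp == "NOUN" and hp == "VERB")
    )

print(parse(["the", "old", "dog", "barked"], toy_rules))
# e.g. {(0, 2), (1, 2), (2, 3)}: the->dog, old->dog, dog->barked
```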


Proceedings ◽ 2019 ◽ Vol 21 (1) ◽ pp. 49
Author(s): Michalina Strzyz, David Vilares, Carlos Gómez-Rodríguez

Dependency parsing has been built upon the idea of using parsing methods based on shift-reduce or graph-based algorithms to identify binary dependency relations between the words in a sentence. In this study we adopt a radically different approach and cast full dependency parsing as a pure sequence tagging task. In particular, we apply a linearization function to the tree, producing an output label for each token that conveys information about the word’s dependency relations. We then follow a supervised strategy and train a bidirectional long short-term memory network to learn to predict such linearized trees. Contrary to previous studies attempting this, the results show that this approach leads to dependency parsing that is not only accurate but also fast. Furthermore, we obtain even faster and more accurate parsers by recasting the problem as multitask learning, with a twofold objective: to reduce the output vocabulary and to exploit hidden patterns coming from a second parsing paradigm (constituent grammars) when used as an auxiliary task.
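One simple instance of such a linearization function is a relative-offset encoding, in which each token's label combines the signed distance to its head with the dependency relation type. The sketch below is illustrative only (the function names are hypothetical, and richer encodings, e.g. relative to the nearest head with a given part of speech, tend to generalize better):

```python
# Minimal sketch of linearizing a dependency tree into one label per
# token via a naive relative-offset encoding, plus the inverse decoder.

def encode(heads, deprels):
    """heads[i] is the 1-based head index of token i+1 (0 = root)."""
    labels = []
    for i, (head, rel) in enumerate(zip(heads, deprels), start=1):
        offset = "ROOT" if head == 0 else str(head - i)
        labels.append(f"{offset}@{rel}")
    return labels

def decode(labels):
    """Invert encode(); returns head indices and dependency relations."""
    heads, deprels = [], []
    for i, label in enumerate(labels, start=1):
        offset, rel = label.split("@")
        heads.append(0 if offset == "ROOT" else i + int(offset))
        deprels.append(rel)
    return heads, deprels

# "the dog barked": the->dog (det), dog->barked (nsubj), barked = root
heads, rels = [2, 3, 0], ["det", "nsubj", "root"]
labels = encode(heads, rels)
print(labels)                    # ['1@det', '1@nsubj', 'ROOT@root']
assert decode(labels) == (heads, rels)
```

A sequence labeling model, such as the bidirectional LSTM mentioned above, is then trained to predict one such label per token, and decoding the predicted labels reconstructs the tree.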


Author(s): Qinyuan Xiang, Weijiang Li, Hui Deng, Feng Wang

Author(s): Cunli Mao, Zhibo Man, Zhengtao Yu, Zhenhan Wang, Shengxiang Gao, ...

Author(s): Shumin Shi, Dan Luo, Xing Wu, Congjun Long, Heyan Huang

Dependency parsing is an important task for Natural Language Processing (NLP). However, a mature parser requires a large treebank for training, which is still extremely costly to create. Tibetan is an extremely low-resource language for NLP: no Tibetan dependency treebank is publicly available, and existing dependency annotations are produced manually. Furthermore, there is little research on treebank construction for Tibetan. We propose a novel method of multi-level chunk-based syntactic parsing to perform constituent-to-dependency treebank conversion for Tibetan under these scarce conditions. Our method mines more dependencies from Tibetan sentences, builds a high-quality Tibetan dependency tree corpus, and makes fuller use of the inherent regularities of the language itself. We train dependency parsing models on the dependency treebank obtained by this preliminary conversion. The model achieves 86.5% accuracy, 96% LAS, and 97.85% UAS, exceeding the best results of existing conversion methods. The experimental results show that our method is viable in a low-resource setting, meaning that it not only addresses the scarcity of Tibetan dependency treebanks but also avoids needless manual annotation. The method demonstrates the strength of knowledge-guided linguistic analysis, which is of great significance for advancing research on Tibetan information processing.
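Constituent-to-dependency conversion of this general kind is typically driven by head-finding rules: a head rule picks each constituent's head child, and every non-head child's lexical head is attached as a dependent of the head child's lexical head. The sketch below is a generic illustration of that idea (the toy tree, head-rule table, and function names are hypothetical, not the authors' chunk-based rules for Tibetan):

```python
# Generic constituent-to-dependency conversion via head rules.
# A constituent is (label, [children]); a leaf is (pos, word).
# HEAD_RULES is a toy table: for each phrase label, which child
# labels may head it, searched in order (purely illustrative).

HEAD_RULES = {"S": ["VP"], "VP": ["VBD", "VB"], "NP": ["NN"]}

def lexical_head(tree, deps):
    """Return the lexical head word of `tree`, recording dependencies
    of non-head children into `deps` as (dependent, head) pairs."""
    label, children = tree
    if isinstance(children, str):      # leaf: (pos, word)
        return children
    wanted = HEAD_RULES.get(label, [])
    head_child = next(
        (c for lbl in wanted for c in children if c[0] == lbl),
        children[0],                   # fallback: leftmost child
    )
    head = lexical_head(head_child, deps)
    for child in children:
        if child is not head_child:
            deps.append((lexical_head(child, deps), head))
    return head

# (S (NP (DT the) (NN dog)) (VP (VBD barked)))
tree = ("S", [("NP", [("DT", "the"), ("NN", "dog")]),
              ("VP", [("VBD", "barked")])])
deps = []
root = lexical_head(tree, deps)
print(root, deps)   # barked [('the', 'dog'), ('dog', 'barked')]
```

In practice the head table and attachment rules encode language-specific knowledge; the quality of the resulting dependency treebank rests largely on how well those rules capture the language's syntax.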


2009 ◽ Vol E92-D (10) ◽ pp. 2122-2136
Author(s): Sutee Sudprasert, Asanee Kawtrakul, Christian Boitet, Vincent Berment
