On Finding the K-best Non-projective Dependency Trees

Author(s): Ran Zmigrod, Tim Vieira, Ryan Cotterell

Author(s): Denys Duchier, Ralph Debusmann

2013, Vol 1, pp. 267-278
Author(s): Giorgio Satta, Marco Kuhlmann

Head splitting techniques have been successfully exploited to improve the asymptotic runtime of parsing algorithms for projective dependency trees, under the arc-factored model. In this article we extend these techniques to a class of non-projective dependency trees, called well-nested dependency trees with block-degree at most 2, which has been previously investigated in the literature. We define a structural property that allows head splitting for these trees, and present two algorithms that improve over the runtime of existing algorithms at no significant loss in coverage.
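The tree class named here has a simple operational characterization: the yield of every node must fall into at most two contiguous blocks of the sentence (block-degree at most 2), and the yields of disjoint subtrees must not cross (well-nestedness). Below is a minimal Python sketch of such a membership check; the `heads`-array encoding and all function names are illustrative assumptions, not code from the article.

```python
# Sketch: check membership in the class of well-nested dependency trees
# with block-degree at most 2. heads[i-1] is the head of token i; 0 marks
# the root. Representation and names are assumptions for illustration.
from itertools import combinations

def yields(heads):
    """Map each node to the sorted list of positions it dominates (incl. itself)."""
    children = {i: [] for i in range(len(heads) + 1)}
    for i, h in enumerate(heads, start=1):
        children[h].append(i)
    memo = {}
    def collect(u):
        if u not in memo:
            s = {u}
            for c in children[u]:
                s.update(collect(c))
            memo[u] = sorted(s)
        return memo[u]
    for u in range(1, len(heads) + 1):
        collect(u)
    return memo

def block_degree(positions):
    """Number of maximal contiguous intervals covering a sorted position list."""
    return 1 + sum(1 for a, b in zip(positions, positions[1:]) if b > a + 1)

def crossing(y1, y2):
    """True iff two disjoint yields interleave, e.g. {1, 3} against {2, 4}."""
    s1 = set(y1)
    labels = ['1' if p in s1 else '2' for p in sorted(s1 | set(y2))]
    runs = [labels[0]] + [b for a, b in zip(labels, labels[1:]) if b != a]
    return len(runs) >= 4          # 4+ alternations means the yields cross

def is_well_nested_bd2(heads):
    ys = yields(heads)
    if any(block_degree(y) > 2 for y in ys.values()):
        return False
    return not any(set(ys[u]).isdisjoint(ys[v]) and crossing(ys[u], ys[v])
                   for u, v in combinations(ys, 2))

print(is_well_nested_bd2([3, 4, 0, 3]))    # True: non-projective but in the class
print(is_well_nested_bd2([3, 4, 5, 5, 0])) # False: yields {1, 3} and {2, 4} cross
```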


2016, Vol 42 (4), pp. 727-761
Author(s): David Weir, Julie Weeds, Jeremy Reffin, Thomas Kober

We present a new framework for compositional distributional semantics in which the distributional contexts of lexemes are expressed in terms of anchored packed dependency trees. We show that these structures have the potential to capture the full sentential contexts of a lexeme and provide a uniform basis for the composition of distributional knowledge in a way that captures both mutual disambiguation and generalization.
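As a rough illustration of anchoring distributional contexts in dependency structure, the sketch below records, for each token, the words reachable from it in a parse, keyed by the path of arc labels traversed (inverse arcs marked with a leading underscore). This toy encoding and every name in it are my own assumptions; the paper's anchored packed dependency trees and their composition operations are considerably richer.

```python
# Sketch: typed, anchored co-occurrence contexts extracted from a dependency
# parse. Every data layout and name here is an illustrative assumption.
from collections import Counter, defaultdict

def anchored_contexts(tokens, heads, labels, max_depth=2):
    """tokens[i-1], heads[i-1] (0 = root), labels[i-1] = label of arc to the head."""
    adj = defaultdict(list)
    for i, (h, lab) in enumerate(zip(heads, labels), start=1):
        if h != 0:
            adj[i].append((h, '_' + lab))  # upward step: dependent -> head
            adj[h].append((i, lab))        # downward step: head -> dependent
    contexts = defaultdict(Counter)
    for start in range(1, len(tokens) + 1):
        frontier, seen = [(start, ())], {start}
        for _ in range(max_depth):         # walk outward from the anchor
            nxt = []
            for node, path in frontier:
                for nbr, step in adj[node]:
                    if nbr not in seen:
                        seen.add(nbr)
                        p = path + (step,)
                        contexts[tokens[start - 1]][(p, tokens[nbr - 1])] += 1
                        nxt.append((nbr, p))
            frontier = nxt
    return contexts

ctx = anchored_contexts(['dogs', 'chase', 'cats'], [2, 0, 2],
                        ['nsubj', 'root', 'dobj'])
# ctx['dogs'] now counts (('_nsubj',), 'chase') and (('_nsubj', 'dobj'), 'cats')
```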


1998, Vol 4 (2), pp. 97-114
Author(s): Dekang Lin

With the emergence of broad-coverage parsers, quantitative parser evaluation becomes increasingly important. We propose a dependency-based method for evaluating broad-coverage parsers that offers more meaningful performance measures than previous approaches. We also present a structural pattern-matching mechanism that can be used to eliminate inconsequential differences among parse trees. Previous evaluation methods have assessed only the overall performance of parsers; the dependency-based method can also evaluate parsers with respect to different kinds of grammatical relationships or different types of lexical categories. An algorithm for transforming constituency trees into dependency trees is presented, which makes the evaluation method applicable to both constituency grammars and dependency grammars.
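A minimal sketch of the core of such a dependency-based evaluation: compare the parser's output to a gold standard as sets of (dependent, head, relation) triples, scoring both overall and per relation type. The function and data layout below are illustrative assumptions, not Lin's implementation.

```python
# Sketch: dependency-based parser evaluation over (dependent, head, relation)
# triples, overall and broken down by relation. Names are assumptions.
def dependency_eval(gold, parsed):
    def scores(g, p):
        correct = g & p
        return {'precision': len(correct) / len(p) if p else 0.0,
                'recall': len(correct) / len(g) if g else 0.0}
    g, p = set(gold), set(parsed)
    relations = {rel for *_, rel in g | p}
    by_rel = {rel: scores({t for t in g if t[2] == rel},
                          {t for t in p if t[2] == rel})
              for rel in relations}
    return scores(g, p), by_rel

gold   = [(1, 2, 'subj'), (3, 2, 'obj'), (4, 3, 'mod')]
parsed = [(1, 2, 'subj'), (3, 2, 'obj'), (4, 2, 'mod')]
overall, by_rel = dependency_eval(gold, parsed)
# overall: precision = recall = 2/3; by_rel['mod']: both 0.0 (wrong head)
```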


2021, pp. 58-72
Author(s): Defeng Xie, Jianmin Ji, Jiafei Xu, Ran Ji

Author(s): Richard Socher, Andrej Karpathy, Quoc V. Le, Christopher D. Manning, Andrew Y. Ng

Previous work on Recursive Neural Networks (RNNs) shows that these models can produce compositional feature vectors for accurately representing and classifying sentences or images. However, the sentence vectors of previous models cannot accurately represent visually grounded meaning. We introduce the DT-RNN model which uses dependency trees to embed sentences into a vector space in order to retrieve images that are described by those sentences. Unlike previous RNN-based models which use constituency trees, DT-RNNs naturally focus on the action and agents in a sentence. They are better able to abstract from the details of word order and syntactic expression. DT-RNNs outperform other recursive and recurrent neural networks, kernelized CCA and a bag-of-words baseline on the tasks of finding an image that fits a sentence description and vice versa. They also give more similar representations to sentences that describe the same image.
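As a rough sketch of the compositional idea, the toy code below builds a sentence vector bottom-up over a dependency tree: each node combines its word embedding with its children's vectors through learned matrices and a nonlinearity. The actual DT-RNN uses position-dependent child matrices and weights children by yield size; the single shared child matrix and uniform mean here are simplifying assumptions.

```python
# Sketch: DT-RNN-style composition over a dependency tree, heavily
# simplified (one shared child matrix, unweighted mean over children).
import numpy as np

rng = np.random.default_rng(0)
d = 8                                      # toy embedding dimension
W_v = rng.normal(scale=0.1, size=(d, d))   # transforms the word embedding
W_c = rng.normal(scale=0.1, size=(d, d))   # transforms each child vector

def node_vector(word_vec, child_vecs):
    """h = tanh(W_v x + mean_c W_c h_c), a simplified composition unit."""
    h = W_v @ word_vec
    if child_vecs:
        h = h + np.mean([W_c @ c for c in child_vecs], axis=0)
    return np.tanh(h)

def sentence_vector(heads, embeddings):
    """heads[i-1] is the head of token i (0 = root); returns the root vector."""
    children = {i: [] for i in range(len(heads) + 1)}
    for i, h in enumerate(heads, start=1):
        children[h].append(i)
    def compose(u):
        return node_vector(embeddings[u - 1], [compose(c) for c in children[u]])
    return compose(children[0][0])         # the root vector embeds the sentence

emb = rng.normal(size=(3, d))              # toy embeddings: 'dogs chase cats'
vec = sentence_vector([2, 0, 2], emb)      # 'chase' heads 'dogs' and 'cats'
```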

