Menu Hierarchy Generation Based on Syntactic Dependency Structures in Item Descriptions

Author(s):  
Yukio Horiguchi ◽  
Shinsu An ◽  
Tetsuo Sawaragi ◽  
Hiroaki Nakanishi
2015 ◽  
Vol 22 (6) ◽  
pp. 939-974 ◽  
Author(s):  
Miguel Ballesteros ◽  
Bernd Bohnet ◽  
Simon Mille ◽  
Leo Wanner

Abstract ‘Deep-syntactic’ dependency structures that capture the argumentative, attributive and coordinative relations between full words of a sentence have great potential for a number of NLP applications. The abstraction degree of these structures is in between the output of a syntactic dependency parser (connected trees defined over all words of a sentence and language-specific grammatical functions) and the output of a semantic parser (forests of trees defined over individual lexemes or phrasal chunks and abstract semantic role labels which capture the frame structures of predicative elements and drop all attributive and coordinative dependencies). We propose a parser that provides deep-syntactic structures. The parser has been tested on Spanish, English and Chinese.
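To make the abstraction level described above concrete, the following is a minimal sketch (not the authors' parser) of one way to move from a surface dependency tree toward a deep-syntactic-like tree: functional words are dropped and their dependents re-attached to the nearest remaining content word. The token fields, the FUNCTIONAL tag set, and the re-attachment rule are illustrative assumptions.

```python
# A minimal sketch: derive a "deep-syntactic"-like tree from a surface
# dependency tree by dropping functional words and re-attaching their
# dependents to the nearest remaining content word.

from dataclasses import dataclass

@dataclass
class Token:
    idx: int            # 1-based position in the sentence
    form: str
    upos: str           # coarse part-of-speech tag
    head: int           # 0 means the root
    deprel: str

# Illustrative tag set for "functional" words to be abstracted away.
FUNCTIONAL = {"DET", "AUX", "ADP", "PART", "SCONJ", "CCONJ", "PUNCT"}

def deep_syntactic(tokens: list[Token]) -> list[Token]:
    by_id = {t.idx: t for t in tokens}

    def content_head(idx: int) -> int:
        """Climb the tree until a content word or the root is reached."""
        while idx != 0 and by_id[idx].upos in FUNCTIONAL:
            idx = by_id[idx].head
        return idx

    deep = []
    for t in tokens:
        if t.upos in FUNCTIONAL:
            continue                      # drop the functional node itself
        new_head = content_head(t.head)   # re-attach over dropped nodes
        deep.append(Token(t.idx, t.form, t.upos, new_head, t.deprel))
    return deep

# "The cat has slept": the deep tree keeps only cat <- slept.
sentence = [
    Token(1, "The",   "DET",  2, "det"),
    Token(2, "cat",   "NOUN", 4, "nsubj"),
    Token(3, "has",   "AUX",  4, "aux"),
    Token(4, "slept", "VERB", 0, "root"),
]
for t in deep_syntactic(sentence):
    print(t.form, "->", t.head)
```

A full deep-syntactic converter would also relabel grammatical functions with more abstract relations; this sketch only illustrates the structural collapse of functional nodes.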


2018 ◽  
Author(s):  
Roger Philip Levy ◽  
Evelina Fedorenko ◽  
Edward Gibson ◽  
Mara Breen

In most languages, most of the syntactic dependency relations found in any given sentence are PROJECTIVE: the word–word dependencies in the sentence do not cross each other. Some syntactic dependency relations, however, are NON-PROJECTIVE: some of their word–word dependencies cross each other. Non-projective dependencies are both rarer and more computationally complex than projective dependencies; hence, it is of natural interest to investigate whether there are any processing costs specific to non-projective dependencies, and whether factors known to influence processing of projective dependencies also affect non-projective dependency processing. We report three self-paced reading studies, together with corpus and sentence completion studies, investigating the comprehension difficulty associated with the non-projective dependencies created by the extraposition of relative clauses in English. We find that extraposition over either verbs or prepositional phrases creates comprehension difficulty, and that this difficulty is consistent with probabilistic syntactic expectations estimated from corpora. Furthermore, we find that manipulating the expectation that a given noun will have a postmodifying relative clause can modulate and even neutralize the difficulty associated with extraposition. Our experiments rule out accounts based purely on derivational complexity and/or dependency locality in terms of linear positioning. Our results demonstrate that comprehenders maintain probabilistic syntactic expectations that persist beyond projective-dependency structures, and suggest that it may be possible to explain observed patterns of comprehension difficulty associated with extraposition entirely through probabilistic expectations.
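The projective/non-projective distinction can be checked mechanically: a tree (with its artificial root arc included) is projective exactly when no two dependency arcs cross. The sketch below, which is not from the paper, assumes a head-vector encoding where heads[i] is the head position of word i+1 and 0 is the root; the example head vectors are hand-assigned for illustration, not gold annotations.

```python
# A minimal sketch: detect non-projectivity by checking whether any two
# dependency arcs cross when drawn above the sentence. heads[i] is the
# head position of word i+1; 0 denotes an artificial root at position 0.

def is_projective(heads: list[int]) -> bool:
    arcs = [(min(i + 1, h), max(i + 1, h)) for i, h in enumerate(heads)]
    for i, (a1, b1) in enumerate(arcs):
        for a2, b2 in arcs[i + 1:]:
            # Two arcs cross iff exactly one endpoint of one arc lies
            # strictly inside the span of the other.
            if a1 < a2 < b1 < b2 or a2 < a1 < b2 < b1:
                return False
    return True

# Projective: "The cat slept" with heads [2, 3, 0].
print(is_projective([2, 3, 0]))                    # True

# Non-projective: "A hearing is scheduled on the issue today", where the
# extraposed modifier "on the issue" attaches back to "hearing" and crosses
# the arc from "scheduled" to "today" (illustrative heads).
print(is_projective([2, 4, 4, 0, 7, 7, 2, 4]))     # False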


2021 ◽  
Vol 0 (0) ◽  
Author(s):  
Kun Sun ◽  
Rong Wang ◽  
Wenxin Xiong

Abstract The notion of genre has been widely explored using quantitative methods from both lexical and syntactical perspectives. However, discourse structure has rarely been used to examine genre. Mostly concerned with the interrelation of discourse units, discourse structure can play a crucial role in genre analysis. Nevertheless, few quantitative studies have explored genre distinctions from a discourse structure perspective. Here, we use two English discourse corpora (RST-DT and GUM) to investigate discourse structure from a novel viewpoint. The RST-DT is divided into four small subcorpora distinguished according to genre, and another corpus (GUM) containing seven genres is used for cross-verification. An RST (rhetorical structure theory) tree is converted into dependency representations by taking information from RST annotations to calculate the discourse distance through a process similar to that used to calculate syntactic dependency distance. Moreover, the data on dependency representations deriving from the two corpora are readily convertible into network data. Afterwards, we examine different genres in the two corpora by combining discourse distance and discourse network. The two methods are mutually complementary in comprehensively revealing the distinctiveness of various genres. Accordingly, we propose an effective quantitative method for assessing genre differences using discourse distance and discourse network. This quantitative study can help us better understand the nature of genre.
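The dependency-distance measure referred to here is, in its syntactic form, usually the mean absolute difference between the linear positions of heads and their dependents; once an RST tree has been flattened into head-dependent pairs over discourse units, the same computation applies. The sketch below shows only that generic computation under an assumed head-vector input, not the authors' RST conversion procedure; the discourse-level heads are invented for illustration.

```python
# A minimal sketch of mean dependency distance, applicable both to
# word-level syntactic dependencies and to discourse units once an RST
# tree has been flattened into head-dependent pairs. heads[i] is the
# head position of unit i+1; 0 marks the root and is excluded.

def mean_dependency_distance(heads: list[int]) -> float:
    distances = [abs((i + 1) - h) for i, h in enumerate(heads) if h != 0]
    return sum(distances) / len(distances) if distances else 0.0

# Word-level example: "The cat slept soundly" with heads [2, 3, 0, 3].
print(mean_dependency_distance([2, 3, 0, 3]))     # (1 + 1 + 1) / 3 = 1.0

# Discourse-level example: five elementary discourse units whose heads
# (assumed here for illustration) were read off an RST-style tree.
print(mean_dependency_distance([0, 1, 1, 3, 1]))  # (1 + 2 + 1 + 4) / 4 = 2.0
```

The same head-dependent pairs can be treated directly as edges of a graph, which is how the dependency representations become network data.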


2013 ◽  
Vol 3 (3) ◽  
pp. 157-168
Author(s):  
Masato Shirai ◽  
Takashi Yanagisawa ◽  
Takao Miura

Author(s):  
John Carroll

This chapter introduces key concepts and techniques for natural-language parsing: that is, finding the grammatical structure of sentences. The chapter introduces the fundamental algorithms for parsing with context-free (CF) phrase structure grammars, how these deal with ambiguous grammars, and how CF grammars and associated disambiguation models can be derived from syntactically annotated text. It goes on to consider dependency analysis, and outlines the main approaches to dependency parsing based both on manually written grammars and on learning from text annotated with dependency structures. It finishes with an overview of techniques used for parsing with grammars that use feature structures to encode linguistic information.
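As an illustration of the kind of fundamental CF parsing algorithm the chapter covers (a sketch, not the chapter's own code), the following is a minimal CKY recognizer for a grammar in Chomsky normal form; chart parsers of this family handle ambiguity by storing every category derivable over each span. The toy grammar and lexicon are assumptions for the example only.

```python
# A minimal CKY recognizer for a context-free grammar in Chomsky normal
# form: chart[(i, j)] holds every nonterminal that derives words[i:j].
# The grammar and lexicon below are toy assumptions for illustration.

from collections import defaultdict

# Binary rules A -> B C and lexical entries word -> {A, ...}.
BINARY = [("S", "NP", "VP"), ("VP", "V", "NP"), ("NP", "Det", "N")]
LEXICON = {"the": {"Det"}, "dog": {"N"}, "cat": {"N"}, "saw": {"V"}}

def cky_recognize(words: list[str], start: str = "S") -> bool:
    n = len(words)
    chart = defaultdict(set)                 # (i, j) -> set of nonterminals
    for i, w in enumerate(words):
        chart[(i, i + 1)] |= LEXICON.get(w, set())
    for span in range(2, n + 1):             # widen spans bottom-up
        for i in range(0, n - span + 1):
            j = i + span
            for k in range(i + 1, j):        # try every split point
                for lhs, b, c in BINARY:
                    if b in chart[(i, k)] and c in chart[(k, j)]:
                        chart[(i, j)].add(lhs)
    return start in chart[(0, n)]

print(cky_recognize("the dog saw the cat".split()))   # True
print(cky_recognize("dog the saw".split()))           # False
```

Extending the chart cells to keep backpointers (or weights from a disambiguation model) turns this recognizer into the kind of probabilistic chart parser the chapter discusses for ambiguous grammars.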

