Detecting Semantic Ambiguity
In this article, we investigate ambiguity in syntactic annotation. The ambiguity in question is inherent: even human annotators interpret the meaning differently. In our experiment, we detect potentially structurally ambiguous sentences with Constraint Grammar rules. In the linguistic phenomena we investigate, structural ambiguity is primarily caused by word order: the potentially ambiguous particle or adverbial is located between the main verb and the (participial) NP. After detecting these structures, we analyze how many of the potentially ambiguous cases are actually ambiguous using a double-blind method: we rank the sentences captured by the rules on a scale of 1 to 5 to indicate which reading each annotator regards as the primary one. The results indicate that 67% of the sentences are ambiguous. Representing this ambiguity in the treebank/parsebank increases the informativeness of the representation, since both correct analyses are presented.
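Since the abstract does not show the actual Constraint Grammar rules, the detection step can be sketched in plain Python: a hypothetical function that flags the verb–particle/adverbial–NP word-order pattern in a POS-tagged sentence. The tag names and the example sentence are illustrative assumptions, not the paper's actual rules or tagset.

```python
# Hypothetical sketch: flag sentences in which a particle or adverbial
# occurs between the main verb and a following (participial) NP.
# Tags V, ADV, PCLE, N, PCP are illustrative placeholders.

def flag_ambiguous(tagged):
    """tagged: list of (word, tag) pairs; returns True if the
    verb-particle/adverbial-NP word-order pattern is present."""
    for i in range(len(tagged) - 2):
        _, t1 = tagged[i]
        _, t2 = tagged[i + 1]
        _, t3 = tagged[i + 2]
        if t1 == "V" and t2 in {"ADV", "PCLE"} and t3 in {"N", "PCP"}:
            return True
    return False

# Classic English example of the pattern: "ran up" can be read as
# verb + particle or as verb + directional adverbial.
sentence = [("he", "PRON"), ("ran", "V"), ("up", "PCLE"), ("bills", "N")]
print(flag_ambiguous(sentence))  # → True
```

In a real pipeline this filter would only propose candidates; as in the study, human annotation then decides which candidates are genuinely ambiguous.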