Constraint Grammars
Recently Published Documents

Total documents: 11 (five years: 1)
H-index: 5 (five years: 0)

2019
Author(s): Arto Anttila, Scott Borgeson, Giorgio Magri

2017, Vol. 48 (3), pp. 349-388
Author(s): Paul Boersma, Jan-Willem van Leussen

In multilevel parallel Optimality Theory grammars, the number of candidates (possible paths from the input to the output level) increases exponentially with the number of levels of representation. The problem with this is that with the customary strategy of listing all candidates in a tableau, the computation time for evaluation (i.e., choosing the winning candidate) and learning (i.e., reranking the constraints on the basis of language data) increases exponentially with the number of levels as well. This article proposes instead to collect the candidates in a graph in which the number of nodes and the number of connections increase only linearly with the number of levels of representation. As a result, there exist procedures for evaluation and learning that increase only linearly with the number of levels. These efficient procedures help to make multilevel parallel constraint grammars more feasible as models of human language processing. We illustrate visualization, evaluation, and learning with a toy grammar for a traditional case that has already previously been analyzed in terms of parallel evaluation, namely, French liaison.
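
The graph-based evaluation this abstract describes can be illustrated with a short dynamic-programming sketch. The Python fragment below is a minimal illustration under my own assumptions, not the authors' implementation: violation marks are attached to the connections between adjacent levels, violations add up along a candidate path, and the winner is the path whose summed violation vector is lexicographically smallest under the current strict-domination ranking. With a bounded number of candidate representations per level, the work grows with the number of connections, and hence only linearly with the number of levels, instead of enumerating every input-to-output path.

from itertools import product

def add_violations(v, w):
    # Component-wise sum of two violation vectors.
    return tuple(a + b for a, b in zip(v, w))

def evaluate(levels, edges, n_constraints):
    # levels: list of lists of node labels; levels[0] holds the single input,
    #         levels[-1] the output candidates.
    # edges:  dict mapping (lower_node, upper_node) -> violation vector,
    #         one count per constraint, ordered by the current ranking.
    # Returns the winning path and its summed violation vector.
    zero = (0,) * n_constraints
    best = {levels[0][0]: (zero, [levels[0][0]])}   # best path reaching each node
    for lower, upper in zip(levels, levels[1:]):
        new_best = {}
        for a, b in product(lower, upper):
            if (a, b) not in edges or a not in best:
                continue
            cand = (add_violations(best[a][0], edges[(a, b)]), best[a][1] + [b])
            if b not in new_best or cand[0] < new_best[b][0]:   # lexicographic = strict domination
                new_best[b] = cand
        best = new_best
    violations, path = min(best.values(), key=lambda x: x[0])
    return path, violations

# Toy grammar: three levels and two constraints ranked C1 >> C2;
# each connection carries a (C1, C2) violation vector.
levels = [["|in|"], ["/a/", "/b/"], ["[a]", "[b]"]]
edges = {("|in|", "/a/"): (0, 1), ("|in|", "/b/"): (1, 0),
         ("/a/", "[a]"): (0, 0), ("/a/", "[b]"): (0, 2),
         ("/b/", "[a]"): (0, 0), ("/b/", "[b]"): (0, 0)}
print(evaluate(levels, edges, 2))   # (['|in|', '/a/', '[a]'], (0, 1))

The dynamic program is valid because lexicographic comparison is preserved under component-wise addition of non-negative violation counts, so the best path to any intermediate node can be extended without reconsidering the paths it beat.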


Author(s): Jean-François Hüe, J.-H. Jayez

Phonology, 2008, Vol. 25 (2), pp. 217-270
Author(s): Paul Boersma, Silke Hamann

This paper reconciles the standpoint that language users do not aim at improving their sound systems with the observation that languages seem to improve their sound systems. If learners optimise their perception by gradually ranking their cue constraints, and reuse the resulting ranking in production, they automatically introduce a prototype effect, which can be counteracted by an articulatory effect. If the two effects are of unequal size, the learner will end up with a sound system auditorily different from that of her language environment. Computer simulations of sibilant inventories show that, independently of the initial auditory sound system, a stable equilibrium is reached within a small number of generations. In this stable state, the dispersion of the sibilants of the language strikes an optimal balance between articulatory ease and auditory contrast. Crucially, these results are derived within a model without any goal-oriented elements such as dispersion constraints.
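
The generational dynamic in this abstract lends itself to a small simulation. The sketch below is a deliberate toy of my own, not Boersma & Hamann's cue-constraint grammar: two categories on a one-dimensional auditory axis, a prototype effect modelled as a fixed push away from the competing category, an articulatory effect modelled as a pull toward a neutral value at 0, and perception learning reduced to taking the mean of the tokens the next generation hears. The parameter names (PROTO_GAIN, ARTIC_WEIGHT, and so on) are assumptions of the sketch. Whatever the initial values, the two prototypes settle within a few dozen generations at the distance where the two effects cancel.

import random

PROTO_GAIN = 1.0     # strength of the prototype (contrast-enhancing) effect
ARTIC_WEIGHT = 0.3   # strength of the pull toward the neutral articulation at 0
NOISE = 0.2          # production/auditory noise
TOKENS = 200         # tokens per category per generation
GENERATIONS = 30

def produce(prototype, competitor):
    # One token: push the target away from the competing category (prototype
    # effect), then compromise with the articulatorily neutral value 0.
    away = 1.0 if prototype > competitor else -1.0
    target = (1.0 - ARTIC_WEIGHT) * (prototype + away * PROTO_GAIN)
    return random.gauss(target, NOISE)

def simulate(p_low, p_high):
    for gen in range(GENERATIONS):
        low = [produce(p_low, p_high) for _ in range(TOKENS)]
        high = [produce(p_high, p_low) for _ in range(TOKENS)]
        # The next generation's perception learning is approximated by taking
        # the mean of what it heard as its new prototypes.
        p_low, p_high = sum(low) / TOKENS, sum(high) / TOKENS
        print(f"gen {gen:2d}: prototypes {p_low:+.2f} / {p_high:+.2f}")

simulate(-0.1, 0.1)   # even a nearly merged initial system disperses

In this toy the equilibrium half-distance works out to PROTO_GAIN * (1 - ARTIC_WEIGHT) / ARTIC_WEIGHT from the neutral point, which is the analogue of the abstract's balance between auditory contrast and articulatory ease; the real model derives both effects from ranked cue constraints rather than from two hand-set parameters.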


2008, Vol. 34 (1), pp. 81-124
Author(s): Aoife Cahill, Michael Burke, Ruth O'Donovan, Stefan Riezler, Josef van Genabith, ...

A number of researchers have recently conducted experiments comparing “deep” hand-crafted wide-coverage parsers with “shallow” treebank- and machine-learning-based parsers at the level of dependencies, using simple and automatic methods to convert tree output generated by the shallow parsers into dependencies. In this article, we revisit such experiments, this time using sophisticated automatic LFG f-structure annotation methodologies, with surprising results. We compare various PCFG and history-based parsers to find a baseline parsing system that fits best into our automatic dependency structure annotation technique. This combined system of syntactic parser and dependency structure annotation is compared to two hand-crafted, deep constraint-based parsers, RASP and XLE. We evaluate using dependency-based gold standards and use the Approximate Randomization Test to test the statistical significance of the results. Our experiments show that machine-learning-based shallow grammars augmented with sophisticated automatic dependency annotation technology outperform hand-crafted, deep, wide-coverage constraint grammars. Currently our best system achieves an f-score of 82.73% against the PARC 700 Dependency Bank, a statistically significant improvement of 2.18% over the most recent results of 80.55% for the hand-crafted LFG grammar and XLE parsing system, and an f-score of 80.23% against the CBS 500 Dependency Bank, a statistically significant 3.66% improvement over the 76.57% achieved by the hand-crafted RASP grammar and parsing system.
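
The significance test named in this abstract is straightforward to sketch. The Python fragment below shows a common paired, sentence-level variant of the Approximate Randomization Test; it is illustrative only, and the per-sentence dependency counts are invented, whereas a real run would take them from the PARC 700 or CBS 500 gold standards. Under the null hypothesis that the two parsers are interchangeable, each sentence's outputs may be swapped between the systems; the p-value is the proportion of such shuffles whose f-score difference is at least as large as the observed one.

import random

def f_score(counts):
    # counts: list of (matched, gold, predicted) dependency counts per sentence.
    m = sum(c[0] for c in counts)
    g = sum(c[1] for c in counts)
    p = sum(c[2] for c in counts)
    precision, recall = m / p, m / g
    return 2 * precision * recall / (precision + recall)

def approximate_randomization(counts_a, counts_b, trials=10000, seed=0):
    # Approximate p-value for the observed corpus-level f-score difference.
    rng = random.Random(seed)
    observed = abs(f_score(counts_a) - f_score(counts_b))
    extreme = 0
    for _ in range(trials):
        shuf_a, shuf_b = [], []
        for ca, cb in zip(counts_a, counts_b):
            if rng.random() < 0.5:           # swap the two systems' outputs
                ca, cb = cb, ca
            shuf_a.append(ca)
            shuf_b.append(cb)
        if abs(f_score(shuf_a) - f_score(shuf_b)) >= observed:
            extreme += 1
    return (extreme + 1) / (trials + 1)

# Invented toy data: five sentences, (matched, gold, predicted) for each system.
system_a = [(8, 10, 9), (7, 9, 9), (9, 10, 10), (6, 8, 7), (8, 9, 9)]
system_b = [(7, 10, 9), (6, 9, 9), (8, 10, 10), (6, 8, 8), (7, 9, 9)]
print(approximate_randomization(system_a, system_b, trials=2000))

With only five sentences the difference above is of course not significant; the point is the shape of the procedure, not the numbers.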


Author(s): Christer Samuelsson, Pasi Tapanainen, Atro Voutilainen

Author(s): Robert P. Futrelle, Ioannis A. Kakadiaris
