Context-Free Semantics

Author(s):  
Paolo Santorio

On a traditional view, the semantics of natural language makes essential use of a context parameter, i.e. a set of coordinates that represents the situation of speech. In classical frameworks, this parameter plays two roles: it contributes to determining the content of utterances and it is used to define logical consequence. This paper argues that recent empirical proposals about context shift in natural language, which are supported by an increasing body of cross-linguistic data, are incompatible with this traditional view. The moral is that context has no place in semantic theory proper. We should revert to the so-called multiple-indexing frameworks developed by Montague and others, and relegate context to the postsemantic stage of a theory of meaning.

Author(s):  
John Carroll

This chapter introduces key concepts and techniques for natural-language parsing: that is, finding the grammatical structure of sentences. The chapter introduces the fundamental algorithms for parsing with context-free (CF) phrase structure grammars, how these deal with ambiguous grammars, and how CF grammars and associated disambiguation models can be derived from syntactically annotated text. It goes on to consider dependency analysis, and outlines the main approaches to dependency parsing based both on manually written grammars and on learning from text annotated with dependency structures. It finishes with an overview of techniques used for parsing with grammars that use feature structures to encode linguistic information.
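The chart-based algorithms this chapter surveys can be illustrated with the classic CYK method. The sketch below is not code from the chapter: it is a minimal CYK recogniser over an invented toy grammar in Chomsky Normal Form, filling a table `chart[i][j]` with the nonterminals that derive the span `words[i:j]`.

```python
# Toy grammar in Chomsky Normal Form (invented for this sketch).
BINARY = {                      # A -> B C rules, keyed by (B, C)
    ("NP", "VP"): "S",
    ("V", "NP"): "VP",
    ("Det", "N"): "NP",
}
LEXICON = {                     # A -> terminal rules
    "the": {"Det"},
    "dog": {"N"},
    "cat": {"N"},
    "saw": {"V"},
}

def cyk(words):
    """Return the set of nonterminals that derive the whole input."""
    n = len(words)
    # chart[i][j] holds the nonterminals deriving words[i:j]
    chart = [[set() for _ in range(n + 1)] for _ in range(n + 1)]
    for i, w in enumerate(words):
        chart[i][i + 1] = set(LEXICON.get(w, ()))
    for span in range(2, n + 1):            # widen spans bottom-up
        for i in range(n - span + 1):
            j = i + span
            for k in range(i + 1, j):       # try every split point
                for left in chart[i][k]:
                    for right in chart[k][j]:
                        parent = BINARY.get((left, right))
                        if parent:
                            chart[i][j].add(parent)
    return chart[0][n]

print(cyk("the dog saw the cat".split()))   # → {'S'}
```

Ambiguous grammars, which the chapter also discusses, simply leave several derivations in the same chart cell; a probabilistic variant would keep the highest-scoring one.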


Author(s):  
Alfonso Ortega ◽  
Emilio del Rosal ◽  
Diana Pérez ◽  
Robert Mercaş ◽  
Alexander Perekrestenko ◽  
...  

Author(s):  
James Higginbotham

This chapter outlines the problem of framing a theory of the temporal indicators of natural language in all their complexity and, in particular, of understanding the interaction of linguistic and contextual elements. It describes how the phenomenon of sequence of tense shows that tense logic is too limited, since it excludes the cross-reference typical of bound variables; it suggests instead that the tenses express temporal relations between events conceived as in Davidson. The particular discussion leads to the general question of the form of truth conditions for sentences in an indexical language. The discussion advocates conditional truth conditions, in which an antecedent clause spells out the import of the indexical elements. It goes on to describe two notions of a model for a language with such truth conditions, the notions varying as to whether the satisfaction of such antecedents is incorporated, and thus diverging in their conceptions of logical consequence.


2020 ◽  
pp. 21-36
Author(s):  
Cameron Domenico Kirk-Giannini ◽  
Ernie Lepore

Davidson’s first lecture begins with a discussion of the theoretical importance of the notion of speaking the truth, continues with a characterization of the structure of an adequate semantic theory, and concludes with some remarks on the connections between truth-theoretic semantics and the underlying levels of representation posited by syntacticians in the generative tradition. What is special about speaking the truth, Davidson claims, is that anyone who is competent with a language and who knows the relevant facts about the world is in a position to know whether a speaker of that language speaks the truth on any given occasion. It is no surprise, then, that truth is central to Davidson’s conception of semantics: for much of the lecture, he defends the claim that a Tarskian truth theory can serve as the basis for a theory of meaning. At the end of the lecture, he suggests that the logical forms associated with sentences by an empirically supported Tarskian truth theory for a language can be identified with the Chomskyan deep structures of those sentences.


Author(s):  
John Carroll

This article introduces the concepts and techniques for natural language (NL) parsing: that is, using a grammar to assign a syntactic analysis to a string of words, to a lattice of word hypotheses output by a speech recognizer, or similar. The level of detail required depends on the language processing task being performed and the particular approach to the task that is being pursued. The article first describes approaches that produce ‘shallow’ analyses. It also outlines approaches to parsing that analyse the input in terms of labelled dependencies between words. Producing hierarchical phrase structure requires grammars that have at least context-free (CF) power, and the article describes the CF algorithms widely used in NL parsing. To support detailed semantic interpretation, more powerful grammar formalisms are required, but these are usually parsed using extensions of CF parsing algorithms. The article then describes unification-based parsing. Finally, it discusses three important issues that have to be tackled in real-world applications of parsing: evaluation of parser accuracy, parser efficiency, and measurement of grammar/parser coverage.
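The dependency-based approaches the article outlines are commonly realised as shift-reduce (arc-standard) transition systems. The sketch below is not the article's own algorithm: it is a minimal arc-standard parser driven by a gold-tree oracle (in practice a learned model chooses the transitions), with tokens, head indices, and function names invented for the illustration. `heads[i]` gives the 1-indexed head of word `i+1`, with 0 as the artificial root.

```python
def oracle_parse(words, heads):
    """Recover unlabelled dependency arcs (head, dependent) for a
    projective gold tree via arc-standard shift-reduce transitions."""
    n = len(words)
    stack, buf, arcs = [0], list(range(1, n + 1)), set()

    def complete(i):
        # True once token i has already collected all of its dependents
        return all((i, d) in arcs
                   for d in range(1, n + 1) if heads[d - 1] == i)

    while len(stack) > 1 or buf:
        if (len(stack) >= 2 and stack[-2] != 0
                and heads[stack[-2] - 1] == stack[-1]
                and complete(stack[-2])):
            arcs.add((stack[-1], stack.pop(-2)))    # LEFT-ARC
        elif (len(stack) >= 2
                and heads[stack[-1] - 1] == stack[-2]
                and complete(stack[-1])):
            arcs.add((stack[-2], stack.pop()))      # RIGHT-ARC
        elif buf:
            stack.append(buf.pop(0))                # SHIFT
        else:
            break                                   # non-projective input: give up
    return arcs

# "the" depends on "dog", "dog" on "barks", "barks" on the root (0)
print(oracle_parse(["the", "dog", "barks"], [2, 3, 0]))
```

Swapping the oracle for a classifier over (stack, buffer) features yields the learned transition-based parsers the article refers to; graph-based dependency parsing takes a different, global-scoring route.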


2017 ◽  
Vol 34 (2) ◽  
pp. 397-417 ◽  
Author(s):  
Hadas Kotek

Abstract In wh-questions, intervention effects are detected whenever certain elements – focus-sensitive operators, negative elements, and quantifiers – c-command an in-situ wh-word. Pesetsky (2000, Phrasal movement and its kin. Cambridge, MA: MIT Press) presents a comprehensive study of intervention effects in English multiple wh-questions, arguing that intervention correlates with superiority: superiority-violating questions are subject to intervention effects, while superiority-obeying questions are immune from such effects. This description has been adopted as an explanandum in most recent work on intervention, such as Beck (2006, Intervention effects follow from focus interpretation. Natural Language Semantics 14. 1–56) and Cable (2010, The Grammar of Q: Q-particles, wh-movement, and pied-piping. Oxford University Press), a.o. In this paper, I show instead that intervention effects in English questions correlate with the available LF positions for wh-in-situ and the intervener, but not with superiority. The grammar allows for several different ways of repairing intervention configurations, including wh-movement, scrambling, Quantifier Raising, and reconstruction. Intervention effects are observed when none of these repair strategies are applicable, and there is no way of avoiding the intervention configuration – regardless of superiority. Nonetheless, I show that these results are consistent with the syntax proposed for English questions in Pesetsky (2000, Phrasal movement and its kin. Cambridge, MA: MIT Press) and with the semantic theory of intervention effects in Beck (2006, Intervention effects follow from focus interpretation. Natural Language Semantics 14. 1–56).

