Negative events in compositional semantics

2018 ◽  
Vol 28 ◽  
pp. 512
Author(s):  
Timothée Bernard ◽  
Lucas Champollion

Negative events have been used in analyses of various natural language phenomena such as negative perception reports and negative causation, but their conceptual and logical foundations remain ill-understood. We propose that linguistic negation denotes a function Neg, which sends any set of events P to a set Neg(P) that contains all events which preclude every event in P from being actual. An axiom ensures that any event in Neg(P) is actual if and only if no event in P is. This allows us to construe the events in Neg(P) as negative, "anti-P", events. We present a syntax-semantics interface that uses continuations to resolve scope mismatches between subject and verb phrase negation, and a fragment of English that accounts for the interaction of negation, the perception verb see, finite and nonfinite perception reports, and quantified subjects, as well as negative causation.
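The central axiom can be rendered in first-order notation as follows; the symbols and predicate names here are an illustrative reconstruction, not the authors' own formalization:

```latex
% Illustrative rendering (notation ours): an event in Neg(P) is
% actual if and only if no event in P is actual.
\forall P\,\forall e\,\bigl[\, e \in \mathrm{Neg}(P) \rightarrow
  \bigl( \mathrm{actual}(e) \leftrightarrow
         \neg\,\exists e'\,\bigl( e' \in P \wedge \mathrm{actual}(e') \bigr) \bigr) \,\bigr]
```

On this construal, the events in Neg(P) witness the absence of any actual P-event, which is what licenses reading them as "anti-P" events.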

Author(s):  
Nicholas Asher

Anaphora describes a dependence of the interpretation of one natural language expression on the interpretation of another natural language expression. For example, the pronoun ‘her’ in (1) below is anaphorically dependent for its interpretation on the interpretation of the noun phrase ‘Sally’ because ‘her’ refers to the same person ‘Sally’ refers to.

- (1) Sally likes her car.

As (2) below illustrates, anaphoric dependencies also occur across sentences, making anaphora a ‘discourse phenomenon’:

- (2) A farmer owned a donkey. He beat it.

The analysis of anaphoric dependence has been the focus of a great deal of study in linguistics and philosophy. Anaphoric dependencies are difficult to accommodate within the traditional conception of compositional semantics of Tarski and Montague precisely because the meaning of anaphoric elements depends on other elements of the discourse. Many expressions can be used anaphorically. For instance, anaphoric dependencies hold between the expression ‘one’ and the indefinite noun phrase ‘a labrador’ in (3) below; between the verb phrase ‘loves his mother’ and a ‘null’ anaphor (or verbal auxiliary) in (4); between the prepositional phrase ‘to Paris’ and the lexical item ‘there’ in (5); and between a segment of text and the pronoun ‘it’ in (6).

- (3) Susan has a labrador. I want one too.
- (4) John loves his mother. Fred does too.
- (5) I didn’t go to Paris last year. I don’t go there very often.
- (6) One plaintiff was passed over for promotion. Another didn’t get a pay increase for five years. A third received a lower wage than men doing the same work. But the jury didn’t believe any of it.

Some philosophers and linguists have also argued that verb tenses generate anaphoric dependencies.


Author(s):  
Yixin Nie ◽  
Yicheng Wang ◽  
Mohit Bansal

Success in natural language inference (NLI) should require a model to understand both lexical and compositional semantics. However, through adversarial evaluation, we find that several state-of-the-art models with diverse architectures are over-relying on the former and fail to use the latter. Further, this compositionality unawareness is not reflected via standard evaluation on current datasets. We show that removing RNNs in existing models or shuffling input words during training does not induce large performance loss despite the explicit removal of compositional information. Therefore, we propose a compositionality-sensitivity testing setup that analyzes models on natural examples from existing datasets that cannot be solved via lexical features alone (i.e., on which a bag-of-words model gives a high probability to one wrong label), hence revealing the models’ actual compositionality awareness. We show that this setup not only highlights the limited compositional ability of current NLI models, but also differentiates model performance based on design, e.g., separating shallow bag-of-words models from deeper, linguistically-grounded tree-based models. Our evaluation setup is an important analysis tool: complementing currently existing adversarial and linguistically driven diagnostic evaluations, and exposing opportunities for future work on evaluating models’ compositional understanding.
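The filtering criterion behind the testing setup (keep an example only when a bag-of-words model assigns high probability to a wrong label) can be sketched as follows. The toy classifier and all names below are hypothetical illustrations of the idea, not the authors' code or trained model:

```python
# Minimal sketch (hypothetical, not the authors' implementation) of
# compositionality-sensitivity filtering: an NLI example is kept for
# the "hard" subset only when a bag-of-words model is confidently wrong.

from collections import Counter

LABELS = ["entailment", "neutral", "contradiction"]

def bow_features(premise, hypothesis):
    """Unordered word-overlap features; word order is discarded."""
    p, h = Counter(premise.split()), Counter(hypothesis.split())
    overlap = sum((p & h).values())  # multiset intersection size
    return {"overlap": overlap, "hyp_len": sum(h.values())}

def toy_bow_predict(premise, hypothesis):
    """Hypothetical stand-in for a trained bag-of-words classifier:
    high lexical overlap pushes probability toward 'entailment'."""
    f = bow_features(premise, hypothesis)
    ratio = f["overlap"] / max(f["hyp_len"], 1)
    probs = {"entailment": ratio,
             "contradiction": (1 - ratio) * 0.6,
             "neutral": (1 - ratio) * 0.4}
    total = sum(probs.values())
    return {k: v / total for k, v in probs.items()}

def is_compositionality_sensitive(premise, hypothesis, gold, threshold=0.5):
    """Keep the example iff the BoW model confidently predicts a wrong label."""
    probs = toy_bow_predict(premise, hypothesis)
    best = max(probs, key=probs.get)
    return best != gold and probs[best] > threshold

# A word-order-sensitive pair fools the lexical model, so it is kept:
example = ("the dog chased the cat", "the cat chased the dog", "contradiction")
print(is_compositionality_sensitive(*example))  # True
```

Examples that survive this filter cannot be solved from lexical features alone, so accuracy on them isolates a model's actual compositional ability.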


2019 ◽  
Vol 29 (06) ◽  
pp. 783-809
Author(s):  
Jules Hedges ◽  
Mehrnoosh Sadrzadeh

Categorical compositional distributional semantics is a model of natural language; it combines the statistical vector space models of words with the compositional models of grammar. We formalise in this model the generalised quantifier theory of natural language, due to Barwise and Cooper. The underlying setting is a compact closed category with bialgebras. We start from a generative grammar formalisation and develop an abstract categorical compositional semantics for it, and then instantiate the abstract setting to sets and relations and to finite-dimensional vector spaces and linear maps. We prove the equivalence of the relational instantiation to the truth theoretic semantics of generalised quantifiers. The vector space instantiation formalises the statistical usages of words and enables us to, for the first time, reason about quantified phrases and sentences compositionally in distributional semantics.
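The truth-theoretic side that the relational instantiation is proved equivalent to can be illustrated concretely. In Barwise and Cooper's theory, a generalised quantifier denotes a relation between sets of individuals; the sketch below is a toy illustration of that classical picture, not of the paper's categorical formalism:

```python
# Toy illustration (not the paper's categorical construction) of
# Barwise-Cooper generalised quantifiers as relations between sets.

def every(A, B): return A <= B                  # every A is a B
def some(A, B):  return bool(A & B)             # some A is a B
def no(A, B):    return not (A & B)             # no A is a B
def most(A, B):  return len(A & B) > len(A - B) # most As are Bs

dogs    = {"fido", "rex", "spot"}
barkers = {"fido", "rex", "spot", "milo"}
print(every(dogs, barkers))  # True: dogs is a subset of barkers
```

The paper's vector space instantiation replaces these set-theoretic relations with linear-algebraic structure, which is what makes quantified phrases available to distributional semantics.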


Author(s):  
Barry Schein

With events as dense as time, negation threatens to be trivial, unless ‘not’ is noughtly, an adverb of quantification. So revised, classical puzzles of negation in natural language are revisited, in which deviation from the logical connective, violating Excluded Middle, appears to prompt a special condition or special meaning. The language of events also contains negative event descriptions—After the flood, it not drying out ruined the basement and one could smell it not drying out—and these appear to founder on the logic of the constructions in which they occur and on reference to suspect negative events, events of not drying out. A language for event semantics with ‘not’ as noughtly resolves the puzzles surveyed—within classical logic, without ambiguity or special conditions on the meaning of ‘not’, and without a metaphysics of negative events.


Author(s):  
Friederike Moltmann

Natural language ontology is a branch of both metaphysics and linguistic semantics. Its aim is to uncover the ontological categories, notions, and structures that are implicit in the use of natural language, that is, the ontology that a speaker accepts when using a language. Natural language ontology is part of “descriptive metaphysics,” to use Strawson’s term, or “naive metaphysics,” to use Fine’s term, that is, the metaphysics of appearances as opposed to foundational metaphysics, whose interest is in what there really is. What sorts of entities natural language involves is closely linked to compositional semantics, namely what the contribution of occurrences of expressions in a sentence is taken to be. Most importantly, entities play a role as semantic values of referential terms, but also as implicit arguments of predicates and as parameters of evaluation. Natural language appears to involve a particularly rich ontology of abstract, minor, derivative, and merely intentional objects, an ontology many philosophers are not willing to accept. At the same time, a serious investigation of the linguistic facts often reveals that natural language does not in fact involve the sort of ontology that philosophers had assumed it does. Natural language ontology is concerned not only with the categories of entities that natural language commits itself to, but also with various metaphysical notions, for example the relation of part-whole, causation, material constitution, notions of existence, plurality and unity, and the mass-count distinction. An important question regarding natural language ontology is what linguistic data it should take into account. Looking at the sorts of data that researchers who practice natural language ontology have in fact taken into account makes clear that it is only presuppositions, not assertions, that reflect the ontology implicit in natural language. 
The ontology of language may be distinctive in that it may in part be driven specifically by language or the use of it in a discourse. Examples are pleonastic entities, discourse referents conceived of as entities of a sort, and an information-based notion of part structure involved in the semantics of plurals and mass nouns. Finally, there is the question of the universality of the ontology of natural language. Arguably, the same sort of reasoning that has been used to argue that (generative) syntax is universal should apply, in a suitable sense, to the ontology of natural language.


2013 ◽  
Vol 36 (2) ◽  
pp. 287-297
Author(s):  
Paul Isambert

The French manner adverb autrement, as indicated by its morphology, derives a representation of manner from another (autre = ‘other’) representation; the latter may be retrieved either in a subordinate clause following the adverb (autrement que P) or in the context preceding autrement. In the latter case, studied here, the adverb functions anaphorically, and this paper, based on a corpus study, shows how identification of the antecedent is guided by clues ranging from the verb phrase in which autrement occurs to the surrounding discourse structure. In some cases, these clues are so frequent that one can speak of ‘discourse strategies’ or even collocations. The adverb’s interpretation is thus shown to derive not so much from compositional semantics (even aided by context) as from the properties of the constructions in which it occurs.


Author(s):  
Peter Pagin

Davidson’s 1965 paper, “Theories of Meaning and Learnable Languages”, has (at least almost) invariably been interpreted, by others and by myself, as arguing that natural languages must have a compositional semantics, or at least a systematic semantics, that can be finitely specified. However, in his reply to me in the Żegleń volume, Davidson denies that compositionality is in any need of an argument. How does this add up? In this paper I consider Davidson’s first three meaning-theoretic papers from this perspective. I conclude that Davidson was right in his reply to me that he never took compositionality, or systematic semantics, to be in need of justification. What Davidson had been concerned with, clearly in the 1965 paper and in “Truth and Meaning” from 1967, and to some extent in his Carnap critique from 1963, is (i) that we need a general theory of natural language meaning, (ii) that such a theory should not be in conflict with the learnability of a language, and (iii) that such a theory should bring out how knowledge of a finite number of features of a language suffices for the understanding of all the sentences of that language.


Author(s):  
Gillian Ramchand

Syntax has shown that there is a hierarchical ordering of projections within the verb phrase, although researchers differ with respect to how fine-grained they assume the hierarchy to be. This book explores the hierarchy of the verb phrase from a semantic perspective, attempting to derive it from semantically sorted zones in the compositional semantics. The empirical ground is the auxiliary ordering found in the grammar of English. A new theory of semantic zones is proposed and formalized, and explicit semantic and morphological analyses are presented of all the auxiliary constructions of English, deriving their rigid order of composition without recourse to lexical-item-specific ordering statements.

