Parsing with Situation Semantics

1991 ◽  
Vol 14 (2) ◽  
pp. 141-189
Author(s):  
Thomas Polzin ◽  
Hannes Rieser

This paper integrates several related lines of research in an implemented model. Its main aim is to show how principles of situation semantics concerning meanings, constraints and the preferred ontology can be represented and mapped onto expressions of natural language in a straightforward way. For assembling larger chunks of information a unification-based approach is used. The semantics is grafted upon a shift-reduce parser which does the main work in associating expressions with meanings. In order to capture the much-debated difference between sentence and utterance meaning, the whole machinery first provides an abstract meaning (conceived as a constraint) in which the parameters are non-anchored. Subsequently, a model in the technical sense provides anchors for the parameters and thus yields the utterance meaning of the sentence parsed. Finally, it is checked whether this semantic representation of the parsing result can be regarded as a genuine situation-semantic object; this is done by showing that it conforms to the axioms of a situation-theoretic model. As a result, parses are far more constrained and theory-guided than usual. The idea of parsing used goes back to work originally done by Barwise and Perry; the coding of semantic entities owes much to proposals issued by K. Devlin and D. Westerståhl. The whole model is implemented in PROLOG.
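A minimal Python sketch of the two-stage meaning assignment described above (the paper's own implementation is in PROLOG): the parser first yields a parameterized constraint as the abstract sentence meaning, and anchoring the parameters in a model yields the utterance meaning. The class and function names here are invented for illustration, not taken from the paper.

```python
from dataclasses import dataclass

# Sketch of the two-stage meaning assignment: an abstract (sentence)
# meaning with non-anchored parameters, then anchoring in a model.
# All names here are illustrative, not the paper's Prolog encoding.

@dataclass(frozen=True)
class Parameter:
    """A non-anchored parameter of an abstract (sentence) meaning."""
    name: str

@dataclass
class Constraint:
    """Sentence meaning: a relation over (possibly unanchored) arguments."""
    relation: str
    args: tuple

    def anchor(self, anchoring: dict):
        """Utterance meaning: replace parameters with anchors from a model."""
        anchored = tuple(anchoring.get(a, a) if isinstance(a, Parameter) else a
                         for a in self.args)
        return Constraint(self.relation, anchored)

# Abstract meaning of "X sees Y": parameters stand in for context-fixed values.
x, y = Parameter("x"), Parameter("y")
sentence_meaning = Constraint("sees", (x, y))

# A model (in the technical sense) supplies anchors for the parameters.
utterance_meaning = sentence_meaning.anchor({x: "jackie", y: "molly"})
print(utterance_meaning)  # Constraint(relation='sees', args=('jackie', 'molly'))
```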

2019 ◽  
Vol 5 (1) ◽  
Author(s):  
Jens Nevens ◽  
Paul Van Eecke ◽  
Katrien Beuls

In order to be able to answer a natural language question, a computational system needs three main capabilities. First, the system needs to be able to analyze the question into a structured query, revealing its component parts and how these are combined. Second, it needs to have access to relevant knowledge sources, such as databases, texts or images. Third, it needs to be able to execute the query on these knowledge sources. This paper focuses on the first capability, presenting a novel approach to semantically parsing questions expressed in natural language. The method makes use of a computational construction grammar model for mapping questions onto their executable semantic representations. We demonstrate and evaluate the methodology on the CLEVR visual question answering benchmark task. Our system achieves 100% accuracy, effectively solving the language understanding part of the benchmark task. Additionally, we demonstrate how this solution can be embedded in a full visual question answering system, in which a question is answered by executing its semantic representation on an image. The main advantages of the approach include (i) its transparent and interpretable properties, (ii) its extensibility, and (iii) the fact that the method does not rely on any annotated training data.
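A toy Python sketch of the first capability: a question is mapped onto an executable semantic representation (a CLEVR-style functional program) and executed against a scene. The paper builds this mapping with a computational construction grammar; the hard-coded program below merely stands in for that analysis step, and the scene and function names are invented.

```python
# Hypothetical scene in the style of CLEVR's object annotations.
scene = [
    {"shape": "cube", "color": "red", "size": "large"},
    {"shape": "sphere", "color": "red", "size": "small"},
    {"shape": "cube", "color": "blue", "size": "small"},
]

# Executable primitives in the style of CLEVR functional programs.
def filter_color(objs, color):
    return [o for o in objs if o["color"] == color]

def filter_shape(objs, shape):
    return [o for o in objs if o["shape"] == shape]

def count(objs):
    return len(objs)

# Structured query for "How many red cubes are there?":
# each step is (function, extra_args); steps compose left to right.
program = [(filter_color, ("red",)), (filter_shape, ("cube",)), (count, ())]

def execute(program, objs):
    result = objs
    for fn, extra in program:
        result = fn(result, *extra)
    return result

print(execute(program, scene))  # -> 1
```

Because the query is an explicit program rather than an opaque vector, each intermediate result can be inspected, which is the transparency property the abstract emphasizes.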


Complexity ◽  
2020 ◽  
Vol 2020 ◽  
pp. 1-14
Author(s):  
Leilei Kong ◽  
Zhongyuan Han ◽  
Yong Han ◽  
Haoliang Qi

Paraphrase identification is central to many natural language applications. Based on the insight that a successful paraphrase identification model needs to adequately capture both the semantics of the language objects and their interactions, we present a deep paraphrase identification model that interacts semantics with syntax (DPIM-ISS). DPIM-ISS introduces linguistic features manifested as syntactic features to produce more explicit structures, and encodes the semantic representation of a sentence over different syntactic structures by interacting semantics with syntax. DPIM-ISS then learns paraphrase patterns from this representation by exploiting a convolutional neural network with a convolution-pooling structure. Experiments are conducted on the Microsoft Research Paraphrase (MSRP) corpus, the PAN 2010 corpus, and the PAN 2012 corpus for paraphrase plagiarism detection. The experimental results demonstrate that DPIM-ISS outperforms classical word-matching approaches, syntax-similarity approaches, convolutional neural network-based models, and some deep paraphrase identification models.
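A schematic Python sketch of the convolution-pooling step described above: per-token semantic vectors are concatenated with syntactic-feature vectors, convolved over token windows with shared filters, max-pooled into sentence vectors, and compared. The dimensions, the random stand-in embeddings, and the cosine comparison are placeholders, not the actual DPIM-ISS configuration.

```python
import numpy as np

rng = np.random.default_rng(0)
D_SEM, D_SYN, N_FILTERS, WIDTH = 8, 4, 6, 3
# Convolution filters shared across both sentences.
W = rng.normal(size=(N_FILTERS, WIDTH * (D_SEM + D_SYN)))

def encode(tokens):
    # Per-token vector = semantic embedding + syntactic features
    # (random stand-ins here for trained embeddings and parse features).
    X = rng.normal(size=(len(tokens), D_SEM + D_SYN))
    windows = np.array([X[i:i + WIDTH].ravel()
                        for i in range(len(tokens) - WIDTH + 1)])
    C = np.maximum(0.0, windows @ W.T)  # ReLU convolution feature maps
    return C.max(axis=0)                # max pooling over positions

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

s1 = encode("the court dismissed the appeal".split())
s2 = encode("the appeal was rejected by the court".split())
print(f"paraphrase score: {cosine(s1, s2):.3f}")
```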


1979 ◽  
Vol 15 (1) ◽  
pp. 39-47 ◽  
Author(s):  
Geoffrey Sampson

Many contemporary linguists hold that an adequate description of a natural language must represent many of its vocabulary items as syntactically and/or semantically complex. A sentence containing the word kill, for instance, will on this view be assigned a ‘deep syntactic structure’ or ‘semantic representation’ in which kill is represented by a portion or portions of tree-structure, the lowest nodes of which are labelled with ‘semantic primitives’ such as CAUSE and DIE, or CAUSE, BECOME, NOT and ALIVE. In the case of words such as cats or walked, which are formed in accordance with productive rules of ‘inflexional’ rather than ‘derivational’ morphology, there is little dispute that their composite status will be reflected at most or all levels of linguistic representation. (That is why I refer, above, to ‘vocabulary items’: cat and cats may be called different ‘words’, but not different elements of the English vocabulary.) When morphologically simple words such as kill are treated as composite at a ‘deeper’ level, I, for one, find my credulity strained to breaking point. (The case of words formed in accordance with productive or non-productive rules of derivational morphology, such as killer or kingly, is an intermediate one and I shall briefly return to it below.)
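For concreteness, here is a small Python sketch of the kind of decomposition at issue: kill represented not as an atom but as a tree over the semantic primitives CAUSE, BECOME, NOT and ALIVE cited above. The Node class is invented for illustration.

```python
from dataclasses import dataclass
from typing import Tuple

# 'kill' as a tree of semantic primitives rather than an atomic item,
# following the CAUSE/BECOME/NOT/ALIVE analysis discussed in the text.

@dataclass(frozen=True)
class Node:
    label: str                           # primitive (CAUSE, ...) or variable
    children: Tuple["Node", ...] = ()

    def __str__(self):
        if not self.children:
            return self.label
        return f"{self.label}({', '.join(map(str, self.children))})"

x, y = Node("x"), Node("y")
kill = Node("CAUSE", (x, Node("BECOME", (Node("NOT", (Node("ALIVE", (y,)),)),))))
print(kill)  # CAUSE(x, BECOME(NOT(ALIVE(y))))
```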


2001 ◽  
Vol 37 (2) ◽  
pp. 287-312 ◽  
Author(s):  
Ellen Thompson

This article explores the interface between the syntactic and semantic representation of natural language with respect to the interpretation of time. The main claim of the paper is that the semantic relationship of temporal dependency requires syntactic locality at LF. Based on this claim, I explore the syntax and semantics of gerundive relative clauses. I argue that since gerundive relatives are temporally dependent on the tense of the main clause, they need to be local with a temporal element of the main clause at LF. I show that gerundive relatives receive different temporal interpretations depending on their syntactic position at LF. This analysis sheds light on the behavior of gerundive relatives in constructions involving coordination, existential there, scope of quantificational and cardinality adverbials, extraposition, presuppositionality effects and binding-theoretic reconstruction effects.


Author(s):  
Mark Steedman

Linguists and philosophers since Aristotle have attempted to reduce natural language semantics in general, and the semantics of eventualities in particular, to a ‘language of mind’, expressed in terms of various collections of underlying language-independent primitive concepts. While such systems have proved insightful enough to suggest that such a universal conceptual representation is in some sense psychologically real, the primitive relations proposed, based on oppositions like agent-patient, event-state, etc., have remained incompletely convincing. This chapter proposes that the primitive concepts of the language of mind are ‘hidden’, or latent, and must be discovered automatically: by detecting consistent patterns of entailment in the vast amounts of text made available by the internet, automatic syntactic parsers and machine learning can be used to mine a form- and language-independent semantic representation language for natural language semantics. The representations involved combine a distributional representation of ambiguity with a language of logical form.
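A toy Python sketch of the mining step described above: predicates that consistently link the same argument pairs across a parsed corpus are grouped as candidates for the same latent primitive. The triples and the Jaccard-overlap signal are invented stand-ins for parser output over web-scale text and for a real entailment detector.

```python
from collections import defaultdict

# Invented subject-predicate-object triples standing in for the output
# of automatic syntactic parsing over large amounts of web text.
triples = [
    ("aspirin", "treats", "headache"),
    ("aspirin", "relieves", "headache"),
    ("ibuprofen", "treats", "inflammation"),
    ("ibuprofen", "relieves", "inflammation"),
    ("aspirin", "causes", "nausea"),
]

# Map each predicate to the set of argument pairs it links.
pairs_by_pred = defaultdict(set)
for subj, pred, obj in triples:
    pairs_by_pred[pred].add((subj, obj))

def overlap(p, q):
    """Jaccard overlap of argument pairs: a crude entailment signal."""
    a, b = pairs_by_pred[p], pairs_by_pred[q]
    return len(a & b) / len(a | b)

print(overlap("treats", "relieves"))  # 1.0 -> candidate shared primitive
print(overlap("treats", "causes"))    # 0.0 -> distinct relations
```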

