Categorial grammar: between natural language and formal logic

Author(s): Maciej Kandulski
2020 ◽ pp. 19-38

Author(s): Ash Asudeh ◽ Gianluca Giorgolo

This chapter aims to introduce sufficient category theory to enable a formal understanding of the rest of the book. It first introduces the fundamental notion of a category. It then introduces functors, which are maps between categories. Next it introduces natural transformations, which are natural ways of mapping between functors. The stage is then set to at last introduce monads, which are defined in terms of functors and natural transformations. The last part of the chapter provides a compositional calculus with monads for natural language semantics (in other words, a logic for working with monads) and then relates the compositional calculus to Glue Semantics and to a very simple categorial grammar for parsing. The chapter ends with some exercises to aid understanding.
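As a minimal illustration of the monadic machinery described above (not the chapter's own calculus), the sketch below implements the option ("Maybe") monad as a pair of operations, `unit` and `bind`, and uses it to model presupposition failure in a toy semantics. The lexicon entries (`the`, `is_bald`) are hypothetical, chosen only to show how failure propagates through monadic composition.

```python
# Maybe monad: a value is either None (failure) or ("just", x).

def unit(x):
    """Lift a plain value into the monad (the unit natural transformation)."""
    return ("just", x)

def bind(m, f):
    """Sequence a monadic value with a function returning a monadic value."""
    if m is None:          # failure propagates without f ever being called
        return None
    _, x = m
    return f(x)

# A toy definite article: denotes only if there is exactly one candidate.
def the(entities):
    return unit(entities[0]) if len(entities) == 1 else None

bald = {"louis"}

def is_bald(x):
    return unit(x in bald)

# "The king of France is bald": no referent, so the whole sentence
# computes to None (presupposition failure) rather than False.
kings_of_france = []
result = bind(the(kings_of_france), is_bald)
```

Composition via `bind` is the monadic sequencing that a compositional calculus makes available at the level of the logic, so partiality never has to be threaded through the lexical entries by hand.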


2019 ◽ Vol 29 (06) ◽ pp. 731-732
Author(s): Valeria de Paiva ◽ Ruy de Queiroz

This volume collects together extended and improved versions of papers presented at the twenty-second Workshop on Logic, Language, Information and Computation (WoLLIC 2015) held at the School of Informatics and Computing, Indiana University, Bloomington, Indiana. WoLLIC is an annual international forum on interdisciplinary research involving formal logic, computing and programming theory, and natural language and reasoning. Each meeting includes invited talks and tutorials as well as contributed papers. Contributions were invited on all appropriate subjects of Logic, Language, Information, and Computation, with particular interest in cross-disciplinary topics.


Linguistics ◽ 2019
Author(s): Glyn Morrill

The term “categorial grammar” refers to a variety of approaches to syntax and semantics in which expressions are categorized by recursively defined types and in which grammatical structure is the projection of the properties of the lexical types of words. In the earliest forms of categorial grammar types are functional/implicational and interact by the logical rule of Modus Ponens. In categorial grammar there are two traditions: the logical tradition that grew out of the work of Joachim Lambek, and the combinatory tradition associated with the work of Mark Steedman. The logical approach employs methods from mathematical logic and situates categorial grammars in the context of substructural logic. The combinatory approach emphasizes practical applicability to natural language processing and situates categorial grammars within extended rewriting systems. The logical tradition interprets the history of categorial grammar as comprising evolution and generalization of basic functional/implicational types into a rich categorial logic suited to the characterization of the syntax and semantics of natural language which is at once logical, formal, computational, and mathematical, reaching a level of formal explicitness not achieved in other grammar formalisms. This is the interpretation of the field adopted in this article. This research has been partially supported by MINICO project TIN2017–89244-R. Thanks to Stepan Kuznetsov, Oriol Valentín and Sylvain Salvati for comments and suggestions. All errors and shortcomings are the author’s own.
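The type cancellation described above, in which functional types interact by Modus Ponens, can be sketched in a few lines. In the usual notation, A/B seeks a B to its right and B\A seeks a B to its left; forward and backward application are the two directed forms of Modus Ponens. The lexicon below is a hypothetical toy, not drawn from any of the formalisms cited.

```python
# Atomic types are strings; functional types are tuples:
# ("/", A, B) encodes A/B, and ("\\", B, A) encodes B\A.

def forward(f, arg):
    """A/B, B => A (forward application, a directed Modus Ponens)."""
    if isinstance(f, tuple) and f[0] == "/" and f[2] == arg:
        return f[1]
    return None

def backward(arg, f):
    """B, B\\A => A (backward application)."""
    if isinstance(f, tuple) and f[0] == "\\" and f[1] == arg:
        return f[2]
    return None

lexicon = {
    "John":   "NP",
    "sleeps": ("\\", "NP", "S"),   # NP\S: an intransitive verb
    "the":    ("/", "NP", "N"),    # NP/N: a determiner
    "cat":    "N",
}

# "John sleeps": NP, NP\S => S by backward application.
sentence_type = backward(lexicon["John"], lexicon["sleeps"])
# "the cat": NP/N, N => NP by forward application.
phrase_type = forward(lexicon["the"], lexicon["cat"])
```

A Lambek-style categorial logic generalizes these two cancellation rules into a full substructural sequent calculus, but application alone already projects grammatical structure from the lexical types, as the abstract describes.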


2012 ◽ Vol 7
Author(s): Sandra Kübler ◽ Eric Baucom ◽ Matthias Scheutz

In this paper, we introduce the syntactic annotation of the CReST corpus, a corpus of natural language dialogues obtained from humans performing a cooperative, remote search task. The corpus contains the speech signals as well as transcriptions of the dialogues, which are additionally annotated for dialogue structure, disfluencies, and for syntax. The syntactic annotation comprises POS annotation, Penn Treebank style constituent annotations, dependency annotations, and combinatory categorial grammar annotations. The corpus is the first of its kind, providing parallel syntactic annotation based on three different grammar formalisms for a dialogue corpus. All three annotations are manually corrected, thus providing a high quality resource for linguistic comparisons, but also for parser evaluation across frameworks.


2014 ◽ Vol 17 (1) ◽ pp. 152-190
Author(s): Friedrich Reinmuth

Using a short excerpt from Anselm’s Responsio as an example, this paper tries to present logical reconstruction as a special type of exegetical interpretation by paraphrase that is subject to (adapted) hermeneutic maxims and presumption rules that govern exegetical interpretation in general. As such, logical reconstruction will be distinguished from the non-interpretative enterprise of formalization and from the development of theories of logical form, which provide a framework in which formalization and reconstruction take place. Yet, even though logical reconstruction is dependent on methods of formalization, it allows us to use formal methods for the analysis and assessment of natural language texts that are not readily formalizable and is thus an important tool when it comes to applying the concepts and methods of formal logic to such texts.


Author(s): Hartley Slater

The formal structure of Frege’s ‘concept script’ has been widely adopted in logic text books since his time, even though its rather elaborate symbols have been abandoned for more convenient ones. But there are major difficulties with its formalisation of pronouns, predicates, and propositions, which infect the whole of the tradition which has followed Frege. It is shown first in this paper that these difficulties are what has led to many of the most notable paradoxes associated with this tradition; the paper then goes on to indicate the lines on which formal logic—and also the lambda calculus and set theory—needs to be restructured, to remove the difficulties. Throughout the study of what have come to be known as first-, second-, and higher-order languages, what has been primarily overlooked is that these languages are abstractions. Many well known paradoxes, we shall see, arose because of the elementary level of simplification which has been involved in the abstract languages studied. Straightforward resolutions of the paradoxes immediately appear merely through attention to languages of greater sophistication, notably natural language, of course. The basic problem has been exclusive attention to a theory in place of what it is a theory of, leading to a focus on mathematical manipulation, which ‘brackets off’ any natural language reading.


2000 ◽ Vol 17 ◽ pp. 53-78
Author(s): David Dowty

The distinction between COMPLEMENTS and ADJUNCTS has a long tradition in grammatical theory, and it is also included in some way or other in most current formal linguistic theories. But it is a highly vexed distinction, for several reasons, one of which is that no diagnostic criteria have emerged that will reliably distinguish adjuncts from complements in all cases – too many examples seem to "fall into the crack" between the two categories, no matter how theorists wrestle with them. In this paper, I will argue that this empirical diagnostic "problem" is, in fact, precisely what we should expect to find in natural language, when a proper understanding of the adjunct/complement distinction is achieved: the key hypothesis is that a complete grammar should provide a DUAL ANALYSIS of every complement as an adjunct, and potentially, an analysis of any adjunct as a complement. What this means and why it is motivated by linguistic evidence will be discussed in detail.  

