On (in)definite articles: implicatures and (un)grammaticality prediction

1991 ◽  
Vol 27 (2) ◽  
pp. 405-442 ◽  
Author(s):  
John A. Hawkins

Since Paul Grice published ‘Logic and conversation’ in 1975, there have been a number of attempts to develop his programmatic remarks on conversational and conventional implicatures further (see Gazdar, 1979; Atlas & Levinson, 1981; Horn, 1985; Sperber & Wilson, 1986; and especially Levinson, 1983, and the references cited therein). The result has been a growing understanding of the relationship between semantics and pragmatics, and more generally of human reasoning in everyday language use. Many aspects of natural language understanding that were previously thought to be part of the conventional meaning of a given expression can now be shown to be the result of conversational inference. And with cancellability as the diagnostic test, a number of traditional problems in the study of meaning are yielding to more satisfactory analyses. Even more ambitiously, implicatures are penetrating into core areas of syntax, as pragmatic theories of increasing subtlety are proposed for ‘grammatical’ phenomena such as Chomsky's (1981, 1982) binding principles (see Reinhart, 1983, and Levinson, 1987a, b, 1991).

2021 ◽  
Vol 27 (1) ◽  
pp. 46-63
Author(s):  
Gilberto Gomes

External negation of conditionals occurs in sentences beginning with ‘It is not true that if’ or similar phrases, and it is not rare in natural language. A conditional may also be denied by another with the same antecedent and the opposite consequent. Most often, when the denied conditional is implicative, the denying one is concessive, and vice versa. Here I argue that, in natural language pragmatics, ‘If $A$, $\sim B$’ entails ‘$\sim$(if $A, B$)’, but ‘$\sim$(if $A, B$)’ does not entail ‘If $A$, $\sim B$’. ‘If $A, B$’ and ‘If $A$, $\sim B$’ deny each other, but are contraries, not contradictories. Truth conditions that are relevant in human reasoning and discourse often depend not only on semantic but also on pragmatic factors. Examples are provided showing that sentences of the forms ‘$\sim$(if $A, B$)’ and ‘If $A$, $\sim B$’ may have different pragmatic truth conditions. The principle of Conditional Excluded Middle, therefore, does not apply to the natural language use of conditionals. Three squares of opposition provide a representation of the aforementioned relations.
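The claimed asymmetry can be contrasted with the classical material-conditional reading, under which the entailment runs in the opposite direction. The following minimal truth-table check (illustrative only; Python booleans stand in for the material conditional, not for the pragmatic readings discussed above) makes the contrast concrete:

```python
from itertools import product

def material(p, q):
    """Classical material conditional: p -> q is false only when p is true and q false."""
    return (not p) or q

# Materially, ~(if A, B) holds only when A is true and B false,
# and in that one case 'if A, ~B' holds too:
for A, B in product([True, False], repeat=2):
    if not material(A, B):            # ~(if A, B)
        assert material(A, not B)     # if A, ~B

# The converse fails: with A false, 'if A, ~B' is vacuously true
# while ~(if A, B) is false.
assert material(False, False)         # if A, ~B  (A=False, B=True)
assert material(False, True)          # (if A, B) is true, so its negation is false
```

On the material reading, then, the entailment runs from ‘$\sim$(if $A, B$)’ to ‘If $A$, $\sim B$’, the reverse of the pragmatic direction argued for above, which is one way to see why the material analysis sits badly with natural language conditionals.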


Author(s):  
Ichiro Kobayashi ◽  

At the annual conference of the Japan Society for Artificial Intelligence (JSAI), a special survival session called "Challenge for Realizing Early Profits (CREP)" is organized to support and promote excellent ideas in new AI technologies expected to be realized and to contribute to society within five years. Every year at the session, researchers propose their ideas, which are evaluated competitively by conference participants. The Everyday Language Computing (ELC) project, started in 2000 at the Brain Science Institute, RIKEN, and ended in 2005, joined the CREP program in 2001 to have the project evaluated by third parties, and held an organized session every year in which those interested in language-based intelligence and personalization took part. The project competed with other candidates, survived the session, and achieved the session's final goal of surviving for five years. Papers in this special issue selected for presentation at the session include the following: The first article, "Everyday-Language Computing Project Overview," by Ichiro Kobayashi et al., gives an overview and the basic technologies of the ELC Project. The second to sixth papers are related to the ELC Project. The second article, "Computational Models of Language Within Context and Context-Sensitive Language Understanding," by Noriko Ito et al., proposes a new database, called the "semiotic base," that compiles linguistic resources with contextual information, together with an algorithm for achieving natural language understanding with the semiotic base. The third article, "Systemic-Functional Context-Sensitive Text Generation in the Framework of Everyday Language Computing," by Yusuke Takahashi et al., proposes an algorithm to generate texts with the semiotic base. The fourth article, "Natural Language-Mediated Software Agentification," by Michiaki Iwazume et al., proposes a method for agentifying and verbalizing existing software applications, together with a scheme for operating/running them.
The fifth article, "Smart Help for Novice Users Based on Application Software Manuals," by Shino Iwashita et al., proposes a new framework for reusing electronic software manuals equipped with application software to provide tailor-made operation instructions to users. The sixth article, "Programming in Everyday Language: A Case for Email Management," by Toru Sugimoto et al., describes writing a computer program in natural language; rhetorical structure analysis is used to translate the natural language command structure into the program structure. The seventh article, "Application of Paraphrasing to Programming with Linguistic Expressions," by Nozomu Kaneko et al., proposes a method for translating natural language commands into a computer program through a natural language paraphrasing mechanism. The eighth article, "A Human Interface Based on Linguistic Metaphor and Intention Reasoning," by Koichi Yamada et al., proposes a new human interface paradigm called Push Like Talking (PLT), which enables people to operate machines as they talk. The ninth article, "Automatic Metadata Annotation Based on User Preference Evaluation Patterns," by Mari Saito, proposes effective automatic metadata annotation for content recommendations matched to user preference. The tenth article, "Dynamic Sense Representation Using Conceptual Fuzzy Sets," by Hiroshi Sekiya et al., proposes a method to represent word senses, which vary dynamically depending on context, using conceptual fuzzy sets. The eleventh article, "Common Sense from the Web? Naturalness of Everyday Knowledge Retrieved from WWW," by Rafal Rzepka et al., is a challenging attempt to acquire common-sense knowledge from information on the Web. The twelfth article, "Semantic Representation for Understanding Meaning Based on Correspondence Between Meanings," by Akira Takagi et al., proposes a new semantic representation for dealing with the Japanese language in natural language processing.
I thank the reviewers and contributors for their time and effort in making this special issue possible, and I wish to thank the JACIII editorial board, especially Professors Kaoru Hirota and Toshio Fukuda, the Editors-in-Chief, for inviting me to serve as Guest Editor of this Journal. Thanks also go to Kazuki Ohmori and Kenta Uchino of Fuji Technology Press for their sincere support.


1995 ◽  
Vol 34 (01/02) ◽  
pp. 176-186 ◽  
Author(s):  
R. H. Baud ◽  
A. M. Rassinoux ◽  
J. C. Wagner ◽  
C. Lovis ◽  
C. Juge ◽  
...  

Abstract: The analysis of medical narratives and the generation of natural language expressions are strongly dependent on the existence of an adequate representation language. Such a language has to be expressive enough to handle the complexity of human reasoning in the domain. Sowa’s Conceptual Graphs (CG) are an answer, and this paper presents a multilingual implementation using French, English and German. Current developments demonstrate the feasibility of an approach to natural language understanding where semantic aspects are dominant, in contrast to syntax-driven methods. The basic idea is to aggregate blocks of words according to semantic compatibility rules, following a method called Proximity Processing. The CG representation is gradually built, starting from single words in a semantic lexicon, to finally give a complete representation of the sentence in the form of a single CG. The process is dependent on specific rules of the medical domain, and for this reason is largely controlled by the declarative knowledge of the medical Linguistic Knowledge Base.
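The aggregation idea can be sketched in miniature. In the toy code below, words carry concept types from a semantic lexicon, and adjacent blocks are linked only when a compatibility rule relates their types; the lexicon entries, concept types, and rules are invented for illustration and are not taken from the paper's Linguistic Knowledge Base:

```python
# Invented mini-lexicon: surface word -> concept type.
LEXICON = {"fracture": "Lesion", "femur": "BodyPart", "left": "Laterality"}

# Invented compatibility rules: (type, type) -> relation label.
COMPATIBLE = {("Lesion", "BodyPart"): "has_location",
              ("BodyPart", "Laterality"): "has_laterality"}

def aggregate(words):
    """Link adjacent word blocks whose concept types are compatible,
    accumulating (concept, relation, concept) triples of a small graph."""
    blocks = [(w, LEXICON[w]) for w in words if w in LEXICON]
    graph = []
    for (w1, t1), (w2, t2) in zip(blocks, blocks[1:]):
        relation = COMPATIBLE.get((t1, t2))
        if relation:
            graph.append((w1, relation, w2))
    return graph

print(aggregate(["fracture", "femur", "left"]))
# -> [('fracture', 'has_location', 'femur'), ('femur', 'has_laterality', 'left')]
```

A real system would of course merge the resulting triples incrementally into a single conceptual graph and consult domain-specific rules, but the sketch shows how semantic compatibility, rather than syntax, can drive the grouping.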


2021 ◽  
Vol 5 (1) ◽  
pp. 51-60
Author(s):  
Istikomah Istikomah ◽  
Nurhayati Nurhayati

Since Greek and Roman antiquity in the fourth to second centuries BC, up to the Postmodern era, language has been one of the most central issues of philosophical study. Language and philosophy both focus on issues related to structure and meaning in natural language, as discussed in the philosophy of language and other disciplines, among them philosophical theories of meaning and truth, presuppositions, implicatures, and speech acts. This article discusses several case studies that illustrate the relationship between the philosophy of language and three branches of linguistics: syntax (Stanley, 2000), semantics (von Fintel, 2001), and pragmatics (Potts, 2005). The results of the study reveal the significance of, and interdependence between, philosophy and language. Philosophy requires language as a means of communicating ideas and also as an object of study; language, in turn, needs philosophy as a method for systematic analysis and for solving linguistic problems.
Keywords: linguistics, philosophy, syntax, semantics, pragmatics


Author(s):  
Annie Zaenen

Hearers and readers make inferences on the basis of what they hear or read. These inferences are partly determined by the linguistic form that the writer or speaker chooses to give to her utterance. The inferences can be about the states of the world that the speaker or writer wants the hearer or reader to conclude are pertinent, or they can be about the attitude of the speaker or writer vis-à-vis this state of affairs. The focus here is on inferences of the first type. Research in semantics and pragmatics has isolated a number of linguistic phenomena that make specific contributions to the process of inference. Broadly, entailments of asserted material, presuppositions (e.g., factive constructions), and invited inferences (especially scalar implicatures) can be distinguished. While we make these inferences all the time, they have been studied only piecemeal in theoretical linguistics. When attempts are made to build natural language understanding systems, the need for a more systematic and wholesale approach to the problem is felt. Some of the approaches developed in Natural Language Processing are based on linguistic insights, whereas others use methods that do not require (full) semantic analysis. In this article, I give an overview of the main linguistic issues and of a variety of computational approaches, especially those stimulated by the Recognizing Textual Entailment (RTE) challenges first proposed in 2004.


1998 ◽  
Vol 37 (04/05) ◽  
pp. 327-333 ◽  
Author(s):  
F. Buekens ◽  
G. De Moor ◽  
A. Waagmeester ◽  
W. Ceusters

Abstract: Natural language understanding systems have to exploit various kinds of knowledge in order to represent the meaning behind texts. Getting this knowledge in place is often such a huge enterprise that it is tempting to look for systems that can discover such knowledge automatically. We describe how the distinction between conceptual and linguistic semantics may assist in reaching this objective, provided that distinguishing between them is not done too rigorously. We present several examples to support this view and argue that in a multilingual environment, linguistic ontologies should be designed as interfaces between domain conceptualizations and linguistic knowledge bases.


1995 ◽  
Vol 34 (04) ◽  
pp. 345-351 ◽  
Author(s):  
A. Burgun ◽  
L. P. Seka ◽  
D. Delamarre ◽  
P. Le Beux

Abstract: In medicine, as in other domains, indexing and classification are natural human tasks used for information retrieval and representation. In the medical field, the encoding of patient discharge summaries is still a manual, time-consuming task. This paper describes an automated system for coding patient discharge summaries from the field of coronary diseases into the ICD-9-CM classification. The system is developed in the context of the European AIM MENELAS project, a natural-language understanding system which uses the conceptual-graph formalism. Indexing is performed using a two-step processing scheme: a first recognition stage is implemented by a matching procedure, and a second selection stage is carried out according to the coding priorities. We describe the general features of the translation of the classification terms into the conceptual-graph model and of compliance with the coding rules. An advantage of the system is that it provides an objective evaluation and assessment procedure for natural-language understanding.
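A caricature of such a two-step scheme might look as follows; the code patterns and priorities below are invented for illustration and are not the actual MENELAS rules, which operate over conceptual graphs rather than raw strings:

```python
# Stage 1 (recognition): match classification terms against the summary text.
# Stage 2 (selection): choose among candidate codes by coding priority.
# All patterns and priorities here are invented for illustration.
CODE_PATTERNS = {
    "410.9": ["acute myocardial infarction"],
    "414.0": ["coronary atherosclerosis"],
    "413.9": ["angina"],
}
PRIORITY = {"410.9": 1, "414.0": 2, "413.9": 3}  # lower number = higher priority

def recognize(summary):
    """Stage 1: return every code one of whose terms occurs in the text."""
    text = summary.lower()
    return [code for code, terms in CODE_PATTERNS.items()
            if any(term in text for term in terms)]

def select(candidates):
    """Stage 2: keep the highest-priority candidate, if any."""
    return min(candidates, key=PRIORITY.__getitem__) if candidates else None

summary = "Patient admitted with angina; acute myocardial infarction confirmed."
print(select(recognize(summary)))  # -> 410.9
```

Separating recognition from selection, as the paper describes, keeps the matching procedure independent of the coding priorities, so either stage can be refined without disturbing the other.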


2018 ◽  
Author(s):  
Sharath Srivatsa ◽  
Shyam Kumar V N ◽  
Srinath Srinivasa

In recent times, computational modeling of narratives has gained enormous interest in fields like Natural Language Understanding (NLU), Natural Language Generation (NLG), and Artificial General Intelligence (AGI). There is a growing body of literature addressing understanding of narrative structure and generation of narratives. Narrative generation is known to be a far more complex problem than narrative understanding [20].

