Concluding reflections on natural language and embodiment

Author(s):  
Patrick Duffley

This chapter highlights a dimension of embodiment that is often overlooked and that concerns the basic design architecture of human language itself: the ineluctable fact that the fundamental relation on which language is based is the association between a mind-engendered meaning and a bodily produced sign. It is argued that this oversight is often due to treating meaning at the level of the sentence or the construction, rather than at that of the lower-level linguistic items, where the linguistic sign is stored in a stable, permanent, and direct relation with its meaning outside of any particular context. Building linguistic analysis up from the ground level provides it with a solid foundation and increases its explanatory power.

Author(s):  
Patrick Duffley

This book steers a middle course between two opposing conceptions that currently dominate the field of semantics, the logical and cognitive approaches. It brings to light the inadequacies of both frameworks and argues, along with the Columbia School, that linguistic semantics must be grounded in the linguistic sign itself and the meaning it conveys across the full range of its uses. The book offers 12 case studies demonstrating the explanatory power of a sign-based semantics, dealing with topics such as complementation with aspectual and causative verbs, control and raising, wh- words, full-verb inversion, and existential-there constructions. It calls for a radical revision of the semantics/pragmatics interface, proposing that the dividing line be drawn between semiologically signified notional content (i.e. what is linguistically encoded) and non-semiologically signified notional content (i.e. what is not encoded but still communicated). This highlights a dimension of embodiment that concerns the basic design architecture of human language itself: the ineluctable fact that the fundamental relation on which language is based is the association between a mind-engendered meaning and a bodily produced sign. It is argued that linguistic analysis often disregards this fact and treats meaning at the level of the sentence or the construction, rather than at that of the lower-level linguistic items, where the linguistic sign is stored in a stable, permanent, and direct relation with its meaning outside of any particular context. Building linguistic analysis up from the ground level provides it with a more solid foundation and increases its explanatory power.


Author(s):  
Patrick Duffley

This chapter demonstrates the explanatory power of an approach that grounds the analysis of natural-language meaning on the linguistic sign itself. Cases covered include the multifarious uses of the preposition for, verbal complementation with aspectual and causative verbs, the phenomena of control and raising in adjective + to-infinitive constructions, the use of wh- words with the bare and to-infinitives, the modal and non-modal uses of the verbs dare and need, and a meaning-based account of full-verb inversion and existential-there constructions.


2018
Vol 14 (3)
pp. 261-273
Author(s):  
Hongwei Jia

Abstract Previous semiotic research classified human signs into linguistic signs and non-linguistic signs, with human language and the writing system as the core members of the sign family. However, this classification cannot cover all types of translation in the broad sense, understood as sign-transformation activities. It is therefore necessary to reclassify meaning-making signs into tangible signs and intangible signs on the basis of their medium: tangible signs are attached to the outer medium of the physical world, whereas intangible signs are attached to the inner medium of the human cerebral nervous system. The three types of transformation, namely from tangible signs into tangible signs, from tangible signs into intangible signs, and from intangible signs into tangible signs, lay a solid foundation for the categorization of sign activities in translation semiotics. Such a reclassification of signs can not only enrich semiotic theories of sign types, human communication, and sign-text interpretation, but also inspire new research on translation types, the translation process, translators’ thinking systems and psychology, and the mechanism of machine translation.


2019
Author(s):  
Edward Gibson
Richard Futrell
Steven T. Piantadosi
Isabelle Dautriche
Kyle Mahowald
...  

Cognitive science applies diverse tools and perspectives to study human language. Recently, an exciting body of work has examined linguistic phenomena through the lens of efficiency in usage: what otherwise puzzling features of language find explanation in formal accounts of how language might be optimized for communication and learning? Here, we review studies that deploy formal tools from probability and information theory to understand how and why language works the way that it does, focusing on phenomena ranging from the lexicon through syntax. These studies show how a pervasive pressure for efficiency guides the forms of natural language and indicate that a rich future for language research lies in connecting linguistics to cognitive psychology and mathematical theories of communication and inference.
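One of the information-theoretic tools this line of work deploys can be made concrete with a small sketch: computing the Shannon entropy of a word-frequency distribution, a standard measure of how many bits per word an optimal code would need. The corpus and function names below are illustrative, not drawn from the studies under review.

```python
import math
from collections import Counter

def entropy(tokens):
    """Shannon entropy (bits per word) of a unigram distribution."""
    counts = Counter(tokens)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# A frequency-skewed toy "corpus": a few words account for most tokens,
# so an optimal code can assign short codes to the frequent words.
corpus = "the cat sat on the mat the cat ran".split()
print(round(entropy(corpus), 3))  # → 2.419
```

The skew matters: a uniform distribution over the same six word types would cost log2(6) ≈ 2.585 bits per word, so frequency skew is one simple sense in which a lexicon can be communicatively efficient.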


Author(s):  
Thomas J. Marlowe

Classical (Aristotelian or Boolean) logics provide a solid foundation for mathematical reasoning, but are limited in expressivity and necessarily incomplete. Effective understanding of logic in the modern world entails, for the instructor and advanced students, an understanding of the wider context. This chapter surveys standard extensions used in mathematical reasoning, artificial intelligence and cognitive science, and natural-language reasoning and understanding, as well as inherent limitations on reasoning and computing. Initial technical extensions include equality of terms, integer arithmetic, and quantification over sets and relations. To deal with natural reasoning, the chapter explores temporal and modal logics, fuzzy logic and probabilistic models, and relevance logic. Finally, the chapter considers limitations to logic and knowledge via an overview of the fundamental results of Turing, Gödel, and others, and their connection to the state of mathematics, computing, and science in the modern world.
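One of the extensions surveyed, fuzzy logic, can be illustrated with a minimal sketch using the classical Zadeh min/max operators over truth values in [0, 1]. The operator choice and the example degrees are illustrative assumptions; other t-norms exist and the chapter discusses alternatives.

```python
# Fuzzy logic: truth values range over the interval [0, 1] instead of {0, 1}.
# Zadeh operators (one standard choice among several t-norms):
def f_and(a, b):
    return min(a, b)

def f_or(a, b):
    return max(a, b)

def f_not(a):
    return 1.0 - a

# Graded propositions: "the room is warm" to degree 0.7,
# "the room is humid" to degree 0.4.
warm, humid = 0.7, 0.4
print(f_and(warm, humid))        # → 0.4
print(f_or(warm, humid))         # → 0.7
print(round(f_not(warm), 1))     # → 0.3
```

Note that with degrees strictly between 0 and 1, f_or(warm, f_not(warm)) falls short of 1.0, so the law of the excluded middle fails, which is precisely the kind of departure from classical logic the survey addresses.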


1972
Vol 182 (1068)
pp. 255-276

Human language is a uniquely rewarding subject of psychological investigation because of the richness of its structure and its wide expressive power; and the ability to communicate in language is a skill possessed by almost all adult human beings. But the scientific study of language calls for appropriate modes of description, and the concept of an algorithm enables one to relate the phenomena of language to those of behaviour in general. A useful paradigm is to be found in computing science, where algorithms are expressed as programs written in specially designed languages. Like computer languages, natural languages have both a syntactic and a semantic aspect, and human utterances can be viewed as programs to be implemented by the hearer. This idea has been used in the development of computer programs with which one can converse in simple English.
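The idea of utterances as programs to be implemented by the hearer can be sketched with a minimal pattern-action dialogue loop, in the spirit of the early conversational programs the abstract alludes to. The rules and replies below are invented for illustration, not taken from any historical system.

```python
import re

# Each rule pairs an input pattern with a response template; matching an
# utterance against a rule and producing the response is the hearer
# "executing" the utterance-as-program.
RULES = [
    (re.compile(r"my name is (\w+)", re.I), "Hello, {0}."),
    (re.compile(r"i feel (\w+)", re.I),     "Why do you feel {0}?"),
    (re.compile(r".*"),                     "Tell me more."),  # fallback
]

def respond(utterance):
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(*match.groups())

print(respond("My name is Ada"))   # → Hello, Ada.
print(respond("I feel puzzled"))   # → Why do you feel puzzled?
```

The rule table is ordered, so the catch-all pattern at the end guarantees a response; this first-match-wins control flow is the simplest way to keep the "interpreter" total over all inputs.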


2013
Vol 21 (2)
pp. 167-200
Author(s):  
Sebastian Padó
Tae-Gil Noh
Asher Stern
Rui Wang
Roberto Zanoli

Abstract A key challenge at the core of many Natural Language Processing (NLP) tasks is the ability to determine which conclusions can be inferred from a given natural language text. This problem, called the Recognition of Textual Entailment (RTE), has initiated the development of a range of algorithms, methods, and technologies. Unfortunately, research on Textual Entailment (TE), like semantics research more generally, is fragmented into studies focusing on various aspects of semantics such as world knowledge, lexical and syntactic relations, or more specialized kinds of inference. This fragmentation has problematic practical consequences. Notably, interoperability among the existing RTE systems is poor, and reuse of resources and algorithms is mostly infeasible. This also makes systematic evaluations very difficult to carry out. Finally, the field presents potential end users with a wide array of approaches and little guidance on which to pick. Our contribution to this situation is the novel EXCITEMENT architecture, which was developed to enable and encourage the consolidation of methods and resources in the textual entailment area. It decomposes RTE into components with strongly typed interfaces. We specify (a) a modular linguistic analysis pipeline and (b) a decomposition of the ‘core’ RTE methods into top-level algorithms and subcomponents. We identify four major subcomponent types, including knowledge bases and alignment methods. The architecture was developed with a focus on generality, supporting all major approaches to RTE and encouraging language independence. We illustrate the feasibility of the architecture by constructing mappings of major existing systems onto the architecture. The practical implementation of this architecture forms the EXCITEMENT open platform.
It is a suite of textual entailment algorithms and components which contains the three systems named above, including linguistic-analysis pipelines for three languages (English, German, and Italian), and comprises a number of linguistic resources. By addressing the problems outlined above, the platform provides a comprehensive and flexible basis for research and experimentation in textual entailment and is available as open source software under the GNU General Public License.
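The decomposition of RTE into components with strongly typed interfaces can be sketched as follows. All names here (TextHypothesisPair, EntailmentDecision, WordOverlapBaseline) are hypothetical illustrations, not the platform's actual API, and the toy word-overlap algorithm merely stands in for a real entailment core.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Protocol

class EntailmentDecision(Enum):
    ENTAILMENT = "entailment"
    NONENTAILMENT = "nonentailment"

@dataclass
class TextHypothesisPair:
    text: str
    hypothesis: str

class EntailmentComponent(Protocol):
    """The typed interface every core algorithm must satisfy, so that
    components from different systems can be swapped interchangeably."""
    def decide(self, pair: TextHypothesisPair) -> EntailmentDecision: ...

class WordOverlapBaseline:
    """A toy core algorithm: decide by lexical overlap ratio."""
    def __init__(self, threshold: float = 0.8):
        self.threshold = threshold

    def decide(self, pair: TextHypothesisPair) -> EntailmentDecision:
        text_words = set(pair.text.lower().split())
        hyp_words = set(pair.hypothesis.lower().split())
        overlap = len(hyp_words & text_words) / max(len(hyp_words), 1)
        return (EntailmentDecision.ENTAILMENT
                if overlap >= self.threshold
                else EntailmentDecision.NONENTAILMENT)

pair = TextHypothesisPair("A cat sat on the mat", "a cat sat")
print(WordOverlapBaseline().decide(pair).value)   # → entailment
```

Because any object with a conforming decide method satisfies the Protocol, a knowledge-base-driven or alignment-based core could replace the baseline without changing calling code, which is the interoperability point the architecture is designed around.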

