Language as a cognitive technology

2002 ◽  
Vol 1 (1) ◽  
pp. 35-61 ◽  
Author(s):  
Marcelo Dascal

Ever since Descartes singled out the ability to use natural language appropriately in any given circumstance as the proof that humans — unlike animals and machines — have minds, an idea that Turing transformed into his well-known test for machine intelligence, the close connection between language and cognition has been widely acknowledged, although it has been accounted for in quite different ways. Recent advances in natural language processing, as well as attempts to create “embodied conversational agents” which couple language processing with its natural bodily correlates (gestures, facial expression, and gaze direction), in the hope of developing human-computer interfaces based on natural — rather than formal — language, have again brought to the fore the question of how far we can expect machines to master the cognitive abilities required for language use. In this paper, I approach this issue from a different angle, inquiring whether language can be viewed as a “cognitive technology”, employed by humans as a tool for the performance of certain cognitive tasks. I propose a definition of “cognitive technology” that encompasses both external (or “prosthetic”) and internal cognitive devices. A number of parameters in terms of which a typology of cognitive technologies of both kinds can be sketched is also set forth. It is then argued that inquiring about language’s role in cognition allows us to re-frame the traditional debate about the relationship between language and thought, by examining how specific aspects of language actually influence cognition — as an environment, a resource, or a tool.
This perspective helps bring together the contributions of the philosophical “linguistic turn” in epistemology and the incipient “epistemology of cognitive technology”. It also permits a more precise and fruitful discussion of whether, to what extent, and which of the language-based cognitive technologies we naturally use can be emulated by the kinds of technologies available at present or in the foreseeable future.

Author(s):  
Andrej Zgank ◽  
Izidor Mlakar ◽  
Uros Berglez ◽  
Danilo Zimsek ◽  
Matej Borko ◽  
...  

The chapter presents an overview of human-computer interfaces, a crucial element of ambient intelligence solutions. The focus is on embodied conversational agents, which are needed to communicate with users in the most natural way. Different input and output modalities, together with supporting methods for processing the captured information (e.g., automatic speech recognition, gesture recognition, natural language processing, dialog processing, text-to-speech synthesis), play a crucial role in providing a high quality of experience to the user. As an example, the use of an embodied conversational agent in the e-Health domain is proposed.


2008 ◽  
Vol 34 (4) ◽  
pp. 597-614 ◽  
Author(s):  
Trevor Cohn ◽  
Chris Callison-Burch ◽  
Mirella Lapata

Automatic paraphrasing is an important component in many natural language processing tasks. In this article we present a new parallel corpus with paraphrase annotations. We adopt a definition of paraphrase based on word alignments and show that it yields high inter-annotator agreement. As Kappa is suited to nominal data, we employ an alternative agreement statistic which is appropriate for structured alignment tasks. We discuss how the corpus can be usefully employed in evaluating paraphrase systems automatically (e.g., by measuring precision, recall, and F1) and also in developing linguistically rich paraphrase models based on syntactic structure.
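The alignment-based evaluation the abstract mentions can be sketched as follows. This is a minimal illustration, not the authors' actual evaluation code: it assumes gold and predicted paraphrase alignments are represented as sets of word-index pairs, and computes precision, recall, and F1 over the matching links.

```python
# Hypothetical sketch: scoring predicted paraphrase word alignments
# against gold-standard alignments with precision, recall, and F1.

def alignment_prf(gold, predicted):
    """gold and predicted are sets of (source_idx, target_idx) pairs."""
    gold, predicted = set(gold), set(predicted)
    true_positives = len(gold & predicted)
    precision = true_positives / len(predicted) if predicted else 0.0
    recall = true_positives / len(gold) if gold else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Example: three gold alignment links vs. a system's predictions,
# of which two are correct.
gold = {(0, 0), (1, 2), (2, 1)}
pred = {(0, 0), (1, 2), (3, 3)}
p, r, f = alignment_prf(gold, pred)
print(f"P={p:.2f} R={r:.2f} F1={f:.2f}")  # P=0.67 R=0.67 F1=0.67
```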


Author(s):  
Constantin Orasan ◽  
Ruslan Mitkov

Natural Language Processing (NLP) is a dynamic and rapidly developing field in which new trends, techniques, and applications are constantly emerging. This chapter focuses mainly on recent developments in NLP which could not be covered in other chapters of the Handbook. Topics such as crowdsourcing and processing of large datasets, which are no longer that recent but are widely used and not covered at length in any other chapter, are also presented. The chapter starts by describing how the availability of tools and resources has had a positive impact on the field. The proliferation of user-generated content has led to the emergence of research topics such as sarcasm and irony detection, automatic assessment of user-generated content, and stance detection. All of these topics are discussed in the chapter. The field of NLP is approaching maturity, a fact corroborated by the latest developments in the processing of texts for financial purposes and for helping users with disabilities, two topics that are also discussed here. The chapter presents examples of how researchers have successfully combined research in computer vision and natural language processing to enable the processing of multimodal information, as well as how the latest advances in deep learning have revitalized research on chatbots and conversational agents. The chapter concludes with a comprehensive list of further reading material and additional resources.


This book presents computational interaction as an approach to explaining and enhancing the interaction between humans and information technology. Computational interaction applies abstraction, automation, and analysis to inform our understanding of the structure of interaction and also to inform the design of the software that drives new and exciting human-computer interfaces. The methods of computational interaction allow, for example, designers to identify user interfaces that are optimal against some objective criteria. They also allow software engineers to build interactive systems that adapt their behaviour to better suit individual capacities and preferences. Embedded in an iterative design process, computational interaction has the potential to complement human strengths and provide methods for generating inspiring and elegant designs. Computational interaction does not exclude the messy and complicated behaviour of humans, rather it embraces it by, for example, using models that are sensitive to uncertainty and that capture subtle variations between individual users. It also promotes the idea that there are many aspects of interaction that can be augmented by algorithms. This book introduces computational interaction design to the reader by exploring a wide range of computational interaction techniques, strategies and methods. It explains how techniques such as optimisation, economic modelling, machine learning, control theory, formal methods, cognitive models and statistical language processing can be used to model interaction and design more expressive, efficient and versatile interaction.


Author(s):  
Virginie Goepp ◽  
Nada Matta ◽  
Emmanuel Caillaud ◽  
Françoise Feugeas

Community of Practice (CoP) efficiency evaluation is a major concern in research. Indeed, knowing whether a given CoP is successful is essential to managing it better over time. Existing approaches to efficiency evaluation are difficult and time-consuming to put into action on real CoPs. They require either evaluating subjective constructs, which makes the analysis unreliable, or working out a knowledge interaction matrix that is difficult to set up. However, these approaches base their evaluation on the fact that a CoP is successful if knowledge is exchanged between its members, which is the case if there are interactions between the actors involved in the CoP. We therefore propose to analyze these interactions through e-mail exchanges using Natural Language Processing. Our approach is systematic and semi-automated. It requires the exchanged e-mails and a definition of the speech acts to be retrieved. We apply it to a real project-based CoP: the SEPOLBE research project, which involves different fields of expertise. This allows us to identify the CoP core group and to highlight learning processes between members with different backgrounds (Microbiology, Electrochemistry, and Civil Engineering).
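The speech-act retrieval step could work along these lines. This is only an illustrative sketch: the cue-phrase approach, the speech-act inventory, and the example messages below are all assumptions, not the paper's actual pipeline, which the abstract does not detail.

```python
# Hypothetical sketch: tagging e-mail sentences with speech acts via
# simple cue-phrase matching. The speech-act inventory and cues here
# are invented for illustration.

SPEECH_ACT_CUES = {
    "request": ("could you", "please", "can you"),
    "commitment": ("i will", "we will", "i shall"),
    "inform": ("note that", "fyi", "we found"),
}

def tag_speech_acts(sentence):
    """Return the speech acts whose cue phrases appear in the sentence."""
    s = sentence.lower()
    return [act for act, cues in SPEECH_ACT_CUES.items()
            if any(cue in s for cue in cues)]

messages = [
    "Could you please send the electrochemistry results?",
    "Note that the concrete samples arrived yesterday.",
    "I will prepare the microbiology report by Friday.",
]
for m in messages:
    print(tag_speech_acts(m), "-", m)
```

Counting which members address requests or commitments to which others would then yield the interaction structure from which the CoP core group can be read off.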


2019 ◽  
Author(s):  
Theo Araujo

Conversational agents in the form of chatbots available on messaging platforms are gaining increasing relevance in our communication environment. Based on natural language processing and generation techniques, they are built to interact automatically with users in several contexts. We present here a tool, the Conversational Agent Research Toolkit (CART), aimed at enabling researchers to create conversational agents for experimental studies. CART integrates existing APIs frequently used in practice and provides functionality that allows researchers to create and manage multiple versions of a chatbot to be used as stimuli in experimental studies. This paper provides an overview of the tool and a step-by-step tutorial on how to design an experiment with a chatbot.


Algorithms ◽  
2020 ◽  
Vol 13 (10) ◽  
pp. 262
Author(s):  
Fabio Massimo Zanzotto ◽  
Giorgio Satta ◽  
Giordano Cristini

Parsing is a key task in computer science, with applications in compilers, natural language processing, syntactic pattern matching, and formal language theory. With the recent development of deep learning techniques, several artificial intelligence applications, especially in natural language processing, have combined traditional parsing methods with neural networks to drive the search in the parsing space, resulting in hybrid architectures using both symbolic and distributed representations. In this article, we show that existing symbolic parsing algorithms for context-free languages can cross the border and be entirely formulated over distributed representations. To this end, we introduce a version of the traditional Cocke–Younger–Kasami (CYK) algorithm, called distributed (D)-CYK, which is entirely defined over distributed representations. D-CYK uses matrix multiplication on real number matrices of a size independent of the length of the input string. These operations are compatible with recurrent neural networks. Preliminary experiments show that D-CYK approximates the original CYK algorithm. By showing that CYK can be entirely performed on distributed representations, we open the way to the definition of recurrent layer neural networks that can process general context-free languages.
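For reference, the classical symbolic CYK algorithm that D-CYK re-expresses over distributed representations can be sketched as below. This is the standard textbook formulation, not the paper's D-CYK variant; the toy grammar (in Chomsky normal form) is an assumption for illustration.

```python
# Sketch of the classical (symbolic) CYK algorithm for a grammar in
# Chomsky normal form. D-CYK replaces the sets of nonterminals in the
# chart with distributed (vector/matrix) representations.

def cyk(words, lexicon, rules):
    """lexicon: terminal -> set of nonterminals (A -> w rules);
    rules: (B, C) -> set of nonterminals A (A -> B C rules).
    Returns the nonterminals that derive the whole sentence."""
    n = len(words)
    # table[i][j] holds the nonterminals deriving words[i:j+1]
    table = [[set() for _ in range(n)] for _ in range(n)]
    for i, w in enumerate(words):
        table[i][i] = set(lexicon.get(w, ()))
    for span in range(2, n + 1):          # span length
        for i in range(n - span + 1):     # span start
            j = i + span - 1              # span end
            for k in range(i, j):         # split point
                for B in table[i][k]:
                    for C in table[k + 1][j]:
                        table[i][j] |= rules.get((B, C), set())
    return table[0][n - 1]

lexicon = {"she": {"NP"}, "eats": {"V"}, "fish": {"NP"}}
rules = {("V", "NP"): {"VP"}, ("NP", "VP"): {"S"}}
print(cyk(["she", "eats", "fish"], lexicon, rules))  # {'S'}
```

In D-CYK, each chart cell's set of nonterminals becomes a real-valued matrix of fixed size, and the union over split points becomes matrix multiplication, which is what makes the computation compatible with recurrent neural networks.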

