Beyond robotic speech: mutual benefits to cognitive psychology and artificial intelligence from the study of multimodal communication

2021 ◽  
pp. 274-294
Author(s):  
Beata Grzyb ◽  
Gabriella Vigliocco

Language has predominantly been studied as a unimodal phenomenon, as speech or text, without much consideration of its physical and social context; this is true both in cognitive psychology/psycholinguistics and in artificial intelligence. In everyday life, however, language is most often used in face-to-face communication, where, in addition to structured speech, it comprises a dynamic system of multiplex components such as gestures, eye gaze, mouth movements and prosodic modulation. Recently, cognitive scientists have started to realise the potential importance of multimodality for the understanding of human communication and its neural underpinnings, while AI scientists have begun to address how to integrate multimodality in order to improve communication between humans and artificial embodied agents. We review here the existing literature on multimodal language learning and processing in humans, together with the literature on how artificial agents are perceived and on their comprehension and production of multimodal cues, and we discuss the main limitations of this work. We conclude by arguing that, by joining forces, cognitive and AI scientists can improve the effectiveness of human-machine interaction and increase the human-likeness and acceptance of embodied agents in society. In turn, computational models that generate language in artificial embodied agents constitute a unique research tool for investigating the underlying mechanisms that govern language processing and learning in humans.


2018 ◽  
Vol 8 (5) ◽  
pp. 259
Author(s):  
Mohammed Ali

In this study, the researcher advocates the importance of human intelligence in language learning, since no software or Learning Management System (LMS) can be programmed to understand the human context, or all linguistic structures in context. The study examines the extent to which language learning remains a challenge for machine learning and its programs, such as Artificial Intelligence (AI), pattern recognition, and image analysis, which are used in many assistive learning technologies such as voice detection, face detection and recognition, personalized assistants, and language learning programs. The researcher argues that language learning is closely associated with human intelligence and human neural networks, and that no computer or software can claim to replace or replicate those functions of the human brain. The study thus poses a challenge to natural language processing (NLP) techniques that claim to have taught computers how to understand the way humans learn, to understand text without any clue or calculation, to resolve the ambiguity in human languages arising from the interplay between context and meaning, and to automate the language learning process between computers and humans. The study cites evidence of deficiencies in such machine learning software and gadgets to show that, in spite of all technological advancements, there remain areas of the human brain and human intelligence that a computer or its software cannot enter. These deficiencies highlight the limitations of AI and machine superintelligence, supporting the claim that human intelligence remains superior.


Author(s):  
Reyhan Aydoğan ◽  
Tim Baarslag ◽  
Enrico Gerding

Conflict resolution is essential to obtaining cooperation in many scenarios, such as politics and business, as well as in our day-to-day lives. The importance of conflict resolution has driven research in many fields, like anthropology, social science, psychology, mathematics, biology and, more recently, artificial intelligence. Computer science and artificial intelligence have, in turn, been inspired by theories and techniques from these disciplines, which has led to a variety of computational models and approaches, such as automated negotiation, group decision making, argumentation, preference aggregation, and human-machine interaction. To bring together the different research strands and disciplines in conflict resolution, the Workshop on Conflict Resolution in Decision Making (COREDEMA) was organized. This special issue benefited from the workshop series and consists of significantly extended and revised versions of selected papers from the ECAI 2016 COREDEMA workshop, as well as completely new contributions.


2019 ◽  
Vol 22 (04) ◽  
pp. 655-656
Author(s):  
JUBIN ABUTALEBI ◽  
HARALD CLAHSEN

The cognitive architecture of human language processing has been studied for decades, but using computational modeling for such studies is a relatively recent topic. Indeed, computational approaches to language processing have become increasingly popular in our field, mainly due to advances in computational modeling techniques and the availability of large collections of experimental data. Language learning, particularly child language learning, has been the subject of many computational models. By simulating the process of child language learning, computational models may indeed teach us which linguistic representations are learnable from the input that children have access to (and which are not), as well as which mechanisms yield the same patterns of behavior that are found in children's language performance.
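
As a concrete illustration of the kind of model the editors describe (a sketch of ours, not drawn from the editorial), the short Python program below implements statistical word segmentation: a learner tracks the transitional probability between adjacent syllables and posits a word boundary wherever that probability dips. The toy syllable stream and the threshold are hypothetical choices for demonstration.

```python
from collections import Counter

def transitional_probabilities(syllables):
    """Estimate P(next syllable | current syllable) from a corpus."""
    pair_counts = Counter(zip(syllables, syllables[1:]))
    first_counts = Counter(syllables[:-1])
    return {(a, b): n / first_counts[a] for (a, b), n in pair_counts.items()}

def segment(syllables, tps, threshold=0.75):
    """Posit a word boundary wherever the forward TP falls below threshold."""
    words, current = [], [syllables[0]]
    for a, b in zip(syllables, syllables[1:]):
        if tps.get((a, b), 0.0) < threshold:
            words.append("".join(current))
            current = []
        current.append(b)
    words.append("".join(current))
    return words

# Toy "speech stream" built from three nonce words: bidaku, padoti, golabu.
stream = "bi da ku pa do ti go la bu bi da ku go la bu pa do ti bi da ku".split()
print(segment(stream, transitional_probabilities(stream)))
# -> ['bidaku', 'padoti', 'golabu', 'bidaku', 'golabu', 'padoti', 'bidaku']
```

Run on the toy stream, the learner recovers the three nonce words without ever being told where they begin or end, which is the sense in which such simulations show what is learnable from the input alone.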


AI Matters ◽  
2021 ◽  
Vol 7 (1) ◽  
pp. 18-20
Author(s):  
Kartik Talamadupula

The marriage of Artificial Intelligence (AI) techniques to problems surrounding the generation, maintenance, and use of source code has come to the fore in recent years as an important AI application area. A large share of this recent attention can be attributed to contemporaneous advances in Natural Language Processing (NLP) techniques and sub-fields. The naturalness hypothesis, which states that "software is a form of human communication" and that code exhibits patterns similar to those of (human) natural languages (Devanbu, 2015; Hindle, Barr, Gabel, Su, & Devanbu, 2016), has allowed many of these NLP advances to be applied to code-centric use cases. This development has contributed to a spate of work in the community, much of it captured in a survey by Allamanis, Barr, Devanbu, and Sutton (2018) that classifies these approaches by the type of probabilistic model applied to source code. This increase in the variety of AI techniques applied to source code has found various manifestations in industry at large. Code and software form the backbone that underpins almost all modern technical advancement: it is thus natural that breakthroughs in this area should be reflected in the emergence of real-world deployments.
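
To make the naturalness hypothesis concrete, here is a minimal sketch in the spirit of the n-gram models surveyed by Hindle et al. (ours, not their actual system): a smoothed bigram model over code tokens assigns lower cross-entropy to repetitive, conventional code than to unusual code. The tokenizer, corpus, and test snippets are all illustrative.

```python
import math
import re
from collections import Counter

def tokenize(code):
    """Crude illustrative tokenizer: identifiers, numbers, punctuation."""
    return re.findall(r"[A-Za-z_]\w*|\d+|[^\s\w]", code)

def train_bigram(tokens):
    """Laplace-smoothed bigram model P(b | a) over a token corpus."""
    bigrams = Counter(zip(tokens, tokens[1:]))
    unigrams = Counter(tokens)
    vocab_size = len(unigrams)
    return lambda a, b: (bigrams[(a, b)] + 1) / (unigrams[a] + vocab_size)

def cross_entropy(tokens, prob):
    """Average bits per token transition; lower means more 'natural'."""
    pairs = list(zip(tokens, tokens[1:]))
    return -sum(math.log2(prob(a, b)) for a, b in pairs) / len(pairs)

corpus = tokenize("for i in range(10): total += x[i]\n"
                  "for j in range(20): total += y[j]")
model = train_bigram(corpus)
idiomatic = tokenize("for k in range(5): total += z[k]")
contrived = tokenize("while total < x[i]: i = i - -1")
print(cross_entropy(idiomatic, model))   # lower: conventional code
print(cross_entropy(contrived, model))   # higher: surprising code
```

On this toy corpus the idiomatic loop scores markedly lower cross-entropy than the contrived snippet; it is this statistical regularity that production-scale models of source code exploit.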


2019 ◽  
Author(s):  
Matthew A Kelly ◽  
David Reitter

What role does the study of natural language play in the task of developing a unified theory and common model of cognition? Language is perhaps the most complex behaviour that humans exhibit, and, as such, is one of the most difficult problems for understanding human cognition. Linguistic theory can both inform and be informed by unified models of cognition. We discuss (1) how computational models of human cognition can provide insight into how humans produce and comprehend language and (2) how the problem of modelling language processing raises questions and creates challenges for widely used computational models of cognition. Evidence from the literature suggests that behavioural phenomena, such as recency and priming effects, and cognitive constraints, such as working memory limits, affect how language is produced by humans in ways that can be predicted by computational cognitive models. But just as computational models can provide new insights into language, language can serve as a test for these models. For example, simulating language learning requires the use of more powerful machine learning techniques, such as deep learning and vector symbolic architectures, and language comprehension requires a capacity for on-the-fly situational model construction. In sum, language plays an important role in both shaping the development of a common model of the mind, and, in turn, the theoretical understanding of language stands to benefit greatly from the development of a common model.
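
As one concrete example of the techniques the abstract mentions, the sketch below (ours, not the authors') shows the core operations of a vector symbolic architecture in the style of Plate's holographic reduced representations: random vectors stand for symbols, circular convolution binds a role to a filler, and an approximate inverse unbinds it. All symbol names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 2048

def symbol():
    """A fresh random vector standing for an atomic symbol."""
    return rng.normal(0, 1 / np.sqrt(DIM), DIM)

def bind(a, b):
    """Bind two vectors via circular convolution (computed with the FFT)."""
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

def unbind(trace, role):
    """Approximately invert binding by convolving with the involution."""
    inv = np.concatenate(([role[0]], role[:0:-1]))
    return bind(trace, inv)

AGENT, PATIENT = symbol(), symbol()
dog, mail_carrier = symbol(), symbol()

# "The dog bit the mail carrier" as a superposition of role-filler bindings.
sentence = bind(AGENT, dog) + bind(PATIENT, mail_carrier)

# Query: who was the agent? The decoded vector is noisy, so clean it up
# by comparing against the known vocabulary.
decoded = unbind(sentence, AGENT)
vocab = {"dog": dog, "mail_carrier": mail_carrier}
print(max(vocab, key=lambda w: np.dot(decoded, vocab[w])))  # -> dog
```

Because bound structures live in the same fixed-width vector space as the symbols themselves, such architectures give connectionist models a way to represent the role-filler structure that language processing demands.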


2012 ◽  
Vol 367 (1598) ◽  
pp. 1971-1983 ◽  
Author(s):  
Karl Magnus Petersson ◽  
Peter Hagoort

The human capacity to acquire language is an outstanding scientific challenge to understand. Somehow our language capacities arise from the way the human brain processes, develops and learns in interaction with its environment. To set the stage, we begin with a summary of what is known about the neural organization of language and what our artificial grammar learning (AGL) studies have revealed. We then review the Chomsky hierarchy in the context of the theory of computation and formal learning theory. Finally, we outline a neurobiological model of language acquisition and processing based on an adaptive, recurrent, spiking network architecture. This architecture implements an asynchronous, event-driven, parallel system for recursive processing. We conclude that the brain represents grammars (or more precisely, the parser/generator) in its connectivity, and its ability for syntax is based on neurobiological infrastructure for structured sequence processing. The acquisition of this ability is accounted for in an adaptive dynamical systems framework. Artificial language learning (ALL) paradigms might be used to study the acquisition process within such a framework, as well as the processing properties of the underlying neurobiological infrastructure. However, it is necessary to combine and constrain the interpretation of ALL results by theoretical models and empirical studies on natural language processing. Given that the faculty of language is captured by classical computational models to a significant extent, and that these can be embedded in dynamic network architectures, there is hope that significant progress can be made in understanding the neurobiology of the language faculty.
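
To illustrate what an AGL paradigm looks like computationally (a sketch of ours with a made-up transition table, not the grammar used in the authors' studies), the snippet below generates training strings from a Reber-style finite-state grammar and tests novel strings for grammaticality. Such regular grammars occupy the lowest level of the Chomsky hierarchy the article reviews.

```python
import random

# State -> list of (emitted letter, next state); None marks acceptance.
GRAMMAR = {
    0: [("T", 1), ("P", 2)],
    1: [("S", 1), ("X", 3)],
    2: [("T", 2), ("V", 4)],
    3: [("X", 2), ("S", None)],
    4: [("P", 3), ("V", None)],
}

def generate():
    """Emit one grammatical string by walking the automaton randomly."""
    state, letters = 0, []
    while state is not None:
        letter, state = random.choice(GRAMMAR[state])
        letters.append(letter)
    return "".join(letters)

def grammatical(string):
    """Membership test: track all states reachable while reading the string."""
    states = {0}
    for ch in string:
        states = {nxt for s in states if s is not None
                  for letter, nxt in GRAMMAR[s] if letter == ch}
        if not states:
            return False
    return None in states

training = [generate() for _ in range(5)]     # e.g. ['TXS', 'PTVV', ...]
print(all(grammatical(s) for s in training))  # -> True
print(grammatical("TPX"))                     # -> False (no such transition)
```

In a typical AGL experiment, participants study strings like these and then judge novel strings; the neurobiological question is how the brain's connectivity comes to implement the equivalent of this membership test.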


1998 ◽  
Vol 20 (3) ◽  
pp. 445-446
Author(s):  
Mark Warschauer

This book presents and discusses efforts to develop Intelligent Computer Assisted Language Learning (ICALL) programs based on advances in artificial intelligence (AI) and natural language processing (NLP). Sixteen of the book's 20 chapters provide descriptions of particular ICALL programs, divided into three categories: text-based language tutors and learning environments, dialogue-based language games, and graphics-based language tutors and learning environments. Four chapters at the end offer general commentary on ICALL from the perspectives of experimental psychology (by Brian MacWhinney), linguistics and AI (by Alan Bailin), second language acquisition theory (by Nina Garrett), and educational theory (by Rebecca Oxford).


2019 ◽  
Vol 5 (1) ◽  
pp. 1-7
Author(s):  
Muhammad Nur Adilin Mohd Anuardi ◽  
Atsuko K. Yamazaki

Speech features such as emotion have always been part of human communication. With recent developments in communication methods, researchers have investigated artificial and emotional intelligence as means to improve communication. This has led to the emergence of affective computing, which deals with processing information pertaining to human emotions. This study aims to determine the positive influence of emotionally toned language sounds on brain function, with a view to improved communication. Twenty-seven college-age Japanese subjects with no prior exposure to the Malay language listened to emotionally toned and emotionally neutral sounds in Malay while their brain activities were measured using near-infrared spectroscopy (NIRS). A comparison of the NIRS signals revealed that emotionally toned language sounds had a greater impact on brain areas associated with attention and emotion, whereas emotionally neutral Malay sounds affected brain areas involved in working memory and language processing. These results suggest that emotionally charged sounds engage listeners' attention and emotion recognition even when the listeners do not understand the language. The ability to interpret emotions remains a challenge for computer systems and robotics; we therefore hope that our results can be used in the development of computational models of emotion for autonomous-robot research in the field of communication.


AI Magazine ◽  
2019 ◽  
Vol 40 (3) ◽  
pp. 67-78
Author(s):  
Guy Barash ◽  
Mauricio Castillo-Effen ◽  
Niyati Chhaya ◽  
Peter Clark ◽  
Huáscar Espinoza ◽  
...  

The workshop program of the Association for the Advancement of Artificial Intelligence's 33rd Conference on Artificial Intelligence (AAAI-19) was held in Honolulu, Hawaii, on Sunday and Monday, January 27–28, 2019. There were sixteen workshops in the program: Affective Content Analysis: Modeling Affect-in-Action, Agile Robotics for Industrial Automation Competition, Artificial Intelligence for Cyber Security, Artificial Intelligence Safety, Dialog System Technology Challenge, Engineering Dependable and Secure Machine Learning Systems, Games and Simulations for Artificial Intelligence, Health Intelligence, Knowledge Extraction from Games, Network Interpretability for Deep Learning, Plan, Activity, and Intent Recognition, Reasoning and Learning for Human-Machine Dialogues, Reasoning for Complex Question Answering, Recommender Systems Meet Natural Language Processing, Reinforcement Learning in Games, and Reproducible AI. This report contains brief summaries of all the workshops that were held.

