Evolving Virtual Embodied Agents Using External Artifact Evaluations

Author(s): Lesley van Hoek, Rob Saunders, Roy de Kleijn
Author(s): Zhisheng Huang, Anton Eliëns, Cees Visser

2020
Author(s): Beata Grzyb, Gabriella Vigliocco

Language has predominantly been studied as a unimodal phenomenon, as speech or text considered without its physical and social context. This is true both in cognitive psychology and psycholinguistics and in artificial intelligence. In everyday life, however, language is most often used in face-to-face communication, where, in addition to structured speech, it comprises a dynamic system of multiplex components such as gestures, eye gaze, mouth movements and prosodic modulation. Recently, cognitive scientists have started to recognise the potential importance of multimodality for understanding human communication and its neural underpinnings, while AI researchers have begun to address how to integrate multimodality in order to improve communication between humans and artificial embodied agents. Here we review the existing literature on multimodal language learning and processing in humans, as well as the literature on how artificial agents are perceived and on their comprehension and production of multimodal cues, and we discuss the main limitations of this work. We conclude by arguing that, by joining forces with cognitive scientists, AI scientists can improve the effectiveness of human-machine interaction and increase the human-likeness and acceptance of embodied agents in society. In turn, computational models that generate language in artificial embodied agents constitute a unique research tool for investigating the underlying mechanisms that govern language processing and learning in humans.

