Revisiting Human-Agent Communication: The Importance of Joint Co-construction and Understanding Mental States

2021, Vol. 12
Author(s): Stefan Kopp, Nicole Krämer

The study of human-human communication and the development of computational models for human-agent communication have diverged significantly over the last decade. Yet, despite frequently made claims of “super-human performance” in, e.g., speech recognition or image processing, no system is so far able to lead a half-decent coherent conversation with a human. In this paper, we argue that we must start to reconsider the hallmarks of cooperative communication and the core capabilities we have developed for it, and with which conversational agents need to be equipped: incremental joint co-construction and mentalizing. We base our argument on a vast body of work on human-human communication and its psychological processes, which we argue must be taken into account when modeling human-agent communication. We contrast those with current conceptualizations of human-agent interaction and formulate suggestions for the development of future systems.

2015, Vol. 13 (2), pp. 461-477
Author(s): Chloé Clavel

Affective Computing aims at improving the naturalness of human-computer interaction by integrating the socio-emotional component into the interaction. The use of embodied conversational agents (ECAs) – virtual characters interacting with humans – is a key answer to this issue. On the one hand, the ECA has to take the human's emotional behaviours and social attitudes into account. On the other hand, the ECA has to display relevant socio-emotional behaviours. In this paper, we provide an overview of computational methods for analysing the user's socio-emotional behaviour and of human-agent interaction strategies, by questioning the ambivalent status of surprise. We focus on the computational models and methods used to detect the user's emotions through language and speech processing, and present a study investigating the role of surprise in the ECA's answers.
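As a rough illustration of the kind of language-based emotion detection surveyed here, the sketch below trains a toy text classifier; the feature choice (TF-IDF), classifier (linear SVM), example utterances, and emotion labels are all assumptions made for demonstration, not the models discussed in the paper.

```python
# Illustrative sketch only: a minimal text-based emotion classifier, not the
# pipeline described in the paper. Features and classifier are assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Hypothetical toy data: user utterances labelled with coarse emotion classes.
utterances = [
    "I did not expect that at all!",   # surprise
    "This is really frustrating.",     # anger
    "That's wonderful news, thanks!",  # joy
]
labels = ["surprise", "anger", "joy"]

# Bag-of-words features feeding a linear classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
model.fit(utterances, labels)

# An ECA could consult the predicted label when selecting its response,
# e.g. acknowledging or mirroring the user's surprise.
print(model.predict(["Wow, I really wasn't expecting this answer."]))
```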


2007, Vol. 8 (3), pp. 391-410
Author(s): Justine Cassell, Andrea Tartaro

What is the hallmark of success in human–agent interaction? In animation and robotics, many have concentrated on the looks of the agent — whether the appearance is realistic or lifelike. We present an alternative benchmark that lies in the dyad and not the agent alone: Does the agent’s behavior evoke intersubjectivity from the user? That is, in both conscious and unconscious communication, do users react to behaviorally realistic agents in the same way they react to other humans? Do users appear to attribute similar thoughts and actions? We discuss why we distinguish between appearance and behavior, why we use the benchmark of intersubjectivity, our methodology for applying this benchmark to embodied conversational agents (ECAs), and why we believe this benchmark should be applied to human–robot interaction.


2020, Vol. 9 (3), pp. 1220-1228
Author(s): Muhammad Attamimi, Takashi Omori

One of the biggest challenges in human-agent interaction (HAI) is the development of an agent, such as a robot, that can understand its partner (a human) and interact naturally. To realize this, a system (agent) should be able to observe a human well and estimate his/her mental state. Towards this goal, in this paper we present a method for estimating a child's attention, one of the more important human mental states, in a free-play scenario of child-robot interaction (CRI). To realize attention estimation in such a CRI scenario, we first developed a system that can sense a child's verbal and non-verbal multimodal signals, such as gaze, facial expression, and proximity. The observed information was then used to train a model based on a Support Vector Machine (SVM) to estimate the human's attention level. We investigated the accuracy of the proposed method by comparing it with a human judge's estimates and obtained some promising results, which we discuss here.
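A minimal sketch of an SVM-based attention estimator of the kind described above follows; the feature set, label scale, and data are hypothetical placeholders, not the authors' sensing pipeline or dataset.

```python
# Sketch of an SVM attention estimator; feature names and data are assumptions.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Hypothetical per-time-window features:
# [gaze_on_robot_ratio, smile_intensity, distance_to_robot_m, speech_activity]
X = np.array([
    [0.9, 0.7, 0.6, 0.4],   # child closely engaged with the robot
    [0.2, 0.1, 2.5, 0.0],   # child looking away, far from the robot
    [0.6, 0.4, 1.0, 0.2],
    [0.1, 0.0, 3.0, 0.1],
])
# Hypothetical attention levels annotated by a human judge (0 = low, 1 = high).
y = np.array([1, 0, 1, 0])

# Standardize the heterogeneous features before feeding them to the SVM.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X, y)

# Estimate the attention level for a new observation window.
print(clf.predict([[0.8, 0.5, 0.7, 0.3]]))
```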


2021, Vol. 3
Author(s): Beatrice Biancardi, Soumia Dermouche, Catherine Pelachaud

Adaptation is a key mechanism in human–human interaction. In our work, we aim at endowing embodied conversational agents with the ability to adapt their behavior when interacting with a human interlocutor. To better understand the main challenges concerning adaptive agents, we investigated the effects of three adaptation models for a virtual agent on the user's experience. The adaptation mechanisms performed by the agent take the user's reactions into account and learn how to adapt on the fly during the interaction. The agent's adaptation is realized at several levels (i.e., the behavioral, conversational, and signal levels) and focuses on improving the user's experience along different dimensions (i.e., the user's impressions and engagement). In our first two studies, we learn the agent's multimodal behaviors and conversational strategies to dynamically optimize the user's engagement and impressions of the agent, by taking them as input during the learning process. In our third study, our model takes both the user's and the agent's past behavior as input and predicts the agent's next behavior. Our adaptation models were evaluated through experimental studies sharing the same interaction scenario, with the agent playing the role of a virtual museum guide. These studies showed the impact of the adaptation mechanisms on the user's experience of the interaction and their perception of the agent: interacting with an adaptive agent tended to be perceived more positively than interacting with a non-adaptive one. Finally, the effects of people's a priori expectations about virtual agents found in our studies highlight the importance of taking the user's expectancies into account in human–agent interaction.
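As a simplified stand-in for such on-the-fly adaptation, the sketch below uses an epsilon-greedy bandit that selects a conversational strategy and updates its engagement estimate from the user's observed reaction; the strategy names, reward signal, and update rule are illustrative assumptions, not the authors' learning models.

```python
# Simplified stand-in for on-the-fly adaptation: an epsilon-greedy bandit over
# conversational strategies. Strategy names and the engagement signal are
# assumptions for illustration only.
import random

strategies = ["small_talk", "task_focused", "humorous"]
value = {s: 0.0 for s in strategies}   # running estimate of user engagement
count = {s: 0 for s in strategies}
EPSILON = 0.1                          # exploration rate

def choose_strategy():
    """Mostly exploit the best-known strategy, occasionally explore."""
    if random.random() < EPSILON:
        return random.choice(strategies)
    return max(strategies, key=lambda s: value[s])

def update(strategy, engagement):
    """Incrementally update the engagement estimate for the used strategy."""
    count[strategy] += 1
    value[strategy] += (engagement - value[strategy]) / count[strategy]

# One simulated turn: the agent acts, then observes a (hypothetical)
# engagement score in [0, 1] derived from the user's reaction.
chosen = choose_strategy()
observed_engagement = 0.7  # e.g., from gaze and feedback analysis
update(chosen, observed_engagement)
```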


2021, Vol. 54 (4), pp. 1-43
Author(s): Katie Seaborn, Norihisa P. Miyake, Peter Pennefather, Mihoko Otake-Matsuura

Social robots, conversational agents, voice assistants, and other embodied AI are increasingly a feature of everyday life. What connects these various types of intelligent agents is their ability to interact with people through voice. Voice is becoming an essential modality of embodiment, communication, and interaction between computer-based agents and end-users. This survey presents a meta-synthesis of agent voice in the design and experience of agents from a human-centered perspective: voice-based human–agent interaction (vHAI). Findings emphasize the social role of voice in HAI and circumscribe a relationship between agent voice and body that corresponds to human models of social psychology and cognition. Additionally, changes in perceptions of and reactions to agent voice over time reveal a generational shift coinciding with the commercial proliferation of mobile voice assistants. The main contributions of this work are a vHAI classification framework for voice across agent forms, contexts, and user groups, a critical analysis grounded in key theories, and an identification of future directions for the oncoming wave of vocal machines.

