Intersubjectivity in human–agent interaction

2007 ◽  
Vol 8 (3) ◽  
pp. 391-410 ◽  
Author(s):  
Justine Cassell ◽  
Andrea Tartaro

What is the hallmark of success in human–agent interaction? In animation and robotics, many have concentrated on the looks of the agent — whether the appearance is realistic or lifelike. We present an alternative benchmark that lies in the dyad and not the agent alone: Does the agent’s behavior evoke intersubjectivity from the user? That is, in both conscious and unconscious communication, do users react to behaviorally realistic agents in the same way they react to other humans? Do users appear to attribute similar thoughts and actions? We discuss why we distinguish between appearance and behavior, why we use the benchmark of intersubjectivity, our methodology for applying this benchmark to embodied conversational agents (ECAs), and why we believe this benchmark should be applied to human–robot interaction.

Technologies ◽  
2018 ◽  
Vol 6 (4) ◽  
pp. 119 ◽  
Author(s):  
Konstantinos Tsiakas ◽  
Maria Kyrarini ◽  
Vangelis Karkaletsis ◽  
Fillia Makedon ◽  
Oliver Korn

In this article, we present a taxonomy of Robot-Assisted Training, a growing body of research in Human–Robot Interaction that focuses on how robotic agents and devices can be used to enhance a user's performance during a cognitive or physical training task. Robot-Assisted Training systems have been successfully deployed to enhance the effects of a training session in various contexts, e.g., rehabilitation systems, educational environments, and vocational settings. The proposed taxonomy suggests a set of categories and parameters that can be used to characterize such systems, considering current research trends and the needs of the design, development, and evaluation of Robot-Assisted Training systems. To this end, we review recent works and applications in Robot-Assisted Training systems, as well as related taxonomies in Human–Robot Interaction. The goal is to identify and discuss open challenges, highlighting the different aspects of a Robot-Assisted Training system and considering both robot perception and behavior control.
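As a rough illustration of how such a taxonomy could be encoded for tooling, the sketch below tags a hypothetical system along a few axes named in the abstract (cognitive vs. physical task, deployment context, robot perception, and behavior control). The class and field names are our own placeholders, not the paper's actual category labels.

```python
# Illustrative sketch only: encoding Robot-Assisted Training (RAT) taxonomy
# axes as data. Dimension names are hypothetical placeholders.
from dataclasses import dataclass
from enum import Enum

class TaskType(Enum):
    COGNITIVE = "cognitive"
    PHYSICAL = "physical"

class Context(Enum):
    REHABILITATION = "rehabilitation"
    EDUCATION = "education"
    VOCATIONAL = "vocational"

@dataclass
class RATSystem:
    """Characterizes a RAT system along a few taxonomy axes."""
    name: str
    task_type: TaskType
    context: Context
    senses_user_state: bool   # robot perception of the trainee
    adapts_behavior: bool     # behavior control adapted to the trainee

# Example: tagging a hypothetical arm-rehabilitation system.
system = RATSystem("arm-rehab-robot", TaskType.PHYSICAL,
                   Context.REHABILITATION, senses_user_state=True,
                   adapts_behavior=True)
print(system)
```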


AI Magazine ◽  
2015 ◽  
Vol 36 (3) ◽  
pp. 107-112
Author(s):  
Adam B. Cohen ◽  
Sonia Chernova ◽  
James Giordano ◽  
Frank Guerin ◽  
Kris Hauser ◽  
...  

The AAAI 2014 Fall Symposium Series was held Thursday through Saturday, November 13–15, at the Westin Arlington Gateway in Arlington, Virginia, adjacent to Washington, DC. The titles of the seven symposia were Artificial Intelligence for Human-Robot Interaction; Energy Market Prediction; Expanding the Boundaries of Health Informatics Using AI; Knowledge, Skill, and Behavior Transfer in Autonomous Robots; Modeling Changing Perspectives: Reconceptualizing Sensorimotor Experiences; Natural Language Access to Big Data; and The Nature of Humans and Machines: A Multidisciplinary Discourse. The highlights of each symposium are presented in this report.


Sensors ◽  
2020 ◽  
Vol 20 (1) ◽  
pp. 296 ◽  
Author(s):  
Caroline P. C. Chanel ◽  
Raphaëlle N. Roy ◽  
Frédéric Dehais ◽  
Nicolas Drougard

The design of human–robot interaction is a key challenge for optimizing operational performance. A promising approach is to consider mixed-initiative interactions, in which the tasks and authority of the human and artificial agents are dynamically defined according to their current abilities. An important issue for the implementation of mixed-initiative systems is monitoring human performance to dynamically drive task allocation between human and artificial agents (i.e., robots). We therefore designed an experimental scenario involving missions in which participants had to cooperate with a robot to fight fires while facing hazards. Two levels of robot automation (manual vs. autonomous) were randomly manipulated to assess their impact on the participants' performance across missions. Cardiac activity, eye-tracking data, and participants' actions on the user interface were collected. Participants' performance varied enough that we could identify high- and low-score mission groups, which also exhibited different behavioral, cardiac, and ocular patterns. More specifically, our findings indicated that the higher level of automation could be beneficial to low-scoring participants but detrimental to high-scoring ones, and vice versa. In addition, inter-subject single-trial classification results showed that the studied behavioral and physiological features were relevant for predicting mission performance. The highest average balanced accuracy (74%) was reached using the features extracted from all input devices. These results suggest that an adaptive HRI system aiming to maximize performance could analyze such physiological and behavioral markers online to adjust the level of automation when relevant to the mission.
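The sketch below illustrates, with scikit-learn, the kind of inter-subject single-trial classification reported above: a classifier predicts high- versus low-score missions from behavioral and physiological features. The feature names, the synthetic data, and the choice of classifier are assumptions for illustration; the paper itself reports a best balanced accuracy of 74% when features from all input devices are combined.

```python
# Hedged sketch: single-trial prediction of mission performance from
# physiological and behavioral features. All data are synthetic placeholders.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# One row per mission: [mean heart rate, HR variability, fixation duration,
# pupil diameter, UI action rate] -- hypothetical stand-ins for the cardiac,
# ocular, and behavioral markers collected in the study.
X = rng.normal(size=(60, 5))
y = rng.integers(0, 2, size=60)   # 1 = high-score mission, 0 = low-score

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
scores = cross_val_score(clf, X, y, cv=5, scoring="balanced_accuracy")
print(f"mean balanced accuracy: {scores.mean():.2f}")
```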


Author(s):  
Shan G. Lakhmani ◽  
Julia L. Wright ◽  
Michael R. Schwartz ◽  
Daniel Barber

Human-robot interaction requires communication; however, what form that communication should take to facilitate effective team performance remains undetermined. One notion is that effective human-agent communication can be achieved by combining transparent information-sharing techniques with specific communication patterns. This study examines how transparency and a robot's communication pattern interact to affect human performance in a human-robot teaming task. Participants' performance in a target identification task was affected by the robot's communication pattern: participants missed more targets when working with a bidirectionally communicating robot than with a unidirectionally communicating one. Furthermore, working with a bidirectionally communicating robot led to fewer correct identifications than working with a unidirectionally communicating robot, but only when the robot provided less transparency information. The implications of these findings for future robot interface designs are discussed.


Author(s):  
Louise LePage

Stage plays, theories of theatre, narrative studies, and robotics research can serve to identify, explore, and interrogate theatrical elements that support the effective performance of sociable humanoid robots. Theatre, including its parts of performance, aesthetics, character, and genre, can also reveal features of human–robot interaction key to creating humanoid robots that are likeable rather than uncanny. In particular, this can be achieved by relating Mori's (1970/2012) concept of total appearance to realism. Realism is broader and more subtle in its workings than is generally recognised in its operationalization in studies that focus solely on appearance. For example, it is complicated by genre. A realistic character cast in a detective drama will convey different qualities and expectations than the same character in a dystopian drama or romantic comedy. The implications of realism and genre carry over into real life. As stage performances and robotics studies reveal, likeability depends on creating aesthetically coherent representations of character, where all the parts coalesce to produce a socially identifiable figure demonstrating predictable behaviour.


Complexity ◽  
2019 ◽  
Vol 2019 ◽  
pp. 1-16
Author(s):  
Maurice Lamb ◽  
Patrick Nalepka ◽  
Rachel W. Kallen ◽  
Tamara Lorenz ◽  
Steven J. Harrison ◽  
...  

Interactive or collaborative pick-and-place tasks occur during all kinds of daily activities, for example, when two or more individuals pass plates, glasses, and utensils back and forth between each other when setting a dinner table or loading a dishwasher together. In the near future, participation in these collaborative pick-and-place tasks could also include robotic assistants. However, for human-machine and human-robot interactions, interactive pick-and-place tasks present a unique set of challenges. A key challenge is that high-level task-representational algorithms and preplanned action or motor programs quickly become intractable, even for simple interaction scenarios. Here we address this challenge by introducing a bioinspired behavioral dynamic model of free-flowing cooperative pick-and-place behaviors based on low-dimensional dynamical movement primitives and nonlinear action selection functions. Further, we demonstrate that this model can be successfully implemented as an artificial agent control architecture to produce effective and robust human-like behavior during human-agent interactions. Participants were unable to explicitly detect whether they were working with an artificial (model controlled) agent or another human co-actor, further illustrating the potential effectiveness of the proposed modeling approach for developing robust, embodied human-robot interaction systems more generally.
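The toy sketch below conveys the flavor of this modeling approach: a low-dimensional point-attractor dynamic (a stand-in for a dynamical movement primitive) drives the agent's effector toward a goal, while a simple nonlinear selection rule switches the goal between pick and place locations. The one-dimensional setup and all parameter values are illustrative assumptions, not the published model.

```python
# Toy sketch: attractor-based movement primitive plus nonlinear action
# selection for a 1-D pick-and-place agent. Parameters are illustrative.
PICK_LOC, PLACE_LOC = 0.0, 1.0
b, k, dt = 6.0, 9.0, 0.01          # damping, stiffness, time step

def step(x, v, goal):
    """Damped mass-spring attractor: converges smoothly to `goal`."""
    a = k * (goal - x) - b * v
    return x + v * dt, v + a * dt

x, v, holding = PICK_LOC, 0.0, False
for t in range(2000):
    goal = PLACE_LOC if holding else PICK_LOC
    x, v = step(x, v, goal)
    # Nonlinear action selection: toggle pick/place when settled at a target.
    if not holding and abs(x - PICK_LOC) < 0.01 and abs(v) < 0.05:
        holding = True                     # pick up the object
    elif holding and abs(x - PLACE_LOC) < 0.01 and abs(v) < 0.05:
        holding = False                    # place the object
        break
print(f"placed after {t * dt:.2f} simulated seconds, x = {x:.3f}")
```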


2011 ◽  
Vol 5 (1) ◽  
pp. 83-105 ◽  
Author(s):  
Jessie Y. C. Chen

A military vehicle crew station environment was simulated, and a series of three experiments was conducted to examine the workload and performance of the combined position of gunner and robotics operator in a multitasking environment. The study also evaluated whether aided target recognition (AiTR) capabilities for the gunnery task (delivered through tactile and/or visual cuing) might benefit the concurrent robotics and communication tasks, and how concurrent task performance might be affected when the AiTR was unreliable (i.e., false-alarm prone or miss prone). Participants' spatial ability was consistently found to be a reliable predictor of their targeting task performance as well as their modality preference for the AiTR display. Participants' attentional control was found to significantly affect the way they interacted with unreliable automated systems.


2020 ◽  
Vol 9 (3) ◽  
pp. 1220-1228
Author(s):  
Muhammad Attamimi ◽  
Takashi Omori

One of the biggest challenges in human-agent interaction (HAI) is the development of an agent, such as a robot, that can understand its partner (a human) and interact naturally. To realize this, a system (agent) should be able to observe a human well and estimate his/her mental state. Toward this goal, we present a method for estimating a child's attention, an important human mental state, in a free-play child-robot interaction (CRI) scenario. To realize attention estimation in such a CRI scenario, we first developed a system that senses a child's verbal and non-verbal multimodal signals, such as gaze, facial expression, and proximity. The observed information was then used to train a Support Vector Machine (SVM) model to estimate the child's attention level. We evaluated the accuracy of the proposed method against a human judge's estimates and obtained promising results, which we discuss here.
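A minimal sketch of the kind of SVM-based estimator described above, assuming a toy multimodal feature encoding (the real system fuses gaze, facial expression, proximity, and other cues) and three judge-assigned attention levels; the feature names and synthetic data are our placeholders:

```python
# Hedged sketch: SVM attention estimation from multimodal features.
# Features and labels below are synthetic placeholders.
import numpy as np
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(1)

# One row per time window: [gaze-on-robot ratio, smile intensity,
# distance to robot (m), speech activity] -- assumed features.
X = rng.normal(size=(200, 4))
y = rng.integers(0, 3, size=200)   # judge-labeled attention: low/medium/high

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
model.fit(X_tr, y_tr)
print("agreement with held-out judge labels:",
      accuracy_score(y_te, model.predict(X_te)))
```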


2021 ◽  
Vol 3 ◽  
Author(s):  
Beatrice Biancardi ◽  
Soumia Dermouche ◽  
Catherine Pelachaud

Adaptation is a key mechanism in human–human interaction. In our work, we aim to endow embodied conversational agents with the ability to adapt their behavior when interacting with a human interlocutor. To better understand the main challenges concerning adaptive agents, we investigated the effects of three adaptation models for a virtual agent on the user's experience. The agent's adaptation mechanisms take the user's reactions into account and learn how to adapt on the fly during the interaction. The agent's adaptation is realized at several levels (i.e., the behavioral, conversational, and signal levels) and focuses on improving the user's experience along different dimensions (i.e., the user's impressions and engagement). In our first two studies, we learn the agent's multimodal behaviors and conversational strategies that dynamically optimize the user's engagement and impressions of the agent, taking these as input during the learning process. In our third study, our model takes both the user's and the agent's past behavior as input and predicts the agent's next behavior. Our adaptation models were evaluated through experimental studies sharing the same interaction scenario, with the agent playing the role of a virtual museum guide. These studies showed the impact of the adaptation mechanisms on the user's experience of the interaction and their perception of the agent: interacting with an adaptive agent tended to be perceived more positively than interacting with a nonadaptive one. Finally, the effects of people's a priori expectations about virtual agents found in our studies highlight the importance of taking the user's expectancies into account in human–agent interaction.
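As a hedged illustration of on-the-fly adaptation, the sketch below uses an epsilon-greedy bandit that selects the agent's next behavior strategy and updates its value estimate from an engagement reward. The strategy set and the simulated reward signal are our assumptions, not the paper's actual learning models.

```python
# Hedged sketch: online adaptation of agent behavior via an epsilon-greedy
# bandit. Strategies and the reward simulation are hypothetical.
import random

STRATEGIES = ["more_gestures", "more_smiles", "longer_turns"]
values = {s: 0.0 for s in STRATEGIES}   # estimated engagement per strategy
counts = {s: 0 for s in STRATEGIES}
EPSILON = 0.1

def choose() -> str:
    if random.random() < EPSILON:            # explore
        return random.choice(STRATEGIES)
    return max(values, key=values.get)       # exploit the best estimate

def update(strategy: str, engagement: float) -> None:
    """Incremental mean update of the strategy's estimated engagement."""
    counts[strategy] += 1
    values[strategy] += (engagement - values[strategy]) / counts[strategy]

# Simulated interaction loop: in the real system, the engagement signal
# would be estimated online from the user's reactions.
for turn in range(100):
    s = choose()
    engagement = random.gauss(0.5 + 0.2 * (s == "more_smiles"), 0.1)
    update(s, engagement)
print("best strategy so far:", max(values, key=values.get))
```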


Author(s):  
Tracy Sanders ◽  
Alexandra Kaplan ◽  
Ryan Koch ◽  
Michael Schwartz ◽  
P. A. Hancock

Objective: To understand, via experimental investigation, the influence of trust on use choice in human-robot interaction. Background: The general assumption that trusting a robot leads to using that robot has been identified previously, often by asking participants to choose between manually completing a task or using an automated aid. Our work further evaluates the relationship between trust and use choice and examines the factors impacting that choice. Method: An experiment was conducted wherein participants rated a robot on a trust scale and then decided whether to use that robotic agent or a human agent to complete a task. Participants provided explicit reasoning for their choices. Results: While we found statistical support for the "trust leads to use" relationship, qualitative results indicate that other factors are important as well. Conclusion: Results indicated that while trust leads to use, use is also heavily influenced by the specific task at hand. Users more often chose a robot for a dangerous task where loss of life is likely, citing safety as their primary concern. Conversely, users chose humans for a mundane warehouse task, mainly citing financial reasons, specifically fear of job and income loss for the human worker. Application: Understanding the factors driving use choice is key to appropriate interaction in the field of human-robot teaming.

