Exposure to robotic virtual agent affects adoption of intentional stance

2021
Author(s):  
Lorenzo Parenti ◽  
Serena Marchesi ◽  
Marwen Belkaid ◽  
Agnieszka Wykowska

Understanding how and when humans attribute intentionality to artificial agents is a key issue in contemporary human and technological sciences. This paper addresses the question of whether adoption of the intentional stance can be modulated by exposure to a 3D animated robot character, and whether this depends on the human-likeness of the character's behavior. We report three experiments investigating how appearance and behavioral features of a virtual character affect humans’ attribution of intentionality toward artificial social agents. The results show that adoption of the intentional stance can be modulated depending on participants' expectations about the agent. This study draws attention to specific features of virtual agents and offers insights for further work in the field of virtual interaction.

Author(s):  
Guglielmo Papagni ◽  
Sabine Koeszegi

Artificial agents are progressively becoming more present in everyday-life situations and more sophisticated in their interaction affordances. In some specific cases, like Google Duplex, GPT-3 bots or DeepMind’s AlphaGo Zero, their capabilities reach or exceed human levels. The use contexts of everyday life necessitate making such agents understandable by laypeople. At the same time, displaying human levels of social behavior has kindled the debate over the adoption of Dennett’s ‘intentional stance’. By means of a comparative analysis of the literature on robots and virtual agents, we defend the thesis that approaching these artificial agents ‘as if’ they had intentions and forms of social, goal-oriented rationality is the only way to deal with their complexity on a daily basis. Specifically, we claim that this is the only viable strategy for non-expert users to understand, predict and perhaps learn from artificial agents’ behavior in everyday social contexts. Furthermore, we argue that as long as agents are transparent about their design principles and functionality, attributing intentions to their actions is not only essential, but also ethical. Additionally, we propose design guidelines inspired by the debate over the adoption of the intentional stance.


2021
Author(s):  
Serena Marchesi ◽  
Davide De Tommaso ◽  
Jairo Pérez-Osorio ◽  
Agnieszka Wykowska

Humans interpret and predict others’ behaviors by ascribing intentions or beliefs to them, or, in other words, by adopting the intentional stance. Since artificial agents are increasingly populating our daily environments, the question arises whether (and under which conditions) humans would apply this “human model” to understand the behaviors of these new social agents. Thus, in a series of three experiments, we tested whether embedding humans in a social interaction with a humanoid robot displaying either human-like or machine-like behavior would modulate their initial bias towards adopting the intentional stance. Results showed that humans are indeed more prone to adopt the intentional stance after having interacted with a more socially available and human-like robot, whereas no modulation of the adoption of the intentional stance emerged towards a mechanistic robot. We conclude that short experiences with humanoid robots that presumably induce a “like-me” impression and social bonding increase the likelihood of adopting the intentional stance.


Decision
2016
Vol 3 (1)
pp. 40-53
Author(s):  
Peter C. Pantelis ◽  
Timothy Gerstner ◽  
Kevin Sanik ◽  
Ari Weinstein ◽  
Steven A. Cholewiak ◽  
...  

2014
Vol 23 (04)
pp. 1460020
Author(s):  
George Anastassakis ◽  
Themis Panayiotopoulos

Intelligent virtual agent behaviour is a crucial element of any virtual environment application, as it essentially brings the environment to life, introduces believability and realism, and enables complex interactions and evolution over time. However, the development of mechanisms for virtual agent perception and action is neither a trivial nor a straightforward task. In this paper we present a model of perception and action for intelligent virtual agents that meets specific requirements and can therefore be implemented systematically, integrates seamlessly and transparently with knowledge representation and intelligent reasoning mechanisms, is largely independent of virtual world implementation specifics, and enables virtual agent portability and reuse.
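
A minimal sketch of the kind of decoupling such a model calls for (the names and structure below are my own illustration, not the authors' API): agent logic is written only against abstract perception and action interfaces, while a world-specific adapter translates between engine internals and the symbolic representation consumed by the reasoning layer.

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass


@dataclass
class Percept:
    """Symbolic observation handed to the reasoning layer."""
    entity: str
    attribute: str
    value: object


class WorldAdapter(ABC):
    """World-specific bridge; the agent never touches engine internals."""

    @abstractmethod
    def sense(self, agent_id: str) -> list[Percept]: ...

    @abstractmethod
    def act(self, agent_id: str, action: str, **params) -> bool: ...


class VirtualAgent:
    """Agent logic written only against the abstract adapter, so the agent
    can be ported between virtual worlds by swapping the adapter."""

    def __init__(self, agent_id: str, world: WorldAdapter):
        self.agent_id = agent_id
        self.world = world
        self.beliefs: list[Percept] = []

    def step(self) -> None:
        # Perception: refresh the symbolic belief store from the current world.
        self.beliefs = self.world.sense(self.agent_id)
        # Action: a placeholder policy that greets the first perceived entity.
        if self.beliefs:
            self.world.act(self.agent_id, "greet", target=self.beliefs[0].entity)
```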


Author(s):  
Iskander Umarov ◽  
Maxim Mozgovoy

The rapid development of complex virtual worlds (most notably, in 3D computer and video games) introduces new challenges for the creation of virtual agents controlled by artificial intelligence (AI) systems. Two important subproblems that need to be addressed in this area are (a) believability and (b) effectiveness of agents’ behavior, i.e., the human-likeness of the characters and their ability to achieve their own goals. In this paper, the authors study current approaches to believability and effectiveness of AI behavior in virtual worlds. They examine the concepts of believability and effectiveness, and analyze several successful attempts to address these challenges.
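
Purely as an illustration of how the believability/effectiveness trade-off might be operationalized (this scoring scheme is an assumption of mine, not a method from the paper), an agent could rank candidate actions by blending a task-utility score with a human-likeness score:

```python
def select_action(candidates, utility, human_likeness, weight=0.5):
    """Pick the candidate action with the best weighted blend of
    effectiveness (task utility) and believability (similarity to
    observed human play), both assumed to be normalized to [0, 1]."""
    def score(action):
        return weight * utility(action) + (1.0 - weight) * human_likeness(action)
    return max(candidates, key=score)


# Toy example: a perfectly effective but robotic-looking move competes
# with a slightly weaker but more human-looking one.
actions = ["instant_headshot", "aim_then_shoot"]
utility = {"instant_headshot": 1.0, "aim_then_shoot": 0.8}.get
likeness = {"instant_headshot": 0.1, "aim_then_shoot": 0.9}.get
print(select_action(actions, utility, likeness, weight=0.4))  # -> "aim_then_shoot"
```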


Author(s):  
Wan Ching Ho ◽  
Kerstin Dautenhahn ◽  
Meiyii Lim ◽  
Sibylle Enz ◽  
Carsten Zoll ◽  
...  

This article presents research towards the development of a virtual learning environment (VLE) inhabited by intelligent virtual agents (IVAs) and modelling a scenario of inter-cultural interactions. The ultimate aim of this VLE is to allow users to reflect upon and learn about intercultural communication and collaboration. Rather than predefining the interactions among the virtual agents and scripting the possible interactions afforded by this environment, we pursue a bottom-up approach whereby inter-cultural communication emerges from interactions with and among autonomous agents and the user(s). The intelligent virtual agents inhabiting this environment are expected to broaden their knowledge of the world and of other agents, which may have different cultural backgrounds, through interaction. This work is part of a collaborative effort within a European research project called eCIRCUS. Specifically, this article focuses on our continuing research concerned with emotional knowledge learning in autobiographic social agents.
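
As a rough sketch of what emotional knowledge learning in an autobiographic agent could look like (a hypothetical structure of my own, not the eCIRCUS implementation): each agent stores interaction episodes tagged with the emotion they evoked, and recalls the dominant emotion when a similar situation recurs.

```python
from dataclasses import dataclass, field
from typing import Optional


@dataclass
class Episode:
    partner: str           # who the agent interacted with
    partner_culture: str   # cultural background, as perceived by the agent
    event: str             # e.g. "greeting", "gift_refused"
    emotion: str           # emotion felt at the time, e.g. "joy", "embarrassment"
    intensity: float       # subjective strength in [0, 1]


@dataclass
class AutobiographicMemory:
    episodes: list = field(default_factory=list)

    def remember(self, episode: Episode) -> None:
        self.episodes.append(episode)

    def expected_emotion(self, partner_culture: str, event: str) -> Optional[str]:
        """Recall the strongest emotion from past episodes of a similar situation."""
        matches = [e for e in self.episodes
                   if e.partner_culture == partner_culture and e.event == event]
        if not matches:
            return None
        return max(matches, key=lambda e: e.intensity).emotion


memory = AutobiographicMemory()
memory.remember(Episode("Lee", "culture_B", "gift_refused", "embarrassment", 0.8))
print(memory.expected_emotion("culture_B", "gift_refused"))  # -> "embarrassment"
```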


2021
Author(s):  
Jairo Pérez-Osorio ◽  
Eva Wiese ◽  
Agnieszka Wykowska

The present chapter provides an overview from the perspective of social cognitive neuroscience (SCN) regarding theory of mind (ToM) and joint attention (JA) as crucial mechanisms of social cognition and discusses how these mechanisms have been investigated in social interaction with artificial agents. In the final sections, the chapter reviews computational models of ToM and JA in social robots (SRs) and intelligent virtual agents (IVAs) and discusses the current challenges and future directions.
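
As a toy example of the mechanisms such computational models formalize (my own simplification, not taken from the chapter), joint attention can be reduced to estimating the partner's gaze target and re-orienting to the same object:

```python
import math


def gaze_target(head_pos, gaze_dir, objects, cone_deg=15.0):
    """Return the object whose direction from the partner's head lies closest
    to the gaze direction, if within a tolerance cone; a crude stand-in for
    real gaze-target estimation."""
    def angle_between(u, v):
        dot = sum(a * b for a, b in zip(u, v))
        nu = math.sqrt(sum(a * a for a in u)) or 1e-9
        nv = math.sqrt(sum(b * b for b in v)) or 1e-9
        return math.acos(max(-1.0, min(1.0, dot / (nu * nv))))

    best, best_angle = None, math.radians(cone_deg)
    for name, pos in objects.items():
        to_obj = tuple(p - h for p, h in zip(pos, head_pos))
        ang = angle_between(gaze_dir, to_obj)
        if ang < best_angle:
            best, best_angle = name, ang
    return best


# Joint attention: the robot attends to whatever its partner is looking at.
scene = {"cup": (1.0, 0.2, 0.0), "book": (0.0, 1.0, 0.0)}
print(gaze_target((0.0, 0.0, 0.0), (1.0, 0.25, 0.0), scene))  # -> "cup"
```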


Author(s):  
Boyoung Kim ◽  
Elizabeth Phillips

Robots are entering various domains of human societies, potentially creating more opportunities for people to perceive robots as social agents. We expect that having robots in proximity would create unique social learning situations in which humans spontaneously observe and imitate robots’ behaviors. At times, such imitation of robot behaviors may result in the spread of unsafe or unethical behaviors among humans. For responsible robot design, therefore, we argue that it is essential to understand the physical and psychological triggers of social learning. Grounded in the existing literature on social learning and the uncanny valley, we discuss the human-likeness of robot appearance, and the affective responses associated with robot appearance, as factors likely to either facilitate or deter social learning. We propose practical considerations for social learning and robot design.


2020
Vol 10 (16)
pp. 5636
Author(s):  
Wafaa Alsaggaf ◽  
Georgios Tsaramirsis ◽  
Norah Al-Malki ◽  
Fazal Qudus Khan ◽  
Miadah Almasry ◽  
...  

Computer-controlled virtual characters are essential parts of most virtual environments and especially computer games. Interaction between these virtual agents and human players has a direct impact on the believability of and immersion in the application. The facial animations of these characters are a key part of these interactions. The player expects the elements of the virtual world to act in a similar manner to the real world. For example, in a board game, if the human player wins, he/she would expect the computer-controlled character to be sad. However, the reactions, more specifically the facial expressions, of virtual characters in most games are not linked with the game events. Instead, they have pre-programmed or random behaviors without any understanding of what is really happening in the game. In this paper, we propose a virtual character facial expression probabilistic decision model that determines when various facial animations should be played. The model was developed by studying the facial expressions of human players while playing a computer video game that was also developed as part of this research. The model is represented in the form of trees, with the 15 extracted game events as roots and the 10 associated facial expression animations, each with its corresponding probability of occurrence, as leaves. Results indicated that only 1 out of 15 game events had a probability of producing an unexpected facial expression. It was found that the “win, lose, tie” game events have more dominant associations with the facial expressions than the rest of the game events, followed by “surprise” game events that occurred rarely, and finally, the “damage dealing” events.
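
At run time, such a model reduces to a lookup-and-sample step: given a game event, choose an animation according to the learned probabilities. The sketch below illustrates this; the event names, animation names, and probabilities are invented placeholders, not the values reported in the paper.

```python
import random

# Hypothetical excerpt of the learned event -> expression distributions
# (game events as roots, facial animations with probabilities as leaves).
EXPRESSION_MODEL = {
    "player_wins": {"sad": 0.6, "surprised": 0.3, "neutral": 0.1},
    "player_loses": {"happy": 0.7, "neutral": 0.3},
    "damage_dealt": {"angry": 0.5, "sad": 0.3, "neutral": 0.2},
}


def pick_expression(event: str) -> str:
    """Sample a facial animation for a game event from its probability table."""
    dist = EXPRESSION_MODEL.get(event, {"neutral": 1.0})
    animations, weights = zip(*dist.items())
    return random.choices(animations, weights=weights, k=1)[0]


# Example: decide the computer-controlled character's reaction when the human wins.
print(pick_expression("player_wins"))
```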

