The Instance Task: how to measure the mentalistic bias in human-robot interaction

2021 ◽  
Author(s):  
Nicolas Spatola ◽  
Serena Marchesi ◽  
Agnieszka Wykowska

In human-robot interactions, people tend to attribute mental states such as intentionality to robots in order to make sense of their behaviour: the intentional stance. These inferences deeply influence how one will consider, engage with, and behave towards robots. However, people differ greatly in their likelihood of adopting this intentional stance. Therefore, it seems crucial to assess these interindividual differences to better evaluate and understand human-robot interactions. In two studies, we developed and validated the structure of a task aiming at evaluating to what extent people adopt the intentional stance toward robots. The method consists of a task that probes participants’ stance by requiring them to choose the likelihood of an explanation (mentalistic vs. mechanistic) of the behaviour of a robot depicted in a naturalistic scenario. Results showed a reliable psychometric structure of the present task for evaluating the mentalistic bias of participants as a proxy of the intentional stance. We further discuss the importance of considering these interindividual differences in human-robot interaction studies and social robotics.

2019 ◽  
Author(s):  
Jairo Pérez-Osorio ◽  
Agnieszka Wykowska

In our daily lives, we need to predict and understand others’ behaviour in order to navigate through our social environment. Predictions concerning other humans’ behaviour usually refer to their mental states, such as beliefs or intentions. Such a predictive strategy is called adoption of the intentional stance. In this paper, we review literature related to the concept of intentional stance from the perspectives of philosophy, psychology, human development, culture and human-robot interaction. We propose that adopting the intentional stance might be a central factor in facilitating social attunement with artificial agents. The paper first reviews the theoretical considerations regarding the intentional stance, and examines literature related to the development of intentional stance across the life span. Subsequently, it discusses cultural norms as grounded in the intentional stance and finally, it focuses on the issue of adopting the intentional stance towards artificial agents, such as humanoid robots. At the dawn of the artificial intelligence era, the question of how (and when) we predict and explain robots’ behaviour by referring to mental states is of high interest. The paper concludes with the discussion of the ethical consequences of robots towards which we adopt the intentional stance, and sketches future directions in research on this topic.


2020 ◽  
Author(s):  
Agnieszka Wykowska ◽  
Jairo Pérez-Osorio ◽  
Stefan Kopp

This booklet is a collection of the position statements accepted for the HRI’20 conference workshop “Social Cognition for HRI: Exploring the relationship between mindreading and social attunement in human-robot interaction” (Wykowska, Perez-Osorio & Kopp, 2020). Unfortunately, due to the rapid unfolding of the novel coronavirus at the beginning of the present year, the conference, and consequently our workshop, were canceled. In light of these events, we decided to put together the position statements accepted for the workshop. The contributions collected in these pages highlight the role of the attribution of mental states to artificial agents in human-robot interaction, and specifically the quality and presence of the social attunement mechanisms that are known to make human interaction smooth, efficient, and robust. These papers also accentuate the importance of a multidisciplinary approach to advancing the understanding of the factors and consequences of social interactions with artificial agents.


2019 ◽  
Author(s):  
Elef Schellen ◽  
Agnieszka Wykowska

Natural and effective interaction with humanoid robots should involve social cognitive mechanisms of the human brain that normally facilitate social interaction between humans. Recent research has indicated that the presence and efficiency of these mechanisms in human-robot interaction (HRI) might be contingent on the adoption of a set of attitudes, mindsets and beliefs concerning the robot’s inner machinery. Current research is investigating the factors that influence these mindsets, and how they affect HRI. This review focuses on a specific mindset, namely the “intentional mindset” in which intentionality is attributed to another agent. More specifically, we focus on the concept of adopting the intentional stance towards robots, i.e., the tendency to predict and explain the robots’ behavior with reference to mental states. We discuss the relationship between adoption of intentional stance and lower-level mechanisms of social cognition, and we provide a critical evaluation of research methods currently employed in this field, highlighting common pitfalls in the measurement of attitudes and mindsets.


2021 ◽  
Vol 8 ◽  
Author(s):  
Nicolas Spatola ◽  
Serena Marchesi ◽  
Agnieszka Wykowska

In human-robot interactions, people tend to attribute to robots mental states such as intentions or desires in order to make sense of their behaviour. This cognitive strategy is termed the “intentional stance”. Adopting the intentional stance influences how one will consider, engage with, and behave towards robots. However, people differ in their likelihood of adopting the intentional stance towards robots. Therefore, it seems crucial to assess these interindividual differences. In two studies, we developed and validated the structure of a task aiming at evaluating to what extent people adopt the intentional stance towards robot actions: the Intentional Stance Task (IST). The IST probes participants’ stance by requiring them to choose the plausibility of a description (mentalistic vs. mechanistic) of the behaviour of a robot depicted in a scenario composed of three photographs. Results showed a reliable psychometric structure of the IST. This paper therefore concludes with the proposal of using the IST as a proxy for assessing the degree of adoption of the intentional stance towards robots.


Electronics ◽  
2020 ◽  
Vol 9 (2) ◽  
pp. 267
Author(s):  
Fernando Alonso Martin ◽  
María Malfaz ◽  
Álvaro Castro-González ◽  
José Carlos Castillo ◽  
Miguel Ángel Salichs

The success of social robots is directly linked to their ability to interact with people. Humans possess both verbal and non-verbal communication skills, and both are therefore essential for social robots to achieve natural human–robot interaction. This work focuses on the first of these, since the majority of social robots implement an interaction system endowed with verbal capacities. To implement verbal interaction, we must equip social robots with an artificial voice system. In robotics, a Text-to-Speech (TTS) system is the most common speech synthesis technique. The performance of a speech synthesizer is mainly evaluated by its similarity to the human voice in terms of intelligibility and expressiveness. In this paper, we present a comparative study of eight off-the-shelf TTS systems used in social robots. To carry out the study, 125 participants evaluated the performance of the following TTS systems: Google, Microsoft, Ivona, Loquendo, Espeak, Pico, AT&T, and Nuance. The evaluation was performed after observing videos in which a social robot communicates verbally using one TTS system. The participants completed a questionnaire rating each TTS system on four features: intelligibility, expressiveness, artificiality, and suitability. Four research questions were posed to determine whether it is possible to rank the TTS systems on each evaluated feature or whether, on the contrary, there are no significant differences between them. Our study shows that participants found differences between the evaluated TTS systems in terms of intelligibility, expressiveness, and artificiality. The experiments also indicated a relationship between the physical appearance of the robots (embodiment) and the suitability of TTS systems.


Author(s):  
Peter Remmers

Effects of anthropomorphism or zoomorphism in social robotics motivate two opposing tendencies in the philosophy and ethics of robots: a ‘rational’ tendency that discourages excessive anthropomorphism because it is based on an illusion and a ‘visionary’ tendency that promotes the relational reality of human-robot interaction. I argue for two claims: First, the opposition between these tendencies cannot be resolved and leads to a kind of technological antinomy. Second, we can deal with this antinomy by way of an analogy between our treatment of robots as social interactors and the perception of objects in pictures according to a phenomenological theory of image perception. Following this analogy, human- or animal-likeness in social robots is interpreted neither as a psychological illusion, nor as a relational reality. Instead, robots belong to a special ontological category shaped by perception and interaction, similar to objects in images.


Author(s):  
Andrew Best ◽  
Samantha F. Warta ◽  
Katelynn A. Kapalo ◽  
Stephen M. Fiore

Using research in social cognition as a foundation, we studied rapid versus reflective mental state attributions and the degree to which machine learning classifiers can be trained to make such judgments. We observed differences in response times between conditions, but did not find significant differences in the accuracy of mental state attributions. We additionally demonstrate how to train machine classifiers to identify mental states. We discuss advantages of using an interdisciplinary approach to understand and improve human-robot interaction and to further the development of social cognition in artificial intelligence.


Author(s):  
Joanna K. Malinowska

Given that empathy allows people to form and maintain satisfying social relationships with other subjects, it is no surprise that this is one of the most studied phenomena in the area of human–robot interaction (HRI). But the fact that the term ‘empathy’ has strong social connotations raises a question: can it be applied to robots? Can we actually use social terms and explanations in relation to these inanimate machines? In this article, I analyse the range of uses of the term empathy in the field of HRI studies and social robotics, and consider the substantial, functional and relational positions on this issue. I focus on the relational (cooperational) perspective presented by Luisa Damiano and Paul Dumouchel, who interpret emotions (together with empathy) as being the result of affective coordination. I also reflect on the criteria that should be used to determine when, in such relations, we are dealing with actual empathy.


2017 ◽  
Author(s):  
Takayuki Kanda ◽  
Hiroshi Ishiguro
