Intentional mindset towards robots – open questions and methodological challenges

2019 ◽  
Author(s):  
Elef Schellen ◽  
Agnieszka Wykowska

Natural and effective interaction with humanoid robots should involve social cognitive mechanisms of the human brain that normally facilitate social interaction between humans. Recent research has indicated that the presence and efficiency of these mechanisms in human-robot interaction (HRI) might be contingent on the adoption of a set of attitudes, mindsets and beliefs concerning the robot’s inner machinery. Current research is investigating the factors that influence these mindsets, and how they affect HRI. This review focuses on a specific mindset, namely the “intentional mindset” in which intentionality is attributed to another agent. More specifically, we focus on the concept of adopting the intentional stance towards robots, i.e., the tendency to predict and explain the robots’ behavior with reference to mental states. We discuss the relationship between adoption of intentional stance and lower-level mechanisms of social cognition, and we provide a critical evaluation of research methods currently employed in this field, highlighting common pitfalls in the measurement of attitudes and mindsets.

2021 ◽  
Author(s):  
Serena Marchesi ◽  
Nicolas Spatola ◽  
Agnieszka Wykowska

Evidence from cognitive psychology has shown that cultural differences influence human social cognition, leading to differential activation of social cognitive mechanisms. A growing body of literature in human-robot interaction investigates how culture shapes cognitive processes such as anthropomorphism and mind attribution when humans face artificial agents, such as robots. The present paper aims to disentangle the relationship between cultural values, anthropomorphism, and intentionality attribution to robots, in the context of intentional stance theory. We administered a battery of tests to 600 participants from various nations worldwide and modeled our data with a path model. Results showed a consistent direct influence of collectivism on anthropomorphism but not on the adoption of the intentional stance. We therefore explored this result further with a mediation analysis, which revealed anthropomorphism as a true mediator between collectivism and the adoption of the intentional stance. We conclude that our findings extend previous literature by showing that the adoption of the intentional stance towards humanoid robots depends on anthropomorphic attribution in the context of cultural values.
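The mediation logic described in this abstract (collectivism → anthropomorphism → intentional stance) can be illustrated with a minimal sketch on synthetic data. The variable names, simulated effect sizes, and the ordinary-least-squares helper below are illustrative assumptions for exposition; they are not the study's data, measures, or analysis code:

```python
# Hypothetical Baron & Kenny-style mediation sketch:
#   X = collectivism, M = anthropomorphism, Y = intentional stance (all simulated).
import random

def ols(y, xs):
    """Least-squares coefficients for y ~ 1 + xs, via the normal equations."""
    n = len(y)
    cols = [[1.0] * n] + xs  # design-matrix columns, intercept first
    k = len(cols)
    # Build X^T X and X^T y
    A = [[sum(cols[i][t] * cols[j][t] for t in range(n)) for j in range(k)]
         for i in range(k)]
    b = [sum(cols[i][t] * y[t] for t in range(n)) for i in range(k)]
    # Gaussian elimination with partial pivoting
    for i in range(k):
        p = max(range(i, k), key=lambda r: abs(A[r][i]))
        A[i], A[p] = A[p], A[i]
        b[i], b[p] = b[p], b[i]
        for r in range(i + 1, k):
            f = A[r][i] / A[i][i]
            for c in range(i, k):
                A[r][c] -= f * A[i][c]
            b[r] -= f * b[i]
    beta = [0.0] * k
    for i in reversed(range(k)):
        beta[i] = (b[i] - sum(A[i][j] * beta[j] for j in range(i + 1, k))) / A[i][i]
    return beta

random.seed(0)
n = 600  # matches the sample size reported in the abstract
X = [random.gauss(0, 1) for _ in range(n)]                   # collectivism
M = [0.6 * x + random.gauss(0, 1) for x in X]                # anthropomorphism
Y = [0.5 * m + random.gauss(0, 1) for m in M]                # intentional stance

a = ols(M, [X])[1]          # path a: X -> M
c = ols(Y, [X])[1]          # total effect c: X -> Y
full = ols(Y, [X, M])
c_prime, b_path = full[1], full[2]  # direct effect c' and path b: M -> Y

print(f"total c = {c:.2f}, direct c' = {c_prime:.2f}, indirect a*b = {a * b_path:.2f}")
```

In this simulated setup the direct effect c' is near zero while the indirect effect a*b is sizeable, i.e., the mediator carries the association, which is the pattern the abstract describes for anthropomorphism.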


2021 ◽  
Author(s):  
Ziggy O'Reilly ◽  
Davide Ghiglino ◽  
Nicolas Spatola ◽  
Agnieszka Wykowska

To enhance collaboration between humans and robots, it might be important to trigger towards humanoid robots social cognitive mechanisms similar to those triggered towards humans, such as the adoption of the intentional stance (i.e., explaining an agent's behavior with reference to mental states). This study aimed (1) to measure whether a film modulates participants' tendency to adopt the intentional stance toward a humanoid robot, and (2) to investigate whether autistic traits affect this adoption. We administered two subscales of the InStance Test (IST) (i.e., the 'isolated robot' subscale and the 'social robot' subscale) before and after participants watched a film depicting an interaction between a humanoid robot and a human. On the isolated robot subscale, individuals with low autistic traits were more likely to adopt the intentional stance towards a humanoid robot after they watched the film, but there was no effect on individuals with high autistic traits. On the social robot subscale (i.e., when the robot is interacting with a human), both individuals with low and high autistic traits decreased in their adoption of the intentional stance after they watched the film. This suggests that the content of the narrative and an individual's social cognitive abilities affect the degree to which the intentional stance towards a humanoid robot is adopted.


2019 ◽  
Author(s):  
Jairo Pérez-Osorio ◽  
Agnieszka Wykowska

In our daily lives, we need to predict and understand others' behaviour in order to navigate through our social environment. Predictions concerning other humans' behaviour usually refer to their mental states, such as beliefs or intentions. Such a predictive strategy is called adoption of the intentional stance. In this paper, we review literature related to the concept of the intentional stance from the perspectives of philosophy, psychology, human development, culture and human-robot interaction. We propose that adopting the intentional stance might be a central factor in facilitating social attunement with artificial agents. The paper first reviews theoretical considerations regarding the intentional stance, and then examines literature on the development of the intentional stance across the lifespan. Subsequently, it discusses cultural norms as grounded in the intentional stance and, finally, focuses on the issue of adopting the intentional stance towards artificial agents, such as humanoid robots. At the dawn of the artificial intelligence era, the question of how (and when) we predict and explain robots' behaviour by referring to mental states is of high interest. The paper concludes with a discussion of the ethical consequences of adopting the intentional stance towards robots, and sketches future directions for research on this topic.


2018 ◽  
Vol 226 (2) ◽  
pp. 98-109 ◽  
Author(s):  
Antonella Marchetti ◽  
Federico Manzi ◽  
Shoji Itakura ◽  
Davide Massaro

Abstract. This review focuses on some relevant issues concerning the relationship between theory of mind (ToM) and humanoid robots. Humanoid robots are employed in different everyday-life contexts, so it seems relevant to question whether the relationships between human beings and humanoids can be characterized by a mode of interaction typical of the relationships between human beings, that is, the attribution of mental states. Because ToM development continuously undergoes changes from early childhood to late adulthood, we adopted a lifespan perspective. We analyzed contributions from the literature by organizing them around the partition between “mental states and actions” and “human-like features.” Finally, we considered how studying human–robot interaction, within a ToM context, can contribute to our understanding of the intersubjective nature of this interaction.


2020 ◽  
Author(s):  
Agnieszka Wykowska ◽  
Jairo Pérez-Osorio ◽  
Stefan Kopp

This booklet is a collection of the position statements accepted for the HRI'20 conference workshop “Social Cognition for HRI: Exploring the relationship between mindreading and social attunement in human-robot interaction” (Wykowska, Pérez-Osorio & Kopp, 2020). Unfortunately, due to the rapid spread of the novel coronavirus at the beginning of the year, the conference, and consequently our workshop, were canceled. In light of these events, we decided to put together the position statements accepted for the workshop. The contributions collected in these pages highlight the role of the attribution of mental states to artificial agents in human-robot interaction, and specifically the quality and presence of the social attunement mechanisms that are known to make human interaction smooth, efficient, and robust. These papers also accentuate the importance of a multidisciplinary approach to advancing the understanding of the factors and consequences of social interactions with artificial agents.


Leonardo ◽  
2021 ◽  
pp. 1-13
Author(s):  
Ziggy O’Reilly ◽  
David Silvera-Tawil ◽  
Ionat Zurr ◽  
Diana Tan

Abstract. Theory of Mind (ToM), a social cognitive ability commonly underdeveloped in autistic individuals, is necessary to attribute mental states to oneself and others. Research into robot-assisted interventions to improve ToM ability in autistic children has become increasingly popular. However, no appropriate task currently exists to measure the degree of efficacy of robot-assisted interventions targeting ToM ability. In this paper, the authors demonstrate how animation techniques and principles can be leveraged to develop and produce videos of humanoid robots interacting, which could selectively measure ToM.


2016 ◽  
Vol 371 (1693) ◽  
pp. 20150375 ◽  
Author(s):  
Agnieszka Wykowska ◽  
Thierry Chaminade ◽  
Gordon Cheng

In this paper, we propose that experimental protocols involving artificial agents, in particular embodied humanoid robots, provide insightful information regarding social cognitive mechanisms in the human brain. Using artificial agents allows for manipulation and control of various parameters of behaviour, appearance and expressiveness in one of the interaction partners (the artificial agent), and for examining the effects of these parameters on the other interaction partner (the human). At the same time, using artificial agents means introducing the presence of artificial, yet human-like, systems into the human social sphere. This allows for testing fundamental human mechanisms of social cognition in a controlled, but ecologically valid, manner, both at the behavioural and at the neural level. This paper will review existing literature reporting studies in which artificial embodied agents have been used to study social cognition, and will address the question of whether various mechanisms of social cognition (ranging from lower- to higher-order cognitive processes) are evoked by artificial agents to the same extent as by natural agents, humans in particular. Increasing the understanding of how behavioural and neural mechanisms of social cognition respond to artificial anthropomorphic agents provides empirical answers to the conundrum ‘What is a social agent?’


2019 ◽  
Author(s):  
Jairo Pérez-Osorio ◽  
Serena Marchesi ◽  
Davide Ghiglino ◽  
Melis Ince ◽  
Agnieszka Wykowska

Expectations about others’ behavior based on mental states modulate the way we interact with people. On the brink of the introduction of robots into our social environment, the question of whether humans would use the same strategy when interacting with artificial agents gains relevance. Recent research shows that people can adopt a mentalistic stance to explain the behavior of humanoid robots [1]. Adopting such a strategy might be mediated by, among other factors, the expectations that people have about robots and technology. The present study aimed to create a questionnaire to evaluate such expectations and to test whether these priors in fact modulate the adoption of the intentional stance. We found that people’s expectations directed at a particular robot platform influence the adoption of mental-state-based explanations of an artificial agent’s behavior. Lower expectations were associated with anxiety during interaction with robots and with neuroticism, whereas higher expectations were linked to less discomfort when interacting with robots and a higher degree of openness. Our findings suggest that platform-directed expectations might also play a crucial role in HRI and in the adoption of the intentional stance toward artificial agents.


2019 ◽  
Author(s):  
Jairo Pérez-Osorio ◽  
Agnieszka Wykowska

Day by day, humans need to predict and understand others’ behavior in order to efficiently navigate our social environment. When making predictions about what others are going to do next, we refer to their mental states, such as beliefs or intentions. At the dawn of a new era, in which robots will be among us at home and in the office, one needs to ask whether (or when) we predict and also explain robots’ behavior with reference to mental states. In other words, do we adopt the intentional stance (Dennett in The Intentional Stance. MIT Press, Cambridge (1987) [1]) also towards artificial agents, especially those with humanoid shape and human-like behavior? What plays a role in adopting the intentional stance towards robots? Does adopting an intentional stance affect our social attunement with artificial agents? In this chapter, we first discuss the general approach that we take towards examining these questions: using objective methods of cognitive neuroscience to test social attunement as a function of adopting the intentional stance. We also describe our newly developed method for examining whether participants adopt the intentional stance towards an artificial agent. The chapter concludes with an outlook on the questions that still need to be addressed, such as the ethical consequences and societal impact of robots with which we attune socially, and towards which we adopt the intentional stance.


2011 ◽  
Vol 26 (2) ◽  
pp. 159-176 ◽  
Author(s):  
Catherine J. Lutz-Zois ◽  
Carolyn E. Roecker Phelps ◽  
Adam C. Reichle

Using a sample of 1,117 female college students, this study examined emotional, behavioral, and social-cognitive mechanisms of sexual abuse revictimization. It was hypothesized that numbing, alexithymia, alcohol problems, mistrust, and adult attachment dimensions would mediate the relationship between childhood sexual abuse (CSA) and adult sexual abuse (ASA). Aside from the close adult attachment dimension, the results indicated that all of the hypothesized mediators were associated with CSA. However, only alcohol problems and mistrust met the necessary conditions of mediation. The results with respect to mistrust are especially noteworthy, as they constitute one of the first empirical demonstrations of a social-cognitive mechanism for sexual abuse revictimization. Thus, these results enhance our understanding of interpersonal mediators of the relationship between CSA and ASA and provide a new direction for future research.

