Modulating the Intentional Stance: Humanoid Robots, Narrative and Autistic Traits

2021 ◽  
Author(s):  
Ziggy O'Reilly ◽  
Davide Ghiglino ◽  
Nicolas Spatola ◽  
Agnieszka Wykowska

To enhance collaboration between humans and robots, it might be important to trigger, towards humanoid robots, social cognitive mechanisms similar to those triggered towards humans, such as the adoption of the intentional stance (i.e., explaining an agent’s behavior with reference to mental states). This study aimed (1) to measure whether a film modulates participants’ tendency to adopt the intentional stance toward a humanoid robot; and (2) to investigate whether autistic traits affect this adoption. We administered two subscales of the InStance Test (IST) (i.e., the ‘isolated robot’ subscale and the ‘social robot’ subscale) before and after participants watched a film depicting an interaction between a humanoid robot and a human. On the isolated robot subscale, individuals with low autistic traits were more likely to adopt the intentional stance towards a humanoid robot after they watched the film, but there was no effect for individuals with high autistic traits. On the social robot subscale (i.e., when the robot is interacting with a human), individuals with both low and high autistic traits decreased in their adoption of the intentional stance after they watched the film. This suggests that the content of the narrative and an individual’s social cognitive abilities affect the degree to which the intentional stance towards a humanoid robot is adopted.
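For illustration, a minimal sketch of how a pre/post-by-trait-group design of this kind could be modeled in Python is given below; the data file, column names, and the choice of a linear mixed model are assumptions, not the study’s actual analysis pipeline.

    # Hypothetical sketch: IST subscale scores measured before and after the
    # film, crossed with autistic-trait group (low vs. high). All column
    # names and the input file are illustrative.
    import pandas as pd
    import statsmodels.formula.api as smf

    long_df = pd.read_csv("ist_prepost.csv")  # one row per subject x time point

    # time (pre/post) x trait group interaction, random intercept per subject
    model = smf.mixedlm(
        "ist_isolated ~ time * trait_group",
        data=long_df,
        groups=long_df["subject_id"],
    ).fit()
    print(model.summary())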

2019 ◽  
Author(s):  
Elef Schellen ◽  
Agnieszka Wykowska

Natural and effective interaction with humanoid robots should involve social cognitive mechanisms of the human brain that normally facilitate social interaction between humans. Recent research has indicated that the presence and efficiency of these mechanisms in human-robot interaction (HRI) might be contingent on the adoption of a set of attitudes, mindsets and beliefs concerning the robot’s inner machinery. Current research is investigating the factors that influence these mindsets, and how they affect HRI. This review focuses on a specific mindset, namely the “intentional mindset” in which intentionality is attributed to another agent. More specifically, we focus on the concept of adopting the intentional stance towards robots, i.e., the tendency to predict and explain the robots’ behavior with reference to mental states. We discuss the relationship between the adoption of the intentional stance and lower-level mechanisms of social cognition, and we provide a critical evaluation of research methods currently employed in this field, highlighting common pitfalls in the measurement of attitudes and mindsets.


2021 ◽  
Author(s):  
Serena Marchesi ◽  
Nicolas Spatola ◽  
Agnieszka Wykowska

Evidence from cognitive psychology has shown that cultural differences influence human social cognition, leading to differential activation of social cognitive mechanisms. A growing corpus of literature in Human-Robot Interaction is investigating how culture shapes cognitive processes like anthropomorphism or mind attribution when humans face artificial agents, such as robots. The present paper aims at disentangling the relationship between cultural values, anthropomorphism, and intentionality attribution to robots, in the context of the intentional stance theory. We administered a battery of tests to 600 participants from various nations worldwide and modeled our data with a path model. Results showed a consistent direct influence of collectivism on anthropomorphism but not on the adoption of the intentional stance. Therefore, we further explored this result with a mediation analysis, which revealed anthropomorphism as a true mediator between collectivism and the adoption of the intentional stance. We conclude that our findings extend previous literature by showing that the adoption of the intentional stance towards humanoid robots depends on anthropomorphic attribution in the context of cultural values.
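As a rough illustration of the mediation logic (collectivism to anthropomorphism to intentional stance), a minimal Baron-and-Kenny-style sketch in Python follows; the variable names and data file are hypothetical, and the original study used a path model rather than this simplified regression sequence.

    # Hypothetical mediation sketch: X = collectivism, M = anthropomorphism,
    # Y = intentional-stance score. Columns and file are illustrative.
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("stance_survey.csv")

    # total effect of X on Y
    total = smf.ols("intentional_stance ~ collectivism", data=df).fit()
    # a-path: X -> M
    a_path = smf.ols("anthropomorphism ~ collectivism", data=df).fit()
    # b-path and direct effect: M -> Y controlling for X; mediation is
    # suggested when a*b is reliable and the direct effect of X shrinks
    b_path = smf.ols(
        "intentional_stance ~ anthropomorphism + collectivism", data=df
    ).fit()

    indirect = a_path.params["collectivism"] * b_path.params["anthropomorphism"]
    print(f"total effect:   {total.params['collectivism']:.3f}")
    print(f"indirect (a*b): {indirect:.3f}")
    print(f"direct effect:  {b_path.params['collectivism']:.3f}")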


Leonardo ◽  
2021 ◽  
pp. 1-13
Author(s):  
Ziggy O’Reilly ◽  
David Silvera-Tawil ◽  
Ionat Zurr ◽  
Diana Tan

Theory of Mind (ToM), a social cognitive ability commonly underdeveloped in autistic individuals, is necessary to attribute mental states to oneself and others. Research into robot-assisted interventions to improve ToM ability in autistic children has become increasingly popular. However, no appropriate task currently exists to measure the efficacy of robot-assisted interventions targeting ToM ability. In this paper, the authors demonstrate how animation techniques and principles can be leveraged to develop and produce videos of humanoid robots interacting, which could selectively measure ToM.


2021 ◽  
Vol 8 ◽  
Author(s):  
Serena Marchesi ◽  
Francesco Bossi ◽  
Davide Ghiglino ◽  
Davide De Tommaso ◽  
Agnieszka Wykowska

The presence of artificial agents in our everyday lives is continuously increasing. Hence, the question of how human social cognition mechanisms are activated in interactions with artificial agents, such as humanoid robots, is frequently being asked. One interesting question is whether humans perceive humanoid robots as mere artifacts (interpreting their behavior with reference to their function, thereby adopting the design stance) or as intentional agents (interpreting their behavior with reference to mental states, thereby adopting the intentional stance). Due to their humanlike appearance, humanoid robots might be capable of evoking the intentional stance. On the other hand, the knowledge that humanoid robots are only artifacts should call for adopting the design stance. Thus, observing a humanoid robot might evoke a cognitive conflict between the natural tendency of adopting the intentional stance and the knowledge about the actual nature of robots, which should elicit the design stance. In the present study, we investigated the cognitive conflict hypothesis by measuring participants’ pupil dilation during the completion of the InStance Test (IST). Prior to each pupillary recording, participants were instructed to observe the humanoid robot iCub behaving in two different ways (either machine-like or humanlike behavior). Results showed that pupil dilation and response time patterns were predictive of individual biases in the adoption of the intentional or design stance in the IST. These results may suggest individual differences in mental effort and cognitive flexibility in reading and interpreting the behavior of an artificial agent.
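To make the measurement idea concrete, here is a hypothetical sketch of how trial-level pupil and response-time data could be related to stance choices; the file, column names, and baseline-correction scheme are assumptions rather than the study’s actual preprocessing.

    # Hypothetical sketch: predict whether a trial's IST response was
    # mentalistic (1) or mechanistic (0) from baseline-corrected pupil
    # dilation and response time. All names are illustrative.
    import pandas as pd
    import statsmodels.formula.api as smf

    trials = pd.read_csv("pupil_trials.csv")

    # baseline-correct pupil size (mean pupil in a pre-stimulus window)
    trials["pupil_dilation"] = trials["pupil_mean"] - trials["pupil_baseline"]

    model = smf.logit(
        "chose_mentalistic ~ pupil_dilation + response_time", data=trials
    ).fit()
    print(model.summary())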


2020 ◽  
Author(s):  
Serena Marchesi ◽  
Nicolas Spatola ◽  
Jairo Pérez-Osorio ◽  
Agnieszka Wykowska

Humans interpret and predict the behavior of others with reference to mental states or, in other words, by means of adopting the intentional stance. How to measure the likelihood of adopting the intentional stance towards humanoid robots still remains to be addressed. The present study investigated to what extent individuals adopt the intentional stance in explaining the behavior of two agents (a humanoid robot and a human). The paradigm required participants to judge mentalistic or mechanistic descriptions as fitting or not fitting the displayed behaviors. We were able to measure their acceptance/rejection rate (as an explicit measure) and their response time (as an implicit measure). In addition, we examined the relationship between adopting the intentional stance and anthropomorphism. Our results show that at the explicit level (acceptance/rejection of the descriptions), participants are more likely to use mentalistic (compared to mechanistic) descriptions to explain other humans’ behavior. Conversely, when it comes to a humanoid robot, they are more likely to choose mechanistic descriptions. Interestingly, at the implicit level (response times), while for the human agent we found faster response times for the mentalistic descriptions, we found no difference in response times associated with the robotic agent. Furthermore, cluster analysis on individual differences in anthropomorphism revealed that participants with a high tendency to anthropomorphize accept mentalistic descriptions faster. In the light of these results, we argue that, at the implicit level, both stances are comparable in terms of “the best fit” to explain the behavior of a humanoid robot. Moreover, we argue that the decisional process about which stance is best to adopt towards a humanoid robot is influenced by individual differences between observers, such as the tendency to anthropomorphize non-human agents.
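A hypothetical sketch of the cluster-analysis step is shown below: participants are grouped by their anthropomorphism score, and response times for accepted mentalistic descriptions are compared across clusters. The column names, file, and two-cluster choice are assumptions for illustration.

    # Hypothetical sketch: cluster participants on anthropomorphism and
    # compare RTs for accepted mentalistic descriptions. Names illustrative.
    import pandas as pd
    from sklearn.cluster import KMeans

    subjects = pd.read_csv("participants.csv")  # one row per participant

    km = KMeans(n_clusters=2, n_init=10, random_state=0)
    subjects["cluster"] = km.fit_predict(subjects[["anthropomorphism_score"]])

    print(subjects.groupby("cluster")["rt_mentalistic_accept"].mean())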


2018 ◽  
Author(s):  
Serena Marchesi ◽  
Davide Ghiglino ◽  
Francesca Ciardo ◽  
Jairo Pérez-Osorio ◽  
Ebru Baykara ◽  
...  

In daily social interactions, we need to be able to navigate efficiently through our social environment. According to Dennett (1971), explaining and predicting others’ behaviour with reference to mental states (adopting the intentional stance) allows efficient social interaction. Today we also routinely interact with artificial agents: from Apple’s Siri to GPS navigation systems. In the near future, we might start casually interacting with robots. This paper addresses the question of whether the intentional stance can also be adopted with respect to artificial agents. We propose a new tool to explore whether people adopt the intentional stance towards an artificial agent (a humanoid robot). The tool consists of a questionnaire that probes participants’ stance by requiring them to rate the likelihood of a mentalistic vs. mechanistic explanation of a behaviour of the robot iCub depicted in a naturalistic scenario (a sequence of photographs). The results of the first study conducted with this questionnaire showed that although the explanations were somewhat biased towards the mechanistic stance, a substantial number of mentalistic explanations were also given. This suggests that it is possible to induce adoption of the intentional stance towards artificial agents, at least in some contexts.
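To show how responses to such a questionnaire might be scored, a minimal sketch follows; the 0-100 bipolar scale (0 = fully mechanistic, 100 = fully mentalistic) and the averaging rule are assumptions for illustration, not the published scoring procedure.

    # Hypothetical scoring sketch for a mentalistic-vs-mechanistic
    # questionnaire: each item is coded 0-100, higher = more mentalistic.
    from statistics import mean

    def ist_score(item_responses: list[float]) -> float:
        """Average per-item scores; higher values = more intentional stance."""
        if not all(0 <= r <= 100 for r in item_responses):
            raise ValueError("responses must lie on the 0-100 scale")
        return mean(item_responses)

    # example: a participant leaning slightly mechanistic overall
    print(ist_score([35, 40, 55, 20, 45]))  # -> 39.0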


2019 ◽  
Author(s):  
Jairo Pérez-Osorio ◽  
Agnieszka Wykowska

In our daily lives, we need to predict and understand others’ behaviour in order to navigate through our social environment. Predictions concerning other humans’ behaviour usually refer to their mental states, such as beliefs or intentions. Such a predictive strategy is called adoption of the intentional stance. In this paper, we review literature related to the concept of the intentional stance from the perspectives of philosophy, psychology, human development, culture and human-robot interaction. We propose that adopting the intentional stance might be a central factor in facilitating social attunement with artificial agents. The paper first reviews the theoretical considerations regarding the intentional stance and examines literature related to the development of the intentional stance across the life span. Subsequently, it discusses cultural norms as grounded in the intentional stance and, finally, it focuses on the issue of adopting the intentional stance towards artificial agents, such as humanoid robots. At the dawn of the artificial intelligence era, the question of how (and when) we predict and explain robots’ behaviour by referring to mental states is of high interest. The paper concludes with a discussion of the ethical consequences of adopting the intentional stance towards robots, and sketches future directions in research on this topic.


2021 ◽  
pp. 697-706
Author(s):  
Ziggy O’Reilly ◽  
Davide Ghiglino ◽  
Nicolas Spatola ◽  
Agnieszka Wykowska

2016 ◽  
Vol 371 (1693) ◽  
pp. 20150375 ◽  
Author(s):  
Agnieszka Wykowska ◽  
Thierry Chaminade ◽  
Gordon Cheng

In this paper, we propose that experimental protocols involving artificial agents, in particular embodied humanoid robots, provide insightful information regarding social cognitive mechanisms in the human brain. Using artificial agents allows for manipulation and control of various parameters of behaviour, appearance and expressiveness in one of the interaction partners (the artificial agent), and for examining the effect of these parameters on the other interaction partner (the human). At the same time, using artificial agents means introducing the presence of artificial, yet human-like, systems into the human social sphere. This allows for testing fundamental mechanisms of human social cognition, both at the behavioural and at the neural level, in a controlled but ecologically valid manner. This paper reviews existing literature that reports studies in which artificial embodied agents have been used to study social cognition, and addresses the question of whether various mechanisms of social cognition (ranging from lower- to higher-order cognitive processes) are evoked by artificial agents to the same extent as by natural agents, humans in particular. Increasing the understanding of how behavioural and neural mechanisms of social cognition respond to artificial anthropomorphic agents provides empirical answers to the conundrum ‘What is a social agent?’

