I Am Looking for Your Mind: Pupil Dilation Predicts Individual Differences in Sensitivity to Hints of Human-Likeness in Robot Behavior

2021
Vol 8
Author(s):  
Serena Marchesi ◽  
Francesco Bossi ◽  
Davide Ghiglino ◽  
Davide De Tommaso ◽  
Agnieszka Wykowska

The presence of artificial agents in our everyday lives is continuously increasing. Hence, the question of how human social cognition mechanisms are activated in interactions with artificial agents, such as humanoid robots, is frequently being asked. One interesting question is whether humans perceive humanoid robots as mere artifacts (interpreting their behavior with reference to their function, thereby adopting the design stance) or as intentional agents (interpreting their behavior with reference to mental states, thereby adopting the intentional stance). Due to their humanlike appearance, humanoid robots might be capable of evoking the intentional stance. On the other hand, the knowledge that humanoid robots are only artifacts should call for adopting the design stance. Thus, observing a humanoid robot might evoke a cognitive conflict between the natural tendency to adopt the intentional stance and the knowledge about the actual nature of robots, which should elicit the design stance. In the present study, we investigated the cognitive conflict hypothesis by measuring participants’ pupil dilation during the completion of the InStance Test (IST). Prior to each pupillary recording, participants were instructed to observe the humanoid robot iCub behaving in two different ways (either machine-like or humanlike behavior). Results showed that pupil dilation and response time patterns were predictive of individual biases in the adoption of the intentional or design stance in the IST. These results may suggest individual differences in mental effort and cognitive flexibility in reading and interpreting the behavior of an artificial agent.
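To make the reported relationship concrete, the sketch below shows, in broad strokes, how per-participant pupil-dilation and response-time features could be related to a binary stance bias using a simple logistic-regression classifier. It uses simulated data; the variable names, feature definitions, and modeling choice are illustrative assumptions and not the authors' analysis pipeline.

```python
# Minimal sketch: predicting stance bias from pupillometry and response times.
# All data are simulated and all names are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_participants = 60

# Simulated per-participant features: mean pupil-size change (arbitrary units)
# and mean response time (s) during the IST.
pupil_dilation = rng.normal(0.15, 0.05, n_participants)
response_time = rng.normal(2.5, 0.6, n_participants)
X = np.column_stack([pupil_dilation, response_time])

# Simulated binary label: 1 = bias toward the intentional stance,
# 0 = bias toward the design stance (generated at random, purely for illustration).
stance_bias = rng.integers(0, 2, n_participants)

model = LogisticRegression()
scores = cross_val_score(model, X, stance_bias, cv=5)
print(f"Cross-validated accuracy: {scores.mean():.2f}")
```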


2020
Author(s):  
Serena Marchesi ◽  
Nicolas Spatola ◽  
Jairo Pérez-Osorio ◽  
Agnieszka Wykowska

Humans interpret and predict the behavior of others with reference to mental states or, in other words, by adopting the intentional stance. How to measure the likelihood of adopting the intentional stance towards humanoid robots still remains to be addressed. The present study investigated to what extent individuals adopt the intentional stance in explaining the behavior of two agents (a humanoid robot and a human). The paradigm required participants to judge whether mentalistic or mechanistic descriptions fit the displayed behaviors. This allowed us to measure their acceptance/rejection rate (as an explicit measure) and their response time (as an implicit measure). In addition, we examined the relationship between adopting the intentional stance and anthropomorphism. Our results show that at the explicit level (acceptance/rejection of the descriptions), participants are more likely to use mentalistic (compared to mechanistic) descriptions to explain other humans’ behavior. Conversely, when it comes to a humanoid robot, they are more likely to choose mechanistic descriptions. Interestingly, at the implicit level (response times), we found faster response times for mentalistic descriptions for the human agent, but no difference in response times for the robotic agent. Furthermore, a cluster analysis on the individual differences in anthropomorphism revealed that participants with a high tendency to anthropomorphize tend to accept mentalistic descriptions faster. In light of these results, we argue that, at the implicit level, both stances are comparable in terms of “the best fit” to explain the behavior of a humanoid robot. Moreover, we argue that the decision about which stance to adopt towards a humanoid robot is influenced by individual differences among observers, such as the tendency to anthropomorphize non-human agents.
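As an illustration of the clustering step mentioned above, the following is a minimal sketch that splits simulated anthropomorphism scores into two groups with k-means and compares response times between them; the variable names and the choice of k-means are assumptions for illustration, not the authors' exact procedure.

```python
# Minimal sketch: cluster participants by anthropomorphism tendency, then
# compare response times for mentalistic descriptions between clusters.
# All data are simulated and all names are hypothetical.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
n_participants = 80

anthro_scores = rng.normal(3.5, 1.0, n_participants)   # questionnaire means (e.g., 1-7 scale)
rt_mentalistic = rng.normal(2.2, 0.5, n_participants)  # RT (s) when accepting mentalistic descriptions

# Cluster participants on anthropomorphism scores alone (high vs. low tendency).
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0)
labels = kmeans.fit_predict(anthro_scores.reshape(-1, 1))

# Compare mean response times between the two clusters.
for label in (0, 1):
    mean_rt = rt_mentalistic[labels == label].mean()
    print(f"Cluster {label}: mean RT for mentalistic descriptions = {mean_rt:.2f} s")
```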


2018
Author(s):  
Serena Marchesi ◽  
Davide Ghiglino ◽  
Francesca Ciardo ◽  
Jairo Pérez-Osorio ◽  
Ebru Baykara ◽  
...  

In daily social interactions, we need to be able to navigate efficiently through our social environment. According to Dennett (1971), explaining and predicting others’ behaviour with reference to mental states (adopting the intentional stance) allows efficient social interaction. Today we also routinely interact with artificial agents: from Apple’s Siri to GPS navigation systems. In the near future, we might start casually interacting with robots. This paper addresses the question of whether adopting the intentional stance can also occur with respect to artificial agents. We propose a new tool to explore whether people adopt the intentional stance towards an artificial agent (a humanoid robot). The tool consists of a questionnaire that probes participants’ stance by requiring them to judge the likelihood of an explanation (mentalistic vs. mechanistic) of a behaviour of the robot iCub depicted in a naturalistic scenario (a sequence of photographs). The results of the first study conducted with this questionnaire showed that although the explanations were somewhat biased towards the mechanistic stance, a substantial number of mentalistic explanations were also given. This suggests that it is possible to induce adoption of the intentional stance towards artificial agents, at least in some contexts.


2019
Author(s):  
Jairo Pérez-Osorio ◽  
Agnieszka Wykowska

In our daily lives, we need to predict and understand others’ behaviour in order to navigate through our social environment. Predictions concerning other humans’ behaviour usually refer to their mental states, such as beliefs or intentions. Such a predictive strategy is called adoption of the intentional stance. In this paper, we review literature related to the concept of the intentional stance from the perspectives of philosophy, psychology, human development, culture, and human-robot interaction. We propose that adopting the intentional stance might be a central factor in facilitating social attunement with artificial agents. The paper first reviews the theoretical considerations regarding the intentional stance and examines literature related to the development of the intentional stance across the life span. Subsequently, it discusses cultural norms as grounded in the intentional stance and, finally, it focuses on the issue of adopting the intentional stance towards artificial agents, such as humanoid robots. At the dawn of the artificial intelligence era, the question of how (and when) we predict and explain robots’ behaviour by referring to mental states is of high interest. The paper concludes with a discussion of the ethical consequences of robots towards which we adopt the intentional stance, and sketches future directions for research on this topic.


2019
Author(s):  
Jairo Pérez-Osorio ◽  
Serena Marchesi ◽  
Davide Ghiglino ◽  
Melis Ince ◽  
Agnieszka Wykowska

Expectations about others’ behavior based on mental states modulate the way we interact with people. On the brink of the introduction of robots into our social environment, the question of whether humans would use the same strategy when interacting with artificial agents gains relevance. Recent research shows that people can adopt the mentalistic stance to explain the behavior of humanoid robots [1]. Adopting such a strategy might be mediated, among other factors, by the expectations that people have about robots and technology. The present study aims to create a questionnaire to evaluate such expectations and to test whether these priors in fact modulate the adoption of the intentional stance. We found that people’s expectations directed at a particular robot platform influence the adoption of mental-state-based explanations of an artificial agent’s behavior. Lower expectations were associated with anxiety during interaction with robots and with neuroticism, whereas higher expectations were linked to less discomfort when interacting with robots and to a higher degree of openness. Our findings suggest that platform-directed expectations might also play a crucial role in human-robot interaction (HRI) and in the adoption of the intentional stance toward artificial agents.
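As a rough illustration of how platform-directed expectations could be related to individual-difference measures such as robot-related anxiety and neuroticism, here is a minimal sketch using simulated questionnaire scores and simple Pearson correlations; all names, scales, and the statistical choice are hypothetical rather than the authors' procedure.

```python
# Minimal sketch: correlating expectations about a robot platform with
# robot-related anxiety and neuroticism. All data are simulated.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(2)
n_participants = 100

expectations = rng.normal(4.0, 1.0, n_participants)   # expectations about the platform (e.g., 1-7 scale)
robot_anxiety = rng.normal(3.0, 1.0, n_participants)  # anxiety during interaction with robots
neuroticism = rng.normal(3.2, 0.9, n_participants)    # neuroticism score

for name, scores in [("robot anxiety", robot_anxiety), ("neuroticism", neuroticism)]:
    r, p = pearsonr(expectations, scores)
    print(f"Expectations vs {name}: r = {r:+.2f}, p = {p:.3f}")
```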


2019
Author(s):  
Jairo Pérez-Osorio ◽  
Agnieszka Wykowska

In our day-to-day lives, we need to predict and understand others’ behavior in order to navigate efficiently through our social environment. When making predictions about what others are going to do next, we refer to their mental states, such as beliefs or intentions. At the dawn of a new era, in which robots will be among us at home and in the office, one needs to ask whether (or when) we predict and also explain robots’ behavior with reference to mental states. In other words, do we adopt the intentional stance (Dennett in The Intentional Stance. MIT Press, Cambridge (1987) [1]) also towards artificial agents, especially those with a humanoid shape and human-like behavior? What plays a role in adopting the intentional stance towards robots? Does adopting an intentional stance affect our social attunement with artificial agents? In this chapter, we first discuss the general approach that we take towards examining these questions: using objective methods of cognitive neuroscience to test social attunement as a function of adopting the intentional stance. We also describe our newly developed method to examine whether participants adopt the intentional stance towards an artificial agent. The chapter concludes with an outlook on the questions that still need to be addressed, such as the ethical consequences and societal impact of robots with which we attune socially and towards which we adopt the intentional stance.


2021
Author(s):  
Ziggy O'Reilly ◽  
Davide Ghiglino ◽  
Nicolas Spatola ◽  
Agnieszka Wykowska

To enhance collaboration between humans and robots, it might be important to trigger towards humanoid robots social cognitive mechanisms similar to those triggered towards humans, such as the adoption of the intentional stance (i.e., explaining an agent’s behavior with reference to mental states). This study aimed (1) to measure whether a film modulates participants’ tendency to adopt the intentional stance toward a humanoid robot and (2) to investigate whether autistic traits affect this modulation. We administered two subscales of the InStance Test (IST) (i.e., the ‘isolated robot’ subscale and the ‘social robot’ subscale) before and after participants watched a film depicting an interaction between a humanoid robot and a human. On the isolated robot subscale, individuals with low autistic traits were more likely to adopt the intentional stance towards a humanoid robot after they watched the film, but there was no such effect for individuals with high autistic traits. On the social robot subscale (i.e., when the robot is interacting with a human), individuals with both low and high autistic traits decreased in their adoption of the intentional stance after watching the film. This suggests that the content of the narrative and an individual’s social cognitive abilities affect the degree to which the intentional stance towards a humanoid robot is adopted.
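To illustrate the kind of pre/post comparison described above (IST subscale scores before and after the film, split by autistic-trait group), here is a minimal sketch with simulated scores and paired t-tests; the group split, effect sizes, and test choice are illustrative assumptions rather than the authors' analysis.

```python
# Minimal sketch: pre/post comparison of a simulated 'isolated robot' IST
# subscale score for two groups. All data are simulated and hypothetical.
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(3)
n_per_group = 30

# Scores 0-100, higher = more intentional-stance explanations.
low_at_pre = rng.normal(45, 10, n_per_group)
low_at_post = low_at_pre + rng.normal(5, 8, n_per_group)    # hypothetical increase after the film
high_at_pre = rng.normal(45, 10, n_per_group)
high_at_post = high_at_pre + rng.normal(0, 8, n_per_group)  # hypothetical null effect

for group, pre, post in [("low autistic traits", low_at_pre, low_at_post),
                         ("high autistic traits", high_at_pre, high_at_post)]:
    t, p = ttest_rel(post, pre)
    print(f"{group}: mean change = {np.mean(post - pre):+.1f}, t = {t:.2f}, p = {p:.3f}")
```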


2021
Author(s):  
Serena Marchesi ◽  
Davide De Tommaso ◽  
Jairo Pérez-Osorio ◽  
Agnieszka Wykowska

Humans interpret and predict others’ behaviors by ascribing intentions or beliefs to them or, in other words, by adopting the intentional stance. Since artificial agents are increasingly populating our daily environments, the question arises of whether (and under which conditions) humans would apply the “human model” to understand the behaviors of these new social agents. Thus, in a series of three experiments we tested whether embedding humans in a social interaction with a humanoid robot displaying either human-like or machine-like behavior would modulate their initial bias towards adopting the intentional stance. Results showed that humans are indeed more prone to adopt the intentional stance after having interacted with a more socially available and human-like robot, while no modulation of the adoption of the intentional stance emerged after interaction with a mechanistic robot. We conclude that short experiences with humanoid robots that presumably induce a “like-me” impression and social bonding increase the likelihood of adopting the intentional stance.


2019
Author(s):  
Elef Schellen ◽  
Agnieszka Wykowska

Natural and effective interaction with humanoid robots should involve social cognitive mechanisms of the human brain that normally facilitate social interaction between humans. Recent research has indicated that the presence and efficiency of these mechanisms in human-robot interaction (HRI) might be contingent on the adoption of a set of attitudes, mindsets, and beliefs concerning the robot’s inner machinery. Current research is investigating the factors that influence these mindsets and how they affect HRI. This review focuses on a specific mindset, namely the “intentional mindset”, in which intentionality is attributed to another agent. More specifically, we focus on the concept of adopting the intentional stance towards robots, i.e., the tendency to predict and explain robots’ behavior with reference to mental states. We discuss the relationship between the adoption of the intentional stance and lower-level mechanisms of social cognition, and we provide a critical evaluation of the research methods currently employed in this field, highlighting common pitfalls in the measurement of attitudes and mindsets.


2020
Author(s):  
Davide Ghiglino ◽  
Davide De Tommaso ◽  
Cesco Willemse ◽  
Serena Marchesi ◽  
Agnieszka Wykowska

Designing artificial agents that closely imitate human behavior might lead humans to perceive them as intentional agents. Nonetheless, the factors that are crucial for an artificial agent to be perceived as an animated and anthropomorphic being still need to be addressed. In the current study, we investigated some of the factors that might affect the perception of a robot's behavior as human-like or intentional. To this aim, seventy-nine participants were exposed to two different behaviors of a humanoid robot under two different instructions. Before the experiment, participants' biases towards robotics as well as their personality traits were assessed. Our results suggest that participants’ sensitivity to human-likeness relies more on their expectations than on perceptual cues.

