Human vs Humanoid. A behavioral investigation of the individual tendency to adopt the intentional stance
Humans interpret and predict the behavior of others with reference to mental states or, in other words, by adopting the intentional stance. How to measure the likelihood of adopting the intentional stance towards humanoid robots still remains to be addressed. The present study investigated to what extent individuals adopt the intentional stance in explaining the behavior of two agents (a humanoid robot and a human). Our paradigm required participants to judge whether mentalistic or mechanistic descriptions fit the displayed behaviors. This allowed us to measure their acceptance/rejection rate (as an explicit measure) and their response time (as an implicit measure). In addition, we examined the relationship between adopting the intentional stance and anthropomorphism. Our results show that at the explicit level (acceptance/rejection of the descriptions), participants are more likely to use mentalistic (compared to mechanistic) descriptions to explain other humans’ behavior. Conversely, when it comes to a humanoid robot, they are more likely to choose mechanistic descriptions. Interestingly, at the implicit level (response times), while for the human agent we found faster response times for mentalistic descriptions, we found no difference in response times for the robotic agent. Furthermore, a cluster analysis on individual differences in anthropomorphism revealed that participants with a high tendency to anthropomorphize are faster to accept mentalistic descriptions. In light of these results, we argue that, at the implicit level, both stances are comparable as “the best fit” for explaining the behavior of a humanoid robot. Moreover, we argue that the decision about which stance is best to adopt towards a humanoid robot is influenced by individual differences among observers, such as the tendency to anthropomorphize non-human agents.