Adopting the intentional stance towards humanoid robots
In everyday life, humans need to predict and understand others’ behavior in order to navigate efficiently through their social environment. When making predictions about what others are going to do next, we refer to their mental states, such as beliefs or intentions. At the dawn of a new era, in which robots will be among us in our homes and offices, one needs to ask whether (or when) we predict and also explain robots’ behavior with reference to mental states. In other words, do we adopt the intentional stance (Dennett in The Intentional Stance. MIT Press, Cambridge (1987) [1]) also towards artificial agents—especially those with humanoid shape and human-like behavior? What plays a role in adopting the intentional stance towards robots? Does adopting the intentional stance affect our social attunement with artificial agents? In this chapter, we first discuss the general approach that we take towards examining these questions: using objective methods of cognitive neuroscience to test social attunement as a function of adopting the intentional stance. We also describe our newly developed method for examining whether participants adopt the intentional stance towards an artificial agent. The chapter concludes with an outlook on the questions that still need to be addressed, such as the ethical consequences and societal impact of robots with which we socially attune, and towards which we adopt the intentional stance.