Adopting the intentional stance toward natural and artificial agents

2019 ◽  
Author(s):  
Jairo Pérez-Osorio ◽  
Agnieszka Wykowska

In our daily lives, we need to predict and understand others’ behaviour in order to navigate through our social environment. Predictions concerning other humans’ behaviour usually refer to their mental states, such as beliefs or intentions. Such a predictive strategy is called adoption of the intentional stance. In this paper, we review literature related to the concept of intentional stance from the perspectives of philosophy, psychology, human development, culture and human-robot interaction. We propose that adopting the intentional stance might be a central factor in facilitating social attunement with artificial agents. The paper first reviews theoretical considerations regarding the intentional stance, and then examines literature on the development of the intentional stance across the lifespan. Subsequently, it discusses cultural norms as grounded in the intentional stance and, finally, it focuses on the issue of adopting the intentional stance towards artificial agents, such as humanoid robots. At the dawn of the artificial intelligence era, the question of how (and when) we predict and explain robots’ behaviour by referring to mental states is of high interest. The paper concludes with a discussion of the ethical consequences of robots towards which we adopt the intentional stance, and sketches future directions in research on this topic.

2020 ◽  
Author(s):  
Agnieszka Wykowska ◽  
Jairo Pérez-Osorio ◽  
Stefan Kopp

This booklet is a collection of the position statements accepted for the HRI’20 conference workshop “Social Cognition for HRI: Exploring the relationship between mindreading and social attunement in human-robot interaction” (Wykowska, Perez-Osorio & Kopp, 2020). Unfortunately, due to the rapid unfolding of the novel coronavirus pandemic at the beginning of 2020, the conference, and consequently our workshop, were canceled. In light of these events, we decided to put together the position statements accepted for the workshop. The contributions collected in these pages highlight the role of the attribution of mental states to artificial agents in human-robot interaction, and specifically the quality and presence of the social attunement mechanisms that are known to make human interaction smooth, efficient, and robust. These papers also accentuate the importance of a multidisciplinary approach to advancing the understanding of the factors and consequences of social interactions with artificial agents.


2019 ◽  
Author(s):  
Elef Schellen ◽  
Agnieszka Wykowska

Natural and effective interaction with humanoid robots should involve social cognitive mechanisms of the human brain that normally facilitate social interaction between humans. Recent research has indicated that the presence and efficiency of these mechanisms in human-robot interaction (HRI) might be contingent on the adoption of a set of attitudes, mindsets, and beliefs concerning the robot’s inner machinery. Current research is investigating the factors that influence these mindsets, and how they affect HRI. This review focuses on a specific mindset, namely the “intentional mindset,” in which intentionality is attributed to another agent. More specifically, we focus on the concept of adopting the intentional stance towards robots, i.e., the tendency to predict and explain the robots’ behavior with reference to mental states. We discuss the relationship between the adoption of the intentional stance and lower-level mechanisms of social cognition, and we provide a critical evaluation of research methods currently employed in this field, highlighting common pitfalls in the measurement of attitudes and mindsets.


2019 ◽  
Author(s):  
Jairo Pérez-Osorio ◽  
Serena Marchesi ◽  
Davide Ghiglino ◽  
Melis Ince ◽  
Agnieszka Wykowska

Expectations about others’ behavior based on mental states modulate the way we interact with people. On the brink of the introduction of robots into our social environment, the question of whether humans would use the same strategy when interacting with artificial agents gains relevance. Recent research shows that people can adopt the mentalistic stance to explain the behavior of humanoid robots [1]. Adopting such a strategy might be mediated, among other factors, by the expectations that people have about robots and technology. The present study aims to create a questionnaire to evaluate such expectations and to test whether these priors in fact modulate the adoption of the intentional stance. We found that people’s expectations directed toward a particular robot platform influence the adoption of mental-state-based explanations of an artificial agent’s behavior. Lower expectations were associated with anxiety during interaction with robots and with neuroticism, whereas higher expectations were linked to feeling less discomfort when interacting with robots and to a higher degree of openness. Our findings suggest that platform-directed expectations might also play a crucial role in HRI and in the adoption of the intentional stance toward artificial agents.
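
The statistical pattern reported here boils down to correlating a questionnaire score with trait measures. As a minimal illustration of that kind of analysis (all variables and values below are hypothetical, not the authors’ materials or data), one could compute Pearson correlations between an expectations score and each trait:

```python
import numpy as np
from scipy import stats

# Hypothetical data, one value per participant (not the authors' dataset).
expectations = np.array([3.2, 4.1, 2.5, 4.8, 3.9, 2.1])   # platform-directed expectations
robot_anxiety = np.array([4.0, 2.8, 4.5, 1.9, 2.5, 4.9])  # anxiety toward robots
neuroticism = np.array([3.5, 2.2, 4.1, 1.8, 2.9, 4.4])    # trait neuroticism

# Negative coefficients would mirror the reported pattern: lower expectations
# going together with higher robot-related anxiety and higher neuroticism.
for name, trait in [("anxiety", robot_anxiety), ("neuroticism", neuroticism)]:
    r, p = stats.pearsonr(expectations, trait)
    print(f"expectations vs {name}: r = {r:.2f}, p = {p:.3f}")
```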


2019 ◽  
Author(s):  
Jairo Pérez-Osorio ◽  
Agnieszka Wykowska

In our day-to-day lives, humans need to predict and understand others’ behavior in order to efficiently navigate through our social environment. When making predictions about what others are going to do next, we refer to their mental states, such as beliefs or intentions. At the dawn of a new era, in which robots will be among us at home and in the office, one needs to ask whether (or when) we predict and also explain robots’ behavior with reference to mental states. In other words, do we adopt the intentional stance (Dennett in The Intentional Stance. MIT Press, Cambridge (1987) [1]) also towards artificial agents—especially those with humanoid shape and human-like behavior? What plays a role in adopting the intentional stance towards robots? Does adopting an intentional stance affect our social attunement with artificial agents? In this chapter, we first discuss the general approach that we take towards examining these questions—using objective methods of cognitive neuroscience to test social attunement as a function of adopting the intentional stance. We also describe our newly developed method for examining whether participants adopt the intentional stance towards an artificial agent. The chapter concludes with an outlook on the questions that still need to be addressed, such as the ethical consequences and societal impact of robots with which we attune socially, and towards which we adopt the intentional stance.


2021 ◽  
Vol 8 ◽  
Author(s):  
Serena Marchesi ◽  
Francesco Bossi ◽  
Davide Ghiglino ◽  
Davide De Tommaso ◽  
Agnieszka Wykowska

The presence of artificial agents in our everyday lives is continuously increasing. Hence, the question of how human social cognition mechanisms are activated in interactions with artificial agents, such as humanoid robots, is frequently being asked. One interesting question is whether humans perceive humanoid robots as mere artifacts (interpreting their behavior with reference to their function, thereby adopting the design stance) or as intentional agents (interpreting their behavior with reference to mental states, thereby adopting the intentional stance). Due to their humanlike appearance, humanoid robots might be capable of evoking the intentional stance. On the other hand, the knowledge that humanoid robots are only artifacts should call for adopting the design stance. Thus, observing a humanoid robot might evoke a cognitive conflict between the natural tendency of adopting the intentional stance and the knowledge about the actual nature of robots, which should elicit the design stance. In the present study, we investigated the cognitive conflict hypothesis by measuring participants’ pupil dilation during the completion of the InStance Test (IST). Prior to each pupillary recording, participants were instructed to observe the humanoid robot iCub behaving in two different ways (either machine-like or humanlike behavior). Results showed that pupil dilation and response time patterns were predictive of individual biases in the adoption of the intentional or design stance in the IST. These results may suggest individual differences in mental effort and cognitive flexibility in reading and interpreting the behavior of an artificial agent.
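
The reported result, that pupil dilation and response-time patterns predict stance bias, can be illustrated with a minimal classification sketch on synthetic data. The features, labels, and model below are assumptions for illustration, not the authors’ pipeline:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 40  # synthetic participants

# Assumed per-participant summary features (not the authors' preprocessing):
# mean pupil dilation (arbitrary units) and mean IST response time (seconds).
pupil = rng.normal(0.5, 0.1, n)
rt = rng.normal(2.0, 0.4, n)
X = np.column_stack([pupil, rt])

# Hypothetical label: 1 = intentional-stance bias, 0 = design-stance bias.
score = pupil + 0.2 * rt + rng.normal(0, 0.1, n)
y = (score > np.median(score)).astype(int)

model = LogisticRegression().fit(X, y)
print("in-sample accuracy:", model.score(X, y))
print("coefficients (pupil, RT):", model.coef_[0])
```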


2021 ◽  
Author(s):  
Nicolas Spatola ◽  
Serena Marchesi ◽  
Agnieszka Wykowska

In human-robot interactions, people tend to attribute mental states, such as intentionality, to robots in order to make sense of their behaviour: the intentional stance. These inferences deeply influence how one will consider, engage with, and behave towards robots. However, people differ greatly in their likelihood of adopting this intentional stance. It therefore seems crucial to assess these interindividual differences to better evaluate and understand human-robot interactions. In two studies, we developed and validated the structure of a task aiming at evaluating to what extent people adopt the intentional stance toward robots. The method consists of a task that probes participants’ stance by requiring them to choose the likelihood of an explanation (mentalistic vs. mechanistic) of the behaviour of a robot depicted in a naturalistic scenario. Results showed a reliable psychometric structure of the present task for evaluating the mentalistic bias of participants as a proxy of the intentional stance. We further discuss the importance of considering these interindividual differences in human-robot interaction studies and social robotics.
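
A claim of “reliable psychometric structure” typically rests on internal-consistency estimates such as Cronbach’s alpha. Here is a minimal sketch of that computation, assuming a participants-by-items matrix of responses on a 0 (mechanistic) to 100 (mentalistic) scale; the coding and data are hypothetical, not the authors’ materials:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Internal consistency for a participants x items response matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the sum score
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Synthetic responses: 50 participants x 10 scenario items, each scored on an
# assumed 0 (mechanistic) to 100 (mentalistic) scale driven by a latent bias.
rng = np.random.default_rng(1)
latent_bias = rng.normal(50, 15, size=(50, 1))
responses = np.clip(latent_bias + rng.normal(0, 10, size=(50, 10)), 0, 100)

print(f"alpha = {cronbach_alpha(responses):.2f}")
```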


2018 ◽  
Vol 226 (2) ◽  
pp. 98-109 ◽  
Author(s):  
Antonella Marchetti ◽  
Federico Manzi ◽  
Shoji Itakura ◽  
Davide Massaro

Abstract. This review focuses on some relevant issues concerning the relationship between theory of mind (ToM) and humanoid robots. Humanoid robots are employed in different everyday-life contexts, so it seems relevant to question whether the relationships between human beings and humanoids can be characterized by a mode of interaction typical of the relationships between human beings, that is, the attribution of mental states. Because ToM development continuously undergoes changes from early childhood to late adulthood, we adopted a lifespan perspective. We analyzed contributions from the literature by organizing them around the partition between “mental states and actions” and “human-like features.” Finally, we considered how studying human–robot interaction, within a ToM context, can contribute to our understanding of the intersubjective nature of this interaction.


2018 ◽  
Author(s):  
Serena Marchesi ◽  
Davide Ghiglino ◽  
Francesca Ciardo ◽  
Jairo Pérez-Osorio ◽  
Ebru Baykara ◽  
...  

In daily social interactions, we need to be able to navigate efficiently through our social environment. According to Dennett (1971), explaining and predicting others’ behaviour with reference to mental states (adopting the intentional stance) allows efficient social interaction. Today we also routinely interact with artificial agents: from Apple’s Siri to GPS navigation systems. In the near future, we might start casually interacting with robots. This paper addresses the question of whether adopting the intentional stance can also occur with respect to artificial agents. We propose a new tool to explore whether people adopt the intentional stance towards an artificial agent (a humanoid robot). The tool consists of a questionnaire that probes participants’ stance by requiring them to choose the likelihood of an explanation (mentalistic vs. mechanistic) of a behaviour of the robot iCub depicted in a naturalistic scenario (a sequence of photographs). The results of the first study conducted with this questionnaire showed that although the explanations were somewhat biased towards the mechanistic stance, a substantial number of mentalistic explanations were also given. This suggests that it is possible to induce adoption of the intentional stance towards artificial agents, at least in some contexts.
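
To make the scoring concrete: assuming each scenario is answered on a bipolar scale anchored at a mechanistic and a mentalistic explanation (the 0-100 coding below is an assumption for illustration, not the published instrument’s exact format), a participant’s stance bias can be summarized as the mean rating across scenarios:

```python
import numpy as np

def instance_bias(ratings: np.ndarray) -> float:
    """Mean rating across scenarios; under the assumed 0-100 coding, values
    below 50 indicate a mechanistic lean, above 50 a mentalistic one."""
    return float(np.mean(ratings))

# One hypothetical participant's ratings across photographic scenarios.
ratings = np.array([35, 20, 55, 40, 30, 60, 25, 45, 38, 50, 28, 42])
print(f"bias score = {instance_bias(ratings):.1f}")  # < 50: mechanistic lean
```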


Author(s):  
Giorgio Metta

This chapter outlines a number of research lines that, starting from the observation of nature, attempt to mimic human behavior in humanoid robots. Humanoid robotics is one of the most exciting proving grounds for the development of biologically inspired hardware and software: machines that try to recreate some of the abilities and characteristics of living beings, the products of billions of years of evolution. Humanoids could be especially useful for their ability to “live” in human-populated environments, occupying the same physical space as people and using tools that have been designed for people. Natural human–robot interaction is also an important facet of humanoid research. Finally, learning and adapting from experience, the hallmark of human intelligence, may require some approximation to the human body in order to attain capacities similar to those of humans. This chapter focuses particularly on compliant actuation, soft robotics, biomimetic robot vision, robot touch, and brain-inspired motor control in the context of the iCub humanoid robot.

