The Intentional Stance Test-2: How to Measure the Tendency to Adopt Intentional Stance Towards Robots

2021 ◽  
Vol 8 ◽  
Author(s):  
Nicolas Spatola ◽  
Serena Marchesi ◽  
Agnieszka Wykowska

In human-robot interactions, people tend to attribute to robots mental states such as intentions or desires in order to make sense of their behaviour. This cognitive strategy is termed the “intentional stance”. Adopting the intentional stance influences how one considers, engages with, and behaves towards robots. However, people differ in their likelihood to adopt the intentional stance towards robots. Therefore, it seems crucial to assess these interindividual differences. In two studies, we developed and validated the structure of a task aimed at evaluating to what extent people adopt the intentional stance towards robot actions: the Intentional Stance Task (IST). The IST probes participants’ stance by requiring them to judge the plausibility of a description (mentalistic vs. mechanistic) of the behaviour of a robot depicted in a scenario composed of three photographs. Results showed a reliable psychometric structure of the IST. The paper therefore concludes by proposing the IST as a proxy for assessing the degree to which people adopt the intentional stance towards robots.
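As a purely illustrative sketch (not the authors’ published scoring procedure), one way to derive a participant-level mentalistic-bias score from such a task, assuming each scenario yields a rating on a 0–100 scale where 0 means the mechanistic description is fully plausible and 100 means the mentalistic description is fully plausible:

```python
# Hypothetical scoring sketch for an IST-like task.
# Assumes each participant rates every scenario on a 0-100 slider:
# 0 = mechanistic description fully plausible,
# 100 = mentalistic description fully plausible.

from statistics import mean

def mentalistic_bias(ratings):
    """Average rating across scenarios; values above 50 indicate a
    tendency to prefer mentalistic over mechanistic descriptions."""
    if not ratings:
        raise ValueError("no scenario ratings provided")
    return mean(ratings)

# Example: one (invented) participant's ratings for ten scenarios.
participant = [72, 65, 80, 55, 60, 70, 68, 75, 58, 62]
print(f"Mentalistic bias score: {mentalistic_bias(participant):.1f}")
```

The scale anchoring and the averaging rule here are assumptions made for illustration; the actual IST items and scoring are described in the paper itself.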

2021 ◽  
Author(s):  
Nicolas Spatola ◽  
Serena Marchesi ◽  
Agnieszka Wykowska

In human-robot interactions, people tend to attribute to robots mental states such as intentionality in order to make sense of their behaviour: the intentional stance. These inferences deeply influence how one considers, engages with, and behaves towards robots. However, people differ greatly in their likelihood to adopt this intentional stance. Therefore, it seems crucial to assess these interindividual differences to better evaluate and understand human-robot interactions. In two studies, we developed and validated the structure of a task aimed at evaluating to what extent people adopt the intentional stance toward robots. The method consists of a task that probes participants’ stance by requiring them to choose the likelihood of an explanation (mentalistic vs. mechanistic) of the behaviour of a robot depicted in a naturalistic scenario. Results showed a reliable psychometric structure of the present task for evaluating the mentalistic bias of participants as a proxy of the intentional stance. We further discuss the importance of considering these interindividual differences in human-robot interaction studies and social robotics.


Author(s):  
David Rosenthal

Dennett’s account of consciousness starts from third-person considerations. I argue this is wise, since beginning with first-person access precludes accommodating the third-person access we have to others’ mental states. But Dennett’s first-person operationalism, which seeks to save the first person in third-person, operationalist terms, denies the occurrence of folk-psychological states that one doesn’t believe oneself to be in, and so the occurrence of folk-psychological states that aren’t conscious. This conflicts with Dennett’s intentional-stance approach to the mental, on which we discern others’ mental states independently of those states’ being conscious. We can avoid this conflict with a higher-order theory of consciousness, which saves the spirit of Dennett’s approach, but enables us to distinguish conscious folk-psychological states from nonconscious ones. The intentional stance by itself can’t do this, since it can’t discern a higher-order awareness of a psychological state. But we can supplement the intentional stance with the higher-order theoretical apparatus.


Author(s):  
Nathan Caruana ◽  
Dean Spirou ◽  
Jon Brock

In recent years, with the emergence of relatively inexpensive and accessible virtual reality technologies, it is now possible to deliver compelling and realistic simulations of human-to-human interaction. Neuroimaging studies have shown that, when participants believe they are interacting via a virtual interface with another human agent, they show different patterns of brain activity compared to when they know that their virtual partner is computer-controlled. The suggestion is that users adopt an “intentional stance” by attributing mental states to their virtual partner. However, it remains unclear how beliefs in the agency of a virtual partner influence participants’ behaviour and subjective experience of the interaction. We investigated this issue in the context of a cooperative “joint attention” game in which participants interacted via an eye tracker with a virtual onscreen partner, directing each other’s eye gaze to different screen locations. Half of the participants were correctly informed that their partner was controlled by a computer algorithm (“Computer” condition). The other half were misled into believing that the virtual character was controlled by a second participant in another room (“Human” condition). Those in the “Human” condition were slower to make eye contact with their partner and more likely to try and guide their partner before they had established mutual eye contact than participants in the “Computer” condition. They also responded more rapidly when their partner was guiding them, although the same effect was also found for a control condition in which they responded to an arrow cue. Results confirm the influence of human agency beliefs on behaviour in this virtual social interaction context. They further suggest that researchers and developers attempting to simulate social interactions should consider the impact of agency beliefs on user experience in other social contexts, and their effect on the achievement of the application’s goals.


2021 ◽  
Author(s):  
Ram Isaac Orr ◽  
Michael Gilead

Attribution of mental states to self and others, i.e., mentalizing, is central to human life. Current measures lack the ability to directly gauge the extent to which individuals engage in spontaneous mentalizing. Focusing on natural language use as an expression of inner psychological processes, we developed the Mental-Physical Verb Norms (MPVN). These norms are participant-derived ratings of the extent to which common verbs reflect mental (as opposed to physical) activities and occurrences, covering ~80% of all verbs appearing within a given English text. Content validity was assessed against existing expert-compiled dictionaries of mental states and cognitive processes, as well as against normative ratings of verb concreteness. Criterion validity was assessed through natural text analysis of internet comments relating to mental health vs. physical health. Results showcase the unique contribution of the MPVN ratings as a measure of the degree to which individuals adopt the intentional stance in describing targets, by describing both self and others in mental, as opposed to physical, terms. We discuss potential uses for future research across various psychological and neurocognitive disciplines.
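For illustration only, a minimal sketch of how verb norms of this kind might be applied to score a text; the norm values and the tiny lexicon below are invented for the example and are not the published MPVN entries:

```python
# Hypothetical sketch: scoring text with mental-vs-physical verb norms.
# The ratings below are invented; the real MPVN is a participant-derived
# lexicon covering roughly 80% of verbs appearing in English text.

MPVN_DEMO = {          # 1.0 = strongly mental, 0.0 = strongly physical
    "think": 0.95, "believe": 0.92, "want": 0.88, "decide": 0.80,
    "run": 0.10, "push": 0.08, "grab": 0.12, "say": 0.45,
}

def mentalizing_score(verbs):
    """Mean norm rating of the verbs found in the lexicon;
    higher values suggest more mental-state language."""
    rated = [MPVN_DEMO[v] for v in verbs if v in MPVN_DEMO]
    return sum(rated) / len(rated) if rated else None

# Example: verbs extracted (e.g., by a POS tagger) from two comments.
print(mentalizing_score(["think", "want", "decide"]))  # high: mentalistic framing
print(mentalizing_score(["run", "grab", "push"]))      # low: physical framing
```

Averaging per-verb ratings is one plausible aggregation; the paper’s own criterion-validity analyses may use a different pipeline.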


2019 ◽  
Author(s):  
Jairo Pérez-Osorio ◽  
Agnieszka Wykowska

In our daily lives, we need to predict and understand others’ behaviour in order to navigate through our social environment. Predictions concerning other humans’ behaviour usually refer to their mental states, such as beliefs or intentions. Such a predictive strategy is called adoption of the intentional stance. In this paper, we review literature related to the concept of intentional stance from the perspectives of philosophy, psychology, human development, culture and human-robot interaction. We propose that adopting the intentional stance might be a central factor in facilitating social attunement with artificial agents. The paper first reviews the theoretical considerations regarding the intentional stance, and examines literature related to the development of intentional stance across the life span. Subsequently, it discusses cultural norms as grounded in the intentional stance and finally, it focuses on the issue of adopting the intentional stance towards artificial agents, such as humanoid robots. At the dawn of the artificial intelligence era, the question of how (and when) we predict and explain robots’ behaviour by referring to mental states is of high interest. The paper concludes with the discussion of the ethical consequences of robots towards which we adopt the intentional stance, and sketches future directions in research on this topic.


1994 ◽  
Vol 26 (76-77) ◽  
pp. 205-227
Author(s):  
Salma Saab

This article assesses Dennett’s position with respect to mental states, intermediate between the extremes of physicalism and intentionalism. Dennett concentrates most of his theses concerning our attribution of mental states to others, or to other systems, on what he calls the intentional stance. One of his main claims is that, ontologically speaking, mental states as such do not exist but that nevertheless they have some sort of reality: mental entities are not eliminated outright but are retained as “abstracta”. These “abstracta” play an important role in our explanation of what people do and of the way in which certain systems are designed. Dennett assumes the correctness of Quine’s indeterminacy thesis of translation, which leads Quine to reject the existence of facts of the matter on the one hand and to admit the pragmatic value of certain non-physical explanations on the other. In the article, an attempt is made to clarify Quine’s thesis and the way in which Dennett’s use of the indeterminacy thesis differs from Davidson’s. It is suggested that one can draw certain analogies between Dennett’s proposal and Wittgenstein’s use of the term “seeing as”. The analogy with “seeing as” has the advantage of preserving Dennett’s main claims while eliminating the need for “abstracta”, thus avoiding the discomfort that some philosophers have felt with regard to “abstracta”. The identification of characteristics common to mental discourse and “seeing as” allows the author to make sense of the claim that there are certain aspects of things or of situations, such as patterns, which, while they properly belong to the things or situations themselves, nevertheless depend for their recognition on the skills of the observer.


2019 ◽  
Author(s):  
Elef Schellen ◽  
Agnieszka Wykowska

Natural and effective interaction with humanoid robots should involve social cognitive mechanisms of the human brain that normally facilitate social interaction between humans. Recent research has indicated that the presence and efficiency of these mechanisms in human-robot interaction (HRI) might be contingent on the adoption of a set of attitudes, mindsets and beliefs concerning the robot’s inner machinery. Current research is investigating the factors that influence these mindsets, and how they affect HRI. This review focuses on a specific mindset, namely the “intentional mindset” in which intentionality is attributed to another agent. More specifically, we focus on the concept of adopting the intentional stance towards robots, i.e., the tendency to predict and explain the robots’ behavior with reference to mental states. We discuss the relationship between adoption of intentional stance and lower-level mechanisms of social cognition, and we provide a critical evaluation of research methods currently employed in this field, highlighting common pitfalls in the measurement of attitudes and mindsets.


2015 ◽  
Vol 27 (6) ◽  
pp. 1116-1124 ◽  
Author(s):  
Robert P. Spunt ◽  
Meghan L. Meyer ◽  
Matthew D. Lieberman

Humans readily adopt an intentional stance to other people, comprehending their behavior as guided by unobservable mental states such as belief, desire, and intention. We used fMRI in healthy adults to test the hypothesis that this stance is primed by the default mode of human brain function present when the mind is at rest. We report three findings that support this hypothesis. First, brain regions activated by actively adopting an intentional rather than nonintentional stance to a social stimulus were anatomically similar to those demonstrating default responses to fixation baseline in the same task. Second, moment-to-moment variation in default activity during fixation in the dorsomedial PFC was related to the ease with which participants applied an intentional—but not nonintentional—stance to a social stimulus presented moments later. Finally, individuals who showed stronger dorsomedial PFC activity at baseline in a separate task were generally more efficient when adopting the intentional stance and reported having greater social skills. These results identify a biological basis for the human tendency to adopt the intentional stance. More broadly, they suggest that the brain's default mode of function may have evolved, in part, as a response to life in a social world.


2019 ◽  
Author(s):  
Jairo Pérez-Osorio ◽  
Serena Marchesi ◽  
Davide Ghiglino ◽  
Melis Ince ◽  
Agnieszka Wykowska

Expectations about others’ behavior based on mental states modulate the way we interact with people. On the brink of the introduction of robots into our social environment, the question of whether humans would use the same strategy when interacting with artificial agents gains relevance. Recent research shows that people can adopt the mentalistic stance to explain the behavior of humanoid robots [1]. Adopting such a strategy might be mediated by, among other factors, the expectations that people have about robots and technology. The present study aims to create a questionnaire to evaluate such expectations and to test whether these priors in fact modulate the adoption of the intentional stance. We found that people’s expectations directed at a particular robot platform influence the adoption of mental-state-based explanations of an artificial agent’s behavior. Lower expectations were associated with anxiety during interaction with robots and with neuroticism, whereas higher expectations were linked to less discomfort when interacting with robots and a higher degree of openness. Our findings suggest that platform-directed expectations might also play a crucial role in HRI and in the adoption of the intentional stance toward artificial agents.


2019 ◽  
Author(s):  
Jairo Pérez-Osorio ◽  
Agnieszka Wykowska

In everyday life, humans need to predict and understand others’ behavior in order to navigate efficiently through our social environment. When making predictions about what others are going to do next, we refer to their mental states, such as beliefs or intentions. At the dawn of a new era, in which robots will be among us in homes and offices, one needs to ask whether (or when) we predict and also explain robots’ behavior with reference to mental states. In other words, do we adopt the intentional stance (Dennett in The Intentional Stance. MIT Press, Cambridge (1987) [1]) also towards artificial agents—especially those with humanoid shape and human-like behavior? What plays a role in adopting the intentional stance towards robots? Does adopting an intentional stance affect our social attunement with artificial agents? In this chapter, we first discuss the general approach that we take towards examining these questions—using objective methods of cognitive neuroscience to test social attunement as a function of adopting the intentional stance. We also describe our newly developed method to examine whether participants adopt the intentional stance towards an artificial agent. The chapter concludes with an outlook on the questions that still need to be addressed, such as the ethical consequences and societal impact of robots with which we attune socially and towards which we adopt the intentional stance.

