Human vs Humanoid. A behavioral investigation of the individual tendency to adopt the intentional stance

2020
Author(s):  
Serena Marchesi ◽  
Nicolas Spatola ◽  
Jairo Pérez-Osorio ◽  
Agnieszka Wykowska

Humans interpret and predict the behavior of others with reference to mental states or, in other words, by adopting the intentional stance. How to measure the likelihood of adopting the intentional stance towards humanoid robots still remains to be addressed. The present study investigated to what extent individuals adopt the intentional stance in explaining the behavior of two agents (a humanoid robot and a human). The paradigm required participants to judge whether mentalistic or mechanistic descriptions fit the displayed behaviors, allowing us to measure their acceptance/rejection rate (an explicit measure) and their response time (an implicit measure). In addition, we examined the relationship between adopting the intentional stance and anthropomorphism. Our results show that at the explicit level (acceptance/rejection of the descriptions), participants are more likely to use mentalistic (compared to mechanistic) descriptions to explain other humans’ behavior. Conversely, when it comes to a humanoid robot, they are more likely to choose mechanistic descriptions. Interestingly, at the implicit level (response times), we found faster response times for mentalistic descriptions of the human agent, but no difference in response times for the robotic agent. Furthermore, a cluster analysis of individual differences in anthropomorphism revealed that participants with a high tendency to anthropomorphize are faster to accept mentalistic descriptions. In light of these results, we argue that, at the implicit level, both stances are comparable in terms of “the best fit” to explain the behavior of a humanoid robot. Moreover, we argue that the decision about which stance to adopt towards a humanoid robot is influenced by individual differences among observers, such as the tendency to anthropomorphize non-human agents.
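
The abstract does not specify the clustering procedure, so the following is only a minimal sketch of a cluster-then-compare analysis of this kind: k-means on a single anthropomorphism score, followed by a between-group comparison of response times to accept mentalistic descriptions. The file and column names are hypothetical.

```python
# Hypothetical sketch: cluster participants by anthropomorphism score,
# then compare RTs to accept mentalistic descriptions between clusters.
import pandas as pd
from scipy import stats
from sklearn.cluster import KMeans

df = pd.read_csv("participants.csv")  # assumed layout: one row per
# participant with anthropomorphism_score and rt_mentalistic_accept (ms)

# Split participants into two clusters on the anthropomorphism score.
km = KMeans(n_clusters=2, n_init=10, random_state=0)
df["cluster"] = km.fit_predict(df[["anthropomorphism_score"]])

# Label the cluster with the higher mean score as the "high" group.
high = df.groupby("cluster")["anthropomorphism_score"].mean().idxmax()
df["group"] = df["cluster"].map(lambda c: "high" if c == high else "low")

# Compare acceptance RTs for mentalistic descriptions between groups.
rt_high = df.loc[df["group"] == "high", "rt_mentalistic_accept"]
rt_low = df.loc[df["group"] == "low", "rt_mentalistic_accept"]
t, p = stats.ttest_ind(rt_high, rt_low)
print(f"high vs. low anthropomorphizers: t = {t:.2f}, p = {p:.3f}")
```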

2021, Vol. 8
Author(s):  
Serena Marchesi ◽  
Francesco Bossi ◽  
Davide Ghiglino ◽  
Davide De Tommaso ◽  
Agnieszka Wykowska

The presence of artificial agents in our everyday lives is continuously increasing. Hence, the question of how human social cognition mechanisms are activated in interactions with artificial agents, such as humanoid robots, is frequently being asked. One interesting question is whether humans perceive humanoid robots as mere artifacts (interpreting their behavior with reference to their function, thereby adopting the design stance) or as intentional agents (interpreting their behavior with reference to mental states, thereby adopting the intentional stance). Due to their humanlike appearance, humanoid robots might be capable of evoking the intentional stance. On the other hand, the knowledge that humanoid robots are only artifacts should call for adopting the design stance. Thus, observing a humanoid robot might evoke a cognitive conflict between the natural tendency to adopt the intentional stance and the knowledge about the actual nature of robots, which should elicit the design stance. In the present study, we investigated the cognitive conflict hypothesis by measuring participants’ pupil dilation during completion of the InStance Test (IST). Prior to each pupillary recording, participants were instructed to observe the humanoid robot iCub behaving in two different ways (either machine-like or humanlike behavior). Results showed that pupil dilation and response time patterns were predictive of individual biases in the adoption of the intentional or design stance in the IST. These results may suggest individual differences in mental effort and cognitive flexibility in reading and interpreting the behavior of an artificial agent.
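
As a rough illustration of how pupillometry could predict stance choices, here is a minimal sketch assuming a trial-level table with hypothetical column names; baseline subtraction and logistic regression are common choices in such analyses, not necessarily the authors' actual pipeline.

```python
# Hypothetical sketch: predict the stance chosen on each IST trial
# (intentional vs. design) from pupil dilation and response time.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

df = pd.read_csv("pupil_trials.csv")  # assumed: one row per trial

# Baseline-correct: subtract mean pupil size in a pre-stimulus window.
df["pupil_dilation"] = df["pupil_mean"] - df["pupil_baseline"]

X = df[["pupil_dilation", "response_time"]].to_numpy()
y = (df["ist_choice"] == "intentional").astype(int)  # 1 = intentional

model = LogisticRegression()
acc = cross_val_score(model, X, y, cv=5, scoring="accuracy")
print(f"cross-validated accuracy: {acc.mean():.2f}")
```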



2021
Author(s):  
Ziggy O'Reilly ◽  
Davide Ghiglino ◽  
Nicolas Spatola ◽  
Agnieszka Wykowska

To enhance collaboration between humans and robots, it might be important to trigger, towards humanoid robots, social cognitive mechanisms similar to those triggered towards humans, such as the adoption of the intentional stance (i.e., explaining an agent’s behavior with reference to mental states). This study aimed (1) to measure whether a film modulates participants’ tendency to adopt the intentional stance toward a humanoid robot and (2) to investigate whether autistic traits affect this adoption. We administered two subscales of the InStance Test (IST) (i.e., the ‘isolated robot’ subscale and the ‘social robot’ subscale) before and after participants watched a film depicting an interaction between a humanoid robot and a human. On the isolated robot subscale, individuals with low autistic traits were more likely to adopt the intentional stance towards a humanoid robot after they watched the film, but there was no effect for individuals with high autistic traits. On the social robot subscale (i.e., when the robot is interacting with a human), individuals with both low and high autistic traits decreased their adoption of the intentional stance after they watched the film. This suggests that the content of the narrative and an individual’s social cognitive abilities affect the degree to which the intentional stance towards a humanoid robot is adopted.
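
A pre/post design with a between-subjects grouping like this is often analyzed via change scores. The sketch below assumes a long-format table and a median split on an autistic-traits score (e.g., AQ); both the layout and the split are our assumptions, not the authors' reported procedure.

```python
# Hypothetical sketch: pre-to-post change on each IST subscale,
# split by low vs. high autistic traits.
import pandas as pd
from scipy import stats

df = pd.read_csv("ist_film_study.csv")  # assumed columns: participant,
# aq_score, subscale, session ("pre"/"post"), ist_score

wide = df.pivot_table(index=["participant", "aq_score", "subscale"],
                      columns="session", values="ist_score").reset_index()
wide["change"] = wide["post"] - wide["pre"]
wide["aq_group"] = (wide["aq_score"] > wide["aq_score"].median()).map(
    {True: "high", False: "low"})

# Test whether the pre-to-post change differs from zero in each cell.
for (sub, grp), g in wide.groupby(["subscale", "aq_group"]):
    t, p = stats.ttest_1samp(g["change"], 0.0)
    print(f"{sub} / {grp} AQ: mean change = {g['change'].mean():.2f}, "
          f"p = {p:.3f}")
```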


2018
Author(s):  
Serena Marchesi ◽  
Davide Ghiglino ◽  
Francesca Ciardo ◽  
Jairo Pérez-Osorio ◽  
Ebru Baykara ◽  
...  

In daily social interactions, we need to be able to navigate efficiently through our social environment. According to Dennett (1971), explaining and predicting others’ behaviour with reference to mental states (adopting the intentional stance) allows efficient social interaction. Today we also routinely interact with artificial agents: from Apple’s Siri to GPS navigation systems. In the near future, we might start casually interacting with robots. This paper addresses the question of whether adopting the intentional stance can also occur with respect to artificial agents. We propose a new tool to explore whether people adopt the intentional stance towards an artificial agent (a humanoid robot). The tool consists of a questionnaire that probes participants’ stance by requiring them to rate the likelihood of two explanations (mentalistic vs. mechanistic) of a behaviour of the robot iCub depicted in a naturalistic scenario (a sequence of photographs). The results of the first study conducted with this questionnaire showed that although the explanations were somewhat biased towards the mechanistic stance, a substantial number of mentalistic explanations were also given. This suggests that it is possible to induce adoption of the intentional stance towards artificial agents, at least in some contexts.
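
For illustration only, here is a simplified scoring sketch for a questionnaire of this kind, assuming each item yields a score from 0 (fully mechanistic) to 100 (fully mentalistic) and treating the 50-point midpoint as the bias cutoff; both assumptions are ours, not the published scoring rules.

```python
# Hypothetical sketch: per-participant stance bias from item scores.
import pandas as pd

responses = pd.read_csv("ist_responses.csv")  # assumed columns:
# participant, item, score (0 = mechanistic ... 100 = mentalistic)

per_participant = responses.groupby("participant")["score"].mean()
bias = per_participant.apply(
    lambda s: "mentalistic" if s > 50 else "mechanistic")

print(bias.value_counts())  # how many participants lean each way
print(f"sample mean = {per_participant.mean():.1f} (50 = no bias)")
```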


2019
Author(s):  
Jairo Pérez-Osorio ◽  
Agnieszka Wykowska

In our daily lives, we need to predict and understand others’ behaviour in order to navigate through our social environment. Predictions concerning other humans’ behaviour usually refer to their mental states, such as beliefs or intentions. Such a predictive strategy is called adoption of the intentional stance. In this paper, we review literature related to the concept of the intentional stance from the perspectives of philosophy, psychology, human development, culture, and human-robot interaction. We propose that adopting the intentional stance might be a central factor in facilitating social attunement with artificial agents. The paper first reviews theoretical considerations regarding the intentional stance and then examines literature on the development of the intentional stance across the life span. Subsequently, it discusses cultural norms as grounded in the intentional stance and, finally, focuses on the issue of adopting the intentional stance towards artificial agents, such as humanoid robots. At the dawn of the artificial intelligence era, the question of how (and when) we predict and explain robots’ behaviour by referring to mental states is of high interest. The paper concludes with a discussion of the ethical consequences of adopting the intentional stance towards robots, and sketches future directions for research on this topic.


2019
Author(s):  
Elef Schellen ◽  
Agnieszka Wykowska

Natural and effective interaction with humanoid robots should involve the social cognitive mechanisms of the human brain that normally facilitate social interaction between humans. Recent research has indicated that the presence and efficiency of these mechanisms in human-robot interaction (HRI) might be contingent on the adoption of a set of attitudes, mindsets, and beliefs concerning the robot’s inner machinery. Current research is investigating the factors that influence these mindsets, and how they affect HRI. This review focuses on a specific mindset, namely the “intentional mindset,” in which intentionality is attributed to another agent. More specifically, we focus on the concept of adopting the intentional stance towards robots, i.e., the tendency to predict and explain the robots’ behavior with reference to mental states. We discuss the relationship between the adoption of the intentional stance and lower-level mechanisms of social cognition, and we provide a critical evaluation of the research methods currently employed in this field, highlighting common pitfalls in the measurement of attitudes and mindsets.


Author(s):  
Patrick Gravell

Emergency Medical Services (EMS) response times to motor vehicle crashes (MVCs) have been studied to determine whether reducing the individual components of EMS response time (notification, arrival at the crash scene, and hospital arrival) may affect survival rates. It has been proposed that reducing EMS notification and crash-scene arrival times to 1 and 15 minutes, respectively, would result in 1.84% and 5.2% fewer fatalities. The aim of this study was to analyze the changes in EMS response times (notification, arrival at the crash scene, and hospital arrival) over the past three decades, both individually and overall. An important change over this period is the increased use of cellular phones. We therefore hypothesized that EMS notification time would have decreased over the timeframe, yielding an overall decrease in EMS response time. Our data are based on the Fatal Accident Reporting System (FARS), using the variables Time of Crash, EMS Notification Time, EMS Arrival Time, and EMS Hospital Arrival Time. This gives a total of 248,981 valid cases after applying our inclusion criteria and truncating the dataset at the 99th percentile to eliminate unexplainable outliers. We computed the individual and overall median EMS response times for each year from 1987 to 2015. Additionally, we analyzed the response times by four separate crash factors: weather, total vehicles involved, time of day, and state population density. From 1987 to 2015 the individual EMS response times changed: while notification time decreased, times to arrival at both the crash scene and the hospital steadily increased, resulting in an overall increase in total EMS response time.
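
The yearly-median computation described above is straightforward to reproduce in outline. The sketch below assumes a flat FARS-derived extract with hypothetical column names, with each response-time component already expressed in minutes.

```python
# Sketch: truncate each component at its 99th percentile, then compute
# yearly medians of the components and of the total response time.
import pandas as pd

df = pd.read_csv("fars_ems.csv")  # assumed columns: year,
# notification_min, scene_arrival_min, hospital_arrival_min

components = ["notification_min", "scene_arrival_min",
              "hospital_arrival_min"]

# Drop extreme, unexplainable outliers beyond the 99th percentile.
for col in components:
    df = df[df[col] <= df[col].quantile(0.99)]

df["total_min"] = df[components].sum(axis=1)

medians = df.groupby("year")[components + ["total_min"]].median()
print(medians.loc[[1987, 2015]])  # compare the endpoints of the series
```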


Author(s):  
Jeakweon Han ◽  
Dennis Hong

Besides the difficulties in control and gait generation, designing a full-sized (taller than 1.3 m) bipedal humanoid robot that can walk on two legs is a very challenging task, mainly due to the large torque requirements at the joints combined with the need for the actuators to be small and lightweight. Most of the handful of successful humanoid robots in this size class that exist today utilize harmonic drives for gear reduction to gain high torque in a compact package. However, this makes the cost of such a robot too high, putting it out of reach of most of those who want to use it for general research, education, and outreach activities. Besides the cost, the heavy weight of the robot also causes difficulties in handling and raises safety concerns. In this paper we present the design of a new class of full-sized bipedal humanoid robots that is lightweight and low cost. This is achieved by utilizing spring-assisted parallel four-bar linkages with synchronized actuation in the lower body to reduce the torque requirements of the individual actuators, which also enables the use of off-the-shelf components to further reduce the cost significantly. The resulting savings in weight not only make the operation of the robot safer, but also allow it to forgo expensive force/torque sensors at the ankles and achieve stable bipedal walking using only feedback from the IMU (Inertial Measurement Unit). CHARLI-L (Cognitive Humanoid Autonomous Robot with Learning Intelligence - Lightweight) was developed using this approach and successfully demonstrated untethered bipedal locomotion using ZMP (Zero Moment Point) based control, stable omnidirectional gaits, and autonomous task execution using vision-based localization.
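
The abstract does not detail the controller, but the ZMP criterion it relies on is standard: under the cart-table approximation, a point mass at constant height z_com gives a zero moment point at x_zmp = x_com - (z_com / g) * a_com, which must stay inside the support polygon for stable walking. A minimal sketch of that relation:

```python
# Cart-table ZMP approximation (the textbook formulation, not code
# from the CHARLI-L controller). All quantities in SI units.
G = 9.81  # gravitational acceleration, m/s^2

def zmp_x(x_com: float, z_com: float, a_com: float) -> float:
    """ZMP x-coordinate for a point mass at constant height z_com."""
    return x_com - (z_com / G) * a_com

# Example: CoM 5 cm ahead of the ankle at 0.8 m height, decelerating
# at 0.3 m/s^2; the ZMP lands ~7.4 cm ahead of the ankle.
print(zmp_x(x_com=0.05, z_com=0.8, a_com=-0.3))
```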


2019
Author(s):  
Jairo Pérez-Osorio ◽  
Serena Marchesi ◽  
Davide Ghiglino ◽  
Melis Ince ◽  
Agnieszka Wykowska

Expectations about others’ behavior based on mental states modulate the way we interact with people. On the brink of the introduction of robots into our social environment, the question of whether humans would use the same strategy when interacting with artificial agents gains relevance. Recent research shows that people can adopt mentalistic explanations for the behavior of humanoid robots [1]. Adopting such a strategy might be mediated by, among other factors, the expectations that people have about robots and technology. The present study aims to create a questionnaire to evaluate such expectations and to test whether these priors in fact modulate the adoption of the intentional stance. We found that people’s expectations directed toward a particular robot platform influence the adoption of mental-state-based explanations of an artificial agent’s behavior. Lower expectations were associated with anxiety during interaction with robots and with neuroticism, whereas higher expectations were linked to feeling less discomfort when interacting with robots and to a higher degree of openness. Our findings suggest that platform-directed expectations might also play a crucial role in HRI and in the adoption of the intentional stance toward artificial agents.
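
The trait associations reported above could be probed with simple correlations between an expectations score and personality/attitude scales. A minimal sketch, with hypothetical scale and column names:

```python
# Hypothetical sketch: correlate platform-directed expectations with
# robot anxiety, neuroticism, openness, and discomfort scores.
import pandas as pd
from scipy import stats

df = pd.read_csv("expectations_survey.csv")  # assumed: one row per person

for trait in ["robot_anxiety", "neuroticism", "openness", "discomfort"]:
    r, p = stats.pearsonr(df["expectations"], df[trait])
    print(f"expectations vs. {trait}: r = {r:.2f}, p = {p:.3f}")
```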

