Don’t overthink: fast decision making combined with behavior variability perceived as more human-like

2020 ◽  
Author(s):  
Serena Marchesi ◽  
Jairo Pérez-Osorio ◽  
Davide De Tommaso ◽  
Agnieszka Wykowska

Understanding the human cognitive processes involved in the interaction with artificial agents is crucial for designing socially capable robots. During social interactions, humans tend to explain and predict others’ behavior by adopting the intentional stance, that is, by assuming that mental states drive behavior. However, the question of whether humans would adopt the same strategy with artificial agents remains unanswered. The present study aimed to identify whether the type of behavior exhibited by the robot has an impact on the attribution of mentalistic explanations of behavior. We administered the InStance Questionnaire (ISQ) before and after observation of two types of behavior (decisive or hesitant). We found that decisive behavior, with rare and unexpected “hesitant” behaviors, led to more mentalistic attributions than behavior that was primarily hesitant. The findings suggest that higher expectations regarding the robot’s capabilities and the characteristics of the observed behavior might lead to more mentalistic descriptions.

2018 ◽  
Author(s):  
Serena Marchesi ◽  
Davide Ghiglino ◽  
Francesca Ciardo ◽  
Jairo Pérez-Osorio ◽  
Ebru Baykara ◽  
...  

In daily social interactions, we need to be able to navigate efficiently through our social environment. According to Dennett (1971), explaining and predicting others’ behaviour with reference to mental states (adopting the intentional stance) allows efficient social interaction. Today we also routinely interact with artificial agents: from Apple’s Siri to GPS navigation systems. In the near future, we might start casually interacting with robots. This paper addresses the question of whether adopting the intentional stance can also occur with respect to artificial agents. We propose a new tool to explore whether people adopt the intentional stance towards an artificial agent (humanoid robot). The tool consists of a questionnaire that probes participants’ stance by requiring them to rate the likelihood of an explanation (mentalistic vs. mechanistic) of a behaviour of the robot iCub depicted in a naturalistic scenario (a sequence of photographs). The results of the first study conducted with this questionnaire showed that although the explanations were somewhat biased towards the mechanistic stance, a substantial number of mentalistic explanations were also given. This suggests that it is possible to induce adoption of the intentional stance towards artificial agents, at least in some contexts.
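
To illustrate how responses to such a questionnaire could be scored, below is a minimal sketch assuming each scenario is rated on a 0–100 slider between a fully mechanistic (0) and a fully mentalistic (100) explanation; the scale range, cutoff, and example ratings are illustrative assumptions, not the original instrument’s scoring procedure.

```python
# Minimal sketch of scoring slider-style stance ratings. Assumptions (not from
# the paper): each scenario is rated 0-100, 0 = fully mechanistic explanation,
# 100 = fully mentalistic explanation; 50 is used as an arbitrary cutoff.
import numpy as np

def stance_score(ratings):
    """Return the mean rating and a coarse stance label for one participant."""
    ratings = np.asarray(ratings, dtype=float)
    mean_rating = ratings.mean()
    label = "mentalistic" if mean_rating > 50 else "mechanistic"
    return mean_rating, label

# Hypothetical ratings for one participant across 13 photographic scenarios
example_ratings = [35, 60, 20, 45, 55, 30, 40, 70, 25, 50, 38, 42, 48]
print(stance_score(example_ratings))  # -> (42.9..., 'mechanistic')
```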


Author(s):  
Nathan Caruana ◽  
Dean Spirou ◽  
Jon Brock

In recent years, with the emergence of relatively inexpensive and accessible virtual reality technologies, it is now possible to deliver compelling and realistic simulations of human-to-human interaction. Neuroimaging studies have shown that, when participants believe they are interacting via a virtual interface with another human agent, they show different patterns of brain activity compared to when they know that their virtual partner is computer-controlled. The suggestion is that users adopt an “intentional stance” by attributing mental states to their virtual partner. However, it remains unclear how beliefs in the agency of a virtual partner influence participants’ behaviour and subjective experience of the interaction. We investigated this issue in the context of a cooperative “joint attention” game in which participants interacted via an eye tracker with a virtual onscreen partner, directing each other’s eye gaze to different screen locations. Half of the participants were correctly informed that their partner was controlled by a computer algorithm (“Computer” condition). The other half were misled into believing that the virtual character was controlled by a second participant in another room (“Human” condition). Those in the “Human” condition were slower to make eye contact with their partner and more likely to try to guide their partner before they had established mutual eye contact than participants in the “Computer” condition. They also responded more rapidly when their partner was guiding them, although the same effect was also found for a control condition in which they responded to an arrow cue. These results confirm the influence of human agency beliefs on behaviour in this virtual social interaction context. They further suggest that researchers and developers attempting to simulate social interactions should consider the impact of agency beliefs on user experience in other social contexts, and their effect on the achievement of the application’s goals.


2017 ◽  
Author(s):  
Erdem Pulcu ◽  
Masahiko Haruno

Interacting with others to decide how finite resources should be allocated between parties that may have competing interests is an important part of social life. Considering that our proposals to others are not always accepted, the outcomes of such social interactions are, by their nature, probabilistic and risky. Here, we highlight cognitive processes related to value computations in human social interactions, based on mathematical modelling of proposer behavior in the Ultimatum Game. Our results suggest that the perception of risk is an overarching process across non-social and social decision-making, whereas nonlinear weighting of others’ acceptance probabilities is unique to social interactions in which others’ valuation processes need to be inferred. Despite the complexity of social decision-making, human participants make near-optimal decisions by dynamically adjusting their decision parameters to the changing social value orientation of their opponents, influenced by the multidimensional inferences they make about those opponents (e.g. how prosocial they think their opponent is relative to themselves).
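
As a concrete illustration of the kind of value computation described here, the sketch below combines an assumed logistic model of the responder’s acceptance probability with a Prelec-style nonlinear probability weighting; the functional forms and parameter values are assumptions for illustration and do not reproduce the authors’ fitted model.

```python
# Illustrative proposer-side value computation in the Ultimatum Game.
# Assumptions (not from the paper): logistic acceptance model, Prelec
# probability weighting, and a risk-neutral value of the amount kept.
import numpy as np

def p_accept(offer, total=20.0, steepness=15.0, midpoint=0.3):
    """Assumed probability that the responder accepts a given offer."""
    share = offer / total
    return 1.0 / (1.0 + np.exp(-steepness * (share - midpoint)))

def prelec_weight(p, gamma=0.7):
    """Nonlinear probability weighting: w(p) = exp(-(-ln p)^gamma)."""
    return np.exp(-(-np.log(p)) ** gamma)

def subjective_value(offer, total=20.0):
    """Weighted chance of acceptance times the amount the proposer keeps."""
    return prelec_weight(p_accept(offer, total)) * (total - offer)

offers = np.arange(1.0, 20.0)
values = [subjective_value(o) for o in offers]
print("offer maximizing subjective value:", offers[int(np.argmax(values))])
```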


2019 ◽  
Author(s):  
Jairo Pérez-Osorio ◽  
Agnieszka Wykowska

In our daily lives, we need to predict and understand others’ behaviour in order to navigate through our social environment. Predictions concerning other humans’ behaviour usually refer to their mental states, such as beliefs or intentions. Such a predictive strategy is called adoption of the intentional stance. In this paper, we review literature related to the concept of intentional stance from the perspectives of philosophy, psychology, human development, culture and human-robot interaction. We propose that adopting the intentional stance might be a central factor in facilitating social attunement with artificial agents. The paper first reviews the theoretical considerations regarding the intentional stance and examines literature related to the development of the intentional stance across the life span. Subsequently, it discusses cultural norms as grounded in the intentional stance and, finally, focuses on the issue of adopting the intentional stance towards artificial agents, such as humanoid robots. At the dawn of the artificial intelligence era, the question of how (and when) we predict and explain robots’ behaviour by referring to mental states is of high interest. The paper concludes with a discussion of the ethical consequences of robots towards which we adopt the intentional stance, and sketches future directions in research on this topic.


2020 ◽  
Vol 22 ◽  
pp. 01022
Author(s):  
Felix Zakirov ◽  
Arsenty Krasilnikov

During aging, cognitive functions change differently from other functions of the body: unlike most body systems, cognitive processes show no clear pattern of decline. One of the most significant cognitive processes is decision-making, which shapes social interactions, economic relationships, and risky behavior. Among the factors that influence the decision-making process, individual lifelong experience is considered an important one. Older adults obviously have more life experience than younger groups; however, they often do not tend towards rational choices and beneficial strategies. It is therefore important to assess how aging processes in the brain affect the search for the most beneficial option during decision-making. On the basis of current studies of risky behavior, fairness judgements, financial games, and modern neuroimaging data, this review surveys and discusses age-related differences in decision-making. Thus, an accurate cognitive profile of older adults in the decision-making context can be determined.


2021 ◽  
Author(s):  
A. Myznikov ◽  
M. Zheltyakova ◽  
A. Korotkov ◽  
M. Kireev ◽  
R. Masharipov ◽  
...  

Social interactions are a crucial aspect of human behaviour. Numerous neurophysiological studies have focused on socio-cognitive processes associated with the so-called theory of mind—the ability to attribute mental states to oneself and others. Theory of mind is closely related to social intelligence, defined as a set of abilities that facilitate effective social interactions. Social intelligence encompasses multiple theory of mind components and can be measured by the Four Factor Test of Social Intelligence (the Guilford-Sullivan test). However, it is unclear whether differences in social intelligence are reflected in structural brain differences. During the experiment, 48 healthy right-handed individuals completed the Guilford-Sullivan test. T1-weighted structural MRI images were obtained for all participants. Voxel-based morphometry analysis was performed to reveal grey matter volume differences between the two groups (24 subjects in each)—with high social intelligence scores and with low social intelligence scores, respectively. Participants with high social intelligence scores had larger grey matter volumes of the bilateral caudate. The obtained results suggest the involvement of the caudate nucleus in the neural system of socio-cognitive processes, reflected by its structural characteristics.
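
As a rough illustration of the reported group comparison, the sketch below runs a voxelwise two-sample t-test on simulated grey matter values; it stands in for, and does not reproduce, the full VBM pipeline (segmentation, normalization, smoothing, and cluster-level correction) used in the study.

```python
# Toy voxelwise two-sample comparison between high- and low-scoring groups.
# The data are simulated and the Bonferroni threshold is a simple placeholder
# for the cluster-level correction used in real VBM analyses.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_per_group, n_voxels = 24, 1000  # 24 subjects per group, toy voxel count
gm_high = rng.normal(0.55, 0.05, (n_per_group, n_voxels))  # simulated GM volumes
gm_low = rng.normal(0.50, 0.05, (n_per_group, n_voxels))

t_vals, p_vals = stats.ttest_ind(gm_high, gm_low, axis=0)  # high vs. low at each voxel
significant = p_vals < 0.05 / n_voxels                     # Bonferroni threshold
print(f"voxels surviving threshold: {significant.sum()} of {n_voxels}")
```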


2019 ◽  
Author(s):  
Jairo Pérez-Osorio ◽  
Serena Marchesi ◽  
Davide Ghiglino ◽  
Melis Ince ◽  
Agnieszka Wykowska

Expectations about others’ behavior based on mental states modulate the way we interact with people. On the brink of the introduction of robots into our social environment, the question of whether humans use the same strategy when interacting with artificial agents gains relevance. Recent research shows that people can adopt the mentalistic stance to explain the behavior of humanoid robots [1]. Adopting such a strategy might be mediated, among other factors, by the expectations that people have about robots and technology. The present study aims to create a questionnaire to evaluate such expectations and to test whether these priors in fact modulate the adoption of the intentional stance. We found that people’s expectations directed towards a particular robot platform influence the adoption of mental-state-based explanations of an artificial agent’s behavior. Lower expectations were associated with anxiety during interaction with robots and with neuroticism, whereas higher expectations were linked to feeling less discomfort when interacting with robots and to a higher degree of openness. Our findings suggest that platform-directed expectations might also play a crucial role in HRI and in the adoption of the intentional stance toward artificial agents.
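
The reported associations between platform-directed expectations and individual traits can be sketched as a simple correlational analysis; the variable names and simulated data below are purely illustrative and do not reproduce the study’s measures.

```python
# Illustrative correlational analysis between expectation scores and traits.
# All variables and effect sizes are simulated for demonstration only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 50
expectations = rng.normal(3.5, 0.8, n)                        # questionnaire score
anxiety = 5.0 - 0.6 * expectations + rng.normal(0, 0.5, n)    # assumed negative link
neuroticism = 4.5 - 0.4 * expectations + rng.normal(0, 0.6, n)

for name, trait in [("robot-related anxiety", anxiety), ("neuroticism", neuroticism)]:
    r, p = stats.pearsonr(expectations, trait)
    print(f"expectations vs {name}: r = {r:.2f}, p = {p:.3f}")
```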


2019 ◽  
Author(s):  
Jairo Pérez-Osorio ◽  
Agnieszka Wykowska

In our day-to-day lives, we need to predict and understand others’ behavior in order to efficiently navigate through our social environment. When making predictions about what others are going to do next, we refer to their mental states, such as beliefs or intentions. At the dawn of a new era, in which robots will be among us in our homes and offices, one needs to ask whether (or when) we predict and also explain robots’ behavior with reference to mental states. In other words, do we adopt the intentional stance (Dennett in The Intentional Stance. MIT Press, Cambridge (1987) [1]) also towards artificial agents—especially those with humanoid shape and human-like behavior? What plays a role in adopting the intentional stance towards robots? Does adopting an intentional stance affect our social attunement with artificial agents? In this chapter, we first discuss the general approach that we take towards examining these questions—using objective methods of cognitive neuroscience to test social attunement as a function of adopting the intentional stance. We also describe our newly developed method for examining whether participants adopt the intentional stance towards an artificial agent. The chapter concludes with an outlook on the questions that still need to be addressed, such as the ethical consequences and societal impact of robots with which we attune socially and towards which we adopt the intentional stance.


2021 ◽  
Vol 8 ◽  
Author(s):  
Serena Marchesi ◽  
Francesco Bossi ◽  
Davide Ghiglino ◽  
Davide De Tommaso ◽  
Agnieszka Wykowska

The presence of artificial agents in our everyday lives is continuously increasing. Hence, the question of how human social cognition mechanisms are activated in interactions with artificial agents, such as humanoid robots, is frequently being asked. One interesting question is whether humans perceive humanoid robots as mere artifacts (interpreting their behavior with reference to their function, thereby adopting the design stance) or as intentional agents (interpreting their behavior with reference to mental states, thereby adopting the intentional stance). Due to their humanlike appearance, humanoid robots might be capable of evoking the intentional stance. On the other hand, the knowledge that humanoid robots are only artifacts should call for adopting the design stance. Thus, observing a humanoid robot might evoke a cognitive conflict between the natural tendency of adopting the intentional stance and the knowledge about the actual nature of robots, which should elicit the design stance. In the present study, we investigated the cognitive conflict hypothesis by measuring participants’ pupil dilation during the completion of the InStance Test (IST). Prior to each pupillary recording, participants were instructed to observe the humanoid robot iCub behaving in two different ways (either machine-like or humanlike behavior). Results showed that pupil dilation and response time patterns were predictive of individual biases in the adoption of the intentional or design stance in the IST. These results may suggest individual differences in mental effort and cognitive flexibility in reading and interpreting the behavior of an artificial agent.
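
As a rough sketch of how pupil dilation and response times could be used to predict an individual’s stance bias, the example below fits a logistic regression on simulated data; the predictors, model choice, and data are assumptions for illustration and do not reproduce the authors’ analysis.

```python
# Illustrative prediction of a binary stance bias (design = 0, intentional = 1)
# from pupil dilation and response time. Data and model are simulated/assumed.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 60
pupil = rng.normal(0.0, 1.0, n)   # z-scored pupil dilation change
rt = rng.normal(0.0, 1.0, n)      # z-scored response time
# Simulated rule: larger pupil response and slower responses -> intentional bias
bias = (0.8 * pupil + 0.5 * rt + rng.normal(0.0, 1.0, n) > 0).astype(int)

X = np.column_stack([pupil, rt])
model = LogisticRegression().fit(X, bias)
print("coefficients (pupil, RT):", model.coef_[0])
print("training accuracy:", model.score(X, bias))
```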


2020 ◽  
Author(s):  
A Myznikov ◽  
M Zheltyakova ◽  
A Korotkov ◽  
M Kireev ◽  
R Masharipov ◽  
...  

Social interactions are a crucial aspect of human behaviour. Numerous neurophysiological studies have focused on socio-cognitive processes associated with the so-called theory of mind – the ability to attribute mental states to oneself and others. Theory of mind is closely related to social intelligence defined as a set of abilities that facilitate effective social interactions. Social intelligence encompasses multiple theory of mind components and can be measured by the Four Factor Test of Social Intelligence (the Guilford-Sullivan test). However, it is unclear whether the differences in social intelligence are reflected in structural brain differences. During the experiment, 48 healthy right-handed individuals completed the Guilford-Sullivan test. T1-weighted structural MRI images were obtained for all participants. Voxel-based morphometry analysis was performed to reveal grey matter volume differences between the two groups (24 subjects in each) – with high social intelligence scores and with low social intelligence scores, respectively. Participants with high social intelligence scores had larger grey matter volumes of the bilateral caudate, left insula, left inferior parietal lobule, inferior temporal gyrus, and middle occipital gyrus. Only the cluster in the caudate nuclei survived a cluster-level FWE correction for multiple comparisons. The obtained results suggest caudate nucleus involvement in the neural system of socio-cognitive processes, reflected by its structural characteristics.

