Agency and rationality: Adopting the intentional stance toward evolved virtual agents.

Decision ◽  
2016 ◽  
Vol 3 (1) ◽  
pp. 40-53 ◽  
Author(s):  
Peter C. Pantelis ◽  
Timothy Gerstner ◽  
Kevin Sanik ◽  
Ari Weinstein ◽  
Steven A. Cholewiak ◽  
...  


Author(s):  
Guglielmo Papagni ◽  
Sabine Koeszegi

Abstract Artificial agents are progressively becoming more present in everyday-life situations and more sophisticated in their interaction affordances. In some specific cases, such as Google Duplex, GPT-3 bots, or DeepMind’s AlphaGo Zero, their capabilities reach or exceed human levels. Everyday use contexts require that such agents be understandable to laypeople. At the same time, displaying human levels of social behavior has kindled the debate over the adoption of Dennett’s ‘intentional stance’. By means of a comparative analysis of the literature on robots and virtual agents, we defend the thesis that approaching these artificial agents ‘as if’ they had intentions and forms of social, goal-oriented rationality is the only way to deal with their complexity on a daily basis. Specifically, we claim that this is the only viable strategy for non-expert users to understand, predict and perhaps learn from artificial agents’ behavior in everyday social contexts. Furthermore, we argue that as long as agents are transparent about their design principles and functionality, attributing intentions to their actions is not only essential, but also ethical. Additionally, we propose design guidelines inspired by the debate over the adoption of the intentional stance.


2021 ◽  
Author(s):  
Lorenzo Parenti ◽  
Serena Marchesi ◽  
Marwen Belkaid ◽  
Agnieszka Wykowska

Understanding how and when humans attribute intentionality to artificial agents is a key issue in contemporary human and technological sciences. This paper addresses the question of whether adoption of the intentional stance can be modulated by exposure to a 3D animated robot character, and whether this depends on the human-likeness of the character's behavior. We report three experiments investigating how the appearance and behavioral features of a virtual character affect humans’ attribution of intentionality toward artificial social agents. The results show that adoption of the intentional stance can be modulated depending on participants' expectations about the agent. This study brings attention to specific features of virtual agents and offers insights for further work in the field of virtual interaction.


2020 ◽  
Vol 43 ◽  
Author(s):  
Hannes Rakoczy

Abstract The natural history of our moral stance told here in this commentary reveals the close nexus of morality and basic social-cognitive capacities. Big mysteries about morality thus transform into smaller and more manageable ones. Here, I raise questions regarding the conceptual, ontogenetic, and evolutionary relations of the moral stance to the intentional and group stances and to shared intentionality.


Author(s):  
David Rosenthal

Dennett’s account of consciousness starts from third-person considerations. I argue this is wise, since beginning with first-person access precludes accommodating the third-person access we have to others’ mental states. But Dennett’s first-person operationalism, which seeks to save the first person in third-person, operationalist terms, denies the occurrence of folk-psychological states that one doesn’t believe oneself to be in, and so the occurrence of folk-psychological states that aren’t conscious. This conflicts with Dennett’s intentional-stance approach to the mental, on which we discern others’ mental states independently of those states’ being conscious. We can avoid this conflict with a higher-order theory of consciousness, which saves the spirit of Dennett’s approach, but enables us to distinguish conscious folk-psychological states from nonconscious ones. The intentional stance by itself can’t do this, since it can’t discern a higher-order awareness of a psychological state. But we can supplement the intentional stance with the higher-order theoretical apparatus.


2021 ◽  
pp. 089443932110068
Author(s):  
Aleksandra Urman ◽  
Mykola Makhortykh ◽  
Roberto Ulloa

We examine how six search engines filter and rank information in relation to queries on the 2020 U.S. presidential primary elections under default, that is, nonpersonalized, conditions. To do so, we utilize an algorithmic auditing methodology that uses virtual agents to conduct large-scale analysis of algorithmic information curation in a controlled environment. Specifically, we look at the text search results for the queries “us elections,” “donald trump,” “joe biden,” and “bernie sanders” on Google, Baidu, Bing, DuckDuckGo, Yahoo, and Yandex during the 2020 primaries. Our findings indicate substantial differences in the search results between search engines and multiple discrepancies within the results generated for different agents using the same search engine. This highlights that whether users see certain information is decided by chance, owing to the inherent randomization of search results. We also find that some search engines prioritize different categories of information sources with respect to specific candidates. These observations demonstrate that algorithmic curation of political information can create information inequalities between search engine users even under nonpersonalized conditions. Such inequalities are particularly troubling given that search results are highly trusted by the public and, as previous research has demonstrated, can shift the opinions of undecided voters.


2021 ◽  
Vol 0 (0) ◽  
Author(s):  
Niklas Rach ◽  
Klaus Weber ◽  
Yuchi Yang ◽  
Stefan Ultes ◽  
Elisabeth André ◽  
...  

Abstract Persuasive argumentation depends on multiple aspects, which include not only the content of the individual arguments but also the way they are presented. The presentation of arguments is crucial, in particular in the context of dialogical argumentation. However, the effects of different discussion styles on the listener are hard to isolate in human dialogues. In order to demonstrate and investigate various styles of argumentation, we propose a multi-agent system in which different aspects of persuasion can be modelled and investigated separately. Our system utilizes argument structures extracted from text-based reviews, for which a minimal bias of the user can be assumed. The persuasive dialogue is modelled as a dialogue game for argumentation that was motivated by the objective of enabling both natural and flexible interactions between the agents. In order to support a comparison of factual against affective persuasion approaches, we implemented two fundamentally different strategies for both agents: the logical policy utilizes deep reinforcement learning in a multi-agent setup to optimize the strategy with respect to the game formalism and the available arguments. In contrast, the emotional policy selects the next move in compliance with an agent emotion that is adapted to user feedback in order to persuade on an emotional level. The resulting interaction is presented to the user via virtual avatars and can be rated through an intuitive interface.

