Some Forms of Collectively Bringing About or ‘Seeing to it that’

Author(s):  
Marek Sergot

Abstract
One of the best-known approaches to the logic of agency is the family of ‘stit’ (‘seeing to it that’) logics. Often it is not the actions of an individual agent that bring about a certain outcome but the joint actions of a set of agents, collectively. Collective agency has received comparatively little attention in ‘stit’. The paper maps out several different forms, several different senses, in which a particular set of agents, collectively, can be said to bring about a certain outcome, and examines how these forms can be expressed in ‘stit’ and stit-like logics. The outcome that is brought about may be unintentional, and perhaps even accidental; the account deliberately ignores aspects such as joint intention, communication between agents, awareness of other agents’ intentions and capabilities, and even awareness of another agent’s existence. The aim is to investigate what can be said about collective agency when everything besides the mere consequences of joint actions is ignored. The account is related to the ‘strictly stit’ of Belnap and Perloff (Annals of Mathematics and Artificial Intelligence 9(1–2), 25–48, 1993) and their suggestions concerning ‘inessential members’ and ‘mere bystanders’. We adjust some of those conjectures and distinguish further between ‘potentially contributing bystanders’ and ‘impotent bystanders’.
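For orientation, the standard Chellas-style truth condition for a group stit modality can be written as follows; the notation reflects common usage in the stit literature and is given here only as background, not as the paper's own definition:

\[
\mathcal{M}, \langle m, h \rangle \models [G\ \mathit{cstit}{:}\,\varphi]
\quad\text{iff}\quad
\mathcal{M}, \langle m, h' \rangle \models \varphi
\ \text{for every } h' \in \mathit{Choice}^{m}_{G}(h),
\qquad
\mathit{Choice}^{m}_{G}(h) = \bigcap_{a \in G} \mathit{Choice}^{m}_{a}(h).
\]

Informally: the set of agents \(G\), collectively, sees to it that \(\varphi\) at moment \(m\) on history \(h\) if \(\varphi\) is guaranteed by the combination of choices the members of \(G\) have actually made at \(m\), whatever all other agents do.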

Author(s):  
Nicholas Mattei
Paolo Turrini
Stanislav Zhydkov

In peer selection, agents must choose a subset of themselves for an award or a prize. As agents are self-interested, we want to design algorithms that are impartial, so that an individual agent cannot affect its own chance of being selected. This problem has broad applications in resource allocation and mechanism design and has received substantial attention in the artificial intelligence literature. Here, we present a novel algorithm for impartial peer selection, PeerNomination, and provide a theoretical analysis of its accuracy. Our algorithm possesses various desirable features. In particular, it does not require an explicit partitioning of the agents, as previous algorithms in the literature do. We show empirically that it achieves higher accuracy than existing algorithms over several metrics.
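The impartiality property is easy to illustrate: under a fixed review assignment in which no agent reviews itself, an agent's own report can never influence its own selection. The sketch below shows a simple nomination-style rule with this property; the majority threshold and all names are illustrative assumptions, not the paper's exact PeerNomination procedure.

# Illustrative impartial peer selection (a sketch, NOT the exact PeerNomination rule).
# Each reviewer scores an assigned set of peers (never itself); a candidate is
# selected if a majority of its reviewers place it in their top fraction.
def impartial_select(reviews, quota=0.5):
    # reviews: dict mapping reviewer -> {candidate: score}, with no self-reviews
    nominations, reviewers_of = {}, {}
    for reviewer, scores in reviews.items():
        k = max(1, int(len(scores) * quota))  # size of this reviewer's "top" set
        for candidate in sorted(scores, key=scores.get, reverse=True)[:k]:
            nominations[candidate] = nominations.get(candidate, 0) + 1
        for candidate in scores:
            reviewers_of[candidate] = reviewers_of.get(candidate, 0) + 1
    # An agent never appears in its own report, so it cannot change its own count.
    return {j for j, n in nominations.items() if n > reviewers_of[j] / 2}

Because the review assignment is fixed in advance and each agent's report mentions only other agents, altering agent j's report leaves j's own nomination count unchanged, which is exactly the impartiality requirement.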


2021
Author(s):  
Marine Pagliari
Valerian Chambon
Bruno Berberian

The introduction of automated systems, and more broadly of Artificial Intelligence (AI), into many domains has profoundly changed the nature of human activity, as well as the subjective experience that agents have of their own actions and their consequences – an experience commonly referred to as the "sense of agency" (SoA). In this review, we examine the empirical evidence for this impact of automation on individuals’ sense of agency, and hence on measures as diverse as operator performance, system explicability, and acceptability. Because of some of its key characteristics, AI occupies a special status in the landscape of artificial systems. We suggest that this status prompts us to reconsider human-AI interactions in the light of human-human relations. We therefore turn to the study of joint actions in human social interactions to identify the key features necessary for developing a reliable SoA in a social context. We suggest that studying social interactions and the development of SoA in joint actions can help determine the content of the explanations that should be implemented in AI to make it "explainable". Finally, we propose possible directions for improving human-AI interactions and, in particular, for restoring the SoA of human operators, improving their confidence in the decisions made by artificial agents, and increasing the acceptability of such agents.


2021
Vol 2021
pp. 1–14
Author(s):  
Siyuan Ding
Shengxiang Li
Guangyi Liu
Ou Li
Ke Ke
...  

The exponential explosion of joint actions and massive data collection are two main challenges in multiagent reinforcement learning algorithms with centralized training. To overcome these problems, in this paper we propose a model-free, fully decentralized actor-critic multiagent reinforcement learning algorithm based on message diffusion. The agents are assumed to be placed in a time-varying communication network. Each agent makes only limited observations of the global state and joint actions; it therefore needs to obtain and share information with others over the network. In the proposed algorithm, agents hold local estimates of the global state and joint actions and update them with local observations and the messages received from neighbors. Under the hypothesis of global value decomposition, the gradient of the global objective function with respect to an individual agent is derived. The convergence of the proposed algorithm with linear function approximation is guaranteed by stochastic approximation theory. In the experiments, the proposed algorithm was applied to a multiagent passive localization task and achieved superior performance compared with state-of-the-art algorithms.
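The message-diffusion idea can be illustrated with a standard consensus-style update in which each agent mixes its local estimate with those of its current neighbors before applying a local correction. The sketch below is a generic illustration under assumed names and a simple averaging rule, not the paper's actual algorithm.

import numpy as np

# One round of consensus-style "message diffusion" (illustrative sketch):
# x is an (n_agents, dim) array of local estimates of a global quantity,
# neighbors[i] lists the agents that agent i can hear in this round (the
# network may change between rounds), and observations holds local data.
def diffusion_step(x, neighbors, observations, step_size=0.1):
    x_new = np.empty_like(x)
    for i in range(len(x)):
        group = [i] + list(neighbors[i])
        x_new[i] = x[group].mean(axis=0)                      # average received messages
        x_new[i] += step_size * (observations[i] - x_new[i])  # fold in local observation
    return x_new

In an actor-critic setting of the kind described, the local correction term would be replaced by a temporal-difference update of a critic parameter, with each agent's actor updated from its own diffused estimates.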


Author(s):  
David L. Poole
Alan K. Mackworth
