Bridging the Gap: On the Collaboration between Symbolic Interactionism and Distributed Artificial Intelligence in the Field of Multi-Agent Systems Research

1998 ◽  
Vol 21 (4) ◽  
pp. 441-463 ◽  
Author(s):  
Jörg Strübing


2021 ◽
Author(s):  
Qin Yang

Distributed artificial intelligence (DAI) studies artificial intelligence entities working together to reason, plan, solve problems, organize behaviors and strategies, make collective decisions and learn. This Ph.D. research proposes a principled Multi-Agent Systems (MAS) cooperation framework -- Self-Adaptive Swarm System (SASS) -- to bridge the fourth level automation gap between perception, communication, planning, execution, decision-making, and learning.
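
The abstract names a perception-communication-planning-execution-decision-learning pipeline without detailing it. Below is a minimal sketch of what one cycle of such a swarm agent might look like, assuming a toy environment; all class and method names are invented for illustration and are not from the thesis:

```python
# Hypothetical sketch of a SASS-style agent cycle: perceive, share beliefs
# with peers, then act on the shared picture. Everything here is invented.
import random

class ToyEnvironment:
    """Toy task: each agent should pick the action matching a hidden target."""
    def __init__(self, target="explore"):
        self.target = target

    def observe(self, agent_id):
        # Partial observation: a noisy hint about the target action.
        hint = self.target if random.random() > 0.3 else "unknown"
        return {"hint": hint}

    def execute(self, agent_id, action):
        return 1.0 if action == self.target else 0.0

class SwarmAgent:
    def __init__(self, agent_id):
        self.agent_id = agent_id
        self.beliefs = {}

    def perceive(self, env):
        self.beliefs.update(env.observe(self.agent_id))

    def communicate(self, peers):
        # Share any confident hint so the swarm converges on one belief.
        if self.beliefs.get("hint", "unknown") != "unknown":
            for peer in peers:
                peer.beliefs["hint"] = self.beliefs["hint"]

    def act(self, env):
        # Plan: follow the shared hint if one exists, otherwise explore.
        action = self.beliefs.get("hint", "unknown")
        if action == "unknown":
            action = random.choice(["explore", "exploit"])
        return env.execute(self.agent_id, action)

env = ToyEnvironment()
agents = [SwarmAgent(i) for i in range(3)]
for step in range(5):
    for a in agents:
        a.perceive(env)
    for a in agents:
        a.communicate([p for p in agents if p is not a])
    rewards = [a.act(env) for a in agents]
    print(f"step {step}: total reward {sum(rewards)}")
```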


Author(s):  
Cheng-Gang Bian ◽  
Wen Cao ◽  
Gunnar Hartvigsen

ViSe2 is an expert consulting system which employs software agents to manage distributed knowledge sources. These individual software agents solve users’ problems either by themselves or via cooperation. The efficiency of cooperation plays a critical role in Distributed Problem Solving (DPS) and Multi-Agent Systems (MAS). We have focused on the development of a twin-base approach that lets agents model each other’s capabilities and thus cooperate efficiently. The current version of the ViSe2 implementation is an experimental model of an agent-based expert system. Compared with other cooperation approaches in the Distributed Artificial Intelligence (DAI) area, the results obtained so far indicate that ViSe2 agents serve their users through efficient cooperation.
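
The abstract does not spell out the twin-base structure, but a plausible reading is that each agent keeps one base for its own capabilities and a second for acquaintances' capabilities, so tasks can be routed without broadcasting queries. A hedged Python sketch under that assumption:

```python
# Hedged sketch of a twin-base style capability model: each agent keeps a
# base of its own skills and a second base describing acquaintances.
# The exact structure of ViSe2's twin bases is not given in the abstract,
# so everything below is an illustrative assumption.

class Agent:
    def __init__(self, name, skills):
        self.name = name
        self.self_base = set(skills)    # what this agent can do
        self.acquaintance_base = {}     # peer name -> advertised skills

    def advertise(self, peers):
        # Let peers record our capabilities ahead of time.
        for peer in peers:
            peer.acquaintance_base[self.name] = self.self_base

    def solve(self, task, peers_by_name):
        # Solve locally if possible; otherwise delegate using the
        # acquaintance base instead of querying every peer.
        if task in self.self_base:
            return f"{self.name} solved {task} locally"
        for peer_name, skills in self.acquaintance_base.items():
            if task in skills:
                return peers_by_name[peer_name].solve(task, peers_by_name)
        return f"{self.name} cannot route {task}"

a = Agent("diagnoser", {"diagnose"})
b = Agent("planner", {"plan"})
for agent in (a, b):
    agent.advertise([p for p in (a, b) if p is not agent])
print(a.solve("plan", {"diagnoser": a, "planner": b}))
```

The point of the second base is that delegation becomes a local lookup rather than a network-wide search, which is one way the claimed cooperation efficiency could arise.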


1993 ◽  
Vol 8 (3) ◽  
pp. 223-250 ◽  
Author(s):  
Nick R. Jennings

Distributed Artificial Intelligence systems, in which multiple agents interact to improve their individual performance and to enhance the systems' overall utility, are becoming an increasingly pervasive means of conceptualising a diverse range of applications. As the discipline matures, researchers are beginning to strive for the underlying theories and principles which guide the central processes of coordination and cooperation. Here agent communities are modelled using a distributed goal search formalism, and it is argued that commitments (pledges to undertake a specific course of action) and conventions (means of monitoring commitments in changing circumstances) are the foundation of coordination in multi-agent systems. An analysis of existing coordination models which use concepts akin to commitments and conventions is undertaken before a new unifying framework is presented. Finally, a number of prominent coordination techniques which do not explicitly involve commitments or conventions are reformulated in these terms to demonstrate their compliance with the central hypothesis of this paper.
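
A small illustration of the paper's two central notions, with invented rules: a commitment is a pledge to pursue a goal, and a convention dictates when that pledge must be re-examined as circumstances change:

```python
# Hedged illustration of commitments and conventions; the concrete drop
# rules below are invented for this sketch, not taken from the paper.
from dataclasses import dataclass

@dataclass
class Commitment:
    agent: str
    goal: str
    active: bool = True

def convention(commitment, world):
    """Re-examine the commitment when circumstances change.

    A typical convention: drop the commitment when the goal is
    already satisfied or has become unachievable; otherwise retain it.
    """
    status = world.get(commitment.goal)
    if status == "satisfied":
        commitment.active = False
        return "dropped: goal achieved"
    if status == "impossible":
        commitment.active = False
        return "dropped: goal unachievable"
    return "retained"

c = Commitment(agent="robot-1", goal="deliver-part")
print(convention(c, {"deliver-part": "pending"}))    # retained
print(convention(c, {"deliver-part": "satisfied"}))  # dropped: goal achieved
```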


Author(s):  
Mehdi Dastani ◽  
Paolo Torroni ◽  
Neil Yorke-Smith

The concept of a norm is found widely across fields including artificial intelligence, biology, computer security, cultural studies, economics, law, organizational behaviour and psychology. The concept is studied with different terminology and perspectives, including individual, social, legal and philosophical. If a norm is an expected behaviour in a social setting, then this article considers how it can be determined whether an individual is adhering to this expected behaviour. We call this process monitoring, and again it is a concept known with different terminology in different fields. Monitoring of norms is foundational for processes of accountability, enforcement, regulation and sanctioning. Starting with a broad focus and narrowing to the multi-agent systems literature, this survey addresses four key questions: what is monitoring, what is monitored, who does the monitoring and how the monitoring is accomplished.
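
A minimal sketch of what such monitoring amounts to operationally, with an invented norm and event format: a monitor checks observed events against expected behaviours and reports violations for downstream enforcement or sanctioning:

```python
# Hedged sketch of norm monitoring; the norms and the event schema are
# illustrative assumptions, not the survey's formalism.

# A norm as (description, predicate over an observed event).
norms = [
    ("respond within 3 time units",
     lambda e: e["response_time"] <= 3),
    ("never access the restricted resource",
     lambda e: e["resource"] != "restricted"),
]

def monitor(event):
    # Return the violations this event triggers; an enforcement or
    # sanctioning process would consume this output.
    return [desc for desc, ok in norms if not ok(event)]

event = {"agent": "a1", "response_time": 5, "resource": "public"}
print(monitor(event))  # ['respond within 3 time units']
```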


AI Magazine ◽  
2018 ◽  
Vol 39 (4) ◽  
pp. 29-35
Author(s):  
Christopher Amato ◽  
Haitham Bou Ammar ◽  
Elizabeth Churchill ◽  
Erez Karpas ◽  
Takashi Kido ◽  
...  

The Association for the Advancement of Artificial Intelligence, in cooperation with Stanford University’s Department of Computer Science, presented the 2018 Spring Symposium Series, held Monday through Wednesday, March 26–28, 2018, on the campus of Stanford University. The seven symposia held were AI and Society: Ethics, Safety and Trustworthiness in Intelligent Agents; Artificial Intelligence for the Internet of Everything; Beyond Machine Intelligence: Understanding Cognitive Bias and Humanity for Well-Being AI; Data Efficient Reinforcement Learning; The Design of the User Experience for Artificial Intelligence (the UX of AI); Integrated Representation, Reasoning, and Learning in Robotics; and Learning, Inference, and Control of Multi-Agent Systems. This report, compiled from the organizers of the symposia, summarizes the research presented at five of the seven symposia.


2019 ◽  
Vol 3 (2) ◽  
pp. 21 ◽  
Author(s):  
David Manheim

An important challenge for safety in machine learning and artificial intelligence systems is a set of related failures involving specification gaming, reward hacking, fragility to distributional shifts, and Goodhart’s or Campbell’s law. This paper presents additional, closely related failure modes that arise from interactions within multi-agent systems. These multi-agent failure modes are more complex, more problematic, and less well understood than the single-agent case, and they are already occurring, largely unnoticed. After motivating the discussion with examples from poker-playing artificial intelligence (AI), the paper explains why these failure modes are in some senses unavoidable. Following this, the paper categorizes the failure modes, provides definitions, and cites examples for each: accidental steering, coordination failures, adversarial misalignment, input spoofing and filtering, and goal co-option or direct hacking. The paper then discusses how the extant literature on multi-agent AI fails to address these failure modes, and identifies work that may be useful for their mitigation.
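
A toy illustration of the Goodhart-style failure the paper builds on, with invented numbers: an agent that selects on a measurable proxy drives the true objective down once the proxy and the goal come apart:

```python
# Illustrative toy of Goodhart's law under optimization pressure; the
# objective and proxy functions here are invented for this sketch.
import random

def true_value(x):
    # The real objective: quality peaks at moderate effort (x = 3).
    return -(x - 3) ** 2 + 9

def proxy(x):
    # The measured metric correlates with quality at first but keeps
    # rewarding x indefinitely (e.g. raw output volume).
    return 2 * x

best_x, best_proxy = 0.0, float("-inf")
for _ in range(1000):
    x = random.uniform(0, 10)
    if proxy(x) > best_proxy:  # the agent selects on the proxy...
        best_x, best_proxy = x, proxy(x)

print(f"proxy-optimal x = {best_x:.2f}, "
      f"true value there = {true_value(best_x):.2f}")
# Hard selection on the proxy drives x toward 10, where true value is
# about -40: the measure has ceased to be a good target.
```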

