Multiparty Dynamics and Failure Modes for Machine Learning and Artificial Intelligence

2019 ◽  
Vol 3 (2) ◽  
pp. 21 ◽  
Author(s):  
David Manheim

An important challenge for safety in machine learning and artificial intelligence systems is a set of related failures involving specification gaming, reward hacking, fragility to distributional shifts, and Goodhart’s or Campbell’s law. This paper presents additional failure modes for interactions within multi-agent systems that are closely related. These multi-agent failure modes are more complex, more problematic, and less well understood than the single-agent case, and are also already occurring, largely unnoticed. After motivating the discussion with examples from poker-playing artificial intelligence (AI), the paper explains why these failure modes are in some senses unavoidable. Following this, the paper categorizes failure modes, provides definitions, and cites examples for each of the modes: accidental steering, coordination failures, adversarial misalignment, input spoofing and filtering, and goal co-option or direct hacking. The paper then discusses how extant literature on multi-agent AI fails to address these failure modes, and identifies work which may be useful for the mitigation of these failure modes.
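As a toy illustration of the proxy-gaming dynamic underlying Goodhart's and Campbell's laws (a minimal sketch, not the paper's model; the distributions and numbers are illustrative assumptions): an optimizer that selects on a gameable proxy reliably picks the most gameable option rather than the truly best one.

```python
import numpy as np

# Toy Goodhart/reward-hacking demo (illustrative assumptions only).
# Each candidate action has a true value and a proxy score; the proxy
# correlates with the true value but carries exploitable "gaming" slack.
rng = np.random.default_rng(42)
true_value = rng.normal(size=1000)
gaming = rng.exponential(scale=2.0, size=1000)  # slack in the measured metric
proxy = true_value + gaming

chosen = np.argmax(proxy)  # the optimizer selects purely on the proxy
print("best available true value:", true_value.max())
print("true value actually obtained:", true_value[chosen])
# Under strong optimization pressure, the selected action tends to be the
# one with the most gameable slack, not the one with the highest true value.
```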


2011 ◽  
pp. 1429-1438
Author(s):  
Valentina Plekhanova

Traditionally, multi-agent learning is considered the intersection of two subfields of artificial intelligence: multi-agent systems and machine learning. Conventional machine learning involves a single agent trying to maximise some utility function without any awareness of the existence of other agents in the environment (Mitchell, 1997). Multi-agent systems, meanwhile, consider mechanisms for the interaction of autonomous agents. A learning system is defined as one in which an agent learns to interact with other agents (e.g., Clouse, 1996; Crites & Barto, 1998; Parsons, Wooldridge & Amgoud, 2003). There are two problems agents must overcome in order to interact with each other and reach their individual or shared goals: since agents can be available or unavailable (i.e., they might appear and/or disappear at any time), they must be able to find each other, and they must be able to interact (Jennings, Sycara & Wooldridge, 1998).
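A minimal sketch of the contrast drawn above (illustrative only, not from the chapter): two conventional single-agent Q-learners placed in a shared environment, each maximising its own utility with no awareness of the other, playing a repeated 2x2 coordination game.

```python
import numpy as np

# Two independent, stateless Q-learners in a repeated coordination game.
# Neither agent models the other; each just updates its own action values.
payoff = np.array([[1.0, 0.0],
                   [0.0, 1.0]])  # both agents rewarded for matching actions

rng = np.random.default_rng(0)
Q = [np.zeros(2), np.zeros(2)]   # one action-value table per agent
alpha, eps = 0.1, 0.1            # learning rate and exploration rate

for t in range(5000):
    acts = []
    for q in Q:
        if rng.random() < eps:
            acts.append(int(rng.integers(2)))   # explore
        else:
            acts.append(int(np.argmax(q)))      # exploit
    r = payoff[acts[0], acts[1]]                # shared coordination reward
    for i in (0, 1):
        Q[i][acts[i]] += alpha * (r - Q[i][acts[i]])

# The agents typically converge to a matching action pair despite never
# explicitly coordinating or even knowing the other agent exists.
print([int(np.argmax(q)) for q in Q])
```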


Author(s):  
Mehdi Dastani ◽  
Paolo Torroni ◽  
Neil Yorke-Smith

The concept of a norm is found widely across fields including artificial intelligence, biology, computer security, cultural studies, economics, law, organizational behaviour and psychology. The concept is studied with different terminology and perspectives, including individual, social, legal and philosophical. If a norm is an expected behaviour in a social setting, then this article considers how it can be determined whether an individual is adhering to this expected behaviour. We call this process monitoring; it too is a concept known by different terminology in different fields. Monitoring of norms is foundational for processes of accountability, enforcement, regulation and sanctioning. Starting with a broad focus and narrowing to the multi-agent systems literature, this survey addresses four key questions: what is monitoring, what is monitored, who does the monitoring, and how the monitoring is accomplished.
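A minimal sketch of the monitoring idea (illustrative only; the norm, event format, and deadline are assumptions, and the survey covers far richer formalisms): a monitor scans an event trace and flags agents whose behaviour deviates from an expected request-response norm.

```python
from dataclasses import dataclass

# Hypothetical norm: every 'request' an agent issues must be followed by a
# 'respond' from that agent within `deadline` steps. The monitor observes
# the trace and records violations for later sanctioning/enforcement.
@dataclass
class Event:
    step: int
    agent: str
    action: str

def violations(trace, deadline=3):
    pending = {}   # agent -> step of its still-unanswered request
    found = []
    for e in trace:
        if e.action == "request":
            pending[e.agent] = e.step
        elif e.action == "respond" and e.agent in pending:
            del pending[e.agent]
        # flag any pending request whose deadline has now passed
        for agent, t in list(pending.items()):
            if e.step - t > deadline:
                found.append((agent, t))
                del pending[agent]
    return found

trace = [Event(0, "a1", "request"), Event(2, "a1", "respond"),
         Event(1, "a2", "request"), Event(6, "a2", "tick")]
print(violations(trace))   # [('a2', 1)] -> a2 failed to respond in time
```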


Author(s):  
Chengzhi Yuan

This paper addresses the problem of leader-following consensus control of general linear multi-agent systems (MASs) with diverse time-varying input delays under the integral quadratic constraint (IQC) framework. A novel exact-memory distributed output-feedback delay controller structure is proposed, which utilizes not only relative estimation-state information from neighboring agents but also local real-time information about the time delays and the associated dynamic IQC-induced states from the agent itself for feedback control. As a result, the distributed consensus problem can be decomposed into H∞ stabilization subproblems for a set of independent linear fractional transformation (LFT) systems, whose dimensions equal those of a single-agent plant plus the associated local IQC dynamics. New delay-control synthesis conditions for each subproblem are fully characterized as linear matrix inequalities (LMIs). A numerical example demonstrates the proposed approach.
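The synthesis conditions themselves are beyond the scope of an abstract, but as a minimal sketch of the LMI machinery involved (illustrative assumptions throughout: double-integrator agent dynamics, a guessed stabilizing gain, and a plain Lyapunov inequality standing in for the paper's IQC-based conditions), such conditions can be checked numerically with a semidefinite solver:

```python
import cvxpy as cp
import numpy as np

# Assumed single-agent dynamics (double integrator) and an assumed gain K.
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
K = np.array([[-1.0, -2.0]])   # hypothetical stabilizing feedback gain
Acl = A + B @ K                # closed-loop matrix for one agent

# LMI feasibility: find P > 0 with Acl' P + P Acl < 0 (Lyapunov inequality),
# a toy stand-in for the paper's delay/IQC synthesis LMIs.
P = cp.Variable((2, 2), symmetric=True)
eps = 1e-6
constraints = [P >> eps * np.eye(2),
               Acl.T @ P + P @ Acl << -eps * np.eye(2)]
prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve(solver=cp.SCS)
print(prob.status)   # 'optimal' indicates the LMI is feasible, i.e. stable
print(P.value)
```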


Author(s):  
José A. R. P. Sardinha ◽  
Alessandro Garcia ◽  
Carlos J. P. Lucena ◽  
Ruy L. Milidiú

AI Magazine ◽  
2018 ◽  
Vol 39 (4) ◽  
pp. 29-35
Author(s):  
Christopher Amato ◽  
Haitham Bou Ammar ◽  
Elizabeth Churchill ◽  
Erez Karpas ◽  
Takashi Kido ◽  
...  

The Association for the Advancement of Artificial Intelligence, in cooperation with Stanford University’s Department of Computer Science, presented the 2018 Spring Symposium Series, held Monday through Wednesday, March 26–28, 2018, on the campus of Stanford University. The seven symposia held were AI and Society: Ethics, Safety and Trustworthiness in Intelligent Agents; Artificial Intelligence for the Internet of Everything; Beyond Machine Intelligence: Understanding Cognitive Bias and Humanity for Well-Being AI; Data Efficient Reinforcement Learning; The Design of the User Experience for Artificial Intelligence (the UX of AI); Integrated Representation, Reasoning, and Learning in Robotics; and Learning, Inference, and Control of Multi-Agent Systems. This report, compiled from the organizers of the symposia, summarizes the research presented at five of them.


2001 ◽  
Vol 16 (3) ◽  
pp. 277-284 ◽  
Author(s):  
EDUARDO ALONSO ◽  
MARK D'INVERNO ◽  
DANIEL KUDENKO ◽  
MICHAEL LUCK ◽  
JASON NOBLE

In recent years, multi-agent systems (MASs) have received increasing attention in the artificial intelligence community. Research in multi-agent systems involves the investigation of autonomous, rational and flexible behaviour of entities such as software programs or robots, and of their interaction and coordination in such diverse areas as robotics (Kitano et al., 1997), information retrieval and management (Klusch, 1999), and simulation (Gilbert & Conte, 1995). When designing agent systems, it is impossible to foresee all the potential situations an agent may encounter, so agent behaviour cannot be optimally specified in advance. Agents therefore have to learn from, and adapt to, their environment, especially in a multi-agent setting.

