A Systematic Approach for Including Machine Learning in Multi-agent Systems

Author(s):  
José A. R. P. Sardinha ◽  
Alessandro Garcia ◽  
Carlos J. P. Lucena ◽  
Ruy L. Milidiú


2019 ◽  
Vol 3 (2) ◽  
pp. 21
Author(s):  
David Manheim

An important challenge for safety in machine learning and artificial intelligence systems is a set of related failures involving specification gaming, reward hacking, fragility to distributional shifts, and Goodhart’s or Campbell’s law. This paper presents additional failure modes for interactions within multi-agent systems that are closely related. These multi-agent failure modes are more complex, more problematic, and less well understood than the single-agent case, and are also already occurring, largely unnoticed. After motivating the discussion with examples from poker-playing artificial intelligence (AI), the paper explains why these failure modes are in some senses unavoidable. Following this, the paper categorizes failure modes, provides definitions, and cites examples for each of the modes: accidental steering, coordination failures, adversarial misalignment, input spoofing and filtering, and goal co-option or direct hacking. The paper then discusses how extant literature on multi-agent AI fails to address these failure modes, and identifies work which may be useful for the mitigation of these failure modes.
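
These failure modes generalize the single-agent specification-gaming problem the paper opens with. As a purely hypothetical sketch (not drawn from the paper, with names and numbers invented for illustration), the following Python toy shows Goodhart's law in miniature: an optimizer that improves a mismeasured proxy reward drives the true objective down.

```python
# A hypothetical toy, not from the paper: Goodhart's law in miniature.
# An optimizer improves a mismeasured proxy while the true goal degrades.
import random

def true_reward(shortcuts: float) -> float:
    # The designer's real objective: task quality, eroded by shortcuts.
    return 1.0 - 4.0 * shortcuts

def proxy_reward(shortcuts: float) -> float:
    # What is actually measured and optimized: throughput, inflated by
    # the same shortcuts (an exploitable gap in the specification).
    return 1.0 + 3.0 * shortcuts

def hill_climb(reward_fn, steps: int = 1000) -> float:
    """A naive optimizer standing in for a learning agent."""
    x = 0.0
    for _ in range(steps):
        candidate = min(1.0, max(0.0, x + random.uniform(-0.05, 0.05)))
        if reward_fn(candidate) > reward_fn(x):
            x = candidate
    return x

random.seed(0)
x = hill_climb(proxy_reward)
print(f"shortcuts={x:.2f}  proxy={proxy_reward(x):.2f}  true={true_reward(x):.2f}")
# The optimizer pushes shortcuts toward 1.0: the measured proxy is
# maximized while the true objective collapses.
```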


Author(s):  
Nicolas Verstaevel ◽  
Jérémy Boes ◽  
Julien Nigon ◽  
Dorian d'Amico ◽  
Marie-Pierre Gleizes

Author(s):  
Daniel Kudenko ◽  
Dimitar Kazakov ◽  
Eduardo Alonso

In order to be truly autonomous, agents need the ability to learn from and adapt to the environment and other agents. This chapter introduces key concepts of machine learning and how they apply to agent and multi-agent systems. Rather than present a comprehensive survey, we discuss a number of issues that we believe are important in the design of learning agents and multi-agent systems. Specifically, we focus on the challenges involved in adapting (originally disembodied) machine learning techniques to situated agents, the relationship between learning and communication, learning to collaborate and compete, learning of roles, evolution and natural selection, and distributed learning. In the second part of the chapter, we focus on some practicalities and present two case studies.
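
One widely used baseline for the issues the chapter raises, although not necessarily the technique of its case studies, is independent Q-learning: each agent learns as if the others were simply part of a non-stationary environment. The sketch below is a minimal, assumed implementation; the class name, hyperparameters, and the toy coordination game are illustrative, not the chapter's.

```python
# A minimal independent Q-learner: each agent treats the others as part
# of a non-stationary environment. Names and hyperparameters are
# illustrative assumptions, not taken from the chapter.
import random
from collections import defaultdict

class QAgent:
    def __init__(self, actions, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.q = defaultdict(float)  # (state, action) -> estimated value
        self.actions = actions
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def act(self, state):
        if random.random() < self.epsilon:  # explore
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q[(state, a)])  # exploit

    def update(self, state, action, reward, next_state):
        best_next = max(self.q[(next_state, a)] for a in self.actions)
        td_error = reward + self.gamma * best_next - self.q[(state, action)]
        self.q[(state, action)] += self.alpha * td_error

# Toy coordination game: two agents are rewarded only when their
# simultaneous actions match.
random.seed(0)
a1, a2 = QAgent([0, 1]), QAgent([0, 1])
state = "s"
for _ in range(5000):
    x, y = a1.act(state), a2.act(state)
    r = 1.0 if x == y else 0.0
    a1.update(state, x, r, state)
    a2.update(state, y, r, state)
print(a1.act(state), a2.act(state))  # typically both settle on one action
```

Run with different seeds, the pair usually converges on one of the two matching conventions, a small instance of the "learning to collaborate" problem discussed above.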


Author(s):  
Valentina Plekhanova

Traditionally, multi-agent learning is considered the intersection of two subfields of artificial intelligence: multi-agent systems and machine learning. Conventional machine learning involves a single agent that tries to maximise some utility function without any awareness of the existence of other agents in the environment (Mitchell, 1997). Multi-agent systems, meanwhile, are concerned with mechanisms for the interaction of autonomous agents. A learning system is defined as one in which an agent learns to interact with other agents (e.g., Clouse, 1996; Crites & Barto, 1998; Parsons, Wooldridge & Amgoud, 2003). Agents must overcome two problems in order to interact with each other and reach their individual or shared goals: since agents may appear and disappear at any time, they must be able to find each other, and, once found, they must be able to interact (Jennings, Sycara & Wooldridge, 1998).
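
The discovery problem in the last sentence is commonly addressed with a shared registry, in the spirit of a FIPA-style directory facilitator. The sketch below is an assumed, minimal illustration; the class and capability names are invented for this example.

```python
# An assumed, minimal sketch of agent discovery via a shared registry,
# in the spirit of a FIPA-style directory facilitator. All names are
# invented for this example.
class AgentRegistry:
    def __init__(self):
        self._agents = {}  # agent name -> set of advertised capabilities

    def register(self, name, capabilities):
        self._agents[name] = set(capabilities)

    def deregister(self, name):
        # Agents may disappear at any time, so removal must be tolerated.
        self._agents.pop(name, None)

    def find(self, capability):
        # All currently available agents offering a given capability.
        return [n for n, caps in self._agents.items() if capability in caps]

registry = AgentRegistry()
registry.register("seller-1", {"sell-books"})
registry.register("seller-2", {"sell-books", "sell-music"})
print(registry.find("sell-books"))  # ['seller-1', 'seller-2']
registry.deregister("seller-1")     # seller-1 leaves the system
print(registry.find("sell-books"))  # ['seller-2']
```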


Sensors ◽  
2020 ◽  
Vol 20 (18) ◽  
pp. 5198
Author(s):  
Emilio Serrano ◽  
Javier Bajo

The agent paradigm and multi-agent systems are a perfect match for the design of smart cities because of essential features such as decentralization, openness, and heterogeneity. However, these major advantages also come at a great cost. Since agents' mental states are hidden when the implementation is not known or available, the intelligent services of a smart city cannot leverage the information they contain. We propose an approach for analyzing and predicting agents' hidden mental states in a multi-agent system using machine learning methods that learn from agents' past interactions. The approach employs agent communication languages, a core property of these multi-agent systems, to infer theories and models about agents' mental states that are not accessible in an open system. These mental state models can be used on their own or combined to build protocol models, allowing agents (and their developers) to predict other agents' future behavior for tasks such as testing and debugging them or making communications more efficient, which is essential in an ambient intelligence environment. The paper's main contribution is to explore the problem of building these mental state models not from one but from several interaction protocols, even when the protocols serve different purposes and provide distinct ambient intelligence services.
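
As a hedged illustration of the idea (not the paper's actual method), one simple protocol model is a first-order transition model over FIPA-ACL performatives: estimate from logged dialogues how likely each performative is to follow the current one, then predict an agent's next message. The function names and toy log data below are assumptions.

```python
# A hedged sketch, not the paper's implementation: learn a first-order
# protocol model from logged agent communication by counting which
# FIPA-ACL performative follows which, then predict the next message.
from collections import Counter, defaultdict

def fit_transitions(dialogues):
    """dialogues: lists of performatives, e.g. ['cfp', 'propose', ...]."""
    counts = defaultdict(Counter)
    for msgs in dialogues:
        for cur, nxt in zip(msgs, msgs[1:]):
            counts[cur][nxt] += 1
    return counts

def predict_next(counts, current):
    """Most frequently observed performative after `current`, if any."""
    if current not in counts:
        return None
    return counts[current].most_common(1)[0][0]

# Toy logs loosely shaped like a contract-net protocol (invented data).
logs = [
    ["cfp", "propose", "accept-proposal", "inform"],
    ["cfp", "propose", "reject-proposal"],
    ["cfp", "refuse"],
]
model = fit_transitions(logs)
print(predict_next(model, "cfp"))      # 'propose'
print(predict_next(model, "propose"))  # 'accept-proposal' (tie, first seen)
```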

