Session details: Volume I: Artificial intelligence and agents, distributed systems, and information systems: Intelligent robotics and multi-agent systems track

Author(s):  
Rui P. Rocha ◽  
Christopher D. Kiekintveld ◽  
M. Ani Hsieh

Author(s):  
Mehdi Dastani ◽  
Paolo Torroni ◽  
Neil Yorke-Smith

Abstract
The concept of a norm is found widely across fields including artificial intelligence, biology, computer security, cultural studies, economics, law, organizational behaviour and psychology. The concept is studied with different terminology and perspectives, including individual, social, legal and philosophical. If a norm is an expected behaviour in a social setting, then this article considers how it can be determined whether an individual is adhering to this expected behaviour. We call this process monitoring, and again it is a concept known by different terminology in different fields. Monitoring of norms is foundational for processes of accountability, enforcement, regulation and sanctioning. Starting with a broad focus and narrowing to the multi-agent systems literature, this survey addresses four key questions: what is monitoring, what is monitored, who does the monitoring, and how monitoring is accomplished.
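The survey's core idea, that a norm is an expected behaviour and a monitor checks observations against it, can be sketched in a few lines. This is a minimal illustrative sketch, not a construction from the surveyed literature; all names (`Norm`, `monitor`, the example norm) are hypothetical.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Norm:
    """A norm as a named compliance predicate over observed actions (illustrative)."""
    name: str
    complies: Callable[[str], bool]

def monitor(action_log, norms):
    """Return (agent, norm, action) triples for every observed violation."""
    violations = []
    for agent, action in action_log:
        for norm in norms:
            if not norm.complies(action):
                violations.append((agent, norm.name, action))
    return violations

# Example: a prohibition norm -- agents are expected not to "defect".
no_defect = Norm("no-defection", lambda a: a != "defect")
log = [("alice", "cooperate"), ("bob", "defect")]
print(monitor(log, [no_defect]))  # [('bob', 'no-defection', 'defect')]
```

A violation list like this is exactly the input that downstream processes such as sanctioning or enforcement would consume, which is why the survey calls monitoring foundational for them.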



Author(s):  
L. Shan ◽  
R. Shen ◽  
J. Wang

Based on the meta-model of information systems presented in Zhu (2006), this chapter presents a caste-centric agent-oriented methodology for the evolutionary and collaborative development of information systems. It consists of a process model, called the growth model, and a set of agent-oriented languages and software tools that support the various development activities in the process. At the requirements analysis phase, a modelling language and environment called CAMLE supports the analysis and design of information systems. The semi-formal models in CAMLE can be automatically transformed into formal specifications in SLABS, a formal specification language designed for the formal engineering of multi-agent systems. At the implementation phase, agent-oriented information systems are implemented directly in an agent-oriented programming language called SLABSp. The features of agent-oriented information systems in general, and of our methodology in particular, are illustrated by an example that runs throughout the chapter.



AI Magazine ◽  
2018 ◽  
Vol 39 (4) ◽  
pp. 29-35
Author(s):  
Christopher Amato ◽  
Haitham Bou Ammar ◽  
Elizabeth Churchill ◽  
Erez Karpas ◽  
Takashi Kido ◽  
...  

The Association for the Advancement of Artificial Intelligence, in cooperation with Stanford University’s Department of Computer Science, presented the 2018 Spring Symposium Series, held Monday through Wednesday, March 26–28, 2018, on the campus of Stanford University. The seven symposia held were AI and Society: Ethics, Safety and Trustworthiness in Intelligent Agents; Artificial Intelligence for the Internet of Everything; Beyond Machine Intelligence: Understanding Cognitive Bias and Humanity for Well-Being AI; Data Efficient Reinforcement Learning; The Design of the User Experience for Artificial Intelligence (the UX of AI); Integrated Representation, Reasoning, and Learning in Robotics; Learning, Inference, and Control of Multi-Agent Systems. This report, compiled from organizers of the symposia, summarizes the research of five of the symposia that took place.



2019 ◽  
Vol 3 (2) ◽  
pp. 21 ◽  
Author(s):  
David Manheim

An important challenge for safety in machine learning and artificial intelligence systems is a set of related failures involving specification gaming, reward hacking, fragility to distributional shifts, and Goodhart’s or Campbell’s law. This paper presents closely related additional failure modes that arise from interactions within multi-agent systems. These multi-agent failure modes are more complex, more problematic, and less well understood than the single-agent case, and they are also already occurring, largely unnoticed. After motivating the discussion with examples from poker-playing artificial intelligence (AI), the paper explains why these failure modes are in some senses unavoidable. It then categorizes the failure modes, provides definitions, and cites examples for each: accidental steering, coordination failures, adversarial misalignment, input spoofing and filtering, and goal co-option or direct hacking. Finally, the paper discusses how the extant literature on multi-agent AI fails to address these failure modes, and identifies work that may be useful for mitigating them.
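The Goodhart-style failure the abstract starts from, and its multi-agent amplification, can be shown with a toy example. This is a hedged sketch of the general phenomenon, not an example from the paper: the action table, the proxy values, and the selection rule are all invented for illustration.

```python
# Each action has a true value and a proxy (measured) value.
# The "hack" action games the measurement: high proxy, low true value.
ACTIONS = {
    "honest": (1.0, 1.0),
    "hack":   (0.1, 2.0),
}
TRUE, PROXY = 0, 1

def best_action(metric):
    """The action a single optimizer picks under a given metric."""
    return max(ACTIONS, key=lambda a: ACTIONS[a][metric])

# Optimizing the true objective picks "honest"...
assert best_action(TRUE) == "honest"
# ...but optimizing the proxy picks the gamed action (Goodhart's law).
assert best_action(PROXY) == "hack"

# Multi-agent amplification: when a selector ranks agents by the proxy,
# the proxy-gaming agent wins, so competition rewards the gaming strategy.
agents = {"A": "honest", "B": "hack"}
winner = max(agents, key=lambda ag: ACTIONS[agents[ag]][PROXY])
assert winner == "B"
```

The last three lines are the multi-agent twist: even if no individual agent intends to game the metric, selection pressure on a proxy tends to promote whichever agent does.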



2001 ◽  
Vol 16 (3) ◽  
pp. 277-284 ◽  
Author(s):  
EDUARDO ALONSO ◽  
MARK D'INVERNO ◽  
DANIEL KUDENKO ◽  
MICHAEL LUCK ◽  
JASON NOBLE

In recent years, multi-agent systems (MASs) have received increasing attention in the artificial intelligence community. Research in multi-agent systems involves the investigation of autonomous, rational and flexible behaviour of entities such as software programs or robots, and of their interaction and coordination, in such diverse areas as robotics (Kitano et al., 1997), information retrieval and management (Klusch, 1999), and simulation (Gilbert & Conte, 1995). When designing agent systems, it is impossible to foresee all the potential situations an agent may encounter and to specify optimal agent behaviour in advance. Agents therefore have to learn from, and adapt to, their environment, especially in a multi-agent setting.
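The kind of learning-in-a-multi-agent-setting the editorial points to can be sketched with independent Q-learning in a two-player coordination game. This is a minimal sketch under assumed parameters, not an algorithm from the cited works: the game (payoff 1 if both agents pick the same action, else 0) and the learning constants are chosen for illustration.

```python
import random

ACTIONS = [0, 1]
ALPHA, EPS, EPISODES = 0.1, 0.1, 5000  # learning rate, exploration, episodes
random.seed(0)

# One stateless Q-table per agent; each agent learns independently,
# so the other agent is just part of its (non-stationary) environment.
q = [{a: 0.0 for a in ACTIONS} for _ in range(2)]

def choose(q_table):
    if random.random() < EPS:           # explore
        return random.choice(ACTIONS)
    return max(q_table, key=q_table.get)  # exploit

for _ in range(EPISODES):
    a0, a1 = choose(q[0]), choose(q[1])
    reward = 1.0 if a0 == a1 else 0.0   # shared coordination reward
    q[0][a0] += ALPHA * (reward - q[0][a0])
    q[1][a1] += ALPHA * (reward - q[1][a1])

greedy = [max(qi, key=qi.get) for qi in q]  # each agent's learned choice
```

After training, the agents' greedy actions typically coincide: neither agent was told which action to pick, but each adapted to the behaviour of the other, which is the point the editorial makes about learning replacing full advance specification.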


