Dialogue Games in Multi-Agent Systems

2001 ◽  
Vol 22 (3) ◽  
Author(s):  
Peter McBurney ◽  
Simon Parsons

Formal dialogue games have been studied in philosophy since at least the time of Aristotle. Recently they have been applied in various contexts in computer science and artificial intelligence, particularly as the basis for interaction between autonomous software agents. We review these applications and discuss the many open research questions and challenges at this exciting interface between philosophy and computer science.
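
As an illustration of the kind of interaction such games formalize, here is a minimal sketch of a persuasion-style dialogue game in Python. The locution names (claim, why, argue, concede, retract) and the reply rules are simplified assumptions for exposition, not the protocol of any particular system surveyed in the article.

```python
from dataclasses import dataclass

# A minimal persuasion-style dialogue game: two agents exchange locutions,
# and each locution constrains which replies are legal next.
LEGAL_REPLIES = {
    None:      {"claim"},           # the game opens with a claim
    "claim":   {"why", "concede"},  # a claim may be challenged or conceded
    "why":     {"argue", "retract"},# a challenge demands grounds or retraction
    "argue":   {"why", "concede"},  # grounds can be challenged in turn
    "concede": set(),               # conceding ends the dialogue
    "retract": set(),               # so does retracting
}

@dataclass
class Move:
    speaker: str   # "proponent" or "opponent"
    locution: str  # one of the keys above
    content: str   # the proposition or argument advanced

def run_dialogue(moves):
    """Check turn-taking and reply legality; report whether the game closed."""
    last = None
    for i, m in enumerate(moves):
        expected = "proponent" if i % 2 == 0 else "opponent"
        if m.speaker != expected:
            raise ValueError(f"move {i}: out of turn")
        if m.locution not in LEGAL_REPLIES[last]:
            raise ValueError(f"move {i}: {m.locution!r} is not a legal reply to {last!r}")
        last = m.locution
    return "closed" if not LEGAL_REPLIES[last] else "open"

dialogue = [
    Move("proponent", "claim", "p"),
    Move("opponent", "why", "p"),
    Move("proponent", "argue", "q, q -> p"),
    Move("opponent", "concede", "p"),
]
print(run_dialogue(dialogue))  # -> "closed"
```

The protocol here is just a table from each locution to its legal replies; richer dialogue games add commitment stores and win conditions on top of the same skeleton.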

2000 ◽  
Vol 15 (2) ◽  
pp. 197-203 ◽  
Author(s):  
RUTH AYLETT ◽  
KERSTIN DAUTENHAHN ◽  
JIM DORAN ◽  
MICHAEL LUCK ◽  
SCOTT MOSS ◽  
...  

One of the main reasons for the sustained activity and interest in the field of agent-based systems, apart from the obvious recognition of its value as a natural and intuitive way of understanding the world, is its reach into very many different and distinct fields of investigation. Indeed, the notions of agents and multi-agent systems are relevant to fields ranging from economics to robotics: they contribute to the foundations of those fields, are influenced by their ongoing research, and find many domains of application within them. While these various disciplines constitute a rich and diverse environment for agent research, the way in which agent research may in turn have linked them is a much less considered issue. The purpose of this panel was to examine just this concern: the relationships between different areas that have resulted from agent research. Informed by the experience of the participants in the areas of robotics, social simulation, economics, computer science and artificial intelligence, the discussion was lively and sometimes heated.


Author(s):  
Mehdi Dastani ◽  
Paolo Torroni ◽  
Neil Yorke-Smith

The concept of a norm is found widely across fields including artificial intelligence, biology, computer security, cultural studies, economics, law, organizational behaviour and psychology. The concept is studied with different terminology and perspectives, including individual, social, legal and philosophical. If a norm is an expected behaviour in a social setting, then this article considers how it can be determined whether an individual is adhering to this expected behaviour. We call this process monitoring, and again it is a concept known by different terminology in different fields. Monitoring of norms is foundational for processes of accountability, enforcement, regulation and sanctioning. Starting with a broad focus and narrowing to the multi-agent systems literature, this survey addresses four key questions: what is monitoring, what is monitored, who does the monitoring, and how the monitoring is accomplished.
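
As a concrete, if simplified, illustration of what a monitor does, the sketch below implements a hypothetical monitor that scans an event trace and reports violations of a prohibition norm. The norm representation, event format, and full-observability assumption are ours, for exposition only.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Event:
    agent: str
    action: str
    time: int

@dataclass
class Prohibition:
    """A norm of the form: no agent may perform `action` before `deadline`."""
    action: str
    deadline: int

def monitor(events, norm):
    """Return the violations a monitor with full observability would report."""
    return [e for e in events if e.action == norm.action and e.time < norm.deadline]

trace = [
    Event("a1", "ship_goods", 3),
    Event("a2", "ship_goods", 9),
    Event("a1", "pay", 5),
]
norm = Prohibition(action="ship_goods", deadline=7)
for v in monitor(trace, norm):
    print(f"violation: {v.agent} performed {v.action} at t={v.time}")
```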


2009 ◽  
pp. 2843-2864 ◽  
Author(s):  
Kostas Kolomvatsos ◽  
Stathes Hadjiefthymiades

The field of multi-agent systems (MAS) has been an active area of research for many years owing to the importance of agents to many disciplines of computer science. MAS are open and dynamic systems in which a number of autonomous software components, called agents, communicate and cooperate in order to achieve their goals. In such systems, trust plays an important role: there must be a way for an agent to make sure that it can trust another entity that is a potential partner. Without trust, agents cannot cooperate effectively, and without cooperation they cannot fulfill their goals. Trust is often based on reputation, which serves as an indication that we may trust someone. This book chapter investigates this important research area. We discuss the main issues concerning reputation and trust in MAS, present research efforts, and give formalizations useful for understanding the two concepts.
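
To make the relationship between the two concepts concrete, here is a minimal sketch of one common scheme from the literature: an agent derives a reputation score by aggregating ratings from past interactions, discounting older ones, and trusts a partner only above a threshold. The decay factor, neutral prior, and threshold are illustrative assumptions, not formalizations from the chapter.

```python
def reputation(ratings, decay=0.9):
    """Time-discounted average of past interaction ratings in [0, 1].

    `ratings` is ordered oldest to newest; recent experiences weigh more.
    """
    if not ratings:
        return 0.5  # no evidence: assume a neutral prior
    n = len(ratings)
    weights = [decay ** (n - 1 - i) for i in range(n)]
    return sum(w * r for w, r in zip(weights, ratings)) / sum(weights)

def trusts(ratings, threshold=0.7):
    """An agent cooperates only with partners whose reputation clears the threshold."""
    return reputation(ratings) >= threshold

history = {"seller_a": [1.0, 0.8, 0.9], "seller_b": [0.9, 0.4, 0.2]}
for partner, ratings in history.items():
    print(partner, round(reputation(ratings), 3), trusts(ratings))
```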


AI Magazine ◽  
2018 ◽  
Vol 39 (4) ◽  
pp. 29-35
Author(s):  
Christopher Amato ◽  
Haitham Bou Ammar ◽  
Elizabeth Churchill ◽  
Erez Karpas ◽  
Takashi Kido ◽  
...  

The Association for the Advancement of Artificial Intelligence, in cooperation with Stanford University’s Department of Computer Science, presented the 2018 Spring Symposium Series, held Monday through Wednesday, March 26–28, 2018, on the campus of Stanford University. The seven symposia held were AI and Society: Ethics, Safety and Trustworthiness in Intelligent Agents; Artificial Intelligence for the Internet of Everything; Beyond Machine Intelligence: Understanding Cognitive Bias and Humanity for Well-Being AI; Data Efficient Reinforcement Learning; The Design of the User Experience for Artificial Intelligence (the UX of AI); Integrated Representation, Reasoning, and Learning in Robotics; and Learning, Inference, and Control of Multi-Agent Systems. This report, compiled from material provided by the organizers, summarizes the research of five of the seven symposia.


2019 ◽  
Vol 3 (2) ◽  
pp. 21 ◽  
Author(s):  
David Manheim

An important challenge for safety in machine learning and artificial intelligence systems is a set of related failures involving specification gaming, reward hacking, fragility to distributional shifts, and Goodhart’s or Campbell’s law. This paper presents closely related additional failure modes that arise from interactions within multi-agent systems. These multi-agent failure modes are more complex, more problematic, and less well understood than the single-agent case, and they are also already occurring, largely unnoticed. After motivating the discussion with examples from poker-playing artificial intelligence (AI), the paper explains why these failure modes are in some senses unavoidable. Following this, the paper categorizes the failure modes, provides definitions, and cites examples for each of the modes: accidental steering, coordination failures, adversarial misalignment, input spoofing and filtering, and goal co-option or direct hacking. The paper then discusses how the extant literature on multi-agent AI fails to address these failure modes, and identifies work that may be useful for their mitigation.
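
As a toy illustration of the single-agent baseline that these multi-agent modes generalize, the sketch below shows a Goodhart-style gap: an agent that optimizes a measurable proxy ends up contributing nothing to the true objective. The environment and payoff numbers are invented for exposition.

```python
# Toy Goodhart setup: the designer can only measure a proxy score, which
# correlates with true value at low optimization pressure but diverges
# once the agent pushes hard on it (specification gaming).
def true_value(effort_real, effort_gaming):
    return effort_real

def proxy_score(effort_real, effort_gaming):
    return effort_real + 2.0 * effort_gaming  # gaming pays off on the metric

def best_response(budget=10):
    # The agent allocates a fixed effort budget to maximize the *proxy*.
    return max(
        ((r, budget - r) for r in range(budget + 1)),
        key=lambda split: proxy_score(*split),
    )

real, gaming = best_response()
print(f"allocation: real={real}, gaming={gaming}")
print(f"proxy score: {proxy_score(real, gaming)}, true value: {true_value(real, gaming)}")
```

The agent puts the entire budget into gaming: the proxy score is maximal while the true value is zero. The paper's multi-agent modes arise when several such optimizers interact with, spoof, or steer one another.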


2001 ◽  
Vol 16 (3) ◽  
pp. 277-284 ◽  
Author(s):  
EDUARDO ALONSO ◽  
MARK D'INVERNO ◽  
DANIEL KUDENKO ◽  
MICHAEL LUCK ◽  
JASON NOBLE

In recent years, multi-agent systems (MASs) have received increasing attention in the artificial intelligence community. Research in multi-agent systems involves the investigation of autonomous, rational and flexible behaviour of entities such as software programs or robots, and of their interaction and coordination in areas as diverse as robotics (Kitano et al., 1997), information retrieval and management (Klusch, 1999), and simulation (Gilbert & Conte, 1995). When designing agent systems, it is impossible to foresee all the potential situations an agent may encounter, and hence impossible to specify optimal agent behaviour in advance. Agents therefore have to learn from, and adapt to, their environment, especially in a multi-agent setting.
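
To ground the point, here is a minimal sketch of two independent Q-learning agents adapting to each other in a repeated two-action coordination game; the payoff matrix and learning parameters are illustrative choices, not taken from the cited works.

```python
import random

# Payoffs for a repeated coordination game: both agents prefer to match actions.
PAYOFF = {(0, 0): (1.0, 1.0), (1, 1): (1.0, 1.0),
          (0, 1): (0.0, 0.0), (1, 0): (0.0, 0.0)}

class QAgent:
    def __init__(self, alpha=0.1, epsilon=0.1):
        self.q = [0.0, 0.0]  # one value per action (stateless repeated game)
        self.alpha, self.epsilon = alpha, epsilon

    def act(self):
        if random.random() < self.epsilon:               # explore
            return random.randrange(2)
        return max((0, 1), key=lambda a: self.q[a])      # exploit

    def learn(self, action, reward):
        self.q[action] += self.alpha * (reward - self.q[action])

random.seed(0)
a, b = QAgent(), QAgent()
for _ in range(2000):
    ai, bi = a.act(), b.act()
    ra, rb = PAYOFF[(ai, bi)]
    a.learn(ai, ra)
    b.learn(bi, rb)

print("agent A Q-values:", [round(v, 2) for v in a.q])
print("agent B Q-values:", [round(v, 2) for v in b.q])
```

Because each agent's environment includes the other, still-learning agent, the target each is learning keeps moving; this non-stationarity is precisely what distinguishes learning in a multi-agent setting from the single-agent case.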


Author(s):  
Roman Dushkin ◽  
Mikhail Grigor'evich Andronov

This article examines the application of certain techniques from the theory of multi-agent systems to unmanned traffic management, for combating so-called generative adversarial attacks on the computer vision systems used in such vehicles. The article gives examples of generative adversarial attacks on various types of neural networks, describes the problems that arise when using computer vision, and proposes possible solutions to these problems. The research methodology draws on the theory of multi-agent systems as applied to automobile transport, which suggests using so-called V2X interaction: a constant exchange of information between the vehicle and the various actors involved in road traffic, namely a central control system, other vehicles, roadside infrastructure and pedestrians. The authors’ particular contribution lies in applying the theory of multi-agent systems to traffic management by treating these actors as agents with diverse roles. The novelty consists in employing one method of artificial intelligence to solve problems created by the use of another (image recognition in computer vision). The relevance of the study rests on its detailed coverage of organizing unmanned traffic on test grounds and public roads.
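
One way to read the proposed multi-agent defence is that a vehicle need not rely on its own, possibly fooled, classifier alone: it can fuse local perception with reports broadcast over V2X by other agents that observe the same scene. The sketch below shows such fusion as a confidence-weighted vote; the message format, weights, and agent names are assumptions for illustration, not the authors' protocol.

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Report:
    sender: str       # "self", another vehicle, or roadside infrastructure
    label: str        # what the sender's perception system saw
    confidence: float

def fuse(reports):
    """Confidence-weighted vote over labels reported for the same object.

    A patch-based adversarial attack that fools one camera is unlikely to
    fool the viewpoints of every agent at once, so the ensemble can outvote it.
    """
    scores = defaultdict(float)
    for r in reports:
        scores[r.label] += r.confidence
    return max(scores, key=scores.get)

reports = [
    Report("self", "speed_limit_60", 0.9),      # local camera, fooled by a sticker
    Report("vehicle_2", "stop_sign", 0.8),      # V2X report from another vehicle
    Report("roadside_unit", "stop_sign", 0.95), # V2X report from infrastructure
]
print(fuse(reports))  # -> "stop_sign"
```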


2021 ◽  
Author(s):  
Qin Yang

Distributed artificial intelligence (DAI) studies artificial intelligence entities that work together to reason, plan, solve problems, organize behaviors and strategies, make collective decisions and learn. This Ph.D. research proposes a principled multi-agent systems (MAS) cooperation framework, the Self-Adaptive Swarm System (SASS), to bridge the fourth-level automation gap between perception, communication, planning, execution, decision-making, and learning.

