A Framework for Multi-Agent Systems with Multi-Modal User Interfaces in Distributed Computing Environments

Author(s): Sankyu Park, Key-Sun Choi, K. H. (Kane) Kim

In current multi-agent systems, the user typically interacts with a single agent at a time through relatively inflexible and only modestly intelligent interfaces. As a consequence, these systems restrict users to simplistic requests and suffer from problems such as the low-level nature of the system services offered to users, the weak reusability of agents, and the weak extensibility of the systems. In this paper, a framework for multi-agent systems called the Open Agent Architecture (OAA), which mitigates these problems, is discussed. The OAA is designed to handle complex requests that involve multiple agents. In some complex requests from users, the components of the request do not directly correspond to the capabilities of the various application agents; the system is therefore required to translate the user's model of the task into the system's model before apportioning subtasks to the agents. To maximize users' efficiency in generating this type of complex request, the OAA offers an intelligent multi-modal user interface agent that supports a natural language interface with a mix of spoken language, handwriting, and gesture. The effectiveness of the OAA environment, including the intelligent distributed multi-modal interface, has been observed in our development of several practical multi-agent systems.
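The translation from the user's task model to agent-level subtasks can be sketched as a capability-matching dispatcher. This is an illustrative reconstruction only, not the actual OAA implementation; the `Facilitator` class, agent names, and capability strings are all hypothetical.

```python
# Illustrative sketch: routing the subtasks of a complex user request to
# agents by matching each subtask against registered agent capabilities.
# Hypothetical names throughout; not the actual OAA implementation.

class Facilitator:
    def __init__(self):
        self.registry = {}  # capability name -> agent name

    def register(self, agent, capabilities):
        # An agent advertises the capabilities it can handle.
        for cap in capabilities:
            self.registry[cap] = agent

    def dispatch(self, subtasks):
        # Apportion each subtask to the agent advertising that capability.
        plan = {}
        for task in subtasks:
            agent = self.registry.get(task)
            if agent is None:
                raise LookupError(f"no agent offers capability: {task}")
            plan[task] = agent
        return plan

fac = Facilitator()
fac.register("calendar-agent", ["find_free_slot", "book_meeting"])
fac.register("mail-agent", ["send_invitation"])
plan = fac.dispatch(["find_free_slot", "book_meeting", "send_invitation"])
```

A multi-modal front end would sit above such a dispatcher, producing the subtask list from spoken, handwritten, or gestural input.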

2006, Vol. 15 (02), pp. 251-285
Author(s): Virgil Andronache, Matthias Scheutz

In this paper we present the agent architecture development environment ADE, intended for the design, implementation, and testing of distributed agent architectures. After a short review of architecture development tools, we discuss ADE's unique features, which place it at the intersection of multi-agent systems and development kits for single-agent architectures. A detailed discussion of the general properties of ADE, its implementation philosophy, and its user interface is followed by examples from virtual and robotic domains that illustrate how ADE can be used for designing, implementing, testing, and running agent architectures.


Author(s): Chengzhi Yuan

This paper addresses the problem of leader-following consensus control of general linear multi-agent systems (MASs) with diverse time-varying input delays under the integral quadratic constraint (IQC) framework. A novel exact-memory distributed output-feedback delay controller structure is proposed, which utilizes not only relative estimation state information from neighboring agents but also local real-time information of time delays and the associated dynamic IQC-induced states from the agent itself for feedback control. As a result, the distributed consensus problem can be decomposed into H∞ stabilization subproblems for a set of independent linear fractional transformation (LFT) systems, whose dimensions are equal to that of a single agent plant plus the associated local IQC dynamics. New delay control synthesis conditions for each subproblem are fully characterized as linear matrix inequalities (LMIs). A numerical example is used to demonstrate the proposed approach.
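The leader-following setting described above can be sketched in standard notation (an illustrative formulation, not necessarily the paper's exact model): a leader agent 0 with autonomous dynamics, followers subject to diverse time-varying input delays, and consensus as asymptotic tracking of the leader.

```latex
% Illustrative leader-following consensus setup (standard notation):
\dot{x}_0(t) = A\,x_0(t), \qquad
\dot{x}_i(t) = A\,x_i(t) + B\,u_i\bigl(t - \tau_i(t)\bigr),
\quad i = 1, \dots, N,
% with diverse time-varying input delays \tau_i(t);
% leader-following consensus requires
\lim_{t \to \infty} \bigl\lVert x_i(t) - x_0(t) \bigr\rVert = 0
\quad \text{for all } i.
```

In the IQC framework, each delay operator is covered by a dynamic multiplier, which is what introduces the local IQC-induced states used by the proposed exact-memory controller.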


2019, Vol. 3 (2), p. 21
Author(s): David Manheim

An important challenge for safety in machine learning and artificial intelligence systems is a set of related failures involving specification gaming, reward hacking, fragility to distributional shifts, and Goodhart’s or Campbell’s law. This paper presents additional failure modes for interactions within multi-agent systems that are closely related. These multi-agent failure modes are more complex, more problematic, and less well understood than the single-agent case, and are also already occurring, largely unnoticed. After motivating the discussion with examples from poker-playing artificial intelligence (AI), the paper explains why these failure modes are in some senses unavoidable. Following this, the paper categorizes failure modes, provides definitions, and cites examples for each of the modes: accidental steering, coordination failures, adversarial misalignment, input spoofing and filtering, and goal co-option or direct hacking. The paper then discusses how extant literature on multi-agent AI fails to address these failure modes, and identifies work which may be useful for the mitigation of these failure modes.


Author(s): F. M. T. Brazier, C. M. Jonker, J. Treur, N. J. E. Wijngaards

This paper addresses the evolution of automated systems, in particular the evolution of automated agents based on agent deliberation. Evolution is not merely a material process; it requires interaction within and between individuals, their environments, and societies of agents. An architecture is presented for an individual agent capable of (1) deliberating about the creation of new agents, and (2) creating a new agent at run-time on the basis of this deliberation. The agent architecture is based on an existing generic agent model and includes explicit formal conceptual representations of both the design structures of agents and the (behavioural) properties of agents. The deliberation process is based on an existing generic reasoning model of design. The architecture has been designed using the compositional development method DESIRE and has been tested in a prototype implementation.


2008, Vol. 3 (1), pp. 1-24
Author(s): Vincent Hilaire, Abder Koukam, Sebastian Rodriguez

Author(s): Yong Liu, Yujing Hu, Yang Gao, Yingfeng Chen, Changjie Fan

Many real-world problems, such as robot control and soccer games, are naturally modeled as sparse-interaction multi-agent systems. Reusing single-agent knowledge in multi-agent systems with sparse interactions can greatly accelerate the multi-agent learning process. Previous works rely on the bisimulation metric to define Markov decision process (MDP) similarity for controlling knowledge transfer. However, the bisimulation metric is costly to compute and is not suitable for problems with high-dimensional state spaces. In this work, we propose more scalable transfer learning methods based on a novel MDP similarity concept. We start by defining MDP similarity based on the N-step return (NSR) values of an MDP. We then propose two knowledge transfer methods based on deep neural networks, called direct value function transfer and NSR-based value function transfer. We conduct experiments in an image-based grid world, the multi-agent particle environment (MPE), and the Ms. Pac-Man game. The results indicate that the proposed methods can significantly accelerate multi-agent reinforcement learning while also achieving better asymptotic performance.
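The N-step return underlying this MDP similarity can be sketched as follows. This is a minimal illustration under assumed definitions: the deep-network transfer and state sampling are omitted, and `nsr_distance` is a hypothetical aggregate, not the paper's exact measure.

```python
def n_step_return(rewards, gamma, n):
    """Discounted sum of the next n rewards: sum_{k<n} gamma^k * r_k."""
    return sum(gamma ** k * r for k, r in enumerate(rewards[:n]))

def nsr_distance(nsr_a, nsr_b):
    """Mean absolute gap between the NSR profiles of two MDPs, evaluated
    over a shared set of probe states (smaller means more similar)."""
    return sum(abs(a - b) for a, b in zip(nsr_a, nsr_b)) / len(nsr_a)

# Two reward trajectories that agree over the first 3 steps yield
# identical 3-step returns, so the NSR distance at this probe is zero
# and the source MDP's value function is a strong transfer candidate.
r1 = [1.0, 0.0, 1.0, 0.0]
r2 = [1.0, 0.0, 1.0, 1.0]
gamma = 0.9
d = nsr_distance([n_step_return(r1, gamma, 3)],
                 [n_step_return(r2, gamma, 3)])
```

Because the NSR needs only short reward rollouts rather than a fixed-point computation over the full state space, it scales to the high-dimensional settings where the bisimulation metric is impractical.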


Author(s): Boldur E. Bărbat, Sorin C. Negulescu

Extending metaphorically the Moisilean idea of “nuanced-reasoning logic” and adapting it to the e-world age of Information Technology (IT), the paper aims to show that new logics, already useful in modern software engineering, are becoming necessary mainly for Multi-Agent Systems (MAS), despite obvious adversities. The first sections are typical of a position paper, defending such logics from an anthropocentric perspective. Through this sieve, Section 4 outlines the features, required by the paradigm of computing as intelligent interaction and based on “nuances of nuanced-reasoning”, that should be reflected by agent logics. To keep the approach credible, Section 5 illustrates how quantifiable synergy can be reached, even in advanced and challenging domains such as stigmergic coordination, by injecting symbolic reasoning into systems based on sub-symbolic “emergent synthesis”. Since the preferred logics for future work are also doxastic, the conclusions are structured in line with the well-known agent architecture: Beliefs, Desires, Intentions.


2019, Vol. 1 (2), pp. 590-610
Author(s): Zohreh Akbari, Rainer Unland

Sequential Decision Making Problems (SDMPs) that can be modeled as Markov Decision Processes can be solved using methods that combine Dynamic Programming (DP) and Reinforcement Learning (RL). Depending on the problem scenario and the available Decision Makers (DMs), such RL algorithms may be designed for single-agent systems or for multi-agent systems, which either consist of agents with individual goals and decision-making capabilities that are influenced by other agents' decisions, or behave as a swarm of agents that collaboratively learn a single objective. Many studies have been conducted in this area; however, when concentrating on the available swarm RL algorithms, one obtains a clear view of the areas that still require attention. Most studies in this area focus on homogeneous swarms, and so far the systems introduced as Heterogeneous Swarms (HetSs) include only very few, i.e., two or three, sub-swarms of homogeneous agents, which either deal with a specific sub-problem of the general problem according to their capabilities, or exhibit different behaviors in order to reduce the risk of bias. This study introduces a novel approach that allows agents which were originally designed to solve different problems, and hence have higher degrees of heterogeneity, to behave as a swarm when addressing identical sub-problems. In fact, the affinity between two agents, which measures their compatibility to work together towards solving a specific sub-problem, is used in designing a Heterogeneous Swarm RL (HetSRL) algorithm that allows HetSs to solve the intended SDMPs.
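The affinity measure can be illustrated as a similarity score over agents' per-sub-problem competence vectors. This is a hypothetical instantiation (cosine similarity over assumed competence scores), not the paper's actual definition.

```python
import math

def affinity(comp_a, comp_b):
    """Cosine similarity of two agents' competence vectors over the
    sub-problems of a task: 1.0 = fully compatible, 0.0 = disjoint."""
    dot = sum(a * b for a, b in zip(comp_a, comp_b))
    norm_a = math.sqrt(sum(a * a for a in comp_a))
    norm_b = math.sqrt(sum(b * b for b in comp_b))
    return dot / (norm_a * norm_b)

# Hypothetical agents scored over sub-problems [navigate, grasp, plan]:
a1 = [1.0, 0.0, 1.0]
a2 = [1.0, 0.0, 1.0]
a3 = [0.0, 1.0, 0.0]
high = affinity(a1, a2)  # identical competences
low = affinity(a1, a3)   # disjoint competences
```

Agents whose pairwise affinity on a sub-problem exceeds a threshold would then be grouped to learn that sub-problem as a temporary homogeneous swarm.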

