Consensus of Julia Sets

2022 ◽  
Vol 6 (1) ◽  
pp. 43
Author(s):  
Weihua Sun ◽  
Shutang Liu

The Julia set is one of the most important sets in fractal theory. Previous studies of Julia sets have mainly focused on the properties and graph of a single Julia set. In this paper, motivated by the consensus of multi-agent systems, the consensus of Julia sets is introduced, and two types are proposed: consensus with a leader and consensus without a leader. Controllers are then designed to achieve the consensus of Julia sets, which allows multiple different Julia sets to be coupled. In practical applications, the consensus of Julia sets provides a tool for studying the consensus of group behaviors depicted by Julia sets. Simulations illustrate the efficacy of these methods.
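The abstract does not reproduce the coupled (consensus) dynamics, but the underlying object it builds on is the classical Julia set of f(z) = z² + c. A minimal escape-time membership sketch, with an illustrative parameter c and iteration bounds not taken from the paper:

```python
# Escape-time test for membership in the filled Julia set of f(z) = z^2 + c.
# The parameter c, iteration cap, and escape radius are illustrative only;
# the paper's controlled/coupled systems are not reproduced here.

def escape_time(z, c, max_iter=100, radius=2.0):
    """Return the iteration at which |z| exceeds `radius`, or `max_iter`
    if the orbit stays bounded (approximating membership in the
    filled Julia set of f(z) = z^2 + c)."""
    for n in range(max_iter):
        if abs(z) > radius:
            return n
        z = z * z + c
    return max_iter

c = -0.1 + 0.1j
bounded = escape_time(0j, c)        # orbit stays bounded -> max_iter
escaped = escape_time(2 + 2j, c)    # |z| > 2 immediately -> 0
```

Points whose escape time reaches the iteration cap approximate the filled Julia set; coloring the plane by escape time gives the familiar fractal graphs the abstract refers to.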

2019 ◽  
Vol 07 (01) ◽  
pp. 55-64 ◽  
Author(s):  
James A. Douthwaite ◽  
Shiyu Zhao ◽  
Lyudmila S. Mihaylova

This paper presents a critical analysis of some of the most promising approaches to geometric collision avoidance in multi-agent systems, namely the velocity obstacle (VO), reciprocal velocity obstacle (RVO), hybrid-reciprocal velocity obstacle (HRVO) and optimal reciprocal collision avoidance (ORCA) approaches. Each approach is evaluated with respect to increasing agent populations and variable sensing assumptions. In implementing the localized avoidance problem, the authors note a problem of symmetry not considered in the literature. An intensive 1000-cycle Monte Carlo analysis assesses the performance of the selected algorithms under the presented conditions. The ORCA method yields the most scalable computation times and collision likelihood in the presented cases, while the HRVO method is superior to the other methods in dealing with obstacle trajectory uncertainty. The respective features and limitations of each algorithm are discussed and illustrated through examples.
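The velocity obstacle construction that the VO/RVO/HRVO/ORCA family shares can be sketched with a single geometric test: a relative velocity is "in the obstacle" if the corresponding ray intersects the combined-radius disc around the other agent. A minimal 2-D sketch, not taken from the paper's implementations:

```python
import math

def in_velocity_obstacle(p_rel, v_rel, r_comb):
    """Return True if relative velocity `v_rel` lies inside the velocity
    obstacle induced by a disc of radius `r_comb` centred at `p_rel`,
    i.e. the ray t * v_rel (t > 0) intersects the disc (collision course)."""
    px, py = p_rel
    vx, vy = v_rel
    # Solve |t*v - p|^2 = r^2 for t: a quadratic a*t^2 + b*t + c = 0.
    a = vx * vx + vy * vy
    if a == 0.0:
        return math.hypot(px, py) <= r_comb  # no relative motion
    b = -2.0 * (vx * px + vy * py)
    c = px * px + py * py - r_comb * r_comb
    disc = b * b - 4.0 * a * c
    if disc < 0.0:
        return False  # the ray misses the disc entirely
    t_far = (-b + math.sqrt(disc)) / (2.0 * a)
    return t_far > 0.0  # intersection at some forward time

# Head-on approach is flagged; perpendicular or receding motion is not.
head_on = in_velocity_obstacle((2.0, 0.0), (1.0, 0.0), 0.5)
```

RVO, HRVO and ORCA differ in how the agents then *share* the responsibility of choosing a velocity outside this set, which is where the symmetry problem the authors raise becomes relevant.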


Author(s):  
FRANCISCO J. MARTÍN ◽  
ENRIC PLAZA ◽  
JOSEP LLUÍS ARCOS

This article addresses an extension of knowledge modeling approaches to multi-agent systems, where communication and coordination are necessary. We propose the notion of a competent agent and define the basic capabilities these agents require for the extension to be effective. An agent is competent when it is capable of reasoning about its own competence and that of the other agents with which it cooperates in a given domain. In our framework, an agent has competence models of itself and of its acquaintances, from which it can decide, for a specific problem to be solved, the type of cooperative activity to request and from which agent. In this paper we focus on societies of peer agents, i.e. agents that can solve the same type of task but may have different degrees of competence over specific problem ranges.
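The decision step described above (consulting competence models of oneself and one's acquaintances to pick a cooperator) can be sketched in a few lines. The agent names and scoring functions below are invented for illustration and are not the paper's formalism:

```python
# Sketch of competence-based cooperation: an agent consults competence
# models (here, problem -> estimated-competence functions) and asks the
# acquaintance whose model scores highest on the problem at hand.

def choose_cooperator(problem, competence_models):
    """Return the agent whose competence model scores highest on `problem`."""
    return max(competence_models, key=lambda agent: competence_models[agent](problem))

models = {
    "self":    lambda p: 0.4,  # the agent's model of its own competence
    "agent_b": lambda p: 0.9 if p["domain"] == "planning" else 0.2,
    "agent_c": lambda p: 0.6,
}
best = choose_cooperator({"domain": "planning"}, models)  # -> "agent_b"
```

When "self" scores highest, the agent would solve the problem alone; otherwise it requests cooperation from the best-scoring peer.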


1996 ◽  
Vol 4 ◽  
pp. 477-507 ◽  
Author(s):  
R. I. Brafman ◽  
M. Tennenholtz

Motivated by the control theoretic distinction between controllable and uncontrollable events, we distinguish between two types of agents within a multi-agent system: controllable agents, which are directly controlled by the system's designer, and uncontrollable agents, which are not under the designer's direct control. We refer to such systems as partially controlled multi-agent systems, and we investigate how one might influence the behavior of the uncontrollable agents through appropriate design of the controllable agents. In particular, we wish to understand which problems are naturally described in these terms, what methods can be applied to influence the uncontrollable agents, the effectiveness of such methods, and whether similar methods work across different domains. Using a game-theoretic framework, this paper studies the design of partially controlled multi-agent systems in two contexts: in one context, the uncontrollable agents are expected utility maximizers, while in the other they are reinforcement learners. We suggest different techniques for controlling agents' behavior in each domain, assess their success, and examine their relationship.
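The reinforcement-learning context can be illustrated with a toy repeated game: a designer-controlled agent plays a fixed policy, and the uncontrollable agent is a stateless Q-learner whose behavior that policy is meant to shape. The payoff matrix and tit-for-tat teacher below are invented for illustration, not the paper's constructions:

```python
import random

random.seed(0)

# Payoff to the learner for (learner_action, controlled_action);
# values are an illustrative prisoner's-dilemma-style matrix.
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def teacher(last_learner_action):
    """Controlled agent: a tit-for-tat style policy chosen by the
    designer to shape the learner's incentives."""
    return last_learner_action or "C"

# Uncontrollable agent: epsilon-greedy stateless Q-learning over {C, D}.
q = {"C": 0.0, "D": 0.0}
alpha, eps = 0.1, 0.1
last = None
for step in range(5000):
    a = random.choice(["C", "D"]) if random.random() < eps else max(q, key=q.get)
    r = PAYOFF[(a, teacher(last))]
    q[a] += alpha * (r - q[a])  # exponential moving average of rewards
    last = a
```

The design question the paper studies is precisely which teacher policies steer such learners toward the designer's preferred outcomes, and how effective they are.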


Author(s):  
Julian Padget ◽  
Marina De Vos ◽  
Charlie Ann Page

Normative capabilities in multi-agent systems (MAS) can be represented within agents, separately as institutions, or as a blend of the two. This paper addresses how to extend the principles of open MAS to the provision of normative reasoning capabilities, which are currently either embedded in existing MAS platforms (tightly coupled and inaccessible) or not present. We use a resource-oriented architecture (ROA) pattern, which we call deontic sensors, to make normative reasoning part of an open MAS architecture. The pattern specifies how to loosely couple MAS and normative frameworks, such that each is agnostic of the other, while augmenting the brute facts that an agent perceives with institutional facts that capture each institution's interpretation of an agent's action. In consequence, a MAS without normative capabilities can acquire them, and an embedded normative framework can be de-coupled and opened to other MAS platforms. More importantly, the deontic sensor pattern allows normative reasoning to be published as services, opening routes to certification and re-use, creation of (formalized) trust, and non-specialist access to "on demand" normative reasoning.
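The core mapping the pattern describes (brute facts in, institutional facts out, per institution) can be sketched as a simple count-as lookup. The fact and norm names below are invented for illustration and are not the paper's service interface:

```python
# Minimal sketch of the "deontic sensor" idea: a service that augments
# the brute facts an agent perceives with the institutional facts each
# institution's norms attach to them (count-as rules).

def deontic_sensor(brute_facts, norms):
    """Return the brute facts extended with the institutional facts that
    `norms` (event -> list of institutional facts) associates with them."""
    institutional = []
    for event in brute_facts:
        institutional.extend(norms.get(event, []))
    return brute_facts + institutional

norms = {"agent_a_sends_bid": ["agent_a_obliged_to_pay_if_winning"]}
facts = deontic_sensor(["agent_a_sends_bid"], norms)
```

Because the mapping lives behind a service boundary rather than inside the agent platform, the MAS and the normative framework can each be swapped out independently, which is the loose coupling the abstract emphasizes.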

