Nonlinear Control for Multi-agent Formations with Delays in Noisy Environments

2014 ◽ Vol 40 (12) ◽ pp. 2959-2967 ◽ Author(s): Xiao-Qing LU, Yao-Nan WANG, Jian-Xu MAO
Entropy ◽ 2021 ◽ Vol 23 (9) ◽ pp. 1133 ◽ Author(s): Shanzhi Gu, Mingyang Geng, Long Lan

The aim of multi-agent reinforcement learning systems is to provide interacting agents with the ability to collaboratively learn and adapt to the behavior of other agents. Typically, an agent receives private observations that provide only a partial view of the true state of the environment. In realistic settings, however, a harsh environment might cause one or more agents to exhibit arbitrarily faulty or malicious behavior, which may be enough to make the current coordination mechanisms fail. In this paper, we study a practical scenario for multi-agent reinforcement learning systems, considering the security issues that arise in the presence of agents with arbitrarily faulty or malicious behavior. The previous state-of-the-art work on extremely noisy environments was designed under the assumption that the noise intensity of the environment is known in advance. When the noise intensity changes, however, the existing method must adjust its model configuration to learn in the new environment, which limits practical applications. To overcome these difficulties, we present an Attention-based Fault-Tolerant (FT-Attn) model, which selects not only correct but also relevant information for each agent at every time step in noisy environments. The multi-head attention mechanism enables the agents to learn effective communication policies through experience, concurrently with their action policies. Empirical results show that FT-Attn outperforms previous state-of-the-art methods in several extremely noisy environments, in both cooperative and competitive scenarios, coming much closer to the upper-bound performance. Furthermore, FT-Attn has a more general fault-tolerance ability and does not rely on prior knowledge of the noise intensity of the environment.
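The core idea of attention-based message selection can be illustrated with a minimal single-head sketch (FT-Attn itself uses learned multi-head attention; the names, dimensions, and toy data below are our own illustrative assumptions, not the authors' implementation). Each agent forms a query, scores the keys attached to incoming messages, and aggregates the message values with softmax weights, so a faulty neighbor whose key does not match the query receives little weight:

```python
import math

def softmax(scores):
    # Numerically stable softmax over a list of attention scores.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attention_read(query, keys, values):
    # Scaled dot-product attention: weight each incoming message
    # (a "value") by how well its key matches the reading agent's query.
    scale = math.sqrt(len(query))
    scores = [sum(q * k for q, k in zip(query, key)) / scale for key in keys]
    weights = softmax(scores)
    dim = len(values[0])
    read = [sum(w * v[j] for w, v in zip(weights, values)) for j in range(dim)]
    return read, weights

# Toy scenario (hypothetical data): one neighbor's key aligns with the
# query (relevant), the other neighbor is "faulty" and broadcasts a
# message whose key does not match.
query = [1.0, 0.0]
keys = [[1.0, 0.0],    # relevant neighbor
        [0.0, 1.0]]    # faulty/noisy neighbor
values = [[5.0], [-100.0]]
read, w = attention_read(query, keys, values)
# w[0] > w[1]: the relevant message dominates the aggregated read-out
```

In the learned setting, the query, key, and value projections are trained end-to-end with the action policy, and running several such heads in parallel lets an agent attend to different neighbors for different purposes; no prior knowledge of the noise intensity enters the mechanism.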


2016 ◽ Vol 04 (01) ◽ pp. 75-81 ◽ Author(s): Tengfei Liu, Zhong-Ping Jiang

This paper studies the distributed nonlinear control of multi-agent systems with switching topologies for output agreement. A novel cyclic-small-gain approach is proposed. The crucial idea is to introduce a new dynamic mechanism that processes the information exchanged between the agents and transforms the distributed control problem into a stabilization problem for a dynamic network composed of input-to-output stable (IOS) subsystems. The desired distributed controller is designed using IOS and cyclic-small-gain techniques. Interestingly, it is shown that the proposed method can be extended to distributed control design in the presence of disturbances.
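For context, a standard formulation of the cyclic-small-gain condition for networks of IOS subsystems reads as follows (the notation here is a common textbook form, not necessarily the paper's own):

```latex
% Network of IOS subsystems with gain \gamma_{ij} from the output of
% subsystem j to the input of subsystem i. The interconnected network
% is IOS if, along every simple cycle (i_1, i_2, \ldots, i_r, i_1)
% in the gain digraph, the composed gains stay strictly below identity:
\gamma_{i_1 i_2} \circ \gamma_{i_2 i_3} \circ \cdots \circ \gamma_{i_r i_1}(s) < s,
\qquad \forall s > 0.
```

Intuitively, the condition rules out any feedback loop in the network whose gains, composed around the cycle, could amplify a signal; this is what lets the stabilization problem for the whole dynamic network be reduced to gain conditions on its IOS subsystems.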


2013 ◽ Vol 8 (1) ◽ pp. 32-46 ◽ Author(s): Hung Manh La, Weihua Sheng
