Fast reinforcement learning approach to cooperative behavior acquisition in multi-agent system

Author(s):  
Songhao Piao ◽  
Bingrong Hong

2014 ◽
pp. 39-44
Author(s):  
Anton Kabysh ◽  
Vladimir Golovko ◽  
Arunas Lipnickas

This paper describes a multi-agent influence learning approach and a reinforcement learning adaptation of it. The technique is used for distributed, adaptive, and self-organizing control in a multi-agent system. It is quite simple: agents' influences on one another are used to estimate the learning error between them, and the best influences are rewarded via reinforcement learning, a well-proven learning technique. It is shown that this learning rule supports positive-reward interactions between agents and requires no information beyond what a standard reinforcement learning algorithm uses. The technique produces optimal behavior of the multi-agent system with fast convergence.
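The idea of shaping each agent's reward with the influence it exerts on its neighbors can be sketched in a few lines of Q-learning. The class name, the additive way influence enters the update, and all parameter values below are illustrative assumptions, not the paper's exact formulation:

```python
import random
from collections import defaultdict

class InfluenceAgent:
    """Hypothetical sketch of influence-shaped Q-learning: the TD update
    adds a neighbour-supplied 'influence' term to the environmental
    reward, so actions that helped other agents are positively rewarded."""

    def __init__(self, actions, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.q = defaultdict(float)          # Q[(state, action)] -> value
        self.actions = actions
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def act(self, state):
        # Epsilon-greedy action selection over the agent's own Q-table.
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q[(state, a)])

    def update(self, s, a, reward, influence, s_next):
        # The influence term is the only extra signal beyond standard
        # Q-learning; it is estimated from interactions with neighbours.
        shaped = reward + influence
        best_next = max(self.q[(s_next, b)] for b in self.actions)
        td_error = shaped + self.gamma * best_next - self.q[(s, a)]
        self.q[(s, a)] += self.alpha * td_error
```

Each agent runs this rule independently; the coupling between agents enters only through the `influence` argument, which is consistent with the abstract's claim that no information beyond a standard reinforcement learning setup is required.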


2010 ◽  
Vol 20-23 ◽  
pp. 1292-1298
Author(s):  
De Jia Shi ◽  
Zhi Qiang Liu ◽  
Jing He

Multi-agent system (MAS) research on learning has focused on negotiation and on learning the strategies of other agents. This paper presents an agent learning approach for multi-agent systems based on Bayesian learning; it develops agents that learn from free-text queries and keyword searches in a MAS. The MAS learns to identify an appropriate agent to answer free-text and natural-language queries, as well as keyword searches, submitted by users. The paper describes how Bayesian learning is implemented in the MAS and analyzes the effectiveness of MAS learning under the Bayesian approach in terms of accuracy and degree of learning.
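Routing a free-text query to the agent most likely able to answer it can be modeled as naive-Bayes classification over query words. The router class, agent names, and smoothing choice below are illustrative assumptions, not details taken from the paper:

```python
import math
from collections import defaultdict

class BayesianRouter:
    """Hypothetical sketch: pick the MAS agent with the highest naive-Bayes
    posterior P(agent | query words), learned from past routed queries."""

    def __init__(self):
        self.word_counts = defaultdict(lambda: defaultdict(int))
        self.agent_counts = defaultdict(int)

    def learn(self, agent, query):
        # Record which words appeared in queries this agent answered.
        self.agent_counts[agent] += 1
        for w in query.lower().split():
            self.word_counts[agent][w] += 1

    def route(self, query):
        words = query.lower().split()
        vocab = {w for counts in self.word_counts.values() for w in counts}

        def log_posterior(agent):
            total = sum(self.word_counts[agent].values())
            score = math.log(self.agent_counts[agent])  # prior (unnormalized)
            for w in words:
                # Laplace smoothing so unseen words do not zero the score.
                p = (self.word_counts[agent].get(w, 0) + 1) / (total + len(vocab))
                score += math.log(p)
            return score

        return max(self.agent_counts, key=log_posterior)
```

As the router observes more (agent, query) pairs, the per-agent word distributions sharpen, which mirrors the paper's notion of measuring accuracy against the degree of learning.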


2020 ◽  
Vol 17 (2) ◽  
pp. 647-664
Author(s):  
Yangyang Ge ◽  
Fei Zhu ◽  
Wei Huang ◽  
Peiyao Zhao ◽  
Quan Liu

Multi-agent systems have broad real-world applications, yet their safety performance is rarely considered. Reinforcement learning is one of the most important methods for solving multi-agent problems, and progress has been made in applying multi-agent reinforcement learning to robot systems, human-machine competition, automation, and other areas. In these settings, however, an agent may fall into unsafe states, where it may find it difficult to bypass obstacles, to receive information from other agents, and so on. Ensuring the safety of a multi-agent system is of great importance in such areas, where the dangerous states an agent falls into may be irreversible and cause great damage. To solve the safety problem, this paper introduces a multi-agent cooperation Q-learning algorithm based on a constrained Markov game. In this method, safety constraints are added to the set of actions, and each agent, while interacting with the environment to search for optimal values, is restricted by the safety rules, so as to obtain an optimal policy that satisfies the security requirements. Since traditional multi-agent reinforcement learning algorithms are no longer suitable for the proposed model, a new solution is introduced for calculating the globally optimal state-action function that satisfies the safety constraints. The Lagrange multiplier method is used to determine the optimal action in the current state, on the premise of linearized constraint functions and under the condition that both the state-action function and the constraint function are differentiable; this not only improves the efficiency and accuracy of the algorithm but also guarantees a globally optimal solution. Experiments verify the effectiveness of the algorithm.
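The interplay between a reward value function, a safety-cost value function, and a Lagrange multiplier can be sketched as follows. This is a simplified single-agent illustration of the constrained-Q-learning idea, not the paper's algorithm: the class name, the SARSA-style updates, and the dual-ascent rule for the multiplier are all assumptions:

```python
from collections import defaultdict

class ConstrainedQAgent:
    """Hypothetical sketch of Lagrangian constrained Q-learning: keep a
    reward value Q and a safety-cost value C per (state, action), and
    act greedily on the Lagrangian Q - lam * C, so unsafe actions are
    penalized in proportion to the current multiplier lam."""

    def __init__(self, actions, cost_limit, alpha=0.1, gamma=0.9, lam_lr=0.01):
        self.q = defaultdict(float)   # expected discounted reward
        self.c = defaultdict(float)   # expected discounted safety cost
        self.lam = 0.0                # Lagrange multiplier, kept >= 0
        self.actions = actions
        self.cost_limit = cost_limit
        self.alpha, self.gamma, self.lam_lr = alpha, gamma, lam_lr

    def act(self, state):
        # Greedy action under the Lagrangian trade-off.
        return max(self.actions,
                   key=lambda a: self.q[(state, a)] - self.lam * self.c[(state, a)])

    def update(self, s, a, reward, cost, s_next):
        a_next = self.act(s_next)
        # TD updates for both the reward and the cost value functions.
        self.q[(s, a)] += self.alpha * (
            reward + self.gamma * self.q[(s_next, a_next)] - self.q[(s, a)])
        self.c[(s, a)] += self.alpha * (
            cost + self.gamma * self.c[(s_next, a_next)] - self.c[(s, a)])
        # Dual ascent: raise lam when observed cost exceeds the limit,
        # tightening the safety penalty; lower it (toward 0) otherwise.
        self.lam = max(0.0, self.lam + self.lam_lr * (cost - self.cost_limit))
```

After enough experience, a high-reward but high-cost action loses to a modest-reward safe action once `lam` has grown, which is the qualitative behavior the constrained formulation is meant to guarantee.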

