Adaptation to Other Agent’s Behavior Using Meta-Strategy Learning by Collision Avoidance Simulation

2021 ◽ Vol 11 (4) ◽ pp. 1786
Author(s): Kensuke Miyamoto, Norifumi Watanabe, Yoshiyasu Takefuji

In human cooperative behavior, there are two strategies: a passive behavioral strategy based on others’ behaviors and an active behavioral strategy that puts the objective first. However, it is not clear how a meta-strategy for switching between these strategies is acquired. The purpose of this study is to create agents with such a meta-strategy and to enable complex behavioral choices with a high degree of coordination. In this study, we experimented with multi-agent collision avoidance simulations as an example of a cooperative task. In the experiments, we used reinforcement learning to obtain an active strategy and a passive strategy by rewarding the interaction between agents facing each other. Furthermore, we examined and verified the meta-strategy in situations where the opponent’s strategy switches.
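The abstract does not specify the implementation, but the core idea can be illustrated with tabular Q-learning: two sub-policies (active and passive) plus a meta-policy that learns which one to deploy. The following Python sketch is a minimal illustration under those assumptions; the class names, state encoding, and hyperparameters are ours, not the authors’.

```python
import numpy as np

class QAgent:
    """Plain tabular Q-learning (illustrative)."""
    def __init__(self, n_states, n_actions, alpha=0.1, gamma=0.9, eps=0.1):
        self.q = np.zeros((n_states, n_actions))
        self.alpha, self.gamma, self.eps = alpha, gamma, eps

    def act(self, s):
        if np.random.rand() < self.eps:        # epsilon-greedy exploration
            return np.random.randint(self.q.shape[1])
        return int(np.argmax(self.q[s]))

    def update(self, s, a, r, s_next):
        target = r + self.gamma * self.q[s_next].max()
        self.q[s, a] += self.alpha * (target - self.q[s, a])

class MetaStrategyAgent:
    """Meta-level learner that picks between an active (objective-first)
    and a passive (defer-to-the-other-agent) sub-strategy each step."""
    def __init__(self, n_states, n_actions):
        self.sub = [QAgent(n_states, n_actions),   # active strategy
                    QAgent(n_states, n_actions)]   # passive strategy
        self.meta = QAgent(n_states, 2)            # chooses the sub-strategy

    def act(self, s):
        self.choice = self.meta.act(s)             # assumes act() precedes update()
        return self.sub[self.choice].act(s)

    def update(self, s, a, r, s_next):
        # Both the chosen sub-strategy and the meta-policy learn from
        # the same reward signal.
        self.sub[self.choice].update(s, a, r, s_next)
        self.meta.update(s, self.choice, r, s_next)
```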


ACTA IMEKO ◽ 2021 ◽ Vol 10 (3) ◽ pp. 28
Author(s): Gabor Paczolay, Istvan Harmati

Reinforcement learning is currently one of the most researched fields of artificial intelligence. New algorithms, especially in deep reinforcement learning, use neural networks to compute the selected action. One subcategory of reinforcement learning is multi-agent reinforcement learning, in which multiple agents are present in the world. Because it involves the simulation of an environment, it can be applied to robotics as well. In our paper, we use a modified version of the advantage actor–critic (A2C) algorithm that is suitable for multi-agent scenarios. We test this modified algorithm on our testbed, a cooperative–competitive pursuit–evasion environment, and later we address the problem of collision avoidance.
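For readers unfamiliar with A2C, the per-agent update it builds on can be sketched as follows. This is a generic single-agent A2C loss in PyTorch, not the authors’ modified multi-agent variant; the network sizes and loss coefficients are illustrative assumptions.

```python
import torch
import torch.nn as nn

class ActorCritic(nn.Module):
    """Shared body with separate policy and value heads (illustrative)."""
    def __init__(self, obs_dim, n_actions, hidden=64):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(obs_dim, hidden), nn.Tanh())
        self.pi = nn.Linear(hidden, n_actions)   # policy head
        self.v = nn.Linear(hidden, 1)            # value head

    def forward(self, obs):
        h = self.body(obs)
        dist = torch.distributions.Categorical(logits=self.pi(h))
        return dist, self.v(h).squeeze(-1)

def a2c_loss(model, obs, actions, returns, value_coef=0.5, entropy_coef=0.01):
    """Standard A2C objective: policy gradient weighted by the advantage,
    plus a value regression term and an entropy bonus for exploration."""
    dist, values = model(obs)
    advantage = returns - values.detach()            # A(s,a) = R - V(s)
    policy_loss = -(dist.log_prob(actions) * advantage).mean()
    value_loss = (returns - values).pow(2).mean()
    entropy = dist.entropy().mean()
    return policy_loss + value_coef * value_loss - entropy_coef * entropy
```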





2021 ◽ Vol 7 ◽ pp. e718
Author(s): Taeyoung Kim, Luiz Felipe Vecchietti, Kyujin Choi, Sanem Sariel, Dongsoo Har

In multi-agent reinforcement learning, the cooperative learning behavior of agents is very important. In heterogeneous multi-agent reinforcement learning, cooperative behavior among different types of agents in a group is pursued. Learning a joint-action set during centralized training is an attractive way to obtain such cooperative behavior; however, this method yields limited learning performance with heterogeneous agents. To improve the learning performance of heterogeneous agents during centralized training, we propose two-stage heterogeneous centralized training, which allows the training of multiple roles of heterogeneous agents. During training, two training processes are conducted in series. One stage trains each agent according to its role, aiming to maximize individual role rewards. The other trains the agents as a whole so that they learn cooperative behaviors while attempting to maximize shared collective rewards, e.g., team rewards. Because these two training processes are conducted in series at every time step, agents can learn how to maximize role rewards and team rewards simultaneously. The proposed method is applied to 5 versus 5 AI robot soccer for validation. The experiments are performed in a robot soccer environment using the Webots robot simulation software. Simulation results show that the proposed method can train the robots of the robot soccer team effectively, achieving higher role rewards and higher team rewards compared to three other approaches for training cooperative multi-agent systems. Quantitatively, a team trained by the proposed method improves the score concede rate by 5% to 30% in matches against evaluation teams, compared to teams trained with the other approaches.
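A minimal sketch of the two-stage idea, under the assumption of value-based learners with a generic update method: stage 1 applies each agent’s role reward, stage 2 applies the shared team reward, and both run in series every step. The agent stub and the four-action assumption are ours, not the paper’s implementation.

```python
class TabularAgent:
    """Minimal stand-in for any learner with a Q-style update (illustrative)."""
    N_ACTIONS = 4  # assumed discrete action count

    def __init__(self):
        self.q = {}

    def update(self, s, a, r, s_next, alpha=0.1, gamma=0.9):
        best_next = max((self.q.get((s_next, b), 0.0)
                         for b in range(self.N_ACTIONS)), default=0.0)
        old = self.q.get((s, a), 0.0)
        self.q[(s, a)] = old + alpha * (r + gamma * best_next - old)

def two_stage_step(agents, transitions, team_reward):
    """One training step of the two-stage scheme: both stages run in
    series at every time step, as the abstract describes."""
    # Stage 1: role-specific training (e.g., striker vs. goalkeeper roles),
    # maximizing each agent's individual role reward.
    for agent, (s, a, role_reward, s_next) in zip(agents, transitions):
        agent.update(s, a, role_reward, s_next)
    # Stage 2: cooperative training on the shared collective (team) reward.
    for agent, (s, a, _, s_next) in zip(agents, transitions):
        agent.update(s, a, team_reward, s_next)
```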



IEEE Access ◽ 2020 ◽ Vol 8 ◽ pp. 216320-216331
Author(s): Hui Liu, Zhen Zhang, Dongqing Wang


Author(s): Fumito Uwano, Keiki Takadama

This study discusses important factors for zero-communication multi-agent cooperation by comparing different modified reinforcement learning methods. The two learning methods used for comparison employ different goal-selection mechanisms for multi-agent cooperation tasks. The first method, Profit Minimizing Reinforcement Learning (PMRL), forces agents to learn how to reach the farthest goal, and then the agent closest to a goal is directed to that goal. The second method, Yielding Action Reinforcement Learning (YARL), forces agents to learn through a Q-learning process; if the agents have a conflict, the agent that is closest to the goal learns to reach the next closest goal. To compare the two methods, we designed experiments by adjusting the following maze factors: (1) the location of the start point and goal; (2) the number of agents; and (3) the size of the maze. The intensive simulations performed on the maze problem for the agent cooperation task revealed that the two methods successfully enabled the agents to exhibit cooperative behavior, even when the size of the maze and the number of agents change. The PMRL mechanism always enables the agents to learn cooperative behavior, whereas the YARL mechanism makes the agents learn cooperative behavior within a small number of learning iterations. In zero-communication multi-agent cooperation, it is important that only the agents in conflict cooperate with each other.
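The yielding rule described for YARL can be illustrated independently of the learning loop. The sketch below assumes 2-D positions and at least two goals, and performs a single resolution round; it illustrates the rule as stated in the abstract, not the authors’ code.

```python
import math

def resolve_targets(agents, goals):
    """One round of YARL-style conflict resolution (illustrative).
    Each agent first targets its nearest goal; if two agents target the
    same goal, the closer agent yields and retargets its next-nearest
    goal, as the abstract describes. Requires len(goals) >= 2."""
    # For each agent, rank goal indices by distance.
    ranked = {i: sorted(range(len(goals)), key=lambda g: math.dist(p, goals[g]))
              for i, p in enumerate(agents)}
    target = {i: ranked[i][0] for i in ranked}   # initial: nearest goal
    for i in ranked:
        for j in ranked:
            if i < j and target[i] == target[j]:    # conflict detected
                di = math.dist(agents[i], goals[target[i]])
                dj = math.dist(agents[j], goals[target[j]])
                closer = i if di <= dj else j
                target[closer] = ranked[closer][1]  # yield: next closest goal
    return target

# Example: both agents want goal (0, 0); agent 0 is closer, so it yields.
print(resolve_targets([(1, 0), (3, 0)], [(0, 0), (5, 0)]))  # {0: 1, 1: 0}
```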



2021 ◽ Vol 9 (10) ◽ pp. 1056
Author(s): Chen Chen, Feng Ma, Xiaobin Xu, Yuwang Chen, Jin Wang

Ships are distinctive machines with large inertia and relatively weak driving forces. Simulating the manual operation of ships with artificial intelligence (AI) and machine learning techniques is becoming increasingly common, and avoiding collisions in crowded waters may be the most challenging task. This research proposes a cooperative collision avoidance approach for multiple ships using a multi-agent deep reinforcement learning (MADRL) algorithm. Specifically, each ship is modeled as an individual agent, controlled by a Deep Q-Network (DQN) and described by a dedicated ship motion model. Each agent observes the state of itself and other ships as well as the surrounding environment. The agents then analyze the navigation situation and make motion decisions accordingly. In particular, specific reward function schemas are designed to simulate the degree of cooperation among agents. Following the International Regulations for Preventing Collisions at Sea (COLREGs), three typical simulation scenarios, head-on, overtaking, and crossing, are established to validate the proposed approach. With sufficient MADRL training, the ship agents were capable of avoiding collisions through cooperation in narrow, crowded waters. This method provides new insights into the bionic modeling of ship operations, which is of theoretical and practical significance.
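The abstract does not give the reward functions, but a schema of the kind it describes might combine progress toward a waypoint, collision and close-quarters penalties, and a cooperation weight that mixes in the other ships’ outcomes. The sketch below is a hedged illustration; all thresholds, weights, and the data layout are assumptions, not the paper’s design.

```python
import math

def ship_reward(pos, prev_pos, goal, other_positions, others_step_rewards,
                coop=0.5, collision_dist=50.0, safe_dist=500.0):
    """One ship agent's shaped step reward (illustrative; units in meters)."""
    r = -0.01  # small time penalty: encourages steady progress
    # Progress term: reduction in distance to the goal this step.
    r += 0.1 * (math.dist(prev_pos, goal) - math.dist(pos, goal))
    # Safety terms: hard penalty on collision, soft penalty in close quarters.
    for p in other_positions:
        d = math.dist(pos, p)
        if d < collision_dist:
            r -= 100.0
        elif d < safe_dist:
            r -= (safe_dist - d) / safe_dist
    # Cooperation schema: blend in the other ships' step rewards so an
    # agent is also credited or penalized for the group outcome.
    if others_step_rewards:
        r += coop * sum(others_step_rewards) / len(others_step_rewards)
    return r
```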



CICTP 2020 ◽ 2020
Author(s): Yang Zhao, Jian-Ming Hu, Ming-Yang Gao, Zuo Zhang

