Roadmap Based Pursuit-Evasion and Collision Avoidance

Author(s): Volkan Isler, Dengfeng Sun, Shankar Sastry

ACTA IMEKO, 2021, Vol 10 (3), pp. 28
Author(s): Gabor Paczolay, Istvan Harmati

<p class="Abstract">Reinforcement learning is currently one of the most researched fields of artificial intelligence. New algorithms are being developed that use neural networks to compute the selected action, especially for deep reinforcement learning. One subcategory of reinforcement learning is multi-agent reinforcement learning, in which multiple agents are present in the world. As it involves the simulation of an environment, it can be applied to robotics as well. In our paper, we use our modified version of the advantage actor–critic (A2C) algorithm, which is suitable for multi-agent scenarios. We test this modified algorithm on our testbed, a cooperative–competitive pursuit–evasion environment, and later we address the problem of collision avoidance.</p>


2021, Vol 65 (2), pp. 160-166
Author(s): Gabor Paczolay, Istvan Harmati

In this paper we revisit the problem of pursuit and evasion, focusing specifically on collision avoidance. Two distinct tasks are considered: in the first, the agents can communicate with each other online, while in the second they must rely only on state information and knowledge of the other agents' actions. We propose a method that combines the existing Minimax-Q and Nash-Q algorithms to better account for the actions of both adversarial and friendly agents. The combination is a simple weighting of the two algorithms, with the Minimax-Q component computed by solving a linear programming problem.
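The abstract describes weighting a Minimax-Q value against a Nash-Q value, with the minimax side obtained from a linear program. As a self-contained illustration, the minimax value of a zero-sum matrix game can be approximated without an LP solver via fictitious play; the weight `w` and the externally supplied Nash value are assumptions of this sketch, not the paper's parameters.

```python
import numpy as np

def minimax_value(Q, iters=5000):
    """Approximate the minimax value of the zero-sum payoff matrix Q
    (row player maximises) by fictitious play: each player repeatedly
    best-responds to the opponent's empirical action frequencies."""
    m, n = Q.shape
    row_counts = np.zeros(m)
    col_counts = np.zeros(n)
    row_counts[0] = 1.0
    col_counts[0] = 1.0
    for _ in range(iters):
        # row player best-responds to the column player's mixture
        row_counts[np.argmax(Q @ col_counts)] += 1.0
        # column player best-responds to the updated row mixture
        col_counts[np.argmin(row_counts @ Q)] += 1.0
    x = row_counts / row_counts.sum()
    y = col_counts / col_counts.sum()
    return float(x @ Q @ y)

def combined_value(Q, nash_value, w=0.5):
    """Hypothetical weighting of the minimax value with an externally
    computed Nash value; w = 0.5 is an illustrative default."""
    return w * minimax_value(Q) + (1.0 - w) * nash_value
```

In a Minimax-Q implementation proper, the value would come from the exact linear program (maximise v subject to x^T Q e_j >= v for every opponent action j, with x a probability vector); fictitious play merely approximates that solution for this sketch.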


2011
Author(s): Dan Shen, Khanh Pham, Erik Blasch, Huimin Chen, Genshe Chen

Author(s): Tomotaka WADA, Yuki NAKANISHI, Ryohta YAMAGUCHI, Kazushi FUJIMOTO, Hiromi OKADA
