A GPU-Based Programming Framework for Highly-Scalable Multi-Agent Traffic Simulations

Author(s):  
Yoshihito Sano ◽  
Naoki Fukuta

Highly detailed reproducibility of multi-agent simulations is in strong demand. To realize such highly reproducible multi-agent simulations, each agent must respond to its dynamically changing environment, and the simulation must scale far enough to cover the important phenomena that could be produced. In this paper, we present a programming framework that realizes both highly scalable execution and detailed agent behaviors. The framework helps simulation developers utilize many GPGPU-based parallel cores in their simulation programs through the proposed OpenCL-based multi-platform agent code conversion engine. We show a prototype implementation of the framework and how it helps simulation developers code, test, and evaluate agent code that reactively selects actions and path plans in dynamically changing large-scale simulation environments under various hardware and software settings.
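The abstract includes no code, but the reactive per-agent update such a framework parallelizes can be sketched as a toy vectorized car-following rule. Everything here is an illustrative assumption (the rule, the function name, the parameters), with numpy array operations standing in for the per-agent OpenCL work-items the conversion engine would emit:

```python
import numpy as np

def step_agents(positions, speeds, v_max=5.0, dt=1.0, min_gap=2.0):
    """One simulation tick: every agent reacts to the gap ahead of it.

    Vectorized over all agents at once -- the same per-agent rule a
    GPGPU conversion engine would map onto one parallel core each.
    """
    order = np.argsort(positions)          # agents sorted along the road
    sorted_pos = positions[order]
    gaps = np.empty_like(sorted_pos)
    gaps[:-1] = np.diff(sorted_pos)        # distance to the agent ahead
    gaps[-1] = np.inf                      # lead agent has open road
    # Reactive rule: accelerate toward v_max, but never outrun the gap.
    desired = np.minimum(speeds[order] + 1.0, v_max)
    safe = np.maximum(gaps - min_gap, 0.0) / dt
    new_speeds = np.minimum(desired, safe)
    new_pos = sorted_pos + new_speeds * dt
    # Scatter results back to the original agent ordering.
    positions[order] = new_pos
    speeds[order] = new_speeds
    return positions, speeds

pos = np.array([0.0, 3.0, 10.0])
vel = np.zeros(3)
pos, vel = step_agents(pos, vel)
```

Because every agent applies the same rule to its own neighborhood, the update is embarrassingly parallel, which is what makes this class of simulation a good fit for many-core GPU execution.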

Author(s):  
D.Zh. Akhmed-Zaki ◽  
T.S. Imankulov ◽  
B. Matkerim ◽  
B.S. Daribayev ◽  
K.A. Aidarov ◽  
...  

Symmetry ◽  
2020 ◽  
Vol 12 (4) ◽  
pp. 631

Author(s):  
Chunyang Hu

In this paper, deep reinforcement learning (DRL) and knowledge transfer are used to achieve effective control of the learning agent for confrontation in multi-agent systems. Firstly, a multi-agent Deep Deterministic Policy Gradient (DDPG) algorithm with parameter sharing is proposed to achieve multi-agent confrontation decision-making. During training, information from the other agents is introduced into the critic network to improve the confrontation strategy, and the parameter sharing mechanism reduces the cost of experience storage. In the DDPG algorithm, four neural networks generate real-time actions and Q-value estimates, and a momentum mechanism is used to optimize the training process and accelerate the convergence of the neural networks. Secondly, this paper introduces an auxiliary controller using a policy-based reinforcement learning (RL) method to provide assistant decision-making for the game agent. In addition, an effective reward function helps agents balance the losses inflicted on enemies against their own losses. Furthermore, this paper uses knowledge transfer to extend the learning model to more complex scenes and improve the generalization of the proposed confrontation model. Two confrontation decision-making experiments are designed to verify the effectiveness of the proposed method. In a small-scale task scenario, the trained agent successfully learns to fight the competitors and achieves a good winning rate. For large-scale confrontation scenarios, the knowledge transfer method gradually improves the decision-making level of the learning agent.
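The abstract describes three concrete mechanisms: a shared actor parameter set, a critic that sees the other agents' information, and a momentum-based update. A minimal numpy sketch of those ideas is below; every name, dimension, and the linear critic are hypothetical stand-ins, not the paper's actual networks:

```python
import numpy as np

rng = np.random.default_rng(0)
N_AGENTS, OBS_DIM, ACT_DIM = 3, 4, 2

# Parameter sharing: one actor weight matrix is reused by every agent,
# so experience from any agent refines the common policy.
W_actor = rng.normal(scale=0.1, size=(ACT_DIM, OBS_DIM))

# Centralized critic: its input concatenates ALL agents' observations
# and actions, giving the learner visibility into the others' behavior.
W_critic = rng.normal(scale=0.1, size=N_AGENTS * (OBS_DIM + ACT_DIM))

def act(obs):
    """Deterministic shared policy applied row-wise to each agent's obs."""
    return np.tanh(obs @ W_actor.T)            # shape (N_AGENTS, ACT_DIM)

def q_value(all_obs, all_act):
    """Linear critic over the joint observation-action vector."""
    joint = np.concatenate([all_obs.ravel(), all_act.ravel()])
    return float(W_critic @ joint)

obs = rng.normal(size=(N_AGENTS, OBS_DIM))
actions = act(obs)

# Momentum-SGD steps pulling the critic's Q toward a fixed target value,
# standing in for the momentum mechanism used to speed up convergence.
lr, mom, target = 1e-2, 0.9, 1.0
velocity = np.zeros_like(W_critic)
joint = np.concatenate([obs.ravel(), actions.ravel()])
for _ in range(200):
    grad = 2.0 * (q_value(obs, actions) - target) * joint
    velocity = mom * velocity - lr * grad
    W_critic = W_critic + velocity

q_after = q_value(obs, actions)
```

The design point the sketch makes concrete is the critic's input: by scoring the joint observation-action vector rather than a single agent's view, each agent's learning signal accounts for what the others are doing, which is what the paper relies on to improve the confrontation strategy.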


1977 ◽  
Vol 3 (1/2) ◽  
pp. 126
Author(s):  
W. Brian Arthur ◽  
Geoffrey McNicoll
