Multi-Agent Deep Reinforcement Learning for Urban Traffic Light Control in Vehicular Networks

2020 ◽ Vol 69 (8) ◽ pp. 8243-8256
Author(s): Tong Wu, Pan Zhou, Kai Liu, Yali Yuan, Xiumin Wang, ...
Sensors ◽ 2020 ◽ Vol 20 (15) ◽ pp. 4291
Author(s): Qiang Wu, Jianqing Wu, Jun Shen, Binbin Yong, Qingguo Zhou

With the growth of smart city infrastructure, the Internet of Things (IoT) has been widely used in intelligent transportation systems (ITS). Traditional adaptive traffic signal control based on reinforcement learning (RL) has expanded from a single intersection to multiple intersections. In this paper, we propose a multi-agent auto communication (MAAC) algorithm, an adaptive global traffic light control method based on multi-agent reinforcement learning (MARL) and an auto communication protocol in an edge computing architecture. The MAAC algorithm combines a multi-agent auto communication protocol with MARL, allowing each agent to share its learned strategies with others to achieve global optimization of traffic signal control. In addition, we present a practical edge computing architecture for industrial IoT deployment that accounts for limited network transmission bandwidth. Experiments in a realistic traffic simulation environment demonstrate that our algorithm outperforms other methods by over 17%.
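The abstract does not specify how agents exchange learned strategies. As a rough illustration of the core idea only, the sketch below uses a tabular Q-learning agent whose state is the pair (local observation, neighbor message), and which broadcasts its greedy action as the message; the class name, tabular formulation, and message scheme are all hypothetical, not the MAAC algorithm itself:

```python
import numpy as np

class CommAgent:
    """One intersection agent: learns Q-values over joint
    (local observation, neighbor message) states, so its policy
    is conditioned on what a neighboring agent intends to do."""

    def __init__(self, n_obs, n_msg, n_actions, lr=0.1, gamma=0.9):
        # Tabular Q: rows index (obs, msg) pairs, columns index actions.
        self.q = np.zeros((n_obs * n_msg, n_actions))
        self.n_msg = n_msg
        self.lr, self.gamma = lr, gamma

    def _state(self, obs, msg):
        # Flatten (obs, msg) into one table row index.
        return obs * self.n_msg + msg

    def act(self, obs, msg, eps=0.1, rng=None):
        # Epsilon-greedy action selection over the joint state.
        rng = rng or np.random.default_rng()
        s = self._state(obs, msg)
        if rng.random() < eps:
            return int(rng.integers(self.q.shape[1]))
        return int(np.argmax(self.q[s]))

    def message(self, obs, msg):
        # "Communication": broadcast the current greedy action so
        # neighbors can condition on this agent's intended strategy.
        return int(np.argmax(self.q[self._state(obs, msg)]))

    def update(self, obs, msg, action, reward, next_obs, next_msg):
        # Standard one-step Q-learning update on the joint state.
        s, s2 = self._state(obs, msg), self._state(next_obs, next_msg)
        target = reward + self.gamma * np.max(self.q[s2])
        self.q[s, action] += self.lr * (target - self.q[s, action])
```

In a multi-intersection setting, each agent would call `message(...)` once per step and feed the received messages into its neighbors' `act(...)` and `update(...)` calls.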


Author(s): Zhengxu Yu, Shuxian Liang, Long Wei, Zhongming Jin, Jianqiang Huang, ...

Urban traffic light control is an important and challenging real-world problem. By treating intersections as agents, most reinforcement learning (RL) based methods generate each agent's actions independently. This can cause action conflicts and lead to overflow or wasted road capacity at adjacent intersections. Recently, some collaborative methods have alleviated these problems by extending the observable surroundings of agents, which can be regarded as passive cross-agent communication. However, when agents act synchronously in these works, the perceived action values are biased and the information exchanged is insufficient. In this work, we propose a novel Multi-agent Communication and Action Rectification (MaCAR) framework. It enables active communication between agents by accounting for the impact of their synchronous actions. MaCAR consists of two parts: (1) an active Communication Agent Network (CAN) built around a Message Propagation Graph Neural Network (MPGNN); (2) a Traffic Forecasting Network (TFN) that learns to predict the traffic state after the agents' synchronous actions, along with the corresponding action values. Using the predicted information, we mitigate the action-value bias during training to help rectify agents' future actions. In experiments, we show that our proposal outperforms state-of-the-art methods on both synthetic and real-world datasets.
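The MPGNN architecture is not detailed in the abstract. As a generic sketch of what one message-propagation round over an intersection graph could look like (mean-aggregate neighbor features, residual combine, shared linear transform with ReLU; all of these design choices are assumptions, not the paper's actual network):

```python
import numpy as np

def message_passing_step(features, adj, weight):
    """One generic message-propagation round: each node (intersection)
    aggregates its neighbors' feature vectors by mean, mixes them with
    its own features, then applies a shared linear transform + ReLU.

    features: (n_nodes, d_in) node feature matrix
    adj:      (n_nodes, n_nodes) 0/1 adjacency matrix, no self-loops
    weight:   (d_in, d_out) transform shared across nodes
    """
    deg = adj.sum(axis=1, keepdims=True)
    deg[deg == 0] = 1.0                       # isolated nodes keep a zero aggregate
    agg = (adj @ features) / deg              # mean over each node's neighbors
    combined = features + agg                 # residual mix of self and neighbors
    return np.maximum(combined @ weight, 0.0) # ReLU nonlinearity
```

Stacking several such rounds lets information from an intersection reach agents several hops away, which is the usual mechanism by which graph-based communication extends an agent's effective field of view.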

