In-Depth Evaluation of Reinforcement Learning Based Adaptive Traffic Signal Control Using TSCLab

Author(s): Daniel Pavleski, Mladen Miletić, Daniela Koltovska Nečoska, Edouard Ivanjko
2020, Vol. 2020, pp. 1-14

Author(s): Duowei Li, Jianping Wu, Ming Xu, Ziheng Wang, Kezhen Hu

Controlling traffic signals to alleviate growing traffic pressure has long attracted public attention; however, existing traffic signal control systems and methodologies remain insufficient to address the problem. To this end, we build an adaptive traffic signal control model in the traffic microsimulator "Simulation of Urban Mobility" (SUMO) using modern deep reinforcement learning. The model is based on a deep Q-network algorithm that explicitly represents the elements of the problem: agents, environments, and actions. The real-time traffic state at one or more intersections, including the number of vehicles and their average speed, serves as the model input. To reduce the average waiting time, the agents output the traffic signal phase and its duration to be applied, in both single-intersection and multi-intersection cases. Cooperation between agents enables the model to improve overall performance in a large road network. Testing on datasets covering three different traffic conditions shows that the proposed model outperforms other methods (e.g., the Q-learning, longest-queue-first, and Webster fixed-timing control methods) in all cases. The proposed model reduces both the average waiting time and the travel time, and its advantage grows as the traffic environment becomes more complex.
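
The control loop described above — read the real-time lane state (vehicle counts and mean speeds), select a signal phase with a deep Q-network, and learn from the resulting change in waiting time — can be sketched in Python with SUMO's TraCI interface. This is a minimal illustration under stated assumptions, not the authors' implementation: the lane IDs, traffic-light ID, number of phases, network size, hyperparameters, SUMO configuration file, and the choice of PyTorch are all hypothetical placeholders, and the fixed 10-second phase hold simplifies the paper's variable phase duration.

# Minimal DQN-style traffic signal agent sketch for SUMO via TraCI.
# All IDs, file names, and hyperparameters below are illustrative assumptions.
import random
from collections import deque

import torch
import torch.nn as nn
import traci  # requires SUMO installed and its tools on PYTHONPATH

LANES = ["n_in_0", "s_in_0", "e_in_0", "w_in_0"]  # hypothetical incoming lane IDs
TLS_ID = "center"                                 # hypothetical traffic light ID
N_PHASES = 4                                      # phases the agent chooses among

def get_state():
    """State = per-lane vehicle count and mean speed, as in the abstract."""
    counts = [traci.lane.getLastStepVehicleNumber(l) for l in LANES]
    speeds = [traci.lane.getLastStepMeanSpeed(l) for l in LANES]
    return torch.tensor(counts + speeds, dtype=torch.float32)

def get_waiting_time():
    """Total waiting time over incoming lanes; its decrease is the reward."""
    return sum(traci.lane.getWaitingTime(l) for l in LANES)

q_net = nn.Sequential(nn.Linear(2 * len(LANES), 64), nn.ReLU(), nn.Linear(64, N_PHASES))
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)
replay = deque(maxlen=10_000)
GAMMA, EPSILON, BATCH = 0.95, 0.1, 32

def choose_phase(state):
    """Epsilon-greedy action selection over signal phases."""
    if random.random() < EPSILON:
        return random.randrange(N_PHASES)
    with torch.no_grad():
        return int(q_net(state).argmax())

def train_step():
    """One gradient step toward the DQN target r + gamma * max_a' Q(s', a')."""
    if len(replay) < BATCH:
        return
    batch = random.sample(replay, BATCH)
    s = torch.stack([b[0] for b in batch])
    a = torch.tensor([b[1] for b in batch])
    r = torch.tensor([b[2] for b in batch], dtype=torch.float32)
    s2 = torch.stack([b[3] for b in batch])
    q = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        target = r + GAMMA * q_net(s2).max(1).values
    loss = nn.functional.mse_loss(q, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

traci.start(["sumo", "-c", "intersection.sumocfg"])  # hypothetical config file
state, wait_before = get_state(), get_waiting_time()
for _ in range(1000):                     # training run of 1000 signal decisions
    action = choose_phase(state)
    traci.trafficlight.setPhase(TLS_ID, action)
    for _ in range(10):                   # hold the chosen phase for 10 sim seconds
        traci.simulationStep()
    next_state, wait_after = get_state(), get_waiting_time()
    reward = wait_before - wait_after     # positive when waiting time drops
    replay.append((state, action, reward, next_state))
    train_step()
    state, wait_before = next_state, wait_after
traci.close()

For the multi-intersection case, one such agent would run per intersection; the cooperation the abstract mentions could, for instance, be approximated by including neighboring intersections' states in each agent's input, an extension omitted here for brevity.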

