A model predictive control approach for decentralized traffic signal control

2008 ◽  
Vol 41 (2) ◽  
pp. 13058-13063 ◽  
Author(s):  
Ahmet Yazici ◽  
Gangdo Seo ◽  
Umit Ozguner

2019 ◽  
Vol 6 (3) ◽  
pp. 623-640 ◽  
Author(s):  
Bao-Lin Ye ◽  
Weimin Wu ◽  
Keyu Ruan ◽  
Lingxi Li ◽  
Tehuan Chen ◽  
...  

Sensors ◽  
2021 ◽  
Vol 21 (7) ◽  
pp. 2302
Author(s):  
Salah Bouktif ◽  
Abderraouf Cheniki ◽  
Ali Ouni

Recent research on intelligent traffic signal control (TSC) has focused mainly on leveraging deep reinforcement learning (DRL) because of its proven capability and performance. DRL-based traffic signal control frameworks use either discrete or continuous control. In discrete control, the DRL agent selects the appropriate traffic light phase from a finite set of phases, whereas in continuous control the agent decides the appropriate duration of each signal phase within a predetermined sequence of phases. No prior work proposes a flexible framework that combines discrete and continuous DRL approaches for controlling traffic signals. Our objective in this paper is therefore to propose an approach that simultaneously decides the proper phase and its associated duration. Our contribution resides in adapting a hybrid deep reinforcement learning method that handles discrete and continuous decisions at the same time. Specifically, we customize a Parameterized Deep Q-Networks (P-DQN) architecture that enables a hierarchical decision-making process: it first decides the traffic light's next phase and then specifies the associated timing. Evaluation of our approach using Simulation of Urban MObility (SUMO) shows that it outperforms the benchmarks, reducing the average vehicle queue length and the average travel time by 22.20% and 5.78%, respectively, compared with alternative DRL-based TSC systems.
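To illustrate the hybrid (discrete + continuous) decision described in the abstract, below is a minimal sketch, not the authors' code, of P-DQN-style action selection for traffic signal control: a parameter network proposes a continuous green duration for every candidate phase, and a Q-network scores each (phase, duration) pair so the agent picks both at once. The state dimension, number of phases, network sizes, and duration bounds are illustrative assumptions.

```python
# Hedged sketch of P-DQN-style hybrid action selection (PyTorch).
# All sizes and bounds below are assumptions, not values from the paper.
import torch
import torch.nn as nn

NUM_PHASES = 4                 # assumed number of signal phases
STATE_DIM = 16                 # assumed traffic-state size (e.g. per-lane queue lengths)
MIN_DUR, MAX_DUR = 5.0, 60.0   # assumed green-time bounds in seconds

class ParamNet(nn.Module):
    """Maps a state to one continuous duration per discrete phase."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(),
                                 nn.Linear(64, NUM_PHASES), nn.Sigmoid())
    def forward(self, state):
        # Scale the sigmoid output into the allowed duration range.
        return MIN_DUR + (MAX_DUR - MIN_DUR) * self.net(state)

class QNet(nn.Module):
    """Scores each discrete phase given the state and all proposed durations."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(STATE_DIM + NUM_PHASES, 64), nn.ReLU(),
                                 nn.Linear(64, NUM_PHASES))
    def forward(self, state, durations):
        return self.net(torch.cat([state, durations], dim=-1))

def select_action(state, param_net, q_net):
    """Hierarchical decision: pick the next phase and its associated duration."""
    with torch.no_grad():
        durations = param_net(state)                  # continuous part
        q_values = q_net(state, durations)            # score every phase
        phase = int(torch.argmax(q_values, dim=-1))   # discrete part
        return phase, float(durations[..., phase])

if __name__ == "__main__":
    state = torch.randn(STATE_DIM)                    # placeholder observation
    phase, duration = select_action(state, ParamNet(), QNet())
    print(f"next phase: {phase}, green duration: {duration:.1f}s")
```

In training, the Q-network would be updated with a DQN-style temporal-difference loss over the discrete phases, while the parameter network is updated to maximize the Q-values of the durations it proposes; the sketch above only shows the inference-time selection step.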

