Cyber-physical Optimal Traffic Signal Control Based on a Macroscopic Traffic Model and Its Verification Using Microscopic Traffic Simulator

2020, Vol 56 (3), pp. 106-115
Author(s): Hayato DAN, Ryota OKAMOTO, Takeshi HATANAKA, Masakazu MUKAI, Yutaka IINO
2021
Author(s): Maxim Friesen, Tian Tan, Jürgen Jasperneite, Jie Wang

Increasing traffic congestion leads to significant costs from additional travel delays, with poorly configured signalized intersections being a common bottleneck and root cause. Traditional traffic signal control (TSC) systems employ rule-based or heuristic methods to decide signal timings, while adaptive TSC solutions use traffic-actuated control logic to respond to real-time traffic changes. However, such systems are expensive to deploy and are often not flexible enough to adapt adequately to the volatility of today's traffic dynamics. More recently, this problem has become a frontier topic in deep reinforcement learning (DRL) and has driven the development of multi-agent DRL approaches that can operate in environments with several agents, such as traffic systems with multiple signalized intersections. However, most of these approaches have been validated only on artificial traffic grids. This paper therefore presents a case study in which real-world traffic data from the town of Lemgo, Germany, is used to create a realistic road model in VISSIM. A multi-agent DRL setup comprising multiple independent deep Q-networks is applied to the simulated traffic network. The traditional rule-based signal controls currently deployed at the studied intersections are integrated into the traffic model via LISA+ and serve as a performance baseline. Our evaluation indicates a significant reduction of traffic congestion when using the RL-based signal control policy instead of the conventional TSC approach in LISA+. Consequently, this paper reinforces the applicability of RL concepts to TSC engineering by employing a highly realistic traffic model.
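To illustrate the kind of "independent deep Q-network per intersection" setup described in this abstract, the following is a minimal sketch in which each agent observes local queue lengths and selects a signal phase. All class names, dimensions, hyperparameters, and the queue-based reward are illustrative assumptions; the sketch does not reproduce the authors' VISSIM/LISA+ integration.

```python
# Minimal sketch of one "independent DQN" agent per signalized intersection.
# Observation: per-lane queue lengths (assumed); action: which signal phase to serve.
# All names, dimensions, and hyperparameters are illustrative assumptions.
import random
from collections import deque

import torch
import torch.nn as nn
import torch.optim as optim


class QNetwork(nn.Module):
    """Small feed-forward Q-network: local observation -> Q-value per phase."""

    def __init__(self, obs_dim: int, n_phases: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, n_phases),
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        return self.net(obs)


class IndependentDQNAgent:
    """One agent per intersection; each learns only from its own local experience."""

    def __init__(self, obs_dim: int, n_phases: int, gamma: float = 0.99):
        self.q = QNetwork(obs_dim, n_phases)
        self.target_q = QNetwork(obs_dim, n_phases)  # periodically sync with self.q
        self.target_q.load_state_dict(self.q.state_dict())
        self.optimizer = optim.Adam(self.q.parameters(), lr=1e-3)
        self.buffer = deque(maxlen=50_000)
        self.gamma = gamma
        self.n_phases = n_phases

    def act(self, obs, epsilon: float = 0.1) -> int:
        """Epsilon-greedy phase selection from the local observation."""
        if random.random() < epsilon:
            return random.randrange(self.n_phases)
        with torch.no_grad():
            q_values = self.q(torch.as_tensor(obs, dtype=torch.float32))
        return int(q_values.argmax().item())

    def store(self, obs, action, reward, next_obs, done):
        self.buffer.append((obs, action, reward, next_obs, done))

    def learn(self, batch_size: int = 32):
        """One gradient step on the standard DQN temporal-difference target."""
        if len(self.buffer) < batch_size:
            return
        obs, act, rew, next_obs, done = zip(*random.sample(self.buffer, batch_size))
        obs = torch.as_tensor(obs, dtype=torch.float32)
        act = torch.as_tensor(act, dtype=torch.int64).unsqueeze(1)
        rew = torch.as_tensor(rew, dtype=torch.float32)
        next_obs = torch.as_tensor(next_obs, dtype=torch.float32)
        done = torch.as_tensor(done, dtype=torch.float32)

        q_sa = self.q(obs).gather(1, act).squeeze(1)
        with torch.no_grad():
            target = rew + self.gamma * (1.0 - done) * self.target_q(next_obs).max(1).values
        loss = nn.functional.mse_loss(q_sa, target)

        self.optimizer.zero_grad()
        loss.backward()
        self.optimizer.step()


# One independent agent per intersection; a reward could be, for example, the
# negative sum of local queue lengths reported by the simulator's detectors.
agents = {name: IndependentDQNAgent(obs_dim=8, n_phases=4)
          for name in ["intersection_A", "intersection_B"]}
```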


2011, Vol 131 (2), pp. 303-310
Author(s): Ji-Sun Shin, Cheng-You Cui, Tae-Hong Lee, Hee-hyol Lee

2021, Vol 22 (2), pp. 12-18
Author(s): Hua Wei, Guanjie Zheng, Vikash Gayah, Zhenhui Li

Traffic signal control is an important and challenging real-world problem that has recently received a large amount of interest from both the transportation and computer science communities. In this survey, we focus on recent advances in using reinforcement learning (RL) techniques to solve the traffic signal control problem. We classify the known approaches based on the RL techniques they use and review existing models, analyzing their advantages and disadvantages. Moreover, we give an overview of the simulation environments and experimental settings that have been developed to evaluate traffic signal control methods. Finally, we explore future directions for RL-based traffic signal control methods. We hope this survey provides insights to researchers dealing with real-world applications in intelligent transportation systems.
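As a concrete illustration of the basic RL formulation that this survey covers, the sketch below implements a tabular Q-learning update for a single intersection, with discretized queue lengths as the state, the served signal phase as the action, and the negative total queue as the reward. The environment interface, binning, and parameters are illustrative assumptions, not the formulation of any specific paper in the survey.

```python
# Minimal tabular Q-learning sketch for a single signalized intersection.
# State: discretized queue lengths per approach; action: which phase to serve;
# reward: negative total queue. All details are illustrative assumptions.
import random
from collections import defaultdict

N_PHASES = 4          # candidate signal phases (action space)
ALPHA, GAMMA, EPS = 0.1, 0.95, 0.1

# Q-table mapping (state, action) -> estimated long-run return.
Q = defaultdict(float)


def discretize(queues, bins=(0, 3, 7, 15)):
    """Map raw per-approach queue counts to coarse bins to keep the table small."""
    return tuple(sum(q > b for b in bins) for q in queues)


def select_phase(state):
    """Epsilon-greedy action selection over the Q-table."""
    if random.random() < EPS:
        return random.randrange(N_PHASES)
    return max(range(N_PHASES), key=lambda a: Q[(state, a)])


def update(state, action, reward, next_state):
    """Standard Q-learning temporal-difference update."""
    best_next = max(Q[(next_state, a)] for a in range(N_PHASES))
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])


# Example interaction with a hypothetical simulator step: the queue counts
# would normally come from the traffic simulator's detectors.
state = discretize([2, 9, 0, 4])
action = select_phase(state)
next_queues = [1, 6, 2, 3]
reward = -sum(next_queues)
update(state, action, reward, discretize(next_queues))
```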

