Traffic Signal Control Optimization Based on Fuzzy Neural Network

Author(s): Dongyao Jia, Zuo Chen

2013
Author(s): I Gede Pasek Suta Wijaya, Keiichi Uchimura, Gou Koutaki, Toshiki Nishihara, Syunta Matsumoto, ...

2018, Vol 45 (8), pp. 690-702
Author(s): Mohammad Aslani, Stefan Seipel, Marco Wiering

Traffic signal control can naturally be regarded as a reinforcement learning problem. Unfortunately, it is one of the most difficult classes of reinforcement learning problems owing to its large state space. A straightforward approach to this challenge is to control traffic signals with continuous reinforcement learning. Although continuous reinforcement learning methods have been successful in traffic signal control, they may become unstable and fail to converge to near-optimal solutions. We develop adaptive traffic signal controllers based on continuous residual reinforcement learning (CRL-TSC) that are more stable. The effect of three feature functions is empirically investigated in a microscopic traffic simulation. Furthermore, the effects of departing streets, additional actions, and the use of the spatial distribution of vehicles on the performance of CRL-TSCs are assessed. The results show that the best CRL-TSC setup reduces average travel time by 15% compared with an optimized fixed-time controller.
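The residual idea referred to in this abstract can be illustrated with a short sketch: instead of following the ordinary temporal-difference gradient, a residual-gradient update descends the squared Bellman error, differentiating through the bootstrapped target as well, which tends to make learning with function approximation more stable. The snippet below is a minimal illustration under assumed details: a two-action phase controller, hand-picked queue-length features, and a pure residual-gradient step with linear function approximation. It is not the paper's exact CRL-TSC algorithm or feature functions.

```python
import numpy as np

# Illustrative sketch only: residual-gradient Q-learning with linear
# function approximation over hand-crafted traffic features. The action
# set, features, and constants are assumptions for this example.

N_ACTIONS = 2          # e.g. keep current green phase vs. switch phase
ALPHA, GAMMA = 0.01, 0.95


def features(state):
    """Map raw intersection measurements (per-approach queue lengths and
    elapsed green time) to a feature vector; radial-basis or tile-coding
    features could be substituted here as alternative feature functions."""
    queues, elapsed_green = state
    return np.concatenate(([1.0], np.asarray(queues) / 50.0,
                           [elapsed_green / 120.0]))


class ResidualQController:
    def __init__(self, n_features):
        self.w = np.zeros((N_ACTIONS, n_features))  # one weight vector per action

    def q(self, phi, a):
        return self.w[a] @ phi

    def act(self, phi, eps=0.05):
        if np.random.rand() < eps:
            return np.random.randint(N_ACTIONS)
        return int(np.argmax([self.q(phi, a) for a in range(N_ACTIONS)]))

    def update(self, phi, a, reward, phi_next):
        # TD error with respect to the greedy next action.
        a_next = int(np.argmax([self.q(phi_next, b) for b in range(N_ACTIONS)]))
        delta = reward + GAMMA * self.q(phi_next, a_next) - self.q(phi, a)
        # Residual-gradient step: descend the squared Bellman error, so the
        # gradient flows through the bootstrapped target term as well.
        self.w[a] += ALPHA * delta * phi
        self.w[a_next] -= ALPHA * delta * GAMMA * phi_next
```

In a microscopic simulation, the state would come from detector readings at each decision point, and a natural reward choice is the negative increase in cumulative vehicle waiting time between decisions; both are illustrative assumptions here.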


2011, Vol 2-3, pp. 91-95
Author(s): Li Bi Fu, Kil To Chong

As a reinforcement learning method, the Q-learning algorithm has already achieved many significant results in the traffic signal control area. However, when the state space of the Markov decision process is very large or continuous, the computational and memory load becomes prohibitive and the problem can no longer be handled with a lookup table; this is known as the "curse of dimensionality". Therefore, this paper proposes a neural-network-based Q-learning algorithm to address this problem. Because a neural network is a very effective value-function approximator, the new method generalizes the conventional Q-learning algorithm to huge and continuous state spaces. Experiments were carried out on an isolated intersection, and the simulation results show that the proposed method improves traffic efficiency significantly compared with the conventional Q-learning algorithm.
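To make the generalization idea concrete, the sketch below replaces the Q table with a small feed-forward network, as the abstract describes. The network size, the state encoding (per-approach queue lengths plus a one-hot current phase), and the learning rate are assumptions for illustration, and the snippet uses PyTorch rather than whatever framework the authors used.

```python
import torch
import torch.nn as nn

# Minimal sketch, assuming an isolated intersection whose state is the
# queue length on each approach plus the current signal phase. All sizes
# and constants are illustrative, not the paper's exact design.

N_APPROACHES = 4
N_PHASES = 2           # e.g. north-south green vs. east-west green
GAMMA = 0.95


class QNetwork(nn.Module):
    """Approximates Q(s, a) for all phases from a continuous traffic state,
    replacing the lookup table of conventional Q-learning."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(N_APPROACHES + N_PHASES, 32),
            nn.ReLU(),
            nn.Linear(32, N_PHASES),
        )

    def forward(self, state):
        return self.net(state)


q_net = QNetwork()
optimizer = torch.optim.SGD(q_net.parameters(), lr=1e-3)


def td_update(state, action, reward, next_state):
    """One Q-learning step with the network as value-function approximator."""
    q_sa = q_net(state)[action]
    with torch.no_grad():
        # Bootstrapped target from the greedy value of the next state.
        target = reward + GAMMA * q_net(next_state).max()
    loss = (target - q_sa) ** 2
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

Each call to td_update would follow one signal-control decision in the simulator; the reward could again be chosen as the negative queueing delay accumulated since the previous decision, which is an assumption of this sketch.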

