Applying Hierarchical Clusters on Deep Reinforcement Learning Controlled Traffic Network

2021 ◽  
Vol 30 (1) ◽  
pp. 91-96
Author(s):  
Ahmed El-Mahalawy ◽  
Ahmed Shouman ◽  
Ayman El-Sayed ◽  
Fady Taher

Author(s):  
Min Chee Choy ◽  
Ruey Long Cheu ◽  
Dipti Srinivasan ◽  
Filippo Logi

A multiagent architecture for real-time coordinated signal control in an urban traffic network is introduced. The multiagent architecture consists of three hierarchical layers of controller agents: intersection, zone, and regional controllers. Each controller agent is implemented by applying artificial intelligence concepts, namely fuzzy logic, neural networks, and evolutionary algorithms. From the fuzzy rule base, each individual controller agent recommends an appropriate signal policy at the end of each signal phase. These policies are later processed in a policy repository before being selected and implemented in the traffic network. To handle the changing dynamics of the complex traffic processes within the network, an online reinforcement learning module is used to update the knowledge base and inference rules of the agents. This concept of a multiagent system with online reinforcement learning was implemented in a network of 25 signalized intersections in a microscopic traffic simulator. Initial test results showed that the multiagent system reduced average delay and total vehicle stoppage time compared with fixed-time traffic signal control.
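The control loop described above — each intersection agent recommending a signal policy at the end of a phase, with an online reinforcement update to its knowledge base — can be sketched in miniature. All names, the three candidate policies, and the scalar-weight update below are illustrative assumptions, not the paper's actual fuzzy/neural/evolutionary implementation:

```python
class IntersectionAgent:
    """Minimal sketch of one intersection controller: it recommends a
    signal policy from a small repository and adjusts its preference
    weights online from an observed reward (e.g. negative delay)."""

    POLICIES = ("extend_green", "switch_phase", "hold")

    def __init__(self, lr=0.1):
        self.lr = lr
        # one preference weight per candidate policy, updated online
        self.weights = {p: 0.0 for p in self.POLICIES}

    def recommend(self):
        # greedy recommendation at the end of a signal phase
        return max(self.weights, key=self.weights.get)

    def reinforce(self, policy, reward):
        # online reinforcement update: move the chosen policy's weight
        # toward the observed reward
        self.weights[policy] += self.lr * (reward - self.weights[policy])


agent = IntersectionAgent()
agent.reinforce("switch_phase", reward=1.0)
print(agent.recommend())  # prints "switch_phase"
```

In the paper's architecture this per-intersection logic sits under zone and regional controllers that coordinate the agents' recommendations; here only the innermost loop is shown.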


2021 ◽  
Vol 7 ◽  
pp. e428
Author(s):  
Guilherme Dytz dos Santos ◽  
Ana L.C. Bazzan

With the increase in the use of private transportation, developing more efficient ways to distribute routes in a traffic network has become more and more important. Several attempts to address this issue have already been proposed, either by using a central authority to assign routes to the vehicles, or by means of a learning process in which drivers select their best routes based on their previous experiences. The present work connects reinforcement learning to new technologies such as car-to-infrastructure communication in order to augment the drivers' knowledge and accelerate the learning process. Our method was compared with both a classical iterative approach and standard reinforcement learning without communication, and the results show that it outperforms both. Further, we performed robustness tests by allowing messages to be lost and by reducing the storage capacity of the communication devices. We were able to show that our method is not only tolerant to information loss, but also shows improved performance when not all agents receive the same information. Hence, we stress that, before deploying communication in urban scenarios, the quality and diversity of the shared information are key aspects to take into consideration.
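The core idea — drivers learning route choices from experience, accelerated by infrastructure that rebroadcasts other drivers' observed travel times — can be sketched with a tabular, epsilon-greedy learner. The two routes, their costs, the learning rates, and the "message board" broadcast are all hypothetical stand-ins for the paper's actual network and communication protocol:

```python
import random

random.seed(0)

# Hypothetical mean travel times of two alternative routes
TRUE_COST = {"A": 10.0, "B": 6.0}

def travel_time(route):
    # noisy observation of a route's travel time
    return TRUE_COST[route] + random.uniform(-1.0, 1.0)

class Driver:
    """Epsilon-greedy route chooser; Q-values estimate travel time,
    so lower is better."""
    def __init__(self, routes, lr=0.2, eps=0.1):
        self.q = {r: 0.0 for r in routes}
        self.lr, self.eps = lr, eps

    def choose(self):
        if random.random() < self.eps:
            return random.choice(list(self.q))
        return min(self.q, key=self.q.get)

    def update(self, route, t):
        # move the estimate toward the observed travel time
        self.q[route] += self.lr * (t - self.q[route])

drivers = [Driver(list(TRUE_COST)) for _ in range(20)]
for _ in range(200):
    board = []  # observations collected by roadside infrastructure
    for d in drivers:
        r = d.choose()
        t = travel_time(r)
        d.update(r, t)       # learn from own experience
        board.append((r, t))
    # car-to-infrastructure step: rebroadcast everyone's observations,
    # so each driver also learns from trips it did not take itself
    for d in drivers:
        for r, t in board:
            d.update(r, t)

# most drivers converge on the faster route B
print(sum(d.choose() == "B" for d in drivers))
```

Without the rebroadcast loop, each driver must sample both routes itself; with it, one driver's experience propagates to all, which is the acceleration effect the abstract describes. Dropping messages from `board` would mimic the paper's information-loss robustness test.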


Decision ◽  
2016 ◽  
Vol 3 (2) ◽  
pp. 115-131 ◽  
Author(s):  
Helen Steingroever ◽  
Ruud Wetzels ◽  
Eric-Jan Wagenmakers
