Optimal Signal Timing of Single Intersection for Traffic Emission Control

2014, Vol. 587-589, pp. 2137-2140
Author(s): Xin Li, Feng Chen

Traffic emissions are among the main sources of urban air pollution, and the signal control scheme at an intersection has a significant influence on vehicle emissions. Research on low-emission traffic signal control has therefore become a focus of intelligent transportation. Typical emission-oriented control methods optimize the average delay and the number of stops. However, it is extremely difficult to compute the delay and the number of stops analytically when an initial queue is present at the intersection. To address this problem, we propose a traffic emission control algorithm based on reinforcement learning. Simulation experiments were carried out in microscopic traffic simulation software. Compared with the Hideki emission control scheme, the experimental results show that the reinforcement learning algorithm is more effective, reducing average vehicle emissions by 12.2% when the intersection is highly saturated.
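
The abstract does not detail the algorithm, so the following is a minimal sketch, assuming a tabular Q-learning controller that picks the next signal phase and is rewarded for low emissions; the phase set, state encoding, reward weights, and environment interface are hypothetical, not the authors' code.

```python
# Minimal sketch of a tabular Q-learning signal controller with an
# emission-penalizing reward. All names and weights are illustrative.
import random
from collections import defaultdict

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1
PHASES = [0, 1]  # e.g., 0 = north-south green, 1 = east-west green

# Q-table: state -> estimated return of serving each phase next
Q = defaultdict(lambda: [0.0] * len(PHASES))

def choose_phase(state):
    """Epsilon-greedy phase selection."""
    if random.random() < EPSILON:
        return random.randrange(len(PHASES))
    values = Q[state]
    return values.index(max(values))

def reward(emissions_g, delay_s, stops, w_e=1.0, w_d=0.1, w_s=0.5):
    """Negative weighted cost: penalize emissions as well as delay and stops."""
    return -(w_e * emissions_g + w_d * delay_s + w_s * stops)

def update(state, phase, r, next_state):
    """One-step Q-learning update."""
    best_next = max(Q[next_state])
    Q[state][phase] += ALPHA * (r + GAMMA * best_next - Q[state][phase])
```

In a full controller, the state would encode queue lengths (including any initial queue) per approach, which is exactly the situation the abstract notes is hard to handle with closed-form delay formulas.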

2020, Vol. 34 (01), pp. 1153-1160
Author(s): Xinshi Zang, Huaxiu Yao, Guanjie Zheng, Nan Xu, Kai Xu, ...

Using reinforcement learning for traffic signal control has attracted increasing interest recently. Various value-based reinforcement learning methods have been proposed for this classical transportation problem and have achieved better performance than traditional transportation methods. However, current reinforcement learning models rely on large amounts of training data and computational resources, which may have bad consequences (e.g., traffic jams or accidents) in the real world. In traffic signal control, some algorithms have been proposed to enable quick learning from scratch, but little attention has been paid to learning by transferring and reusing learned experience. In this paper, we propose a novel framework, named MetaLight, to speed up the learning process in new scenarios by leveraging the knowledge learned from existing scenarios. MetaLight is a value-based meta-reinforcement learning workflow based on the representative gradient-based meta-learning algorithm MAML, which includes periodically alternating individual-level and global-level adaptation. Moreover, MetaLight improves the state-of-the-art reinforcement learning model FRAP for traffic signal control by optimizing its model structure and updating paradigm. Experiments on four real-world datasets show that the proposed MetaLight not only adapts more quickly and stably in new traffic scenarios but also achieves better performance.
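
As a rough illustration of the meta-learning pattern involved, here is a minimal first-order MAML-style loop with toy quadratic scenario losses; in MetaLight itself the inner objective would be the value-based (FRAP-style) loss on each traffic scenario, so everything below is a simplified assumption rather than the paper's algorithm.

```python
# First-order MAML sketch: adapt individually per scenario (inner loop), then
# update the shared initialization from the post-adaptation gradients (outer loop).
import numpy as np

INNER_LR, META_LR = 0.05, 0.01

def grad(theta, target):
    """Gradient of the toy loss ||theta - target||^2 for one scenario."""
    return 2.0 * (theta - target)

def maml_step(theta, scenario_targets):
    """One meta-update: individual-level adaptation, then global-level aggregation."""
    meta_grad = np.zeros_like(theta)
    for target in scenario_targets:
        adapted = theta - INNER_LR * grad(theta, target)  # individual-level step
        meta_grad += grad(adapted, target)                # evaluate after adaptation
    return theta - META_LR * meta_grad / len(scenario_targets)

theta = np.zeros(4)                                 # shared initialization
scenarios = [np.random.randn(4) for _ in range(3)]  # stand-ins for traffic scenarios
for _ in range(200):
    theta = maml_step(theta, scenarios)
```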


2018, Vol. 2018, pp. 1-11
Author(s): Binbin Jing, Jianmin Xu

In existing bandwidth-based methods, through traffic flows are treated as the coordination objects and are offered progression bands accordingly. However, at certain times or nodes in the road network, when the left-turn traffic flows have a higher priority than the through traffic flows, it is inappropriate to keep providing the progression bands to the through traffic flows; the left-turn traffic flows should instead be treated as the coordination objects to achieve better control. Considering this, a general maximum progression model that concurrently synchronizes left-turn and through traffic flows is established using a time-space diagram. The general model can handle all patterns of the left-turn phases by introducing two new binary variables into the constraints; that is, these variables allow all left-turn phase patterns to be handled by a single formulation. The validity of the general model is verified using measures of effectiveness (average delay time, average vehicle stops, and average travel time) obtained from the traffic simulation software VISSIM. The results show that, compared with MULTIBAND, the proposed general model can effectively reduce delay time, vehicle stops, and travel time and thus achieve better traffic control.
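
For orientation only, the following is a heavily simplified sketch of the kind of mixed-integer bandwidth formulation such models use; the symbols and constraints are illustrative assumptions in the spirit of MAXBAND-type models, not the authors' general model.

```latex
% Conceptual sketch only (not the paper's exact formulation): maximize a
% progression bandwidth b on the time-space diagram, with a binary variable
% \delta_i at each intersection i selecting whether the left-turn green
% g_i^L serves the coordinated movement. Quantities are in cycle-length units.
\begin{align}
\max_{b,\,w_i,\,\delta_i}\quad & b \\
\text{s.t.}\quad
  & w_i + b \le g_i^{T} + \delta_i\, g_i^{L}
    && \text{(band fits inside the usable green)} \\
  & \theta_{i+1} = \theta_i + t_i + w_i - w_{i+1} + m_i,\quad m_i \in \mathbb{Z}
    && \text{(offsets consistent with travel time } t_i\text{)} \\
  & \delta_i \in \{0,1\}
\end{align}
```

Here b is the bandwidth, w_i the start of the band relative to the start of green at intersection i, θ_i the signal offset, g_i^T and g_i^L the through and left-turn green durations, and m_i an integer accounting for cycle periodicity.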


2021, Vol. 11 (1), pp. 147-154
Author(s): Dmitriy Stupnikov, Andrey Tolstyh, Sergey Malyukov, Aleksey Aksenov, Sergey Novikov

Reinforcement learning is a type of machine learning. These algorithms interact with a model of the environment in which the robotic system is intended to be used and make it possible to obtain relatively simple approximations of effective sets of system actions for achieving the set goal. Using reinforcement learning allows the model to be trained on server hardware, while the final system uses already trained neural networks, whose inference cost depends directly on their topology. In the presented work, a static strength calculation of a prototype robotic manipulator for bench research on reinforcement learning systems has been carried out. The choice of design features and materials has been substantiated, and the main units and design features have been considered. The studies were carried out in the SolidWorks Simulation software. A prototype of a robotic manipulator with a sufficiently high safety margin was obtained. It is concluded that the main stress concentrator is the junction of the eyelet and the platform; however, the maximum stress value was 38.804 kgf/cm², which is insignificant. The maximum resulting displacement is concentrated in the upper part of the eyelet and shifts depending on the position of the manipulator arm. The maximum recorded displacement is 0.073 mm, which is negligible.
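
As a quick sanity check on the reported magnitude (assuming the stress unit is kgf/cm²), converting to megapascals makes the "insignificant" assessment concrete:

```python
# Unit check: convert the reported maximum stress to MPa. The result is small
# compared with the yield strength of typical structural materials
# (on the order of hundreds of MPa).
KGF_PER_CM2_TO_MPA = 0.0980665  # 1 kgf/cm^2 = 0.0980665 MPa

max_stress_kgf_cm2 = 38.804
print(f"{max_stress_kgf_cm2 * KGF_PER_CM2_TO_MPA:.2f} MPa")  # ~3.81 MPa
```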


Complexity
2020, Vol. 2020, pp. 1-12
Author(s): Haiying Che, Zixing Bai, Rong Zuo, Honglei Li

As more businesses run online, the scale of data centers is increasing dramatically. Task scheduling with traditional heuristic algorithms faces the challenges of the uncertainty and complexity of the data center environment, so it is urgent to use new technology to optimize task scheduling and ensure efficient task execution. This study aimed to build a new scheduling model with a deep reinforcement learning algorithm that integrates task scheduling with resource-utilization optimization. The proposed scheduling model was trained, tested, and compared with classical scheduling algorithms on real data center datasets to show its effectiveness and efficiency. The experiments showed that the proposed algorithm outperformed the classical baselines on the key performance metrics: average task delay time, task distribution across delay-time levels, and task congestion degree.
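
The abstract does not specify the model, so the following is a minimal sketch of how such a scheduling problem is commonly framed as an MDP for a deep RL agent; the state features, action space, and reward weights are hypothetical.

```python
# Sketch of a scheduling MDP: the state combines the pending task's features
# with current machine utilizations, the action picks a machine, and the
# reward penalizes capacity overrun (a proxy for delay) and overall congestion.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Task:
    duration: float
    cpu_demand: float

@dataclass
class SchedulingEnv:
    machine_capacity: List[float]
    machine_load: List[float] = field(default_factory=list)

    def __post_init__(self):
        self.machine_load = [0.0] * len(self.machine_capacity)

    def state(self, task: Task) -> List[float]:
        """Observation fed to the policy/Q-network: task features + utilizations."""
        utilization = [l / c for l, c in zip(self.machine_load, self.machine_capacity)]
        return [task.duration, task.cpu_demand] + utilization

    def step(self, task: Task, machine: int) -> float:
        """Assign the task to a machine and return the reward."""
        overrun = max(0.0, self.machine_load[machine] + task.cpu_demand
                      - self.machine_capacity[machine])
        self.machine_load[machine] += task.cpu_demand
        congestion = sum(l / c for l, c in zip(self.machine_load, self.machine_capacity))
        return -(overrun + 0.1 * congestion)
```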


Author(s): Min Chee Choy, Ruey Long Cheu, Dipti Srinivasan, Filippo Logi

A multiagent architecture for real-time coordinated signal control in an urban traffic network is introduced. The architecture consists of three hierarchical layers of controller agents: intersection, zone, and regional controllers. Each controller agent is implemented by applying artificial intelligence concepts, namely fuzzy logic, neural networks, and evolutionary algorithms. From its fuzzy rule base, each individual controller agent recommends an appropriate signal policy at the end of each signal phase. These policies are then processed in a policy repository before being selected and implemented in the traffic network. To handle the changing dynamics of the complex traffic processes within the network, an online reinforcement learning module is used to update the knowledge base and inference rules of the agents. This concept of a multiagent system with online reinforcement learning was implemented in a network of 25 signalized intersections in a microscopic traffic simulator. Initial test results showed that the multiagent system reduced average delay and total vehicle stoppage time compared with fixed-time traffic signal control.
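
As a structural illustration only, the sketch below shows one hypothetical way the lowest two layers of such a hierarchy could be wired together, with fuzzy rules producing per-intersection policy recommendations and a zone-level arbitration step; the interfaces and rule forms are assumptions, not the authors' implementation.

```python
# Hypothetical two-layer slice of the hierarchy: intersection agents recommend
# signal policies from fuzzy rules; a zone controller arbitrates among them.
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Policy:
    name: str
    green_extension_s: float  # illustrative policy parameter

class IntersectionAgent:
    def __init__(self, rules: Dict[str, Callable[[float, float], float]]):
        # Fuzzy rules: (occupancy, flow) -> activation level in [0, 1]
        self.rules = rules

    def recommend(self, occupancy: float, flow: float) -> Policy:
        # Recommend the policy whose rule fires most strongly at this phase end.
        name = max(self.rules, key=lambda n: self.rules[n](occupancy, flow))
        return Policy(name, green_extension_s=5.0 if name == "extend" else 0.0)

class ZoneController:
    def arbitrate(self, recommendations: List[Policy]) -> Policy:
        # Placeholder arbitration: favor the most commonly recommended policy.
        names = [p.name for p in recommendations]
        return max(recommendations, key=lambda p: names.count(p.name))
```

In the described system, an online reinforcement learning module would additionally adjust the agents' rule bases as traffic conditions drift.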


2018, Vol. 10 (12), pp. 168781401881542
Author(s): Yun Li, Qiyan Cai, Yujie Xu, Weihua Shi, Yibao Chen

This article develops an isolated tram signal priority control strategy based on logic rules for modern tram systems. The designed method ensures that the intersection operates properly under tram priority and avoids vehicle queue overflow when the tram passes. The method consists of two parts: (1) the detector locations are determined, including the upstream detector, the upstream trigger detector, the downstream detector, and the queuing detector on the entry approach to the intersection; and (2) the corresponding priority logic for signal control is designed. The proposed method is experimentally examined at a tram intersection in Huaian, China. In the experiment, the detector layout scheme and the optimal priority control model are simulated and verified using the VisVAP module of the Vissim simulation software. The experimental results show that the designed scheme significantly decreases average delay compared with the fixed-time signal control method, while also preventing vehicle queue overflow compared with the absolute-priority actuated control scheme.
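
The paper's exact decision table is not given in the abstract; the following is a minimal sketch of rule-based priority logic in that spirit, with hypothetical detector names, actions, and conditions.

```python
# Sketch of VisVAP-style priority rules: grant tram priority when the trigger
# detector fires, unless the queuing detector signals an overflow risk.
from dataclasses import dataclass

@dataclass
class Detectors:
    upstream_trigger: bool   # tram approaching, priority requested
    downstream: bool         # tram has cleared the intersection
    queue_occupied: bool     # queuing detector on the entry approach is occupied

def priority_action(det: Detectors, tram_phase_green: bool) -> str:
    if det.downstream:
        return "end_priority"            # tram has passed; resume the normal plan
    if det.upstream_trigger:
        if det.queue_occupied:
            return "serve_queue_first"   # clear the queue to avoid overflow
        return "extend_green" if tram_phase_green else "early_green"
    return "normal_operation"
```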


2021, Vol. 6 (7(57)), pp. 16-18
Author(s): Ivan Vladimirovich Kondratov

Real-time adaptive traffic control is an important problem in the modern world. Historically, various optimization methods have been used to build adaptive traffic signal control systems. Recently, reinforcement learning has advanced, and various papers have shown the effectiveness of Deep Q-Learning (DQN) in solving traffic control problems and providing real-time adaptive control, decreasing traffic pressure and lowering average travel time for drivers. In this paper we consider the problem of traffic signal control, present the basics of reinforcement learning, and review the latest results in this area.
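
For reference, the DQN objective mentioned above has the following general form (this is the standard formulation, not one taken from any specific reviewed paper):

```latex
% Standard DQN loss: a Q-network with parameters \theta is trained to minimize
% the temporal-difference error against a target network with parameters \theta^-.
\begin{equation}
L(\theta) = \mathbb{E}_{(s,a,r,s') \sim \mathcal{D}}
\Big[ \big( r + \gamma \max_{a'} Q(s', a'; \theta^{-}) - Q(s, a; \theta) \big)^{2} \Big]
\end{equation}
```

In the traffic setting, s typically encodes per-approach queue lengths or pressure, a is the signal phase to activate, r is a reward such as negative delay, γ is the discount factor, and D is a replay buffer of observed transitions.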

