Proactive Car-Following Using Deep-Reinforcement Learning

Author(s):  
Yi-Tung Yen ◽  
Jyun-Jhe Chou ◽  
Chi-Sheng Shih ◽  
Chih-Wei Chen ◽  
Pei-Kuei Tsung
2021 ◽  
Vol 2083 (3) ◽  
pp. 032008
Author(s):  
Jie Ren

Based on reinforcement learning, this paper establishes a new car-following model for driverless vehicles. A DQN algorithm and a traffic simulator are used to train the agent, and the car-following model is obtained from the trained policy. In a precise and controllable experimental environment, the model meets the preset optimization targets and completes the car-following behavior. This study contributes to the future development of unmanned vehicles.
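
A minimal sketch of the kind of DQN training step the abstract describes, assuming a three-dimensional state (gap, relative speed, ego speed) and a small set of discrete accelerations; the network sizes, hyperparameters, and action set are illustrative assumptions, not the paper's design:

import random
from collections import deque

import torch
import torch.nn as nn

ACTIONS = [-2.0, -1.0, 0.0, 1.0, 2.0]  # candidate accelerations in m/s^2 (assumed)

class QNet(nn.Module):
    """Small MLP mapping a car-following state to one Q-value per action."""
    def __init__(self, state_dim=3, n_actions=len(ACTIONS)):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, n_actions),
        )
    def forward(self, s):
        return self.net(s)

q, q_target = QNet(), QNet()
q_target.load_state_dict(q.state_dict())
opt = torch.optim.Adam(q.parameters(), lr=1e-3)
buffer = deque(maxlen=50_000)  # replay buffer of (s, a, r, s', done)
gamma = 0.99

def act(state, eps=0.1):
    """Epsilon-greedy selection over the discrete accelerations."""
    if random.random() < eps:
        return random.randrange(len(ACTIONS))
    with torch.no_grad():
        return int(q(torch.tensor(state, dtype=torch.float32)).argmax())

def train_step(batch_size=64):
    """One DQN update: TD targets come from the frozen target network."""
    if len(buffer) < batch_size:
        return
    s, a, r, s2, done = zip(*random.sample(buffer, batch_size))
    s = torch.tensor(s, dtype=torch.float32)
    s2 = torch.tensor(s2, dtype=torch.float32)
    r = torch.tensor(r, dtype=torch.float32)
    a = torch.tensor(a, dtype=torch.int64)
    done = torch.tensor(done, dtype=torch.float32)
    q_sa = q(s).gather(1, a.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        target = r + gamma * (1.0 - done) * q_target(s2).max(1).values
    loss = nn.functional.smooth_l1_loss(q_sa, target)
    opt.zero_grad(); loss.backward(); opt.step()

The traffic simulator in the paper would supply the transitions pushed into the buffer; periodically copying q into q_target completes the standard DQN loop.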


2019 ◽  
Vol 2 (5) ◽  
Author(s):  
Yuankai Wu ◽  
Huachun Tan ◽  
Jiankun Peng ◽  
Bin Ran

Car-following (CF) models are an appealing research area because they fundamentally describe the longitudinal interactions of vehicles on the road and contribute significantly to the understanding of traffic flow. There is an emerging trend of using data-driven methods to build CF models. One challenge for data-driven CF models is achieving optimal longitudinal driving behavior, because many bad driving behaviors are learned from human drivers when training in a supervised manner. In this study, a deep reinforcement learning (DRL) technique, trust region policy optimization (TRPO), is used to build a DRL-based CF model for electric vehicles (EVs). The proposed CF model learns optimal driving behavior by itself in simulation. Experiments on following a standard driving cycle show that the DRL model outperforms a traditional CF model in terms of electricity consumption.
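
The abstract does not spell out the reward that trades gap-keeping against electricity use, so the following is only a plausible sketch; the power model (P = m·a·v with regeneration ignored), weights, and safety threshold are all invented for illustration:

def cf_reward(gap_m, ego_speed_mps, accel_mps2,
              desired_gap_m=25.0, w_gap=1.0, w_energy=0.05):
    """Illustrative EV car-following reward: track a desired gap while
    penalizing instantaneous traction power (a crude stand-in for a
    real powertrain model)."""
    if gap_m < 2.0:                       # near-collision: large penalty
        return -100.0
    mass_kg = 1500.0                      # assumed vehicle mass
    gap_cost = w_gap * ((gap_m - desired_gap_m) / desired_gap_m) ** 2
    power_kw = max(mass_kg * accel_mps2 * ego_speed_mps, 0.0) / 1000.0
    return -(gap_cost + w_energy * power_kw)

With a Gym-style environment exposing such a reward, an off-the-shelf TRPO implementation (for example, the one in sb3-contrib) could be trained in place of a hand-rolled algorithm.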


2021 ◽  
Vol 2021 ◽  
pp. 1-17
Author(s):  
Tong Zhu ◽  
Xiaohu Li ◽  
Wei Fan ◽  
Changshuai Wang ◽  
Haoxue Liu ◽  
...  

Work zones are frequently congested sections that act as freeway bottlenecks. Connected and autonomous vehicle (CAV) trajectory optimization can improve operating efficiency in bottleneck areas by harmonizing vehicles' maneuvers. This study presents a joint trajectory optimization of cooperative lane-changing, merging, and car-following actions for CAV control at a local merging point together with upstream points. A multiagent reinforcement learning (MARL) method is applied: one agent provides a merging advisory service at the merging point and controls inner-lane vehicles' headways so that outer-lane vehicles can merge smoothly, while the other agents provide lane-changing advisory services at advance lane-changing points, controlling how vehicles change lanes in advance and adjust their headways, similarly to and jointly with the merging advisory service. To unite all agents, the coordination graph (CG) method is applied to seek the global optimum, overcoming the exponential growth of the joint action space in MARL. An online simulation platform is established using MATLAB and the VISSIM COM interface. The simulation results show that MARL is effective for online computation with timely responses. More importantly, comparisons across scenarios demonstrate that the proposed system produces smoother vehicle trajectories in all controlled sections, not only in the merging area, indicating that it can achieve better traffic conditions in freeway work zone areas.
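
The coordination-graph step, choosing a joint action that maximizes a sum of pairwise payoffs, can be illustrated on a toy graph; the agents, edges, and payoff tables below are invented, and the exhaustive search stands in for the variable-elimination procedure that avoids the exponential blow-up at scale:

import itertools

AGENTS = ["merge", "lc_1", "lc_2"]            # illustrative advisory agents
ACTIONS = {a: [0, 1] for a in AGENTS}          # e.g., 0 = hold headway, 1 = open gap
EDGES = {                                      # pairwise payoffs f_ij[a_i][a_j]
    ("merge", "lc_1"): [[1.0, 0.2], [0.5, 2.0]],
    ("lc_1", "lc_2"): [[0.8, 1.5], [0.0, 1.0]],
}

def joint_value(assignment):
    """Global payoff = sum of edge payoffs under one joint action."""
    return sum(table[assignment[i]][assignment[j]]
               for (i, j), table in EDGES.items())

def best_joint_action():
    """Brute-force maximization over the joint action space."""
    best, best_v = None, float("-inf")
    for combo in itertools.product(*(ACTIONS[a] for a in AGENTS)):
        assignment = dict(zip(AGENTS, combo))
        v = joint_value(assignment)
        if v > best_v:
            best, best_v = assignment, v
    return best, best_v

print(best_joint_action())  # -> ({'merge': 1, 'lc_1': 1, 'lc_2': 1}, 3.0)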


Author(s):  
Hongbo Gao ◽  
Guanya Shi ◽  
Kelong Wang ◽  
Guotao Xie ◽  
Yuchao Liu

Purpose: Over the past decades, significant research effort has been dedicated to the development of autonomous vehicles. The decision-making system, which is responsible for driving safety, is one of the most important technologies for autonomous vehicles. The purpose of this study is to use a reinforcement learning method, combined with car-following data collected in a driving simulator, to obtain an explainable car-following algorithm and establish an anthropomorphic car-following model.

Design/methodology/approach: This paper proposes a car-following method based on reinforcement learning for autonomous-vehicle decision-making. An approximator is used to approximate the value function by determining the state space, action space, and state transition relationship, and a gradient descent method is used to solve for its parameters.

Findings: Car-following under certain driving styles is first achieved through the simulation of step conditions. The results show that the reinforcement learning system adapts well to car following, and the explicit calculation of the reward R gives it a degree of explainability and stability.

Originality/value: The simulation results show that the proposed reinforcement learning car-following method for autonomous-vehicle decision-making realizes reliable car-following decisions, with the advantages of simple samples, a small amount of data, a simple algorithm, and good robustness.
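
A minimal sketch of the value-function approximation and gradient-descent step the Design section describes, using a linear approximator with semi-gradient TD(0); the feature set, learning rate, and sample transition are assumptions, not the paper's design:

import numpy as np

def features(gap, rel_speed, ego_speed):
    """Hand-picked car-following features (the paper's state design is not given)."""
    return np.array([1.0, gap, rel_speed, ego_speed, gap * rel_speed])

w = np.zeros(5)           # parameters of the linear value function V(s) = w . phi(s)
alpha, gamma = 0.01, 0.95

def td0_update(w, s, r, s_next):
    """Semi-gradient TD(0): w += alpha * (r + gamma*V(s') - V(s)) * phi(s)."""
    phi, phi_next = features(*s), features(*s_next)
    td_error = r + gamma * (w @ phi_next) - (w @ phi)
    return w + alpha * td_error * phi

# One illustrative transition: the gap closes slightly, small smoothness reward.
w = td0_update(w, (25.0, -0.5, 20.0), 0.1, (24.5, -0.4, 20.1))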


2018 ◽  
Vol 15 (6) ◽  
pp. 172988141881716 ◽  
Author(s):  
Hongbo Gao ◽  
Guanya Shi ◽  
Guotao Xie ◽  
Bo Cheng

Although much has been achieved in the field of automated driving, some problems remain to be solved. One of them is the difficulty of designing a car-following decision-making system for complex traffic conditions. In recent years, reinforcement learning has shown potential for solving sequential decision optimization problems. In this article, we establish the reward function R for each driver's data based on an inverse reinforcement learning algorithm, visualize R, and then analyze driving characteristics and following strategies. Finally, we show the efficiency of the proposed method through simulation in a highway environment.
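
A generic linear-IRL step consistent with the abstract, under the assumption R(s) = w·φ(s): the weight vector is pushed toward the demonstrated driver's discounted feature expectations and away from the current policy's, as in feature-matching IRL. The two-dimensional features and the trajectories are invented, and this is not necessarily the authors' exact algorithm:

import numpy as np

def feature_expectations(trajectories, phi, gamma=0.95):
    """Discounted feature counts averaged over a set of state trajectories."""
    mu = None
    for traj in trajectories:
        acc = sum((gamma ** t) * phi(s) for t, s in enumerate(traj))
        mu = acc if mu is None else mu + acc
    return mu / len(trajectories)

def irl_weight_update(mu_expert, mu_policy):
    """Max-margin-style direction: make R = w . phi score the expert's
    behavior above the current policy's."""
    w = mu_expert - mu_policy
    norm = np.linalg.norm(w)
    return w / norm if norm > 0 else w

# Illustrative 2-D features per state: (time headway in s, |acceleration| in m/s^2).
phi = lambda s: np.array([s[0], abs(s[1])])
expert = [[(1.8, 0.3), (1.7, 0.2)]]   # demonstrated driver data (invented)
learner = [[(1.0, 1.2), (0.9, 1.5)]]  # current policy rollouts (invented)
w = irl_weight_update(feature_expectations(expert, phi),
                      feature_expectations(learner, phi))
print("reward weights:", w)            # visualizing R over states would follow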

