Meta Reinforcement Learning-Based Lane Change Strategy for Autonomous Vehicles

Author(s): Fei Ye, Pin Wang, Ching-Yao Chan, Jiucai Zhang


Electronics, 2019, Vol. 8 (5), pp. 543
Author(s): HongIl An, Jae-il Jung

Lane changing systems have consistently received attention in the fields of vehicular communication and autonomous vehicles. In this paper, we propose a lane change system that combines deep reinforcement learning and vehicular communication. A host vehicle attempting a lane change receives state information from itself and from a remote vehicle, both of which are equipped with vehicular communication devices. A deep deterministic policy gradient algorithm in the host vehicle determines its high-level action from this state information. The proposed system learns straight-line driving and collision avoidance actions without vehicle dynamics knowledge. Finally, we consider the update period for the state information from the host and remote vehicles.
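
As a rough illustration of the approach described above, the sketch below shows a DDPG-style actor that maps a state vector assembled from the host and remote vehicles' V2V messages to a single continuous high-level action. The network sizes, the message fields, and the action semantics are illustrative assumptions, not the paper's exact design.

```python
# A minimal sketch of a DDPG-style actor for the lane-change setting above.
# State fields and network sizes are assumptions for illustration only.
import torch
import torch.nn as nn

class Actor(nn.Module):
    def __init__(self, state_dim: int = 8, action_dim: int = 1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, action_dim), nn.Tanh(),  # bounded action in [-1, 1]
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.net(state)

def build_state(host: dict, remote: dict) -> torch.Tensor:
    """Concatenate host and remote vehicle fields received over V2V (hypothetical fields)."""
    fields = ["x", "y", "speed", "lane"]
    return torch.tensor([host[f] for f in fields] + [remote[f] for f in fields],
                        dtype=torch.float32)

# Usage: pick a high-level action from the latest V2V update.
actor = Actor()
state = build_state({"x": 0.0, "y": 0.0, "speed": 12.0, "lane": 0},
                    {"x": 15.0, "y": 3.5, "speed": 11.0, "lane": 1})
action = actor(state.unsqueeze(0))  # e.g. a lateral command driving the lane change
```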


Sensors, 2021, Vol. 21 (4), pp. 1523
Author(s): Nikita Smirnov, Yuzhou Liu, Aso Validi, Walter Morales-Alvarez, Cristina Olaverri-Monreal

Autonomous vehicles are expected to display human-like behavior, at least to the extent that their decisions can be intuitively understood by other road users. If this is not the case, the coexistence of manual and autonomous vehicles in a mixed environment might affect road user interactions negatively and might jeopardize road safety. It is therefore highly important to design algorithms that are capable of analyzing human decision-making processes and of reproducing them. In this context, lane-change maneuvers have been studied extensively. However, not all potential scenarios have been considered, since most works have focused on highway rather than urban scenarios. We contribute to the field of research by investigating a particular urban traffic scenario in which an autonomous vehicle needs to determine the level of cooperation of the vehicles in the adjacent lane in order to proceed with a lane change. To this end, we present a game theory-based decision-making model for lane changing in congested urban intersections. The model takes as input driving-related parameters of the vehicles in the intersection before they come to a complete stop. We validated the model by relying on the Co-AutoSim simulator. We compared the prediction model outcomes with actual participant decisions, i.e., whether they allowed the autonomous vehicle to drive in front of them. The results are promising, with the prediction accuracy being 100% in all of the cases in which the participants allowed the lane change and 83.3% in the other cases. The false predictions were due to delays in resuming driving after the traffic light turned green.
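
To make the decision logic concrete, the sketch below illustrates a game-theoretic merge decision in the spirit of the model described above: the autonomous vehicle estimates the probability that the adjacent-lane driver will cooperate (yield) and compares the expected payoffs of merging now versus waiting. The payoff values and the cooperation estimate are illustrative assumptions, not the published model.

```python
# A minimal sketch of a game-theoretic lane-change decision.
# Payoffs for the AV, indexed by (AV action, human action); values are hypothetical.
PAYOFF = {
    ("merge", "yield"):     1.0,   # successful, smooth lane change
    ("merge", "not_yield"): -5.0,  # conflict / forced abort
    ("wait",  "yield"):     -0.5,  # missed opportunity
    ("wait",  "not_yield"):  0.0,  # neutral: keep waiting
}

def expected_payoff(av_action: str, p_coop: float) -> float:
    """Expected utility of an AV action given the estimated cooperation probability."""
    return (p_coop * PAYOFF[(av_action, "yield")]
            + (1 - p_coop) * PAYOFF[(av_action, "not_yield")])

def decide(p_coop: float) -> str:
    """Choose the AV action with the higher expected payoff."""
    return max(("merge", "wait"), key=lambda a: expected_payoff(a, p_coop))

# Example: cooperation probability inferred from the other vehicle's behavior
# before it comes to a complete stop (hypothetical estimate).
print(decide(p_coop=0.8))  # -> "merge"
print(decide(p_coop=0.3))  # -> "wait"
```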


2021, Vol. 11 (4), pp. 1514
Author(s): Quang-Duy Tran, Sang-Hoon Bae

To reduce the impact of congestion, it is necessary to improve our overall understanding of the influence of autonomous vehicles. Recently, deep reinforcement learning has become an effective means of solving complex control tasks. Accordingly, we present an advanced deep reinforcement learning approach that investigates how leading autonomous vehicles affect an urban network in a mixed-traffic environment. We also suggest a set of hyperparameters for achieving better performance. Firstly, we feed a set of hyperparameters into our deep reinforcement learning agents. Secondly, we investigate the leading-autonomous-vehicle experiment in the urban network with different autonomous vehicle penetration rates. Thirdly, the advantage of leading autonomous vehicles is evaluated against all-manual-vehicle and leading-manual-vehicle experiments. Finally, proximal policy optimization with a clipped objective is compared to proximal policy optimization with an adaptive Kullback–Leibler penalty to verify the superiority of the proposed hyperparameters. We demonstrate that fully automated traffic increased the average speed by a factor of 1.27 compared with the all-manual-vehicle experiment. Our proposed method becomes significantly more effective at higher autonomous vehicle penetration rates. Furthermore, the leading autonomous vehicles could help to mitigate traffic congestion.
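
For readers unfamiliar with the two PPO variants compared above, the sketch below contrasts the clipped surrogate objective with the adaptive Kullback–Leibler penalty objective. The hyperparameters (epsilon, beta, target KL) and the simple KL estimate are illustrative assumptions rather than the authors' settings.

```python
# A minimal sketch of the two PPO surrogate objectives being compared.
import torch

def ppo_clip_loss(logp_new, logp_old, advantage, epsilon=0.2):
    """Clipped surrogate: limit how far the probability ratio can move the update."""
    ratio = torch.exp(logp_new - logp_old)                  # pi_new / pi_old
    clipped = torch.clamp(ratio, 1 - epsilon, 1 + epsilon)
    return -torch.min(ratio * advantage, clipped * advantage).mean()

def ppo_kl_penalty_loss(logp_new, logp_old, advantage, beta=1.0):
    """KL-penalty surrogate: subtract a weighted divergence term instead of clipping."""
    ratio = torch.exp(logp_new - logp_old)
    approx_kl = (logp_old - logp_new).mean()                 # crude KL estimate
    return -(ratio * advantage).mean() + beta * approx_kl

def adapt_beta(beta, measured_kl, target_kl=0.01):
    """Adapt the penalty weight between epochs so the measured KL stays near a target."""
    if measured_kl > 1.5 * target_kl:
        return beta * 2.0
    if measured_kl < target_kl / 1.5:
        return beta / 2.0
    return beta
```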


Author(s): Kangqiang Ouyang, Yong Wang, Yanqiang Li, Yunhai Zhu

Author(s): Óscar Pérez-Gil, Rafael Barea, Elena López-Guillén, Luis M. Bergasa, Carlos Gómez-Huélamo, ...

Nowadays, Artificial Intelligence (AI) is growing by leaps and bounds in almost all fields of technology, and Autonomous Vehicle (AV) research is one of them. This paper proposes the use of algorithms based on Deep Learning (DL) in the control layer of an autonomous vehicle. More specifically, Deep Reinforcement Learning (DRL) algorithms such as Deep Q-Network (DQN) and Deep Deterministic Policy Gradient (DDPG) are implemented in order to compare their results. The aim of this work is to obtain, by applying a DRL algorithm, a trained model capable of sending control commands to the vehicle so that it navigates properly and efficiently along a determined route. In addition, for each of the algorithms, several agents are presented as a solution, each using different data sources to compute the vehicle control commands. For this purpose, the open-source simulator CARLA is used, providing the system with the ability to perform a multitude of tests without any risk in a hyper-realistic urban simulation environment, something that is unthinkable in the real world. The results obtained show that both DQN and DDPG reach the goal, but DDPG obtains better performance. DDPG performs trajectories very similar to those of a classic controller such as LQR. In both cases the RMSE is lower than 0.1 m when following trajectories ranging from 180 to 700 m. To conclude, some conclusions and future work are discussed.
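
As a small illustration of the tracking metric reported above, the sketch below computes the RMSE of the deviation between a driven path and a reference route by measuring the distance from each driven point to its nearest reference point. This nearest-point formulation is a common simplification and is assumed here rather than taken from the paper.

```python
# A minimal sketch of a trajectory-tracking RMSE metric (nearest-point deviation).
import numpy as np

def tracking_rmse(driven: np.ndarray, reference: np.ndarray) -> float:
    """RMSE of the distance from each driven point (N, 2) to its nearest reference point (M, 2)."""
    d = np.linalg.norm(driven[:, None, :] - reference[None, :, :], axis=-1)
    nearest = d.min(axis=1)  # distance to the closest reference point
    return float(np.sqrt(np.mean(nearest ** 2)))

# Example: a densely sampled straight 200 m reference route and a slightly noisy driven path.
ref = np.stack([np.linspace(0.0, 200.0, 4000), np.zeros(4000)], axis=1)
drv = ref + np.random.normal(scale=0.05, size=ref.shape)
print(tracking_rmse(drv, ref))  # RMSE of a few centimetres for this synthetic path
```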

