Neural Network Based Lane Change Trajectory Prediction in Autonomous Vehicles

Author(s):  
Ranjeet Singh Tomar ◽  
Shekhar Verma


Sensors ◽  
2021 ◽  
Vol 21 (16) ◽  
pp. 5354
Author(s):  
Eunsan Jo ◽  
Myoungho Sunwoo ◽  
Minchul Lee

Predicting the trajectories of surrounding vehicles while accounting for their interactions is an essential capability for autonomous vehicles. A vehicle's subsequent movement depends on the maneuvers of multiple surrounding vehicles, so trajectory prediction must consider the interactions among those maneuvers. Recent research has used data-driven deep learning to capture interactions that are difficult to express mathematically. However, previous studies have considered only the interactions among observed trajectories, because subsequent maneuvers are unobservable and the number of possible maneuver combinations is large. To consider the interactions among multiple maneuvers, this paper therefore proposes a hierarchical graph neural network. The proposed hierarchical model approximately predicts the maneuvers of multiple vehicles and accounts for the interactions among those maneuvers by representing their relationships in a graph structure. The method was evaluated on a publicly available dataset and a real driving dataset. Compared with previous methods, the proposed method exhibited better prediction performance in highly interactive situations.
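As a minimal illustration of the idea, not the authors' implementation: candidate maneuvers can be treated as graph nodes whose features are updated by aggregating the features of interacting maneuvers. The vehicle/maneuver names, feature vectors, and the 0.5/0.5 mixing weights below are illustrative assumptions.

```python
# One round of message passing over a maneuver-interaction graph.
# Each node is a candidate maneuver with a small feature vector;
# edges link maneuvers of interacting vehicles.

def message_pass(features, edges):
    """Mix each node's features with the average of its neighbours'."""
    updated = {}
    for node, feat in features.items():
        neighbours = [features[dst] for src, dst in edges if src == node]
        if not neighbours:
            updated[node] = feat
            continue
        agg = [sum(vals) / len(neighbours) for vals in zip(*neighbours)]
        # Combine self features with the aggregated neighbour message.
        updated[node] = [0.5 * s + 0.5 * a for s, a in zip(feat, agg)]
    return updated

# Two vehicles, each represented here by one candidate maneuver node.
features = {
    "v1_keep":   [1.0, 0.0],
    "v1_change": [0.0, 1.0],
    "v2_keep":   [1.0, 0.0],
}
edges = [("v1_change", "v2_keep"), ("v2_keep", "v1_change")]
out = message_pass(features, edges)
```

Nodes with no incident edges keep their features, while interacting maneuvers exchange information, which is the qualitative behavior the hierarchical model relies on.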


2021 ◽  
Vol 7 (4) ◽  
pp. 61
Author(s):  
David Urban ◽  
Alice Caplier

As demanding vision-based tasks such as object detection and monocular depth estimation make their way into real-time applications, and as more lightweight navigation solutions for autonomous vehicles emerge, obstacle detection and collision prediction remain very challenging tasks for small embedded devices such as drones. We propose a novel lightweight and time-efficient vision-based solution that predicts Time-to-Collision from a monocular video camera embedded in a smartglasses device, as a module of a navigation system for visually impaired pedestrians. It consists of two modules: a static data extractor, a convolutional neural network that predicts the obstacle's position and distance, and a dynamic data extractor that stacks the obstacle data from multiple frames and predicts the Time-to-Collision with a simple fully connected neural network. This paper focuses on the Time-to-Collision network's ability to adapt, via supervised learning, to new sceneries with different types of obstacles.
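A hedged sketch of the quantity the fully connected "dynamic data extractor" learns to predict: given obstacle distances stacked over recent frames, a closed-form baseline estimates Time-to-Collision as remaining distance over average closing speed. The frame rate and distance values are illustrative assumptions, not data from the paper.

```python
# Baseline TTC estimate from a stack of per-frame obstacle distances.

def time_to_collision(distances, fps=30.0):
    """Estimate TTC in seconds from per-frame distances (metres)."""
    if len(distances) < 2:
        raise ValueError("need at least two frames")
    dt = (len(distances) - 1) / fps          # elapsed time over the stack
    closing_speed = (distances[0] - distances[-1]) / dt
    if closing_speed <= 0:                   # obstacle is not approaching
        return float("inf")
    return distances[-1] / closing_speed

# Obstacle closing from 6 m to 5 m over two 1 s intervals -> 0.5 m/s.
ttc = time_to_collision([6.0, 5.5, 5.0], fps=1.0)
```

A learned network can improve on this baseline by absorbing noise in the per-frame distance estimates, which is what motivates stacking multiple frames.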


2021 ◽  
Vol 18 (3) ◽  
pp. 172988142110195
Author(s):  
Sorin Grigorescu ◽  
Cosmin Ginerica ◽  
Mihai Zaha ◽  
Gigel Macesanu ◽  
Bogdan Trasnea

In this article, we introduce a learning-based vision dynamics approach to nonlinear model predictive control (NMPC) for autonomous vehicles, coined LVD-NMPC. LVD-NMPC combines an a-priori process model with a learned vision dynamics model that calculates the dynamics of the driving scene, the controlled system's desired state trajectory, and the weighting gains of the quadratic cost function optimized by a constrained predictive controller. The vision system is a deep neural network that estimates the dynamics of the image scene from historic sequences of sensory observations and vehicle states, integrated by an augmented memory component. Deep Q-learning is used to train the network, which, once trained, can also calculate the desired trajectory of the vehicle. We evaluate LVD-NMPC against a baseline dynamic window approach (DWA) path planner executed with standard NMPC and against the PilotNet neural network. Performance is measured in our simulation environment GridSim, on a real-world 1:8 scale model car, on a full-size autonomous test vehicle, and on the nuScenes computer vision dataset.
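To make the role of the learned weighting gains concrete, here is an illustrative sketch of the quadratic cost a constrained predictive controller minimizes over its horizon. The scalar state/input, horizon length, and the gain values q and r are assumptions for demonstration; in LVD-NMPC the gains come from the learned vision dynamics model.

```python
# Quadratic NMPC cost: weighted tracking error plus control effort,
# summed over the prediction horizon.

def nmpc_cost(states, refs, inputs, q, r):
    """Sum of q*(x - x_ref)^2 + r*u^2 over the horizon."""
    cost = 0.0
    for x, x_ref, u in zip(states, refs, inputs):
        cost += q * (x - x_ref) ** 2 + r * u ** 2
    return cost

# Three-step horizon: tracking errors 1, 0.5, 0 and inputs 1, 1, 0.
J = nmpc_cost([1.0, 0.5, 0.0], [0.0, 0.0, 0.0], [1.0, 1.0, 0.0],
              q=2.0, r=0.1)
```

Raising q relative to r makes the controller track the desired trajectory more aggressively at the cost of larger control inputs, which is why scene-dependent gain selection matters.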


Sensors ◽  
2021 ◽  
Vol 21 (4) ◽  
pp. 1523
Author(s):  
Nikita Smirnov ◽  
Yuzhou Liu ◽  
Aso Validi ◽  
Walter Morales-Alvarez ◽  
Cristina Olaverri-Monreal

Autonomous vehicles are expected to display human-like behavior, at least to the extent that their decisions can be intuitively understood by other road users. If this is not the case, the coexistence of manual and autonomous vehicles in a mixed environment might affect road user interactions negatively and might jeopardize road safety. It is therefore highly important to design algorithms that are capable of analyzing human decision-making processes and of reproducing them. In this context, lane-change maneuvers have been studied extensively; however, not all potential scenarios have been considered, since most works have focused on highway rather than urban scenarios. We contribute to the field by investigating a particular urban traffic scenario in which an autonomous vehicle needs to determine the level of cooperation of the vehicles in the adjacent lane in order to proceed with a lane change. To this end, we present a game theory-based decision-making model for lane changing in congested urban intersections. The model takes as input driving-related parameters of the vehicles in the intersection before they come to a complete stop. We validated the model using the Co-AutoSim simulator, comparing the model's predictions with actual participant decisions, i.e., whether they allowed the autonomous vehicle to drive in front of them. The results are promising, with a prediction accuracy of 100% in all of the cases in which the participants allowed the lane change and 83.3% in the remaining cases. The false predictions were due to delays in resuming driving after the traffic light turned green.
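A hedged sketch of the game-theoretic decision structure: the autonomous vehicle chooses the action with the higher expected payoff given its estimate of the neighbouring driver's cooperation probability. The payoff values and the cooperation probability below are illustrative assumptions, not the paper's calibrated model.

```python
# Expected-payoff lane-change decision against a possibly cooperative
# driver in the adjacent lane.

def decide_lane_change(p_coop, payoffs):
    """Return 'change' or 'wait', whichever maximises expected payoff."""
    expected = {
        action: p_coop * payoffs[(action, "yield")]
                + (1 - p_coop) * payoffs[(action, "block")]
        for action in ("change", "wait")
    }
    return max(expected, key=expected.get)

payoffs = {
    ("change", "yield"):  5.0,   # successful merge
    ("change", "block"): -10.0,  # aborted merge / near-collision
    ("wait",   "yield"):  0.0,   # no progress
    ("wait",   "block"):  0.0,
}
action = decide_lane_change(p_coop=0.8, payoffs=payoffs)
```

Because the collision penalty outweighs the merging gain, the sketch only commits to the lane change when the estimated cooperation probability is high, mirroring the cautious behavior the paper's model aims to reproduce.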


Author(s):  
Kangqiang Ouyang ◽  
Yong Wang ◽  
Yanqiang Li ◽  
Yunhai Zhu

Author(s):  
Isaac Oyeyemi Olayode ◽  
Alessandro Severino ◽  
Tiziana Campisi ◽  
Lagouge Kwanda Tartibu

In recent decades, the Italian road transport system has been characterized by severe and persistent traffic congestion, and Rome in particular is one of the Italian cities most affected by this problem. In this study, a Levenberg-Marquardt (LM) artificial neural network heuristic model was used to predict the traffic flow of non-autonomous vehicles. Traffic datasets were collected using both inductive loop detectors and video cameras as acquisition systems, selecting parameters including vehicle speed, time of day, traffic volume, and number of vehicles. The model achieved training, test, and overall regression R2 values of 0.99892, 0.99615, and 0.99714, respectively. The results of this research add to the growing body of literature on traffic flow modelling and help urban planners and traffic managers with traffic control and the provision of convenient travel routes for pedestrians and motorists.
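As a minimal sketch of the core of Levenberg-Marquardt training (the paper trains a full neural network; this shows only the damped Gauss-Newton update on a one-parameter model y ≈ a·x, with illustrative data):

```python
# One Levenberg-Marquardt iteration for the model y = a*x.
# For this model the Jacobian of each residual (y - a*x) w.r.t. a is -x,
# so J^T J reduces to sum(x^2) and the damped normal equations are scalar.

def lm_step(a, xs, ys, damping):
    """Return the parameter after one damped Gauss-Newton update."""
    residuals = [y - a * x for x, y in zip(xs, ys)]
    jtj = sum(x * x for x in xs)                       # J^T J
    jtr = sum(x * r for x, r in zip(xs, residuals))    # J^T r
    return a + jtr / (jtj + damping)                   # (J^T J + lambda)^-1 J^T r

xs = [1.0, 2.0, 3.0]
ys = [2.0, 4.0, 6.0]        # generated by a = 2 exactly
a = 0.0
for _ in range(20):
    a = lm_step(a, xs, ys, damping=0.1)
```

The damping term interpolates between gradient descent (large damping) and Gauss-Newton (small damping), which is what makes LM a robust default for training small regression networks.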


2021 ◽  
Vol 2021 ◽  
pp. 1-7
Author(s):  
Cheng-Jian Lin ◽  
Chun-Hui Lin ◽  
Shyh-Hau Wang

Deep learning has achieved huge success in computer vision applications such as self-driving vehicles, facial recognition, and robot control. A growing need to deploy systems in resource-limited or resource-constrained environments such as smart cameras, autonomous vehicles, robots, smartphones, and smart wearable devices drives one of the current mainstream developments of convolutional neural networks: reducing model complexity while maintaining fine accuracy. In this study, the proposed efficient light convolutional neural network (ELNet) comprises three convolutional modules that require fewer computations, allowing it to be implemented on resource-constrained hardware. Classification on the CIFAR-10 and CIFAR-100 datasets was used to verify the model's performance. According to the experimental results, ELNet reached accuracies of 92.3% on CIFAR-10 and 69% on CIFAR-100; moreover, ELNet effectively lowered the computational complexity and number of parameters required in comparison with other CNN architectures.
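A back-of-the-envelope sketch of why lightweight convolutional modules cut model size: compare a standard 3x3 convolution with a depthwise-separable one, a common building block in efficient CNNs (whether ELNet uses exactly this decomposition is an assumption; the channel counts are illustrative).

```python
# Parameter counts for one convolutional layer, ignoring biases.

def standard_conv_params(c_in, c_out, k=3):
    """k x k kernel applied across all input/output channel pairs."""
    return c_in * c_out * k * k

def depthwise_separable_params(c_in, c_out, k=3):
    """One k x k depthwise filter per input channel, then a 1x1
    pointwise convolution to mix channels."""
    return c_in * k * k + c_in * c_out

cin, cout = 64, 128
std = standard_conv_params(cin, cout)          # 73,728 parameters
sep = depthwise_separable_params(cin, cout)    # 8,768 parameters
```

For these channel counts the separable layer uses roughly 8x fewer parameters, which is the kind of saving that makes deployment on smart cameras and wearables feasible.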

