Autonomous Vehicle Fuel Economy Optimization with Deep Reinforcement Learning

Electronics ◽  
2020 ◽  
Vol 9 (11) ◽  
pp. 1911
Author(s):  
Hyunkun Kim ◽  
Hyeongoo Pyeon ◽  
Jong Sool Park ◽  
Jin Young Hwang ◽  
Sejoon Lim

The ever-increasing number of vehicles on the road puts pressure on car manufacturers to make their cars fuel-efficient. With autonomous vehicles, we can find new strategies to optimize fuel consumption. We propose a reinforcement learning algorithm that trains deep neural networks to generate a fuel-efficient velocity profile for autonomous vehicles, given road altitude information for the planned trip. We train our deep neural network model using a highly accurate, industry-accepted fuel economy simulation program. We developed a technique for adapting this heterogeneous simulation program on top of an open-source deep learning framework, and we reduced the dimension of the problem output with a suitable parameterization so that the neural network trains much faster. The learned model, combined with reinforcement learning-based strategy generation, produces a velocity profile that autonomous vehicles can follow to control themselves in a fuel-efficient way. We evaluate our algorithm's performance using the fuel economy simulation program for various altitude profiles. We also demonstrate that our method can teach neural networks to generate useful strategies for increasing fuel economy even on unseen roads. Our method improved fuel economy by 8% compared to a simple grid search approach.
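The two core ideas in this abstract, a low-dimensional parameterization of the velocity profile and a policy network trained from simulator feedback, can be sketched roughly as follows. The network sizes, the REINFORCE-style update, and the surrogate cost function are illustrative assumptions only; the paper queries an industry fuel-economy simulator and uses its own parameterization.

```python
import torch
import torch.nn as nn

N_ALTITUDE_SAMPLES = 64   # altitude samples describing the planned trip (assumed)
N_CONTROL_POINTS = 8      # low-dimensional parameterization of the velocity profile (assumed)

policy = nn.Sequential(
    nn.Linear(N_ALTITUDE_SAMPLES, 128), nn.ReLU(),
    nn.Linear(128, N_CONTROL_POINTS),
)
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)

def surrogate_fuel_cost(velocity_points, altitude):
    # Stand-in for the external fuel-economy simulator: penalize speed changes
    # and climbing at high speed. The paper obtains this cost from simulation.
    accel = (velocity_points[:, 1:] - velocity_points[:, :-1]).pow(2).sum(dim=1)
    grade = (altitude[:, 1:] - altitude[:, :-1]).clamp(min=0).mean(dim=1)
    return accel + grade * velocity_points.mean(dim=1).abs()

for step in range(200):
    altitude = torch.randn(32, N_ALTITUDE_SAMPLES)         # a batch of road altitude profiles
    mean_speed = 20.0 + policy(altitude)                   # control points around a base speed
    dist = torch.distributions.Normal(mean_speed, 1.0)
    velocity_points = dist.sample()                        # sampled candidate profile
    cost = surrogate_fuel_cost(velocity_points, altitude)  # simulator feedback (non-differentiable)
    # REINFORCE-style update with a batch-mean baseline.
    loss = (dist.log_prob(velocity_points).sum(dim=1) * (cost - cost.mean())).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```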

Author(s):  
László Orgován ◽  
Tamás Bécsi ◽  
Szilárd Aradi

Autonomous vehicles, or self-driving cars, are prevalent nowadays; many vehicle manufacturers and other tech companies are trying to develop them. One major goal of self-driving algorithms is to perform manoeuvres safely, even when an anomaly arises. To solve these kinds of complex issues, Artificial Intelligence and Machine Learning methods are used. One such motion planning problem arises when the tires lose their grip on the road, a situation an autonomous vehicle should be able to handle. This paper therefore provides an autonomous drifting algorithm using reinforcement learning. The algorithm is based on a model-free learning algorithm, Twin Delayed Deep Deterministic Policy Gradient (TD3). The model is trained on six different tracks in CARLA, a simulator developed specifically for autonomous driving systems.
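For reference, a generic TD3 update step is sketched below; it shows the three ingredients that give the algorithm its name (twin critics, target-policy smoothing, and delayed actor updates). The observation/action sizes, network widths, and hyperparameters are assumptions for illustration, not the paper's implementation.

```python
import copy
import torch
import torch.nn as nn

OBS_DIM, ACT_DIM, ACT_LIMIT = 10, 2, 1.0   # assumed sizes for illustration

def mlp(in_dim, out_dim):
    return nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(), nn.Linear(256, out_dim))

actor = mlp(OBS_DIM, ACT_DIM)
critic1, critic2 = mlp(OBS_DIM + ACT_DIM, 1), mlp(OBS_DIM + ACT_DIM, 1)
actor_t, critic1_t, critic2_t = map(copy.deepcopy, (actor, critic1, critic2))
actor_opt = torch.optim.Adam(actor.parameters(), lr=3e-4)
critic_opt = torch.optim.Adam(list(critic1.parameters()) + list(critic2.parameters()), lr=3e-4)

def td3_update(batch, step, gamma=0.99, policy_delay=2, noise=0.2, noise_clip=0.5):
    obs, act, rew, next_obs, done = batch
    with torch.no_grad():
        # Target-policy smoothing: add clipped noise to the target action.
        eps = (torch.randn_like(act) * noise).clamp(-noise_clip, noise_clip)
        next_act = (torch.tanh(actor_t(next_obs)) * ACT_LIMIT + eps).clamp(-ACT_LIMIT, ACT_LIMIT)
        # Twin critics: take the minimum of the two target Q-values.
        q_next = torch.min(critic1_t(torch.cat([next_obs, next_act], -1)),
                           critic2_t(torch.cat([next_obs, next_act], -1)))
        target = rew + gamma * (1 - done) * q_next.squeeze(-1)
    q1 = critic1(torch.cat([obs, act], -1)).squeeze(-1)
    q2 = critic2(torch.cat([obs, act], -1)).squeeze(-1)
    critic_loss = ((q1 - target) ** 2).mean() + ((q2 - target) ** 2).mean()
    critic_opt.zero_grad()
    critic_loss.backward()
    critic_opt.step()
    if step % policy_delay == 0:
        # Delayed actor update against the first critic only.
        a = torch.tanh(actor(obs)) * ACT_LIMIT
        actor_loss = -critic1(torch.cat([obs, a], -1)).mean()
        actor_opt.zero_grad()
        actor_loss.backward()
        actor_opt.step()
        # Polyak-average the target networks (tau = 0.005).
        for net, tgt in ((actor, actor_t), (critic1, critic1_t), (critic2, critic2_t)):
            for p, pt in zip(net.parameters(), tgt.parameters()):
                pt.data.mul_(0.995).add_(0.005 * p.data)
```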


Author(s):  
Jay Rodge ◽  
Swati Jaiswal

Deep learning and artificial intelligence (AI) have been trending in recent years due to the capability and state-of-the-art results that they provide. They have replaced some highly skilled professionals with neural network-powered AI, also known as deep learning algorithms. Deep learning works chiefly through neural networks. This chapter discusses the working of a neuron, which is the unit component of a neural network. There are numerous techniques that can be incorporated while designing a neural network, such as activation functions and training procedures, to improve its performance; these are explained in detail. Deep learning also faces challenges such as overfitting, which are difficult to avoid entirely but can be overcome using the proper techniques and steps discussed in the chapter. The chapter will help academicians, researchers, and practitioners to further investigate deep learning and its applications in the autonomous vehicle industry.
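The neuron described here is just a weighted sum of inputs plus a bias, passed through an activation function. A minimal illustration is given below; the input values, weights, and choice of activations are arbitrary examples.

```python
import numpy as np

def neuron(x, w, b, activation):
    # Weighted sum of inputs plus bias, followed by a nonlinearity.
    return activation(np.dot(w, x) + b)

relu = lambda z: np.maximum(0.0, z)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
tanh = np.tanh

x = np.array([0.5, -1.2, 3.0])     # example inputs
w = np.array([0.4, 0.1, -0.6])     # example learned weights
b = 0.2                            # example bias

for name, act in [("relu", relu), ("sigmoid", sigmoid), ("tanh", tanh)]:
    print(name, neuron(x, w, b, act))
```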


Electronics ◽  
2019 ◽  
Vol 8 (12) ◽  
pp. 1536 ◽  
Author(s):  
Laura García Cuenca ◽  
Enrique Puertas ◽  
Javier Fernandez Andrés ◽  
Nourdine Aliane

Navigating roundabouts is a complex driving scenario for both manual and autonomous vehicles. This paper proposes an approach based on the use of the Q-learning algorithm to train an autonomous vehicle agent to learn how to appropriately navigate roundabouts. The proposed learning algorithm is implemented using the CARLA simulation environment. Several simulations are performed to train the algorithm in two scenarios: navigating a roundabout with and without surrounding traffic. The results illustrate that the Q-learning-algorithm-based vehicle agent is able to learn smooth and efficient driving to perform maneuvers within roundabouts.
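The update rule behind such an agent is the standard tabular Q-learning rule, sketched below. The action set, state discretization, and learning rates are placeholders for illustration; the paper's reward shaping and CARLA interface are not reproduced here.

```python
import random
from collections import defaultdict

ACTIONS = ["keep_speed", "accelerate", "brake", "yield"]   # hypothetical action set
Q = defaultdict(float)                                     # Q[(state, action)] -> value
ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1                     # assumed hyperparameters

def choose_action(state):
    # Epsilon-greedy exploration over the discrete action set.
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def q_update(state, action, reward, next_state):
    # Standard Q-learning temporal-difference update.
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
```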


Author(s):  
Xing Xu ◽  
Minglei Li ◽  
Feng Wang ◽  
Ju Xie ◽  
Xiaohan Wu ◽  
...  

A human-like trajectory can give the occupants of an autonomous vehicle a safe and comfortable feeling, especially in corners. This research focuses on planning a human-like trajectory along a section of road on a test track using an optimal control method that reflects natural driving behaviour and the passengers' sense of naturalness and comfort, which could improve the acceptability of driverless vehicles in the future. A point-mass vehicle dynamic model is built in the curvilinear coordinate system, and an optimal trajectory is then generated using an optimal control method. The optimal control problem is formulated and solved with the Matlab tool GPOPS-II. Trials are carried out on a test track; the test data are collected and processed, and trajectory data for different corners are obtained. Different time-to-line-crossing (TLC) calculations are derived and applied to different track sections. The human drivers' trajectories and the optimal line are then compared using the TLC methods to assess their correlation. The results show that the optimal trajectory follows a trend similar to the human trajectories when driving through a corner, although it is not perfectly aligned with the tested trajectories; this conforms with people's driving intuition and could improve occupants' comfort when cornering, thereby improving the acceptability of AVs in the automotive market in the future. The drivers tend to move gradually to the outside of the lane after passing the apex when cornering on a road with hard lines on both sides.
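As a point of reference for the TLC comparison, the simplest common TLC formulation assumes constant lateral velocity: the time until the vehicle reference point reaches the lane edge it is drifting toward. The sketch below implements only this baseline variant; the paper derives several TLC calculations, and the numbers used here are illustrative.

```python
import numpy as np

def tlc_constant_lateral_velocity(lateral_offset, lane_half_width, lateral_velocity):
    """Time until the vehicle reference point crosses the lane edge it is drifting toward.

    lateral_offset:   signed offset from lane centre (m), positive to the left
    lane_half_width:  half of the lane width (m)
    lateral_velocity: signed lateral velocity (m/s), positive to the left
    """
    if abs(lateral_velocity) < 1e-6:
        return np.inf                                   # not drifting toward either edge
    if lateral_velocity > 0:
        distance = lane_half_width - lateral_offset     # distance to the left edge
    else:
        distance = lane_half_width + lateral_offset     # distance to the right edge
    return max(distance, 0.0) / abs(lateral_velocity)

# Example: 0.4 m left of centre in a 3.5 m lane, drifting left at 0.3 m/s -> 4.5 s.
print(tlc_constant_lateral_velocity(0.4, 3.5 / 2, 0.3))
```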


2021 ◽  
Vol 2 (1) ◽  
pp. 1-25
Author(s):  
Yongsen Ma ◽  
Sheheryar Arshad ◽  
Swetha Muniraju ◽  
Eric Torkildson ◽  
Enrico Rantala ◽  
...  

In recent years, Channel State Information (CSI) measured by WiFi has been widely used for human activity recognition. In this article, we propose a deep learning design for location- and person-independent activity recognition with WiFi. The proposed design consists of three Deep Neural Networks (DNNs): a 2D Convolutional Neural Network (CNN) as the recognition algorithm, a 1D CNN as the state machine, and a reinforcement learning agent for neural architecture search. The recognition algorithm learns location- and person-independent features from different perspectives of the CSI data. The state machine learns temporal dependency information from historical classification results. The reinforcement learning agent optimizes the neural architecture of the recognition algorithm using a Recurrent Neural Network (RNN) with Long Short-Term Memory (LSTM). The proposed design is evaluated in a lab environment with different WiFi device locations, antenna orientations, sitting/standing/walking locations and orientations, and multiple persons. The proposed design achieves 97% average accuracy when the test devices and persons are not seen during training. It is also evaluated on two public datasets, with accuracies of 80% and 83%. The proposed design needs very little human effort for ground truth labeling, feature engineering, signal processing, and tuning of learning parameters and hyperparameters.
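A rough sketch of the first component, a 2D CNN operating on CSI treated as a subcarrier-by-time "image", is shown below. The input dimensions, layer sizes, and number of activity classes are assumptions for illustration, not the architecture found by the paper's neural architecture search.

```python
import torch
import torch.nn as nn

N_SUBCARRIERS, N_TIMESTEPS, N_ACTIVITIES = 30, 128, 6   # assumed CSI shape and class count

recognizer = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * (N_SUBCARRIERS // 4) * (N_TIMESTEPS // 4), N_ACTIVITIES),
)

csi_batch = torch.randn(8, 1, N_SUBCARRIERS, N_TIMESTEPS)   # batch of CSI amplitude "images"
logits = recognizer(csi_batch)                              # per-activity scores
print(logits.shape)                                         # torch.Size([8, 6])
```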


2021 ◽  
Vol 11 (4) ◽  
pp. 1514 ◽  
Author(s):  
Quang-Duy Tran ◽  
Sang-Hoon Bae

To reduce the impact of congestion, it is necessary to improve our overall understanding of the influence of autonomous vehicles. Recently, deep reinforcement learning has become an effective means of solving complex control tasks. Accordingly, we present an advanced deep reinforcement learning approach that investigates how leading autonomous vehicles affect an urban network under a mixed-traffic environment. We also suggest a set of hyperparameters for achieving better performance. Firstly, we feed a set of hyperparameters into our deep reinforcement learning agents. Secondly, we investigate the leading-autonomous-vehicle experiment in the urban network with different autonomous vehicle penetration rates. Thirdly, the advantage of leading autonomous vehicles is evaluated against all-manual-vehicle and leading-manual-vehicle experiments. Finally, proximal policy optimization with a clipped objective is compared to proximal policy optimization with an adaptive Kullback-Leibler penalty to verify the superiority of the proposed hyperparameters. We demonstrate that full-automation traffic increased the average speed by a factor of 1.27 compared with the all-manual-vehicle experiment. Our proposed method becomes significantly more effective at higher autonomous vehicle penetration rates. Furthermore, the leading autonomous vehicles can help to mitigate traffic congestion.
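The two PPO objectives being compared can be written compactly as below. This is a generic sketch: epsilon = 0.2 and beta = 1.0 are common defaults rather than the paper's values, and the adaptive adjustment of beta toward a KL target is omitted for brevity.

```python
import torch

def ppo_clipped_loss(new_logp, old_logp, advantages, epsilon=0.2):
    # Clipped surrogate objective: pessimistic minimum of the unclipped and
    # clipped probability-ratio terms.
    ratio = torch.exp(new_logp - old_logp)                     # pi_new / pi_old
    clipped = torch.clamp(ratio, 1.0 - epsilon, 1.0 + epsilon)
    return -torch.min(ratio * advantages, clipped * advantages).mean()

def ppo_kl_penalty_loss(new_logp, old_logp, advantages, beta=1.0):
    # Surrogate objective with a KL penalty; in the adaptive variant, beta is
    # raised or lowered depending on how far the measured KL is from a target.
    ratio = torch.exp(new_logp - old_logp)
    approx_kl = (old_logp - new_logp).mean()                   # simple KL estimate
    return -(ratio * advantages).mean() + beta * approx_kl
```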


2021 ◽  
Vol 11 (7) ◽  
pp. 2925
Author(s):  
Edgar Cortés Gallardo Medina ◽  
Victor Miguel Velazquez Espitia ◽  
Daniela Chípuli Silva ◽  
Sebastián Fernández Ruiz de las Cuevas ◽  
Marco Palacios Hirata ◽  
...  

Autonomous vehicles are increasingly becoming a necessary trend towards building the smart cities of the future. Numerous proposals have been presented in recent years to tackle particular aspects of the working pipeline towards creating a functional end-to-end system, such as object detection, tracking, path planning, and sentiment or intent detection, amongst others. Nevertheless, few efforts have been made to systematically compile all of these systems into a single proposal that also considers the real challenges these systems will face on the road, such as real-time computation and hardware capabilities. This paper reviews the latest techniques towards creating our own end-to-end autonomous vehicle system, considering the state-of-the-art methods for object detection and the possible incorporation of distributed systems and parallelization to deploy these methods. Our findings show that while techniques such as convolutional neural networks, recurrent neural networks, and long short-term memory can effectively handle the initial detection and path planning tasks, more effort is required to implement cloud computing to reduce the computational time that these methods demand. Additionally, we have mapped different strategies to handle the parallelization task, both within and between the networks.


2021 ◽  
Vol 10 (1) ◽  
pp. 21
Author(s):  
Omar Nassef ◽  
Toktam Mahmoodi ◽  
Foivos Michelinakis ◽  
Kashif Mahmood ◽  
Ahmed Elmokashfi

This paper presents a data-driven framework for performance optimisation of Narrow-Band IoT user equipment. The proposed framework is an edge micro-service that suggests one-time configurations to user equipment communicating with a base station. Suggested configurations are delivered from a Configuration Advocate to improve energy consumption, delay, throughput, or a combination of those metrics, depending on the user-end device and the application. Reinforcement learning utilising gradient descent and a genetic algorithm is adopted alongside machine learning and deep learning algorithms to predict the environmental states and suggest an optimal configuration. The results highlight the adaptability of the deep neural network in predicting intermediary environmental states; additionally, they show the superior performance of the genetic reinforcement learning algorithm in terms of performance optimisation.
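A genetic search over candidate device configurations of the kind described here can be illustrated as follows. The configuration fields, their ranges, and the fitness function are placeholders standing in for the energy, delay, and throughput predictions produced by the framework's models.

```python
import random

# Hypothetical NB-IoT configuration fields and ranges, for illustration only.
FIELDS = {"tx_power_dbm": (0, 23), "edrx_cycle_s": (20, 2621), "psm_timer_s": (2, 9120)}

def random_config():
    return {k: random.randint(lo, hi) for k, (lo, hi) in FIELDS.items()}

def fitness(cfg):
    # Placeholder trade-off between energy and delay; the real framework would
    # use predicted metrics from its machine/deep learning models here.
    energy = cfg["tx_power_dbm"] - 0.001 * cfg["psm_timer_s"]
    delay = 0.01 * cfg["edrx_cycle_s"]
    return -(energy + delay)

def evolve(generations=50, pop_size=30, elite=5, mutation_rate=0.2):
    population = [random_config() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[:elite]                  # selection: keep the fittest
        children = []
        while len(children) < pop_size - elite:
            a, b = random.sample(parents, 2)
            child = {k: random.choice((a[k], b[k])) for k in FIELDS}   # crossover
            if random.random() < mutation_rate:                        # mutation
                k = random.choice(list(FIELDS))
                child[k] = random.randint(*FIELDS[k])
            children.append(child)
        population = parents + children
    return max(population, key=fitness)

print(evolve())   # best configuration found under the placeholder fitness
```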

