Imitation Learning for Autonomous Driving Based on Convolutional and Recurrent Neural Networks

Author(s):  
Chunling Du ◽  
Zhenbiao Wang ◽  
Andrew Alexander Malcolm ◽  
Choon Lim Ho

2021 ◽
Vol 9 (5) ◽  
pp. 33-43
Author(s):  
Ashraf Nabil ◽  
Ayman Kassem

Autonomous driving is one of the difficult problems facing automotive applications. Nowadays it is restricted by laws that prevent cars from being fully autonomous for fear of accidents. Researchers try to improve the accuracy and safety of their models with the aim of pushing back against these restrictive laws. Autonomous driving is a sought-after capability that is not easily achieved with classical approaches. Deep learning is considered a strong artificial intelligence paradigm that can teach machines how to behave in difficult situations. It has proved successful in many different domains, but it still has some way to go in automotive applications. The presented work uses end-to-end deep learning to pursue the goal of a fully autonomous driving vehicle that behaves correctly in different scenarios. The CARLA simulator is used to train and test the deep neural networks. The results show not only the performance on the CARLA simulator as an end-to-end solution for autonomous driving, but also how the same approach can be applied to one of the most popular real automotive datasets, which includes camera images together with the corresponding driver control actions.
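Although the abstract does not specify the network architecture, a behavioural-cloning sketch in PyTorch illustrates the end-to-end idea: a convolutional network maps a camera frame directly to control outputs and is trained to regress the logged driver actions. The PilotNet-style layer sizes, the single-camera input, and the (steering, throttle) output are assumptions made for the example, not details taken from the paper.

```python
# Minimal behavioural-cloning sketch in PyTorch. The layer sizes, the
# single-camera input and the (steering, throttle) output are illustrative
# assumptions, not the architecture described in the paper.
import torch
import torch.nn as nn

class DrivingCNN(nn.Module):
    def __init__(self):
        super().__init__()
        # PilotNet-style convolutional encoder for a 3x66x200 camera frame
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 24, 5, stride=2), nn.ReLU(),
            nn.Conv2d(24, 36, 5, stride=2), nn.ReLU(),
            nn.Conv2d(36, 48, 5, stride=2), nn.ReLU(),
            nn.Conv2d(48, 64, 3), nn.ReLU(),
            nn.Conv2d(64, 64, 3), nn.ReLU(),
            nn.Flatten(),
        )
        self.head = nn.Sequential(
            nn.Linear(64 * 1 * 18, 100), nn.ReLU(),
            nn.Linear(100, 50), nn.ReLU(),
            nn.Linear(50, 2),          # [steering, throttle]
        )

    def forward(self, x):
        return self.head(self.encoder(x))

# Behavioural cloning: regress the recorded driver actions from the image.
model = DrivingCNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
images = torch.randn(8, 3, 66, 200)    # stand-in for CARLA camera frames
actions = torch.randn(8, 2)            # stand-in for logged (steer, throttle)
loss = nn.functional.mse_loss(model(images), actions)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```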


2019 ◽  
Vol 67 (7) ◽  
pp. 545-556 ◽  
Author(s):  
Mark Schutera ◽  
Stefan Elser ◽  
Jochen Abhau ◽  
Ralf Mikut ◽  
Markus Reischl

In autonomous driving, prediction tasks address complex spatio-temporal data. This article describes the examination of Recurrent Neural Networks (RNNs) for object trajectory prediction in the image space. The proposed methods enhance the performance and spatio-temporal prediction capabilities of Recurrent Neural Networks. Two different data augmentation strategies and a hyperparameter search are implemented for this purpose. A conventional data augmentation strategy and a Generative Adversarial Network (GAN) based strategy are analyzed with respect to their ability to close the generalization gap of Recurrent Neural Networks. The results are then discussed using single-object tracklets provided by the KITTI Tracking Dataset. This work demonstrates the benefits of augmenting spatio-temporal data with GANs.
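As a rough illustration of the kind of model and augmentation discussed, the sketch below implements an LSTM that predicts the next object position in the image plane from a short tracklet, together with one conventional augmentation (horizontal mirroring of normalised coordinates). The sequence length, the choice of box-centre features, and the mirroring strategy are assumptions for the example, not the paper's exact configuration, and the GAN-based strategy is not reproduced here.

```python
# Minimal sketch: RNN trajectory prediction in the image plane plus one
# conventional augmentation (horizontal mirroring of tracklets). Sequence
# length, features and the flip augmentation are illustrative assumptions.
import torch
import torch.nn as nn

class TrajectoryRNN(nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(input_size=2, hidden_size=hidden, batch_first=True)
        self.out = nn.Linear(hidden, 2)    # predict the next (x, y) centre

    def forward(self, seq):                # seq: (batch, time, 2)
        h, _ = self.lstm(seq)
        return self.out(h[:, -1])          # prediction from the last step

def mirror_tracklet(track):
    """Conventional augmentation: flip x-coordinates (normalised to [0, 1])."""
    flipped = track.clone()
    flipped[..., 0] = 1.0 - flipped[..., 0]
    return flipped

model = TrajectoryRNN()
past = torch.rand(4, 10, 2)                # 4 tracklets, 10 observed frames
target = torch.rand(4, 2)                  # object centre in the next frame
augmented = torch.cat([past, mirror_tracklet(past)])
augmented_target = torch.cat([target, mirror_tracklet(target)])
loss = nn.functional.mse_loss(model(augmented), augmented_target)
loss.backward()
```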


Electronics ◽  
2021 ◽  
Vol 10 (24) ◽  
pp. 3113
Author(s):  
Javier Corrochano ◽  
Juan M. Alonso-Weber ◽  
María Paz Sesmero ◽  
Araceli Sanchis

There are various techniques to approach learning in autonomous driving; however, all of them suffer from certain problems. In the case of imitation learning based on artificial neural networks, the system must learn to correctly identify the elements of the environment. In some cases, it takes a lot of effort to tag the images with the proper semantics. This is also relevant given the need for very varied training scenarios in order to obtain an acceptable generalization capacity. In the present work, we propose a technique for automated semantic labeling. It is based on various learning phases using image superposition, combining both chroma-key scenarios and real indoor scenarios. This allows the generation of augmented datasets that facilitate the learning process. Further improvements obtained by applying noise techniques are also studied. To carry out the validation, a small-scale car model is used that learns to drive automatically on a reduced circuit. A comparison with models that do not rely on semantic segmentation is also performed. The main contribution of our proposal is the possibility of generating datasets for real indoor scenarios with automatic semantic segmentation, without the need for endless human labeling tasks.
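The core of the automated labeling idea, chroma-key superposition, can be sketched in a few lines of NumPy: pixels close to the chroma colour are swapped for a real indoor background, and the same mask is reused as the semantic label. The colour threshold and the simple two-class labelling are assumptions made for the illustration; the paper's pipeline involves several learning phases and additional noise techniques not shown here.

```python
# Minimal sketch of chroma-key superposition for automatic semantic labelling.
# The key colour, tolerance and two-class labelling are illustrative assumptions.
import numpy as np

def composite_with_labels(chroma_frame, background, key_rgb=(0, 255, 0), tol=60):
    """Return an augmented image and its per-pixel semantic mask."""
    frame = chroma_frame.astype(np.int16)
    # Pixels within `tol` of the key colour belong to the background class (0).
    dist = np.abs(frame - np.array(key_rgb)).sum(axis=-1)
    is_background = dist < tol
    composite = np.where(is_background[..., None], background, chroma_frame)
    labels = np.where(is_background, 0, 1).astype(np.uint8)   # 1 = foreground object
    return composite.astype(np.uint8), labels

# Stand-in data: a green-screen frame and a real indoor background.
chroma_frame = np.zeros((120, 160, 3), dtype=np.uint8)
chroma_frame[...] = (0, 255, 0)                     # chroma backdrop
chroma_frame[40:80, 60:100] = (180, 30, 30)         # an object in front of it
background = np.random.randint(0, 255, (120, 160, 3), dtype=np.uint8)
image, mask = composite_with_labels(chroma_frame, background)
```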


2020 ◽  
Author(s):  
Dean Sumner ◽  
Jiazhen He ◽  
Amol Thakkar ◽  
Ola Engkvist ◽  
Esben Jannik Bjerrum

SMILES randomization, a form of data augmentation, has previously been shown to increase the performance of deep learning models compared to non-augmented baselines. Here, we propose a novel data augmentation method we call “Levenshtein augmentation”, which considers local SMILES sub-sequence similarity between reactants and their respective products when creating training pairs. The performance of Levenshtein augmentation was tested using two state-of-the-art models: transformer and sequence-to-sequence based recurrent neural networks with attention. Levenshtein augmentation demonstrated increased performance over non-augmented data and data augmented with conventional SMILES randomization when used for training the baseline models. Furthermore, Levenshtein augmentation seemingly results in what we define as attentional gain: an enhancement in the pattern recognition capabilities of the underlying network with respect to molecular motifs.
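The similarity measure that gives the method its name can be sketched with a standard dynamic-programming edit distance between a reactant SMILES and its product SMILES; how the authors turn this score into augmented training pairs is not detailed in the abstract, so the selection step below is only an illustrative assumption.

```python
# Minimal sketch of the similarity score underlying such an augmentation:
# a classic Levenshtein (edit) distance between reactant and product SMILES.
# The selection of training pairs below is an illustrative assumption.
def levenshtein(a: str, b: str) -> int:
    """Dynamic-programming edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

# Toy example: rank alternative reactant SMILES by closeness to the product,
# keeping the writing with the most local sub-sequence overlap for training.
product = "CC(=O)OC1=CC=CC=C1C(=O)O"            # aspirin
reactant_variants = [
    "OC1=CC=CC=C1C(=O)O",                       # salicylic acid, one ordering
    "C1=CC=C(C(=C1)C(=O)O)O",                   # same molecule, another ordering
]
best = min(reactant_variants, key=lambda s: levenshtein(s, product))
print(best, levenshtein(best, product))
```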


Author(s):  
Faisal Ladhak ◽  
Ankur Gandhe ◽  
Markus Dreyer ◽  
Lambert Mathias ◽  
Ariya Rastrow ◽  
...  
