End-to-End Deep Neural Network Architectures for Speed and Steering Wheel Angle Prediction in Autonomous Driving

Electronics ◽  
2021 ◽  
Vol 10 (11) ◽  
pp. 1266
Author(s):  
Pedro J. Navarro ◽  
Leanne Miller ◽  
Francisca Rosique ◽  
Carlos Fernández-Isla ◽  
Alberto Gila-Navarro

The complex decision-making systems used for autonomous vehicles or advanced driver-assistance systems (ADAS) are being replaced by end-to-end (e2e) architectures based on deep neural networks (DNNs). DNNs can learn complex driving actions from datasets containing thousands of images and data obtained from the vehicle perception system. This work presents the classification, design and implementation of six e2e architectures capable of generating the driving actions of speed and steering wheel angle directly for the vehicle control elements. The work details the design stages and the optimization process of the convolutional networks used to develop the six e2e architectures. In the metric analysis, the architectures were tested with different data sources from the vehicle, such as images, XYZ accelerations and XYZ angular speeds. The best results were obtained with a mixed-data e2e architecture that used front images from the vehicle and angular speeds to predict the speed and steering wheel angle with a mean error of 1.06%. An exhaustive optimization process of the convolutional blocks has demonstrated that it is possible to design lightweight e2e architectures with high performance that are more suitable for the final implementation in autonomous driving.
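As a rough illustration of the mixed-data idea (not the authors' networks), the sketch below fuses pooled image-branch features with the XYZ angular speeds and regresses the two driving actions with a linear head; all layer shapes and weight values are toy assumptions.

```python
# Illustrative sketch of a mixed-data e2e forward pass: a (pretend) CNN image
# branch is pooled to a feature vector, concatenated with XYZ angular speeds,
# and a linear head regresses [speed, steering wheel angle].

def global_average_pool(feature_map):
    """Collapse a 2D feature map (list of rows) to a single scalar."""
    values = [v for row in feature_map for v in row]
    return sum(values) / len(values)

def linear(x, weights, bias):
    """Fully connected layer: y_j = sum_i x_i * W[j][i] + b_j."""
    return [sum(xi * wji for xi, wji in zip(x, w_row)) + b
            for w_row, b in zip(weights, bias)]

def e2e_forward(image_feature_maps, angular_speeds, weights, bias):
    # Image branch: pool each feature map to one scalar.
    image_branch = [global_average_pool(fm) for fm in image_feature_maps]
    # Mixed-data fusion: concatenate vision features with angular speeds.
    fused = image_branch + list(angular_speeds)
    # Regression head: two outputs, [speed, steering wheel angle].
    return linear(fused, weights, bias)
```

In a real implementation the image branch would be a trained convolutional stack; here the pooling stands in for it to keep the fusion step visible.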

Sensors ◽  
2020 ◽  
Vol 20 (18) ◽  
pp. 5443
Author(s):  
Hongyu Hu ◽  
Ziyang Lu ◽  
Qi Wang ◽  
Chengyuan Zheng

Changing lanes while driving requires coordinating the lateral and longitudinal controls of a vehicle, considering its running state and the surrounding environment. Although the existing rule-based automated lane-changing method is simple, it is unsuitable for unpredictable scenarios encountered in practice. Therefore, using a deep deterministic policy gradient (DDPG) algorithm, we propose an end-to-end method for automated lane changing based on lidar data. The distance state information of the lane boundary and the surrounding vehicles obtained by the agent in a simulation environment is denoted as the state space for an automated lane-change problem based on reinforcement learning. The steering wheel angle and longitudinal acceleration are used as the action space, and both the state and action spaces are continuous. In terms of the reward function, avoiding collision and setting different expected lane-changing distances that represent different driving styles are considered for safety, the angular velocity of the steering wheel and the jerk are considered for comfort, and the minimum speed limit for lane changing and the control of the agent for a quick lane change are considered for efficiency. For a one-way two-lane road, a visual simulation environment scene is constructed using Pyglet. By comparing the lane-changing tracks of two driving styles in a simplified traffic flow scene, we study the influence of driving style on the lane-changing process and lane-changing time. Through the training and adjustment of the combined lateral and longitudinal control of autonomous vehicles with different driving styles in complex traffic scenes, the vehicles could complete a series of driving tasks while considering driving-style differences. The experimental results show that autonomous vehicles can reflect the differences in driving styles at the time of lane change at the same speed. Under the combined lateral and longitudinal control, the autonomous vehicles exhibit good robustness to different speeds and traffic densities in different road sections. Thus, autonomous vehicles trained using the proposed method can learn an automated lane-changing policy while considering safety, comfort, and efficiency.
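The reward structure described above can be sketched as a single shaped function; the collision penalty and every weight below are hypothetical tuning values for illustration, not the coefficients used in the paper.

```python
# Sketch of a lane-change reward combining the three concerns the abstract
# names: safety (collision, deviation from an expected lane-change distance
# that encodes driving style), comfort (steering angular velocity and jerk),
# and efficiency (minimum speed limit). All weights are hypothetical.

def lane_change_reward(collided, travelled, expected_distance,
                       steer_rate, jerk, speed, min_speed,
                       w_dist=1.0, w_steer=0.1, w_jerk=0.05, w_speed=0.5):
    if collided:                       # safety: large terminal penalty
        return -100.0
    reward = 0.0
    # Safety/style: penalize deviation from the expected lane-change distance.
    reward -= w_dist * abs(travelled - expected_distance)
    # Comfort: penalize abrupt steering and longitudinal jerk.
    reward -= w_steer * abs(steer_rate) + w_jerk * abs(jerk)
    # Efficiency: penalize dropping below the minimum lane-change speed.
    if speed < min_speed:
        reward -= w_speed * (min_speed - speed)
    return reward
```

Changing `expected_distance` is what would differentiate an aggressive from a conservative driving style in this formulation.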


Electronics ◽  
2021 ◽  
Vol 10 (19) ◽  
pp. 2405
Author(s):  
Heung-Gu Lee ◽  
Dong-Hyun Kang ◽  
Deok-Hwan Kim

Existing vehicle-centric semi-autonomous driving modules do not consider the driver’s situation and emotions. In an autonomous driving environment, when changing to manual driving, a human–machine interface and advanced driver-assistance systems (ADAS) are essential to assist vehicle driving. This study proposes a human–machine interface that considers the driver’s situation and emotions to enhance the ADAS. A 1D convolutional neural network model based on multimodal bio-signals is used and applied to control semi-autonomous vehicles. The possibility of semi-autonomous driving is confirmed by classifying four driving scenarios and controlling the speed of the vehicle. In experiments using a driving simulator and hardware-in-the-loop simulation equipment, we confirm that the response time of the driving assistance system is 351.75 ms and that the system recognizes four scenarios and eight emotions from the bio-signal data.
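A minimal sketch of the 1D convolution such a model builds on, over a bio-signal window; the kernel, the pooling, and the two-class head below are illustrative toy values, not the trained model from the study.

```python
# Sketch of a 1D CNN building block for bio-signal windows:
# valid 1D convolution -> ReLU -> global max pool -> linear class scores.

def conv1d(signal, kernel):
    """Valid 1D convolution (cross-correlation, as in most DL libraries)."""
    n = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(n))
            for i in range(len(signal) - n + 1)]

def relu(xs):
    return [max(0.0, x) for x in xs]

def classify_window(signal, kernel, class_weights):
    """Return the argmax class index for one signal window."""
    pooled = max(relu(conv1d(signal, kernel)))
    scores = [w * pooled + b for w, b in class_weights]
    return scores.index(max(scores))
```

A real model would stack several such convolution layers per modality and fuse them before the classifier head.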


2020 ◽  
Vol 10 (12) ◽  
pp. 4301
Author(s):  
Sergio Sánchez-Carballido ◽  
Orti Senderos ◽  
Marcos Nieto ◽  
Oihana Otaegui

An innovative solution named Annotation as a Service (AaaS) has been specifically designed to integrate heterogeneous video annotation workflows into containers and to take advantage of a cloud-native, highly scalable and reliable design based on Kubernetes workloads. Using the AaaS as a foundation, the execution of automatic video annotation workflows is addressed in the broader context of a semi-automatic video annotation business logic for ground truth generation for Autonomous Driving (AD) and Advanced Driver Assistance Systems (ADAS). The paper presents the design decisions, innovative developments, and tests conducted to provide scalability to this cloud-native ecosystem for semi-automatic annotation. The solution has proven to be efficient and resilient at AD/ADAS scale, specifically in an experiment with 25 TB of input data to annotate, 4000 concurrent annotation jobs, and 32 worker nodes forming a high-performance computing cluster with a total of 512 cores and 2048 GB of RAM. Automatic pre-annotations with the proposed strategy reduce the time of human participation in the annotation by up to 80%, and by 60% on average.


2021 ◽  
Vol 11 (16) ◽  
pp. 7225
Author(s):  
Eugenio Tramacere ◽  
Sara Luciani ◽  
Stefano Feraco ◽  
Angelo Bonfitto ◽  
Nicola Amati

Self-driving vehicles have experienced an increase in research interest in the last decades. Nevertheless, fully autonomous vehicles are still far from being a common means of transport. This paper presents the design and experimental validation of a processor-in-the-loop (PIL) architecture for an autonomous sports car. The considered vehicle is an all-wheel-drive, full-electric, single-seater prototype. The retained PIL architecture includes all the modules required for autonomous driving at the system level: environment perception, trajectory planning, and control. Specifically, the perception pipeline exploits obstacle detection algorithms based on Artificial Intelligence (AI), the trajectory planning is based on a modified Rapidly-exploring Random Tree (RRT) algorithm using Dubins curves, and the vehicle is controlled via a Model Predictive Control (MPC) strategy. The considered PIL layout is first implemented using a low-cost card-sized computer for fast code verification purposes. Furthermore, the proposed PIL architecture is compared in terms of performance to an alternative PIL using a high-performance real-time target machine. Both PIL architectures exploit the User Datagram Protocol (UDP) to communicate with a personal computer. The latter PIL architecture is validated in real time using experimental data. Moreover, both are also validated against the general autonomous pipeline that runs in parallel on the personal computer during numerical simulation.
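The UDP link between the personal computer and the PIL target can be sketched with standard sockets; the loopback addressing and the three-float payload layout below are assumptions for illustration, not the authors' actual message format.

```python
# Sketch of a PC <-> PIL-target UDP exchange: the PC packs a small vehicle
# state as raw floats and the target unpacks it. A loopback socket stands in
# for the embedded board.

import socket
import struct

STATE_FMT = "<3f"   # assumed payload: x position, y position, speed

def send_state(sock, addr, x, y, speed):
    sock.sendto(struct.pack(STATE_FMT, x, y, speed), addr)

def recv_state(sock):
    data, _ = sock.recvfrom(1024)
    return struct.unpack(STATE_FMT, data)

if __name__ == "__main__":
    target = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    target.bind(("127.0.0.1", 0))           # stand-in for the PIL target
    pc = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    send_state(pc, target.getsockname(), 1.0, 2.0, 15.5)
    x, y, speed = recv_state(target)
    print(x, y, speed)
    pc.close()
    target.close()
```

UDP fits this use well: per-cycle state messages are small, and a stale sample is better dropped than delivered late, so TCP's retransmission guarantees are not needed.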


2020 ◽  
Author(s):  
Huihui Pan ◽  
Weichao Sun ◽  
Qiming Sun ◽  
Huijun Gao

Abstract Environmental perception is one of the key technologies needed to realize autonomous vehicles. Autonomous vehicles are often equipped with multiple sensors to form a multi-source environmental perception system. These sensors are very sensitive to light or background conditions, which introduce a variety of global and local fault signals that pose great safety risks to the autonomous driving system during long-term operation. In this paper, a real-time data fusion network with a fault diagnosis and fault tolerance mechanism is designed. By introducing prior features to make the backbone network lightweight, the features of the input data can be extracted accurately in real time. Through the temporal and spatial correlation between sensor data, the sensor redundancy is utilized to diagnose the local and global confidence of sensor data in real time, eliminate faulty data, and ensure the accuracy and reliability of data fusion. Experiments show that the network achieves state-of-the-art results in speed and accuracy, and can accurately detect the location of the target when some sensors are out of focus or out of order.
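The redundancy idea can be sketched as a median consistency test over sensors observing the same quantity; the rejection threshold below is a hypothetical tuning value, and the scheme is deliberately far simpler than the paper's learned confidence mechanism.

```python
# Sketch of redundancy-based fault rejection: readings far from the
# sensor-set median are treated as faulty and excluded from the fused value.

def median(values):
    s = sorted(values)
    n = len(s)
    mid = n // 2
    return s[mid] if n % 2 else 0.5 * (s[mid - 1] + s[mid])

def fuse_with_fault_rejection(readings, threshold=1.0):
    """Return (fused estimate, indices of rejected sensors)."""
    m = median(readings)
    kept = [r for r in readings if abs(r - m) <= threshold]
    rejected = [i for i, r in enumerate(readings) if abs(r - m) > threshold]
    fused = sum(kept) / len(kept)
    return fused, rejected
```

The median is used rather than the mean because a single grossly faulty sensor would drag the mean toward itself and could mask its own fault.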


2020 ◽  
Author(s):  
Robert Patton ◽  
Shang Gao ◽  
Spencer Paulissen ◽  
Nicholas Haas ◽  
Brian Jewell ◽  
...  

Author(s):  
Sai Rajeev Devaragudi ◽  
Bo Chen

Abstract This paper presents a Model Predictive Control (MPC) approach for longitudinal and lateral control of autonomous vehicles with a real-time local path planning algorithm. A heuristic graph search method (the A* algorithm) combined with piecewise Bézier curve generation is implemented for obstacle avoidance in autonomous driving applications. Constant time headway control is implemented for longitudinal motion to track lead vehicles and maintain a constant time gap. MPC is used to control the steering angle and the tractive force of the autonomous vehicle. Furthermore, a new method of developing Advanced Driver Assistance Systems (ADAS) algorithms and vehicle controllers using Model-In-the-Loop (MIL) testing is explored with the use of PreScan®. With PreScan®, various traffic scenarios are modeled and the sensor data are simulated using physics-based sensor models, which are fed to the controller for data processing and motion planning. Obstacle detection and collision avoidance are demonstrated using the presented MPC controller.
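Constant time headway control admits a compact sketch: the desired gap grows linearly with ego speed, and proportional feedback on the gap and relative-speed errors yields the commanded acceleration. This is the generic formulation; the standstill distance, time gap, and gains below are hypothetical, not the paper's controller.

```python
# Sketch of constant time headway (constant time gap) longitudinal control:
#   desired_gap = d0 + tau * v_ego
#   a_cmd = kp * (gap - desired_gap) + kv * (v_lead - v_ego)
# d0: standstill distance [m], tau: time gap [s], kp/kv: feedback gains.

def headway_accel(gap, v_ego, v_lead, d0=2.0, tau=1.5, kp=0.5, kv=0.8):
    desired_gap = d0 + tau * v_ego          # gap target scales with speed
    return kp * (gap - desired_gap) + kv * (v_lead - v_ego)
```

At the desired gap with matched speeds the command is zero; a closing gap or a slower lead vehicle produces braking, which is the behavior the time-gap policy is meant to guarantee.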


2021 ◽  
Vol 11 (5) ◽  
pp. 2197
Author(s):  
Stefania Santini ◽  
Nicola Albarella ◽  
Vincenzo Maria Arricale ◽  
Renato Brancati ◽  
Aleksandr Sakhnevych

In recent years, autonomous vehicles and advanced driver assistance systems have drawn a great deal of attention from both research and industry because of their demonstrated benefit in reducing the rate of accidents or, at least, their severity. The main flaw of these systems is their poor performance in adverse environmental conditions, due to the reduction of friction, which is mainly related to the state of the road. In this paper, a new model-based technique is proposed for real-time road friction estimation in different environmental conditions. The proposed technique is based on both a bicycle model, to evaluate the state of the vehicle, and a tire Magic Formula model based on a slip-slope approach, to evaluate the potential friction. The results, in terms of the maximum achievable grip value and the operating condition of the vehicle at which that grip value can be reached, are employed in autonomous vehicle-following maneuvers. The effectiveness of the proposed approach is disclosed via an extensive numerical analysis covering a wide range of environmental, traffic, and vehicle kinematic conditions. The results confirm the ability of the approach to properly and automatically adapt the inter-vehicle space gap and to avoid collisions, also in adverse road conditions (e.g., ice, heavy rain).
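The Magic Formula and the slip-slope idea can be sketched as follows: at small slip the normalized force grows roughly linearly (the slip slope), and the peak of the curve approximates the potential friction. The Pacejka coefficients below are generic textbook-style dry-asphalt values, not the parameters identified in the paper.

```python
# Sketch of the Pacejka "Magic Formula" for normalized tire force as a
# function of slip, and a slip sweep that reads off the potential friction
# as the peak of the curve. B, C, D, E are illustrative coefficients.

import math

def magic_formula(slip, B=10.0, C=1.9, D=1.0, E=0.97):
    """Normalized force (force / vertical load) at a given slip ratio."""
    Bs = B * slip
    return D * math.sin(C * math.atan(Bs - E * (Bs - math.atan(Bs))))

def potential_friction(slips=None):
    """Peak of the force curve over a slip sweep: the available grip."""
    slips = slips or [i / 1000 for i in range(0, 301)]
    return max(magic_formula(s) for s in slips)
```

On a low-friction surface the identified D (and hence the peak) would be much lower, which is exactly the quantity the vehicle-following logic uses to widen the space gap.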


2021 ◽  
Vol 2093 (1) ◽  
pp. 012032
Author(s):  
Peide Wang

Abstract With the improvement of vehicle automation, autonomous vehicles have become one of the research hotspots. The key technologies of autonomous vehicles mainly include perception, decision-making, and control. Among them, the environmental perception system, which converts information collected from the physical world into digital signals, is the basis of the hardware architecture of autonomous vehicles. At present, there are two major schools in the field of environmental perception: cameras, dominated by computer vision, and LiDAR. This paper analyzes and compares the two major schools and concludes that multi-sensor fusion is the solution for future autonomous driving.


Author(s):  
Sandra Boric ◽  
Edgar Schiebel ◽  
Christian Schlögl ◽  
Michaela Hildebrandt ◽  
Christina Hofer ◽  
...  

Autonomous driving has become an increasingly relevant issue for policymakers, the industry, service providers, infrastructure companies, and science. This study shows how bibliometrics can be used to identify the major technological aspects of an emerging research field such as autonomous driving. We examine the most influential publications and identify research fronts of scientific activity until 2017 based on a bibliometric literature analysis. Using the science mapping approach, publications in the research field of autonomous driving were retrieved from Web of Science and then structured using the bibliometric software BibTechMon by the AIT (Austrian Institute of Technology). At the time of our analysis, we identified four research fronts in the field of autonomous driving: (I) Autonomous Vehicles and Infrastructure, (II) Driver Assistance Systems, (III) Autonomous Mobile Robots, and (IV) IntraFace, i.e., automated facial image analysis. Researchers were working extensively on technologies that support navigation and the collection of data. Our analysis indicates that research was moving towards autonomous navigation and infrastructure in the urban environment. A noticeable number of publications focused on technologies for environment detection in automated vehicles. Still, research pointed to the technological challenges of making automated driving safe.

