Prediction of Steering Angle for Autonomous Vehicles Using Pre-Trained Neural Network

2021 ◽  
Vol 6 (5) ◽  
pp. 171-176
Author(s):  
Jonah Sokipriala

Autonomous driving is a promising research area that would not only revolutionize the transportation industry but could also save thousands of lives. Accurate steering angle prediction plays a crucial role in the development of the autonomous vehicle. This research attempts to design a model able to clone a driver's behavior using transfer learning from a pre-trained VGG16. The results showed that the model used fewer training parameters and achieved a low mean squared error (MSE) of less than 2% without overfitting to the training set, and was hence able to drive on a new road it was not trained on.
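The core idea in this abstract (freeze a pre-trained backbone, train only a small regression head on steering angles) can be sketched without a deep learning framework. Below, a fixed random projection stands in for the frozen VGG16 features; all shapes, data, and the learning rate are illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)

n_samples, n_pixels, n_features = 200, 64, 32
X = rng.normal(size=(n_samples, n_pixels))          # stand-in for input images
angles = X @ rng.normal(size=n_pixels) * 0.01       # stand-in steering angles

W_frozen = rng.normal(size=(n_pixels, n_features))  # "pre-trained" backbone, never updated
feats = np.maximum(X @ W_frozen, 0.0)               # frozen ReLU features

w = np.zeros(n_features)                            # only the head is trainable

def mse(w):
    return np.mean((feats @ w - angles) ** 2)

initial_loss = mse(w)
lr = 1e-4
for _ in range(500):                                # gradient descent on the head only
    grad = 2.0 * feats.T @ (feats @ w - angles) / n_samples
    w -= lr * grad
final_loss = mse(w)
```

Because only the head's parameters are updated, the trainable parameter count stays small, which is the efficiency argument the abstract makes for transfer learning.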

Sensors ◽  
2021 ◽  
Vol 21 (20) ◽  
pp. 6733
Author(s):  
Min-Joong Kim ◽  
Sung-Hun Yu ◽  
Tong-Hyun Kim ◽  
Joo-Uk Kim ◽  
Young-Min Kim

Today, a lot of research on autonomous driving technology is being conducted, and various vehicles with autonomous driving functions, such as ACC (adaptive cruise control), are being released. The autonomous vehicle recognizes obstacles ahead by fusing data from various sensors, such as lidar, radar, and camera sensors. As the number of vehicles equipped with such autonomous driving functions increases, securing safety and reliability becomes a major issue. Recently, Mobileye proposed the RSS (responsibility-sensitive safety) model, a white-box mathematical model, to secure the safety of autonomous vehicles and clarify responsibility in the case of an accident. In this paper, a method of applying the RSS model to a variable-focus-function camera that can cover the recognition range of a lidar sensor and a radar sensor with a single camera sensor is considered. The variables of the RSS model suitable for the variable-focus-function camera were defined, the variable values were determined, and the safe distances for each velocity were derived by applying the determined variable values. In addition, after accounting for the time required to obtain the data and the time required to change the focal length of the camera, it was confirmed that the response time obtained using the derived safe distance was valid.
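The RSS longitudinal safe distance the abstract refers to has a published closed form: the rear vehicle may accelerate for a response time before braking, while the front vehicle brakes as hard as possible. A sketch of that formula follows; the parameter values are illustrative assumptions, not the values calibrated in the paper for the variable-focus camera.

```python
# Illustrative RSS parameters (assumptions, not the paper's calibrated values)
RHO = 0.5          # response time of the rear vehicle [s]
A_MAX_ACCEL = 2.0  # max acceleration during the response time [m/s^2]
B_MIN_BRAKE = 4.0  # minimum braking the rear vehicle guarantees [m/s^2]
B_MAX_BRAKE = 8.0  # maximum braking the front vehicle may apply [m/s^2]

def rss_safe_distance(v_rear, v_front):
    """Minimum longitudinal gap so the rear vehicle can always stop in time."""
    v_worst = v_rear + RHO * A_MAX_ACCEL      # rear speed after the response time
    d = (v_rear * RHO
         + 0.5 * A_MAX_ACCEL * RHO ** 2
         + v_worst ** 2 / (2.0 * B_MIN_BRAKE)
         - v_front ** 2 / (2.0 * B_MAX_BRAKE))
    return max(0.0, d)

# The safe distance grows with speed, which is why the paper derives it
# per velocity bracket for the camera's recognition range:
distances = [rss_safe_distance(v, v) for v in (10.0, 20.0, 30.0)]
```

Deriving such distances per velocity, and comparing them against the time needed to refocus the camera and acquire data, is the validity check the abstract describes.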


Author(s):  
Jacob Terry ◽  
Chris Bachmann

There is some understanding that autonomous vehicles will disrupt public sector policies and the existing transportation industry, but this disruption is often loosely defined and tends to ignore how it would affect governments financially. The primary objective of this paper is to quantify the short-term impact of introducing autonomous vehicles on government finances. The analysis focuses on eight Canadian governments, encompassing four government tiers. Public discourse and academic literature are used to generate nine predicted changes (forecast variables) in future adoption scenarios. Using the predicted rate of autonomous vehicle adoption, the remaining variables are converted into financial changes by combining them with government financial records, infrastructure inventory datasets, and project cost estimates. The results suggest that, while revenue impacts are fairly minimal, and mostly impact Canadian provinces, the cost of implementing the expected vehicle-to-infrastructure (V2I) communication upgrades could be expensive for governments with smaller populations, especially municipalities. The revenue analysis indicates the biggest shift is likely to be a loss in gas tax, which affects federal and provincial revenues, yet this share is relatively small compared with the size of these governments’ budgets. The expense analysis suggests that, although provinces have extensive road networks, the cost of upgrading all of their highways may not be unreasonable compared with their yearly revenue intake. On the other hand, municipalities would require substantial new funds to be able to make the same upgrades.


2021 ◽  
Author(s):  
Md Khairul Islam ◽  
Mst. Nilufa Yeasmin ◽  
Chetna Kaushal ◽  
Md Al Amin ◽  
Md Rakibul Islam ◽  
...  

Deep learning's rapid gains in automation are making it more popular in a variety of complex jobs. The self-driving vehicle is an emerging technology that has the potential to transform the entire planet. The steering control of an automated vehicle is critical to ensuring a safe and secure journey. Consequently, in this study, we developed a methodology for predicting the steering angle only by looking at the front images of a vehicle. In addition, we used an Internet of Things-based system for collecting front images and steering angles. A Raspberry Pi (RP) camera is used in conjunction with an RP processing unit to capture images from vehicles, and the RP processing unit is used to collect the angle associated with each image. Apart from that, we made use of deep learning-based algorithms such as VGG16, ResNet-152, DenseNet-201, and Nvidia's model, all of which were trained using labeled training data. Our models are end-to-end CNN models, which do not require extracting elements such as roads, lanes, or other objects from the data before predicting the steering angle. As a result of our comparative investigation, we can conclude that the Nvidia model's performance was satisfactory, with a Mean Squared Error (MSE) value of 0; although the other pre-trained models also work well, the Nvidia model outperforms them.
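The Nvidia end-to-end model named in the abstract is the published PilotNet-style stack: a 66x200 input passed through five valid convolutions (three 5x5/stride-2, two 3x3/stride-1) before the dense layers. The sketch below computes the resulting feature-map sizes; it follows Nvidia's published architecture and should be read as illustrative, since the authors' exact configuration may differ.

```python
def conv_out(size, kernel, stride):
    """Output length of a 'valid' (no-padding) convolution along one dimension."""
    return (size - kernel) // stride + 1

h, w = 66, 200                                      # PilotNet input height/width
layers = [(5, 2), (5, 2), (5, 2), (3, 1), (3, 1)]   # (kernel, stride) per conv
shapes = []
for k, s in layers:
    h, w = conv_out(h, k, s), conv_out(w, k, s)
    shapes.append((h, w))

flat = 64 * h * w  # the last conv layer has 64 feature maps
```

The stack ends at a 1x18 map, so the flattened vector feeding the dense regression layers has 1152 elements; a single output neuron then predicts the steering angle, with no hand-built lane or road extraction in between.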


2019 ◽  
Vol 9 (23) ◽  
pp. 5126 ◽  
Author(s):  
Betz ◽  
Heilmeier ◽  
Wischnewski ◽  
Stahl ◽  
Lienkamp

Since 2017, a research team from the Technical University of Munich has developed a software stack for autonomous driving. The software was used to participate in the Roborace Season Alpha Championship. The championship aims to achieve autonomous race cars competing with different software stacks against each other. In May 2019, during a software test in Modena, Italy, the greatest danger in autonomous driving became reality: a minor change in environmental influences led an extensively tested software to crash into a barrier at speed. Crashes with autonomous vehicles have happened before, but a detailed explanation of why the software failed and what part of the software was not working correctly is missing in research articles. In this paper we present a general method that can be used to display an autonomous vehicle disengagement and explain in detail what happened. This method is then used to display and explain the crash from Modena. Firstly, a brief introduction into the modular software stack that was used in the Modena event, consisting of three individual parts (perception, planning, and control), is given. Furthermore, the circumstances causing the crash are elaborated in detail. By presenting and explaining in detail which software part failed and contributed to the crash, we can discuss further software improvements. As a result, we present necessary functions that need to be integrated in an autonomous driving software stack to prevent such vehicle behavior causing a fatal crash. In addition, we suggest an enhancement of the current disengagement reports for autonomous driving regarding a detailed explanation of the software part that caused the disengagement. In the outlook of this paper we present two additional software functions for assessing the tire and control performance of the vehicle to enhance the autonomous driving software.


2019 ◽  
Vol 07 (03) ◽  
pp. 183-194
Author(s):  
Yoan Espada ◽  
Nicolas Cuperlier ◽  
Guillaume Bresson ◽  
Olivier Romain

The navigation of autonomous vehicles is confronted with the problem of an efficient place recognition system that is able to handle outdoor environments in the long run. Current Simultaneous Localization and Mapping (SLAM) and place recognition solutions have limitations that prevent them from achieving the performance needed for autonomous driving. This paper suggests handling the problem from another perspective by taking inspiration from biological models. We propose a neural architecture for the localization of an autonomous vehicle based on a neurorobotic model of the place cells (PC) found in the hippocampus of mammals. This model is based on an attentional mechanism and only takes into account visual information from a mono-camera and orientation information to self-localize. It has the advantage of working with a low-resolution camera without the need for calibration. It also does not need a long learning phase, as it uses a one-shot learning system. Such a localization model has already been integrated in a robot control architecture which allows for successful navigation both in indoor and small outdoor environments. The contribution of this paper is to study how the model handles the change of scale by evaluating its performance over much larger outdoor environments. Eight experiments using real data (image and orientation) grabbed by a moving vehicle are studied (coming from the KITTI odometry datasets and datasets taken with VEDECOM vehicles). Results show the strong adaptability to different kinds of environments of this bio-inspired model primarily developed for indoor navigation.
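The one-shot learning behavior the abstract attributes to the place-cell model can be illustrated with a minimal sketch: each place is stored as a single descriptor the first time it is seen, and localization is nearest-neighbour matching with a novelty threshold. The descriptors, threshold, and distance metric here are illustrative stand-ins, not the paper's attentional visual features.

```python
import numpy as np

rng = np.random.default_rng(1)

THRESHOLD = 1.0          # above this distance, a new place cell is recruited
place_memory = []        # one stored descriptor per learned place

def localize(descriptor):
    """Return the index of the recognized place, learning it if novel."""
    if place_memory:
        dists = [np.linalg.norm(descriptor - m) for m in place_memory]
        best = int(np.argmin(dists))
        if dists[best] < THRESHOLD:
            return best                     # recognized an existing place
    place_memory.append(descriptor.copy())  # one-shot learning of a new place
    return len(place_memory) - 1

place_a = rng.normal(size=16)
place_b = place_a + 5.0                     # a clearly distinct scene

first = localize(place_a)                   # learned immediately as place 0
second = localize(place_b)                  # learned immediately as place 1
revisit = localize(place_a + 0.01)          # slight view change, still place 0
```

Because a place is committed to memory after a single exposure, no long training phase is needed, which is the property the paper stresses when scaling from indoor navigation to the KITTI and VEDECOM routes.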


2021 ◽  
Vol 11 (9) ◽  
pp. 2331-2340 ◽  
Author(s):  
Bankole K. Fasanya ◽  
Abosede O. Gbenga-Akinbiola

Artificial Intelligence (AI) is a motivation for full usage of autonomous driving. Many have predicted that autonomous technology would significantly disrupt the transportation industry. This research examines how autonomous driving might impact and disrupt the ridesharing industry and its drivers. The hypothesis is that autonomous vehicles (AV) will negatively impact the ridesharing industry. To examine the full effects of this disruption, we researched current literature on driverless technology cars and the ridesharing industry. Factors examined include: current economics of drivers and vehicles, public perception and acceptance, technological readiness, collaborations, regulations, and liability. Key findings from a host of resources were tabulated to build a case for the proposed hypothesis. The results provide a more comprehensive timeline estimate, a predicted cost estimate of $0.75 per mile by 2040, and documented the collaboration figure among the players that shows the significant investments across different industries. This research shows that the ridesharing industry’s current business model is due for a significant disruption by autonomous driving capabilities. Drivers in the ridesharing industry might likely suffer the most, though not for at least another decade or so. There are many independent factors which must be further scrutinized to develop a more comprehensive understanding of the speed of this disruption. Findings from this study would be applicable when evaluating the future of autonomous vehicles.


2021 ◽  
Vol 34 (1) ◽  
Author(s):  
Ze Liu ◽  
Yingfeng Cai ◽  
Hai Wang ◽  
Long Chen

Abstract Radar and LiDAR are two environmental sensors commonly used in autonomous vehicles. LiDAR is accurate in determining objects’ positions but less accurate than radar at measuring their velocities; conversely, radar is more accurate than LiDAR at measuring objects’ velocities but less accurate at determining their positions, as it has a lower spatial resolution. In order to compensate for the low detection accuracy, incomplete target attributes, and poor environmental adaptability of single sensors such as radar and LiDAR, this paper proposes an effective method for high-precision detection and tracking of targets surrounding an autonomous vehicle. By employing the Unscented Kalman Filter, radar and LiDAR information is effectively fused to achieve high-precision estimation of the position and speed of targets around the autonomous vehicle. Finally, real-vehicle tests under various driving scenarios were carried out. The experimental results show that the proposed sensor fusion method can effectively detect and track vehicle-peripheral targets with high accuracy. Compared with a single sensor, it has obvious advantages and can improve the intelligence level of autonomous cars.
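The fusion principle in this abstract (trust lidar for position, radar for velocity, and let a filter blend both) can be shown compactly. The paper uses an Unscented Kalman Filter to cope with nonlinear radar measurements; the sketch below substitutes a plain linear Kalman filter on a 1-D constant-velocity state, which illustrates the same complementary-sensor idea with illustrative noise values.

```python
import numpy as np

dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]])     # constant-velocity motion model
Q = 0.01 * np.eye(2)                      # process noise

H_lidar = np.array([[1.0, 0.0]])          # lidar measures position
R_lidar = np.array([[0.05]])              # small variance: position is accurate
H_radar = np.array([[0.0, 1.0]])          # radar measures velocity
R_radar = np.array([[0.05]])              # small variance: velocity is accurate

def kf_update(x, P, z, H, R):
    """Standard Kalman measurement update."""
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

x = np.array([0.0, 0.0])                  # initial [position, velocity] guess
P = 10.0 * np.eye(2)                      # large initial uncertainty

x, P = F @ x, F @ P @ F.T + Q             # predict one step
x, P = kf_update(x, P, np.array([5.0]), H_lidar, R_lidar)   # fuse lidar position
x, P = kf_update(x, P, np.array([2.0]), H_radar, R_radar)   # fuse radar velocity
```

After both updates the state estimate carries lidar-grade position and radar-grade velocity, which is the "complete target attributes" benefit the abstract claims over either sensor alone.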


Author(s):  
Sai Rajeev Devaragudi ◽  
Bo Chen

Abstract This paper presents a Model Predictive Control (MPC) approach for longitudinal and lateral control of autonomous vehicles with a real-time local path-planning algorithm. A heuristic graph search method (the A* algorithm) combined with piecewise Bezier curve generation is implemented for obstacle avoidance in autonomous driving applications. Constant time headway control is implemented for longitudinal motion to track lead vehicles and maintain a constant time gap. MPC is used to control the steering angle and the tractive force of the autonomous vehicle. Furthermore, a new method of developing Advanced Driver Assistance Systems (ADAS) algorithms and vehicle controllers using Model-In-the-Loop (MIL) testing is explored with the use of PreScan®. With PreScan®, various traffic scenarios are modeled and the sensor data are simulated using physics-based sensor models, which are fed to the controller for data processing and motion planning. Obstacle detection and collision avoidance are demonstrated using the presented MPC controller.
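The constant-time-headway policy mentioned in the abstract makes the desired gap to the lead vehicle grow linearly with the ego speed. A minimal sketch follows; the abstract's actual longitudinal controller is an MPC, so the simple proportional feedback law and its gains here are illustrative assumptions only.

```python
D0 = 5.0                  # standstill gap [m] (illustrative)
TAU = 1.5                 # time headway [s] (illustrative)
K_GAP, K_REL = 0.3, 0.8   # illustrative feedback gains

def headway_accel(gap, v_ego, v_lead):
    """Commanded acceleration from gap error and relative-speed error."""
    desired_gap = D0 + TAU * v_ego        # constant-time-headway spacing policy
    return K_GAP * (gap - desired_gap) + K_REL * (v_lead - v_ego)

# At the equilibrium gap with matched speeds, no correction is needed:
a_eq = headway_accel(D0 + TAU * 20.0, 20.0, 20.0)
# Too close at matched speed -> the controller commands braking:
a_close = headway_accel(10.0, 20.0, 20.0)
```

Because the desired gap scales with speed, the same policy yields a constant *time* gap at any cruising speed, which is the property the abstract's longitudinal tracking relies on.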


Author(s):  
DoHyun Daniel Yoon ◽  
Beshah Ayalew

An autonomous driving control system that incorporates notions from human-like social driving could facilitate an efficient integration of hybrid traffic where fully autonomous vehicles (AVs) and human-operated vehicles (HOVs) are expected to coexist. This paper aims to develop such an autonomous vehicle control model using social-force concepts, which were originally formulated for modeling the motion of pedestrians in crowds. In this paper, the social force concept is adapted to vehicular traffic, where the constituent navigation forces are defined as a target force, object forces, and lane forces. Then, a nonlinear model predictive control (NMPC) scheme is formulated to mimic the predictive planning behavior of social human drivers, who are considered to optimize the total social force they perceive. The performance of the proposed social-force-based autonomous driving control scheme is demonstrated via simulations of an ego-vehicle in multi-lane road scenarios. From adaptive cruise control (ACC) to smooth lane-changing behaviors, the proposed model provided flexible yet efficient driving control enabling safe navigation in various situations while maintaining reasonable vehicle dynamics.
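The three navigation forces the abstract names (target, object, and lane forces) can be sketched as a single 2-D force sum. The functional forms and coefficients below are illustrative assumptions; the paper embeds its own force model inside an NMPC cost, which this sketch does not attempt to reproduce.

```python
import numpy as np

K_TARGET, K_OBJ, K_LANE = 1.0, 5.0, 0.5   # illustrative coefficients
OBJ_RANGE = 10.0                          # decay length of object repulsion [m]

def social_force(pos, target, obstacles, lane_center_y):
    """Total 2-D navigation force on the ego vehicle."""
    to_target = target - pos
    f = K_TARGET * to_target / np.linalg.norm(to_target)   # target force (unit pull)
    for obs in obstacles:                                  # object forces (repulsive)
        away = pos - obs
        dist = np.linalg.norm(away)
        f += K_OBJ * np.exp(-dist / OBJ_RANGE) * away / dist
    f[1] += K_LANE * (lane_center_y - pos[1])              # lane-keeping force
    return f

pos = np.array([0.0, 0.0])
target = np.array([100.0, 0.0])
f_clear = social_force(pos, target, [], lane_center_y=0.0)
f_obst = social_force(pos, target, [np.array([5.0, 0.0])], lane_center_y=0.0)
```

An obstacle directly ahead reduces (or reverses) the forward component of the total force, and the lateral lane term then decides how the vehicle slides around it, which is how the model produces ACC-like and lane-changing behavior from one force balance.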

