Autonomous Vehicles: Vehicle Parameter Estimation Using Variational Bayes and Kinematics

2020, Vol 10 (18), pp. 6317
Author(s): Wilfried Wöber, Georg Novotny, Lars Mehnen, Cristina Olaverri-Monreal

On-board sensory systems in autonomous vehicles make it possible to acquire information about the vehicle itself and about its relevant surroundings. With this information the vehicle actuators are able to follow the corresponding control commands and behave accordingly. Localization is thus a critical feature in autonomous driving to define trajectories to follow and enable maneuvers. Localization approaches using sensor data are mainly based on Bayes filters. Whitebox models used to this end rely on kinematics and vehicle parameters, such as wheel radii, to infer the vehicle’s movement. As a consequence, faulty vehicle parameters lead to poor localization results. On the other hand, blackbox models use motion data to model vehicle behavior without relying on vehicle parameters. Due to their high non-linearity, blackbox approaches outperform whitebox models, but faulty behavior such as overfitting is hard to identify without intensive experiments. In this paper, we extend blackbox models with kinematics by inferring vehicle parameters, thereby transforming blackbox models into whitebox models. The probabilistic perspective of vehicle movement is extended using random variables that represent vehicle parameters. We validated our approach by acquiring and analyzing simulated noisy movement data from mobile robots and vehicles. Results show that it is possible to estimate vehicle parameters with few kinematic assumptions.
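
The parameter-inference idea can be illustrated with a toy example. The sketch below assumes a differential-drive robot, Gaussian velocity noise, and a simple grid-based Bayesian update rather than the authors' variational Bayes scheme; it estimates an unknown wheel radius from noisy motion data.

```python
# Minimal sketch (not the authors' implementation): estimating a wheel radius
# from noisy motion data with a grid-based Bayesian update.
# Assumed kinematics: v = r * (w_left + w_right) / 2, with Gaussian noise on v.
import numpy as np

rng = np.random.default_rng(0)

true_radius = 0.05          # [m], unknown to the estimator
noise_std = 0.02            # [m/s], std of the velocity measurement noise

# Simulated wheel angular rates (encoders) and noisy body velocities
w_left = rng.uniform(5.0, 15.0, size=200)    # [rad/s]
w_right = rng.uniform(5.0, 15.0, size=200)   # [rad/s]
v_meas = true_radius * (w_left + w_right) / 2 + rng.normal(0, noise_std, 200)

# Grid of candidate radii with a flat prior
radii = np.linspace(0.01, 0.10, 500)
log_post = np.zeros_like(radii)

for wl, wr, v in zip(w_left, w_right, v_meas):
    v_pred = radii * (wl + wr) / 2                        # whitebox prediction per candidate
    log_post += -0.5 * ((v - v_pred) / noise_std) ** 2    # Gaussian log-likelihood

post = np.exp(log_post - log_post.max())
post /= post.sum()
print(f"estimated radius: {radii[np.argmax(post)]:.4f} m (true: {true_radius} m)")
```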

2020
Author(s): Huihui Pan, Weichao Sun, Qiming Sun, Huijun Gao

Abstract Environmental perception is one of the key technologies for realizing autonomous vehicles. Autonomous vehicles are often equipped with multiple sensors to form a multi-source environmental perception system. These sensors are very sensitive to light and background conditions, which introduces a variety of global and local fault signals that pose great safety risks to the autonomous driving system during long-term operation. In this paper, a real-time data fusion network with fault diagnosis and fault tolerance mechanisms is designed. By introducing prior features to make the backbone network lightweight, the features of the input data can be extracted accurately in real time. Through the temporal and spatial correlation between sensor data, sensor redundancy is utilized to diagnose the local and global confidence of sensor data in real time, eliminate faulty data, and ensure the accuracy and reliability of data fusion. Experiments show that the network achieves state-of-the-art results in speed and accuracy, and can accurately detect the location of the target even when some sensors are out of focus or malfunctioning.
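
As an illustration of the redundancy idea (an assumed setup, not the paper's fusion network), the sketch below flags a sensor reading that disagrees spatially with the consensus of the sensor set before averaging the trusted readings.

```python
# Minimal sketch: use sensor redundancy to reject a faulty reading before fusion.
# Each sensor reports the same object's 2D position; a reading far from the
# robust consensus (the per-axis median) is given low confidence and excluded.
import numpy as np

def fuse_with_fault_check(readings, gate=1.0):
    """readings: (n_sensors, 2) positions in metres; gate: max allowed
    distance from the consensus before a reading is rejected."""
    readings = np.asarray(readings, dtype=float)
    consensus = np.median(readings, axis=0)               # robust spatial consensus
    dist = np.linalg.norm(readings - consensus, axis=1)
    keep = np.where(dist <= gate)[0]
    trusted = readings[keep] if len(keep) else readings   # fall back if all rejected
    return trusted.mean(axis=0), keep.tolist()

# Camera and radar roughly agree; the out-of-focus third channel does not.
fused, trusted_idx = fuse_with_fault_check([[10.1, 2.0], [10.0, 2.1], [17.5, -3.0]])
print(fused, trusted_idx)   # fused ~ [10.05, 2.05], trusted_idx = [0, 1]
```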


2020, pp. 1391-1414
Author(s): Fritz Ulbrich, Simon Sebastian Rotter, Raul Rojas

Swarm behavior can be applied to many aspects of autonomous driving, e.g. localization, perception, path planning or mapping. A reason for this is that from the information observed by swarm members, e.g. the relative position and speed of other cars, further information can be derived. In this chapter the processing pipeline of a “swarm behavior module” is described step by step, from selecting and abstracting sensor data to generating a plan – a drivable trajectory – for an autonomous car. Such swarm-based path planning can play an important role in scenarios where human drivers and autonomous cars share the road. Experienced human drivers flow with the traffic and adapt their driving to the environment. They do not follow the traffic rules as strictly as computers do, but they often use common sense. Autonomous cars should not provoke dangerous situations by sticking absolutely to the traffic rules; they must adapt their behavior with respect to the other drivers around them and thus merge with the traffic swarm.


Author(s): Sai Rajeev Devaragudi, Bo Chen

Abstract This paper presents a Model Predictive Control (MPC) approach for longitudinal and lateral control of autonomous vehicles with a real-time local path planning algorithm. A heuristic graph search method (the A* algorithm) combined with piecewise Bezier curve generation is implemented for obstacle avoidance in autonomous driving applications. Constant time headway control is implemented for longitudinal motion to track lead vehicles and maintain a constant time gap. MPC is used to control the steering angle and the tractive force of the autonomous vehicle. Furthermore, a new method of developing Advanced Driver Assistance Systems (ADAS) algorithms and vehicle controllers using Model-In-the-Loop (MIL) testing is explored with the use of PreScan®. With PreScan®, various traffic scenarios are modeled and the sensor data are simulated using physics-based sensor models, which are fed to the controller for data processing and motion planning. Obstacle detection and collision avoidance are demonstrated using the presented MPC controller.
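
For reference, the constant time headway policy sets the desired gap as d_des = d0 + h·v_ego. The sketch below implements this spacing law with an illustrative proportional feedback on gap and speed errors; the gains, headway, and standstill distance are assumptions, not values from the paper.

```python
# Minimal sketch of constant time headway (CTH) spacing logic.
def cth_acceleration(gap, v_ego, v_lead, h=1.5, d0=5.0, k_gap=0.3, k_vel=0.8):
    """Return a longitudinal acceleration command [m/s^2].

    gap    -- measured distance to the lead vehicle [m]
    v_ego  -- ego speed [m/s]
    v_lead -- lead vehicle speed [m/s]
    h      -- desired time headway [s]
    d0     -- desired standstill gap [m]
    """
    desired_gap = d0 + h * v_ego          # constant time headway policy
    gap_error = gap - desired_gap         # > 0 means the ego is too far back
    speed_error = v_lead - v_ego
    return k_gap * gap_error + k_vel * speed_error

# Ego at 20 m/s, lead at 18 m/s, 30 m ahead: desired gap is 35 m, so brake gently.
print(cth_acceleration(gap=30.0, v_ego=20.0, v_lead=18.0))   # ≈ -3.1 m/s^2
```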


2020, Vol 2020, pp. 1-13
Author(s): Jun Wang, Li Zhang, Yanjun Huang, Jian Zhao, Francesco Bella

The autonomous vehicle (AV) is regarded as the ultimate solution for future automotive engineering; however, safety still remains the key challenge for the development and commercialization of AVs. Therefore, a comprehensive understanding of the development status of AVs and of reported accidents is becoming urgent. In this article, the levels of automation are reviewed according to the role of the automated system in the autonomous driving process, which affects the frequency of disengagements and accidents when driving in autonomous modes. Additionally, the public on-road AV accident reports are statistically analyzed. The results show that over 3.7 million miles were tested for AVs by various manufacturers from 2014 to 2018. The AVs are frequently taken over by drivers if they deem it necessary, and the disengagement frequency varies significantly, from 2 × 10⁻⁴ to 3 disengagements per mile, across manufacturers. In addition, 128 accidents in 2014–2018 are studied, and about 63% of the total accidents occurred in autonomous mode. A small fraction of the total accidents (∼6%) is directly related to the AVs, while 94% of the accidents were passively initiated by other parties, including pedestrians, cyclists, motorcycles, and conventional vehicles. These safety risks identified during on-road testing, represented by disengagements and actual accidents, indicate that passive accidents caused by other road users are the majority. The capability of AVs to recognize and avoid safety risks caused by other parties and to make safe decisions to prevent possible fatal accidents would significantly improve the safety of AVs. Practical applications: This literature review summarizes the safety-related issues for AVs through theoretical analysis of AV systems and statistical investigation of the disengagement and accident reports for on-road testing, and the findings will help inform future research efforts for AV development.
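
To put the reported disengagement range in perspective, a quick calculation converts disengagements per mile into miles per disengagement (a simple inversion of the figures quoted above).

```python
# Quick arithmetic on the reported disengagement range (2e-4 to 3 per mile).
for rate in (2e-4, 3.0):
    print(f"{rate} disengagements/mile -> {1 / rate:,.1f} miles per disengagement")
# 2e-4 /mile -> 5,000.0 miles per disengagement; 3.0 /mile -> 0.3 miles per disengagement
```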


2018, Vol 66 (2), pp. 183-191
Author(s): Tobias Quack, Michael Bösinger, Frank-Josef Heßeler, Dirk Abel

Abstract One major key to autonomous driving is reliable knowledge about the vehicle’s surroundings. In complex situations like urban intersections, the vehicle’s on-board sensors are often unable to detect and classify all features of the environment. Therefore, high-precision digital maps are widely used to provide the vehicle with additional information. In this article, we introduce a system which makes use of a mobile edge computing (MEC) architecture for computing digital maps on infrastructure-based, distributed computers. In cooperation with the mobile network operator Vodafone, an LTE test field has been implemented at the Aldenhoven Testing Center (ATC). The proving ground thus combines an urban crossing with the MEC capabilities of the LTE test field, so that the developed methods can be tested in a realistic scenario. In the near future the LTE test field will be equipped with the new 5G mobile standard, allowing for fast and reliable exchange of map and sensor data between vehicles and infrastructure.
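
A minimal sketch of the vehicle-side exchange with such an edge map server is shown below; the endpoint, message format, and latency budget are hypothetical illustrations, not details of the ATC deployment.

```python
# Minimal sketch (hypothetical endpoint and payload) of a vehicle uploading its
# latest local observations and fetching the map tile covering its position.
import json
import urllib.request

EDGE_URL = "http://mec-server.local/maps"   # hypothetical edge host

def fetch_map_tile(lat, lon, observations):
    payload = json.dumps({"lat": lat, "lon": lon, "obs": observations}).encode()
    req = urllib.request.Request(
        f"{EDGE_URL}/tile", data=payload,
        headers={"Content-Type": "application/json"}, method="POST")
    with urllib.request.urlopen(req, timeout=0.1) as resp:   # tight latency budget
        return json.loads(resp.read())

# Example call (requires a running edge server):
# tile = fetch_map_tile(50.898, 6.177, [{"type": "stop_line", "x": 3.2, "y": 1.1}])
```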


2020, Vol 34 (08), pp. 13255-13260
Author(s): Mahdi Elhousni, Yecheng Lyu, Ziming Zhang, Xinming Huang

In a world where autonomous cars are becoming increasingly common, creating an adequate infrastructure for this new technology is essential. This includes building and labeling high-definition (HD) maps accurately and efficiently. Today, the process of creating HD maps requires a lot of human input, which takes time and is prone to errors. In this paper, we propose a novel method capable of generating labeled HD maps from raw sensor data. We implemented and tested our method on several urban scenarios using data collected from our test vehicle. The results show that the proposed deep learning based method can produce highly accurate HD maps. This approach speeds up the process of building and labeling HD maps, which can make a meaningful contribution to the deployment of autonomous vehicles.


Author(s): Jiayuan Dong, Emily Lawson, Jack Olsen, Myounghoon Jeon

Driving agents can provide an effective solution to improve drivers’ trust in, and to manage interactions with, autonomous vehicles. Research has focused on voice agents, while few studies have explored robot agents or compared the two. The present study tested two variables, voice gender and agent embodiment, using conversational scripts. Twenty participants experienced autonomous driving in a simulator under four agent conditions and filled out subjective questionnaires on their perception of each agent. Results showed that participants perceived the voice-only female agent as more likeable, more comfortable, and more competent than the other conditions. Their final preference ranking also favored this agent over the others. Interestingly, eye-tracking data showed that embodied agents did not add more visual distraction than the voice-only agents. The results are discussed in relation to traditional gender stereotypes, the uncanny valley, and participants’ gender. This study can contribute to the design of in-vehicle agents for autonomous vehicles, and future studies are planned to further identify the underlying mechanisms of user perception of different agents.


Sensors, 2021, Vol 21 (11), pp. 3783
Author(s): Sumbal Malik, Manzoor Ahmed Khan, Hesham El-Sayed

Sooner than expected, roads will be populated with a plethora of connected and autonomous vehicles serving diverse mobility needs. Rather than being stand-alone, vehicles will be required to cooperate and coordinate with each other, referred to as cooperative driving, to execute mobility tasks properly. Cooperative driving leverages Vehicle-to-Vehicle (V2V) and Vehicle-to-Infrastructure (V2I) communication technologies to carry out two cooperative functionalities: (i) cooperative sensing and (ii) cooperative maneuvering. To better equip readers with background knowledge on the topic, we first provide a detailed taxonomy section describing the underlying concepts and various aspects of cooperation in cooperative driving. In this survey, we review the current solution approaches to cooperation for autonomous vehicles across various cooperative driving applications, i.e., smart car parking, lane change and merge, intersection management, and platooning. The role and functionality of such cooperation become more crucial in platooning use-cases, so we provide further details on platooning and focus on one of its challenges: electing a leader in high-level platooning. Finally, we highlight a range of crucial research gaps and open challenges that need to be addressed before cooperative autonomous vehicles hit the roads. We believe that this survey will assist researchers in better understanding vehicular cooperation, its various scenarios, solution approaches, and challenges.
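
As a concrete illustration of leader election (one simple deterministic scheme, not a protocol prescribed by the survey), the sketch below has every platoon member rank candidates by position in the platoon, V2V link quality, and on-board capability, so all members independently agree on the same leader.

```python
# Minimal sketch: deterministic platoon leader election from broadcast metrics.
from dataclasses import dataclass

@dataclass
class Candidate:
    vehicle_id: str
    position_index: int     # 0 = physically at the head of the platoon
    link_quality: float     # 0..1, averaged V2V link quality
    compute_score: float    # 0..1, on-board compute/sensor capability

def elect_leader(candidates):
    # Prefer the vehicle at the head; break ties by link quality, then compute.
    return min(
        candidates,
        key=lambda c: (c.position_index, -c.link_quality, -c.compute_score),
    )

platoon = [
    Candidate("AV-7", position_index=1, link_quality=0.90, compute_score=0.8),
    Candidate("AV-3", position_index=0, link_quality=0.70, compute_score=0.6),
    Candidate("AV-9", position_index=2, link_quality=0.95, compute_score=0.9),
]
print(elect_leader(platoon).vehicle_id)   # AV-3 leads from the front
```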


Author(s): Gaojian Huang, Christine Petersen, Brandon J. Pitts

Semi-autonomous vehicles still require drivers to occasionally resume manual control. However, drivers of these vehicles may be in different mental states. For example, drivers may be engaged in non-driving related tasks or may exhibit mind-wandering behavior. Also, monitoring monotonous driving environments can result in passive fatigue. Given the potential for different mental states to negatively affect takeover performance, it is critical to understand how mental states affect semi-autonomous takeover. A systematic review was conducted to synthesize the literature on mental states (such as distraction, fatigue, and emotion) and takeover performance. This review focuses specifically on five fatigue studies. Overall, the studies were too few to yield consistent findings, but some suggest that response times to takeover alerts and post-takeover performance may be affected by fatigue. Ultimately, this review may help researchers improve and develop real-time mental state monitoring systems for a wide range of application domains.


2021, Vol 11 (13), pp. 6016
Author(s): Jinsoo Kim, Jeongho Cho

For autonomous vehicles, it is critical to be aware of the driving environment to avoid collisions and drive safely. The recent evolution of convolutional neural networks has contributed significantly to accelerating the development of object detection techniques that enable autonomous vehicles to handle rapid changes in various driving environments. However, collisions in an autonomous driving environment can still occur due to undetected obstacles and various perception problems, particularly occlusion. Thus, we propose a robust object detection algorithm for environments in which objects are truncated or occluded, employing RGB images and light detection and ranging (LiDAR) bird’s eye view (BEV) representations. This structure combines independent detection results obtained in parallel through “you only look once” networks using an RGB image and a height map converted from the BEV representation of LiDAR’s point cloud data (PCD). The region proposal of an object is determined via non-maximum suppression, which suppresses the bounding boxes of adjacent regions. A performance evaluation of the proposed scheme was performed using the KITTI vision benchmark suite dataset. The results demonstrate that the detection accuracy obtained by integrating PCD BEV representations is superior to that obtained with an RGB camera alone. In addition, robustness is improved, as detection accuracy is significantly enhanced even when the target objects are partially occluded from the front view, demonstrating that the proposed algorithm outperforms the conventional RGB-based model.
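
The LiDAR branch's input can be illustrated with a small sketch that converts a point cloud into a BEV height map; the grid extents, cell size, and floor value below are assumptions, not the paper's exact configuration.

```python
# Minimal sketch: convert a LiDAR point cloud to a bird's-eye-view height map,
# where each grid cell stores the maximum point height falling inside it.
import numpy as np

def pointcloud_to_bev_height(points, x_range=(0.0, 70.0), y_range=(-35.0, 35.0), cell=0.1):
    """points: (N, 3) array of x, y, z coordinates in the LiDAR frame [m]."""
    nx = int(round((x_range[1] - x_range[0]) / cell))
    ny = int(round((y_range[1] - y_range[0]) / cell))
    # Initialize below the expected ground height so negative z values survive the max.
    bev = np.full((nx, ny), -3.0, dtype=np.float32)

    # Keep only points inside the grid
    m = ((points[:, 0] >= x_range[0]) & (points[:, 0] < x_range[1]) &
         (points[:, 1] >= y_range[0]) & (points[:, 1] < y_range[1]))
    pts = points[m]

    ix = ((pts[:, 0] - x_range[0]) / cell).astype(int)
    iy = ((pts[:, 1] - y_range[0]) / cell).astype(int)
    np.maximum.at(bev, (ix, iy), pts[:, 2])    # max height per cell
    return bev

# Example usage with a KITTI-style binary scan (x, y, z, intensity as float32):
# bev = pointcloud_to_bev_height(
#     np.fromfile("velodyne.bin", dtype=np.float32).reshape(-1, 4)[:, :3])
```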

