Demo: Attacking Multi-Sensor Fusion based Localization in High-Level Autonomous Driving

Author(s):  
Junjie Shen ◽  
Jun Yeon Won ◽  
Zeyuan Chen ◽  
Qi Alfred Chen


Sensors ◽  
2021 ◽  
Vol 21 (11) ◽  
pp. 3783
Author(s):  
Sumbal Malik ◽  
Manzoor Ahmed Khan ◽  
Hesham El-Sayed

Sooner than expected, roads will be populated with a plethora of connected and autonomous vehicles serving diverse mobility needs. Rather than operating stand-alone, vehicles will be required to cooperate and coordinate with each other, referred to as cooperative driving, to carry out mobility tasks properly. Cooperative driving leverages Vehicle-to-Vehicle (V2V) and Vehicle-to-Infrastructure (V2I) communication technologies to carry out two cooperative functionalities: (i) cooperative sensing and (ii) cooperative maneuvering. To better equip readers with background knowledge on the topic, we first provide a detailed taxonomy section describing the underlying concepts and the various aspects of cooperation in cooperative driving. In this survey, we review current solution approaches to cooperation for autonomous vehicles across various cooperative driving applications, i.e., smart car parking, lane change and merge, intersection management, and platooning. The role and functionality of such cooperation become more crucial in platooning use cases, which is why we provide further details on platooning and focus on one of its challenges, electing a leader in high-level platooning. Following this, we highlight a crucial range of research gaps and open challenges that need to be addressed before cooperative autonomous vehicles hit the roads. We believe that this survey will assist researchers in better understanding vehicular cooperation, its various scenarios, solution approaches, and challenges.


2021 ◽  
Vol 11 (9) ◽  
pp. 3921
Author(s):  
Paloma Carrasco ◽  
Francisco Cuesta ◽  
Rafael Caballero ◽  
Francisco J. Perez-Grau ◽  
Antidio Viguria

The use of unmanned aerial robots has increased exponentially in recent years, and the relevance of industrial applications in environments with degraded satellite signals is rising. This article presents a solution for the 3D localization of aerial robots in such environments. In order to truly use these versatile platforms for added-value cases in these scenarios, a high level of reliability is required. Hence, the proposed solution is based on a probabilistic approach that makes use of a 3D laser scanner, radio sensors, a previously built map of the environment and input odometry, to obtain pose estimations that are computed onboard the aerial platform. Experimental results show the feasibility of the approach in terms of accuracy, robustness and computational efficiency.
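As a rough illustration of this kind of probabilistic, map-based fusion, the sketch below shows one particle-filter update step that combines input odometry with caller-supplied likelihoods for the 3D laser scan match and the radio range readings. The function names, noise values, and state layout are assumptions for illustration only, not the article's actual implementation.

    import numpy as np

    def pf_update(particles, weights, odom_delta, scan_likelihood, radio_likelihood, rng):
        """One particle-filter step: odometry prediction, lidar + radio correction.

        scan_likelihood(pose) and radio_likelihood(pose) are caller-supplied
        functions scoring how well the laser scan match against the prior map
        and the radio range measurements agree with a candidate pose.
        """
        # 1. Prediction: propagate each particle with the odometry increment plus noise.
        noise = rng.normal(scale=[0.05, 0.05, 0.05, 0.01], size=particles.shape)  # x, y, z, yaw
        particles = particles + odom_delta + noise

        # 2. Correction: re-weight particles by their agreement with the onboard sensors.
        for i, pose in enumerate(particles):
            weights[i] *= scan_likelihood(pose) * radio_likelihood(pose)
        weights = weights / (weights.sum() + 1e-12)

        # 3. Resample when the effective sample size collapses.
        if 1.0 / np.sum(weights ** 2) < 0.5 * len(weights):
            idx = rng.choice(len(particles), size=len(particles), p=weights)
            particles, weights = particles[idx], np.full(len(weights), 1.0 / len(weights))

        estimate = np.average(particles, axis=0, weights=weights)  # fused 3D pose
        return particles, weights, estimate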


2021 ◽  
Vol 31 (3) ◽  
pp. 1-26
Author(s):  
Aravind Balakrishnan ◽  
Jaeyoung Lee ◽  
Ashish Gaurav ◽  
Krzysztof Czarnecki ◽  
Sean Sedwards

Reinforcement learning (RL) is an attractive way to implement high-level decision-making policies for autonomous driving, but learning directly from a real vehicle or a high-fidelity simulator is variously infeasible. We therefore consider the problem of transfer reinforcement learning and study how a policy learned in a simple environment using WiseMove can be transferred to our high-fidelity simulator, WiseSim. WiseMove is a framework to study safety and other aspects of RL for autonomous driving. WiseSim accurately reproduces the dynamics and software stack of our real vehicle. We find that the accurately modelled perception errors in WiseSim contribute the most to the transfer problem. These errors, even when naively modelled in WiseMove, yield an RL policy that performs better in WiseSim than a hand-crafted rule-based policy. Applying domain randomization to the environment in WiseMove yields an even better policy. The final RL policy reduces the failures due to perception errors from 10% to 2.75%. We also observe that the RL policy relies significantly less on velocity than the rule-based policy, having learned that its measurement is unreliable.
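A minimal sketch of the domain-randomization idea is given below, assuming a generic gym-style environment interface; the wrapper class, observation keys, and noise ranges are illustrative assumptions and do not reflect the actual WiseMove API.

    import random

    class PerceptionNoiseWrapper:
        """Resamples a perception-error model at the start of every episode so the
        learned policy does not overfit to any single noise level."""

        def __init__(self, env, pos_sigma_range=(0.0, 1.0), vel_dropout_range=(0.0, 0.3)):
            self.env = env
            self.pos_sigma_range = pos_sigma_range
            self.vel_dropout_range = vel_dropout_range

        def reset(self):
            # Randomize the error model for this episode.
            self.pos_sigma = random.uniform(*self.pos_sigma_range)
            self.vel_dropout = random.uniform(*self.vel_dropout_range)
            return self._corrupt(self.env.reset())

        def step(self, action):
            obs, reward, done, info = self.env.step(action)
            return self._corrupt(obs), reward, done, info

        def _corrupt(self, obs):
            # Perturb perceived positions; occasionally drop the velocity reading,
            # mimicking the unreliable velocity measurement noted in the abstract.
            obs = dict(obs)  # hypothetical dict-valued observation
            obs["lead_vehicle_pos"] += random.gauss(0.0, self.pos_sigma)
            if random.random() < self.vel_dropout:
                obs["lead_vehicle_vel"] = None
            return obs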


Author(s):  
Sagar Ravi Bhavsar ◽  
Andrei Vatavu ◽  
Timo Rehfeld ◽  
Gunther Krehl

2021 ◽  
Vol 11 (22) ◽  
pp. 10713
Author(s):  
Dong-Gyu Lee

Autonomous driving is a safety-critical application that requires a high-level understanding of computer vision with real-time inference. In this study, we focus on computational efficiency, an important factor, by improving the running time and performing multiple tasks simultaneously for practical applications. We propose a fast and accurate multi-task learning-based architecture for joint segmentation of the drivable area and lane lines and classification of the scene. An encoder-decoder architecture efficiently handles input frames through a shared representation. A comprehensive understanding of the driving environment is improved by the generalization and regularization arising from the different tasks. The proposed method is trained end-to-end through multi-task learning on the very challenging Berkeley Deep Drive dataset and shows its robustness across the three autonomous driving tasks. Experimental results show that the proposed method outperforms other multi-task learning approaches in both speed and accuracy. The method achieves over 93.81 fps at inference, enabling real-time execution.
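A minimal sketch of such a shared-encoder, multi-head design is shown below in PyTorch; the layer sizes and head structure are placeholders, not the architecture actually used in the paper.

    import torch
    import torch.nn as nn

    class MultiTaskNet(nn.Module):
        def __init__(self, num_scene_classes=4):
            super().__init__()
            self.encoder = nn.Sequential(                      # shared representation
                nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            )
            self.drivable_head = self._decoder()               # drivable-area segmentation
            self.lane_head = self._decoder()                   # lane-line segmentation
            self.scene_head = nn.Sequential(                   # scene classification
                nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, num_scene_classes)
            )

        def _decoder(self):
            return nn.Sequential(
                nn.ConvTranspose2d(64, 32, 2, stride=2), nn.ReLU(),
                nn.ConvTranspose2d(32, 1, 2, stride=2),
            )

        def forward(self, x):
            features = self.encoder(x)
            return self.drivable_head(features), self.lane_head(features), self.scene_head(features)

    # Joint training would simply sum the per-task losses, e.g.
    # loss = bce(drivable, y_da) + bce(lane, y_ll) + ce(scene, y_cls)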


Author(s):  
Francesco Biral ◽  
Enrico Bertolazzi ◽  
Daniele Bortoluzzi ◽  
Paolo Bosetti

In recent years, a great deal of effort has been devoted to the development of autonomous vehicles able to drive over a wide range of speeds in semi-structured and unstructured environments. This article presents and discusses the software framework for Hardware-In-the-Loop (HIL) and Software-In-the-Loop (SIL) analysis that has been designed for developing and testing the control laws and mission functionalities of semi-autonomous and autonomous vehicles. The ultimate goal of this project is to develop a robotic system, named RUMBy, able to autonomously plan and execute accurate optimal manoeuvres both in normal and in critical driving situations, and to serve as a test platform for advanced decision-making and autonomous driving algorithms. RUMBy's hardware is a 1:6 scale gasoline-engine R/C car with onboard telemetry and control systems. RUMBy's software consists of three main modules: the manager module, which coordinates the other modules and takes high-level decisions; the motion planner module, which is based on a Nonlinear Receding Horizon Control (NRHC) algorithm; and the actuation module, which produces the driving commands for the vehicle. The article describes the details of the RUMBy architecture and discusses its modular configuration, which readily allows HIL and SIL tests.
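The three-module structure described above can be summarized schematically as follows; class and method names are illustrative placeholders rather than the actual RUMBy code, and the vehicle interface is assumed.

    import time

    class Manager:
        def decide(self, state):
            # High-level decision: pick a target (e.g. waypoint, desired speed)
            # from mission goals and the current vehicle state.
            return {"target_speed": 5.0, "target_waypoint": (10.0, 2.0)}

    class NRHCPlanner:
        def plan(self, state, goal, horizon=2.0):
            # Solve a nonlinear receding-horizon problem over the next `horizon`
            # seconds; only the first segment of the optimal manoeuvre is applied.
            return {"steer": 0.05, "throttle": 0.3}

    class Actuation:
        def apply(self, command):
            # Convert the planner output into low-level driving commands and send
            # them to the real vehicle or to the HIL/SIL model.
            pass

    def control_loop(vehicle, dt=0.02):
        manager, planner, actuation = Manager(), NRHCPlanner(), Actuation()
        while True:
            state = vehicle.read_state()          # telemetry from onboard sensors
            goal = manager.decide(state)          # high-level decision
            command = planner.plan(state, goal)   # first step of the receding-horizon plan
            actuation.apply(command)              # drive the vehicle
            time.sleep(dt)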


2015 ◽  
Vol 2015 ◽  
pp. 1-9 ◽  
Author(s):  
Mohammad Jalil Piran ◽  
Amjad Ali ◽  
Doug Young Suh

In wireless sensor networks, sensor fusion is employed to integrate the data acquired from diverse sensors into a unified interpretation. The most salient advantage of sensor fusion is that it yields high-level information, in both statistical and definitive terms, that cannot be attained with a single sensor. In this paper, we propose a novel sensor fusion technique based on fuzzy theory for our earlier proposed Cognitive Radio-based Vehicular Ad Hoc and Sensor Networks (CR-VASNET). In the proposed technique, we consider four input sensor readings (antecedents) and one output (consequent). The mobile nodes employed in CR-VASNET are assumed to be equipped with diverse sensors that supply the antecedent variables, namely jerk, collision intensity, temperature, and inclination degree. Crash_Severity is considered the consequent variable. The processing and fusion of the diverse sensory signals are carried out by a fuzzy logic scenario. The accuracy and reliability of the proposed protocol, demonstrated by the simulation results, establish it as an applicable system for reducing the casualty rate of vehicle crashes.
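A toy Mamdani-style version of such a fusion is sketched below; the membership functions, rules, and normalization are illustrative assumptions and do not reproduce the paper's actual fuzzy sets or rule base.

    def tri(x, a, b, c):
        # Triangular membership function on [a, c] peaking at b.
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

    def crash_severity(jerk, collision_intensity, temperature, inclination):
        # Fuzzify each (normalized, 0..1) sensor reading into a "high" degree.
        high = {
            "jerk": tri(jerk, 0.4, 1.0, 1.6),
            "collision": tri(collision_intensity, 0.4, 1.0, 1.6),
            "temperature": tri(temperature, 0.4, 1.0, 1.6),
            "inclination": tri(inclination, 0.4, 1.0, 1.6),
        }
        # Example rules (min for AND, max for OR):
        severe = min(high["jerk"], high["collision"])             # IF jerk high AND collision high THEN severe
        moderate = max(high["temperature"], high["inclination"])  # IF temperature high OR inclination high THEN moderate
        # Defuzzify with a weighted average of the rule outputs.
        total = severe + moderate
        return (severe * 1.0 + moderate * 0.5) / total if total else 0.0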


2021 ◽  
Vol 9 (2) ◽  
pp. 731-739
Author(s):  
M. Hyndhavi et al.

The development of vehicle tracking using sensor fusion is presented in this paper. Advanced driver assistance systems (ADAS) have become more popular in recent years. These systems use sensor information for real-time control. To improve accuracy and robustness, especially in the presence of environmental noise such as varying lighting and weather conditions, sensor fusion has been the center of attention in recent studies. Faced with complex traffic conditions, a single sensor is unable to meet the security requirements of ADAS and autonomous driving. The common environment-perception sensors are radar, camera, and lidar, each with its own pros and cons. Sensor fusion is a necessary technology for autonomous driving, as it provides a better view and understanding of the vehicle's surroundings. We mainly focus on highway scenarios that enable an autonomous car to comfortably follow other cars at various speeds while keeping a secure distance, and we combine the advantages of both sensors through a sensor fusion approach. The radar and vision sensor information is fused to produce robust and accurate measurements. A comparison between using only radar sensors and fusing both camera and radar sensors is presented in the experimental results. The algorithm is described along with simulation results obtained using MATLAB.
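As a rough illustration of this kind of radar-camera fusion, the sketch below applies sequential Kalman-filter updates to a constant-velocity range/range-rate track of a lead vehicle; the measurement models and noise values are assumptions for illustration, not the paper's MATLAB implementation.

    import numpy as np

    def predict(x, P, dt, q=1.0):
        F = np.array([[1, dt], [0, 1]])               # state: [range, range-rate]
        Q = q * np.array([[dt**4 / 4, dt**3 / 2], [dt**3 / 2, dt**2]])
        return F @ x, F @ P @ F.T + Q

    def update(x, P, z, H, R):
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)                # Kalman gain
        x = x + K @ (z - H @ x)
        P = (np.eye(len(x)) - K @ H) @ P
        return x, P

    # Radar measures range and range-rate; the camera provides range only
    # (e.g. from bounding-box geometry) with a larger uncertainty.
    H_radar, R_radar = np.eye(2), np.diag([0.5, 0.2])
    H_cam, R_cam = np.array([[1.0, 0.0]]), np.array([[2.0]])

    x, P = np.array([30.0, -1.0]), np.eye(2) * 10.0
    x, P = predict(x, P, dt=0.05)
    x, P = update(x, P, np.array([29.4, -0.9]), H_radar, R_radar)   # radar update
    x, P = update(x, P, np.array([29.0]), H_cam, R_cam)             # camera update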

