Sensor Fusion-based Online Map Validation for Autonomous Driving

Author(s):  
Sagar Ravi Bhavsar ◽  
Andrei Vatavu ◽  
Timo Rehfeld ◽  
Gunther Krehl
2021 ◽  
Vol 9 (2) ◽  
pp. 731-739
Author(s):  
M. Hyndhavi, et al.

The development of vehicle tracking using sensor fusion is presented in this paper. Advanced driver assistance systems (ADAS) have become more popular in recent years. These systems use sensor information for real-time control. To improve their performance and robustness, especially in the presence of environmental disturbances such as varying lighting and weather conditions, the fusion of sensors has been the center of attention in recent studies. Faced with complex traffic conditions, a single sensor cannot meet the safety requirements of ADAS and autonomous driving. The common environment perception sensors are radar, camera, and lidar, each with its own pros and cons. Sensor fusion is a necessary technology for autonomous driving, providing a better view and understanding of the vehicle's surroundings. We mainly focus on highway scenarios that enable an autonomous car to comfortably follow other cars at various speeds while keeping a secure distance, and we combine the advantages of both sensors through a sensor fusion approach. The radar and vision sensor information is fused to produce robust and accurate measurements. Experimental results comparing tracking with radar alone against sensor fusion of both camera and radar are presented. The algorithm is described along with simulation results obtained using MATLAB.
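Since the abstract does not detail the fusion algorithm itself, the following Python sketch (not the authors' MATLAB implementation) illustrates one common approach: a constant-velocity Kalman filter tracking a lead vehicle's longitudinal range, with radar contributing range and range rate and the camera contributing range only. All matrices, noise values, and measurements below are illustrative assumptions.

import numpy as np

# Minimal 1D constant-velocity Kalman filter fusing radar and camera
# measurements of a lead vehicle's longitudinal distance (illustrative only).

dt = 0.05                              # sensor cycle time [s] (assumed)
F = np.array([[1.0, dt], [0.0, 1.0]])  # state transition for [range, range_rate]
Q = np.diag([0.05, 0.10])              # process noise (assumed)

H_radar = np.array([[1.0, 0.0],        # radar measures range and range rate
                    [0.0, 1.0]])
R_radar = np.diag([0.25, 0.10])        # radar measurement noise (assumed)

H_cam = np.array([[1.0, 0.0]])         # camera measures range only
R_cam = np.array([[1.0]])              # camera depth noise (assumed, larger)

x = np.array([30.0, -1.0])             # initial state: 30 m ahead, closing at 1 m/s
P = np.eye(2) * 10.0                   # initial covariance

def predict(x, P):
    x = F @ x
    P = F @ P @ F.T + Q
    return x, P

def update(x, P, z, H, R):
    y = z - H @ x                      # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)     # Kalman gain
    x = x + K @ y
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

# One fusion cycle: predict, then update with whichever sensors reported.
x, P = predict(x, P)
x, P = update(x, P, np.array([29.8, -1.1]), H_radar, R_radar)  # radar measurement
x, P = update(x, P, np.array([30.4]), H_cam, R_cam)            # camera measurement
print("fused range and range rate:", x)

A radar-only variant simply omits the camera update, which is essentially the comparison the paper reports in simulation.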


Author(s):  
Vijay John ◽  
Seiichi Mita ◽  
Annamalai Lakshmanan ◽  
Ali Boyali ◽  
Simon Thompson

Abstract Visible camera-based semantic segmentation and semantic forecasting are important perception tasks in autonomous driving. In semantic segmentation, the current frame's pixel-level labels are estimated using the current visible frame. In semantic forecasting, the future frame's pixel-level labels are predicted using the current and past visible frames and pixel-level labels. While reporting state-of-the-art accuracy, both of these tasks are limited by the visible camera's susceptibility to varying illumination, adverse weather conditions, and sunlight and headlight glare. In this work, we propose to address these limitations using the deep sensor fusion of the visible and thermal cameras. The proposed sensor fusion framework performs both semantic forecasting and optimal semantic segmentation within a multi-step iterative framework. In the first, or forecasting, step, the framework predicts the semantic map for the next frame. The predicted semantic map is updated in the second step, when the next visible and thermal frames are observed. The updated semantic map is considered the optimal semantic map for the given visible-thermal frame pair. The semantic map forecasting and updating are performed iteratively over time. The estimated semantic maps contain the pedestrian behavior, free space and pedestrian crossing labels. The pedestrian behavior is categorized based on spatial, motion and dynamic orientation information. The proposed framework is validated using the public KAIST dataset. A detailed comparative analysis and ablation study are performed using pixel-level classification and IoU error metrics. The results show that the proposed framework can not only accurately forecast the semantic segmentation map but also accurately update it.
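The abstract describes the forecast-and-update loop only at a high level; the following skeleton is a minimal structural sketch of that two-step iteration, where forecast_model and update_model are hypothetical placeholders for the networks described in the paper and merely stand in so the loop runs end to end.

def forecast_model(past_visible, past_thermal, past_semantic):
    # Step 1 (forecasting): predict the next frame's semantic map from the
    # past visible/thermal frames and past semantic maps.
    return past_semantic[-1] if past_semantic else None  # naive "repeat last map" stand-in

def update_model(predicted_semantic, visible, thermal):
    # Step 2 (update): refine the prediction once the next visible and
    # thermal frames are actually observed.
    return predicted_semantic  # identity stand-in

def run(frames):
    """frames: iterable of (visible, thermal) image pairs from a calibrated rig."""
    past_visible, past_thermal, past_semantic = [], [], []
    optimal_maps = []
    for visible, thermal in frames:
        predicted = forecast_model(past_visible, past_thermal, past_semantic)
        optimal = update_model(predicted, visible, thermal)
        optimal_maps.append(optimal)
        past_visible.append(visible)
        past_thermal.append(thermal)
        past_semantic.append(optimal)  # the updated map feeds the next forecast
    return optimal_maps

The key design point visible in the sketch is that each updated (optimal) map is fed back as input to the next forecasting step, which is what makes the process iterative over time.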


2017 ◽  
Vol 865 ◽  
pp. 429-433
Author(s):  
Sung Bum Park ◽  
Hyeok Chan Kwon ◽  
Dong Hoon Lee

Autonomous cars recognize their surroundings through multiple sensors and make decisions to control the car so that it arrives at its destination without the driver's intervention. In such an environment, if sensor data forgery occurs, it could lead to a critical accident that threatens the life of the driver. In this paper, we discuss an approach for obtaining accurate driving information through a sensor fusion algorithm that is resilient against data forgery and modulation.
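The abstract does not specify the algorithm; the sketch below illustrates one generic resilience idea consistent with its goal, cross-checking redundant sensors and discarding readings that disagree with the consensus before fusing. The sensor names, threshold, and averaging rule are assumptions.

import statistics

# Illustrative consistency check: fuse redundant distance readings only after
# rejecting values that deviate too far from the sensor consensus, which may
# indicate forged or corrupted data.

def resilient_fuse(readings, max_deviation=2.0):
    """readings: dict mapping sensor name to a distance in metres."""
    consensus = statistics.median(readings.values())
    trusted = {name: value for name, value in readings.items()
               if abs(value - consensus) <= max_deviation}
    if not trusted:
        raise ValueError("no sensor agrees with the consensus")
    # simple average of the trusted subset; a real system would weight by noise
    return sum(trusted.values()) / len(trusted), trusted

fused, trusted = resilient_fuse({"radar": 25.1, "lidar": 24.8, "camera": 60.0})
print(fused, sorted(trusted))   # the inconsistent camera value is rejected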


Author(s):  
De Jong Yeong ◽  
Gustavo Velasco-Hernandez ◽  
John Barry ◽  
Joseph Walsh

The market for autonomous vehicles (AV) is expected to experience significant growth over the coming decades and to revolutionize the future of transportation and mobility. An AV is a vehicle capable of perceiving its environment and performing driving tasks safely and efficiently with little or no human intervention, and it is anticipated to eventually replace conventional vehicles. Self-driving vehicles employ various sensors to sense and perceive their surroundings and also rely on advances in 5G communication technology to achieve this objective. Sensors are fundamental to the perception of the surroundings, and the development of sensor technologies associated with AVs has advanced at a significant pace in recent years. Despite remarkable advancements, sensors can still fail to operate as required due to, for example, hardware defects, noise and environmental conditions. Hence, it is not desirable to rely on a single sensor for any autonomous driving task. The practical approach shown in recent research is to incorporate multiple, complementary sensors to overcome the shortcomings of individual sensors operating independently. This article reviews the technical performance and capabilities of sensors applicable to autonomous vehicles, mainly focusing on vision cameras, LiDAR and radar sensors. The review also considers the compatibility of sensors with various software systems enabling the multi-sensor fusion approach for obstacle detection. This review article concludes by highlighting some of the challenges and possible future research directions.
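As a companion to the review's discussion of multi-sensor fusion for obstacle detection, the following sketch shows one simple late-fusion scheme, associating camera and LiDAR detections already projected into a common vehicle frame; the detection format and the 1.5 m association gate are assumptions, not drawn from the article.

from math import hypot

# Late fusion sketch: associate camera and LiDAR obstacle detections expressed
# in a common vehicle frame, then keep matched pairs with a combined confidence.

def late_fuse(camera_dets, lidar_dets, gate=1.5):
    """Each detection: (x, y, confidence), positions in metres in the vehicle frame."""
    fused, used = [], set()
    for cx, cy, cconf in camera_dets:
        best, best_d = None, gate
        for i, (lx, ly, lconf) in enumerate(lidar_dets):
            d = hypot(cx - lx, cy - ly)
            if i not in used and d < best_d:
                best, best_d = i, d
        if best is not None:
            lx, ly, lconf = lidar_dets[best]
            used.add(best)
            # take position from LiDAR (better range), combine the confidences
            fused.append((lx, ly, 1 - (1 - cconf) * (1 - lconf)))
    return fused

print(late_fuse([(10.2, 0.1, 0.8)], [(10.0, 0.0, 0.9), (30.0, 3.0, 0.7)]))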

