Radar and Camera Sensor Fusion with ROS for Autonomous Driving

Author(s):  
Rahul Kumar ◽  
Sujay Jayashankar

Author(s):  
Sagar Ravi Bhavsar ◽  
Andrei Vatavu ◽  
Timo Rehfeld ◽  
Gunther Krehl

2020 ◽  
Vol 14 ◽  
Author(s):  
Enea Ceolini ◽  
Charlotte Frenkel ◽  
Sumit Bam Shrestha ◽  
Gemma Taverni ◽  
Lyes Khacef ◽  
...  

2021 ◽  
Vol 9 (2) ◽  
pp. 731-739
Author(s):  
M. Hyndhavi et al.

This paper presents the development of vehicle tracking using sensor fusion. Advanced driver assistance systems (ADAS) have become increasingly popular in recent years. These systems use sensor information for real-time control. To improve accuracy and robustness, especially in the presence of environmental noise such as varying lighting and weather conditions, sensor fusion has been the center of attention in recent studies. Faced with complex traffic conditions, a single sensor cannot meet the safety requirements of ADAS and autonomous driving. The common environment perception sensors are radar, camera, and lidar, each with its own pros and cons. Sensor fusion is a necessary technology for autonomous driving because it provides a better view and understanding of the vehicle's surroundings. We focus mainly on highway scenarios that enable an autonomous car to comfortably follow other cars at various speeds while keeping a safe distance, and we combine the advantages of both sensors with a sensor fusion approach. The radar and vision sensor information is fused to produce robust and accurate measurements. A comparison between using only radar sensors and fusing both camera and radar sensors is presented, and the algorithm is described along with simulation results obtained using MATLAB.
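Neither the abstract nor the listing includes the fusion algorithm itself, so the following is only a minimal sketch of one common approach: a linear Kalman filter with a constant-velocity motion model that sequentially updates a single vehicle track with radar and camera position measurements. All class and variable names, noise matrices, and measurement values are illustrative assumptions, not the authors' MATLAB implementation.

```python
import numpy as np

# Minimal constant-velocity Kalman filter fusing radar and camera
# position measurements of a leading vehicle (illustrative only).
class FusionTracker:
    def __init__(self, dt=0.05):
        # State: [x, y, vx, vy]
        self.x = np.zeros(4)
        self.P = np.eye(4) * 10.0
        self.F = np.array([[1, 0, dt, 0],
                           [0, 1, 0, dt],
                           [0, 0, 1,  0],
                           [0, 0, 0,  1]], dtype=float)
        self.Q = np.eye(4) * 0.1          # process noise (assumed)
        self.H = np.array([[1, 0, 0, 0],
                           [0, 1, 0, 0]], dtype=float)
        # Assumed error profiles: radar accurate longitudinally,
        # camera accurate laterally.
        self.R_radar = np.diag([0.5, 2.0])
        self.R_camera = np.diag([2.0, 0.3])

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q

    def update(self, z, R):
        y = z - self.H @ self.x                    # innovation
        S = self.H @ self.P @ self.H.T + R
        K = self.P @ self.H.T @ np.linalg.inv(S)   # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P

tracker = FusionTracker()
tracker.predict()
tracker.update(np.array([20.1, 0.4]), tracker.R_radar)   # radar detection
tracker.update(np.array([19.8, 0.2]), tracker.R_camera)  # camera detection
print(tracker.x)  # fused position/velocity estimate
```

Sequential updates of this kind exploit the complementary error profiles the abstract alludes to: radar constrains longitudinal distance well, while the camera constrains lateral position.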


Author(s):  
Vijay John ◽  
Seiichi Mita ◽  
Annamalai Lakshmanan ◽  
Ali Boyali ◽  
Simon Thompson

Visible camera-based semantic segmentation and semantic forecasting are important perception tasks in autonomous driving. In semantic segmentation, the current frame's pixel-level labels are estimated using the current visible frame. In semantic forecasting, the future frame's pixel-level labels are predicted using the current and past visible frames and pixel-level labels. While reporting state-of-the-art accuracy, both tasks are limited by the visible camera's susceptibility to varying illumination, adverse weather conditions, and sunlight and headlight glare. In this work, we propose to address these limitations using deep sensor fusion of the visible and thermal cameras. The proposed sensor fusion framework performs both semantic forecasting and optimal semantic segmentation within a multi-step iterative framework. In the first, or forecasting, step, the framework predicts the semantic map for the next frame. The predicted semantic map is updated in the second step, when the next visible and thermal frames are observed. The updated semantic map is considered the optimal semantic map for the given visible-thermal frame pair. The semantic map forecasting and updating are performed iteratively over time. The estimated semantic maps contain the pedestrian behavior, free space, and pedestrian crossing labels. The pedestrian behavior is categorized based on spatial, motion, and dynamic orientation information. The proposed framework is validated using the public KAIST dataset. A detailed comparative analysis and ablation study are performed using pixel-level classification and IoU error metrics. The results show that the proposed framework can not only accurately forecast the semantic segmentation map but also accurately update it.
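The abstract describes the framework's control flow (forecast, then update on arrival of the next visible-thermal pair, iterated over time) without giving code; the skeleton below is a hypothetical illustration of that loop only. `forecast_net` and `update_net` stand in for the paper's learned models and are not from the source.

```python
# Illustrative skeleton of the iterative forecast/update loop described
# in the abstract. forecast_net and update_net are hypothetical models.
def run_fusion(frames, forecast_net, update_net, init_semantic_map):
    """frames: iterable of (visible, thermal) image pairs."""
    history = [init_semantic_map]
    optimal_maps = []
    for visible, thermal in frames:
        # Step 1 (forecasting): predict the next frame's semantic map
        # from past maps, before the new sensor data arrives.
        predicted = forecast_net(history)
        # Step 2 (update): refine the prediction once the new visible
        # and thermal frames are observed; this is treated as the
        # optimal semantic map for the pair.
        optimal = update_net(predicted, visible, thermal)
        optimal_maps.append(optimal)
        history.append(optimal)
    return optimal_maps

# Toy usage with trivial stand-in models (strings as "semantic maps"):
maps = run_fusion(
    frames=[("vis0", "thm0"), ("vis1", "thm1")],
    forecast_net=lambda hist: hist[-1],          # naive: repeat last map
    update_net=lambda pred, vis, thm: pred,      # naive: keep prediction
    init_semantic_map="init_map")
print(maps)
```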


2017 ◽  
Vol 865 ◽  
pp. 429-433
Author(s):  
Sung Bum Park ◽  
Hyeok Chan Kwon ◽  
Dong Hoon Lee

Autonomous cars recognize their surroundings through multiple sensors and make decisions to control the car so that it arrives at its destination without the driver's intervention. In such an environment, sensor data forgery could lead to a critical accident that threatens the life of the driver. This paper discusses research on a way to obtain accurate driving information through a sensor fusion algorithm that is resilient against data forgery and manipulation.
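The abstract does not state the algorithm, so the snippet below sketches one generic resilience idea consistent with it: cross-validating redundant sensors against their median and discarding readings that deviate beyond a threshold, so that a single forged sensor cannot corrupt the fused estimate. The sensor names, readings, and threshold are assumptions for illustration.

```python
import statistics

# Illustrative consensus check across redundant sensors: a reading that
# deviates too far from the median of all sensors is treated as possibly
# forged and excluded from the fused estimate. Threshold is an assumption.
def fuse_with_forgery_check(readings, threshold=2.0):
    """readings: dict mapping sensor name -> distance estimate (meters)."""
    consensus = statistics.median(readings.values())
    trusted = {name: v for name, v in readings.items()
               if abs(v - consensus) <= threshold}
    rejected = sorted(set(readings) - set(trusted))
    fused = sum(trusted.values()) / len(trusted)
    return fused, rejected

fused, rejected = fuse_with_forgery_check(
    {"radar": 20.3, "camera": 19.8, "lidar": 20.1, "spoofed": 35.0})
print(fused)     # ~20.07, consensus of the trusted sensors
print(rejected)  # ['spoofed']
```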

