LiDAR and Camera Fusion Approach for Object Distance Estimation in Self-Driving Vehicles

Symmetry ◽  
2020 ◽  
Vol 12 (2) ◽  
pp. 324 ◽  
Author(s):  
G Ajay Kumar ◽  
Jin Hee Lee ◽  
Jongrak Hwang ◽  
Jaehyeong Park ◽  
Sung Hoon Youn ◽  
...  

The fusion of light detection and ranging (LiDAR) and camera data in real time is a crucial process in many applications, such as autonomous driving, industrial automation, and robotics. In autonomous vehicles especially, the efficient fusion of data from these two types of sensors is important for estimating the depth of objects as well as for detecting objects at short and long distances. As both sensors capture different attributes of the environment simultaneously, integrating those attributes with an efficient fusion approach greatly benefits reliable and consistent perception of the environment. This paper presents a method to estimate the distance (depth) between a self-driving car and other vehicles, objects, and signboards on its path using an accurate fusion approach. Based on geometric transformation and projection, low-level sensor fusion was performed between a camera and LiDAR using a 3D marker. The fusion information was then used to estimate the distance of objects detected by the RefineDet detector. Finally, the accuracy and performance of the sensor fusion and distance estimation approach were evaluated through quantitative and qualitative analysis in real-road and simulation scenarios. The proposed low-level sensor fusion, based on computational geometric transformation and projection for object distance estimation, thus proves to be a promising solution for enabling reliable and consistent environment perception for autonomous vehicles.
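As an illustration of the kind of low-level fusion the abstract describes, the Python sketch below projects LiDAR points into the image plane using calibration matrices and takes the median depth of the points falling inside a detected 2D box as the object distance. The intrinsics K, the extrinsics T_cam_lidar, and the box coordinates are assumed placeholder inputs, not the authors' calibration or detector output.

```python
import numpy as np

# Minimal sketch of low-level LiDAR-camera fusion for distance estimation.
# K (3x3 intrinsics) and T_cam_lidar (4x4 extrinsics) are assumed to come from
# a prior calibration step; values and detections here are placeholders.

def project_lidar_to_image(points_lidar, K, T_cam_lidar):
    """Project Nx3 LiDAR points into pixel coordinates; return pixels and depths."""
    pts_h = np.hstack([points_lidar, np.ones((points_lidar.shape[0], 1))])  # Nx4
    pts_cam = (T_cam_lidar @ pts_h.T).T[:, :3]            # points in camera frame
    in_front = pts_cam[:, 2] > 0                          # keep points ahead of camera
    pts_cam = pts_cam[in_front]
    pix = (K @ pts_cam.T).T
    pix = pix[:, :2] / pix[:, 2:3]                        # perspective divide
    return pix, pts_cam[:, 2]                             # (u, v) pixels and depth (m)

def box_distance(box, pixels, depths):
    """Estimate object distance as the median depth of the points inside a 2D box."""
    u1, v1, u2, v2 = box
    mask = (pixels[:, 0] >= u1) & (pixels[:, 0] <= u2) & \
           (pixels[:, 1] >= v1) & (pixels[:, 1] <= v2)
    return float(np.median(depths[mask])) if mask.any() else None
```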

Sensors ◽  
2019 ◽  
Vol 19 (1) ◽  
pp. 161 ◽  
Author(s):  
Junqiao Zhao ◽  
Yewei Huang ◽  
Xudong He ◽  
Shaoming Zhang ◽  
Chen Ye ◽  
...  

Autonomous parking in an indoor parking lot without human intervention is one of the most demanded and challenging tasks of autonomous driving systems. The key to this task is precise real-time indoor localization. However, state-of-the-art visual simultaneous localization and mapping (VSLAM) systems based on low-level visual features suffer in monotonous or texture-less scenes and under poor illumination or dynamic conditions. Additionally, low-level feature-based maps are hard for human beings to use directly. In this paper, we propose a semantic landmark-based robust VSLAM for real-time localization of autonomous vehicles in indoor parking lots. Parking slots are extracted as meaningful landmarks and enriched with confidence levels. We then propose a robust optimization framework that solves the aliasing problem of semantic landmarks by dynamically eliminating suboptimal constraints in the pose graph and correcting erroneous parking-slot associations. As a result, a semantic map of the parking lot, usable by both autonomous driving systems and human beings, is established automatically and robustly. We evaluated the real-time localization performance using multiple autonomous vehicles, and a track-tracing repeatability of 0.3 m was achieved at 10 kph of autonomous driving.
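The robust-optimization idea of dynamically eliminating suboptimal semantic-landmark constraints can be sketched as an iterative outlier-gating loop around a standard pose-graph solve. In the sketch below, the optimize and residual callables are hypothetical stand-ins for a real back end (e.g., g2o or Ceres), and the chi-square gate is an assumed design choice, not the authors' exact criterion.

```python
import numpy as np

# Conceptual sketch: iteratively eliminate pose-graph constraints whose residuals
# indicate an aliased (wrongly associated) parking-slot landmark.
# `optimize(poses, constraints)` and `residual(poses, c)` are hypothetical
# placeholders for a real pose-graph optimizer and its per-edge error.

def robust_pose_graph(poses, constraints, optimize, residual,
                      chi2_thresh=5.99, max_rounds=5):
    active = list(constraints)
    for _ in range(max_rounds):
        poses = optimize(poses, active)                    # standard pose-graph solve
        errs = np.array([residual(poses, c) for c in active])
        keep = errs < chi2_thresh                          # gate on chi-square error
        if keep.all():
            break                                          # no more outlier edges
        active = [c for c, k in zip(active, keep) if k]    # drop suboptimal constraints
    return poses, active
```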


Author(s):  
De Jong Yeong ◽  
Gustavo Velasco-Hernandez ◽  
John Barry ◽  
Joseph Walsh

The market for autonomous vehicles (AV) is expected to experience significant growth over the coming decades and to revolutionize the future of transportation and mobility. An AV is a vehicle that is capable of perceiving its environment and performing driving tasks safely and efficiently with little or no human intervention, and it is anticipated to eventually replace conventional vehicles. Self-driving vehicles employ various sensors to sense and perceive their surroundings and also rely on advances in 5G communication technology to achieve this objective. Sensors are fundamental to the perception of surroundings, and the development of sensor technologies associated with AVs has advanced at a significant pace in recent years. Despite remarkable advancements, sensors can still fail to operate as required due to, for example, hardware defects, noise, and environmental conditions. Hence, it is not desirable to rely on a single sensor for any autonomous driving task. The practical approach shown in recent research is to incorporate multiple, complementary sensors to overcome the shortcomings of individual sensors operating independently. This article reviews the technical performance and capabilities of sensors applicable to autonomous vehicles, focusing mainly on vision cameras, LiDAR, and radar sensors. The review also considers the compatibility of sensors with various software systems enabling the multi-sensor fusion approach for obstacle detection. The article concludes by highlighting some of the challenges and possible future research directions.


Electronics ◽  
2021 ◽  
Vol 10 (4) ◽  
pp. 424
Author(s):  
Vijay John ◽  
Seiichi Mita

Object detection is an important perception task in autonomous driving and advanced driver assistance systems. The visible camera is widely used for perception, but its performance is limited by illumination and environmental variations. For robust vision-based perception, we propose a deep learning framework for effective sensor fusion of the visible camera with complementary sensors. A feature-level sensor fusion technique, using skip connections, is proposed for fusing the visible camera with the millimeter-wave radar and with the thermal camera; the two networks are called RVNet and TVNet, respectively. Each network has two input branches and one output branch. The input branches perform sensor-specific feature extraction, and the extracted features are then fused in the output perception branch using skip connections. The RVNet and the TVNet simultaneously perform sensor-specific feature extraction, feature-level fusion, and object detection within an end-to-end framework. The proposed networks are validated against baseline algorithms on public datasets. The results show that feature-level sensor fusion outperforms baseline early- and late-fusion frameworks.
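A minimal PyTorch sketch of a two-input-branch, one-output-branch network in the spirit of RVNet/TVNet is shown below. The layer sizes, channel counts, and detection head are arbitrary assumptions for illustration, not the published architecture.

```python
import torch
import torch.nn as nn

# Illustrative two-branch, feature-level fusion network: one branch per sensor,
# features concatenated (skip-connection style) into a shared output branch.
# All layer sizes are placeholders, not the RVNet/TVNet specification.

class TwoBranchFusionNet(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        def branch(in_ch):
            return nn.Sequential(
                nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())
        self.camera_branch = branch(3)      # visible camera (RGB)
        self.aux_branch = branch(1)         # radar or thermal map, single channel
        self.fuse = nn.Sequential(
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, n_classes, 1))    # per-pixel detection / classification head

    def forward(self, rgb, aux):
        f_cam = self.camera_branch(rgb)     # sensor-specific feature extraction
        f_aux = self.aux_branch(aux)
        fused = torch.cat([f_cam, f_aux], dim=1)   # feature-level fusion
        return self.fuse(fused)
```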


Sensors ◽  
2020 ◽  
Vol 20 (21) ◽  
pp. 6322
Author(s):  
Razvan-Catalin Miclea ◽  
Ciprian Dughir ◽  
Florin Alexa ◽  
Florin Sandru ◽  
Ioan Silea

Visibility is a critical factor for transportation, whether by air, water, or ground. The biggest trend in the automotive industry is autonomous driving, and the number of autonomous vehicles is expected to increase exponentially, prompting changes in both the industry and the user segment. Unfortunately, these vehicles still have some drawbacks, and one persistent, topical issue is treated in this paper: the visibility distance problem in bad weather conditions, particularly in fog. How, and how quickly, a vehicle can detect objects, obstacles, pedestrians, or traffic signs, especially in poor visibility, determines how the vehicle will behave. In this paper, a new experimental setup is presented for analyzing the effect of fog when laser and LiDAR (light detection and ranging) radiation are used for visibility distance estimation on public roads. Using this setup in the laboratory, the information provided by these measurement systems (laser and LiDAR) is evaluated and compared with results from human observers under the same fog conditions. The goal is to validate and uniformly apply the results regarding visibility distance, based on information arriving from the different systems able to estimate this parameter in foggy weather conditions, and ultimately to notify drivers of unexpected situations. The approach combines stationary and moving systems: the stationary system would be installed on highways or express roads in areas prone to fog, while the moving systems are, or can be, installed directly on vehicles (autonomous as well as non-autonomous).
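For orientation, the sketch below shows one common way such measurements can be turned into a visibility distance: estimate the atmospheric extinction coefficient from the attenuation of a laser return (Beer-Lambert model over a two-way path) and apply Koschmieder's relation with a 2% contrast threshold. The intensities and range are placeholder values, and this is not the authors' processing pipeline.

```python
import math

# Back-of-the-envelope sketch: laser/LiDAR return attenuation -> extinction
# coefficient -> visibility distance. Assumes a Beer-Lambert two-way model and
# Koschmieder's relation V ≈ 3.912 / sigma; inputs are placeholders.

def extinction_coefficient(i_clear, i_fog, range_m):
    """Estimate sigma (1/m) from the ratio of the fog return to the clear-air
    return over a two-way path of length range_m."""
    return -math.log(i_fog / i_clear) / (2.0 * range_m)

def visibility_distance(sigma):
    """Koschmieder's law with a 2% contrast threshold."""
    return 3.912 / sigma

sigma = extinction_coefficient(i_clear=1.0, i_fog=0.25, range_m=20.0)
print(f"sigma = {sigma:.4f} 1/m, visibility ≈ {visibility_distance(sigma):.1f} m")
```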


Sensors ◽  
2021 ◽  
Vol 21 (9) ◽  
pp. 3124
Author(s):  
Hyunjin Bae ◽  
Gu Lee ◽  
Jaeseung Yang ◽  
Gwanjun Shin ◽  
Gyeungho Choi ◽  
...  

In autonomous driving, using a variety of sensors to recognize preceding vehicles at middle and long distances helps improve driving performance and enables the development of various functions. However, if only LiDAR or cameras are used in the recognition stage, it is difficult to obtain the necessary data due to the limitations of each sensor. In this paper, we propose a method of converting vision-tracked data into bird's-eye-view (BEV) coordinates using the equation that projects LiDAR points onto an image, together with a method of fusing the LiDAR and vision-tracked data. The effectiveness of the proposed method is demonstrated by results for detecting the closest in-path vehicle (CIPV) in various situations. In addition, in experiments following the Euro NCAP autonomous emergency braking (AEB) test protocol, using the fusion result improved AEB performance through better perception than using LiDAR alone. The performance of the proposed method was further proven through actual vehicle tests in various scenarios. Consequently, the results show convincingly that the proposed sensor fusion method significantly improves the adaptive cruise control (ACC) function in autonomous maneuvering. We expect this improvement in perception performance to contribute to the overall stability of ACC.
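A rough sketch of the projection-based lifting and association step might look as follows: back-project a tracked box's reference pixel into BEV using the same calibration that maps LiDAR points onto the image, then gate-match it to the nearest LiDAR object. The calibration matrices, the depth source, and the gate distance are assumed placeholders, not the authors' parameters.

```python
import numpy as np

# Sketch: lift a vision-tracked 2D box into BEV coordinates and associate it
# with the nearest LiDAR object. K, T_lidar_cam, depth, and the gate are
# placeholder assumptions for illustration.

def pixel_to_bev(u, v, depth, K, T_lidar_cam):
    """Back-project a pixel (e.g., box bottom-center) at a known depth into the
    LiDAR/BEV frame."""
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])        # normalized viewing ray
    p_cam = ray * depth                                   # point in camera frame (m)
    p_lidar = T_lidar_cam @ np.append(p_cam, 1.0)         # transform to LiDAR frame
    return p_lidar[:2]                                    # (x, y) in the BEV plane

def associate(bev_track, lidar_objects, gate_m=2.0):
    """Match the vision track to the closest LiDAR object within a distance gate."""
    if not len(lidar_objects):
        return None
    d = np.linalg.norm(np.asarray(lidar_objects)[:, :2] - bev_track, axis=1)
    i = int(np.argmin(d))
    return i if d[i] < gate_m else None
```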

