Road Obstacle Detection for Autonomous Driving Based on u-, v-Disparity Histograms and a Risk Map

2019 ◽ Vol 27 (3) ◽ pp. 229–235
Author(s): Joon Woong Lee

2019 ◽ Vol 9 (14) ◽ pp. 2843
Author(s): Pierre Duthon ◽ Michèle Colomb ◽ Frédéric Bernardin

Autonomous driving relies on innovative technologies that must ensure vehicles are driven safely. LiDARs are one of the reference sensors for obstacle detection. However, this technology is affected by adverse weather conditions, especially fog. Different wavelengths (905 nm vs. 1550 nm) are investigated to meet this challenge. The influence of wavelength on light transmission in fog is examined and the results are reported. A theoretical approach, calculating the extinction coefficient for different wavelengths, is compared with spectroradiometer measurements in the range of 350 nm–2450 nm. The experiment took place in the French Cerema PAVIN BP platform for intelligent vehicles, which makes it possible to reproduce controlled fogs of different densities for two types of droplet size distribution. Direct spectroradiometer extinction measurements vary in the same way as the models predict. Finally, the wavelengths for LiDARs should not be chosen on the basis of fog conditions: there is only a small difference (<10%) between the extinction coefficients at 905 nm and 1550 nm for the same emitted power in fog.
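The practical effect of an extinction coefficient on LiDAR range can be sketched with the Beer–Lambert law. The coefficients below are illustrative assumptions, not values from the paper; only the reported <10% gap between the two wavelengths is taken from the abstract:

```python
import math

def transmission(beta_ext, distance_m):
    """Beer-Lambert law: fraction of optical power surviving after
    `distance_m` metres in a medium with extinction coefficient
    `beta_ext` (1/m)."""
    return math.exp(-beta_ext * distance_m)

# Hypothetical extinction coefficients in dense fog (1/m); the paper
# reports <10% difference between the two LiDAR wavelengths.
beta_905 = 0.060
beta_1550 = 0.057  # assumed ~5% lower, within the reported <10% gap

d = 50.0  # one-way range in metres
t905 = transmission(beta_905, d)
t1550 = transmission(beta_1550, d)
print(f"T(905 nm)  at {d:.0f} m: {t905:.3f}")
print(f"T(1550 nm) at {d:.0f} m: {t1550:.3f}")
```

Even with a 5% difference in extinction coefficient, the surviving power fractions remain close, which is consistent with the paper's conclusion that fog alone does not dictate the wavelength choice.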


Author(s):  
Huanbing Gao ◽  
Lei Liu ◽  
Ya Tian ◽  
Shouyin Lu

This paper presents a 3D reconstruction method for road scenes aided by obstacle detection. 3D reconstruction of road scenes can be used in autonomous driving, driver assistance systems, and car navigation systems. However, errors often arise during 3D reconstruction due to occlusion by moving objects in the road scene. The presented method, with obstacle detection feedback, avoids this problem. Firstly, the paper offers a framework for the 3D reconstruction of road scenes by laser scanning and vision. A calibration method based on the location of the horizon is proposed, along with an attitude angle measurement method based on the vanishing point to refine the 3D reconstruction result. Secondly, the reconstruction framework is extended with object recognition that automatically detects and discriminates obstacles in the input video streams using a RANSAC approach and a threshold filter, and localizes them in the 3D model. 3D reconstruction and obstacle detection are tightly integrated and benefit from each other. Experimental results verify the feasibility and practicability of the proposed method.
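A common way a RANSAC approach plus a threshold filter yields obstacle candidates is ground-plane fitting: points far from the fitted plane are treated as obstacles. The following is a minimal sketch of that idea on a synthetic point cloud; the parameters and scene are illustrative assumptions, not the paper's actual pipeline:

```python
import numpy as np

def ransac_ground_plane(points, n_iters=200, dist_thresh=0.05, seed=0):
    """Fit a plane n.x + d = 0 to 3-D points with RANSAC and return
    (plane, inlier_mask). Points off the plane are obstacle candidates."""
    rng = np.random.default_rng(seed)
    best_inliers, best_plane = None, None
    for _ in range(n_iters):
        sample = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(normal)
        if norm < 1e-9:           # degenerate (collinear) sample
            continue
        normal /= norm
        d = -normal @ sample[0]
        inliers = np.abs(points @ normal + d) < dist_thresh
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers, best_plane = inliers, (normal, d)
    return best_plane, best_inliers

# Synthetic scene: a flat ground plane plus a box-shaped obstacle.
rng = np.random.default_rng(1)
ground = np.c_[rng.uniform(0, 10, (500, 2)), rng.normal(0, 0.01, 500)]
box = np.c_[rng.uniform(4, 5, (50, 2)), rng.uniform(0.5, 1.5, 50)]
cloud = np.vstack([ground, box])
_, inlier_mask = ransac_ground_plane(cloud)
obstacles = cloud[~inlier_mask]
print(f"{len(obstacles)} obstacle points detected")
```

The threshold filter here is the distance test against the fitted plane; anything above it survives as an obstacle point to be localized in the 3D model.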


10.5772/56603 ◽ 2013 ◽ Vol 10 (6) ◽ pp. 261
Author(s): Hao Sun ◽ Huanxin Zou ◽ Shilin Zhou ◽ Cheng Wang ◽ Naser El-Sheimy

Author(s): Mingcong Cao ◽ Junmin Wang

In contrast to a single light detection and ranging (LiDAR) system, multi-LiDAR sensors can improve environmental perception for autonomous vehicles. However, an elaborated guideline for multi-LiDAR data processing is absent from the existing literature. This paper presents a systematic solution for multi-LiDAR data processing, which comprises, in order, calibration, filtering, clustering, and classification. As the accuracy of obstacle detection is fundamentally determined by noise filtering and object clustering, this paper proposes a novel filtering algorithm and an improved clustering method within the multi-LiDAR framework. Specifically, the filtering approach is based on occupancy rates (ORs) of sampling points, where ORs are derived from sparse "feature seeds" in each searching space. For clustering, density-based spatial clustering of applications with noise (DBSCAN) is improved with an adaptive searching (AS) algorithm for higher detection accuracy. Moreover, more robust and accurate obstacle detection is achieved by combining AS-DBSCAN with the proposed OR-based filtering. An indoor perception test and an on-road test were conducted on a fully instrumented autonomous hybrid electric vehicle. Experimental results verify the effectiveness of the proposed algorithms, which provide a reliable and applicable solution for obstacle detection.
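The intuition behind occupancy-based filtering is that genuine objects produce dense local returns while noise is sparse. The paper derives ORs from "feature seeds" in each searching space; the sketch below is a much-simplified grid-occupancy analogue of that idea, with all parameters and data invented for illustration:

```python
import numpy as np
from collections import Counter

def occupancy_filter(points, cell=0.2, min_count=3):
    """Drop points whose 2-D grid cell holds fewer than `min_count`
    points: a crude stand-in for occupancy-rate (OR) based filtering,
    where sparsely occupied regions are treated as noise."""
    keys = [tuple(k) for k in np.floor(points[:, :2] / cell).astype(int)]
    counts = Counter(keys)
    keep = np.array([counts[k] >= min_count for k in keys])
    return points[keep]

rng = np.random.default_rng(0)
cluster = rng.normal([2.0, 2.0, 0.5], 0.05, (40, 3))   # dense object returns
noise = rng.uniform(-10, 10, (20, 3))                  # scattered noise returns
cloud = np.vstack([cluster, noise])
filtered = occupancy_filter(cloud)
print(f"kept {len(filtered)} of {len(cloud)} points")
```

After such filtering, a density-based clusterer like DBSCAN operates on far cleaner input, which is the motivation for combining the two stages in the paper.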


Sensors ◽ 2020 ◽ Vol 20 (4) ◽ pp. 956
Author(s): Shuo Chang ◽ Yifan Zhang ◽ Fan Zhang ◽ Xiaotong Zhao ◽ Sai Huang ◽ ...

For autonomous driving, it is important to detect obstacles at all scales accurately for safety. In this paper, we propose a new spatial attention fusion (SAF) method for obstacle detection using mmWave radar and a vision sensor, where the sparsity of radar points is taken into account. The proposed fusion method can be embedded in the feature-extraction stage, leveraging the features of the mmWave radar and the vision sensor effectively. Based on the SAF, an attention weight matrix is generated to fuse the vision features, which differs from concatenation fusion and element-wise additive fusion. Moreover, the proposed SAF can be trained in an end-to-end manner, incorporated with recent deep learning object detection frameworks. In addition, we build a generation model that converts radar points to radar images for neural network training. Numerical results suggest that the newly developed fusion method achieves superior performance on public benchmarks. The source code will be released on GitHub.
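The core difference from concatenation or additive fusion is that the radar branch produces a per-location weight map that rescales the vision feature map. The sketch below shows that mechanism in plain NumPy on tiny tensors; the single-channel mean and sigmoid are illustrative assumptions, not the paper's exact network:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def spatial_attention_fuse(vision_feat, radar_feat):
    """Hypothetical sketch of spatial attention fusion: a radar-derived
    attention map (one weight per spatial cell) rescales the vision
    feature map, instead of concatenating or adding the two branches."""
    # Collapse radar channels to a single-channel spatial response ...
    attn = sigmoid(radar_feat.mean(axis=0, keepdims=True))   # (1, H, W)
    # ... and broadcast it across the vision channels.
    return vision_feat * attn                                # (C, H, W)

C, H, W = 8, 4, 4
rng = np.random.default_rng(0)
vision = rng.normal(size=(C, H, W))
radar = np.full((2, H, W), -10.0)   # no radar return anywhere ...
radar[:, 1, 2] = 10.0               # ... except one strong cell
fused = spatial_attention_fuse(vision, radar)
print(fused.shape)
```

Cells without radar support are suppressed toward zero while the cell with a strong radar return keeps its vision features almost unchanged, which is how the sparsity of radar points can guide the fusion.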


2021 ◽ Vol 11 (16) ◽ pp. 7225
Author(s): Eugenio Tramacere ◽ Sara Luciani ◽ Stefano Feraco ◽ Angelo Bonfitto ◽ Nicola Amati

Self-driving vehicles have experienced increasing research interest in recent decades. Nevertheless, fully autonomous vehicles are still far from being a common means of transport. This paper presents the design and experimental validation of a processor-in-the-loop (PIL) architecture for an autonomous sports car. The considered vehicle is an all-wheel-drive, full-electric, single-seater prototype. The retained PIL architecture includes all the modules required for autonomous driving at the system level: environment perception, trajectory planning, and control. Specifically, the perception pipeline exploits obstacle detection algorithms based on Artificial Intelligence (AI), trajectory planning is based on a modified Rapidly-exploring Random Tree (RRT) algorithm using Dubins curves, and the vehicle is controlled via a Model Predictive Control (MPC) strategy. The PIL layout is first implemented on a low-cost, card-sized computer for fast code verification. The proposed PIL architecture is then compared in terms of performance to an alternative PIL using a high-performance real-time target computing machine. Both PIL architectures use the User Datagram Protocol (UDP) to communicate with a personal computer. The latter PIL architecture is validated in real time using experimental data. Moreover, both are validated against the general autonomous-driving pipeline that runs in parallel on the personal computer during numerical simulation.
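The UDP link between the PIL target and the host PC is a plain datagram exchange. The loopback sketch below shows the pattern; the packet format (`steer=...;throttle=...`) is a hypothetical example, not the paper's actual message layout:

```python
import socket

# Hypothetical UDP exchange between a PIL target and the host PC:
# the target streams a control command as a datagram to the host.
HOST, PORT = "127.0.0.1", 0  # port 0: let the OS pick a free port

host_pc = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
host_pc.bind((HOST, PORT))
addr = host_pc.getsockname()          # address the target sends to

pil_target = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
pil_target.sendto(b"steer=0.10;throttle=0.35", addr)  # command packet

data, sender = host_pc.recvfrom(1024)
print(data.decode())
host_pc.close()
pil_target.close()
```

UDP's low overhead and lack of retransmission suit a real-time control loop, where a stale packet is better dropped than delivered late.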


Author(s): De Jong Yeong ◽ Gustavo Velasco-Hernandez ◽ John Barry ◽ Joseph Walsh

The market for autonomous vehicles (AVs) is expected to experience significant growth over the coming decades and to revolutionize the future of transportation and mobility. An AV is a vehicle capable of perceiving its environment and performing driving tasks safely and efficiently with little or no human intervention, and is anticipated to eventually replace conventional vehicles. Self-driving vehicles employ various sensors to sense and perceive their surroundings and also rely on advances in 5G communication technology to achieve this objective. Sensors are fundamental to the perception of surroundings, and the development of sensor technologies associated with AVs has advanced at a significant pace in recent years. Despite remarkable advancements, sensors can still fail to operate as required due to, for example, hardware defects, noise, and environmental conditions. Hence, it is not desirable to rely on a single sensor for any autonomous driving task. The practical approach shown in recent research is to incorporate multiple complementary sensors to overcome the shortcomings of individual sensors operating independently. This article reviews the technical performance and capabilities of sensors applicable to autonomous vehicles, focusing mainly on vision cameras, LiDAR, and radar sensors. The review also considers the compatibility of sensors with various software systems enabling multi-sensor fusion approaches for obstacle detection. The article concludes by highlighting some of the challenges and possible future research directions.

