Obstacle Detection for Autonomous Driving Vehicles With Multi-LiDAR Sensor Fusion

Author(s):  
Mingcong Cao ◽  
Junmin Wang

Abstract In contrast to a single light detection and ranging (LiDAR) system, multi-LiDAR sensors can improve environmental perception for autonomous vehicles. However, an elaborated guideline for multi-LiDAR data processing is absent from the existing literature. This paper presents a systematic solution for multi-LiDAR data processing that comprises, in order, calibration, filtering, clustering, and classification. As the accuracy of obstacle detection is fundamentally determined by noise filtering and object clustering, this paper proposes a novel filtering algorithm and an improved clustering method within the multi-LiDAR framework. Specifically, the filtering approach is based on the occupancy rates (ORs) of sampling points, where the ORs are derived from sparse “feature seeds” in each searching space. For clustering, density-based spatial clustering of applications with noise (DBSCAN) is improved with an adaptive searching (AS) algorithm for higher detection accuracy. Moreover, combining AS-DBSCAN with the proposed OR-based filtering yields more robust and accurate obstacle detection. An indoor perception test and an on-road test were conducted on a fully instrumented autonomous hybrid electric vehicle. Experimental results verify the effectiveness of the proposed algorithms, which provide a reliable and applicable solution for obstacle detection.
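
As a concrete illustration of the clustering stage, the sketch below runs DBSCAN with a range-dependent neighborhood radius. The abstract does not spell out the OR-based filter or the exact adaptive searching rule, so this is only a minimal stand-in built on a common heuristic: LiDAR returns grow sparser with distance, so the search radius is widened for farther range bands. The band limits, `base_eps`, and `eps_gain` are illustrative values, not the paper's parameters.

```python
# Illustrative range-adaptive DBSCAN clustering for LiDAR points
# (a heuristic stand-in for the paper's AS-DBSCAN, not its exact rule).
import numpy as np
from sklearn.cluster import DBSCAN

def cluster_scan(points, bands=((0, 15), (15, 30), (30, 60)),
                 base_eps=0.4, eps_gain=0.01, min_samples=5):
    """points: Nx3 LiDAR returns. Runs DBSCAN per range band with a
    band-specific eps, approximating an adaptive search radius."""
    ranges = np.linalg.norm(points[:, :2], axis=1)
    labels = np.full(len(points), -1, dtype=int)
    next_label = 0
    for lo, hi in bands:
        mask = (ranges >= lo) & (ranges < hi)
        if mask.sum() < min_samples:
            continue
        eps = base_eps + eps_gain * lo      # coarser radius for farther bands
        band = DBSCAN(eps=eps, min_samples=min_samples).fit(points[mask])
        band_labels = band.labels_.copy()
        band_labels[band_labels >= 0] += next_label   # keep labels globally unique
        labels[mask] = band_labels
        next_label = max(next_label, labels.max() + 1)
    return labels   # -1 marks noise, as in standard DBSCAN
```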

2021 ◽  
Vol 11 (13) ◽  
pp. 6016
Author(s):  
Jinsoo Kim ◽  
Jeongho Cho

For autonomous vehicles, it is critical to be aware of the driving environment to avoid collisions and drive safely. The recent evolution of convolutional neural networks has contributed significantly to accelerating the development of object detection techniques that enable autonomous vehicles to handle rapid changes in various driving environments. However, collisions in an autonomous driving environment can still occur due to undetected obstacles and various perception problems, particularly occlusion. Thus, we propose a robust object detection algorithm for environments in which objects are truncated or occluded, employing RGB image and light detection and ranging (LiDAR) bird’s eye view (BEV) representations. This structure combines independent detection results obtained in parallel through “you only look once” networks using an RGB image and a height map converted from the BEV representations of LiDAR’s point cloud data (PCD). The region proposal of an object is determined via non-maximum suppression, which suppresses the bounding boxes of adjacent regions. A performance evaluation of the proposed scheme was performed using the KITTI vision benchmark suite dataset. The results demonstrate that the detection accuracy achieved by integrating PCD BEV representations is superior to that obtained with an RGB camera alone. In addition, robustness is improved: detection accuracy is significantly enhanced even when the target objects are partially occluded when viewed from the front, demonstrating that the proposed algorithm outperforms the conventional RGB-based model.
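
The fusion step described above hinges on non-maximum suppression over the pooled boxes from the RGB and BEV branches. Below is a minimal IoU-based NMS sketch; the box format, score handling, and threshold are assumptions for illustration rather than the paper's exact procedure.

```python
# Minimal IoU-based non-maximum suppression over pooled detections
# from the two YOLO branches (illustrative format and threshold).
import numpy as np

def iou(a, b):
    """a, b: [x1, y1, x2, y2] axis-aligned boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def nms(boxes, scores, iou_thresh=0.5):
    """Greedily keep the highest-scoring box, drop overlapping neighbors."""
    order = np.argsort(scores)[::-1]
    keep = []
    while len(order) > 0:
        i = order[0]
        keep.append(i)
        order = order[1:][[iou(boxes[i], boxes[j]) < iou_thresh
                           for j in order[1:]]]
    return keep   # indices of the surviving region proposals
```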


2021 ◽  
Vol 11 (16) ◽  
pp. 7225
Author(s):  
Eugenio Tramacere ◽  
Sara Luciani ◽  
Stefano Feraco ◽  
Angelo Bonfitto ◽  
Nicola Amati

Self-driving vehicles have attracted increasing research interest in recent decades. Nevertheless, fully autonomous vehicles are still far from being a common means of transport. This paper presents the design and experimental validation of a processor-in-the-loop (PIL) architecture for an autonomous sports car. The considered vehicle is an all-wheel-drive, full-electric, single-seater prototype. The retained PIL architecture includes all the modules required for autonomous driving at the system level: environment perception, trajectory planning, and control. Specifically, the perception pipeline exploits obstacle detection algorithms based on Artificial Intelligence (AI), trajectory planning relies on a modified Rapidly-exploring Random Tree (RRT) algorithm that exploits Dubins curves, and the vehicle is controlled via a Model Predictive Control (MPC) strategy. The considered PIL layout is implemented first on a low-cost card-sized computer for fast code verification purposes. Furthermore, the proposed PIL architecture is compared in terms of performance to an alternative PIL based on a high-performance real-time target computing machine. Both PIL architectures communicate with a personal computer via the User Datagram Protocol (UDP). The latter PIL architecture is validated in real time using experimental data, and both are validated against the general autonomous-driving pipeline that runs in parallel on the personal computer during numerical simulation.
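
To make the communication layer concrete, here is a minimal sketch of a UDP exchange such as the one linking a PIL target to the personal computer. The packet layout (four little-endian float64 values standing in for a control command), the addresses, and the ports are invented for illustration; the authors' actual message format is not given in the abstract.

```python
# Minimal sketch of a UDP link between a PIL target board and a host PC.
# Packet layout, addresses, and ports below are illustrative assumptions.
import socket
import struct

HOST_PC = ("192.168.1.10", 5005)   # assumed address of the simulation PC

def send_control(sock, steering, throttle, wp_x, wp_y):
    """Pack a control command (4 float64) and ship it as one datagram."""
    payload = struct.pack("<4d", steering, throttle, wp_x, wp_y)
    sock.sendto(payload, HOST_PC)

def recv_state(sock, bufsize=1024):
    """Blocking read of the vehicle state echoed back by the host."""
    data, _addr = sock.recvfrom(bufsize)
    return struct.unpack("<%dd" % (len(data) // 8), data)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", 5006))       # assumed listening port on the target
```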


Author(s):  
De Jong Yeong ◽  
Gustavo Velasco-Hernandez ◽  
John Barry ◽  
Joseph Walsh

The market for autonomous vehicles (AV) is expected to experience significant growth over the coming decades and to revolutionize the future of transportation and mobility. The AV is a vehicle that is capable of perceiving its environment and performing driving tasks safely and efficiently with little or no human intervention, and it is anticipated to eventually replace conventional vehicles. Self-driving vehicles employ various sensors to sense and perceive their surroundings and also rely on advances in 5G communication technology to achieve this objective. Sensors are fundamental to the perception of surroundings, and the development of sensor technologies associated with AVs has advanced at a significant pace in recent years. Despite remarkable advancements, sensors can still fail to operate as required due to, for example, hardware defects, noise, and environmental conditions. Hence, it is not desirable to rely on a single sensor for any autonomous driving task. The practical approach demonstrated in recent research is to incorporate multiple, complementary sensors to overcome the shortcomings of individual sensors operating independently. This article reviews the technical performance and capabilities of sensors applicable to autonomous vehicles, mainly focusing on vision cameras, LiDAR, and Radar sensors. The review also considers the compatibility of sensors with various software systems enabling the multi-sensor fusion approach for obstacle detection. The article concludes by highlighting some of the challenges and possible future research directions.


2021 ◽  
Vol 34 (1) ◽  
Author(s):  
Ze Liu ◽  
Yingfeng Cai ◽  
Hai Wang ◽  
Long Chen

Abstract Radar and LiDAR are two environmental sensors commonly used in autonomous vehicles. LiDARs are accurate in determining objects’ positions but significantly less accurate than Radars at measuring their velocities; conversely, Radars measure object velocities more accurately but determine positions less accurately because of their lower spatial resolution. To compensate for the low detection accuracy, incomplete target attributes, and poor environmental adaptability of single sensors such as Radar and LiDAR, this paper proposes an effective method for high-precision detection and tracking of targets surrounding an autonomous vehicle. By employing the Unscented Kalman Filter, Radar and LiDAR information is effectively fused to achieve high-precision estimates of the position and velocity of targets around the autonomous vehicle. Finally, real-vehicle tests were carried out in various driving environment scenarios. The experimental results show that the proposed sensor fusion method can effectively detect and track targets around the vehicle with high accuracy. Compared with a single sensor, it has obvious advantages and can improve the intelligence level of autonomous cars.
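
A minimal sketch of the sequential UKF fusion idea is shown below, using the filterpy library: the LiDAR measurement updates the position components and the Radar measurement updates the velocity components of a constant-velocity state. The measurement models are deliberately simplified (real automotive Radar reports range/range-rate rather than Cartesian velocity), and the noise levels are placeholders, not the paper's tuned values.

```python
# Sketch of sequential UKF fusion with filterpy (assumed available).
# State x = [px, py, vx, vy]; LiDAR updates position, Radar updates velocity.
import numpy as np
from filterpy.kalman import UnscentedKalmanFilter, MerweScaledSigmaPoints

dt = 0.1                                    # assumed sensor cycle time

def fx(x, dt):
    """Constant-velocity motion model."""
    px, py, vx, vy = x
    return np.array([px + vx * dt, py + vy * dt, vx, vy])

def hx_lidar(x):
    return x[:2]                            # LiDAR observes position only

def hx_radar(x):
    return x[2:]                            # simplified: Radar observes velocity

points = MerweScaledSigmaPoints(n=4, alpha=0.1, beta=2.0, kappa=0.0)
ukf = UnscentedKalmanFilter(dim_x=4, dim_z=2, dt=dt,
                            fx=fx, hx=hx_lidar, points=points)
ukf.x = np.zeros(4)
ukf.P *= 10.0                               # vague initial state
R_lidar = np.diag([0.05, 0.05])             # placeholder: precise positions
R_radar = np.diag([0.10, 0.10])             # placeholder: precise velocities

def fuse_step(z_lidar, z_radar):
    """One predict step followed by two sequential measurement updates."""
    ukf.predict()
    ukf.update(z_lidar, R=R_lidar, hx=hx_lidar)
    ukf.update(z_radar, R=R_radar, hx=hx_radar)
    return ukf.x.copy()                     # fused position and velocity
```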


Author(s):  
Sai Rajeev Devaragudi ◽  
Bo Chen

Abstract This paper presents a Model Predictive Control (MPC) approach for longitudinal and lateral control of autonomous vehicles with a real-time local path planning algorithm. A heuristic graph search method (the A* algorithm) combined with piecewise Bezier curve generation is implemented for obstacle avoidance in autonomous driving applications. Constant time headway control is implemented for longitudinal motion to track lead vehicles and maintain a constant time gap. MPC is used to control the steering angle and the tractive force of the autonomous vehicle. Furthermore, a new method of developing Advanced Driver Assistance Systems (ADAS) algorithms and vehicle controllers using Model-In-the-Loop (MIL) testing is explored with the use of PreScan®. With PreScan®, various traffic scenarios are modeled and the sensor data are simulated by physics-based sensor models, which are fed to the controller for data processing and motion planning. Obstacle detection and collision avoidance are demonstrated using the presented MPC controller.
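
To ground the planning stage, the sketch below pairs a textbook A* grid search with a constant time headway spacing policy. The occupancy-grid formulation, eight-connected moves, Euclidean heuristic, and headway constants are generic assumptions; the Bezier smoothing and MPC layers described above are omitted.

```python
# Textbook A* on an occupancy grid plus a constant time headway policy
# (generic illustrations of the two planning ingredients named above).
import heapq
import itertools
import math

def desired_gap(v_ego, t_headway=1.5, d_min=2.0):
    """Constant time headway spacing: desired gap grows linearly with speed."""
    return d_min + t_headway * v_ego

def astar(grid, start, goal):
    """grid: 2-D list, 0 = free, 1 = obstacle; start, goal: (row, col)."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: math.dist(p, goal)          # admissible Euclidean heuristic
    tie = itertools.count()                   # breaks ties in the heap
    open_set = [(h(start), 0.0, next(tie), start, None)]
    came_from, g_best = {}, {start: 0.0}
    while open_set:
        _f, g, _i, cur, parent = heapq.heappop(open_set)
        if cur in came_from:                  # already expanded
            continue
        came_from[cur] = parent
        if cur == goal:                       # walk parents back to the start
            path = []
            while cur is not None:
                path.append(cur)
                cur = came_from[cur]
            return path[::-1]
        r, c = cur
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                nb = (r + dr, c + dc)
                if (dr, dc) == (0, 0):
                    continue
                if not (0 <= nb[0] < rows and 0 <= nb[1] < cols):
                    continue
                if grid[nb[0]][nb[1]]:        # cell occupied by an obstacle
                    continue
                ng = g + math.hypot(dr, dc)
                if ng < g_best.get(nb, float("inf")):
                    g_best[nb] = ng
                    heapq.heappush(open_set,
                                   (ng + h(nb), ng, next(tie), nb, cur))
    return None                               # goal unreachable
```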


Sensors ◽  
2018 ◽  
Vol 18 (12) ◽  
pp. 4158 ◽  
Author(s):  
Yichao Cai ◽  
Dachuan Li ◽  
Xiao Zhou ◽  
Xingang Mou

Environment perception is one of the major issues in autonomous driving systems. In particular, effective and robust drivable road region detection remains a challenge for autonomous vehicles in multi-lane roads, intersections, and unstructured road environments. In this paper, a drivable road region detection approach based on computer vision and neural networks is proposed for fixed-route autonomous vehicles (e.g., shuttles, buses, and other vehicles operating on fixed routes), using a vehicle-mounted camera, a route map, and real-time vehicle location. The key idea of the proposed approach is to fuse an image with its corresponding local route map to obtain the map-fusion image (MFI), in which the image and the route map complement each other. The information in the image can be utilized in road regions with rich features, while the local route map acts as a critical heuristic that enables robust drivable road region detection in areas without clear lane markings or borders. A neural network model constructed upon Convolutional Neural Networks (CNNs), namely FCN-VGG16, is utilized to extract the drivable road region from the fused MFI. The proposed approach is validated using real-world driving scenario videos captured by an industrial camera mounted on a testing vehicle. Experiments demonstrate that the proposed approach outperforms the conventional approach, which uses non-fused images, in terms of detection accuracy and robustness, and it achieves desirable robustness against adverse illumination conditions and pavement appearance, as well as projection and map-fusion errors.
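
Construction of the MFI can be illustrated very simply: the route map, once projected into the camera frame, is appended to the RGB image as an extra input channel for the network. The sketch below assumes the projection has already been done and that a four-channel input is the fusion choice; both are illustrative assumptions, as the abstract does not fix these details.

```python
# Sketch of building a map-fusion image (MFI) by stacking a projected
# route-map raster onto the RGB image as a fourth channel (assumed scheme).
import numpy as np

def build_mfi(rgb, route_mask):
    """rgb: HxWx3 uint8 image; route_mask: HxW {0,1} drivable-route raster
    already projected into the image plane."""
    assert rgb.shape[:2] == route_mask.shape
    mfi = np.dstack([rgb.astype(np.float32) / 255.0,
                     route_mask.astype(np.float32)])
    return mfi   # HxWx4 input for an FCN-VGG16-style network
```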


Author(s):  
Stefano Feraco ◽  
Angelo Bonfitto ◽  
Nicola Amati ◽  
Andrea Tonoli

This paper presents a redundant multi-object detection method for autonomous driving that exploits a combination of Light Detection and Ranging (LiDAR) and stereocamera sensors to detect different obstacles. These sensors are used in distinct perception pipelines within a custom hardware/software architecture deployed on a self-driving electric racing vehicle. The creation of a local map with respect to the vehicle position consequently enables the development of further local trajectory planning algorithms. The LiDAR-based algorithm exploits segmentation of point clouds for ground filtering and obstacle detection. The stereocamera-based perception pipeline is based on a Single Shot Detector using a deep learning neural network. The presented algorithm is experimentally validated on the instrumented vehicle during different driving maneuvers.
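
For the ground-filtering step, a common point cloud approach is to fit the dominant plane with RANSAC and treat the off-plane points as obstacle candidates. The sketch below shows this with Open3D's plane segmentation; the distance threshold and iteration count are assumed values, and the paper's actual segmentation method may differ.

```python
# Minimal RANSAC-based ground removal with Open3D (v0.10+ API assumed);
# thresholds are illustrative, not the paper's parameters.
import numpy as np
import open3d as o3d

def remove_ground(points, dist_thresh=0.15):
    """points: Nx3 array of LiDAR returns in the vehicle frame."""
    pcd = o3d.geometry.PointCloud()
    pcd.points = o3d.utility.Vector3dVector(points)
    _plane, inliers = pcd.segment_plane(distance_threshold=dist_thresh,
                                        ransac_n=3,
                                        num_iterations=200)
    obstacles = pcd.select_by_index(inliers, invert=True)  # drop ground plane
    return np.asarray(obstacles.points)                    # obstacle candidates
```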


Sensors ◽  
2019 ◽  
Vol 19 (20) ◽  
pp. 4357 ◽  
Author(s):  
Babak Shahian Jahromi ◽  
Theja Tulabandhula ◽  
Sabri Cetin

Many sensor fusion frameworks have been proposed in the literature using different combinations and configurations of sensors and fusion methods. Most of the focus has been on improving accuracy; however, the feasibility of implementing these frameworks in an autonomous vehicle is less explored. Some fusion architectures perform very well in lab conditions using powerful computational resources; however, in real-world applications, they cannot be implemented on an embedded edge computer due to their high cost and computational requirements. We propose a new hybrid multi-sensor fusion pipeline configuration that performs environment perception for autonomous vehicles, including road segmentation, obstacle detection, and tracking. This fusion framework uses a proposed encoder-decoder-based Fully Convolutional Neural Network (FCNx) and a traditional Extended Kalman Filter (EKF) nonlinear state estimator. It also uses a configuration of camera, LiDAR, and radar sensors that is best suited to each fusion method. The goal of this hybrid framework is to provide a cost-effective, lightweight, modular, and robust (in case of a sensor failure) fusion system. The FCNx algorithm improves road detection accuracy over benchmark models while maintaining the real-time efficiency required to run on an autonomous vehicle's embedded computer. Tested on over 3K road scenes, our fusion algorithm shows better performance in various environment scenarios compared to baseline benchmark networks. Moreover, the algorithm is implemented in a vehicle and tested using actual sensor data collected from it, performing real-time environment perception.
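
Since the abstract does not disclose the FCNx architecture, the sketch below shows only a generic encoder-decoder segmentation skeleton in PyTorch of the kind such a road-segmentation network might follow. Layer widths, depths, and the two-class output are arbitrary placeholders.

```python
# Generic encoder-decoder segmentation skeleton standing in for FCNx
# (the real architecture is not specified in the abstract).
import torch
import torch.nn as nn

class TinyFCN(nn.Module):
    def __init__(self, in_ch=3, n_classes=2):
        super().__init__()
        self.enc = nn.Sequential(                       # downsample 4x
            nn.Conv2d(in_ch, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.dec = nn.Sequential(                       # upsample back 4x
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, n_classes, 4, stride=2, padding=1),
        )

    def forward(self, x):
        return self.dec(self.enc(x))    # per-pixel road / not-road logits

logits = TinyFCN()(torch.randn(1, 3, 256, 512))   # -> (1, 2, 256, 512)
```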


2020 ◽  
Author(s):  
Said Easa ◽  
Yang Ma ◽  
Ashraf Elshorbagy ◽  
Ahmed Shaker ◽  
Songnian Li ◽  
...  

The three main elements of autonomous vehicles (AV) are orientation, visibility, and decision. This chapter presents an overview of the implementation of visibility-based technologies and methodologies. The chapter first presents two fundamental aspects that are necessary for understanding the main contents. The first aspect is highway geometric design as it relates to sight distance and highway alignment. The second aspect is mathematical basics, including coordinate transformation and visual space segmentation. Details on the Light Detection and Ranging (Lidar) system, which represents the ‘eye’ of the AV, are presented. In particular, a new Lidar 3D mapping system that can be operated on different platforms and in different modes to support a new mapping scheme is described. The visibility methodologies are of two types: infrastructure visibility mainly addresses high-precision maps and sight obstacle detection, while traffic visibility (vehicles, pedestrians, and cyclists) addresses the identification of critical positions and visibility estimation. Then, an overview of the decision element (path planning and intelligent car-following) for the movement of the AV is presented. The chapter provides important information for researchers and therefore should help to advance road safety for autonomous vehicles.
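
As a worked example of the coordinate-transformation basics the chapter builds on, the snippet below maps Lidar points from the sensor frame to the vehicle frame with a homogeneous rigid-body transform. A yaw-only rotation is used for brevity, and the mounting pose values are made up.

```python
# Worked example: sensor-frame to vehicle-frame homogeneous transform
# (yaw-only rotation for brevity; mounting pose values are illustrative).
import numpy as np

def make_transform(yaw, tx, ty, tz):
    """4x4 homogeneous transform: rotate about z by yaw, then translate."""
    c, s = np.cos(yaw), np.sin(yaw)
    return np.array([[c, -s, 0, tx],
                     [s,  c, 0, ty],
                     [0,  0, 1, tz],
                     [0,  0, 0,  1]])

T_vehicle_lidar = make_transform(yaw=np.deg2rad(1.5), tx=1.2, ty=0.0, tz=1.8)
pts_lidar = np.array([[10.0, 2.0, -1.6]])            # Nx3 sensor-frame points
pts_h = np.hstack([pts_lidar, np.ones((len(pts_lidar), 1))])
pts_vehicle = (T_vehicle_lidar @ pts_h.T).T[:, :3]   # Nx3 vehicle-frame points
```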


2020 ◽  
Author(s):  
Ze Liu ◽  
Feng Ying Cai

Abstract Radar and Lidar are two environmental sensors commonly used in autonomous vehicles. Lidars are accurate in determining objects’ positions but significantly less accurate at measuring their velocities; however, Radars measure object velocities more accurately but determine positions less accurately because of their lower spatial resolution. To compensate for the low detection accuracy, incomplete target attributes, and poor environmental adaptability of single sensors such as Radar and LiDAR, we propose an effective method for high-precision detection and tracking of targets surrounding an autonomous vehicle. By employing the Unscented Kalman Filter, Radar and LiDAR information is effectively fused to achieve high-precision estimates of the position and speed of targets around the autonomous vehicle. Finally, algorithm verification tests were carried out on a real vehicle in a variety of driving environments. The experimental results show that the proposed sensor fusion method can effectively detect and track targets around the vehicle with high accuracy. Compared with a single sensor, it has obvious advantages and can improve the intelligence level of driverless cars.

