Deep Learning Inspired Object Consolidation Approaches Using LiDAR Data for Autonomous Driving: A Review

Author(s): M. S. Mekala, Woongkyu Park, Gaurav Dhiman, Gautam Srivastava, Ju H. Park, ...
Electronics, 2021, Vol. 10 (16), pp. 1960
Author(s): Dongwan Kang, Anthony Wong, Banghyon Lee, Jungha Kim

Autonomous vehicles perceive objects through various sensors. Cameras, radar, and LiDAR are generally used as vehicle sensors, each with its own characteristics: cameras provide a high-level understanding of a scene, radar offers weather-resistant distance perception, and LiDAR enables accurate distance recognition. The ability of a camera to understand a scene has improved dramatically with the recent development of deep learning, and technologies that emulate other sensors using a single sensor are being developed. Therefore, in this study, a LiDAR data-based scene understanding method was developed through deep learning. Deep learning approaches to LiDAR data are mainly divided into point, projection, and voxel methods. The purpose of this study is to apply a projection method to secure real-time performance, since the convolutional neural network methods used with conventional cameras can be easily applied to projected LiDAR data. In addition, an adaptive break point detector method used for conventional 2D LiDAR information is utilized to solve the misclassification caused by the conversion from 2D into 3D. The results of this study are evaluated through a comparison with other technologies.
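To make the projection approach mentioned above concrete, the following is a minimal sketch (not the authors' implementation) of the commonly used spherical projection that maps a 3D LiDAR point cloud onto a 2D range image, which a standard convolutional network can then process. The field-of-view angles and image resolution are illustrative assumptions chosen for a typical 64-beam sensor.

```python
import numpy as np

def spherical_projection(points, H=64, W=1024, fov_up=3.0, fov_down=-25.0):
    """Project an (N, 3) LiDAR point cloud onto an H x W range image."""
    fov_up_rad = np.radians(fov_up)
    fov_down_rad = np.radians(fov_down)
    fov = abs(fov_up_rad) + abs(fov_down_rad)

    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    depth = np.linalg.norm(points, axis=1)

    yaw = np.arctan2(y, x)                           # horizontal angle
    pitch = np.arcsin(z / np.maximum(depth, 1e-8))   # vertical angle

    # Normalize angles to [0, 1] image coordinates
    u = 0.5 * (1.0 - yaw / np.pi)                    # column coordinate
    v = 1.0 - (pitch + abs(fov_down_rad)) / fov      # row coordinate

    cols = np.clip((u * W).astype(np.int32), 0, W - 1)
    rows = np.clip((v * H).astype(np.int32), 0, H - 1)

    # Keep the closest return when several points land in the same pixel
    range_image = np.zeros((H, W), dtype=np.float32)
    order = np.argsort(depth)[::-1]
    range_image[rows[order], cols[order]] = depth[order]
    return range_image
```

The resulting 2D range image preserves the sensor's scan structure, which is what allows image-style convolutional architectures to be reused; additional channels (e.g., intensity or per-axis coordinates) are often stacked in the same way.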


Author(s): Yao Deng, Tiehua Zhang, Guannan Lou, Xi Zheng, Jiong Jin, ...

Author(s): Khan Muhammad, Amin Ullah, Jaime Lloret, Javier Del Ser, Victor Hugo C. de Albuquerque

2020, Vol. 13 (1), pp. 89
Author(s): Manuel Carranza-García, Jesús Torres-Mateo, Pedro Lara-Benítez, Jorge García-Gutiérrez

Object detection using remote sensing data is a key task of the perception systems of self-driving vehicles. While many generic deep learning architectures have been proposed for this problem, there is little guidance on their suitability for a particular scenario such as autonomous driving. In this work, we assess the performance of existing 2D detection systems on a multi-class problem (vehicles, pedestrians, and cyclists) with images obtained from the on-board camera sensors of a car. We evaluate several one-stage (RetinaNet, FCOS, and YOLOv3) and two-stage (Faster R-CNN) deep learning meta-architectures under different image resolutions and feature extractors (ResNet, ResNeXt, Res2Net, DarkNet, and MobileNet). These models are trained using transfer learning and compared in terms of both precision and efficiency, with special attention to the real-time requirements of this context. For the experimental study, we use the Waymo Open Dataset, which is the largest existing benchmark. Despite the rising popularity of one-stage detectors, our findings show that two-stage detectors still provide the most robust performance. Faster R-CNN models outperform one-stage detectors in accuracy and are also more reliable in the detection of minority classes. Faster R-CNN with a Res2Net-101 backbone achieves the best speed/accuracy trade-off but needs lower-resolution images to reach real-time speed. Furthermore, the anchor-free FCOS detector is a slightly faster alternative to RetinaNet, with similar precision and lower memory usage.
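As an illustration of the transfer-learning setup described above, the sketch below shows one plausible way (not the authors' exact pipeline) to build a two-stage Faster R-CNN detector from a COCO-pretrained backbone in torchvision (assuming version >= 0.13 for the `weights` argument) and replace its classification head for the three foreground classes plus background. A ResNet-50 FPN backbone is used here only because it is available off the shelf; the study itself also evaluates ResNeXt, Res2Net, DarkNet, and MobileNet extractors. Image size, class count, and variable names are illustrative assumptions.

```python
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

def build_detector(num_classes=4):
    """Faster R-CNN with a COCO-pretrained ResNet-50 FPN backbone,
    re-headed for vehicles, pedestrians, and cyclists (+ background)."""
    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
    in_features = model.roi_heads.box_predictor.cls_score.in_features
    model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)
    return model

model = build_detector()
model.eval()
with torch.no_grad():
    # Dummy 3-channel image; real inputs would come from the on-board camera
    image = torch.rand(3, 640, 960)
    predictions = model([image])  # list of dicts with boxes, labels, scores
print(predictions[0]["boxes"].shape)
```

During fine-tuning, only the new box predictor (and optionally the later backbone stages) needs to be trained on the autonomous driving data, which is the sense in which transfer learning is applied in the comparison.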


2020, pp. 106617
Author(s): Guofa Li, Yifan Yang, Xingda Qu, Dongpu Cao, Keqiang Li

2022
Author(s): Mesfer Al Duhayyim, Fahd N. Al-Wesabi, Anwer Mustafa Hilal, Manar Ahmed Hamza, Shalini Goel, ...

2021, pp. 228-245
Author(s): Zheyi Chen, Pu Tian, Weixian Liao, Wei Yu

Author(s): Ying Li, Lingfei Ma, Zilong Zhong, Fei Liu, Michael A. Chapman, ...
