Automatic On-Road Object Detection in LiDAR-Point Cloud Data Using Modified VoxelNet Architecture

Author(s):  
G. N. Nikhil ◽  
Md. Meraz ◽  
Mohd. Javed

2021 ◽
Author(s):  
Nerea Aranjuelo ◽  
Guus Engels ◽  
David Montero ◽  
Marcos Nieto ◽  
Ignacio Arganda-Carreras ◽  
...  

2015 ◽  
Vol 4 (3) ◽  
pp. 34-58
Author(s):  
Amir Saeed Homainejad

This paper discusses a new approach to object extraction from aerial images in association with point cloud data. The extracted objects are captured in a 3D space for reconstructing a 3D model. The process includes three steps. In the first step, the targeted objects, including buildings, trees, roads, and background or terrain, are extracted from the point cloud data and captured in a 3D space. In the second step, the extracted objects are registered to the aerial image to assist object detection. Finally, the objects extracted from the aerial image are registered on the original 3D model, converted to point cloud data, and captured in a 3D space to reconstruct a new 3D model. The final 3D model is flexible and editable: objects can be edited, audited, and manipulated without affecting other objects or ruining the 3D model, and more data can be integrated into the 3D model to improve its quality. The aims of this project are to reconstruct the final 3D model such that each object can be interactively updated or modified without affecting the whole model, and to provide a database for other users such as 3D GIS, city management and planning, Disaster Management Systems (DMS), and Smart City applications.
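The first step of this pipeline, separating terrain from above-ground objects in the point cloud, can be sketched concretely. The grid-minimum heuristic and thresholds below are illustrative assumptions, not the author's method; production pipelines typically use more robust ground filters such as progressive morphological filtering.

```python
# Hypothetical sketch: split an aerial point cloud into terrain and
# above-ground objects using a per-cell minimum-elevation heuristic.
import numpy as np

def ground_mask(points: np.ndarray, cell: float = 5.0, h_thresh: float = 2.0):
    """points: (N, 3) array of x, y, z. Returns a boolean mask of ground points."""
    # Assign each point to a horizontal grid cell.
    ij = np.floor(points[:, :2] / cell).astype(np.int64)
    cells, inv = np.unique(ij, axis=0, return_inverse=True)
    # Local ground elevation = minimum z within each cell.
    ground_z = np.full(len(cells), np.inf)
    np.minimum.at(ground_z, inv, points[:, 2])
    # Points near the local minimum are treated as terrain; the rest are
    # candidate objects (buildings, trees, ...) for the later steps.
    return (points[:, 2] - ground_z[inv]) < h_thresh
```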


Sensors ◽  
2021 ◽  
Vol 21 (12) ◽  
pp. 3964
Author(s):  
Muhammad Imad ◽  
Oualid Doukhi ◽  
Deok-Jin Lee

Three-dimensional object detection utilizing LiDAR point cloud data is an indispensable part of autonomous driving perception systems. Point cloud-based 3D object detection achieves higher accuracy than camera-based detection, particularly at night. However, most LiDAR-based 3D object detection methods work in a supervised manner, which means their state-of-the-art performance relies heavily on large-scale, well-labeled datasets, while such annotated datasets can be expensive to obtain and are only available for limited scenarios. Transfer learning is a promising approach to reduce the requirement for large-scale training datasets, but existing transfer learning object detectors target primarily 2D object detection rather than 3D. In this work, we utilize 3D point cloud data more effectively by representing the scene as a bird's-eye-view (BEV) map and propose transfer-learning-based point cloud semantic segmentation for 3D object detection. The proposed model minimizes the need for large-scale training datasets and consequently reduces the training time. First, a preprocessing stage filters the raw point cloud data into a BEV map within a specific field of view. Second, the transfer learning stage uses knowledge from a previously learned classification task (with more data for training) and generalizes it to the semantic segmentation-based 2D object detection task. Finally, the 2D detection results from the BEV image are back-projected into 3D in the postprocessing stage. We verify the results on two datasets, the KITTI 3D object detection dataset and the Ouster LiDAR-64 dataset, demonstrating that the proposed method is highly competitive in terms of mean average precision (mAP up to 70%) while still running at more than 30 frames per second (FPS).
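The preprocessing step described above can be illustrated with a minimal sketch, assuming KITTI-style coordinates (x forward, y left); the crop ranges, resolution, and two-channel encoding below are illustrative choices, not necessarily the authors' exact settings.

```python
# Sketch: crop a raw LiDAR cloud to a field of view and rasterize it into a
# bird's-eye-view (BEV) map with max-height and max-intensity channels.
import numpy as np

def pointcloud_to_bev(points, x_range=(0.0, 70.0), y_range=(-40.0, 40.0), res=0.1):
    """points: (N, 4) array of x, y, z, intensity -> (H, W, 2) BEV map."""
    x, y, z, i = points.T
    keep = (x >= x_range[0]) & (x < x_range[1]) & \
           (y >= y_range[0]) & (y < y_range[1])
    x, y, z, i = x[keep], y[keep], z[keep], i[keep]
    # Discretize metric coordinates into pixel indices.
    col = ((x - x_range[0]) / res).astype(np.int64)
    row = ((y - y_range[0]) / res).astype(np.int64)
    H = int(round((y_range[1] - y_range[0]) / res))
    W = int(round((x_range[1] - x_range[0]) / res))
    bev = np.zeros((H, W, 2), dtype=np.float32)  # empty cells stay zero
    np.maximum.at(bev[:, :, 0], (row, col), z)   # channel 0: max height
    np.maximum.at(bev[:, :, 1], (row, col), i)   # channel 1: max intensity
    return bev
```

The resulting BEV image can then be fed to a 2D segmentation network whose encoder is initialized from the pretrained classification task.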


Sensors ◽  
2020 ◽  
Vol 20 (24) ◽  
pp. 7221
Author(s):  
Baifan Chen ◽  
Hong Chen ◽  
Dian Yuan ◽  
Lingli Yu

The object detection algorithm based on vehicle-mounted LiDAR is a key component of the perception system on autonomous vehicles, providing high-precision and highly robust obstacle information for safe driving. However, most algorithms operate on a large amount of point cloud data, which makes real-time detection difficult. To solve this problem, this paper proposes a fast 3D object detection method based on three main steps. First, the ground segmentation by discriminant image (GSDI) method converts point cloud data into discriminant images for ground point segmentation, which avoids direct computation on the point cloud and improves the efficiency of ground segmentation. Second, an image detector generates regions of interest for three-dimensional objects, which effectively narrows the search range. Finally, the dynamic distance threshold clustering (DDTC) method is designed for the varying density of point cloud data, which improves the detection of long-distance objects and avoids the over-segmentation produced by traditional algorithms. Experiments show that the algorithm meets the real-time requirements of autonomous driving while maintaining high accuracy.
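The abstract does not give DDTC's exact formulation, but the idea of a range-dependent clustering threshold can be sketched as region growing whose radius widens with distance from the sensor; the linear scaling and parameters below are assumptions, not the paper's method.

```python
# Sketch: Euclidean region-growing clustering with a distance-dependent
# threshold, so sparse far-range returns still merge into one object.
import numpy as np
from scipy.spatial import cKDTree

def dynamic_threshold_clustering(points, base=0.3, gain=0.01):
    """points: (N, 3) non-ground points. Returns an (N,) array of cluster labels."""
    tree = cKDTree(points)
    labels = np.full(len(points), -1, dtype=np.int64)
    cluster = 0
    for seed in range(len(points)):
        if labels[seed] != -1:
            continue
        labels[seed] = cluster
        stack = [seed]
        while stack:
            idx = stack.pop()
            # Threshold grows linearly with range to the sensor origin,
            # compensating for the lower point density of distant objects.
            thresh = base + gain * np.linalg.norm(points[idx])
            for nb in tree.query_ball_point(points[idx], thresh):
                if labels[nb] == -1:
                    labels[nb] = cluster
                    stack.append(nb)
        cluster += 1
    return labels
```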


2021 ◽  
Vol 1878 (1) ◽  
pp. 012058
Author(s):  
H Mansor ◽  
S A A Shukor ◽  
R Wong

Author(s):  
Jiayong Yu ◽  
Longchen Ma ◽  
Maoyi Tian ◽
Xiushan Lu

The unmanned aerial vehicle (UAV)-mounted mobile LiDAR system (ULS) is widely used in geomatics owing to its efficient data acquisition and convenient operation. However, due to the limited carrying capacity of a UAV, the sensors integrated into a ULS must be small and lightweight, which decreases the density of the collected scanning points and complicates registration between image data and point cloud data. To address this issue, the authors propose a method for registering and fusing ULS sequence images and laser point clouds that converts the problem of registering point cloud data and image data into one of matching feature points between two images. First, a point cloud is selected to produce an intensity image. Subsequently, the corresponding feature points of the intensity image and the optical image are matched, and the exterior orientation parameters are solved using a collinearity equation based on image position and orientation. Finally, the sequence images are fused with the laser point cloud, based on the Global Navigation Satellite System (GNSS) time index of the optical images, to generate a true-color point cloud. The experimental results show the high registration accuracy and fusion speed of the proposed method, demonstrating its effectiveness.
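The feature-matching step at the heart of this method can be sketched with standard 2D tooling, assuming the point cloud has already been rendered into an 8-bit intensity image; ORB with brute-force Hamming matching (via OpenCV) stands in for whatever descriptor the authors actually use, and the collinearity solve is omitted.

```python
# Sketch: match a LiDAR intensity image against an optical image; the
# resulting pixel correspondences feed the exterior-orientation solve.
import cv2

def match_intensity_to_optical(intensity_img, optical_img, max_matches=100):
    """Inputs are 8-bit grayscale images; returns lists of matched pixel pairs."""
    orb = cv2.ORB_create(nfeatures=2000)
    kp1, des1 = orb.detectAndCompute(intensity_img, None)
    kp2, des2 = orb.detectAndCompute(optical_img, None)
    # Cross-checked brute-force matching on binary descriptors.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
    matches = matches[:max_matches]
    src = [kp1[m.queryIdx].pt for m in matches]
    dst = [kp2[m.trainIdx].pt for m in matches]
    return src, dst
```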

