Creating a 3D Cuboid Map Using Multi-Layer 3D LIDAR with a Swing Mechanism

2018 ◽  
Vol 30 (4) ◽  
pp. 523-531 ◽  
Author(s):  
Yoshihiro Takita ◽  

This paper proposes a method for creating 3D occupancy grid maps using a multi-layer 3D LIDAR and a swing mechanism, termed Swing-LIDAR. With Swing-LIDAR, the system acquires ten times more data at each stopping position than a method without the swing mechanism. High-definition, accurate terrain information is obtained by a coordinate transformation of the acquired data, compensated by the measured orientation of the system. In this study, we develop a method to create 3D grid maps for autonomous robots using Swing-LIDAR. To validate the method, AR Skipper is run on maps created from point cloud data obtained with and without the swing mechanism, combining 11 local maps in each case. The experimental results exhibit the differences among the maps.
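
A minimal sketch of the accumulation idea, assuming a single swing axis and a voxelized occupancy set; the rotation axis, cell size, and data layout are illustrative assumptions, not details from the paper:

```python
# Sketch (not the paper's code): accumulate swing-compensated LiDAR returns
# into a sparse 3D occupancy grid. The swing axis (assumed x) and cell size
# stand in for the measured system orientation described in the abstract.
import numpy as np

def swing_to_world(points, swing_angle_rad):
    """Rotate sensor-frame points (N,3) about the assumed swing axis (x)."""
    c, s = np.cos(swing_angle_rad), np.sin(swing_angle_rad)
    R = np.array([[1, 0, 0],
                  [0, c, -s],
                  [0, s,  c]])
    return points @ R.T

def accumulate_grid(scans, cell_size=0.1):
    """Insert each compensated scan into a sparse occupancy grid (voxel set)."""
    occupied = set()
    for points, angle in scans:          # one (N,3) array per swing position
        world = swing_to_world(points, angle)
        idx = np.floor(world / cell_size).astype(int)
        occupied.update(map(tuple, idx))
    return occupied

# Example: two scans taken at different swing angles densify the same grid.
scan_a = np.random.rand(1000, 3) * 5.0
scan_b = np.random.rand(1000, 3) * 5.0
grid = accumulate_grid([(scan_a, 0.0), (scan_b, np.deg2rad(15))])
print(len(grid), "occupied cells")
```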

2020 ◽  
Vol 10 (13) ◽  
pp. 4486 ◽  
Author(s):  
Yongbeom Lee ◽  
Seongkeun Park

In this paper, we propose a deep learning-based perception method for autonomous driving systems using Light Detection and Ranging (LiDAR) point cloud data, called the simultaneous segmentation and detection network (SSADNet). SSADNet can be used to recognize both drivable areas and obstacles, which is necessary for autonomous driving. Unlike previous methods, where separate networks were needed for segmentation and detection, SSADNet performs segmentation and detection simultaneously with a single neural network. The proposed method uses point cloud data obtained from a 3D LiDAR as network input to generate a top-view image consisting of three channels: distance, height, and reflection intensity. The structure of the proposed network includes a branch for segmentation, a branch for detection, and a bridge connecting the two parts. The KITTI dataset, which is often used for experiments on autonomous driving, was used for training. The experimental results show that segmentation and detection can be performed simultaneously for drivable areas and vehicles at an inference speed fast enough for autonomous driving systems.
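
The top-view projection that forms the network input can be illustrated with a short sketch; the grid extents and resolution below are assumptions, and in practice a max-height rule (rather than last-writer-wins) would typically resolve pixel collisions:

```python
# Sketch, not SSADNet itself: project a LiDAR point cloud into a top-view
# image with three channels (distance, height, reflection intensity), as the
# abstract describes for the network input. Ranges and resolution are assumed.
import numpy as np

def top_view_image(points, intensity, x_range=(0, 40), y_range=(-20, 20), res=0.1):
    h = int((x_range[1] - x_range[0]) / res)
    w = int((y_range[1] - y_range[0]) / res)
    img = np.zeros((h, w, 3), dtype=np.float32)
    rows = ((points[:, 0] - x_range[0]) / res).astype(int)
    cols = ((points[:, 1] - y_range[0]) / res).astype(int)
    valid = (rows >= 0) & (rows < h) & (cols >= 0) & (cols < w)
    r, c, p, i = rows[valid], cols[valid], points[valid], intensity[valid]
    img[r, c, 0] = np.linalg.norm(p[:, :2], axis=1)  # distance channel
    img[r, c, 1] = p[:, 2]                           # height channel
    img[r, c, 2] = i                                 # reflection intensity
    return img

pts = np.random.rand(5000, 3) * [40, 40, 3] - [0, 20, 1]
refl = np.random.rand(5000)
print(top_view_image(pts, refl).shape)  # (400, 400, 3)
```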


2021 ◽  
Vol 13 (21) ◽  
pp. 4312
Author(s):  
Genping Zhao ◽  
Weiguang Zhang ◽  
Yeping Peng ◽  
Heng Wu ◽  
Zhuowei Wang ◽  
...  

Point cloud classification plays a significant role in Light Detection and Ranging (LiDAR) applications. However, most available multi-scale feature learning networks for large-scale 3D LiDAR point cloud classification tasks are time-consuming. In this paper, an efficient deep neural architecture, the Point Expanded Multi-scale Convolutional Network (PEMCNet), is developed to accurately classify 3D LiDAR point clouds. Unlike traditional networks for point cloud processing, PEMCNet includes successive Point Expanded Grouping (PEG) units and Absolute and Relative Spatial Embedding (ARSE) units for representative point feature learning. The PEG unit progressively increases the receptive field of each observed point and aggregates point cloud features at different scales without increasing computation. The ARSE unit following the PEG unit furthermore provides a representative encoding of the relationships between points, which effectively preserves their geometric details. We evaluate our method on public datasets (the Urban Semantic 3D (US3D) dataset and the Semantic3D benchmark dataset) and on newly collected Unmanned Aerial Vehicle (UAV)-based LiDAR point cloud data of the campus of Guangdong University of Technology. In comparison with four state-of-the-art methods, our method ranked first in both efficiency and accuracy: on the public datasets it achieved a 2% increase in classification accuracy together with an efficiency improvement of over 26% relative to the second most efficient method. Its practical value was also confirmed on the newly collected point cloud data, with over 91% classification accuracy and 154 ms of processing time.
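
The two named units can be illustrated under our own assumptions about their details (the paper's exact formulation may differ): an expanded grouping that widens the receptive field without processing more neighbors, and a spatial embedding that concatenates absolute coordinates with neighbor-relative offsets:

```python
# Illustrative sketch of the PEG/ARSE ideas named in the abstract, under
# assumed formulations; this is not the authors' implementation.
import numpy as np

def expanded_group(points, query_idx, k=8, dilation=2):
    """Pick k neighbors from the k*dilation nearest, taking every dilation-th.
    The same k points are aggregated, but they span a wider neighborhood."""
    d = np.linalg.norm(points - points[query_idx], axis=1)
    nearest = np.argsort(d)[: k * dilation]   # includes the query point itself
    return nearest[::dilation]

def spatial_embedding(points, query_idx, neighbor_idx):
    """Absolute + relative spatial encoding for each grouped neighbor."""
    rel = points[neighbor_idx] - points[query_idx]  # offsets keep geometry
    return np.hstack([points[neighbor_idx], rel])   # (k, 6) embedding

pts = np.random.rand(1024, 3)
grp = expanded_group(pts, query_idx=0, k=8, dilation=2)
print(spatial_embedding(pts, 0, grp).shape)  # (8, 6)
```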


Sensors ◽  
2021 ◽  
Vol 21 (7) ◽  
pp. 2477
Author(s):  
Pan Zhang ◽  
Mingming Zhang ◽  
Jingnan Liu

Continuous maintenance and real-time updating of high-definition (HD) maps is a major challenge. With the development of autonomous driving, more and more vehicles are equipped with a variety of advanced sensors and a powerful computing platform. Based on mid-to-high-end sensors, including an industrial camera, a high-end Global Navigation Satellite System (GNSS)/Inertial Measurement Unit (IMU), and an onboard computing platform, a real-time HD map change detection method for crowdsourced updates is proposed in this paper. First, a mature commercial integrated navigation product is used directly to achieve a self-positioning accuracy of 20 cm on average. Second, an improved network based on BiSeNet is used for real-time semantic segmentation, achieving 83.9% IOU (Intersection over Union) on Nvidia Pegasus at 31 FPS. Third, visual Simultaneous Localization and Mapping (SLAM) associated with pixel type information is performed to obtain semantic point cloud data of features such as lane dividers, road markings, and other static objects. Finally, the semantic point cloud data is vectorized after denoising and clustering, and the results are matched against a pre-constructed HD map to confirm map elements that have not changed and to generate new elements where they appear. An experiment conducted in Beijing shows that the proposed method is effective for crowdsourced updating of HD maps.
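
The final map-diffing step can be sketched as a simple nearest-neighbor association; the 1 m threshold and 2D element positions are illustrative assumptions, not values from the paper:

```python
# Toy sketch of the matching step: compare vectorized crowdsourced detections
# against a pre-built HD map to confirm unchanged elements and flag new ones.
import numpy as np

def diff_against_map(detected, hd_map, threshold=1.0):
    """detected: (N,2), hd_map: (M,2) element positions (e.g. road markings)."""
    confirmed, new = [], []
    for d in detected:
        dists = np.linalg.norm(hd_map - d, axis=1)
        (confirmed if dists.min() <= threshold else new).append(d)
    return np.array(confirmed), np.array(new)

hd_map = np.array([[0.0, 0.0], [10.0, 0.0], [20.0, 0.0]])
detected = np.array([[0.2, 0.1], [10.1, -0.2], [15.0, 3.0]])  # last one is new
confirmed, new = diff_against_map(detected, hd_map)
print(len(confirmed), "confirmed,", len(new), "new")
```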


Author(s):  
C. Wang ◽  
F. Hu ◽  
D. Sha ◽  
X. Han

Light Detection and Ranging (LiDAR) is one of the most promising technologies in surveying and mapping, city management, forestry, object recognition, computer vision engineering, and other fields. However, it is challenging to efficiently store, query, and analyze high-resolution 3D LiDAR data owing to its volume and complexity. To improve the productivity of LiDAR data processing, this study proposes a Hadoop-based framework that manages and processes LiDAR data in a distributed and parallel manner, taking advantage of Hadoop's storage and computing capabilities. At the same time, the Point Cloud Library (PCL), an open-source project for 2D/3D image and point cloud processing, is integrated with HDFS and MapReduce to run the LiDAR data analysis algorithms provided by PCL in a parallel fashion. The experimental results show that the proposed framework can efficiently manage and process big LiDAR data.
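
The MapReduce pattern the framework builds on can be sketched as a hypothetical Hadoop Streaming job in Python; a NumPy voxel downsample stands in here for the native PCL algorithm the framework would actually invoke, and the tile size is an assumption:

```python
# Hypothetical Hadoop Streaming mapper/reducer pair: partition points across
# the cluster by spatial tile, then process each tile in parallel.
import sys
import numpy as np

TILE = 50.0  # metres per tile, assumed

def mapper():
    """Emit 'tile_x,tile_y<TAB>x y z' so one reducer sees one tile's points."""
    for line in sys.stdin:
        x, y, z = map(float, line.split())
        print(f"{int(x // TILE)},{int(y // TILE)}\t{x} {y} {z}")

def reducer(voxel=0.2):
    """Voxel-downsample each tile (stand-in for a parallel PCL algorithm)."""
    tiles = {}
    for line in sys.stdin:
        key, coords = line.rstrip("\n").split("\t")
        tiles.setdefault(key, []).append([float(v) for v in coords.split()])
    for key, pts in tiles.items():
        pts = np.asarray(pts)
        kept = {tuple(np.floor(p / voxel).astype(int)): p for p in pts}
        print(key, len(kept), "points after downsampling")

if __name__ == "__main__":
    (mapper if sys.argv[1] == "map" else reducer)()
```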


2013 ◽  
Author(s):  
Prudhvi Gurram ◽  
Shuowen Hu ◽  
Alex Chan

Author(s):  
J. Sanchez ◽  
F. Denis ◽  
F. Dupont ◽  
L. Trassoudaine ◽  
P. Checchin

Abstract. This paper deals with 3D modeling of building interiors from point clouds captured by a 3D LiDAR scanner. Indeed, building reconstruction processes currently remain mostly manual. While LiDAR data have specific properties that make reconstruction challenging (anisotropy, noise, clutter, etc.), state-of-the-art automatic methods rely on numerous construction hypotheses that yield 3D models relatively far from the initial data. We therefore propose a new modeling method that stays closer to the point cloud data, reconstructing only the scanned areas of each scene and excluding occluded regions. Following this objective, our approach reconstructs LiDAR scans individually using connected polygons. The modeling relies on jointly processing an image created from the 2D LiDAR angular sampling and the 3D point cloud associated with each scan. Results are evaluated on synthetic and real data to demonstrate the efficiency as well as the technical strength of the proposed method.
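
The image derived from the LiDAR's angular sampling can be sketched as a range-image projection; the angular resolutions and row count below are placeholders, not values from the paper:

```python
# Sketch of the joint 2D/3D representation: map each scan point's azimuth and
# elevation to pixel coordinates of an image following the angular sampling.
import numpy as np

def range_image(points, az_res_deg=0.5, el_res_deg=2.0,
                el_min_deg=-15.0, n_rows=16):
    rng = np.linalg.norm(points, axis=1)
    az = np.degrees(np.arctan2(points[:, 1], points[:, 0])) % 360.0
    el = np.degrees(np.arcsin(points[:, 2] / rng))
    cols = (az / az_res_deg).astype(int)
    rows = ((el - el_min_deg) / el_res_deg).astype(int)
    img = np.full((n_rows, int(360 / az_res_deg)), np.nan)
    valid = (rows >= 0) & (rows < n_rows)   # drop points outside the fan
    img[rows[valid], cols[valid]] = rng[valid]
    return img

pts = np.random.randn(2000, 3) * [10, 10, 1] + [0, 0, 1]
print(range_image(pts).shape)  # (16, 720)
```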


Author(s):  
Jiayong Yu ◽  
Longchen Ma ◽  
Maoyi Tian ◽ 
Xiushan Lu

The unmanned aerial vehicle (UAV)-mounted mobile LiDAR system (ULS) is widely used in geomatics owing to its efficient data acquisition and convenient operation. However, owing to the limited carrying capacity of a UAV, the sensors integrated in a ULS must be small and lightweight, which decreases the density of the collected scanning points and, in turn, affects registration between image data and point cloud data. To address this issue, the authors propose a method for registering and fusing ULS sequence images and laser point clouds that converts the problem of registering point cloud data and image data into one of matching feature points between two images. First, a point cloud is selected to produce an intensity image. Subsequently, the corresponding feature points of the intensity image and the optical image are matched, and exterior orientation parameters are solved using a collinearity equation based on image position and orientation. Finally, the sequence images are fused with the laser point cloud, based on the Global Navigation Satellite System (GNSS) time index of the optical image, to generate a true-color point cloud. The experimental results show the high registration accuracy and fast fusion speed of the proposed method, demonstrating its effectiveness.
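
The feature-matching stage can be sketched with OpenCV, assuming the intensity image has already been rendered from the point cloud; ORB is used here as a stand-in for whatever detector the authors employed, and the matched pairs would then feed the collinearity-equation solution:

```python
# Sketch of matching a LiDAR intensity image to an optical image with OpenCV.
import cv2
import numpy as np

def match_intensity_to_optical(intensity_img, optical_img, max_matches=50):
    """Return matched (intensity_xy, optical_xy) pixel pairs."""
    orb = cv2.ORB_create(nfeatures=2000)
    kp1, des1 = orb.detectAndCompute(intensity_img, None)
    kp2, des2 = orb.detectAndCompute(optical_img, None)
    if des1 is None or des2 is None:
        return []
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
    return [(kp1[m.queryIdx].pt, kp2[m.trainIdx].pt)
            for m in matches[:max_matches]]

# Usage with synthetic 8-bit grayscale images (real inputs: the rendered
# intensity image and the camera frame nearest in GNSS time).
a = np.random.randint(0, 255, (480, 640), dtype=np.uint8)
b = np.random.randint(0, 255, (480, 640), dtype=np.uint8)
print(len(match_intensity_to_optical(a, b)), "tentative correspondences")
```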

