Exploiting location information to detect light pole in mobile LiDAR point clouds

Author(s): Huan Luo, Cheng Wang, Hanyun Wang, Ziyi Chen, Dawei Zai, ...
2021, Vol 6 (3), pp. 65-73
Author(s): Ba-Viet Ngo, Thanh-Hai Nguyen, Duc-Dung Vo

This paper aims to reconstruct a 3D map based on environmental 3D point-cloud information of landmarks. A Modified Iterative Closest Point (MICP) algorithm is proposed for merging point clouds through a transformation matrix whose values are updated using the robot's position. In particular, the 3D map is reconstructed based on the location information of landmarks in an indoor environment. In addition, the transformation matrix obtained with the MICP algorithm is updated again whenever the error among the point clouds exceeds a preset threshold, so that the environmental 3D map is reconstructed more accurately. The experimental results demonstrated the effectiveness of the proposed method in improving the quality of the reconstructed 3D point-cloud map.
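The abstract does not give the MICP details, but the described loop (a transform seeded by the robot pose and re-estimated whenever the point-cloud error exceeds a threshold) can be sketched roughly as below. This is a minimal illustrative sketch in Python assuming NumPy/SciPy and point-to-point correspondences; the function names (best_fit_transform, merge_clouds) and the error_threshold value are hypothetical, and this is not the authors' implementation.

    # Illustrative sketch only (not the paper's MICP): merge a new scan into the
    # map, seeding the transform with the robot pose and re-estimating it while
    # the mean correspondence error stays above a threshold.
    import numpy as np
    from scipy.spatial import cKDTree

    def best_fit_transform(src, dst):
        """Least-squares rigid transform (R, t) mapping src onto dst (Kabsch/SVD)."""
        c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
        H = (src - c_src).T @ (dst - c_dst)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:        # guard against a reflection
            Vt[-1, :] *= -1
            R = Vt.T @ U.T
        return R, c_dst - R @ c_src

    def merge_clouds(new_cloud, map_cloud, pose_R, pose_t,
                     error_threshold=0.05, max_iter=30):
        """Align new_cloud (Nx3) to map_cloud (Mx3), seeded by the robot pose."""
        R, t = pose_R.copy(), pose_t.copy()
        tree = cKDTree(map_cloud)
        for _ in range(max_iter):
            moved = new_cloud @ R.T + t
            dist, idx = tree.query(moved)        # nearest map point per scan point
            if dist.mean() <= error_threshold:   # error small enough: keep transform
                break
            # error above threshold: re-estimate the transform from correspondences
            R, t = best_fit_transform(new_cloud, map_cloud[idx])
        merged = np.vstack([map_cloud, new_cloud @ R.T + t])
        return merged, R, t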


2020, Vol 20 (3), pp. 13-20
Author(s): Jinsoo Kim, Hyukjin Kwon, Dongkyoo Shin, Sunghoon Hong

Author(s): Jiayong Yu, Longchen Ma, Maoyi Tian, Xiushan Lu

The unmanned aerial vehicle (UAV)-mounted mobile LiDAR system (ULS) is widely used in geomatics owing to its efficient data acquisition and convenient operation. However, because of the limited carrying capacity of a UAV, the sensors integrated in a ULS must be small and lightweight, which reduces the density of the collected scanning points and thus affects the registration between image data and point-cloud data. To address this issue, the authors propose a method for registering and fusing ULS sequence images and laser point clouds that converts the registration of point-cloud data and image data into the matching of feature points between two images. First, a point cloud is selected to produce an intensity image. Subsequently, the corresponding feature points of the intensity image and the optical image are matched, and the exterior orientation parameters are solved using a collinearity equation based on image position and orientation. Finally, the sequence images are fused with the laser point cloud, based on the Global Navigation Satellite System (GNSS) time index of the optical images, to generate a true-color point cloud. The experimental results show that the proposed method achieves higher registration accuracy and fusion speed, demonstrating its effectiveness.
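The core of this pipeline is the 2D feature-matching step between the intensity image rendered from the point cloud and the optical image, which reduces 3D-to-2D registration to an image-to-image correspondence problem. The sketch below is a hypothetical illustration using OpenCV with ORB features and a ratio test; the abstract does not state which detector or matcher the authors use, and the function name and image paths are placeholders.

    # Hypothetical illustration (detector/matcher not specified in the paper):
    # match keypoints between the LiDAR intensity image and the optical image,
    # so exterior orientation can later be solved from the 2D correspondences.
    import cv2

    def match_intensity_to_optical(intensity_path, optical_path, ratio=0.75):
        """Return matched pixel coordinates between the two images."""
        img1 = cv2.imread(intensity_path, cv2.IMREAD_GRAYSCALE)
        img2 = cv2.imread(optical_path, cv2.IMREAD_GRAYSCALE)

        orb = cv2.ORB_create(nfeatures=5000)
        kp1, des1 = orb.detectAndCompute(img1, None)
        kp2, des2 = orb.detectAndCompute(img2, None)

        matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
        pairs = matcher.knnMatch(des1, des2, k=2)

        # Lowe's ratio test to drop ambiguous matches
        good = []
        for pair in pairs:
            if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance:
                good.append(pair[0])

        pts_intensity = [kp1[m.queryIdx].pt for m in good]
        pts_optical = [kp2[m.trainIdx].pt for m in good]
        return pts_intensity, pts_optical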


2020, Vol 28 (10), pp. 2301-2310
Author(s): Chun-kang ZHANG, Hong-mei LI, Xia ZHANG
