Fast structural global registration of indoor colored point cloud

Author(s):  
Chen Wang ◽  
Yuhua Xu ◽  
Lin Wang ◽  
Chunming Li

Author(s):  
Zhenchao Ouyang ◽  
Xiaoyun Dong ◽  
Jiahe Cui ◽  
Jianwei Niu ◽  
Mohsen Guizani

Sensors ◽  
2018 ◽  
Vol 18 (11) ◽  
pp. 3729 ◽  
Author(s):  
Shuai Wang ◽  
Hua-Yan Sun ◽  
Hui-Chao Guo ◽  
Lin Du ◽  
Tian-Jian Liu

Global registration is an important step in the three-dimensional reconstruction of moving objects from multi-view laser point clouds, but severe noise, density variation, and low overlap ratios between views make it challenging. In this paper, a multi-view laser point cloud global registration method based on low-rank sparse decomposition is proposed. First, spatial distribution features of the point clouds are extracted by spatial rasterization to realize loop-closure detection, and a weight matrix is established according to the similarities of these features; the weight matrix evaluates the accuracy of each adjacent (pairwise) registration transformation and enhances the robustness of the low-rank sparse matrix decomposition. Then, an objective function satisfying the global optimization condition is constructed, avoiding the compression of the solution space caused by the column-orthogonality hypothesis on the matrix. The objective function is solved by the augmented Lagrange method, with the iteration termination condition designed according to the prior conditions of single-object global registration. Simulation analysis shows that the proposed method is robust over a wide range of parameters and that loop-closure detection accuracy exceeds 90%. When the pairwise registration error is below 0.1 rad, the proposed method outperforms the three compared methods and achieves a global registration accuracy better than 0.05 rad. Finally, global registration results on real point clouds further demonstrate the validity and stability of the proposed method.
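
For concreteness, the core decomposition step can be illustrated with a short sketch. The Python code below is not the authors' implementation: it performs a generic low-rank plus sparse decomposition D = A + E via the inexact augmented Lagrange method (the solver family named in the abstract), without the loop-closure weight matrix or the paper's modified objective; lambda_, tol, and max_iter are illustrative choices.

import numpy as np

def svd_shrink(X, tau):
    # Singular-value thresholding: proximal operator of the nuclear norm.
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def soft_shrink(X, tau):
    # Entrywise soft thresholding: proximal operator of the l1 norm.
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def rpca_ialm(D, lambda_=None, tol=1e-7, max_iter=500):
    # Decompose D into a low-rank part A (globally consistent relative
    # transformations) and a sparse part E (gross pairwise errors).
    m, n = D.shape
    if lambda_ is None:
        lambda_ = 1.0 / np.sqrt(max(m, n))
    Y = D / max(np.linalg.norm(D, 2), np.max(np.abs(D)) / lambda_)  # dual init
    A = np.zeros_like(D)
    E = np.zeros_like(D)
    mu = 1.25 / np.linalg.norm(D, 2)
    rho = 1.5
    d_norm = np.linalg.norm(D, 'fro')
    for _ in range(max_iter):
        A = svd_shrink(D - E + Y / mu, 1.0 / mu)
        E = soft_shrink(D - A + Y / mu, lambda_ / mu)
        R = D - A - E
        Y = Y + mu * R
        mu = rho * mu
        if np.linalg.norm(R, 'fro') / d_norm < tol:  # relative residual
            break
    return A, E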


Sensors ◽  
2020 ◽  
Vol 20 (18) ◽  
pp. 5331
Author(s):  
Ouk Choi ◽  
Min-Gyu Park ◽  
Youngbae Hwang

We present two algorithms for aligning two colored point clouds. Both are designed to minimize a probabilistic cost based on color-supported soft matching of the points in one point cloud to their K closest points in the other. The first algorithm, like prior iterative closest point (ICP) algorithms, refines the pose parameters to minimize the cost. Assuming the point clouds are obtained from RGB-depth images, our second algorithm instead treats the measured depth values as variables and minimizes the cost to obtain refined depth values. Experiments on our synthetic dataset show that our pose refinement algorithm gives better results than existing algorithms, and that our depth refinement algorithm achieves still more accurate alignments when applied to the outputs of the pose refinement step. Applied to a real-world dataset, our algorithms produce accurate and visually improved results.
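
A color-supported soft matching cost of this kind can be sketched as follows. This Python snippet is an assumed instantiation, not the authors' code: each source point is softly assigned to its K closest target points with Gaussian weights in both geometric and color distance. The bandwidths sigma_d and sigma_c and the function name soft_matching_cost are hypothetical; the returned scalar is the quantity a pose or depth refinement step would minimize.

import numpy as np
from scipy.spatial import cKDTree

def soft_matching_cost(src_xyz, src_rgb, dst_xyz, dst_rgb,
                       K=5, sigma_d=0.02, sigma_c=0.1):
    # K closest target points for every source point (geometric distances).
    tree = cKDTree(dst_xyz)
    dists, idx = tree.query(src_xyz, k=K)          # both arrays are (N, K)
    # Color distance between each source point and its K candidates.
    color_diff = np.linalg.norm(
        src_rgb[:, None, :] - dst_rgb[idx], axis=2)  # (N, K)
    # Unnormalized match probabilities: Gaussian in geometry and in color.
    w = (np.exp(-dists**2 / (2 * sigma_d**2))
         * np.exp(-color_diff**2 / (2 * sigma_c**2)))
    # Normalize per source point to obtain a soft assignment.
    w = w / np.maximum(w.sum(axis=1, keepdims=True), 1e-12)
    # Expected squared residual under the soft matches.
    return np.sum(w * dists**2)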


Author(s):  
H. A. Lauterbach ◽  
D. Borrmann ◽  
A. Nüchter

3D laser scanners typically do not capture color information, so coloring is often done by projecting photos from an additional camera onto the 3D scans. The capturing process is time consuming and therefore prone to changes in the environment. The appearance of the colored point cloud is mainly affected by changes in lighting conditions and the corresponding camera settings. For panorama images, such exposure variations are typically corrected by radiometrically aligning the input images to each other. In this paper we adapt existing methods for panorama optimization to correct the coloring of point clouds. To this end, corresponding pixels from overlapping images are selected using the geometrically closest points of the registered 3D scans and their neighboring pixels in the images. The dynamic range of images in raw format allows large exposure differences to be corrected. Two experiments demonstrate the capabilities of the approach.
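
The radiometric alignment step can be illustrated with a simple linearized formulation. The sketch below is not the authors' pipeline: it assumes a single multiplicative exposure gain per image and estimates all gains jointly by least squares in the log domain from corresponding pixel pairs; the input format of pairs and the function name estimate_log_gains are hypothetical.

import numpy as np

def estimate_log_gains(pairs, n_images):
    # pairs: iterable of (img_i, img_j, value_i, value_j) tuples sampled at
    # geometrically closest points of the registered scans. We seek gains g
    # such that g_i * v_i == g_j * v_j, i.e. log g_i - log g_j = log(v_j/v_i).
    rows, rhs = [], []
    for i, j, vi, vj in pairs:
        if vi <= 0 or vj <= 0:
            continue  # skip under- or over-exposed samples
        r = np.zeros(n_images)
        r[i], r[j] = 1.0, -1.0
        rows.append(r)
        rhs.append(np.log(vj) - np.log(vi))
    # Anchor image 0 as reference (gain = 1) to remove the global ambiguity.
    anchor = np.zeros(n_images)
    anchor[0] = 1.0
    A = np.vstack(rows + [anchor])
    b = np.array(rhs + [0.0])
    log_g, *_ = np.linalg.lstsq(A, b, rcond=None)
    return np.exp(log_g)  # per-image gains; correct image i as g[i] * image_i

Working in the log domain turns the multiplicative exposure model into a sparse linear system, which is why raw-format images with their larger dynamic range tolerate such large exposure differences.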

