A Robust Linear Feature-Based Procedure for Automated Registration of Point Clouds

Sensors ◽  
2015 ◽  
Vol 15 (1) ◽  
pp. 1435-1457 ◽  
Author(s):  
Martyna Poreba ◽  
François Goulette


2021 ◽  
Vol 13 (18) ◽  
pp. 3571
Author(s):  
Yongbo Wang ◽  
Nanshan Zheng ◽  
Zhengfu Bian ◽  
Hua Zhang

Due to the high complexity of geo-spatial entities and the limited field of view of LiDAR equipment, pairwise registration is a necessary step for integrating point clouds from neighbouring LiDAR stations. Considering that accurate extraction of point features is often difficult without man-made reflectors, and that iterative methods require initial approximate values of the unknown transformation parameters to be estimated in advance to operate correctly, a closed-form solution to linear feature-based registration of point clouds is proposed in this study. Plücker coordinates are used to represent the linear features in three-dimensional space, whereas dual quaternions are employed to represent the spatial transformation. Based on the theory of least squares, an error norm (objective function) is first constructed by assuming that each pair of corresponding linear features is equivalent after registration. Then, by applying extreme-value analysis to the objective function, detailed derivations of the closed-form solution to the proposed linear feature-based registration method are given step by step. Finally, experimental tests are conducted on a real dataset. The experimental results demonstrate the feasibility of the proposed solution: by using eigenvalue decomposition in place of a linearization of the objective function, the proposed solution does not require any initial estimates of the unknown transformation parameters, which ensures the stability of the registration method.
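As an illustration of the geometric machinery named in this abstract, here is a minimal Python sketch (assuming NumPy; function names are hypothetical) that builds the Plücker coordinates of a 3D line from two points and shows how a rigid transformation acts on them. The paper's dual-quaternion error norm and its closed-form eigenvalue solution are not reproduced here.

import numpy as np

def plucker_from_points(p, q):
    # Plücker coordinates of the line through points p and q:
    # unit direction d and moment m = p x d (independent of the point chosen on the line).
    d = (q - p) / np.linalg.norm(q - p)
    m = np.cross(p, d)
    return d, m

def transform_plucker(d, m, R, t):
    # Under the rigid transform x' = R @ x + t, a line (d, m) maps to
    # d' = R d  and  m' = R m + t x (R d).
    d_new = R @ d
    m_new = R @ m + np.cross(t, d_new)
    return d_new, m_new

# The registration in the paper minimizes, in a least-squares sense, the discrepancy
# between each transformed line and its corresponding line in the target station.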


2021 ◽  
Vol 13 (11) ◽  
pp. 2195
Author(s):  
Shiming Li ◽  
Xuming Ge ◽  
Shengfu Li ◽  
Bo Xu ◽  
Zhendong Wang

Today, mobile laser scanning (MLS) and oblique photogrammetry are two standard urban remote sensing acquisition methods, and the cross-source point-cloud data obtained with them show significant differences as well as complementarity. Accurate co-registration can make up for the limitations of a single data source, but many existing registration methods face critical challenges. Therefore, in this paper, we propose a systematic incremental registration method that can successfully register MLS and photogrammetric point clouds in the presence of large amounts of missing data, large variations in point density, and scale differences. The robustness of this method stems from its elimination of noise in the extracted linear features and its 2D incremental registration strategy. Our work makes three main contributions: (1) the development of an end-to-end automatic cross-source point-cloud registration method; (2) a way to effectively extract linear features and restore the scale; and (3) an incremental registration strategy that simplifies the complex registration process. The experimental results show that this method can successfully achieve cross-source data registration, whereas other methods have difficulty obtaining satisfactory results efficiently. Moreover, this method can be extended to further point-cloud sources.
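The scale-restoration idea can be illustrated with a generic 2D similarity estimation from matched feature coordinates; this is only a hedged sketch of the underlying computation (the paper drives scale and pose from extracted linear features within its incremental strategy), and the function name is hypothetical.

import numpy as np

def similarity_2d(src, dst):
    # Estimate scale s, rotation R and translation t such that dst ≈ s * R @ src + t,
    # given matched 2D coordinates src, dst (both N x 2 arrays).
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    P, Q = src - src_c, dst - dst_c
    s = np.sqrt((Q ** 2).sum() / (P ** 2).sum())          # scale from the spread ratio
    U, _, Vt = np.linalg.svd(Q.T @ P)                     # Procrustes rotation
    D = np.diag([1.0, np.sign(np.linalg.det(U @ Vt))])    # guard against reflections
    R = U @ D @ Vt
    t = dst_c - s * (R @ src_c)
    return s, R, t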


2021 ◽  
Vol 10 (7) ◽  
pp. 435
Author(s):  
Yongbo Wang ◽  
Nanshan Zheng ◽  
Zhengfu Bian

Since pairwise registration is a necessary step for the seamless fusion of point clouds from neighboring stations, a closed-form solution to planar feature-based registration of LiDAR (Light Detection and Ranging) point clouds is proposed in this paper. Based on the Plücker-coordinate representation of linear features in three-dimensional space, a quad-tuple-based representation of planar features is introduced, which makes it possible to directly determine the difference between any two planar features. Dual quaternions are employed to represent the spatial transformation, and operations between dual quaternions and the quad-tuple-based representation of planar features are given, with which an error norm is constructed. Based on L2-norm minimization, detailed derivations of the proposed solution are explained step by step. Two experiments were designed in which simulated data and real data were used to verify the correctness and feasibility of the proposed solution. With the simulated data, the calculated registration results were consistent with the pre-established parameters, which verifies the correctness of the presented solution. With the real data, the calculated registration results were consistent with those calculated by iterative methods. Two conclusions can be drawn from the experiments: (1) the proposed solution does not require any initial estimates of the unknown parameters, which ensures the stability and robustness of the solution; (2) using dual quaternions to represent the spatial transformation greatly reduces the number of additional constraints in the estimation process.
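A minimal sketch of the quad-tuple idea, assuming NumPy and hypothetical function names: a plane is stored as (n, d), with unit normal n and offset d such that points x on the plane satisfy n·x + d = 0; a rigid transform maps it to (R n, d - (R n)·t), so two planes can be compared component-wise. The dual-quaternion operators and the closed-form L2 derivation from the paper are not reproduced here.

import numpy as np

def plane_from_points(pts):
    # Fit a plane n·x + d = 0 to an N x 3 array of points via SVD.
    c = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - c)
    n = vt[-1]                       # normal = direction of least variance
    return n, -n @ c                 # quad tuple (n, d)

def transform_plane(n, d, R, t):
    # Under x' = R @ x + t, the plane (n, d) maps to (R n, d - (R n)·t).
    n_new = R @ n
    return n_new, d - n_new @ t

def plane_difference(n1, d1, n2, d2):
    # Direct component-wise difference between two quad tuples.
    return np.concatenate([n1 - n2, [d1 - d2]])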


Sensors ◽  
2020 ◽  
Vol 20 (14) ◽  
pp. 3848
Author(s):  
Xinyue Zhang ◽  
Gang Liu ◽  
Ling Jing ◽  
Siyao Chen

The heart girth parameter is an important indicator of the growth and development of pigs and provides critical guidance for optimizing healthy pig breeding. To overcome the heavy workload and poor adaptability of the traditional measurement methods currently used in pig breeding, this paper proposes an automated pig heart girth measurement method using two Kinect depth sensors. First, a two-view pig depth image acquisition platform is established for data collection; after preprocessing, the two-view point clouds are registered and fused by a feature-based improved 4-Point Congruent Set (4PCS) method. Second, the fused point cloud is pose-normalized, and the axillary contour is used to automatically extract the heart girth measurement point. Finally, this point is taken as the starting point to intercept, from the pig point cloud, the circumference perpendicular to the ground, and the complete heart girth point cloud is obtained by mirror symmetry. The heart girth is measured along this point cloud using the shortest path method. Using the proposed method, experiments were conducted on two-view data from 26 live pigs. The results showed that the absolute errors of the heart girth measurements were all less than 4.19 cm and the average relative error was 2.14%, indicating the high accuracy and efficiency of this method.
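As a hedged illustration of the girth measurement step, the sketch below (NumPy only; hypothetical function name and axis convention, with x along the body and the slice taken in the y-z plane) orders a thin cross-sectional slice by polar angle and sums consecutive distances around the closed loop. The paper itself completes the slice by mirror symmetry and measures along a shortest path.

import numpy as np

def girth_from_slice(points, x0, thickness=0.01):
    # Approximate a girth from an N x 3 point cloud: take a thin slice at x ≈ x0,
    # order its y-z points by polar angle around the centroid and sum the edge lengths.
    sl = points[np.abs(points[:, 0] - x0) < thickness / 2.0]
    yz = sl[:, 1:3]
    c = yz.mean(axis=0)
    ang = np.arctan2(yz[:, 1] - c[1], yz[:, 0] - c[0])
    loop = yz[np.argsort(ang)]
    loop = np.vstack([loop, loop[:1]])                    # close the loop
    return np.linalg.norm(np.diff(loop, axis=0), axis=1).sum()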


2019 ◽  
Vol 98 ◽  
pp. 175-182 ◽  
Author(s):  
Jisoo Park ◽  
Pileun Kim ◽  
Yong K. Cho ◽  
Junsuk Kang

2020 ◽  
Vol 12 (11) ◽  
pp. 1870 ◽  
Author(s):  
Qingqing Li ◽  
Paavo Nevalainen ◽  
Jorge Peña Queralta ◽  
Jukka Heikkonen ◽  
Tomi Westerlund

Autonomous harvesting and transportation is a long-term goal of the forest industry. One of the main challenges is the accurate localization of both vehicles and trees in a forest. Forests are unstructured environments in which it is difficult to find a group of significant landmarks for current fast feature-based place recognition algorithms. This paper proposes a novel approach in which local point clouds are matched to a global tree map using the Delaunay triangulation as the representation format. Instead of point-cloud-based matching methods, we utilize a topology-based method. First, tree trunk positions are registered during a prior run made by a forest harvester. Second, the resulting map is Delaunay-triangulated. Third, a local submap of the autonomous robot is registered, triangulated and matched using triangular similarity maximization to estimate the position of the robot. We test our method on a dataset collected at a forestry site in Lieksa, Finland. A total of 200 m of harvester path was recorded by an industrial harvester with a 3D laser scanner and a geolocation unit fixed to the frame. Our experiments show a 12 cm standard deviation in location accuracy, with real-time data processing for speeds not exceeding 0.5 m/s. This accuracy and speed limit are realistic during forest operations.
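A minimal sketch of the triangulation-and-matching idea, assuming SciPy/NumPy and hypothetical function names: trunk positions are Delaunay-triangulated, each triangle is summarized by its sorted side lengths, and local triangles are matched to the most similar global ones. The paper's triangular similarity maximization and the subsequent pose estimation are more involved than this.

import numpy as np
from scipy.spatial import Delaunay

def triangle_descriptors(pts):
    # Delaunay-triangulate 2D trunk positions and describe each triangle
    # by its sorted side lengths (rotation- and translation-invariant).
    tri = Delaunay(pts)
    desc = []
    for simplex in tri.simplices:
        a, b, c = pts[simplex]
        desc.append(sorted([np.linalg.norm(a - b),
                            np.linalg.norm(b - c),
                            np.linalg.norm(c - a)]))
    return tri, np.array(desc)

def match_triangles(local_desc, global_desc):
    # For each local triangle, find the global triangle with the most similar side lengths.
    d = np.linalg.norm(local_desc[:, None, :] - global_desc[None, :, :], axis=2)
    return d.argmin(axis=1), d.min(axis=1)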


Author(s):  
F.I. Apollonio ◽  
A. Ballabeni ◽  
M. Gaiani ◽  
F. Remondino

Every day, new tools and algorithms for automated image processing and 3D reconstruction become available, making it possible to process large networks of unoriented and markerless images and to deliver sparse 3D point clouds in reasonable processing time. In this paper we evaluate some feature-based methods used to automatically extract the tie points necessary for calibration and orientation procedures, in order to better understand their performance for 3D reconstruction purposes. The performed tests – based on the analysis of the SIFT algorithm and its most used variants – processed several datasets and analysed various parameters and outcomes (e.g. number of oriented cameras, average rays per 3D point, average intersection angles per 3D point, theoretical precision of the computed 3D object coordinates).
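The baseline tie-point extraction evaluated in such tests can be sketched with OpenCV's SIFT implementation; this is a hedged example with a hypothetical function name, and the SIFT variants compared in the paper are not reproduced.

import cv2

def sift_tie_points(img1_path, img2_path, ratio=0.75):
    # Detect SIFT keypoints in two images and keep the matches that pass Lowe's ratio test.
    img1 = cv2.imread(img1_path, cv2.IMREAD_GRAYSCALE)
    img2 = cv2.imread(img2_path, cv2.IMREAD_GRAYSCALE)
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)
    knn = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des1, des2, k=2)
    good = [pair[0] for pair in knn
            if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance]
    # Tie points: pixel coordinates of the retained correspondences.
    return [(kp1[m.queryIdx].pt, kp2[m.trainIdx].pt) for m in good]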


Author(s):  
D. Tosic ◽  
S. Tuttas ◽  
L. Hoegner ◽  
U. Stilla

This work proposes an approach for semantic classification of an outdoor-scene point cloud acquired with a high-precision Mobile Mapping System (MMS), with the major goal of contributing to the automatic creation of High Definition (HD) Maps. The automatic point labeling is achieved by combining a feature-based approach for semantic classification of point clouds with a deep learning approach for semantic segmentation of images. Both point cloud data and data from a multi-camera system are used for gaining spatial information about an urban scene. Two types of classification are applied for this task: 1) a feature-based approach, in which the point cloud is organized into a supervoxel structure to capture the geometric characteristics of points. Several geometric features are then extracted for an appropriate representation of the local geometry, followed by removing the effect of local tendency for each supervoxel to enhance the distinction between similar structures. Lastly, the Random Forests (RF) algorithm is applied in the classification phase, assigning labels to supervoxels and therefore to the points within them. 2) A deep learning approach is employed for semantic segmentation of MMS images of the same scene, using an implementation of the Pyramid Scene Parsing Network. The resulting segmented images, in which each pixel carries a class label, are then projected onto the point cloud, enabling label assignment for each point. Finally, experimental results are presented for a complex urban scene, and the performance of the method is evaluated on a manually labeled dataset, both for the deep learning and feature-based classification individually and for the fusion of their labels. The overall accuracy achieved with the fused output is 0.87 on the final test set, which significantly outperforms the results of the individual methods on the same point cloud. The labeled data is published on the TUM-PF Semantic-Labeling-Benchmark.
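The feature-based branch can be illustrated in a simplified per-point form with eigenvalue-based geometric features and a Random Forest (assuming NumPy, SciPy and scikit-learn; the supervoxel grouping, local-tendency removal and image-segmentation fusion from the paper are omitted, and the function name is hypothetical).

import numpy as np
from scipy.spatial import cKDTree
from sklearn.ensemble import RandomForestClassifier

def eigen_features(points, k=20):
    # Per-point linearity, planarity and scattering from the eigenvalues
    # of the covariance of each point's k nearest neighbours.
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k)
    feats = np.empty((len(points), 3))
    for i, nbrs in enumerate(idx):
        lam = np.sort(np.linalg.eigvalsh(np.cov(points[nbrs].T)))[::-1] + 1e-12
        feats[i] = [(lam[0] - lam[1]) / lam[0],   # linearity
                    (lam[1] - lam[2]) / lam[0],   # planarity
                    lam[2] / lam[0]]              # scattering
    return feats

# Training on a manually labeled subset (train_pts, train_labels, test_pts are placeholders):
# clf = RandomForestClassifier(n_estimators=100).fit(eigen_features(train_pts), train_labels)
# predicted = clf.predict(eigen_features(test_pts))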


2020 ◽  
Vol 12 (3) ◽  
pp. 401 ◽  
Author(s):  
Ravi ◽  
Habib

LiDAR-based mobile mapping systems (MMS) are rapidly gaining popularity for a multitude of applications due to their ability to provide complete and accurate 3D point clouds for virtually any scene of interest. However, an accurate calibration technique for such systems is needed in order to unleash their full potential. In this paper, we propose a fully automated profile-based strategy for the calibration of LiDAR-based MMS. The proposed technique is validated by comparing its accuracy against the expected point positioning accuracy of the point cloud derived from the sensors' specifications. The proposed strategy reduced the misalignment between different tracks from approximately 2 to 3 m before calibration to less than 2 cm after calibration for airborne as well as terrestrial mobile LiDAR mapping systems. In other words, the proposed calibration strategy can converge to correct estimates of the mounting parameters even when the initial estimates are significantly different from the true values. Furthermore, the results from the proposed strategy are verified by comparing them to those from an existing manually assisted feature-based calibration strategy. The major contribution of the proposed strategy is its ability to calibrate airborne and wheel-based mobile systems without requiring specially designed targets or features in the surrounding environment. These claims are validated using experiments conducted on three different MMS – two airborne and one terrestrial – each with one or more LiDAR units.
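A generic, heavily hedged sketch of the underlying idea, namely adjusting the mounting parameters (boresight angles and lever arm) so that overlapping tracks agree; all names are hypothetical, and a simple nearest-neighbour criterion stands in for the paper's profile-based strategy.

import numpy as np
from scipy.optimize import least_squares
from scipy.spatial import cKDTree
from scipy.spatial.transform import Rotation

def georef(scan_pts, traj_R, traj_t, boresight, lever_arm):
    # Georeference scanner-frame points: x_map = R_traj @ (R_bore @ x_scan + lever) + t_traj,
    # with per-point trajectory rotations traj_R (N x 3 x 3) and positions traj_t (N x 3).
    R_b = Rotation.from_euler('xyz', boresight).as_matrix()
    return np.einsum('nij,nj->ni', traj_R, scan_pts @ R_b.T + lever_arm) + traj_t

def residuals(params, scan_a, traj_a, scan_b, traj_b):
    # Misalignment between two tracks: distance from each track-A point
    # to its nearest neighbour in track B, as a function of the mounting parameters.
    boresight, lever = params[:3], params[3:6]
    pa = georef(scan_a, *traj_a, boresight, lever)
    pb = georef(scan_b, *traj_b, boresight, lever)
    return cKDTree(pb).query(pa)[0]

# sol = least_squares(residuals, np.zeros(6), args=(scan_a, traj_a, scan_b, traj_b))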

