Point cloud measurements-uncertainty calculation on spatial-feature based registration

Sensor Review ◽  
2019 ◽  
Vol 39 (1) ◽  
pp. 129-136
Author(s):  
Lijun Ding ◽  
Shuguang Dai ◽  
Pingan Mu

Purpose Measurement uncertainty calculation is an important and complicated problem in the inspection of digitised components. In such inspections, a coordinate measuring machine (CMM) and laser scanner are usually used to obtain the surface point clouds of the component in different postures. The point clouds are then registered to construct fully connected point clouds of the component’s surfaces. However, in most cases the measurement uncertainty is difficult to estimate after the scanned point clouds have been registered. This paper aims to propose a simplified method for calculating the uncertainty of point cloud measurements based on spatial-feature registration. Design/methodology/approach In the proposed method, algorithmic models are used to calculate the point cloud measurement uncertainty based on noncontact measurements of the planes, lines and points of the component and on spatial-feature registration. Findings The measurement uncertainty based on spatial-feature registration is related to the mutual position of the registration features and to the number of sensor commutations in the scanning process, but not to the spatial distribution of the measured feature. The results of the experiments conducted verify the efficacy of the proposed method. Originality/value The proposed method provides an efficient algorithm for calculating the measurement uncertainty of registered point clouds based on part features, and therefore has important theoretical and practical significance in the inspection of digitised components.
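A minimal Monte Carlo sketch of the general idea, not the authors' closed-form algorithm: perturb the registration features with an assumed measurement noise, re-estimate the rigid registration each time, and read the propagated uncertainty of a measured point off the spread of its transformed copies. The feature coordinates, noise level and probe point below are hypothetical.

import numpy as np

def best_fit_transform(A, B):
    # least-squares rigid transform (Kabsch) mapping point set A onto point set B
    cA, cB = A.mean(0), B.mean(0)
    U, _, Vt = np.linalg.svd((A - cA).T @ (B - cB))
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against an accidental reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cB - R @ cA

rng = np.random.default_rng(0)
features = rng.uniform(-200, 200, (6, 3))   # hypothetical registration features (mm)
probe = np.array([50.0, 80.0, 30.0])        # hypothetical measured point carried along by the registration
sigma = 0.01                                # assumed per-coordinate feature noise (mm)

samples = []
for _ in range(2000):
    R, t = best_fit_transform(features, features + rng.normal(0.0, sigma, features.shape))
    samples.append(R @ probe + t)
print("propagated standard uncertainty (mm):", np.asarray(samples).std(0))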

Author(s):  
Bernardo Lourenço ◽  
Tiago Madeira ◽  
Paulo Dias ◽  
Vitor M. Ferreira Santos ◽  
Miguel Oliveira

Purpose 2D laser rangefinders (LRFs) are commonly used sensors in the field of robotics, as they provide accurate range measurements with high angular resolution. These sensors can be coupled with mechanical units which, by granting an additional degree of freedom to the movement of the LRF, enable the 3D perception of a scene. To be successful, this reconstruction procedure requires a highly accurate estimate of the extrinsic transformation between the LRF and the motorized system. Design/methodology/approach In this work, a calibration procedure is proposed to evaluate this transformation. The method does not require a predefined marker (commonly used despite its numerous disadvantages), as it uses planar features in the acquired point clouds. Findings Qualitative inspections show that the proposed method significantly reduces the artifacts that typically appear in point clouds because of inaccurate calibration. Furthermore, quantitative results and comparisons with a high-resolution 3D scanner demonstrate that the calibrated point cloud represents the geometries present in the scene with much higher accuracy than the un-calibrated point cloud. Practical implications The last key point of this work is the comparison of two laser scanners: the lemonbot (the authors') and a commercial FARO scanner. Despite being almost ten times cheaper, the authors' scanner was able to achieve similar results in terms of geometric accuracy. Originality/value This work describes a novel calibration technique that is easy to implement and able to achieve accurate results. One of its key features is the use of planes to calibrate the extrinsic transformation.
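A rough, self-contained sketch of the plane-based idea rather than the authors' implementation: synthetic scans of a flat wall are generated for several motor angles, and the lever-arm offset between the LRF and the motor axis is recovered by minimising the flatness residual of the reassembled wall. The geometry, the noise-free data and the two-parameter offset are assumptions; the offset component along the rotation axis is not observable from planarity alone, so it is held at zero.

import numpy as np
from scipy.optimize import least_squares

def rotz(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def flatness(P):
    # distances of the points to their own best-fit plane: zero for a perfect wall
    c = P.mean(0)
    n = np.linalg.svd(P - c, full_matrices=False)[2][-1]
    return (P - c) @ n

rng = np.random.default_rng(1)
true_offset = np.array([0.03, -0.02, 0.0])    # hypothetical LRF-to-axis lever arm (m)
angles = np.linspace(0.0, np.pi, 8)
wall = np.column_stack([np.full(400, 2.0),    # a flat wall at x = 2 m in world coordinates
                        rng.uniform(-1.0, 1.0, 400),
                        rng.uniform(0.0, 1.5, 400)])
scans = [(rotz(-a) @ wall.T).T - true_offset for a in angles]   # what the LRF would record

def residuals(xy):
    # reassemble the cloud with a candidate offset; the correct offset flattens the wall
    offset = np.array([xy[0], xy[1], 0.0])
    cloud = np.vstack([(rotz(a) @ (s + offset).T).T for a, s in zip(angles, scans)])
    return flatness(cloud)

sol = least_squares(residuals, x0=np.zeros(2))
print("recovered offset (m):", np.round(sol.x, 4))   # close to [0.03, -0.02]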


2020 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Siyuan Huang ◽  
Limin Liu ◽  
Jian Dong ◽  
Xiongjun Fu ◽  
Leilei Jia

Purpose Most of the existing ground filtering algorithms are based on the Cartesian coordinate system, which is not compatible with the working principle of mobile light detection and ranging and makes it difficult to obtain good filtering accuracy. The purpose of this paper is to improve the accuracy of ground filtering by making full use of the ordering information between points in spherical coordinates. Design/methodology/approach First, the cloth simulation (CS) algorithm is modified into a sorting algorithm for scattered point clouds to obtain the adjacency relationships of the points and to generate a matrix containing this adjacency information. Then, according to the adjacency information, a projection distance comparison and a local slope analysis are performed simultaneously. These results are integrated to further process the point cloud details, and the algorithm is finally used to filter a point cloud from a scene in the KITTI data set. Findings The results show that the accuracy of KITTI point cloud sorting is 96.3% and the kappa coefficient of the ground filtering result is 0.7978. Compared with other algorithms applied to the same scene, the proposed algorithm has higher processing accuracy. Research limitations/implications The steps of the algorithm can be computed in parallel, which saves time owing to the small amount of computation. In addition, the generality of the algorithm is improved, and it can be used for different data sets of urban streets. However, owing to the lack of point clouds of field environments with labeled ground points, the filtering performance of this algorithm in field environments needs further study. Originality/value In this study, the point cloud neighboring information was obtained by a modified CS algorithm. The ground filtering algorithm distinguishes ground points from off-ground points according to the flatness, continuity and minimality of ground points in point cloud data. In addition, changing the thresholds has little effect on the results of the algorithm.
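A toy illustration of the local slope test, one of the two cues the paper combines, assuming the neighbouring relationship along a ring has already been recovered by the sorting step; the thresholds and the seeding rule are illustrative choices, not values from the paper.

import numpy as np

def ground_by_local_slope(ring, slope_deg=10.0, dz_max=0.15):
    # ring: (N, 3) points of one laser ring ordered by range/azimuth, i.e. the
    # adjacency that the modified cloth-simulation sorting is meant to provide
    ground = np.zeros(len(ring), dtype=bool)
    ground[0] = True                 # seed: assume the closest return lies on the ground
    last = ring[0]
    for i in range(1, len(ring)):
        dxy = np.linalg.norm(ring[i, :2] - last[:2]) + 1e-9
        dz = ring[i, 2] - last[2]
        # a point continues the ground if the slope to the last ground point is gentle
        if np.degrees(np.arctan(abs(dz) / dxy)) < slope_deg and abs(dz) < dz_max:
            ground[i] = True
            last = ring[i]
    return ground

# toy ring: a flat road with a 0.5 m-high obstacle between 6 m and 8 m range
r = np.linspace(2.0, 20.0, 200)
z = np.where((r > 6.0) & (r < 8.0), 0.5, 0.0)
ring = np.column_stack([r, np.zeros_like(r), z])
mask = ground_by_local_slope(ring)
print(int(mask.sum()), "of", len(ring), "points labeled ground")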


2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Linh Truong-Hong ◽  
Roderik Lindenbergh ◽  
Thu Anh Nguyen

Purpose Terrestrial laser scanning (TLS) point clouds have been widely used in deformation measurement for structures. However, the reliability and accuracy of the resulting deformation estimation strongly depend on the quality of each step of the workflow, which has not been fully addressed. This study aims to give insight into the errors of these steps, and the results of the study provide guidelines for the practical community to either develop a new workflow or refine an existing one for deformation estimation based on TLS point clouds. The main contributions of the paper are: investigating how point cloud registration error affects the resulting deformation estimation; identifying an appropriate segmentation method for extracting data points of a deformed surface; investigating a methodology to determine an un-deformed or reference surface for estimating deformation; and proposing a methodology to minimize the impact of outliers, noisy data and/or mixed pixels on deformation estimation. Design/methodology/approach In practice, the quality of the point clouds and of the surface extraction strongly impacts the resulting deformation estimation based on laser scanning point clouds, which can lead to an incorrect decision on the state of the structure if uncertainty is present. To gain more comprehensive insight into those impacts, this study addresses four issues: data errors due to registration of data from multiple scanning stations (Issue 1), methods used to extract point clouds of structure surfaces (Issue 2), selection of the reference surface Sref against which deformation is measured (Issue 3), and the presence of outliers and/or mixed pixels (Issue 4). The investigation is demonstrated by estimating the deformation of a bridge abutment, a building and an oil storage tank. Findings The study shows that both random sample consensus (RANSAC) and region-growing-based methods [cell-based/voxel-based region growing (CRG/VRG)] can extract the data points of surfaces, but RANSAC is only applicable to a primary primitive surface (e.g. a plane in this study) subjected to small deformation (case studies 2 and 3) and cannot eliminate mixed pixels. On the other hand, CRG and VRG are suitable methods for deformed, free-form surfaces. In addition, in practice a reference surface of a structure is mostly not available. The use of a plane fitted to the point cloud of the current surface would produce unrealistic and inaccurate deformation, because outlier data points and data points of damaged areas affect the accuracy of the fitted plane. This study recommends the use of a reference surface determined from a design concept/specification. A smoothing method with a spatial interval can effectively minimize the negative impact of outliers, noisy data and/or mixed pixels on deformation estimation (see the sketch following this abstract). Research limitations/implications Due to logistical difficulties, an independent measurement could not be established to assess the deformation accuracy based on the TLS point clouds in the case studies of this research.
However, common laser scanners using the time-of-flight or phase-shift principle provide point clouds with an accuracy on the order of 1–6 mm, while the point clouds of triangulation scanners have sub-millimetre accuracy. Practical implications This study gives insight into the errors of these steps, and the results provide guidelines for the practical community to either develop a new workflow or refine an existing one for deformation estimation based on TLS point clouds. Social implications The results of this study provide guidelines for the practical community to either develop a new workflow or refine an existing one for deformation estimation based on TLS point clouds. A low-cost method can be applied for deformation analysis of structures. Originality/value Although a large number of studies have used laser scanning to measure structure deformation in the last two decades, the methods applied mainly measured the change between two states (or epochs) of the structure surface and focused on quantifying deformation based on TLS point clouds. Those studies proved that a laser scanner can be an alternative instrument for acquiring spatial information for deformation monitoring. However, there are still challenges in establishing an appropriate procedure to collect high-quality point clouds and in developing methods to interpret the point clouds to obtain reliable and accurate deformation when uncertainty, including data quality and reference information, is present. Therefore, this study demonstrates the impact on deformation estimation of data quality in terms of point cloud registration error, of the methods selected for extracting point clouds of surfaces, of the identification of reference information, and of the presence of outliers, noisy data and/or mixed pixels.
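The abstract recommends a smoothing method with a spatial interval to suppress outliers, noisy data and mixed pixels but does not spell out the operator, so the sketch below uses a per-interval median of point-wise deviations as one plausible reading; the interval size, noise level and deformation profile are made up for illustration.

import numpy as np

def smoothed_deformation(x, d, interval=0.1):
    # bin the point-wise deviations d from the reference surface along a
    # coordinate x with a fixed spatial interval and keep the median of each bin,
    # so isolated outliers and mixed pixels do not dominate the profile
    bins = np.floor((x - x.min()) / interval).astype(int)
    centres, values = [], []
    for b in np.unique(bins):
        sel = bins == b
        centres.append(x[sel].mean())
        values.append(np.median(d[sel]))
    return np.array(centres), np.array(values)

rng = np.random.default_rng(2)
x = rng.uniform(0.0, 5.0, 3000)                            # position along the surface (m)
d = 0.002 * np.sin(x) + rng.normal(0.0, 0.0015, x.size)    # deformation plus scanner noise (m)
d[rng.choice(x.size, 30)] += 0.05                          # a few mixed-pixel outliers
xc, dc = smoothed_deformation(x, d, interval=0.1)
print("max smoothed deformation (mm):", round(1000.0 * dc.max(), 2))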


2021 ◽  
Vol 18 (6) ◽  
pp. 172988142110555
Author(s):  
Jie Wang ◽  
Shuxiao Li

Accurately detecting appropriate grasp configurations is the central task for a robot grasping an object. Existing grasp detection methods usually overlook the depth image or only regard it as a two-dimensional distance image, which makes it difficult to capture the three-dimensional structural characteristics of the target object. In this article, we transform the depth image to a point cloud and propose a two-stage grasp detection method based on candidate grasp detection from the RGB image and spatial feature rescoring from the point cloud. Specifically, we first adapt R3Det, a recently proposed high-performance rotation object detection method for aerial images, to the grasp detection task, obtaining the candidate grasp boxes and their appearance scores. Then, the point cloud within each candidate grasp box is normalized and evaluated to obtain a point cloud quality score, which is fused with the established point cloud quantity scoring model to obtain a spatial score. Finally, the appearance scores and their corresponding spatial scores are combined to output high-quality grasp detection results. The proposed method effectively fuses three types of grasp scoring modules and is thus called Score Fusion Grasp Net. Besides, we propose and adopt a top-k grasp metric to better reflect the success rate of the algorithm in actual grasp execution. Score Fusion Grasp Net obtains 98.5% image-wise accuracy and 98.1% object-wise accuracy on the Cornell Grasp Dataset, exceeding the performance of state-of-the-art methods. We also use a robotic arm to conduct physical grasp experiments on 15 kinds of household objects and 11 kinds of adversarial objects. The results show that the proposed method still achieves a high success rate when facing new objects.
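A schematic of the score-fusion and top-k evaluation ideas described above; the linear weighting, the weight values and the toy numbers are assumptions for illustration, not parameters from the paper.

import numpy as np

def fuse_scores(appearance, quality, quantity, w=(0.5, 0.3, 0.2)):
    # weighted combination of the three scoring terms; the linear form and the
    # weights are illustrative assumptions
    a, q, n = map(np.asarray, (appearance, quality, quantity))
    return w[0] * a + w[1] * q + w[2] * n

def top_k_success(scores, is_graspable, k=5):
    # top-k grasp metric: a success if any of the k highest-scoring candidates
    # is actually a feasible grasp
    order = np.argsort(scores)[::-1][:k]
    return bool(np.any(np.asarray(is_graspable)[order]))

appearance = [0.91, 0.88, 0.75, 0.60]   # candidate grasp boxes from the rotation detector
quality    = [0.40, 0.85, 0.90, 0.20]   # normalized point cloud quality per box
quantity   = [0.70, 0.95, 0.80, 0.10]   # point cloud quantity score per box
fused = fuse_scores(appearance, quality, quantity)
print(fused, top_k_success(fused, [False, True, True, False], k=2))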


Author(s):  
Mehran Mahboubkhah ◽  
Mohammad Aliakbari ◽  
Colin Burvill

Measurement and quality control of turbine blades are critical to the successful operation of power plants and have a key role in manufacturing and reverse engineering. Novel technologies continue to be developed to measure parts with complex geometries, such as turbine blades, using both contact and noncontact digitizing techniques. Selecting the most appropriate digitizing method for a turbine blade requires consideration of the measuring performance of the alternative methods, including criteria such as accuracy, speed and cost. This study evaluates the practical accuracy and efficiency of various contact and noncontact digitizing methods through the measurement and associated quality control of a complex part, namely a turbine blade airfoil. Four popular technologies, using distinct underlying measurement methods, were chosen to measure a Frame 5 gas turbine blade: a touch trigger probe mounted on a Zeiss coordinate measuring machine, a touch scanning probe and a spot laser probe separately mounted on a Renishaw coordinate measuring machine, and a linear laser system from ZScanner. The point cloud resulting from each method was then used to reconstruct three-dimensional computer-aided design models of the blade. The accuracy of each measuring system was evaluated against the original blade. The evaluation incorporated a comparative study of design parameters derived from the point clouds and the reconstructed surfaces associated with each measurement method. The maximum errors of the point clouds were −123, 2530 and 2173 µm for the ZScanner linear laser, Renishaw spot laser and Renishaw touch scan, respectively, indicating higher accuracy for the linear laser method than for the spot laser scanning and touch scanning methods. Furthermore, the standard deviations of 42, 170 and 269 µm achieved for the point clouds of the ZScanner linear laser, Renishaw spot laser and Renishaw touch scan, respectively, showed that manufacturer-reported specifications cannot always be relied upon.
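A minimal sketch of how the two reported figures, maximum error and standard deviation of a digitised cloud against the nominal blade, could be computed from unsigned nearest-neighbour distances; a real comparison would normally use signed deviations to the CAD surface, and the toy data below is hypothetical.

import numpy as np
from scipy.spatial import cKDTree

def deviation_stats(measured, reference):
    # unsigned distance of every measured point to its nearest neighbour in a
    # dense sampling of the nominal surface; returns (maximum error, standard deviation)
    d = cKDTree(reference).query(measured)[0]
    return d.max(), d.std()

rng = np.random.default_rng(3)
reference = rng.uniform(0.0, 100.0, (5000, 3))                     # stand-in for a sampled CAD surface (mm)
measured = reference[:2000] + rng.normal(0.0, 0.04, (2000, 3))     # simulated scan with 40 um noise
max_err, std = deviation_stats(measured, reference)
print(f"max error {max_err * 1000:.0f} um, std {std * 1000:.0f} um")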


Author(s):  
Jiayong Yu ◽  
Longchen Ma ◽  
Maoyi Tian ◽  
Xiushan Lu

The unmanned aerial vehicle (UAV)-mounted mobile LiDAR system (ULS) is widely used in geomatics owing to its efficient data acquisition and convenient operation. However, due to the limited carrying capacity of a UAV, the sensors integrated in the ULS must be small and lightweight, which decreases the density of the collected scanning points and affects the registration between image data and point cloud data. To address this issue, the authors propose a method for registering and fusing ULS sequence images and laser point clouds, in which the problem of registering point cloud data and image data is converted into a problem of matching feature points between two images. First, a point cloud is selected to produce an intensity image. Subsequently, the corresponding feature points of the intensity image and the optical image are matched, and the exterior orientation parameters are solved using the collinearity equation based on image position and orientation. Finally, the sequence images are fused with the laser point cloud, based on the Global Navigation Satellite System (GNSS) time index of the optical images, to generate a true color point cloud. The experimental results show that the proposed method achieves high registration accuracy and fusion speed, demonstrating its accuracy and effectiveness.
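A compact sketch of the first step above, rasterising the point cloud into an intensity image so that 2D feature matching against the optical image (e.g. with standard keypoint detectors) becomes possible; the grid resolution and mean-per-cell averaging are illustrative choices rather than the authors' exact procedure.

import numpy as np

def intensity_image(points, intensity, cell=0.05):
    # project the cloud onto a horizontal grid and average the laser intensity
    # of the points falling into each cell; empty cells stay at zero
    xy = points[:, :2]
    ij = np.floor((xy - xy.min(0)) / cell).astype(int)
    shape = tuple(ij.max(0) + 1)
    acc, cnt = np.zeros(shape), np.zeros(shape)
    np.add.at(acc, (ij[:, 0], ij[:, 1]), intensity)
    np.add.at(cnt, (ij[:, 0], ij[:, 1]), 1)
    return np.divide(acc, cnt, out=np.zeros(shape), where=cnt > 0)

rng = np.random.default_rng(4)
pts = rng.uniform(0.0, 10.0, (20000, 3))    # hypothetical ULS points (m)
inten = rng.uniform(0.0, 255.0, 20000)      # their recorded laser intensities
img = intensity_image(pts, inten, cell=0.1)
print("intensity image shape:", img.shape)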


2021 ◽  
Vol 13 (5) ◽  
pp. 957
Author(s):  
Guglielmo Grechi ◽  
Matteo Fiorucci ◽  
Gian Marco Marmoni ◽  
Salvatore Martino

The study of strain effects in thermally-forced rock masses has gathered growing interest from engineering geology researchers in the last decade. In this framework, digital photogrammetry and infrared thermography have become two of the most exploited remote surveying techniques in engineering geology applications because they can provide useful information concerning the geomechanical and thermal conditions of these complex natural systems, where the mechanical role of joints cannot be neglected. In this paper, a methodology is proposed for generating point clouds of rock masses prone to failure, combining the high geometric accuracy of RGB optical images and the thermal information derived from infrared thermography surveys. Multiple 3D thermal point clouds and a high-resolution RGB point cloud were separately generated and co-registered by acquiring thermograms at different times of the day and in different seasons using commercial software for Structure from Motion and point cloud analysis. Temperature attributes of the thermal point clouds were merged with the reference high-resolution optical point cloud to obtain a composite 3D model storing accurate geometric information and multitemporal surface temperature distributions. The quality of the merged point clouds was evaluated by comparing temperature distributions derived from 2D thermograms and 3D thermal models, with a view to estimating their accuracy in describing surface thermal fields. Moreover, a preliminary attempt was made to test the feasibility of this approach in investigating the thermal behavior of complex natural systems such as jointed rock masses by analyzing the spatial distribution and temporal evolution of surface temperature ranges under different climatic conditions. The obtained results show that, despite the low resolution of the IR sensor, the geometric accuracy and the correspondence between 2D and 3D temperature measurements are high enough to consider 3D thermal point clouds suitable for describing surface temperature distributions and adequate for monitoring purposes of jointed rock masses.
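A minimal sketch of the attribute-merging step, assigning each point of the high-resolution RGB cloud the temperature of its nearest neighbour in the co-registered thermal cloud; the nearest-neighbour rule and the distance cut-off are assumptions, since the study performs this step with commercial point cloud software.

import numpy as np
from scipy.spatial import cKDTree

def transfer_temperature(rgb_xyz, thermal_xyz, thermal_temp, max_dist=0.05):
    # copy the temperature of the nearest thermal point onto each RGB point;
    # RGB points with no thermal neighbour within max_dist receive NaN
    dist, idx = cKDTree(thermal_xyz).query(rgb_xyz)
    temp = np.asarray(thermal_temp, dtype=float)[idx]
    temp[dist > max_dist] = np.nan
    return temp

rng = np.random.default_rng(5)
rgb_xyz = rng.uniform(0.0, 5.0, (50000, 3))                      # high-resolution RGB cloud (m)
thermal_xyz = rgb_xyz[::10] + rng.normal(0.0, 0.01, (5000, 3))   # sparser co-registered thermal cloud
thermal_temp = 15.0 + 10.0 * rng.random(5000)                    # surface temperatures (deg C)
temp = transfer_temperature(rgb_xyz, thermal_xyz, thermal_temp, max_dist=0.2)   # loose cut-off for the sparse toy clouds
print("RGB points that received a temperature:", int(np.isfinite(temp).sum()))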


2021 ◽  
Vol 13 (11) ◽  
pp. 2195
Author(s):  
Shiming Li ◽  
Xuming Ge ◽  
Shengfu Li ◽  
Bo Xu ◽  
Zhendong Wang

Today, mobile laser scanning and oblique photogrammetry are two standard urban remote sensing acquisition methods, and the cross-source point-cloud data obtained using these methods have significant differences and complementarity. Accurate co-registration can make up for the limitations of a single data source, but many existing registration methods face critical challenges. Therefore, in this paper, we propose a systematic incremental registration method that can successfully register MLS and photogrammetric point clouds in the presence of a large amount of missing data, large variations in point density, and scale differences. The robustness of this method is due to its elimination of noise in the extracted linear features and its 2D incremental registration strategy. There are three main contributions of our work: (1) the development of an end-to-end automatic cross-source point-cloud registration method; (2) a way to effectively extract linear features and restore the scale; and (3) an incremental registration strategy that simplifies the complex registration process. The experimental results show that this method can successfully achieve cross-source data registration, while other methods have difficulty obtaining satisfactory registration results efficiently. Moreover, this method can be extended to more point-cloud sources.
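One small piece of the pipeline sketched as an example: restoring the scale between the two cross-source clouds from matched linear features, here taken as the median ratio of corresponding segment lengths for robustness. The correspondences and coordinates are hypothetical, and the authors' full 2D incremental registration is not reproduced.

import numpy as np

def restore_scale(src_segments, dst_segments):
    # src_segments, dst_segments: (M, 2, 3) arrays of matched line segments
    # (start point, end point); the scale is the median length ratio
    ls = np.linalg.norm(src_segments[:, 1] - src_segments[:, 0], axis=1)
    ld = np.linalg.norm(dst_segments[:, 1] - dst_segments[:, 0], axis=1)
    return float(np.median(ld / ls))

rng = np.random.default_rng(6)
src = rng.uniform(0.0, 50.0, (20, 2, 3))               # linear features from the photogrammetric cloud
dst = 1.37 * src + rng.normal(0.0, 0.02, src.shape)    # the same features in the MLS cloud, scaled
print("estimated scale:", round(restore_scale(src, dst), 3))   # close to 1.37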


Sensors ◽  
2021 ◽  
Vol 21 (4) ◽  
pp. 1304
Author(s):  
Wenchao Wu ◽  
Yongguang Hu ◽  
Yongzong Lu

Plant leaf 3D architecture changes during growth and shows a sensitive response to environmental stresses. In recent years, acquisition and segmentation methods for leaf point clouds have developed rapidly, but 3D modelling of leaf point clouds has not gained much attention. In this study, a parametric surface modelling method was proposed for accurately fitting tea leaf point clouds. Firstly, principal component analysis was utilized to adjust the posture and position of the point cloud. Then, the point cloud was sliced into multiple sections, and some sections were selected to generate a point set to be fitted (PSF). Finally, the PSF was fitted to a non-uniform rational B-spline (NURBS) surface. Two methods were developed to generate the ordered PSF and the unordered PSF, respectively. The PSF was first fitted as a B-spline surface and then transformed to NURBS form by minimizing the fitting error, which was solved by particle swarm optimization (PSO). The fitting error was specified as the weighted sum of the root-mean-square error (RMSE) and the maximum value (MV) of the Euclidean distances between the fitted surface and a subset of the point cloud. The results showed that the proposed modelling method can be used even if the point cloud is largely simplified (RMSE < 1 mm, MV < 2 mm, without performing PSO). Future studies will model a wider range of leaves as well as incomplete point clouds.
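A sketch of the fitting-error objective described above, the weighted sum of the RMSE and the maximum value (MV) of the distances between the fitted surface and a subset of the cloud, which the paper minimises with PSO; the weights and the dense sampling of the surface used here are illustrative assumptions.

import numpy as np
from scipy.spatial import cKDTree

def fitting_error(surface_samples, cloud_subset, w_rmse=0.7, w_mv=0.3):
    # surface_samples: a dense sampling of the fitted surface
    # cloud_subset: the subset of the leaf point cloud used for evaluation
    d = cKDTree(surface_samples).query(cloud_subset)[0]
    rmse = np.sqrt(np.mean(d ** 2))
    return w_rmse * rmse + w_mv * d.max()

rng = np.random.default_rng(7)
u, v = np.meshgrid(np.linspace(0.0, 1.0, 80), np.linspace(0.0, 1.0, 80))
surface = np.column_stack([u.ravel(), v.ravel(), 0.1 * np.sin(3.0 * u.ravel())])   # stand-in fitted surface (m)
cloud = surface[::7] + rng.normal(0.0, 0.0005, surface[::7].shape)                 # noisy leaf points near it
print("fitting error (m):", round(fitting_error(surface, cloud), 5))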


2021 ◽  
Vol 13 (14) ◽  
pp. 2770
Author(s):  
Shengjing Tian ◽  
Xiuping Liu ◽  
Meng Liu ◽  
Yuhao Bian ◽  
Junbin Gao ◽  
...  

Object tracking from LiDAR point clouds, which are always incomplete, sparse and unstructured, plays a crucial role in urban navigation. Some existing methods rely on a learned similarity network for locating the target, which considerably limits further improvements in tracking accuracy. In this study, we leveraged a powerful target discriminator and an accurate state estimator to robustly track target objects in challenging point cloud scenarios. Considering the complex nature of estimating the state, we extended the traditional Lucas and Kanade (LK) algorithm to 3D point cloud tracking. Specifically, we propose a state estimation subnetwork that learns the incremental warp for updating the coarse target state. Moreover, to obtain the coarse state, we present a simple yet efficient discrimination subnetwork. It projects 3D shapes into a more discriminative latent space by integrating the global feature into each point-wise feature. Experiments on the KITTI and PandaSet datasets show that, compared with the most advanced methods, our proposed method achieves significant improvements, in particular up to 13.68% on KITTI.
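The learned incremental warp is the paper's contribution and cannot be reproduced from the abstract alone; the sketch below is only a classical analogue for intuition, an ICP-style refinement that applies one rigid (Kabsch) update per iteration to a coarse state. Everything here is a generic, assumption-laden sketch, not the proposed network.

import numpy as np
from scipy.spatial import cKDTree

def refine_state(template, scene, R, t, iters=20):
    # iteratively update a coarse rigid state (R, t) placing the target template
    # into the scene: pair transformed template points with their nearest scene
    # points and apply one Kabsch update per iteration
    R, t = R.copy(), np.asarray(t, dtype=float).copy()
    tree = cKDTree(scene)
    for _ in range(iters):
        moved = template @ R.T + t
        nn = scene[tree.query(moved)[1]]
        cA, cB = moved.mean(0), nn.mean(0)
        U, _, Vt = np.linalg.svd((moved - cA).T @ (nn - cB))
        dR = Vt.T @ U.T
        if np.linalg.det(dR) < 0:     # guard against an accidental reflection
            Vt[-1] *= -1
            dR = Vt.T @ U.T
        R = dR @ R
        t = dR @ t + cB - dR @ cA     # compose the incremental update with the current state
    return R, t

rng = np.random.default_rng(8)
template = rng.uniform(-1.0, 1.0, (1000, 3))             # target shape from a previous frame
th = 0.05
Rt = np.array([[np.cos(th), -np.sin(th), 0.0], [np.sin(th), np.cos(th), 0.0], [0.0, 0.0, 1.0]])
scene = template @ Rt.T + np.array([0.1, -0.05, 0.02])   # the same object slightly moved
R, t = refine_state(template, scene, np.eye(3), np.zeros(3))
print("recovered translation:", np.round(t, 3))          # should end up close to [0.1, -0.05, 0.02]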

