Accurate registration of point clouds of damaged aero-engine blades

Author(s):  
Hamid Ghorbani ◽  
Farbod Khameneifar

This paper presents a novel method for aligning the scanned point clouds of damaged blades with their nominal CAD model. To inspect a damaged blade, the blade surface is scanned and the scan data, in the form of a point cloud, is compared to the nominal CAD model of the blade. To be able to compare the two surfaces, the scanned point cloud and the CAD model must be brought into the same coordinate system via a registration algorithm. The geometric nonconformity between the scanned point cloud and the nominal model stemming from the damaged regions can affect the registration (alignment) outcome. The resulting alignment errors then lead to incorrect inspection results. To prevent this, the data points from the damaged regions have to be removed from the alignment calculations. The proposed registration method can accurately and automatically eliminate the unreliable scanned data points of the damaged regions from the registration process. Its main feature is a correspondence search technique based on the geometric properties of the local neighborhood of points. By combining the average curvature Hausdorff distance and the average Euclidean Hausdorff distance, a metric is defined to locally measure the dissimilarity between the scan data and the nominal model and to progressively remove the identified unreliable data points of the damaged regions with each iteration of the fine alignment algorithm. Implementation results have demonstrated that the proposed method is accurate and robust to noise, with superior performance compared with existing methods.
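
A minimal numpy sketch of the kind of per-point dissimilarity test described above, assuming precomputed per-point curvature estimates for both the scan and a sampled version of the CAD model; the neighborhood size k, the weight w, and the quantile cutoff are illustrative assumptions rather than the authors' parameters.

```python
# Illustrative sketch (not the authors' code): score each scan point by how much
# its local neighborhood disagrees with the nominal model, blending an average
# Euclidean distance with an average curvature difference, then trim the worst.
import numpy as np
from scipy.spatial import cKDTree

def dissimilarity(scan_pts, scan_curv, cad_pts, cad_curv, k=8, w=0.5):
    """Per-point dissimilarity between the scan and the sampled nominal model."""
    tree = cKDTree(cad_pts)
    dists, idx = tree.query(scan_pts, k=k)        # k nearest nominal samples
    d_euc = dists.mean(axis=1)                    # average Euclidean term
    d_cur = np.abs(scan_curv[:, None] - cad_curv[idx]).mean(axis=1)
    d_euc /= d_euc.max() + 1e-12                  # bring both terms to a
    d_cur /= d_cur.max() + 1e-12                  # comparable scale
    return w * d_euc + (1.0 - w) * d_cur          # weight w is an assumption

def trim_unreliable(scan_pts, score, quantile=0.9):
    """Drop the highest-scoring points, emulating the progressive removal of
    damaged-region points at each iteration of the fine alignment."""
    keep = score < np.quantile(score, quantile)
    return scan_pts[keep], keep
```

Re-scoring and trimming at every iteration of the fine alignment is what lets the damaged regions drop out of the correspondence set over time.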

2021 ◽  
Vol 13 (11) ◽  
pp. 2195
Author(s):  
Shiming Li ◽  
Xuming Ge ◽  
Shengfu Li ◽  
Bo Xu ◽  
Zhendong Wang

Today, mobile laser scanning (MLS) and oblique photogrammetry are two standard urban remote sensing acquisition methods, and the cross-source point-cloud data obtained using these methods show significant differences and complementarity. Accurate co-registration can make up for the limitations of a single data source, but many existing registration methods face critical challenges. Therefore, in this paper, we propose a systematic incremental registration method that can successfully register MLS and photogrammetric point clouds in the presence of a large amount of missing data, large variations in point density, and scale differences. The robustness of this method stems from its elimination of noise in the extracted linear features and its 2D incremental registration strategy. Our work makes three main contributions: (1) an end-to-end automatic cross-source point-cloud registration method; (2) an effective way to extract linear features and restore the scale; and (3) an incremental registration strategy that simplifies the complex registration process. The experimental results show that this method can successfully achieve cross-source data registration, while other methods have difficulty obtaining satisfactory results efficiently. Moreover, this method can be extended to more point-cloud sources.
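
As a rough illustration of the scale-restoration step, the sketch below estimates the scale factor between the two cross-source clouds from the lengths of matched 3D line segments; the median-ratio estimator and the segment representation are assumptions, not the authors' exact formulation.

```python
# Hedged sketch of scale restoration between cross-source clouds, based on
# matched 3D line segments extracted from each source.
import numpy as np

def segment_lengths(segments):
    """segments: (N, 2, 3) array of 3D line segments given by their endpoints."""
    return np.linalg.norm(segments[:, 1] - segments[:, 0], axis=1)

def estimate_scale(mls_segments, photo_segments):
    """Scale factor that maps the photogrammetric cloud onto the MLS cloud,
    computed from pairs of matched linear features."""
    ratios = segment_lengths(mls_segments) / segment_lengths(photo_segments)
    return float(np.median(ratios))   # median tolerates a few wrong matches
```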


Sensors ◽  
2019 ◽  
Vol 19 (18) ◽  
pp. 3908 ◽  
Author(s):  
Pavan Kumar B. N. ◽  
Ashok Kumar Patil ◽  
Chethana B. ◽  
Young Ho Chai

Acquiring 3D point cloud data (PCD) with a laser scanner and aligning it with video frames is a new approach that is efficient for retrofitting comprehensive objects in heavy pipeline industrial facilities. This work contributes a generic framework for interactive retrofitting in a virtual environment and an unmanned aerial vehicle (UAV)-based sensory setup designed to acquire PCD. The framework adopts a 4-in-1 alignment that uses a point cloud registration algorithm to align the pre-processed PCD with the partial PCD, and a frame-by-frame registration method for video alignment. This work also proposes a virtual interactive retrofitting framework that uses pre-defined 3D computer-aided design (CAD) models with a customized graphical user interface (GUI) and visualization of the 4-in-1 aligned video scene from a UAV camera in a desktop environment. Trials were carried out using the proposed framework in a real environment at a water treatment facility. A qualitative and quantitative study was conducted with participants to evaluate the performance of the proposed generic framework, using an appropriate questionnaire and a retrofitting task-oriented experiment. Overall, it was found that the proposed framework could be a solution for interactive 3D CAD model retrofitting on a combination of PCD acquired by the UAV sensory setup and real-time video from the camera in heavy industrial facilities.


2014 ◽  
Vol 513-517 ◽  
pp. 3680-3683 ◽  
Author(s):  
Xiao Xu Leng ◽  
Jun Xiao ◽  
Deng Yu Li

As the first step in 3D point cloud processing, registration plays a critical role in determining the quality of subsequent results. In this paper, an initial registration algorithm for point clouds based on random sampling is proposed. In the proposed algorithm, a set of base points is first extracted randomly from the target point cloud; an optimal set of corresponding points is then obtained from the source point cloud; a transformation matrix is estimated from the two sets using the least-squares method; finally, the matrix is applied to the source point cloud. Experimental results show that this algorithm achieves good precision as well as good robustness.
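
The least-squares estimation step can be sketched with the classic SVD-based (Kabsch-style) solution for the rigid transform between corresponding point sets; this is the standard generic formulation, not necessarily the paper's exact procedure.

```python
# Generic SVD-based least-squares estimate of the rigid transform between the
# randomly sampled base points and their correspondences.
import numpy as np

def rigid_transform_lsq(base_pts, corr_pts):
    """Return (R, t) mapping corr_pts (source) onto base_pts (target)."""
    c_base, c_corr = base_pts.mean(axis=0), corr_pts.mean(axis=0)
    H = (corr_pts - c_corr).T @ (base_pts - c_base)              # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # guard against reflections
    R = Vt.T @ D @ U.T
    t = c_base - R @ c_corr
    return R, t

# Applying the estimate to the whole source cloud:
# aligned_source = (R @ source_points.T).T + t
```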


Sensors ◽  
2021 ◽  
Vol 21 (17) ◽  
pp. 5778
Author(s):  
Baifan Chen ◽  
Hong Chen ◽  
Baojun Song ◽  
Grace Gong

Three-dimensional point cloud registration (PCReg) has a wide range of applications in computer vision, 3D reconstruction and medical fields. Although numerous advances have been achieved in point cloud registration in recent years, large-scale rigid transformation remains a problem that most algorithms still cannot effectively handle. To solve this problem, we propose a point cloud registration method based on learning and transform-invariant features (TIF-Reg). Our algorithm includes four modules: the transform-invariant feature extraction module, the deep feature embedding module, the corresponding point generation module and the decoupled singular value decomposition (SVD) module. In the transform-invariant feature extraction module, we design a TIF in SE(3) (the 3D rigid transformation space) that contains a triangular feature and a local density feature for each point. It fully exploits the transformation invariance of point clouds, making the algorithm highly robust to rigid transformation. The deep feature embedding module embeds the TIF into a high-dimensional space using a deep neural network, further improving the expressive ability of the features. The corresponding point cloud is generated using an attention mechanism in the corresponding point generation module, and the final transformation for registration is calculated in the decoupled SVD module. In our experiments, we first train and evaluate the TIF-Reg method on the ModelNet40 dataset. The results show that our method keeps the root mean squared error (RMSE) of rotation within 0.5° and the RMSE of translation close to 0 m, even when the rotation is up to [−180°, 180°] or the translation is up to [−20 m, 20 m]. We also test the generalization of our method on the TUM3D dataset using the model trained on ModelNet40. The results show that our method's errors are close to the results on ModelNet40, which verifies its good generalization ability. All experiments show that the proposed method is superior to state-of-the-art PCReg algorithms in terms of accuracy and complexity.
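
The sketch below illustrates why a descriptor built only from distances and local density is invariant to any SE(3) motion; the specific triangle construction (point, nearest neighbour, neighbourhood centroid) and the density radius are assumptions for illustration, not the TIF definition from the paper.

```python
# Illustrative transform-invariant descriptor in the spirit of TIF: pairwise
# distances and local point density are unchanged by any rigid motion, so a
# feature built only from them is SE(3)-invariant by construction.
import numpy as np
from scipy.spatial import cKDTree

def tif_like_features(points, k=16, radius=0.1):
    tree = cKDTree(points)
    dists, idx = tree.query(points, k=k + 1)             # column 0 is the point itself
    centroid_nb = points[idx[:, 1:]].mean(axis=1)         # neighbourhood centroid
    a = dists[:, 1]                                        # point -> nearest neighbour
    b = np.linalg.norm(points - centroid_nb, axis=1)       # point -> centroid
    c = np.linalg.norm(points[idx[:, 1]] - centroid_nb, axis=1)  # neighbour -> centroid
    density = np.array([len(nb) for nb in tree.query_ball_point(points, r=radius)])
    return np.stack([a, b, c, density.astype(float)], axis=1)    # (N, 4) descriptor
```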


Author(s):  
L. Gézero ◽  
C. Antunes

In the last few years, LiDAR sensors installed in terrestrial vehicles have proven to be an efficient way to collect very dense 3D georeferenced information. The possibility of creating very dense point clouds representing the surface surrounding the sensor at a given moment, in a fast, detailed and easy way, shows the potential of this technology for large-scale cartography and digital terrain model production. However, there are still some limitations associated with the use of this technology. When several acquisitions of the same area are made with the same device, differences between the clouds can be observed. These differences can range from a few centimetres to several tens of centimetres, mainly in urban and high-vegetation areas where occlusion of the GNSS signal degrades the georeferenced trajectory. In this article, a different point cloud registration method is proposed. In addition to its efficiency and speed of execution, the main advantage of the method is that the adjustment is made continuously along the trajectory, based on GPS time. The process is fully automatic and uses only information recorded in the standard LAS files, without the need for any auxiliary information, in particular regarding the trajectory.
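
A hedged sketch of the key ingredient, assuming the laspy library: every point in a standard LAS file carries a GPS time stamp, so the cloud can be split into short trajectory segments that can then be adjusted independently; the 1 s segment length is a placeholder and the adjustment itself is beyond this sketch.

```python
# Slice a LAS point cloud into trajectory segments by GPS time (laspy assumed;
# the point format must include the GPS time dimension).
import numpy as np
import laspy

def split_by_gps_time(las_path, segment_seconds=1.0):
    las = laspy.read(las_path)
    xyz = np.vstack([las.x, las.y, las.z]).T
    t = np.asarray(las.gps_time)
    order = np.argsort(t)
    xyz, t = xyz[order], t[order]
    segment_id = ((t - t[0]) // segment_seconds).astype(int)   # fixed-length time bins
    return [xyz[segment_id == s] for s in np.unique(segment_id)]
```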


2021 ◽  
Vol 16 (4) ◽  
pp. 579-587
Author(s):  
Pitisit Dillon ◽  
Pakinee Aimmanee ◽  
Akihiko Wakai ◽  
Go Sato ◽  
Hoang Viet Hung ◽  
...  

The density-based spatial clustering of applications with noise (DBSCAN) algorithm is a well-known algorithm for spatial clustering of point cloud data. It can be applied to many applications, such as crack detection, rockfall detection, and glacier movement detection. Traditional DBSCAN requires two predefined parameters, and suitable values depend upon the distribution of the input point cloud, so estimating these parameters is challenging. This paper proposes a new version of DBSCAN that can automatically customize the parameters. The proposed method consists of two processes: initial parameter estimation based on grid analysis, and DBSCAN based on a divide-and-conquer approach (DC-DBSCAN), which repeatedly performs DBSCAN on each cluster separately and recursively. To verify the proposed method, we applied it to a 3D point cloud dataset that was used to analyze rockfall events at the Puiggcercos cliff, Spain. The total number of data points used in this study was 15,567. The experimental results show that the proposed method is better than traditional DBSCAN in terms of purity and NMI scores. The purity scores of the proposed method and traditional DBSCAN were 96.22% and 91.09%, respectively, and the NMI scores were 0.78 and 0.49, respectively. The proposed method can also detect events that traditional DBSCAN cannot.
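
A minimal sketch of the divide-and-conquer idea using scikit-learn's DBSCAN: cluster, then re-cluster inside each resulting cluster with locally re-estimated parameters until clusters stop splitting. The k-distance heuristic for eps and the recursion limits are assumptions, not the paper's grid-based estimation.

```python
# Recursive DBSCAN sketch (not the authors' implementation).
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.neighbors import NearestNeighbors

def estimate_eps(points, min_samples=10):
    """Crude eps estimate: median distance to the min_samples-th neighbour."""
    nn = NearestNeighbors(n_neighbors=min_samples).fit(points)
    dists, _ = nn.kneighbors(points)
    return max(float(np.median(dists[:, -1])), 1e-9)

def dc_dbscan(points, min_samples=10, depth=0, max_depth=3):
    labels = DBSCAN(eps=estimate_eps(points, min_samples),
                    min_samples=min_samples).fit_predict(points)
    clusters = []
    for lab in set(labels) - {-1}:                  # -1 marks noise points
        members = points[labels == lab]
        if depth < max_depth and len(members) > 2 * min_samples:
            sub = dc_dbscan(members, min_samples, depth + 1, max_depth)
            clusters.extend(sub if len(sub) > 1 else [members])
        else:
            clusters.append(members)
    return clusters
```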


2017 ◽  
Vol 36 (13-14) ◽  
pp. 1455-1473 ◽  
Author(s):  
Andreas ten Pas ◽  
Marcus Gualtieri ◽  
Kate Saenko ◽  
Robert Platt

Recently, a number of grasp detection methods have been proposed that can be used to localize robotic grasp configurations directly from sensor data without estimating object pose. The underlying idea is to treat grasp perception analogously to object detection in computer vision. These methods take as input a noisy and partially occluded RGBD image or point cloud and produce as output pose estimates of viable grasps, without assuming a known CAD model of the object. Although these methods generalize grasp knowledge to new objects well, they have not yet been demonstrated to be reliable enough for wide use. Many grasp detection methods achieve grasp success rates (grasp successes as a fraction of the total number of grasp attempts) between 75% and 95% for novel objects presented in isolation or in light clutter. Not only are these success rates too low for practical grasping applications, but the light clutter scenarios that are evaluated often do not reflect the realities of real-world grasping. This paper proposes a number of innovations that together result in an improvement in grasp detection performance. The specific improvement in performance due to each of our contributions is quantitatively measured either in simulation or on robotic hardware. Ultimately, we report a series of robotic experiments that average a 93% end-to-end grasp success rate for novel objects presented in dense clutter.


2020 ◽  
Vol 12 (7) ◽  
pp. 1224 ◽  
Author(s):  
Abdulla Al-Rawabdeh ◽  
Fangning He ◽  
Ayman Habib

The integration of three-dimensional (3D) data defined in different coordinate systems requires the use of well-known registration procedures, which aim to align multiple models relative to a common reference frame. Depending on the accuracy of the estimated transformation parameters, existing registration procedures are classified as either coarse or fine registration. Coarse registration is typically used to establish a rough alignment between the involved point clouds. Fine registration starts from coarsely aligned point clouds to achieve a more precise alignment of the involved datasets. In practice, the point clouds acquired or derived from laser scanning and image-based dense matching techniques usually include an excessive number of points. Fine registration of huge datasets is time-consuming and sometimes difficult to accomplish in a reasonable timeframe. To address this challenge, this paper introduces two down-sampling approaches that aim to improve the efficiency and accuracy of iterative closest patch (ICPatch)-based fine registration. The first approach is based on a planar-based adaptive down-sampling strategy that removes redundant points in areas with high point density while keeping the points in lower-density regions. The second approach starts with the derivation of the surface normals for the constituents of a given point cloud using their local neighborhoods, which are then represented on a Gaussian sphere; down-sampling is ultimately achieved by removing points belonging to the detected peaks on the Gaussian sphere. Experiments were conducted using both simulated and real datasets to verify the feasibility of the proposed down-sampling approaches for providing reliable transformation parameters. The experimental results demonstrated that, for most registration cases in which the points are obtained from various mapping platforms (e.g., mobile/static laser scanners or aerial photogrammetry), the first approach (adaptive down-sampling) outperformed the traditional approaches, which use either the original or randomly down-sampled points, by providing smaller root mean square error (RMSE) values and a faster convergence rate. However, for some challenging cases in which the acquired point cloud has only limited geometric constraints, the Gaussian sphere-based approach provided superior performance, as it preserves some critical points for the accurate estimation of the transformation parameters relating the involved point clouds.
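
A hedged sketch of the Gaussian-sphere idea, assuming per-point normals are already available: unit normals are binned on the sphere and only heavily populated bins (the peaks, typically large planar regions) are thinned. The bin counts, the peak criterion, and the kept fraction are illustrative choices, not the paper's parameters.

```python
# Down-sample a cloud by thinning points whose normals fall on Gaussian-sphere peaks.
import numpy as np

def gaussian_sphere_downsample(points, normals, bins=18, keep_fraction=0.1, seed=0):
    rng = np.random.default_rng(seed)
    n = normals / np.linalg.norm(normals, axis=1, keepdims=True)
    theta = np.arccos(np.clip(n[:, 2], -1.0, 1.0))            # polar angle
    phi = np.arctan2(n[:, 1], n[:, 0])                          # azimuth
    cell = (np.digitize(theta, np.linspace(0.0, np.pi, bins)) * (2 * bins + 1)
            + np.digitize(phi, np.linspace(-np.pi, np.pi, 2 * bins)))
    counts = np.bincount(cell)
    peak_cells = np.flatnonzero(counts > counts.mean() + 2.0 * counts.std())
    keep = np.ones(len(points), dtype=bool)
    for c in peak_cells:                                        # thin only the peaks
        members = np.flatnonzero(cell == c)
        n_drop = int(len(members) * (1.0 - keep_fraction))
        keep[rng.choice(members, size=n_drop, replace=False)] = False
    return points[keep]
```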


Author(s):  
Lee J. Wells ◽  
Mohammed S. Shafae ◽  
Jaime A. Camelio

Ever-advancing sensor and measurement technologies continually provide new opportunities for knowledge discovery and quality control (QC) strategies for complex manufacturing systems. One such state-of-the-art measurement technology currently being implemented in industry is the 3D laser scanner, which can rapidly provide millions of data points to represent an entire manufactured part's surface. This gives 3D laser scanners a significant advantage over competing technologies that typically provide tens or hundreds of data points. Consequently, data collected from 3D laser scanners have great potential to be used for inspecting parts for surface and feature abnormalities. The current use of 3D point clouds for part inspection falls into two main categories: (1) extracting feature parameters, which does not complement the nature of 3D point clouds as it wastes valuable data, and (2) an ad hoc manual process in which a visual representation of a point cloud (usually as deviations from nominal) is analyzed, which tends to suffer from slow, inefficient, and inconsistent inspection results. Therefore, our paper proposes an approach to automate the latter form of 3D point cloud inspection. The proposed approach uses a newly developed adaptive generalized likelihood ratio (AGLR) technique to identify the most likely size, shape, and magnitude of a potential fault within the point cloud, which transforms the ad hoc visual inspection approach into a statistically viable automated inspection solution. To aid practitioners in designing and implementing an AGLR-based inspection process, our paper also reports the performance of the AGLR with respect to the probability of detecting faults of specific sizes and magnitudes, in addition to the probability of false alarms.
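
As a simplified illustration of the underlying idea (not the AGLR developed in the paper), the sketch below slides windows of several candidate sizes over a gridded deviation-from-nominal map and computes a generalized likelihood ratio statistic for a mean-shift fault in each window, keeping the most likely location, size, and magnitude.

```python
# Simplified GLR scan over a gridded deviation map; window sizes and sigma are
# illustrative assumptions.
import numpy as np

def glr_scan(dev_map, window_sizes=(3, 5, 9), sigma=0.02):
    """dev_map: 2D grid of deviations from nominal; sigma is assumed known."""
    best = {"score": -np.inf}
    for w in window_sizes:
        for i in range(dev_map.shape[0] - w + 1):
            for j in range(dev_map.shape[1] - w + 1):
                win = dev_map[i:i + w, j:j + w]
                # GLR of "unknown mean shift" vs "zero mean" under Gaussian noise
                score = win.sum() ** 2 / (win.size * sigma ** 2)
                if score > best["score"]:
                    best = {"score": score, "row": i, "col": j,
                            "size": w, "magnitude": float(win.mean())}
    return best   # flag a fault when best["score"] exceeds a null-distribution threshold
```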


2021 ◽  
Vol 11 (10) ◽  
pp. 4538
Author(s):  
Jinbo Liu ◽  
Pengyu Guo ◽  
Xiaoliang Sun

When measuring surface deformation, traditional point cloud registration methods cannot be applied because the overlap between the point clouds before and after deformation is small and the accuracy of the initial registration values cannot be guaranteed. To solve this problem, a complete solution is proposed: first, at least three cones are fixed to the target; then, the initial values of the transformation matrix are calculated from the cone vertices; on this basis, the point cloud registration can be performed accurately with the iterative closest point (ICP) algorithm using the point cloud neighborhoods of the cone vertices. To improve the automation of this solution, an accurate and automatic point cloud registration method based on biological vision is proposed. First, the three-dimensional (3D) coordinates of the cone vertices are obtained through multi-view observation, feature detection, data fusion, and shape fitting. In the shape fitting, a closed-form solution for the cone vertices is derived on the basis of the quadratic form. Second, a random strategy is designed to calculate the initial values of the transformation matrix between the two point clouds. Then, combined with ICP, point cloud registration is realized automatically and precisely. The simulation results showed that, when the intensity of Gaussian noise ranged from 0 to 1 mr (where mr denotes the average mesh resolution of the models), the rotation and translation errors of the point cloud registration were less than 0.1° and 1 mr, respectively. Lastly, a camera-projector system was developed to dynamically measure surface deformation during ablation tests in an arc-heated wind tunnel, and the experimental results showed that the measurement precision for surface deformation was better than 0.05 mm when the surface deformation was smaller than 4 mm.
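
A minimal sketch of the refinement stage under stated assumptions: starting from the vertex-based initial transform, only scan points within a radius of the cone vertices are kept and refined with a plain point-to-point ICP loop; the radius, iteration count, and inlined SVD solver are illustrative choices rather than the authors' implementation.

```python
# ICP refinement restricted to the neighbourhoods of the cone vertices.
import numpy as np
from scipy.spatial import cKDTree

def svd_rigid(src, dst):
    """Least-squares rigid transform (R, t) mapping src onto dst."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    U, _, Vt = np.linalg.svd((src - cs).T @ (dst - cd))
    R = Vt.T @ np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))]) @ U.T
    return R, cd - R @ cs

def icp_near_vertices(src, dst, vertices, R0, t0, radius=0.05, iters=30):
    near = np.any(np.linalg.norm(src[:, None] - vertices[None], axis=2) < radius, axis=1)
    src = src[near]                                  # keep only vertex neighbourhoods
    tree = cKDTree(dst)
    R, t = R0, t0
    for _ in range(iters):
        moved = (R @ src.T).T + t
        _, idx = tree.query(moved)                   # closest-point correspondences
        R, t = svd_rigid(src, dst[idx])              # re-estimate the full transform
    return R, t
```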

