The Iterative Closest Point Registration Algorithm Based on the Normal Distribution Transformation

2019 · Vol 147 · pp. 181-190
Author(s): Xiuying Shi, Jianjun Peng, Jiping Li, Pitao Yan, Hangyu Gong
Materials · 2021 · Vol 14 (6) · pp. 1563
Author(s): Ruibing Wu, Ziping Yu, Donghong Ding, Qinghua Lu, Zengxi Pan, ...

As a promising technology with low requirements and high deposition efficiency, Wire Arc Additive Manufacturing (WAAM) can significantly reduce repair costs and improve the formation quality of molds. To further improve the accuracy of WAAM in repairing molds, a point cloud model that expresses the spatial distribution and surface characteristics of the mold is proposed. Because the mold is large, it must be scanned multiple times, which yields multiple point cloud models. Point cloud registration, for example with the Iterative Closest Point (ICP) algorithm, then merges these point cloud models to reconstruct a complete data model. However, using the ICP algorithm to merge large point clouds with a low-overlap area is inefficient, time-consuming, and unsatisfactory. Therefore, this paper presents the improved Offset Iterative Closest Point (OICP) algorithm, an online fast registration algorithm suitable for intelligent WAAM mold repair. The practicality and reliability of the algorithm are illustrated by comparison with the standard ICP algorithm and with a three-coordinate measuring instrument in the Experimental Setup Section. The results show that the OICP algorithm remains feasible at low overlap rates: for overlap rates below 60% in our experiments, the traditional ICP algorithm failed, whereas the OICP algorithm achieved a Root Mean Square (RMS) error of 0.1 mm and a rotation error within 0.5 degrees, demonstrating the improvement brought by the proposed OICP algorithm.
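For readers who want to reproduce the baseline that OICP improves upon, the sketch below runs a standard point-to-point ICP registration with the Open3D library. The file names, voxel size, and distance threshold are illustrative assumptions, and the offset-based initialization that distinguishes OICP is not reproduced here.

```python
import numpy as np
import open3d as o3d

# Two partial scans of the mold surface (hypothetical file names).
source = o3d.io.read_point_cloud("scan_part_a.pcd")
target = o3d.io.read_point_cloud("scan_part_b.pcd")

# Downsample to keep the nearest-neighbour search tractable for large scans.
voxel = 1.0  # mm, assumed scan resolution
src = source.voxel_down_sample(voxel)
tgt = target.voxel_down_sample(voxel)

init = np.eye(4)  # OICP would supply a better initial offset transform here
result = o3d.pipelines.registration.registration_icp(
    src, tgt,
    5.0,   # max correspondence distance in mm (assumed)
    init,
    o3d.pipelines.registration.TransformationEstimationPointToPoint(),
    o3d.pipelines.registration.ICPConvergenceCriteria(max_iteration=50),
)
print("fitness:", result.fitness, "inlier RMSE:", result.inlier_rmse)
print("estimated transformation:\n", result.transformation)
```

With a low overlap between the two scans, this plain ICP run is exactly the case where the abstract reports failures, which is what motivates the better initialization in OICP.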


Robotica · 2014 · Vol 34 (7) · pp. 1630-1658
Author(s): Ji W. Kim, Beom H. Lee

Summary: This paper presents the supervoxel normal distributions transform (SV-NDT), a novel three-dimensional (3-D) registration algorithm that significantly improves the performance of the three-dimensional normal distributions transform (3-D NDT). The 3-D NDT partitions a model scan using a regular 3-D grid. Generating normal distributions on this regular grid causes considerable information loss because the grid ignores the local surface structure of the model scan; a plane is known to be the type of surface (the constituent unit of each scan) best modeled by a single normal distribution. The SV-NDT reduces this loss of information by using a supervoxel-generating algorithm at the partitioning stage. In addition, it exploits the local surface structure of the data scan by replacing the Euclidean distance with a function that uses local geometry as well as the Euclidean distance when each point in the data scan is matched to its corresponding normal distribution. Experiments demonstrate that the use of the supervoxel-generating algorithm increases the modeling accuracy of the normal distributions and that the proposed 3-D registration algorithm outperforms the 3-D NDT and other widely used 3-D registration algorithms in terms of robustness and speed on both synthetic and real-world datasets. The effect of changing the function used to create correspondences is also verified.
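As a rough illustration of the regular-grid modeling step that SV-NDT replaces with supervoxels, the NumPy sketch below partitions a model scan into cubic cells and fits one normal distribution (mean and covariance) per occupied cell. The cell size, minimum point count, and scoring function are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def build_ndt_cells(points: np.ndarray, cell_size: float) -> dict:
    """Partition a model scan with a regular 3-D grid and fit one normal
    distribution (mean, covariance) per occupied cell, as in the classic
    3-D NDT. SV-NDT replaces this regular grid with supervoxels so that
    each distribution models a locally planar patch."""
    keys = np.floor(points / cell_size).astype(int)
    cells = {}
    for key in np.unique(keys, axis=0):
        pts = points[np.all(keys == key, axis=1)]
        if len(pts) < 5:                        # too few points for a stable covariance
            continue
        mean = pts.mean(axis=0)
        cov = np.cov(pts.T) + 1e-6 * np.eye(3)  # small regularisation term
        cells[tuple(key)] = (mean, cov)
    return cells

def cell_score(point: np.ndarray, mean: np.ndarray, cov: np.ndarray) -> float:
    """Mahalanobis-style score of one data-scan point under its cell's
    normal distribution (the quantity an NDT optimiser would accumulate)."""
    d = point - mean
    return float(d @ np.linalg.solve(cov, d))

# Usage on a stand-in scan of 2000 random points in a 10 m cube.
cloud = np.random.default_rng(0).random((2000, 3)) * 10.0
cells = build_ndt_cells(cloud, cell_size=1.0)
print(len(cells), "occupied cells with fitted normal distributions")
```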


2019 · Vol 50 (5) · pp. 1267-1280
Author(s): Wei Xu, Xiaoying Fu, Xia Li, Ming Wang

Abstract: This paper presents a new Bayesian probabilistic forecast (BPF) model to improve the efficiency and reliability of the normal distribution transformation and to describe the uncertainties of medium-range inflow forecasts with a 10-day forecast horizon. In this model, the inflow data are transformed twice to reach a standard normal distribution. The Box–Cox (BC) model is first used to quickly transform the inflow data toward a normal distribution, and the transformed data are then converted to a standard normal distribution by the meta-Gaussian (MG) model. Based on the transformed inflows in the standard normal space, the prior and likelihood density functions of the BPF are established. The newly developed model is tested on China's Huanren hydropower reservoir and is compared with BPFs that use the MG and BC models separately. Comparative results show that the new BPF model exhibits significantly improved data transformation efficiency and forecast accuracy.
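A minimal sketch of the two-stage transformation described above is shown below, assuming SciPy's Box–Cox transform for the first stage and a normal-quantile mapping standing in for the meta-Gaussian step. The synthetic inflow series and plotting positions are illustrative assumptions, not the paper's data or exact formulation.

```python
import numpy as np
from scipy import stats

# Synthetic, strictly positive "inflow" series standing in for reservoir data.
rng = np.random.default_rng(42)
inflow = rng.gamma(shape=2.0, scale=150.0, size=365)

# Stage 1: Box-Cox (BC) transform toward a normal distribution.
bc_inflow, lam = stats.boxcox(inflow)

# Stage 2: normal-quantile mapping to the standard normal, a common stand-in
# for the meta-Gaussian (MG) step: rank the BC values, convert the ranks to
# plotting positions in (0, 1), and apply the standard normal quantile function.
ranks = stats.rankdata(bc_inflow)
p = ranks / (len(bc_inflow) + 1.0)
z = stats.norm.ppf(p)

print("Box-Cox lambda:", lam)
print("transformed mean / std:", z.mean(), z.std())
```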


2018 · Vol 2018 · pp. 1-9
Author(s): Yongshan Liu, Dehan Kong, Dandan Zhao, Xiang Gong, Guichun Han

Existing registration algorithms suffer from low precision and slow speed when registering large amounts of point cloud data. In this paper, we propose a point cloud registration algorithm based on feature extraction and matching that alleviates both problems. In the rough registration stage, the algorithm extracts feature points based on the judgment of retention points and bumps, which speeds up feature point extraction. During registration, Fast Point Feature Histogram (FPFH) features and the Hausdorff distance are used to search for corresponding point pairs, and the RANSAC algorithm is used to eliminate incorrect pairs, thereby improving the accuracy of the correspondences. In the precise registration phase, the algorithm uses an improved normal distribution transformation (INDT) algorithm. Experimental results show that, given a large amount of point cloud data, this algorithm has advantages in both time and precision.
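The coarse stage described above can be approximated with off-the-shelf tools; the sketch below uses Open3D's FPFH features and RANSAC-based correspondence rejection, with a standard ICP refinement standing in for the paper's improved NDT (INDT) step. The feature-point extraction based on retention points and bumps is not reproduced, and all thresholds and file names are assumptions.

```python
import open3d as o3d

voxel = 0.05  # assumed working resolution in the cloud's units

def preprocess(path: str):
    """Load, downsample, and compute normals and FPFH features for one cloud."""
    pc = o3d.io.read_point_cloud(path).voxel_down_sample(voxel)
    pc.estimate_normals(
        o3d.geometry.KDTreeSearchParamHybrid(radius=2 * voxel, max_nn=30))
    fpfh = o3d.pipelines.registration.compute_fpfh_feature(
        pc, o3d.geometry.KDTreeSearchParamHybrid(radius=5 * voxel, max_nn=100))
    return pc, fpfh

source, fpfh_src = preprocess("source.pcd")   # hypothetical input files
target, fpfh_tgt = preprocess("target.pcd")

# Rough registration: match FPFH features and let RANSAC discard bad pairs.
coarse = o3d.pipelines.registration.registration_ransac_based_on_feature_matching(
    source, target, fpfh_src, fpfh_tgt,
    True,          # mutual_filter: keep only mutually nearest feature matches
    1.5 * voxel,   # max correspondence distance
    o3d.pipelines.registration.TransformationEstimationPointToPoint(False),
    3)             # points drawn per RANSAC iteration

# Fine registration: standard ICP stands in for the paper's INDT refinement.
fine = o3d.pipelines.registration.registration_icp(
    source, target, 1.5 * voxel, coarse.transformation,
    o3d.pipelines.registration.TransformationEstimationPointToPoint())
print("coarse fitness:", coarse.fitness, "fine fitness:", fine.fitness)
```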


2017 · Vol 9 (5) · pp. 433
Author(s): Lin Li, Fan Yang, Haihong Zhu, Dalin Li, You Li, ...

1985 · Vol 24 (03) · pp. 120-130
Author(s): E. Brunner, N. Neumann

Summary: The mathematical basis of Zelen's suggestion [4] of prerandomizing patients in a clinical trial and then asking them for their consent is investigated. The first problem is to estimate the therapy and selection effects. In the simple prerandomized design (PRD) this is possible without any problems; similar observations have been made by Anbar [1] and McHugh [3]. For the double PRD, however, additional assumptions are needed in order to render the therapy and selection effects estimable. The second problem is to determine the distribution of the statistics. It has to be taken into account that the sample sizes are random variables in the PRDs, which is why the distribution of the statistics can only be determined asymptotically, even under the assumption of a normal distribution. The behaviour of the statistics for small samples is investigated by means of simulations, in which the statistics considered in the present paper are compared with those suggested by Ihm [2]. It turns out that the statistics suggested in [2] may lead to anticonservative decisions, whereas the “canonical statistics” suggested by Zelen [4] and considered in the present paper keep the level quite well, or may lead to slightly conservative decisions when there are considerable selection effects.
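To make the distinction between therapy and selection effects concrete, the toy simulation below generates outcomes under a simple prerandomized design in which willingness to consent shifts the outcome. The additive outcome model and all parameter values are illustrative assumptions, not the estimators or simulation settings analysed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy simple prerandomized design (PRD): patients are randomized before
# consent, and patients assigned to the experimental arm who refuse consent
# receive the standard therapy instead. Outcome model is an assumption.
n = 10_000
therapy_effect = 0.5      # true benefit of the experimental therapy
selection_effect = 0.3    # consenters differ systematically from refusers
consent_rate = 0.7

arm = rng.integers(0, 2, n)             # prerandomized assignment (0 = standard, 1 = experimental)
willing = rng.random(n) < consent_rate  # latent willingness to consent
treated = (arm == 1) & willing          # refusers in arm 1 fall back to standard therapy

outcome = (rng.normal(0.0, 1.0, n)
           + therapy_effect * treated
           + selection_effect * willing)

# A naive treated-vs-untreated comparison mixes the therapy and selection
# effects; comparing the prerandomized arms removes the selection bias but
# estimates a consent-diluted therapy effect.
print("treated vs untreated:", outcome[treated].mean() - outcome[~treated].mean())
print("arm 1 vs arm 0      :", outcome[arm == 1].mean() - outcome[arm == 0].mean())
```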

