A Registration Method Based on Contour Point Cloud for 3D Whole-Body PET and CT Images

2017, Vol. 2017, pp. 1-11
Author(s): Zhiying Song, Huiyan Jiang, Qiyao Yang, Zhiguo Wang, Guoxu Zhang

The PET and CT fusion image, combining anatomical and functional information, has important clinical value. An effective registration of PET and CT images is the basis of image fusion. This paper presents a multithread registration method based on contour point clouds for 3D whole-body PET and CT images. Firstly, a geometric feature-based segmentation (GFS) method and a dynamic threshold denoising (DTD) method are proposed to preprocess the CT and PET images, respectively. Next, a new automated trunk slice extraction method is presented for extracting feature point clouds. Finally, a multithread Iterative Closest Point algorithm is adopted to drive an affine transform. We compare our method with a multiresolution registration method based on Mattes Mutual Information on 13 pairs (246~286 slices per pair) of 3D whole-body PET and CT data. Experimental results demonstrate the registration effectiveness of our method, with a lower negative normalization correlation (NC = −0.933) on feature images and a smaller Euclidean distance error (ED = 2.826) on landmark points, outperforming the source data (NC = −0.496, ED = 25.847) and the compared method (NC = −0.614, ED = 16.085). Moreover, our method is about ten times faster than the compared one.
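
As a rough illustration of the final step described above (not the authors' implementation), the following Python sketch performs one ICP-style iteration that matches each contour point to its nearest neighbour and fits an affine map by least squares; NumPy and SciPy are assumed, and all names are illustrative.

    import numpy as np
    from scipy.spatial import cKDTree

    def icp_affine_step(source, target):
        """One iteration: nearest-neighbour matching, then a least-squares affine fit."""
        tree = cKDTree(target)
        _, idx = tree.query(source)          # nearest target point for each source point
        matched = target[idx]
        # Solve matched ~ source @ A.T + t in the least-squares sense.
        src_h = np.hstack([source, np.ones((len(source), 1))])   # homogeneous coordinates
        params, _, _, _ = np.linalg.lstsq(src_h, matched, rcond=None)
        A, t = params[:3].T, params[3]
        return source @ A.T + t, A, t

    # In practice the step is repeated until the mean correspondence distance stops improving.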

2014, Vol. 1039, pp. 30-35
Author(s): Wei Liu, Lu Yue Ju, Cheng Hui Lin

A hybrid measurement method is proposed to meet the high accuracy requirements of partial or whole three-dimensional reconstruction of aviation engine parts. The point clouds of the aviation engine part are first captured using contact and non-contact measuring methods. A feature-based parametric modeling strategy is adopted to reconstruct the aviation engine part so that the model can easily be modified later. Then, the point cloud data obtained by contact measurement and the reconstructed model are registered in the same coordinate system to detect the deviation. The point cloud registration combines a feature-based registration method with the standard Iterative Closest Point (ICP) algorithm, which helps to improve the registration accuracy. According to the resulting deviation, the three-dimensional model can be modified. The accuracy of the modified model is controlled within 0.02 mm, satisfying the requirement of aviation engine parts. Three-dimensional reconstruction results verify the feasibility of the method.
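
As an illustrative sketch (assumed names, not the paper's code), the deviation check after registration can be as simple as querying, for every contact-measured point, the nearest point sampled from the reconstructed model and comparing the distances against the 0.02 mm tolerance:

    import numpy as np
    from scipy.spatial import cKDTree

    def deviation_report(measured_pts, model_pts, tol=0.02):
        """Nearest-point deviations of measured points from the model (units: mm)."""
        dists, _ = cKDTree(model_pts).query(measured_pts)
        return {
            "max_deviation": float(dists.max()),
            "mean_deviation": float(dists.mean()),
            "within_tolerance": bool(dists.max() <= tol),
        }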


2020, Vol. 2020, pp. 1-15
Author(s): Jinho Song, Kwanghee Ko

In this paper, we propose a method for registering unorganized point clouds without using targets or markers. Motivated by the 4-points congruent sets (4PCS) algorithm, a nontarget-based registration method commonly used in related fields, we develop a feature-based 4PCS algorithm (F-4PCS). The method combines the basic approach of the 4PCS algorithm with geometric feature information to produce consistent global registration results efficiently. We use features from the point feature histogram descriptor and features that capture surface curvature. The experimental results show that the proposed method successfully registers point clouds of both outdoor and indoor scenes and performs better than existing 4PCS-based registration methods.
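
For context, the invariant at the heart of any 4PCS-style method is the pair of intersection ratios of a planar 4-point base, which are preserved by affine (and hence rigid) transforms. The sketch below computes them with NumPy; it is generic 4PCS machinery, not the authors' F-4PCS code.

    import numpy as np

    def base_ratios(a, b, c, d):
        """Intersection ratios r1, r2 of segments ab and cd (points assumed roughly coplanar)."""
        d1, d2, r = b - a, d - c, c - a
        M = np.array([[d1 @ d1, -(d1 @ d2)],
                      [d1 @ d2, -(d2 @ d2)]])
        rhs = np.array([d1 @ r, d2 @ r])
        s, u = np.linalg.solve(M, rhs)   # parameters of the (near-)intersection point e
        return s, u                      # r1 = |a-e|/|a-b|, r2 = |c-e|/|c-d|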


2021, Vol. 13 (11), pp. 2195
Author(s): Shiming Li, Xuming Ge, Shengfu Li, Bo Xu, Zhendong Wang

Today, mobile laser scanning (MLS) and oblique photogrammetry are two standard urban remote sensing acquisition methods, and the cross-source point-cloud data obtained with these methods show significant differences as well as complementarity. Accurate co-registration can make up for the limitations of a single data source, but many existing registration methods face critical challenges. Therefore, in this paper, we propose a systematic incremental registration method that can successfully register MLS and photogrammetric point clouds in the presence of a large amount of missing data, large variations in point density, and scale differences. The robustness of this method is due to its elimination of noise in the extracted linear features and its 2D incremental registration strategy. There are three main contributions of our work: (1) the development of an end-to-end automatic cross-source point-cloud registration method; (2) a way to effectively extract the linear features and restore the scale; and (3) an incremental registration strategy that simplifies the complex registration process. The experimental results show that this method can successfully achieve cross-source data registration, while other methods have difficulty obtaining satisfactory results efficiently. Moreover, this method can be extended to more point-cloud sources.
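
One way to picture the scale-restoration step (a hedged sketch under the assumption that matched 2D feature points are already available from the linear features, which the abstract does not spell out) is the Umeyama closed form for a similarity transform:

    import numpy as np

    def umeyama_similarity(src, dst):
        """Return s, R, t with dst ≈ s * src @ R.T + t for matched (N, 2) point sets."""
        mu_s, mu_d = src.mean(0), dst.mean(0)
        src_c, dst_c = src - mu_s, dst - mu_d
        cov = dst_c.T @ src_c / len(src)
        U, S, Vt = np.linalg.svd(cov)
        D = np.eye(2)
        if np.linalg.det(U) * np.linalg.det(Vt) < 0:
            D[-1, -1] = -1                       # keep a proper rotation
        R = U @ D @ Vt
        s = np.trace(np.diag(S) @ D) / src_c.var(0).sum()
        t = mu_d - s * R @ mu_s
        return s, R, t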


Sensors, 2020, Vol. 20 (14), pp. 3848
Author(s): Xinyue Zhang, Gang Liu, Ling Jing, Siyao Chen

The heart girth parameter is an important indicator of the growth and development of pigs and provides critical guidance for the optimization of healthy pig breeding. To overcome the heavy workloads and poor adaptability of the traditional measurement methods currently used in pig breeding, this paper proposes an automated pig heart girth measurement method using two Kinect depth sensors. First, a two-view pig depth image acquisition platform is established for data collection; the two-view point clouds, after preprocessing, are registered and fused by a feature-based improved 4-Point Congruent Set (4PCS) method. Second, the fused point cloud is pose-normalized, and the axillary contour is used to automatically extract the heart girth measurement point. Finally, this point is taken as the starting point to extract, perpendicular to the ground, the circumferential section of the pig point cloud, and the complete heart girth point cloud is obtained by mirror symmetry. The heart girth is measured along this point cloud using the shortest path method. Using the proposed method, experiments were conducted on two-view data from 26 live pigs. The results showed that the absolute errors of the heart girth measurements were all less than 4.19 cm and the average relative error was 2.14%, indicating the high accuracy and efficiency of this method.
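
As a simplified picture of the girth computation (the paper uses a shortest-path method; this angular-ordering version is only a stand-in, with assumed names and units), one can slice the fused cloud at the measurement point's height and sum the edge lengths of the resulting closed polygon:

    import numpy as np

    def girth_estimate(points, z0, thickness=0.01):
        """Approximate circumference of the slice |z - z0| < thickness/2 (units of the cloud)."""
        band = points[np.abs(points[:, 2] - z0) < thickness / 2]
        center = band[:, :2].mean(0)
        angles = np.arctan2(band[:, 1] - center[1], band[:, 0] - center[0])
        ring = band[np.argsort(angles), :2]      # order the slice points around the body axis
        closed = np.vstack([ring, ring[:1]])     # close the loop
        return np.linalg.norm(np.diff(closed, axis=0), axis=1).sum()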


2014, Vol. 513-517, pp. 3680-3683
Author(s): Xiao Xu Leng, Jun Xiao, Deng Yu Li

As the first step in 3D point cloud processing, registration plays a critical role in determining the quality of subsequent results. In this paper, an initial registration algorithm for point clouds based on random sampling is proposed. In the proposed algorithm, a base point set is first extracted randomly from the target point cloud; next, an optimal corresponding point set is obtained from the source point cloud; then a transformation matrix is estimated from the two sets with least-squares methods; finally, the matrix is applied to the source point cloud. Experimental results show that this algorithm achieves good precision as well as good robustness.
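
For reference, the least-squares estimation step mentioned above is, for the rigid case, the standard Kabsch/SVD closed form over the two corresponding point sets; the sketch below is generic and not taken from the paper.

    import numpy as np

    def rigid_transform(src, dst):
        """Return R, t minimizing sum ||R @ src_i + t - dst_i||^2 for matched (N, 3) sets."""
        mu_s, mu_d = src.mean(0), dst.mean(0)
        H = (src - mu_s).T @ (dst - mu_d)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:                 # guard against a reflection
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = mu_d - R @ mu_s
        return R, t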


Sensors, 2021, Vol. 21 (17), pp. 5778
Author(s): Baifan Chen, Hong Chen, Baojun Song, Grace Gong

Three-dimensional point cloud registration (PCReg) has a wide range of applications in computer vision, 3D reconstruction and medical fields. Although numerous advances have been achieved in the field of point cloud registration in recent years, large-scale rigid transformation is a problem that most algorithms still cannot effectively handle. To solve this problem, we propose a point cloud registration method based on learning and transform-invariant features (TIF-Reg). Our algorithm includes four modules: the transform-invariant feature extraction module, the deep feature embedding module, the corresponding point generation module and the decoupled singular value decomposition (SVD) module. In the transform-invariant feature extraction module, we design a TIF in SE(3) (the 3D rigid transformation space) which contains a triangular feature and a local density feature for each point. It fully exploits the transformation invariance of point clouds, making the algorithm highly robust to rigid transformation. The deep feature embedding module embeds the TIF into a high-dimensional space using a deep neural network, further improving the expressive power of the features. The corresponding point cloud is generated using an attention mechanism in the corresponding point generation module, and the final transformation for registration is calculated in the decoupled SVD module. In our experiments, we first train and evaluate the TIF-Reg method on the ModelNet40 dataset. The results show that our method keeps the root mean squared error (RMSE) of rotation within 0.5° and the RMSE of translation close to 0 m, even when the rotation is up to [−180°, 180°] or the translation is up to [−20 m, 20 m]. We also test the generalization of our method on the TUM3D dataset using the model trained on ModelNet40. The results show that our method's errors are close to the experimental results on ModelNet40, which verifies its good generalization ability. All experiments show that the proposed method is superior to state-of-the-art PCReg algorithms in terms of accuracy and complexity.
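
The final solve in such pipelines is typically a weighted SVD over soft correspondences (for example, one attention-weighted target point per source point). The sketch below shows that generic step under assumed names and weighting; it is not the authors' decoupled-SVD code.

    import numpy as np

    def weighted_svd_transform(src, tgt_soft, w):
        """src, tgt_soft: (N, 3) matched points; w: (N,) non-negative correspondence weights."""
        w = w / w.sum()
        mu_s, mu_t = w @ src, w @ tgt_soft       # weighted centroids
        H = (src - mu_s).T @ ((tgt_soft - mu_t) * w[:, None])
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:                 # keep a proper rotation
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = mu_t - R @ mu_s
        return R, t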


Author(s): D. Tosic, S. Tuttas, L. Hoegner, U. Stilla

This work proposes an approach for semantic classification of an outdoor-scene point cloud acquired with a high-precision Mobile Mapping System (MMS), with the major goal of contributing to the automatic creation of High Definition (HD) Maps. The automatic point labeling is achieved by combining a feature-based approach for semantic classification of point clouds with a deep learning approach for semantic segmentation of images. Both the point cloud data and the data from a multi-camera system are used for gaining spatial information about an urban scene. Two types of classification are applied for this task: 1) A feature-based approach, in which the point cloud is organized into a supervoxel structure for capturing the geometric characteristics of points. Several geometric features are then extracted for an appropriate representation of the local geometry, followed by removing the effect of local tendency for each supervoxel to enhance the distinction between similar structures. Lastly, the Random Forests (RF) algorithm is applied in the classification phase to assign labels to supervoxels and therefore to the points within them. 2) A deep learning approach, employed for semantic segmentation of MMS images of the same scene. To achieve this, an implementation of the Pyramid Scene Parsing Network is used. The resulting segmented images, with each pixel carrying a class label, are then projected onto the point cloud, enabling label assignment for each point. Finally, experimental results are presented for a complex urban scene, and the performance of this method is evaluated on a manually labeled dataset, for the deep learning and feature-based classification individually as well as for the fused labels. The achieved overall accuracy with the fused output is 0.87 on the final test set, which significantly outperforms the results of the individual methods on the same point cloud. The labeled data are published on the TUM-PF Semantic-Labeling-Benchmark.
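
A minimal sketch of the feature-based branch, assuming eigenvalue-based shape features per supervoxel and scikit-learn's Random Forest (the exact feature set and training split are not specified in the abstract):

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    def eigen_features(supervoxel_pts):
        """Linearity, planarity and scattering from the covariance eigenvalues of one supervoxel."""
        l1, l2, l3 = np.sort(np.linalg.eigvalsh(np.cov(supervoxel_pts.T)))[::-1]
        return np.array([(l1 - l2) / l1, (l2 - l3) / l1, l3 / l1])

    # X_train, y_train: per-supervoxel feature vectors and manual labels; X_test: unseen supervoxels.
    # clf = RandomForestClassifier(n_estimators=200).fit(X_train, y_train)
    # labels = clf.predict(X_test)   # one label per supervoxel, propagated to its points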


Author(s): L. Gézero, C. Antunes

In the last few years, LiDAR sensors installed in terrestrial vehicles have proven to be an efficient means of collecting very dense 3D georeferenced information. The possibility of creating very dense point clouds representing the surface surrounding the sensor at a given moment, in a very fast, detailed and easy way, shows the potential of this technology for large-scale cartography and digital terrain model production. However, there are still some limitations associated with the use of this technology. When several acquisitions of the same area are made with the same device, differences between the clouds can be observed. These differences can range from a few centimetres to several tens of centimetres, mainly in urban and high-vegetation areas, where occlusion of the GNSS signal degrades the georeferenced trajectory. In this article, a different point cloud registration method is proposed. In addition to its efficiency and speed of execution, the main advantage of the method is that the adjustment is made continuously along the trajectory, based on GPS time. The process is fully automatic, and only information recorded in the standard LAS files is used, without the need for any auxiliary information, in particular regarding the trajectory.
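
As a rough illustration of a correction applied continuously along the trajectory (this only sketches the concept of time-indexed adjustment, not the authors' procedure; all names are assumptions), per-point offsets can be interpolated over GPS time from a sparse set of correction samples:

    import numpy as np

    def apply_time_corrections(points, gps_times, corr_times, corr_offsets):
        """points: (N, 3); gps_times: (N,); corr_times: (K,) sorted; corr_offsets: (K, 3)."""
        dx = np.interp(gps_times, corr_times, corr_offsets[:, 0])
        dy = np.interp(gps_times, corr_times, corr_offsets[:, 1])
        dz = np.interp(gps_times, corr_times, corr_offsets[:, 2])
        return points + np.stack([dx, dy, dz], axis=1)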


2021, Vol. 11 (10), pp. 4538
Author(s): Jinbo Liu, Pengyu Guo, Xiaoliang Sun

When measuring surface deformation, traditional point cloud registration methods cannot be applied, because the overlap between the point clouds before and after deformation is small and the accuracy of the initial value of the point cloud registration cannot be guaranteed. In order to solve this problem, a complete solution is proposed: first, at least three cones are fixed to the target; then, initial values of the transformation matrix are calculated from the cone vertices; on this basis, the point cloud registration can be performed accurately through the iterative closest point (ICP) algorithm using the point clouds neighboring the cone vertices. To improve the automation of this solution, an accurate and automatic point cloud registration method based on biological vision is proposed. First, the three-dimensional (3D) coordinates of the cone vertices are obtained through multi-view observation, feature detection, data fusion, and shape fitting. In the shape fitting, a closed-form solution for the cone vertices is derived on the basis of the quadratic form. Second, a random strategy is designed to calculate the initial values of the transformation matrix between the two point clouds. Then, combined with ICP, point cloud registration is realized automatically and precisely. The simulation results showed that, when the intensity of the Gaussian noise ranged from 0 to 1 mr (where mr denotes the average mesh resolution of the models), the rotation and translation errors of the point cloud registration were less than 0.1° and 1 mr, respectively. Lastly, a camera-projector system was developed to dynamically measure surface deformation during ablation tests in an arc-heated wind tunnel, and the experimental results showed that the measurement precision for surface deformation was better than 0.05 mm when the deformation was smaller than 4 mm.
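
For illustration, once three matched cone vertices are available in each cloud, an initial rigid transform can be composed from the orthonormal frames built on the two triplets and then handed to ICP; this is a generic construction, not the paper's quadratic-form derivation.

    import numpy as np

    def frame_from_triplet(p1, p2, p3):
        """Orthonormal frame spanned by three non-collinear points."""
        x = (p2 - p1) / np.linalg.norm(p2 - p1)
        z = np.cross(x, p3 - p1)
        z /= np.linalg.norm(z)
        return np.column_stack([x, np.cross(z, x), z])

    def initial_transform(vertices_a, vertices_b):
        """vertices_a, vertices_b: (3, 3) arrays of matched cone vertices (one per row)."""
        Ra, Rb = frame_from_triplet(*vertices_a), frame_from_triplet(*vertices_b)
        R = Rb @ Ra.T
        t = vertices_b[0] - R @ vertices_a[0]
        return R, t                              # used as the ICP initial guess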


Author(s): T. Sumi, H. Date, S. Kanai

In this paper, an efficient and robust registration method for multiple point clouds is proposed. In our research, we assume that the point clouds are acquired by Terrestrial Laser Scanning (TLS) systems and that the scanned environments have a relatively flat base plane such as the ground or a floor. Our method builds on an existing pairwise registration method using point projection images, which can quickly register point clouds under the above assumptions. In that method, sliced point clouds are projected onto the base plane, and a binary image with feature points is created. The registration is done by using the feature points of the images based on a sample consensus strategy. In this paper, we first improve the efficiency of the pairwise registration method by introducing height and occlusion information into the image. Then, a validity check method for pairwise registration using space-classified images is proposed to avoid exhaustive pairwise registration in the multiple point cloud registration process. Finally, an efficient multiple point cloud registration algorithm is proposed, based on the progressive creation of a point cloud connectivity graph using iterative rough and precise pairwise registration together with the validity check method. The effectiveness of our method is shown through its application to three datasets of outdoor environments.
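
A hedged sketch of the projection-image idea (grid size and slice limits are assumptions): slice the TLS cloud above the base plane, rasterize it onto a 2D grid, and keep the maximum height per cell so the image carries height information as well as occupancy.

    import numpy as np

    def projection_image(points, cell=0.05, z_min=0.5, z_max=2.0):
        """points: (N, 3) with z up; returns a per-cell maximum-height image."""
        sl = points[(points[:, 2] > z_min) & (points[:, 2] < z_max)]
        ij = np.floor((sl[:, :2] - sl[:, :2].min(0)) / cell).astype(int)
        img = np.zeros(ij.max(0) + 1)
        np.maximum.at(img, (ij[:, 0], ij[:, 1]), sl[:, 2])   # keep the tallest return per cell
        return img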

