Multi-sensor point cloud data fusion for precise 3D mapping

Author(s):  
Mohamed Abdelazeem ◽  
Ahmed Elamin ◽  
Akram Afifi ◽  
Ahmed El-Rabbany
2020 ◽  
Vol 213 ◽  
pp. 03025
Author(s):  
Yan Wang ◽  
Tingting Zhang ◽  
Jingyi Wang

Three-dimensional point cloud data are a new form of three-dimensional data acquisition that captures the geometric and topological information of an object while remaining simple and flexible to handle. In this paper, air-ground multi-source data fusion is used to study the fine reconstruction of 3D scenes. Terrestrial 3D laser scanning provides the 3D spatial information of ground-visible objects and can rapidly capture the surface geometry of building facades, the ground, and trees, while orthophotos acquired by UAV aerial photography supply the 3D spatial information of the tops of ground features. Because of the complex structure of buildings and occlusion among spatial objects, multi-station scanning is required when acquiring the point cloud data. The Sino-German Energy Conservation Center Building of Shenyang Jianzhu University is used as the study area, and UAV oblique photography is integrated with terrestrial LiDAR. In the field campaign, the UAV image acquisition strategy of "automatic capture along regular flight routes, supplemented by manual capture of areas of interest" was adopted; in data processing, registration followed the scheme of "manual coarse registration followed by fine registration with the ICP algorithm". The example results show that the proposed fusion of the terrestrial 3D laser point cloud with air-ground imagery yields a better 3D model of substantially higher quality, compensating for the large number of holes caused by occlusion and the missing rooftop information in models built from terrestrial 3D laser scanning alone.
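As a hedged illustration of the "manual coarse registration followed by ICP fine registration" step described above, the following Python sketch uses the open-source Open3D library to refine an assumed coarse transform between a terrestrial laser scan and a UAV-derived point cloud; the file names, voxel size, and thresholds are illustrative assumptions rather than values from the paper.

```python
# Minimal sketch: coarse initial transform + ICP fine registration with Open3D.
# Paths and parameters are hypothetical.
import numpy as np
import open3d as o3d

# Terrestrial laser scan (target) and UAV-derived photogrammetric cloud (source).
target = o3d.io.read_point_cloud("terrestrial_scan.pcd")   # hypothetical path
source = o3d.io.read_point_cloud("uav_dense_cloud.pcd")    # hypothetical path

# Coarse alignment: in practice obtained manually (e.g., from picked point pairs);
# represented here by an assumed 4x4 rigid transform.
coarse_T = np.eye(4)

# Downsample both clouds to a common resolution before fine registration.
voxel = 0.05  # metres, illustrative
src = source.voxel_down_sample(voxel)
tgt = target.voxel_down_sample(voxel)

# Point-to-point ICP refinement starting from the coarse transform.
result = o3d.pipelines.registration.registration_icp(
    src, tgt,
    max_correspondence_distance=3 * voxel,
    init=coarse_T,
    estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint(),
    criteria=o3d.pipelines.registration.ICPConvergenceCriteria(max_iteration=100),
)

print("fitness:", result.fitness, "inlier RMSE:", result.inlier_rmse)
source.transform(result.transformation)  # source is now expressed in the target frame
```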


Author(s):  
Xiao Zhang ◽  
Vignesh Suresh ◽  
Yi Zheng ◽  
Shaodong Wang ◽  
Qing Li ◽  
...  

Surface roughness is a significant parameter when evaluating the quality of products in the additive manufacturing (AM) industry. AM parts are fabricated layer by layer, which is quite different from traditional formative or subtractive methods. A uniform feature can be observed along the direction of printhead movement on the surface of manufactured components, while a large waviness appears in the direction perpendicular to printhead movement. This characteristic differentiates additively manufactured parts from cast or machined parts in how surface roughness is measured and defined. Therefore, it is necessary to set up new standards to measure the surface roughness of AM parts and analyze the variation in the topographical profile. The most widely used instruments for measuring surface roughness are the profilometer and the laser scanner, but they cannot generate 3D topographical surfaces in real time. In this work, two non-contact optical methods based on Focus Variation Microscopy (FVM) and a Structured Light System (SLS) were adopted to measure the surface topography of the target components. The FVM captures images of objects at different focus levels; by translating the object's position and evaluating the focus profile, a 3D image is obtained through data fusion. The lab-made microscopic SLS was used to perform simultaneous whole-surface scanning, with the potential to achieve real-time 3D surface reconstruction. The two optical metrology systems generated two entirely different point cloud data sets, and limited research has been conducted to verify whether point cloud data generated by different optical systems follow the same distribution. In this paper, a statistical method was applied to test the difference between the two systems. Using data analytics for the comparison, it was found that surface roughness based on the FVM and SLS systems shows no significant difference from a data fusion point of view, even though the point cloud data they generated differ completely in values. In addition, this paper provides a standard measurement approach for a real-time, non-contact method to estimate the surface roughness of AM parts. The two metrology techniques can be applied to in-situ real-time surface analysis and process planning for AM.
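As an illustration of the statistical comparison described above, the sketch below computes a simple per-patch roughness statistic from two point cloud height fields and applies a two-sample test; the roughness definition (arithmetic mean deviation, Ra), the use of Welch's t-test, and the placeholder data are assumptions for illustration and may differ from the paper's exact procedure.

```python
# Minimal sketch: compare surface-roughness estimates from two measurement
# systems (here labelled FVM and SLS) with a two-sample significance test.
import numpy as np
from scipy import stats

def ra_from_heights(z: np.ndarray) -> float:
    """Arithmetic mean roughness Ra of a height patch (deviation from its mean plane)."""
    return float(np.mean(np.abs(z - z.mean())))

def patch_roughness(points: np.ndarray, n_patches: int = 20) -> np.ndarray:
    """Split a point cloud's z-values into patches and return one Ra per patch."""
    z = points[:, 2]
    return np.array([ra_from_heights(chunk) for chunk in np.array_split(z, n_patches)])

# fvm_xyz and sls_xyz would be (N, 3) arrays from the two optical systems;
# synthetic placeholder data are used here.
rng = np.random.default_rng(0)
fvm_xyz = rng.normal(scale=[1.0, 1.0, 0.01], size=(5000, 3))
sls_xyz = rng.normal(scale=[1.0, 1.0, 0.01], size=(8000, 3))

ra_fvm = patch_roughness(fvm_xyz)
ra_sls = patch_roughness(sls_xyz)

# Welch's t-test on the per-patch roughness values.
t_stat, p_value = stats.ttest_ind(ra_fvm, ra_sls, equal_var=False)
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")
# A large p-value is consistent with the paper's finding of no significant
# difference between the two systems' roughness estimates.
```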


2018 ◽  
Vol 8 ◽  
Author(s):  
Tyson L. Swetnam ◽  
Jeffrey K. Gillan ◽  
Temuulen T. Sankey ◽  
Mitchel P. McClaran ◽  
Mary H. Nichols ◽  
...  

Sensors ◽  
2021 ◽  
Vol 21 (16) ◽  
pp. 5441
Author(s):  
Li Zheng ◽  
Zhukun Li

There are many sources of point cloud data, such as the point cloud model obtained after bundle adjustment of aerial images, point clouds acquired by vehicle-borne light detection and ranging (LiDAR), and point clouds acquired by terrestrial laser scanning. Different sensors require different processing methods and have their own advantages and disadvantages in terms of accuracy, range, and point cloud magnitude. Point cloud fusion can combine the advantages of each point cloud to generate a point cloud with higher accuracy. Building on the classic Iterative Closest Point (ICP) algorithm, a multi-source point cloud fusion method based on virtual corresponding points derived from Fast Point Feature Histogram (FPFH) feature differences is proposed. For multi-source point clouds with noise, different sampling resolutions, and local distortion, it achieves a better registration result and improves the accuracy of the low-precision point cloud. To find corresponding point pairs for the ICP algorithm, the FPFH feature difference, which incorporates neighborhood information and is robust to noise, is used to generate virtual corresponding points from which the point pairs for registration are obtained. Specifically, voxels are established and, according to the F2 distance between the FPFH features of the target point cloud and the source point cloud, a convolutional neural network outputs a virtual corresponding point closer to the true correspondence, thereby achieving multi-source point cloud registration. Compared with the ICP strategy of selecting corresponding points only from existing points, this approach is more reasonable and more accurate, and it can correct a low-precision point cloud in detail. The experimental results show that the accuracy of the proposed method is equivalent to that of the best competing algorithm on clean point clouds and on point clouds of different resolutions. When the point cloud contains noise and distortion, the proposed method outperforms the other algorithms; for low-precision point clouds, it matches the target point cloud better in detail, with better stability and robustness.
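For context, the sketch below shows the classical FPFH-plus-RANSAC registration baseline (refined by ICP) that the proposed virtual corresponding point method builds on, implemented with Open3D; it does not reproduce the paper's CNN-generated virtual points, and the file names and parameters are assumptions.

```python
# Minimal sketch: FPFH descriptors + RANSAC feature matching for global
# alignment, followed by point-to-plane ICP refinement (Open3D).
import open3d as o3d

def preprocess(pcd, voxel):
    """Downsample, estimate normals, and compute FPFH features."""
    down = pcd.voxel_down_sample(voxel)
    down.estimate_normals(
        o3d.geometry.KDTreeSearchParamHybrid(radius=2 * voxel, max_nn=30))
    fpfh = o3d.pipelines.registration.compute_fpfh_feature(
        down, o3d.geometry.KDTreeSearchParamHybrid(radius=5 * voxel, max_nn=100))
    return down, fpfh

voxel = 0.05  # illustrative resolution
source = o3d.io.read_point_cloud("source_cloud.pcd")  # hypothetical path
target = o3d.io.read_point_cloud("target_cloud.pcd")  # hypothetical path
src_down, src_fpfh = preprocess(source, voxel)
tgt_down, tgt_fpfh = preprocess(target, voxel)

# Global registration from FPFH correspondences (RANSAC).
ransac = o3d.pipelines.registration.registration_ransac_based_on_feature_matching(
    src_down, tgt_down, src_fpfh, tgt_fpfh, True, 3 * voxel,
    o3d.pipelines.registration.TransformationEstimationPointToPoint(False), 3,
    [o3d.pipelines.registration.CorrespondenceCheckerBasedOnDistance(3 * voxel)],
    o3d.pipelines.registration.RANSACConvergenceCriteria(100000, 0.999))

# Fine registration with point-to-plane ICP starting from the RANSAC result.
icp = o3d.pipelines.registration.registration_icp(
    src_down, tgt_down, voxel, ransac.transformation,
    o3d.pipelines.registration.TransformationEstimationPointToPlane())
print(icp.transformation)
```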


Sensors ◽  
2019 ◽  
Vol 19 (20) ◽  
pp. 4486 ◽  
Author(s):  
Miao Gong ◽  
Zhijiang Zhang ◽  
Dan Zeng ◽  
Tao Peng

Multisensor systems can overcome the limited measurement range of single-sensor systems, but they often require complex calibration and data fusion. In this study, a three-dimensional (3D) measurement method for four-view stereo vision based on Gaussian process (GP) regression is proposed. Two sets of point cloud data of the measured object are obtained by the gray-code phase-shifting technique. Based on the characteristics of the measured object, specific composite kernel functions are designed to obtain an initial GP model. To account for the different noise levels in each group of point cloud data, weights are introduced to optimize the GP model, which amounts to Bayesian-inference-based data fusion of the point cloud data. The proposed method does not require strict hardware constraints. Simulations of a curve and a high-order surface, together with experiments on complex 3D objects, were designed to compare the reconstruction accuracy of the proposed method with that of traditional methods. The results show that the proposed method is superior to the traditional methods in measurement accuracy and reconstruction quality.
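As a hedged analogue of the weighted GP fusion described above, the sketch below fits a scikit-learn Gaussian process with a composite kernel to height samples from two simulated views, giving the noisier view a larger per-point noise variance; the kernel choices, noise levels, and synthetic surface are assumptions and not the authors' implementation.

```python
# Minimal sketch: fuse two noisy samplings of the same surface with GP
# regression, weighting each view by its noise variance (scikit-learn).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel, WhiteKernel

rng = np.random.default_rng(1)

# True surface z = f(x, y) sampled by two "views" with different noise levels.
def f(xy):
    return np.sin(xy[:, 0]) * np.cos(xy[:, 1])

xy1 = rng.uniform(0, 3, size=(200, 2))  # view 1: low-noise sensor
xy2 = rng.uniform(0, 3, size=(200, 2))  # view 2: noisier sensor
z1 = f(xy1) + rng.normal(0, 0.01, len(xy1))
z2 = f(xy2) + rng.normal(0, 0.05, len(xy2))

# Stack both views; per-point noise variances act as the fusion weights
# (noisier points are trusted less).
X = np.vstack([xy1, xy2])
z = np.concatenate([z1, z2])
alpha = np.concatenate([np.full(len(z1), 0.01**2), np.full(len(z2), 0.05**2)])

# Composite kernel: smooth surface term plus a learned white-noise floor.
kernel = ConstantKernel(1.0) * RBF(length_scale=1.0) + WhiteKernel(1e-4)
gp = GaussianProcessRegressor(kernel=kernel, alpha=alpha, normalize_y=True)
gp.fit(X, z)

# Reconstruct the fused surface on a grid and report predictive uncertainty.
gx, gy = np.meshgrid(np.linspace(0, 3, 50), np.linspace(0, 3, 50))
grid = np.column_stack([gx.ravel(), gy.ravel()])
z_pred, z_std = gp.predict(grid, return_std=True)
print("max predictive std:", z_std.max())
```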

