Rigorous Error Modeling for sUAS Acquired Image-Derived Point Clouds
2019 ◽ Vol 57 (8) ◽ pp. 6240-6253
Author(s): Craig A. Rodarmel, Mark P. Lee, Katherine L. Brodie, Nicholas J. Spore, Brittany Bruder


2021 ◽ Vol 13 (16) ◽ pp. 3269
Author(s): Reza Maalek, Derek D. Lichti

The projective transformation of a sphere onto an image produces an ellipse whose center does not coincide with the projected center of the sphere. The result is an eccentricity error, which must be treated in high-precision metrology. This article provides closed-form expressions for modeling this error in images to enable three-dimensional (3D) reconstruction of the centers of spherical objects. The article also provides a new direct robust method for detecting spherical patterns in point clouds. It was shown that the eccentricity error in an image has only one component, in the direction of the major axis of the ellipse. It was also revealed that the eccentricity is zero if and only if the center of the projected sphere lies on the camera’s perspective center. The effectiveness of the robust sphere detection and the eccentricity error modeling methods was evaluated on simulated point clouds of spheres and real-world images, respectively. The proposed robust sphere fitting method outperformed the popular M-estimator sample consensus (MSAC) in radius and center estimation accuracy by average factors of 13 and 14, respectively. Using the proposed eccentricity adjustment, the estimated 3D center of the sphere was superior to the unmodeled case. The accuracy of the estimated 3D center with modeled eccentricity also improved continuously as the number of images increased, whereas the unmodeled case showed no improvement beyond eight image views. The results of the investigation show that: (i) the proposed method effectively modeled the eccentricity error, and (ii) the benefit of eliminating the eccentricity error in the 3D reconstruction becomes even more pronounced as the number of image views grows.
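The paper's closed-form eccentricity model and direct robust detector are not reproduced here. As a rough point of reference for the comparison against MSAC, the sketch below implements a generic RANSAC-style sphere detector with an algebraic least-squares fit; the function names, iteration count, and inlier tolerance `tol` are illustrative assumptions, not the authors' method.

```python
import numpy as np

def fit_sphere_linear(pts):
    """Algebraic sphere fit: solve 2*p.c + d = |p|^2 for center c,
    where d = r^2 - |c|^2, then recover the radius r."""
    A = np.hstack([2.0 * pts, np.ones((len(pts), 1))])
    b = np.sum(pts ** 2, axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    c, d = sol[:3], sol[3]
    r2 = d + c @ c
    return (c, float(np.sqrt(r2))) if r2 > 0 else None

def ransac_sphere(pts, n_iter=500, tol=0.005, seed=None):
    """Detect a sphere in an (N, 3) point cloud by repeatedly fitting
    minimal 4-point samples and keeping the model with the most inliers."""
    rng = np.random.default_rng(seed)
    best_model, best_inliers = None, np.zeros(len(pts), dtype=bool)
    for _ in range(n_iter):
        sample = pts[rng.choice(len(pts), size=4, replace=False)]
        model = fit_sphere_linear(sample)
        if model is None:
            continue
        c, r = model
        inliers = np.abs(np.linalg.norm(pts - c, axis=1) - r) < tol
        if inliers.sum() > best_inliers.sum():
            best_model, best_inliers = model, inliers
    if best_inliers.sum() >= 4:  # refine on the full inlier set
        best_model = fit_sphere_linear(pts[best_inliers])
    return best_model, best_inliers
```

MSAC, the baseline in the paper, differs from this plain consensus loop mainly in its scoring, accumulating truncated residuals rather than counting inliers.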


Author(s): Jiayong Yu, Longchen Ma, Maoyi Tian, Xiushan Lu

The unmanned aerial vehicle (UAV)-mounted mobile LiDAR system (ULS) is widely used in geomatics owing to its efficient data acquisition and convenient operation. However, because of the limited payload capacity of a UAV, the sensors integrated into a ULS must be small and lightweight, which reduces the density of the collected scanning points and, in turn, hampers registration between image data and point cloud data. To address this issue, the authors propose a method for registering and fusing ULS sequence images with laser point clouds that converts the problem of registering point cloud data to image data into one of matching feature points between two images. First, an intensity image is generated from the point cloud. Next, corresponding feature points in the intensity image and the optical image are matched, and the exterior orientation parameters are solved with the collinearity equations, based on the image position and orientation. Finally, the sequence images are fused with the laser point cloud, using the Global Navigation Satellite System (GNSS) time index of the optical images, to generate a true-color point cloud. The experimental results show the higher registration accuracy and faster fusion speed of the proposed method, demonstrating its effectiveness.
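No implementation accompanies the abstract; the sketch below illustrates the central step, matching feature points between the LiDAR intensity image and the optical image, using standard OpenCV calls. SIFT features, the ratio-test threshold, and the RANSAC homography used for outlier rejection are assumptions for illustration, and the collinearity-equation solve for the exterior orientation parameters is only indicated by a comment.

```python
import cv2
import numpy as np

def match_intensity_to_optical(intensity_img, optical_img, ratio=0.75):
    """Match feature points between a LiDAR intensity image and an
    optical image (both single-channel uint8 arrays)."""
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(intensity_img, None)
    kp2, des2 = sift.detectAndCompute(optical_img, None)

    # Lowe's ratio test on 2-nearest-neighbour matches.
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    good = []
    for pair in matcher.knnMatch(des1, des2, k=2):
        if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance:
            good.append(pair[0])

    src = np.float32([kp1[m.queryIdx].pt for m in good])
    dst = np.float32([kp2[m.trainIdx].pt for m in good])

    # Reject gross mismatches with a RANSAC homography; the surviving
    # correspondences would feed the collinearity-equation solution for
    # the exterior orientation parameters (not shown).
    _, mask = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    keep = mask.ravel().astype(bool)
    return src[keep], dst[keep]
```

In the pipeline described above, the matched correspondences feed the collinearity equations, and the GNSS time index then pairs each oriented optical image with the scan points so the cloud can be colorized.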


2020 ◽ Vol 28 (10) ◽ pp. 2301-2310
Author(s): Chun-kang ZHANG, Hong-mei LI, Xia ZHANG
