An approach using sparse bundle adjustment for system calibration of a fringe-projection 3D profile sensor

Author(s):
Jian Luo
Jiahu Yuan
Jinling Chen
Ai Xiong

Sensors, 2018, Vol. 18 (11), pp. 3949
Author(s):
Wei Li
Mingli Dong
Naiguang Lu
Xiaoping Lou
Peng Sun

An extended robot–world and hand–eye calibration method is proposed in this paper to estimate the transformation relationship between the camera and the robot. The approach is suited to mobile or medical robotics applications, where precise, expensive, or unsterile calibration objects, or sufficient movement space, cannot be made available at the work site. First, a mathematical model is established that formulates the robot-gripper-to-camera and robot-base-to-world rigid transformations using the Kronecker product. Subsequently, sparse bundle adjustment is introduced to optimize the robot–world and hand–eye calibration as well as the reconstruction results. Finally, a validation experiment on two kinds of real data sets demonstrates the effectiveness and accuracy of the proposed approach. The relative translation error of the rigid transformation is less than 8/10,000 with a Denso robot over a movement range of 1.3 m × 1.3 m × 1.2 m, and the mean distance-measurement error after three-dimensional reconstruction is 0.13 mm.
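The Kronecker-product formulation mentioned in the abstract can be illustrated with a short sketch. For pose pairs (A_i, B_i), the robot–world/hand–eye problem A_i X = Z B_i gives, for the rotation part, R_Ai R_X = R_Z R_Bi; applying vec(·) turns this into a homogeneous linear system in the entries of R_X and R_Z. The sketch below is our own illustration under that standard formulation (function names and the projection step are ours, not the paper's); the paper's actual pipeline also solves the translations and refines everything with sparse bundle adjustment:

```python
import numpy as np

def rodrigues(axis, angle):
    """Rotation matrix from axis-angle (Rodrigues' formula)."""
    a = np.asarray(axis, float)
    a = a / np.linalg.norm(a)
    K = np.array([[0, -a[2], a[1]], [a[2], 0, -a[0]], [-a[1], a[0], 0]])
    return np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * (K @ K)

def solve_rotations(RA_list, RB_list):
    """Linear rotation estimate for A X = Z B via the Kronecker product.

    Stacks vec(RA_i RX) - vec(RZ RB_i) = 0 as a homogeneous system in
    (vec RX, vec RZ) and extracts the SVD null-space vector.
    """
    I3 = np.eye(3)
    M = np.vstack([np.hstack([np.kron(I3, RA), -np.kron(RB.T, I3)])
                   for RA, RB in zip(RA_list, RB_list)])
    v = np.linalg.svd(M)[2][-1]              # null-space direction

    def to_rotation(vec9):
        R = vec9.reshape(3, 3, order="F")    # column-major vec convention
        R = R * np.sign(np.linalg.det(R))    # resolve the global sign
        U, _, Wt = np.linalg.svd(R)          # project onto SO(3)
        return U @ Wt

    return to_rotation(v[:9]), to_rotation(v[9:])
```

With the rotations fixed, the translations follow from the linear system R_Ai t_X + t_Ai = R_Z t_Bi + t_Z, after which a nonlinear refinement such as the paper's sparse bundle adjustment can polish all parameters jointly.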


2020, Vol. 12 (3), pp. 351
Author(s):
Seyyed Meghdad Hasheminasab
Tian Zhou
Ayman Habib

Imagery acquired by unmanned aerial vehicles (UAVs) has been widely used for three-dimensional (3D) reconstruction and modeling in various digital agriculture applications, such as phenotyping, crop monitoring, and yield prediction. 3D reconstruction from well-textured UAV images has matured, and the user community has access to several commercial and open-source tools that provide accurate products at a high level of automation. In some applications, however, such as digital agriculture, repetitive image patterns prevent these approaches from always producing reliable or complete products. Their main limitation is an inability to establish a sufficient number of correctly matched features among overlapping images, which causes incomplete and/or inaccurate 3D reconstruction. This paper presents two structure from motion (SfM) strategies that use trajectory information provided by an onboard survey-grade global navigation satellite system/inertial navigation system (GNSS/INS) together with system calibration parameters. The main difference between them is that the first, denoted partially GNSS/INS-assisted SfM, implements the four stages of an automated triangulation procedure, namely image matching, relative orientation parameter (ROP) estimation, exterior orientation parameter (EOP) recovery, and bundle adjustment (BA). The second, denoted fully GNSS/INS-assisted SfM, removes the EOP estimation step and introduces a random sample consensus (RANSAC)-based strategy for removing matching outliers before the BA stage. Both strategies modify image matching by restricting the search space for conjugate points, implement a linear procedure for refining the ROPs, and use the GNSS/INS information in modified collinearity equations for a simpler BA procedure that can also refine the system calibration parameters. Eight datasets over six agricultural fields are used to evaluate the performance of the developed strategies. In comparison with a traditional SfM framework and Pix4D Mapper Pro, the proposed strategies generate denser and more accurate 3D point clouds, as well as orthophotos without gaps.
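The RANSAC-based outlier removal step can be sketched generically. In the paper, the model being checked is consistency with GNSS/INS-derived geometry; the stand-alone illustration below substitutes a deliberately simple pure-translation motion model (minimal sample: one match), so the function name, threshold, and model are our simplification, not the paper's actual check:

```python
import numpy as np

def ransac_inliers(pts1, pts2, thresh=3.0, iters=200, seed=0):
    """Generic RANSAC: keep the matches consistent with a consensus model.

    The model here is a pure 2-D image-to-image translation, a simple
    stand-in for the geometric consistency test applied before bundle
    adjustment; real pipelines test epipolar consistency instead.
    """
    rng = np.random.default_rng(seed)
    best = np.zeros(len(pts1), dtype=bool)
    for _ in range(iters):
        i = rng.integers(len(pts1))
        t = pts2[i] - pts1[i]                    # hypothesize a shift
        resid = np.linalg.norm(pts2 - pts1 - t, axis=1)
        inliers = resid < thresh                 # score the hypothesis
        if inliers.sum() > best.sum():
            best = inliers                       # keep the best consensus
    return best
```

Only matches flagged as inliers would then be passed on to the bundle adjustment stage.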


Author(s):
Kai Cordes
Mark Hockner
Hanno Ackermann
Bodo Rosenhahn
Jörn Ostermann

Author(s):
W. Choi
C. Kim
Y. Kim

Interest in 3D indoor modeling and positioning has been growing recently. Fusing data from different sensors is one way to produce a 3D model, and precise system calibration between the two kinds of sensors is essential for such fusion. If the relative geometric position of each sensor can be measured accurately through system calibration, a pixel corresponding to the same object can be located in two different images, yielding a more precise data fusion. The purpose of this study is to find a more efficient method of system calibration between an optical sensor and a range sensor. To this end, an experiment was designed around the following variables: i) the system calibration method, ii) the test-bed type, and iii) whether distance data are used. A test-bed for system calibration was designed by considering the characteristics of the sensors, and a precise simulation was carried out to find an efficient calibration method, whose results informed the real experiment. The simulation results show that bundle adjustment is more efficient than single-photo resection for system calibration between range and optical sensors; the most efficient case used i) bundle adjustment with ii) a simulated data set acquired between 2 m and 4 m from the test-bed. These findings were applied to the real system calibration, which was then performed and compared with the simulation results. Finally, the accuracy of the system calibration was evaluated by producing fused data from the range and optical sensors.
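Both single-photo resection and bundle adjustment minimize the same reprojection residual, derived from the collinearity condition: an object point, the camera position, and its image point must lie on one ray. The minimal sketch below uses a generic pinhole convention of our own choosing (function names and parameters are illustrative, not the study's); resection minimizes these residuals over one image's pose, while bundle adjustment minimizes them jointly over all images and points:

```python
import numpy as np

def project(f, R, C, X):
    """Collinearity projection: image coordinates of object point X seen by
    a camera at position C with orientation R and principal distance f."""
    p = R @ (np.asarray(X, float) - C)       # point in the camera frame
    return f * p[:2] / p[2]                  # perspective division

def reprojection_residuals(f, R, C, points3d, observations):
    """Stacked image residuals minimized by resection (one image) and,
    over all images jointly, by bundle adjustment."""
    return np.concatenate([project(f, R, C, X) - obs
                           for X, obs in zip(points3d, observations)])
```

In the bundle-adjustment setting described above, the range-sensor distance data would enter as additional observations constraining the same unknowns.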

