Efficient and Accurate Camera Calibration Based on Planar Pattern

2011 ◽  
Vol 204-210 ◽  
pp. 1258-1261
Author(s):  
Dan Xia ◽  
De Hua Li ◽  
Sheng Yong Xu

We describe an effective method for calibrating cameras using planar calibration patterns. The control points of the calibration pattern are localized by a Harris detector combined with a gradient histogram. The improved localization accuracy of the control points in turn improves the accuracy of the camera calibration. Additionally, an optimization step is carried out to further refine the calibration results. Experiments on real images verify the effectiveness and robustness of the proposed method.
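The abstract gives no implementation details beyond the Harris detector; as a rough sketch (omitting the authors' gradient-histogram refinement, which is their contribution), a plain-NumPy Harris response over a synthetic checkerboard corner might look like this:

```python
import numpy as np

def harris_response(img, k=0.04, win=3):
    """Harris corner measure det(M) - k*trace(M)^2 over a win x win window."""
    # image gradients via central differences (axis 0 = rows, axis 1 = cols)
    gy, gx = np.gradient(img.astype(float))
    Ixx, Iyy, Ixy = gx * gx, gy * gy, gx * gy

    def box(a):
        # box-filter (windowed sum) with edge padding
        pad = win // 2
        ap = np.pad(a, pad, mode="edge")
        out = np.zeros_like(a)
        for dy in range(win):
            for dx in range(win):
                out += ap[dy:dy + a.shape[0], dx:dx + a.shape[1]]
        return out

    Sxx, Syy, Sxy = box(Ixx), box(Iyy), box(Ixy)
    return Sxx * Syy - Sxy ** 2 - k * (Sxx + Syy) ** 2

# synthetic checkerboard junction at the centre of a 20x20 patch
img = np.zeros((20, 20))
img[:10, :10] = 1.0
img[10:, 10:] = 1.0
R = harris_response(img)
peak = np.unravel_index(np.argmax(R), R.shape)  # strongest corner response
```

On this patch the response is positive only at the checkerboard junction (edges give a non-positive measure), so the peak lands at the corner.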

2017 ◽  
Vol 84 (7-8) ◽  
Author(s):  
Hendrik Schilling ◽  
Maximilian Diebold ◽  
Marcel Gutsche ◽  
Bernd Jähne

Abstract. Camera calibration, crucial for computer vision tasks, often relies on planar calibration targets to estimate the camera parameters. This work describes the design of a planar, fractal, self-identifying calibration pattern that provides a high density of calibration points over a large range of magnification factors. An evaluation on ground-truth data shows that the target delivers very high accuracy under a wide range of conditions.


2008 ◽  
Vol 05 (01) ◽  
pp. 41-50 ◽  
Author(s):  
Zhigang Zheng ◽  
Zhengjun Zha ◽  
Long Han ◽  
Zengfu Wang

This paper addresses the problem of fast, highly accurate, reliable, and fully automatic camera calibration. Our objective is to build a fully automatic system that provides a robust and highly accurate calibration scheme. A checkerboard pattern is used as the calibration pattern. After the corner points in the image are detected, an improved Delaunay-triangulation-based algorithm establishes correspondences between the corner points in the image and the corner points on the checkerboard in 3D space. To determine the precise positions of the actual corner points, a global curve-fitting algorithm based on geometric constraints has been developed. Experimental results show that the geometric-constraint-based method remarkably improves the performance of both feature detection and camera calibration.
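The abstract does not specify the geometric-constraint-based curve fitting; a much-simplified sketch of the underlying idea — corners of one checkerboard row must lie on a common curve (here reduced to a straight line), so noisy detections can be refined by fitting that line globally and projecting each detection onto it — might look like this, with all coordinates synthetic:

```python
import numpy as np

# noisy detections of corners that truly lie on one checkerboard row (a line)
rng = np.random.default_rng(0)
true_x = np.arange(8, dtype=float) * 10.0
true_y = 0.5 * true_x + 3.0                      # ground-truth line y = 0.5x + 3
pts = np.stack([true_x, true_y + rng.normal(0, 0.5, 8)], axis=1)

# fit y = a*x + b by least squares over all corners in the row (global constraint)
A = np.stack([pts[:, 0], np.ones(len(pts))], axis=1)
(a, b), *_ = np.linalg.lstsq(A, pts[:, 1], rcond=None)

# project each noisy corner onto the fitted line to refine its position
d = np.array([1.0, a]) / np.hypot(1.0, a)        # unit direction of the line
p0 = np.array([0.0, b])                          # a point on the line
refined = p0 + ((pts - p0) @ d)[:, None] * d
```

After projection the refined corners satisfy the row constraint exactly, which is the sense in which a global fit regularizes individual noisy detections.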


2013 ◽  
Vol 427-429 ◽  
pp. 1939-1943 ◽  
Author(s):  
Qian Bian ◽  
Sui Yang Chen ◽  
Yang Chuan Liu

During camera calibration, the image of the calibration pattern is often skewed, which makes feature-point sorting difficult and degrades calibration accuracy. In this study, a rotation-based sorting method is proposed. First, the skew angle is detected accurately; next, the original coordinates are transformed into rotated coordinates and a mapping between the two is established; the rotated coordinates are then sorted; finally, the original coordinates are sorted using the mapping. An experiment carried out to verify the feasibility of this method shows that the rotation-based sorting method sorts the feature points accurately at different skew angles, and its accuracy makes it suitable for highly accurate camera calibration.
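The sorting steps described above can be sketched in NumPy, assuming the skew angle has already been detected; the 3×4 grid and the row pitch used for quantisation are illustrative values, not from the paper:

```python
import numpy as np

def rotation_sort(pts, angle_deg, row_pitch):
    """Sort grid points row-major by undoing a known skew angle."""
    # rotate coordinates by -angle so the grid becomes axis-aligned
    t = np.deg2rad(angle_deg)
    R = np.array([[np.cos(t),  np.sin(t)],
                  [-np.sin(t), np.cos(t)]])       # rotation by -angle
    rot = pts @ R.T
    # quantise rotated y into rows, then order by (row, x);
    # the argsort indices are the mapping back to the original coordinates
    rows = np.round(rot[:, 1] / row_pitch)
    order = np.lexsort((rot[:, 0], rows))
    return pts[order], order

# synthetic 3x4 grid (spacing 10), skewed by 10 degrees, then shuffled
grid = np.array([[x, y] for y in range(3) for x in range(4)], dtype=float) * 10.0
t = np.deg2rad(10.0)
Rf = np.array([[np.cos(t), -np.sin(t)],
               [np.sin(t),  np.cos(t)]])
skewed = grid @ Rf.T
perm = np.random.default_rng(1).permutation(len(skewed))
sorted_pts, order = rotation_sort(skewed[perm], 10.0, row_pitch=5.0)
```

Because the sorting happens in the de-skewed frame but the returned points keep their original coordinates, the method recovers the row-major order of the grid regardless of the skew angle.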


Author(s):  
M. Dahaghin ◽  
F. Samadzadegan ◽  
F. Dadras Javan

Abstract. Thermography is a robust method for detecting thermal irregularities on building roofs, one of the main sources of energy dissipation. Recently, UAVs have proved useful for gathering 3D thermal data of building roofs. A challenge here is the low spatial resolution of thermal imagery, which leads to sparse point clouds. This paper proposes fusing visible and thermal point clouds to generate a high-resolution thermal point cloud of building roofs. To this end, camera calibration is performed to obtain the interior orientation parameters, and thermal and visible point clouds are generated. In the next step, both point clouds are geo-referenced using control points. To extract building roofs from the visible point cloud, CSF ground filtering is applied, and the vegetation layer is removed using the RGBVI index. Afterward, a predefined threshold is applied to the z-component of the normal vectors to separate roof facets from walls. Finally, the visible point cloud of the building roofs and the registered thermal point cloud are combined into a fused dense point cloud. Results show a mean re-projection error of 0.31 pixels for the thermal camera calibration and a mean absolute distance of 0.2 m for the point cloud registration. The final product is a fused point cloud whose density is up to twice that of the initial thermal point cloud and which combines the spatial accuracy of the visible point cloud with the thermal information of the building roofs.
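The roof/wall separation step — thresholding the z-component of the point normals — can be illustrated with a minimal NumPy sketch; the 45° threshold and the example normals are hypothetical, not values from the paper:

```python
import numpy as np

# hypothetical per-point unit normals: roof facets point (near-)vertically,
# wall facets point (near-)horizontally
normals = np.array([
    [0.0, 0.0, 1.0],      # flat roof
    [0.3, 0.0, 0.954],    # gently sloped roof facet
    [1.0, 0.0, 0.0],      # vertical wall
    [0.0, 0.996, 0.087],  # near-vertical wall
])

# keep points whose normal is within 45 degrees of vertical
z_thresh = np.cos(np.deg2rad(45.0))
is_roof = np.abs(normals[:, 2]) > z_thresh
```

The absolute value makes the test orientation-independent, so normals flipped downward by inconsistent normal estimation are still classified as roof.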

