Deep Single Fisheye Image Camera Calibration for Over 180-degree Projection of Field of View

Author(s):  
Nobuhiko Wakai ◽  
Takayoshi Yamashita


Sensors ◽  
2021 ◽  
Vol 21 (4) ◽  
pp. 1091 ◽  
Author(s):  
Izaak Van Crombrugge ◽  
Rudi Penne ◽  
Steve Vanlanduit

Knowledge of precise camera poses is vital for multi-camera setups. Camera intrinsics can be obtained for each camera separately under lab conditions, but for fixed multi-camera setups the extrinsic calibration can only be done in situ. Usually, markers such as checkerboards are used, which requires some overlap between camera views. In this work, we propose a method for cases with little or no overlap. Laser lines are projected onto a plane (e.g., a floor or wall) using a laser line projector, and the poses of the plane and the cameras are then optimized with bundle adjustment to match the lines seen by the cameras. Only a partial overlap between the laser lines and each camera's field of view is needed to find the extrinsic calibration. Real-world experiments were conducted both with and without overlapping fields of view, yielding rotation errors below 0.5°. We show that the accuracy is comparable to other state-of-the-art methods while offering a more practical procedure. The method also scales to large setups and can be fully automated.
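The core idea of the abstract — optimizing camera poses so that observed laser lines coincide with lines on a common plane — can be sketched as a small least-squares problem. The sketch below is an illustration only, not the paper's implementation: it uses a simplified in-plane pose (one rotation angle plus a 2D translation), two hypothetical synthetic laser lines on the floor plane, and SciPy's `least_squares` in place of a full bundle adjustment.

```python
import numpy as np
from scipy.optimize import least_squares

def rot_z(theta):
    """Planar rotation; the floor-plane case reduces to 3 DoF (theta, tx, ty)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def transform(pose, pts):
    theta, tx, ty = pose
    return pts @ rot_z(theta).T + np.array([tx, ty, 0.0])

# Two non-parallel laser lines on the floor, sampled as 3D points (z = 0).
line1 = np.column_stack([np.linspace(0, 5, 20), np.zeros(20), np.zeros(20)])  # y = 0
line2 = np.column_stack([np.zeros(20), np.linspace(0, 5, 20), np.zeros(20)])  # x = 0

# Hypothetical unknown camera pose: the points a camera would observe.
true_pose = np.array([0.1, 0.5, -0.3])
obs1, obs2 = transform(true_pose, line1), transform(true_pose, line2)

def residuals(pose):
    # After undoing the candidate pose, the observed points should fall
    # back onto their lines: residual = signed distance to y = 0 / x = 0.
    theta, tx, ty = pose
    R = rot_z(theta)
    t = np.array([tx, ty, 0.0])
    back1 = (obs1 - t) @ R
    back2 = (obs2 - t) @ R
    return np.concatenate([back1[:, 1], back2[:, 0]])

sol = least_squares(residuals, x0=np.zeros(3))  # recovers true_pose
```

Note that a single line would leave the translation along that line unobservable, which is why the sketch uses two non-parallel lines; the paper's setup similarly relies on multiple projected laser lines.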


Sensors ◽  
2019 ◽  
Vol 19 (13) ◽  
pp. 3008 ◽  
Author(s):  
Zhe Liu ◽  
Zhaozong Meng ◽  
Nan Gao ◽  
Zonghua Zhang

Depth cameras play a vital role in three-dimensional (3D) shape reconstruction, machine vision, augmented/virtual reality, and other visual information-related fields. However, a single depth camera cannot capture complete information about an object on its own because of its limited field of view. Multiple depth cameras can solve this problem by acquiring depth information from different viewpoints, but they must first be calibrated so that the complete 3D information can be obtained accurately. Traditional chessboard-based planar targets are poorly suited to calibrating the relative orientations of multiple depth cameras: the coordinates of the different depth cameras must be unified into a single coordinate system, yet cameras arranged at wide angles to one another share only a very small overlapping field of view. In this paper, we propose a multiple depth camera calibration method based on a 3D target. Each plane of the 3D target is used to calibrate an independent depth camera, and all planes of the target are unified into a single coordinate system, so the feature points on each calibration plane also lie in one unified coordinate system. Using this 3D target, multiple depth cameras can be calibrated simultaneously. We also propose a method for precisely calibrating the 3D target itself using lidar; it applies not only to the 3D target designed for this paper but to any 3D calibration object composed of planar chessboards, and it significantly reduces the calibration error compared with traditional camera calibration methods. In addition, the calibration process of the depth camera is optimized to reduce the influence of the camera's infrared transmitter and improve its calibration accuracy. A series of calibration experiments were carried out, and the results demonstrated the reliability and effectiveness of the proposed method.
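The unification step described here — each camera calibrated against its own target plane, all planes known in one target frame — amounts to composing rigid transforms. The sketch below illustrates that composition with 4×4 homogeneous matrices; the face poses and per-camera plane poses are made-up numbers standing in for the lidar measurement and the per-plane chessboard calibration, not values from the paper.

```python
import numpy as np

def hom(R, t):
    """Build a 4x4 homogeneous transform from a 3x3 rotation and translation."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def rot_y(deg):
    a = np.deg2rad(deg)
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])

# Hypothetical geometry of the 3D target: pose of each planar face in the
# unified target frame (the role played by the lidar measurement).
T_target_plane = {0: hom(rot_y(0.0), [0.0, 0.0, 0.0]),
                  1: hom(rot_y(120.0), [1.0, 0.0, 0.0])}

# Per-camera calibration against its own face: pose of plane i in camera
# i's frame (illustrative numbers, e.g. from a chessboard on that face).
T_cam_plane = {0: hom(rot_y(10.0), [0.0, 0.0, 2.0]),
               1: hom(rot_y(-15.0), [0.2, 0.0, 1.8])}

# Each camera's pose in the unified target frame:
#   p_target = T_target_plane @ inv(T_cam_plane) @ p_cam
T_target_cam = {i: T_target_plane[i] @ np.linalg.inv(T_cam_plane[i])
                for i in T_target_plane}

# Relative extrinsics between the two depth cameras, via the target frame.
T_cam0_cam1 = np.linalg.inv(T_target_cam[0]) @ T_target_cam[1]
```

Because every camera's pose is expressed in the same target frame, any pair's relative extrinsics follow by one composition, without requiring the two cameras to see any common feature.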


Author(s):  
Yannick Hold-Geoffroy ◽  
Kalyan Sunkavalli ◽  
Jonathan Eisenmann ◽  
Matt Fisher ◽  
Emiliano Gambaretto ◽  
...  

Optik ◽  
2014 ◽  
Vol 125 (2) ◽  
pp. 844-849 ◽  
Author(s):  
Xieliu Yang ◽  
Suping Fang

2011 ◽  
Vol 403-408 ◽  
pp. 1451-1454 ◽  
Author(s):  
Wei Lu ◽  
Ting Ting Wang ◽  
Jing Hui Chu

The paper proposes a real-time monocular vision-based algorithm that can be applied to robot tracking. Unlike the models used in other papers, the model in this paper handles the condition where the camera is looking up. A common deficiency of previous models is that the target point is constrained to certain regions of the camera's field of view. To overcome this shortcoming, the camera's field of view is divided into three regions, which also reduces the computational complexity. The intrinsic parameters of the camera are obtained by calibration. Pitch-angle rectification, together with camera calibration, improves the ranging accuracy.
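To make the role of pitch-angle rectification in monocular ranging concrete, here is a minimal sketch of the standard ground-plane ranging model for a tilted pinhole camera. It is not the paper's three-region, looking-up formulation; the function name, the downward-tilt convention, and all numbers are illustrative assumptions.

```python
import numpy as np

def ground_distance(v, fy, cy, height, pitch_rad):
    """Distance along the floor to the point imaged at pixel row v.

    Assumes a pinhole camera mounted `height` above a flat floor and
    tilted down by `pitch_rad`; fy and cy are the focal length and
    principal-point row in pixels. Illustrative model only.
    """
    ray_angle = np.arctan2(v - cy, fy)   # pixel ray's angle below the optical axis
    total = pitch_rad + ray_angle        # total angle below horizontal
    if total <= 0:
        raise ValueError("ray does not intersect the floor")
    return height / np.tan(total)

# Example: fy = 500 px, principal-point row 240, camera 1.2 m high,
# pitched 20 degrees down; a point imaged at row 300.
d = ground_distance(v=300, fy=500.0, cy=240.0, height=1.2,
                    pitch_rad=np.deg2rad(20.0))
```

The formula makes the abstract's point visible: an error in the pitch angle shifts `total` for every pixel row, so rectifying the pitch directly improves the ranging accuracy, alongside the intrinsic calibration that supplies `fy` and `cy`.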


Author(s):  
Manuel Lopez ◽  
Roger Mari ◽  
Pau Gargallo ◽  
Yubin Kuang ◽  
Javier Gonzalez-Jimenez ◽  
...  
