Monocular Vision Ranging and Camera Focal Length Calibration

2021 ◽  
Vol 2021 ◽  
pp. 1-15
Author(s):  
Lixia Xue ◽  
Meian Li ◽  
Liang Fan ◽  
Aixia Sun ◽  
Tian Gao

Camera calibration in monocular vision establishes the relationship between the pixel coordinates obtained from a camera and objects in the real world. As an essential procedure, camera calibration recovers three-dimensional geometric information from captured two-dimensional images. Therefore, a modified camera calibration method based on polynomial regression is proposed to simplify this procedure. In this method, a parameter vector is obtained by polynomial regression from the pixel coordinates of obstacles and the corresponding distance values. With this parameter vector, the distance between the camera and a ground object in the field of view can be measured for the given camera posture and position. The experimental results show that the lowest measurement accuracy of this focal length calibration method is 97.09%, and the average accuracy is 99.02%.
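As a rough illustration of the idea (not the authors' code), the sketch below fits a polynomial that maps the image row of an obstacle's ground-contact point to its distance from the camera; the polynomial degree, the sample pixel rows and the ground-truth distances are all invented for demonstration.

```python
# Illustrative sketch: polynomial-regression ranging from a single camera.
# Degree, sample values and camera setup are assumed, not taken from the paper.
import numpy as np

# Calibration samples: pixel row of the obstacle's base and measured distance (m).
v_pixels  = np.array([700, 620, 560, 515, 480, 455, 435])   # hypothetical
distances = np.array([1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0])   # hypothetical ground truth

# Fit the parameter vector of a cubic polynomial d(v) by least squares.
coeffs = np.polyfit(v_pixels, distances, deg=3)

def estimate_distance(v):
    """Estimate camera-to-object distance from the pixel row of the object's base."""
    return np.polyval(coeffs, v)

print(estimate_distance(540))  # distance estimate for a new observation
```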

Author(s):  
Zhaohui Zheng ◽  
Yong Ma ◽  
Hong Zheng ◽  
Yu Gu ◽  
Mingyu Lin

Purpose: During the welding of automobile parts, the welding areas of the workpiece must be located consistently and with high precision to ensure welding success. The purpose of this paper is to design an automatic, high-precision locating and grasping system for a robotic arm, guided by 2D monocular vision, to meet the requirements of automatic operation and high-precision welding.
Design/methodology/approach: A nonlinear multi-parallel-surface calibration method based on an adaptive k-segment master curve algorithm is proposed, which improves both the efficiency of the traditional single-camera calibration algorithm and the accuracy of calibration. In addition, a multi-dimensional target feature based on a k-means clustering constraint is proposed to improve the robustness and precision of registration.
Findings: A method of automatic locating and grasping based on 2D monocular vision is provided for the robot arm, comprising a camera calibration method and a target locating method.
Practical implications: The system has been integrated into the welding robot of an automobile company in China.
Originality/value: A method of automatic locating and grasping based on 2D monocular vision is proposed, which gives the robot arm an automatic grasping capability and improves the efficiency and precision of its automatic grasping.
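The paper's k-means-based target feature is not specified in detail here, so the following sketch only illustrates the general idea of clustering detected 2D feature points into k groups whose sorted centroids could act as a compact, order-invariant descriptor for registration; the point data and choice of k are hypothetical.

```python
# Rough sketch only: cluster detected 2D feature points and return sorted centroids
# as a simple multi-dimensional descriptor for target registration.
import numpy as np
from sklearn.cluster import KMeans

def cluster_feature_points(points_2d, k=4, seed=0):
    """Cluster Nx2 image feature points; return cluster centroids sorted by x."""
    km = KMeans(n_clusters=k, n_init=10, random_state=seed).fit(points_2d)
    centroids = km.cluster_centers_
    return centroids[np.argsort(centroids[:, 0])]

# Example: hypothetical corner detections on the welding target.
pts = np.random.default_rng(1).uniform(0, 640, size=(60, 2))
print(cluster_feature_points(pts, k=4))
```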


Electronics ◽  
2018 ◽  
Vol 7 (12) ◽  
pp. 421 ◽  
Author(s):  
Gwon An ◽  
Siyeong Lee ◽  
Min-Woo Seo ◽  
Kugjin Yun ◽  
Won-Sik Cheong ◽  
...  

In this paper, we propose a Charuco board-based omnidirectional camera calibration method to solve the problem of conventional methods requiring overly complicated calibration procedures. Specifically, the proposed method can easily and precisely provide two-dimensional and three-dimensional coordinates of patterned feature points by arranging the omnidirectional camera in the Charuco board-based cube structure. Then, using the coordinate information of the feature points, an intrinsic calibration of each camera constituting the omnidirectional camera can be performed by estimating the perspective projection matrix. Furthermore, without an additional calibration structure, an extrinsic calibration of each camera can be performed, even though only part of the calibration structure is included in the captured image. Compared to conventional methods, the proposed method exhibits increased reliability, because it does not require additional adjustments to the mirror angle or the positions of several pattern boards. Moreover, the proposed method calibrates independently, regardless of the number of cameras comprising the omnidirectional camera or the camera rig structure. In the experimental results, for the intrinsic parameters, the proposed method yielded an average reprojection error of 0.37 pixels, which was better than that of conventional methods. For the extrinsic parameters, the proposed method had a mean absolute error of 0.90° for rotation displacement and a mean absolute error of 1.32 mm for translation displacement.
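A minimal sketch of the intrinsic step with OpenCV's aruco module is shown below (requires opencv-contrib-python; the function names follow the pre-4.7 API and differ in newer releases). The board geometry, dictionary and image file names are assumptions, not the authors' setup.

```python
# Minimal intrinsic calibration of one camera from Charuco board views.
import cv2
import numpy as np

dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_5X5_100)
# 7x5 squares, 4 cm squares, 3 cm markers (assumed geometry).
board = cv2.aruco.CharucoBoard_create(7, 5, 0.04, 0.03, dictionary)

all_corners, all_ids, image_size = [], [], None
for path in ["view_000.png", "view_001.png"]:          # hypothetical captures
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    image_size = img.shape[::-1]
    marker_corners, marker_ids, _ = cv2.aruco.detectMarkers(img, dictionary)
    if marker_ids is None:
        continue
    # Interpolate Charuco (chessboard) corners from the detected aruco markers.
    n, ch_corners, ch_ids = cv2.aruco.interpolateCornersCharuco(
        marker_corners, marker_ids, img, board)
    if ch_ids is not None and n > 3:
        all_corners.append(ch_corners)
        all_ids.append(ch_ids)

# Estimate the intrinsic matrix and distortion coefficients of one camera.
rms, K, dist, rvecs, tvecs = cv2.aruco.calibrateCameraCharuco(
    all_corners, all_ids, board, image_size, None, None)
print("reprojection RMS (px):", rms)
```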


2014 ◽  
Vol 981 ◽  
pp. 364-367
Author(s):  
Guang Yu ◽  
Bo Yang Yu ◽  
Shu Cai Yang ◽  
Li Wen ◽  
Wen Fei Dong ◽  
...  

Projector calibration can be seen as a special case of camera calibration. By using the projector to project a coded pattern, it establishes the relationship between the three-dimensional space coordinates of points and the corresponding projector image (DMD) coordinates. In camera calibration, Zhang's self-calibration method refines an initial linear solution by maximum-likelihood estimation; the procedure takes lens distortion into account and finally determines the camera's intrinsic and extrinsic parameters. Applying this algorithm to projector calibration overcomes the complexity and poor robustness of the traditional linear calibration algorithm and improves the practicability of the calibration method. The method calibrates the intrinsic and extrinsic parameters of the projector simultaneously, which avoids the problem of calibrating the intrinsic and extrinsic parameters independently.
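For reference, a hedged sketch of Zhang-style calibration with OpenCV is given below (camera case, not the paper's projector code): planar chessboard views provide the linear initial estimate, which cv2.calibrateCamera then refines by maximum-likelihood optimization while modelling lens distortion. The pattern size, square size and file names are assumptions.

```python
# Zhang-style planar calibration with OpenCV.
import cv2
import numpy as np

pattern = (9, 6)                      # inner corners of the assumed chessboard
square = 0.025                        # square size in metres (assumed)
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square

obj_points, img_points, size = [], [], None
for path in ["board_00.png", "board_01.png"]:   # hypothetical views
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    size = gray.shape[::-1]
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        corners = cv2.cornerSubPix(
            gray, corners, (11, 11), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
        obj_points.append(objp)
        img_points.append(corners)

# Intrinsics K, distortion coefficients, and per-view extrinsics (rvecs, tvecs).
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(obj_points, img_points, size, None, None)
print("RMS reprojection error (px):", rms)
```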


2012 ◽  
Vol 38 (3) ◽  
pp. 106-110 ◽  
Author(s):  
Jūratė Sužiedelytė-Visockienė

The result of photogrammetry is a digital image of an object or terrain, either in the plane or in three-dimensional space. Precise data on the object are captured with a professional digital camera equipped with a calibrated optical lens system (i.e., with evaluated lens distortion parameters). Camera calibration is performed either in a laboratory or with special calibration software using a dedicated calibration test field. However, Lithuania has no such laboratories; obtaining suitable software for verifying this work is therefore important. European countries use a wide range of software packages and different calibration test fields (plates), including two-dimensional and three-dimensional ones, so choosing the simplest, cheapest and most acceptable method of camera calibration is essential. The research was carried out with a Canon EOS-1D Mark II camera (resolution: 21 million pixels, lens focal length: 21 mm). The optical system was calibrated using Tcc (Germany) and MatLab software, and the calibration was performed with different camera calibration test fields. The article analyzes the calibration results and offers suggestions on camera calibration.
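As background to the distortion parameters mentioned above, the sketch below applies a commonly used radial/tangential (Brown) distortion model to normalized image coordinates; the coefficient values are invented purely to show how the parameters act, and no claim is made about the exact model used by Tcc or the MatLab toolbox.

```python
# Radial (k1..k3) and tangential (p1, p2) lens distortion on normalized coordinates.
import numpy as np

def distort(x, y, k1, k2, k3, p1, p2):
    """Apply the Brown distortion model to a normalized image point (x, y)."""
    r2 = x * x + y * y
    radial = 1 + k1 * r2 + k2 * r2 ** 2 + k3 * r2 ** 3
    x_d = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
    y_d = y * radial + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
    return x_d, y_d

# Invented coefficients, for illustration only.
print(distort(0.1, 0.05, k1=-0.2, k2=0.05, k3=0.0, p1=1e-3, p2=-5e-4))
```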


Author(s):  
Hidehiko Shishido ◽  
Itaru Kitahara

In sports science research, many topics make use of athletes' body motion extracted by a motion capture system, since motion information is valuable data for improving an athlete's skills. However, one of the unsolved challenges in motion capture is extracting athletes' motion information during an actual game or match, because markers cannot easily be attached to athletes during game play. In this research, the authors propose a method for acquiring motion information without attaching markers, using computer vision technology. In the proposed method, the three-dimensional world joint positions of the athlete's body can be acquired with just two cameras and no visual markers, and the athlete's three-dimensional joint positions during game play can be obtained without complicated preparation. Camera calibration, which estimates the projective relationship between three-dimensional world space and two-dimensional image space, is one of the principal processes for three-dimensional image processing tasks such as three-dimensional reconstruction and three-dimensional tracking. A strong-calibration method, which requires landmarks with known three-dimensional positions, is a common technique; however, as the target space expands, landmark placement becomes increasingly complicated. A weak-calibration method does not need known landmarks, but its estimation precision depends on the accuracy of the correspondences between captured images, and when multiple cameras are arranged sparsely, detecting sufficient corresponding points is difficult. In this research, the authors propose a calibration method that bridges multiple sparsely distributed cameras using mobile camera images. The appropriate spacing between the bridging images was confirmed through comparative experiments that evaluated camera calibration accuracy while changing the number of bridging images. Furthermore, the proposed method was applied to multiple capturing experiments in a large-scale space to verify its robustness. As a relevant example, the proposed method was applied to the three-dimensional skeleton estimation of badminton players, and a quantitative evaluation of the camera calibration for the three-dimensional skeletons was conducted. The reprojection error of each part of the skeletons and its standard deviation were approximately 2.72 mm and 0.81 mm, respectively, confirming that the proposed method is highly accurate when applied to camera calibration. Finally, the proposed calibration method was quantitatively compared with a calibration method using the coordinates of eight manually specified points. In conclusion, the proposed method stabilizes calibration accuracy in the vertical direction of the world coordinate system.
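As a hedged illustration of the evaluation step (assumed setup, not the authors' pipeline), the sketch below computes a per-joint reprojection error by projecting estimated 3D joint positions back into a calibrated camera with OpenCV and comparing them with the 2D detections; the intrinsics, pose and joint data are hypothetical.

```python
# Per-joint reprojection error against one calibrated camera.
import cv2
import numpy as np

def reprojection_error(joints_3d, joints_2d, K, dist, rvec, tvec):
    """Mean Euclidean distance (pixels) between projected 3D joints and 2D detections."""
    projected, _ = cv2.projectPoints(joints_3d.astype(np.float32), rvec, tvec, K, dist)
    return float(np.mean(np.linalg.norm(projected.reshape(-1, 2) - joints_2d, axis=1)))

# Hypothetical calibration and skeleton data for illustration.
K = np.array([[1200.0, 0, 960], [0, 1200.0, 540], [0, 0, 1]])
dist = np.zeros(5)
rvec = np.zeros(3)
tvec = np.array([0.0, 0.0, 5.0])
joints_3d = np.random.default_rng(0).uniform(-1, 1, size=(17, 3))
joints_2d, _ = cv2.projectPoints(joints_3d.astype(np.float32), rvec, tvec, K, dist)
print(reprojection_error(joints_3d, joints_2d.reshape(-1, 2), K, dist, rvec, tvec))
```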


2010 ◽  
Vol 29-32 ◽  
pp. 2692-2697
Author(s):  
Jiu Long Xiong ◽  
Jun Ying Xia ◽  
Xian Quan Xu ◽  
Zhen Tian

Camera calibration establishes the relationship between 2D coordinates in the image and 3D coordinates in the world. A BP (back-propagation) neural network can model non-linear relationships and was therefore used in this paper to calibrate the camera without explicitly modelling its non-linear factors. The calibration results are compared with those of Tsai's two-stage method, and the comparison shows that the BP-neural-network-based calibration method improves the calibration accuracy.
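A minimal sketch of this kind of BP (back-propagation) calibration is shown below, using a small scikit-learn MLP to learn the mapping from 2D image coordinates to 3D world coordinates from calibration samples; the network architecture, normalization and synthetic data are assumptions for illustration only.

```python
# Learn an image-to-world mapping with a small back-propagation network.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(42)
# Hypothetical calibration pairs: (u, v) pixels -> (X, Y, Z) world points on a target.
image_pts = rng.uniform(0, 1024, size=(200, 2))
world_pts = (np.column_stack([image_pts * 0.01, np.full(200, 1.5)])
             + rng.normal(0, 1e-3, (200, 3)))

model = MLPRegressor(hidden_layer_sizes=(32, 32), activation="tanh",
                     max_iter=5000, random_state=0)
model.fit(image_pts / 1024.0, world_pts)          # normalize inputs for stable training

print(model.predict(np.array([[512, 384]]) / 1024.0))   # predicted world coordinates
```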

