Distance Measurement Using a Stereo Vision System

2013 ◽  
Vol 196 ◽  
pp. 189-197 ◽  
Author(s):  
Bogdan Żak ◽  
Stanisław Hożyń

This paper presents an analysis of distance measurement using a stereo vision system. The main emphasis is placed on geometric camera calibration: the classical method, based on a specially prepared calibration pattern with known dimensions and known position in a given coordinate system, was performed. Finally, the metric information obtained from the images is presented.
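The abstract does not spell out the underlying geometry, but once such a calibration yields the focal length and baseline of a rectified stereo pair, distance follows directly from image disparity. Below is a minimal sketch of that relation, Z = fB/d, assuming an ideal pinhole model; the function name and numeric values are illustrative, not taken from the paper.

```python
# Depth from disparity for a rectified (parallel-axis) stereo pair.
# Pinhole model: Z = f * B / d, where
#   f : focal length in pixels (from calibration)
#   B : baseline between the two cameras in metres
#   d : disparity x_left - x_right in pixels

def depth_from_disparity(x_left: float, x_right: float,
                         f_px: float, baseline_m: float) -> float:
    """Distance (m) to a point observed at x_left / x_right in the two images."""
    d = x_left - x_right  # disparity in pixels
    if d <= 0:
        raise ValueError("disparity must be positive for a point in front of the rig")
    return f_px * baseline_m / d

# Illustrative values only: f = 800 px, B = 0.12 m, disparity = 24 px
print(depth_from_disparity(412.0, 388.0, f_px=800.0, baseline_m=0.12))  # -> 4.0 m
```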

2003 ◽  
Vol 15 (3) ◽  
pp. 304-313 ◽  
Author(s):  
Atsushi Yamashita ◽  
Toru Kaneko ◽  
Shinya Matsushita ◽  
Kenjiro T. Miura ◽  
...  

In this paper, we propose a fast, easy camera calibration and 3-D measurement method with an active stereo vision system for handling moving objects whose geometric models are known. We use stereo cameras that change direction independently to follow moving objects. To obtain the extrinsic camera parameters in real time, a baseline (parallel) stereo camera model and a projective transformation of the stereo images are used, taking epipolar constraints into account. Using the 3-D measurement results, the manipulator hand approaches the moving object. When the manipulator hand and the object are close enough to appear in a single image, a very accurate camera calibration is executed to calculate the manipulator size in the image. Our calibration is simple and practical because it does not require calibrating all camera parameters. The computation time for real-time calibration is small because only one parameter must be searched for online, the relationships between the parameters having been determined in advance. Our method needs neither complicated image processing nor matrix calculation. Experimental results show that the accuracy of 3-D reconstruction of a cubic box with 60 mm edges is within 1.8 mm at a camera-to-box distance of 500 mm. The total computation time for object tracking, camera calibration, and manipulation control is within 0.5 s.
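The projective transformation to a baseline (parallel) stereo model mentioned above can be illustrated with a standard result: for a camera that pans by angle θ about its optical centre, image points in the verging and parallel views are related by the homography H = K R K⁻¹. The sketch below assumes this textbook formulation with illustrative intrinsics; it is not the paper's exact procedure.

```python
# Mapping a verging camera's image onto a virtual parallel-stereo view.
# For a pure rotation R of the camera about its centre, image points are
# related by the homography H = K @ R @ inv(K) (a standard result, not the
# paper's exact formulation). theta is the pan (vergence) angle.
import numpy as np

def vergence_homography(K: np.ndarray, theta: float) -> np.ndarray:
    """Homography mapping the verging view onto the parallel-stereo view."""
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[ c, 0.0,   s],
                  [0.0, 1.0, 0.0],
                  [-s, 0.0,   c]])  # rotation about the vertical (pan) axis
    return K @ R @ np.linalg.inv(K)

def warp_point(H, x, y):
    p = H @ np.array([x, y, 1.0])
    return p[0] / p[2], p[1] / p[2]

K = np.array([[800.0,   0.0, 320.0],   # illustrative intrinsics
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
H = vergence_homography(K, np.deg2rad(5.0))
print(warp_point(H, 320.0, 240.0))   # principal point shifts by ~f*tan(5 deg) = 70 px
```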


2021 ◽  
Vol 11 (20) ◽  
pp. 9384 ◽
Author(s):  
Yan Liu ◽  
Zhendong Ge ◽  
Yingtao Yuan ◽  
Xin Su ◽  
Xiang Guo ◽  
...  

The stereo-vision system plays an increasingly important role in various fields of research and application. However, inevitable slight movements of the cameras under harsh working conditions can significantly degrade 3D measurement accuracy. This paper focuses on the effect of camera movements on stereo-vision 3D measurement. The camera movements are divided into four categories: identical translation, identical rotation, relative translation, and relative rotation. Error models for 3D coordinate and distance measurement are established, and experiments were performed to validate them. The results show that the 3D coordinate error caused by identical translation increases linearly with the change in the positions of both cameras, while distance measurement is unaffected. For identical rotation, a 3D coordinate error arises only in the plane of rotation and is proportional to the rotation angle up to 10°, while the distance error remains zero. For relative translation, both coordinate and distance errors increase linearly with the change in relative position. For relative rotation, the 3D coordinate error follows a nonlinear trend resembling a sine-cosine curve, and the impact on distance measurement accuracy is not monotonic in the rotation angle. Relative rotation is the dominant factor among the four cases: even at a rotation angle of 10°, the maximum coordinate error reaches 2000 mm and the distance error reaches 220%. The presented results can serve as practical guidelines for reducing measurement errors.
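A quick numerical illustration of why relative rotation dominates: the sketch below triangulates a point with a parallel-stereo model while the right camera is actually panned by a small uncorrected angle. It is an independent simulation under assumed parameters (the focal length, baseline, and point position are illustrative), not the paper's analytic error model.

```python
# Illustrative simulation (not the paper's error model): how an uncorrected
# relative rotation of the right camera perturbs parallel-stereo triangulation.
import numpy as np

f, B = 800.0, 0.2                    # illustrative focal length (px), baseline (m)

def project(P, C, theta):
    """Project world point P with a camera at C, panned by theta (rad)."""
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])   # pan about Y
    Pc = R.T @ (P - C)                                 # world -> camera frame
    return f * Pc[0] / Pc[2], f * Pc[1] / Pc[2]

def triangulate(xl, yl, xr):
    """Parallel-stereo triangulation, unaware of any actual rotation."""
    Z = f * B / (xl - xr)
    return np.array([xl * Z / f, yl * Z / f, Z])

P_true = np.array([0.3, 0.1, 2.0])
xl, yl = project(P_true, np.zeros(3), 0.0)             # left camera, no pan
for deg in (0.0, 1.0, 5.0, 10.0):
    xr, _ = project(P_true, np.array([B, 0, 0]), np.deg2rad(deg))
    err = np.linalg.norm(triangulate(xl, yl, xr) - P_true)
    print(f"relative rotation {deg:4.1f} deg -> 3D coordinate error {err:.3f} m")
```

Even a 1° uncorrected pan shifts the triangulated point by tens of centimetres at a 2 m range in this toy setup, consistent with the abstract's conclusion that relative rotation is the main error source.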


Author(s):  
Paweł Rotter ◽  
Witold Byrski ◽  
Michał Dajda ◽  
Grzegorz Łojek

In the double-plane method for stereo vision system calibration, the correspondence between screen coordinates and location in 3D space is calculated from four plane-to-plane transformations: two planes of the calibration pattern and two cameras. The method is intuitive and easy to implement, but its main disadvantage is ill-conditioning for some spatial locations. In this paper we propose a method that exploits a third plane, which does not physically belong to the calibration pattern but can be calculated from the set of reference points. Our algorithm uses a combination of three calibration planes, with weights that depend on the screen coordinates of the point of interest; a pair of planes that could cause numerical errors receives small weights and has practically no influence on the final result. We analyse the errors and their distribution in 3D space for the basic and the improved algorithm. Experiments demonstrate the high accuracy and reliability of our method compared to the basic version: the root mean square error and the maximum error are reduced by factors of 4 and 20, respectively.
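The plane-to-plane transformations the method builds on are homographies, each recoverable from four or more reference points by the direct linear transform (DLT). The sketch below shows that building block with illustrative coordinates; the paper's screen-coordinate-dependent weighting of the three planes is not reproduced here.

```python
# Minimal DLT sketch of a plane-to-plane (homography) transformation, the
# building block of the double-plane method. Coordinates are illustrative.
import numpy as np

def fit_homography(src, dst):
    """Direct Linear Transform: src/dst are (N, 2) point arrays, N >= 4."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(rows, float))
    return Vt[-1].reshape(3, 3)          # null vector = flattened homography

def apply_h(H, x, y):
    p = H @ np.array([x, y, 1.0])
    return p[:2] / p[2]

# Screen corners of a pattern plane -> its metric coordinates (mm), illustrative
src = np.array([[100, 80], [540, 95], [520, 420], [110, 400]], float)
dst = np.array([[0, 0], [400, 0], [400, 300], [0, 300]], float)
H = fit_homography(src, dst)
print(apply_h(H, 100, 80))   # ~ (0, 0): screen point mapped onto the plane
```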

