An Experimental Analysis of the Effects of Different Hardware Setups on Stereo Camera Systems

2021, Vol 15 (03), pp. 337-357
Author(s):  
Alexander Julian Golkowski ◽  
Marcus Handte ◽  
Peter Roch ◽  
Pedro J. Marrón

For many application areas such as autonomous navigation, the ability to accurately perceive the environment is essential. For this purpose, a wide variety of well-researched sensor systems are available that can be used to detect obstacles or navigation targets. Stereo cameras have emerged as a very versatile sensing technology in this regard due to their low hardware cost and high fidelity. Consequently, much work has been done to integrate them into mobile robots. However, the existing literature focuses on presenting the concepts and algorithms used to implement the desired robot functions on top of a given camera setup. As a result, the rationale and impact of choosing this camera setup are usually neither discussed nor described. Thus, when designing the stereo camera system for a mobile robot, there is not much general guidance beyond isolated setups that worked for a specific robot. To close this gap, this paper studies the impact of the physical setup of a stereo camera system in indoor environments. To do this, we present the results of an experimental analysis in which we use a given software setup to estimate the distance to an object while systematically changing the camera setup. In doing so, we vary the three main parameters of the physical camera setup, namely the angle and distance between the cameras and the field of view, as well as one softer parameter, the image resolution. Based on the results, we derive several guidelines on how to choose these parameters for an application.
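To make the role of these parameters concrete, the following minimal sketch (not the authors' code; the camera values are illustrative assumptions) shows how baseline and focal length, the latter determined by field of view and resolution, enter the pinhole stereo depth model and its first-order error:

```python
# Pinhole stereo model: Z = f * B / d, with f in pixels, baseline B in
# metres, disparity d in pixels. First-order depth error for a given
# disparity uncertainty: dZ ~= Z^2 / (f * B) * dd.

def depth_from_disparity(f_px, baseline_m, disparity_px):
    return f_px * baseline_m / disparity_px

def depth_error(f_px, baseline_m, z_m, disparity_error_px=0.5):
    return z_m ** 2 / (f_px * baseline_m) * disparity_error_px

# Example assumption: 640 px wide image, ~60 deg horizontal FoV
# -> f ~= (640 / 2) / tan(30 deg) ~= 554 px.
f_px = 554.0
for baseline in (0.06, 0.12, 0.24):          # baselines in metres
    err = depth_error(f_px, baseline, z_m=3.0)
    print(f"B={baseline:.2f} m -> depth error at 3 m: {err:.3f} m")
```

Doubling the baseline halves the depth error at a given distance, which is why the baseline is a first-order design choice in such a setup.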

Author(s):  
Shojiro Ishibashi ◽  
Hiroshi Yoshida ◽  
Tadahiro Hyakudome

Visual information is very important for the operation of underwater vehicles such as manned vehicles and remotely operated vehicles (ROVs), and it will also be essential for functions of next-generation autonomous underwater vehicles (AUVs). It is generally obtained through optical sensors, and most underwater vehicles are equipped with various types of them. In particular, camera systems are often installed as multiple units on underwater vehicles, which allows them to form a stereo camera system. This paper describes several new functions that provide visual information derived from such a stereo vision system, presents methods for applying this visual information to underwater vehicles, and confirms their utility.


Author(s):  
A. G. Chibunichev ◽  
A. P. Makarov ◽  
E. V. Poliakova

Abstract. The paper considers the possibility of using low-cost stereo cameras for autonomous robot navigation. A low-cost stereo camera with a focal length of 5 mm and a photo base of 6 cm was chosen for the research. Experimental studies have shown that the accuracy of determining the coordinates of object points from a pair of images obtained by such a stereo camera is sufficient for organizing autonomous navigation of the robot. To improve the reliability and accuracy of determining the robot's trajectory, this paper proposes using two stereo cameras: one directed forward along the robot's direction of movement, the other directed at the nadir. The trajectory is thus determined twice and independently, with a separate algorithm for finding homologous points in each case. In the first case, a sparse point cloud is constructed for each stereo pair based on the extraction of interest points and their identification by comparing descriptors; blunder detection in point identification is performed by analyzing the values of the relative orientation equations using the fundamental matrix. In the second case, when the stereo camera is pointed at the nadir, conventional correlation matching is used at the nodes of a grid defined on one image. Experimental studies have shown sufficient efficiency of autonomous navigation of a mobile robot based on the use of two stereo cameras.
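As a rough illustration of the forward-looking matching pipeline (interest points, descriptor comparison, fundamental-matrix blunder detection), here is a generic OpenCV sketch; it is an assumed stand-in, not the authors' implementation:

```python
import cv2
import numpy as np

def match_stereo_pair(img_left, img_right):
    # Interest points and binary descriptors (ORB as an assumed detector).
    orb = cv2.ORB_create(nfeatures=1000)
    kp1, des1 = orb.detectAndCompute(img_left, None)
    kp2, des2 = orb.detectAndCompute(img_right, None)
    # Descriptor comparison with cross-checked Hamming matching.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des1, des2)
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])
    # Blunder detection: keep only matches consistent with the epipolar
    # geometry encoded by the fundamental matrix (RANSAC).
    F, inlier_mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 1.0, 0.999)
    inliers = inlier_mask.ravel().astype(bool)
    return pts1[inliers], pts2[inliers]
```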


Author(s):  
D. Holdener ◽  
S. Nebiker ◽  
S. Blaser

The demand for capturing indoor spaces is rising with the digitalization trend in the construction industry. An efficient solution for measuring challenging indoor environments is mobile mapping. Image-based systems with 360° panoramic coverage allow rapid data acquisition and can be processed into georeferenced 3D images hosted in cloud-based 3D geoinformation services. For the multiview stereo camera system presented in this paper, 360° coverage is achieved with a layout consisting of five horizontal stereo image pairs in a circular arrangement. The design is implemented as a low-cost solution based on a 3D-printed camera rig and action cameras with fisheye lenses. The fisheye stereo system is successfully calibrated with accuracies sufficient for the applied measurement task. A comparison of 3D distances with reference data yields maximum deviations of 3 cm over typical indoor distances of 2-8 m. The automatic computation of coloured point clouds from the stereo pairs is also demonstrated.
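A calibration step of this kind could be sketched with OpenCV's fisheye (equidistant) model; the calls below are a hedged illustration, since the paper does not state which calibration software was used:

```python
import cv2
import numpy as np

def calibrate_fisheye(object_points, image_points, image_size):
    """object_points/image_points: per-view arrays of 3D target points
    and their detected 2D image locations; image_size: (width, height).
    Returns the RMS reprojection error, camera matrix K and the four
    fisheye distortion coefficients."""
    K = np.zeros((3, 3))
    D = np.zeros((4, 1))  # equidistant fisheye distortion k1..k4
    flags = cv2.fisheye.CALIB_RECOMPUTE_EXTRINSIC | cv2.fisheye.CALIB_FIX_SKEW
    rms, K, D, rvecs, tvecs = cv2.fisheye.calibrate(
        object_points, image_points, image_size, K, D, flags=flags)
    return rms, K, D
```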


Author(s):  
A. Hanel ◽  
L. Hoegner ◽  
U. Stilla

Stereo camera systems in cars are often used to estimate the distance of other road users from the car. This information is important for improving road safety. Such camera systems are typically mounted behind the windshield of the car. In this contribution, the influence of the windshield on the estimated distance values is analyzed. An offline stereo camera calibration is performed with a moving planar calibration target, and the relative orientation of the cameras is estimated in a standard bundle adjustment procedure. The calibration is performed for the identical stereo camera system with and without a windshield in between. The base lengths derived from the relative orientation in the two cases are compared, and the resulting distance values are calculated and analyzed. It can be shown that the difference between the base length values in the two cases is highly significant, with resulting effects on the distance calculation of up to half a meter.
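Since the pinhole stereo model gives Z = f * B / d, an error in the assumed base length B propagates linearly into the estimated distance Z. A back-of-the-envelope sketch (with illustrative numbers, not the paper's values):

```python
# Distance error if the no-glass base length is used although the
# windshield changed the effective base length: Z scales linearly
# with B in Z = f * B / d.

def distance_shift(z_m, base_no_glass_m, base_with_glass_m):
    return z_m * (base_with_glass_m / base_no_glass_m - 1.0)

# Example assumption: a 1% effective base-length change already shifts
# a 50 m distance estimate by half a metre.
print(distance_shift(50.0, 0.300, 0.303))   # -> ~0.5 m
```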


2020, Vol 17 (2), pp. 172988142090960
Author(s):  
Shang Erke ◽  
Dai Bin ◽  
Nie Yiming ◽  
Xiao Liang ◽  
Zhu Qi

Outdoor surveillance and security robots have a wide range of industrial, military, and civilian applications. In order to achieve autonomous navigation, the LiDAR-camera system is widely applied by outdoor surveillance and security robots. Calibration of the LiDAR-camera system is essential for robots to correctly acquire scene information. This article proposes a fast calibration approach that differs from traditional calibration algorithms. The proposed approach combines two independent calibration processes, the calibration of the LiDAR and of the camera to the robot platform, so as to obtain the relationship between the LiDAR and camera sensors. A novel approach to calibrating the LiDAR to the robot platform is applied to improve accuracy and robustness. A series of indoor experiments is carried out, and the results show that the proposed approach is effective and efficient. Finally, it is applied to our own outdoor security robot platform to detect both positive and negative obstacles in a field environment, using two Velodyne HDL-32 LiDARs and a color camera. This real-world application illustrates the robustness of the proposed approach.
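The core idea of combining two independent sensor-to-platform calibrations can be sketched as a composition of homogeneous transforms; the function below is an illustrative assumption, not the authors' code:

```python
import numpy as np

def lidar_to_camera(T_robot_from_lidar, T_robot_from_camera):
    """Given 4x4 homogeneous transforms from each sensor frame to the
    robot platform frame, the LiDAR-to-camera extrinsics follow by
    composition: T_camera_from_lidar = inv(T_robot_from_camera) @ T_robot_from_lidar."""
    return np.linalg.inv(T_robot_from_camera) @ T_robot_from_lidar

# A homogeneous LiDAR point p then maps into the camera frame as
# T_camera_from_lidar @ p, which is what the sensor-fusion step needs.
```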


Author(s):  
Bruno M. F. da Silva ◽  
Rodrigo S. Xavier ◽  
Luiz M. G. Gonçalves

Since it was proposed, the Robot Operating System (ROS) has fostered solutions for various problems in robotics in the form of ROS packages. One of these problems is Simultaneous Localization and Mapping (SLAM), which is solved by computing the robot pose and a map of its environment of operation at the same time. The increasing availability of ready-to-program robot kits and of RGB-D sensors often raises the question of which SLAM package should be used given the application requirements. When the SLAM subsystem must deliver estimates for robot navigation, as is the case in applications involving autonomous navigation, this question is even more relevant. This work introduces an experimental analysis of GMapping and RTAB-Map, two ROS-compatible SLAM packages, regarding their SLAM accuracy, the quality of the produced maps and the use of those maps in navigation tasks. Our analysis targets ground robots equipped with RGB-D sensors in indoor environments and is supported by experiments conducted on datasets from simulation, from benchmarks and from our own robot.
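SLAM accuracy comparisons of this kind typically report the absolute trajectory error (ATE) against ground truth; the following minimal RMSE sketch over already-aligned, time-associated 2D poses is an assumption about the evaluation, not the authors' published code:

```python
import numpy as np

def ate_rmse(estimated_xy, ground_truth_xy):
    """estimated_xy, ground_truth_xy: (N, 2) arrays of matched robot
    positions. Returns the root-mean-square absolute trajectory error."""
    diffs = estimated_xy - ground_truth_xy
    return float(np.sqrt(np.mean(np.sum(diffs ** 2, axis=1))))
```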


2019
Author(s):  
José A. Diaz Amado ◽  
Jean Amaro ◽  
Iago P. Gomes ◽  
Denis Wolf ◽  
F. S. Osorio

This work aims to present an autonomous vehicle navigation system based on an End-to-End Deep Learning approach, and to study the impact of different image input configurations on system performance. The methodology proposed in this work was to adopt and test different configurations of RGB and depth images captured from a Kinect device. We adopted a multi-camera system, composed of 3 cameras, with different RGB and/or depth input configurations. Two main systems were developed in order to study and validate the different input configurations: the first based on a realistic simulator and the second based on a mini-car (small-scale vehicle). Starting with the simulations, it was possible to choose the best camera/input configuration, which we then validated using the real vehicle (mini-car) with real sensors/cameras. The experimental results demonstrated that a multi-camera solution based on 3 cameras allows us to obtain better autonomous navigation control in an End-to-End Deep Learning approach, with a very small final error when using the proposed camera configurations.
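For illustration, a multi-camera end-to-end network of this kind might look like the PyTorch sketch below; the architecture, layer sizes and input resolution are assumptions for illustration, not the authors' model:

```python
import torch
import torch.nn as nn

class MultiCamDrivingNet(nn.Module):
    def __init__(self, cameras=3, channels_per_cam=4):  # RGB + depth
        super().__init__()
        self.encoder = nn.Sequential(                    # shared per-camera encoder
            nn.Conv2d(channels_per_cam, 24, 5, stride=2), nn.ReLU(),
            nn.Conv2d(24, 36, 5, stride=2), nn.ReLU(),
            nn.Conv2d(36, 48, 3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.head = nn.Sequential(                       # fuse cameras -> steering
            nn.Linear(48 * cameras, 64), nn.ReLU(),
            nn.Linear(64, 1))

    def forward(self, views):                            # views: list of (B, 4, H, W)
        feats = torch.cat([self.encoder(v) for v in views], dim=1)
        return self.head(feats)

net = MultiCamDrivingNet()
views = [torch.randn(2, 4, 120, 160) for _ in range(3)]
print(net(views).shape)                                  # torch.Size([2, 1])
```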


Author(s):  
M. Mohammadi ◽  
A. Khami ◽  
F. Rottensteiner ◽  
I. Neumann ◽  
C. Heipke

Abstract. Multi-view camera systems are used more and more frequently for applications in close-range photogrammetry, engineering geodesy and autonomous navigation, since they can cover a large portion of the environment and are considerably cheaper than alternative sensors such as laser scanners. In many cases, the cameras do not have overlapping fields of view. In this paper, we report on the development of such a system mounted on a rigid aluminium platform and focus on its geometric system calibration. We present an approach for estimating the exterior orientation of such a multi-camera system based on bundle adjustment. We use a static environment with ground control points, which are related to the platform via a laser tracker. In the experimental part, the precision, and in part the accuracy, that can be achieved in different scenarios is investigated. While we show that the accuracy potential of the platform is very high, the mounting calibration parameters are not necessarily precise enough to be used as constant values after calibration. However, this disadvantage can be mitigated by treating those parameters as observations and refining them on the job.
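The projection chain underlying such a bundle adjustment can be sketched as follows; the transform conventions and names are illustrative assumptions, not the authors' formulation:

```python
import numpy as np

def project(world_pt, T_platform_from_world, T_camera_from_platform, K):
    """world_pt: (3,) ground control point; T_*: 4x4 homogeneous
    transforms (estimated platform pose, calibrated per-camera mounting
    transform); K: 3x3 camera matrix. Returns pixel coordinates."""
    p = np.append(world_pt, 1.0)
    p_cam = T_camera_from_platform @ T_platform_from_world @ p
    uvw = K @ p_cam[:3]
    return uvw[:2] / uvw[2]

# The bundle adjustment then minimises the reprojection residuals
# (observed - projected) over the platform poses; treating the mounting
# parameters as additional weighted observations lets them be refined
# on the job rather than fixed after calibration.
```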


2012, Vol 579, pp. 435-444
Author(s):  
Liang Chia Chen ◽  
Nguyen Van Thai

To date, 3-D laser scanners and stereo camera systems have been widely used for three-dimensional (3-D) mapping due to their high measurement range and accuracy. For stereo camera systems, establishing corresponding point pairs between two images is a crucial step in reconstructing depth information. Mapping approaches using laser scanners, however, remain seriously constrained by the need for accurate registration. In recent years, time-of-flight (ToF) cameras have been used for mapping tasks, as they provide high frame rates while preserving a compact size, but they lack measurement precision and robustness. To address this technological bottleneck, this article presents a 3-D mapping method which employs an RGB-D camera for 3-D data acquisition and then applies RGB-D feature alignment (RGBD-FA) for data registration. Experimental results show the feasibility and robustness of applying the proposed approach to real-time 3-D mapping of large-scale indoor environments.
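A generic version of feature-based RGB-D registration, back-projecting matched pixels with depth and solving the rigid transform in closed form, can be sketched as below; RGBD-FA itself is the article's method, so this pipeline is an assumed stand-in:

```python
import numpy as np

def backproject(u, v, z, fx, fy, cx, cy):
    """Pixel (u, v) with depth z -> 3D point in the camera frame."""
    return np.array([(u - cx) * z / fx, (v - cy) * z / fy, z])

def rigid_transform(P, Q):
    """Least-squares R, t with Q ~= R @ P + t (Kabsch/SVD method);
    P, Q are (3, N) arrays of matched back-projected points."""
    p0, q0 = P.mean(axis=1, keepdims=True), Q.mean(axis=1, keepdims=True)
    U, _, Vt = np.linalg.svd((Q - q0) @ (P - p0).T)
    S = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])  # avoid reflection
    R = U @ S @ Vt
    t = q0 - R @ p0
    return R, t
```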

