stereo camera
Recently Published Documents

TOTAL DOCUMENTS: 813 (FIVE YEARS: 155)
H-INDEX: 32 (FIVE YEARS: 4)

2021 ◽ Vol 57 (2) ◽ pp. 025006
Author(s): Sigit Ristanto, Waskito Nugroho, Eko Sulistya, Gede B Suparta

Abstract Measuring the 3D position of an object in real-time, automatically, and with proper documentation is essential for understanding a physical phenomenon. Exploring a stereo camera for developing 3D images is intriguing, since the 3D perception generated by a stereo image pair can be reprojected to recover an object's 3D position at a specific time. This research aimed to develop a device that measures the 3D position of an object in real-time using a stereo camera. The device was constructed from a stereo camera, a tripod, and a mini-PC. Calibration was carried out for position measurement in the X, Y, and Z directions based on the disparity between the two images. A simple 3D position measurement was then carried out based on the calibration results, and it was verified whether the measurement ran in real-time. By applying template matching and triangulation algorithms to the two images, the object's position in 3D coordinates was calculated and recorded automatically. The disparity of the stereo camera varied from 132 pixels to 58 pixels as the object's distance to the camera increased from 30 cm to 70 cm. The setup measured the 3D object position in real-time, with automatic documentation and an average delay of less than 50 ms, on both a notebook and a mini-PC. For the stereo camera used in this experiment, the maximum accuracy of the measurement in the X, Y, and Z directions was ΔX = 0.6 cm, ΔY = 0.2 cm, and ΔZ = 0.8 cm over a measurement range of 30 cm–60 cm. This research is expected to provide new insights into the development of laboratory tools for learning physics, especially mechanics, in schools and colleges.
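The triangulation step described in the abstract can be sketched as follows. For a rectified, parallel stereo pair, depth follows Z = f·b/d from the disparity d; the focal length and baseline values below are illustrative assumptions, not the calibration constants of the actual device.

```python
# Minimal sketch of stereo triangulation: depth from disparity.
# focal_px and baseline_cm are illustrative assumptions, not the
# paper's calibration values.

def depth_from_disparity(disparity_px, focal_px=700.0, baseline_cm=6.0):
    """Return object distance Z (cm) for a rectified, parallel stereo
    pair: Z = f * b / d."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_cm / disparity_px

# Larger disparity -> closer object, matching the reported trend
# (132 px at ~30 cm down to 58 px at ~70 cm).
near = depth_from_disparity(132)
far = depth_from_disparity(58)
assert near < far
```

With real hardware, f and b come out of the calibration step the abstract describes, and the measured disparity comes from the template-matching correspondence between the two images.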


Author(s): Sigit Ristanto, Waskito Nugroho, Eko Sulistya, Gede Bayu Suparta

2021 ◽ Vol 243 ◽ pp. 106067
Author(s): Matthew R. Baker, Kresimir Williams, H.G. Greene, Casey Greufe, Heather Lopes, ...

2021
Author(s): Wei Wang, Hu Sun, Yuqiang Jin, Minglei Fu, Kun Li, ...

Sensors ◽ 2021 ◽ Vol 21 (21) ◽ pp. 7048
Author(s): Yinming Miao, Masahiro Yamaguchi

Direct visual odometry algorithms assume that every frame from the camera has the same photometric characteristics. However, cameras with auto exposure are widely used outdoors, where lighting conditions often change, and vignetting affects pixel brightness across frames even when the exposure time is fixed. We propose an online vignetting correction and exposure time estimation method for stereo direct visual odometry algorithms. Our method works on cameras with a gamma-like response function, estimating the inverse vignetting function and the exposure time ratio between neighboring frames. At the initialization step, stereo matching selects correspondences between the left and right images of the same frame, while feature points provide correspondences between different frames. Our method produced stable correction results in experiments on datasets and on a stereo camera.
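The exposure-ratio part of the abstract can be sketched under the gamma-like response model it names: if pixel value I = (t·E)^γ for exposure time t and scene radiance E, then corresponding points in two frames satisfy I_b/I_a = (t_b/t_a)^γ. The gamma value, the sample intensities, and the median-based estimator below are illustrative assumptions, not the paper's implementation.

```python
# Sketch of exposure-time-ratio estimation between two frames under a
# gamma-like response I = (t * E)**gamma. gamma=2.2 and the median
# estimator are illustrative choices, not the paper's method.

def exposure_ratio(intensities_a, intensities_b, gamma=2.2):
    """Estimate t_b / t_a from corresponding pixel intensities.
    Each correspondence votes (I_b / I_a)**(1/gamma); the median vote
    gives some robustness to mismatched correspondences."""
    votes = sorted((b / a) ** (1.0 / gamma)
                   for a, b in zip(intensities_a, intensities_b) if a > 0)
    return votes[len(votes) // 2]

# Frame B exposed twice as long as frame A: intensities scale by 2**gamma.
a = [10.0, 20.0, 40.0]
b = [v * 2 ** 2.2 for v in a]
print(exposure_ratio(a, b))  # close to 2.0
```

In the actual method, the correspondences would come from the feature matching between frames described above, and the vignetting term would be corrected before taking these ratios.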


2021 ◽ Vol 40 (2)
Author(s): Christopher N. Rooper, Shaun MacNeill, Lynn Lee, Dan McNeill, Kae Lynne Yamanaka, ...

2021 ◽ Vol 11 (18) ◽ pp. 8464
Author(s): Adam L. Kaczmarek, Bernhard Blaschitz

This paper presents research on 3D scanning using a camera array consisting of up to five adjacent cameras. Such an array makes it possible to compute a disparity map with higher precision than a stereo camera, while preserving a stereo camera's advantages, such as the ability to operate over a wide range of distances and in highly illuminated areas. In an outdoor environment, the array is a competitive alternative to other 3D imaging equipment such as structured-light 3D scanners or Light Detection and Ranging (LIDAR). The arrays considered are called Equal Baseline Camera Arrays (EBCA). This paper presents a novel approach to calibrating the array based on self-calibration methods. It also introduces a testbed, released as open source, for developing new algorithms that obtain 3D data from images taken by the array. Moreover, this paper reports new results of using these arrays with different stereo matching algorithms, including an algorithm based on a convolutional neural network and deep learning technology.
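One intuition behind an equal-baseline array can be sketched as follows: because the baselines are equal, each side camera yields a disparity estimate for the same center-camera pixel, and fusing the redundant estimates suppresses matching errors. The fusion rule below (mean of the candidates near the median) is purely an illustrative choice, not the algorithm from the paper.

```python
# Hypothetical sketch of fusing per-camera disparity estimates in an
# Equal Baseline Camera Array (EBCA). The inlier-mean-around-the-median
# rule is an assumption for illustration, not the paper's method.

def fuse_disparities(candidates, tol=1.0):
    """Fuse disparity estimates (pixels) for one center-camera pixel by
    averaging the candidates within `tol` pixels of the median."""
    s = sorted(candidates)
    med = s[len(s) // 2]
    inliers = [d for d in s if abs(d - med) <= tol]
    return sum(inliers) / len(inliers)

# Three side cameras agree near 41 px; one mismatch (60 px) is rejected.
print(fuse_disparities([40.8, 41.1, 60.0, 41.0]))
```

The point of the sketch is the redundancy: a single stereo pair has no second opinion on a bad match, while an array can out-vote it.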


2021 ◽ Vol 15 (03) ◽ pp. 337-357
Author(s): Alexander Julian Golkowski, Marcus Handte, Peter Roch, Pedro J. Marrón

For many application areas, such as autonomous navigation, the ability to accurately perceive the environment is essential. A wide variety of well-researched sensor systems are available for detecting obstacles or navigation targets, and stereo cameras have emerged as a very versatile sensing technology in this regard due to their low hardware cost and high fidelity. Consequently, much work has been done to integrate them into mobile robots. However, the existing literature focuses on the concepts and algorithms used to implement the desired robot functions on top of a given camera setup; the rationale for and impact of choosing that setup are usually neither discussed nor described. Thus, when designing the stereo camera system for a mobile robot, there is little general guidance beyond isolated setups that worked for a specific robot. To close this gap, this paper studies the impact of the physical setup of a stereo camera system in indoor environments. We present the results of an experimental analysis in which a fixed software setup estimates the distance to an object while the camera setup is systematically changed. We vary the three main parameters of the physical camera setup, namely the angle between the cameras, the distance between them, and the field of view, as well as a softer parameter, the resolution. Based on the results, we derive several guidelines on how to choose these parameters for an application.
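The standard geometric argument for why the distance between the cameras matters can be sketched as follows: for a rectified pair with focal length f (pixels) and baseline b, Z = f·b/d, so a disparity error of Δd pixels maps to a depth error of roughly Z²·Δd/(f·b). The numeric values below are illustrative, not from the paper's experiments.

```python
# First-order stereo depth-error model: dZ ≈ Z**2 * dd / (f * b).
# focal_px, baselines, and disp_err_px are illustrative assumptions.

def depth_error(z_m, focal_px, baseline_m, disp_err_px=0.5):
    """Approximate depth uncertainty (m) at distance z_m (m) for a
    rectified stereo pair with the given focal length and baseline."""
    return z_m ** 2 * disp_err_px / (focal_px * baseline_m)

# Doubling the baseline halves the expected depth error at a fixed range,
# which is one reason the physical camera setup matters so much.
e_narrow = depth_error(2.0, 700.0, 0.06)
e_wide = depth_error(2.0, 700.0, 0.12)
assert abs(e_narrow / e_wide - 2.0) < 1e-9
```

The same relation also shows the trade-off the paper's guidelines must navigate: a wider baseline improves depth accuracy but shrinks the overlapping field of view at close range.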

