Constraints for Time-Multiplexed Structured Light with a Hand-held Camera

2021 ◽  
Vol 6 (1) ◽  
pp. 1-3
Author(s):  
Sepehr Ghavam ◽  
Matthew Post ◽  
Mohamed A. Naiel ◽  
Mark Lamm ◽  
Paul Fieguth

Multi-frame structured light in projector-camera systems affords high-density, non-contact 3D surface reconstruction. However, these systems impose strict setup constraints that can be expensive and time-consuming to satisfy. Here, we investigate the conditions under which a projective homography can compensate for small perturbations in pose caused by a hand-held camera. We synthesize data using a pinhole camera model and use it to determine the average 2D reprojection error per point correspondence. This error map is grouped into regions with specified upper bounds to classify which regions produce sufficiently small error to be considered feasible for a structured-light projector-camera system with a hand-held camera. Empirical results demonstrate that sub-pixel reprojection accuracy is achievable under feasible geometric constraints.
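
A minimal sketch of the evaluation described above, assuming a synthetic pinhole camera, a small hand-held perturbation of its pose, and a projective homography fitted between the reference and perturbed views; the intrinsics, point cloud and perturbation values are illustrative, not those used by the authors.

```python
import numpy as np
import cv2

# Hypothetical pinhole intrinsics and a synthetic 3D point cloud (illustrative values).
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
rng = np.random.default_rng(0)
points_3d = rng.uniform([-1, -1, 4], [1, 1, 6], size=(200, 3)).astype(np.float32)

def project(points, rvec, tvec):
    """Project 3D points with an ideal pinhole model (no lens distortion)."""
    img, _ = cv2.projectPoints(points, rvec, tvec, K, None)
    return img.reshape(-1, 2)

# Reference pose and a small hand-held perturbation (rotation in rad, translation in scene units).
ref = project(points_3d, np.zeros(3), np.zeros(3))
perturbed = project(points_3d, np.array([0.01, -0.005, 0.002]), np.array([0.01, 0.0, 0.005]))

# Fit a projective homography mapping the perturbed view back onto the reference view
# and measure the average 2D reprojection error it leaves per point correspondence.
H, _ = cv2.findHomography(perturbed, ref, method=0)
warped = cv2.perspectiveTransform(perturbed.reshape(-1, 1, 2), H).reshape(-1, 2)
print(f"mean reprojection error: {np.mean(np.linalg.norm(warped - ref, axis=1)):.4f} px")
```

Sweeping the perturbation magnitudes over a grid of values yields the kind of error map the abstract describes.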

Author(s):  
D. Dahlke ◽  
M. Geßner ◽  
H. Meißner ◽  
K. Stebner ◽  
D. Grießbach ◽  
...  

This paper presents a laboratory approach for the geometric calibration of airborne camera systems. The setup uses an incoming laser beam, which is split by Diffractive Optical Elements (DOE) into a number of beams with precisely known propagation directions. Each point of the diffraction pattern represents a point at infinity and is invariant against translation. A single image is sufficient for a complete camera calibration in accordance with classical camera calibration methods using the pinhole camera model and a distortion model. The presented method is time-saving, since complex bundle adjustment procedures with several images are not necessary. It is well suited for use with frame camera systems, but in principle it also works for pushbroom scanners. To prove its reliability, a conventional test-field calibration is compared against the presented approach, showing that all estimated camera parameters differ only insignificantly. Furthermore, a test flight over the Zeche Zollern reference target has been conducted. The aerotriangulation results show that calibrating an airborne camera system with a DOE is a feasible solution.
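
The core geometric idea lends itself to a short sketch: each DOE beam is a point at infinity, so its image location depends only on the camera rotation and intrinsics, which can be recovered from a single image by non-linear least squares. The beam directions, true parameters and the distortion-free model below are assumptions for illustration, not the calibration pipeline of the paper.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

# Hypothetical DOE beam directions (unit vectors, known from the optical design).
rng = np.random.default_rng(1)
dirs = rng.normal(size=(50, 3)) + np.array([0, 0, 10.0])
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)

def project(params, directions):
    """Pinhole projection of points at infinity: only rotation and intrinsics matter."""
    fx, fy, cx, cy, rx, ry, rz = params
    rays = Rotation.from_rotvec([rx, ry, rz]).apply(directions)
    return np.column_stack([fx * rays[:, 0] / rays[:, 2] + cx,
                            fy * rays[:, 1] / rays[:, 2] + cy])

# Synthetic "image" of the diffraction pattern taken with unknown true parameters.
true_params = np.array([1200.0, 1190.0, 640.0, 480.0, 0.02, -0.01, 0.005])
observed = project(true_params, dirs)

# Single-image calibration: recover intrinsics and rotation by least squares.
def residuals(params):
    return (project(params, dirs) - observed).ravel()

fit = least_squares(residuals, x0=[1000, 1000, 600, 500, 0, 0, 0])
print("estimated fx, fy, cx, cy:", np.round(fit.x[:4], 2))
```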


Author(s):  
Thieu Quang Minh Nhat ◽  
Hyeung-Sik Choi ◽  
Mai The Vu ◽  
Joono Sur ◽  
Jin-Il Kang ◽  
...  

Detecting the relative position of the docking station is a very important issue for the homing of AUVs (Autonomous Underwater Vehicles). To detect the position of a light source, a pinhole camera model structure has previously been proposed. However, due to sensor-resolution limits and the distortion errors of the pinhole camera system, applying the camera to docking in turbid sea environments is nearly impossible. In this paper, a new method for detecting the position of the docking station using a light source is presented, together with a newly developed optical sensor that senses the light source much more easily than a camera system for underwater homing of the AUV. In addition, to improve the system, a neural network (NN) algorithm is proposed that constructs a model relating the light inputs to the outputs of the optical sensor developed in this study. To evaluate the performance of the NN algorithm, experiments were first performed in air. The results show that the AUV docking system using the NN model performs better than the pinhole camera model.
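
A minimal sketch of the kind of NN regression described, assuming the optical sensor yields a small feature vector of light readings and the target is the 2D position of the light source; the synthetic photodiode-style data and network size are illustrative, not the authors' hardware or model.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

# Hypothetical training data: each row is a set of optical-sensor readings,
# each target is the corresponding 2D offset of the light source (illustrative only).
rng = np.random.default_rng(2)
positions = rng.uniform(-1.0, 1.0, size=(2000, 2))
sensor_readings = np.column_stack([
    np.exp(-np.sum((positions - c) ** 2, axis=1)) for c in
    [(-1, -1), (-1, 1), (1, -1), (1, 1)]          # four photodiode-like channels
]) + rng.normal(scale=0.01, size=(2000, 4))       # measurement noise

X_train, X_test, y_train, y_test = train_test_split(sensor_readings, positions, random_state=0)

# Small fully connected network mapping sensor readings to light-source position.
model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
model.fit(X_train, y_train)
print("mean position error:",
      np.mean(np.linalg.norm(model.predict(X_test) - y_test, axis=1)))
```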


2018 ◽  
Vol 10 (8) ◽  
pp. 1298 ◽  
Author(s):  
Lei Yin ◽  
Xiangjun Wang ◽  
Yubo Ni ◽  
Kai Zhou ◽  
Jilong Zhang

Multi-camera systems are widely used in the fields of airborne remote sensing and unmanned aerial vehicle imaging. The measurement precision of these systems depends on the accuracy of the extrinsic parameters, so it is important to accurately calibrate the extrinsic parameters between the onboard cameras. Unlike conventional multi-camera calibration methods with a common field of view (FOV), multi-camera calibration without overlapping FOVs presents certain difficulties. In this paper, we propose a calibration method for a multi-camera system without common FOVs, intended for aerial photogrammetry. First, the extrinsic parameters of any two cameras in the multi-camera system are calibrated, and the extrinsic matrix is optimized using the re-projection error. Then, the extrinsic parameters of each camera are unified to the system reference coordinate system using a global optimization method. A simulation experiment and a physical verification experiment were designed to validate the method. The experimental results show that the method is practical: the rotation error of the cameras' extrinsic parameters is less than 0.001 rad and the translation error is less than 0.08 mm.
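
A sketch of the bookkeeping the second step implies, assuming the pairwise extrinsics have already been estimated as 4x4 homogeneous transforms: unifying every camera into the reference coordinate system is a chain of matrix compositions, which the paper then refines by global optimization. The rotation angles and translations below are illustrative values only.

```python
import numpy as np

def make_transform(rvec_deg, t):
    """Build a 4x4 homogeneous transform from Euler angles (deg) and a translation."""
    rx, ry, rz = np.deg2rad(rvec_deg)
    Rx = np.array([[1, 0, 0], [0, np.cos(rx), -np.sin(rx)], [0, np.sin(rx), np.cos(rx)]])
    Ry = np.array([[np.cos(ry), 0, np.sin(ry)], [0, 1, 0], [-np.sin(ry), 0, np.cos(ry)]])
    Rz = np.array([[np.cos(rz), -np.sin(rz), 0], [np.sin(rz), np.cos(rz), 0], [0, 0, 1]])
    T = np.eye(4)
    T[:3, :3] = Rz @ Ry @ Rx
    T[:3, 3] = t
    return T

# Hypothetical pairwise calibration results (camera i frame -> camera i+1 frame).
T_1_to_2 = make_transform([0.5, -0.2, 0.1], [0.30, 0.00, 0.00])
T_2_to_3 = make_transform([-0.3, 0.4, 0.0], [0.25, 0.05, 0.00])

# Unify all cameras into the reference frame of camera 1 by composing transforms.
T_1_to_1 = np.eye(4)
T_1_to_3 = T_2_to_3 @ T_1_to_2
for name, T in [("cam1", T_1_to_1), ("cam2", T_1_to_2), ("cam3", T_1_to_3)]:
    print(name, "position in cam1 frame:", np.round(np.linalg.inv(T)[:3, 3], 3))
```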


2021 ◽  
Vol 15 (03) ◽  
pp. 337-357
Author(s):  
Alexander Julian Golkowski ◽  
Marcus Handte ◽  
Peter Roch ◽  
Pedro J. Marrón

For many application areas such as autonomous navigation, the ability to accurately perceive the environment is essential. For this purpose, a wide variety of well-researched sensor systems are available that can be used to detect obstacles or navigation targets. Stereo cameras have emerged as a very versatile sensing technology in this regard due to their low hardware cost and high fidelity. Consequently, much work has been done to integrate them into mobile robots. However, the existing literature focuses on presenting the concepts and algorithms used to implement the desired robot functions on top of a given camera setup. As a result, the rationale behind and the impact of choosing this camera setup are usually neither discussed nor described. Thus, when designing the stereo camera system for a mobile robot, there is not much general guidance beyond isolated setups that worked for a specific robot. To close this gap, this paper studies the impact of the physical setup of a stereo camera system in indoor environments. To do this, we present the results of an experimental analysis in which we use a given software setup to estimate the distance to an object while systematically changing the camera setup. We vary the three main parameters of the physical camera setup, namely the angle between the cameras, the distance between them, and the field of view, as well as a softer parameter, the resolution. Based on the results, we derive several guidelines on how to choose these parameters for an application.
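
For context, the quantity being probed when the baseline, field of view (hence focal length in pixels) and resolution are varied is the parallel-axis stereo triangulation relation Z = f·B/d. The short sketch below, with assumed example values, shows how a one-pixel disparity error propagates into a depth error.

```python
# Depth from disparity for an idealized parallel-axis stereo rig: Z = f * B / d.
# The focal length, baseline and target depth below are assumed example values.
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    return focal_px * baseline_m / disparity_px

focal_px = 700.0        # focal length in pixels (depends on FOV and resolution)
baseline_m = 0.12       # distance between the two cameras
true_depth_m = 3.0

disparity = focal_px * baseline_m / true_depth_m
# Effect of a one-pixel disparity (matching) error on the estimated depth.
for err in (-1.0, 0.0, 1.0):
    z = depth_from_disparity(focal_px, baseline_m, disparity + err)
    print(f"disparity error {err:+.0f}px -> depth {z:.3f} m")
```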


Author(s):  
Chris Eddy ◽  
Christopher de Saxe ◽  
David Cebon

Heavy goods vehicles are overrepresented in cyclist fatality statistics in the United Kingdom relative to their proportion of total traffic volume. In particular, the statistics highlight a problem for vehicles turning left across the path of a cyclist on their inside. In this article, we present a camera-based system to detect and track cyclists in the blind spot. The system uses boosted classifiers and geometric constraints to detect cyclist wheels, and Canny edge detection to locate the ground contact point. The locations of these points are mapped into physical coordinates using a calibration system based on the ground plane. A Kalman filter is used to track and predict the future motion of the cyclist. Full-scale tests were conducted using a construction vehicle fitted with two cameras, and the results were compared with measurements from an ultrasonic-sensor system. Errors were comparable to those of the ultrasonic system, with an average error standard deviation of 4.3 cm when the cyclist was 1.5 m from the heavy goods vehicle, and 7.1 cm at a distance of 1 m. When results were compared to manually extracted cyclist position data, errors were less than 4 cm at separations of 1.5 m and 1 m. Compared to the ultrasonic system, the camera system requires simple hardware and can easily differentiate cyclists from stationary or moving background objects such as parked cars or roadside furniture. However, the cameras suffer from reduced robustness and accuracy at close range and cannot operate in low-light conditions.
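
A minimal sketch of the two geometric ingredients mentioned: a ground-plane homography that maps a detected wheel ground-contact pixel to road coordinates, and a constant-velocity Kalman filter for tracking. The calibration points, frame rate and noise covariances are assumed placeholders, not the authors' system parameters.

```python
import numpy as np
import cv2

# Ground-plane calibration: image pixels of four known road markings and their
# physical coordinates in metres (illustrative values, not the authors' calibration).
image_pts = np.array([[102, 540], [538, 535], [180, 300], [460, 298]], dtype=np.float32)
ground_pts = np.array([[0.0, 1.0], [2.0, 1.0], [0.0, 4.0], [2.0, 4.0]], dtype=np.float32)
H, _ = cv2.findHomography(image_pts, ground_pts)

def to_ground(pixel):
    """Map a detected wheel ground-contact pixel to road-plane coordinates (m)."""
    p = cv2.perspectiveTransform(np.array([[pixel]], dtype=np.float32), H)
    return p[0, 0]

# Constant-velocity Kalman filter over (x, y, vx, vy) for tracking the cyclist.
kf = cv2.KalmanFilter(4, 2)
dt = 1 / 30.0  # assumed frame interval
kf.transitionMatrix = np.array([[1, 0, dt, 0], [0, 1, 0, dt],
                                [0, 0, 1, 0], [0, 0, 0, 1]], dtype=np.float32)
kf.measurementMatrix = np.eye(2, 4, dtype=np.float32)
kf.processNoiseCov = 1e-3 * np.eye(4, dtype=np.float32)
kf.measurementNoiseCov = 5e-2 * np.eye(2, dtype=np.float32)
kf.errorCovPost = np.eye(4, dtype=np.float32)

measurement = to_ground([320.0, 420.0])
kf.predict()
kf.correct(measurement.reshape(2, 1).astype(np.float32))
print("filtered cyclist position:", kf.statePost[:2].ravel())
```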


Author(s):  
Michael Gruber ◽  
Bernhard Schachinger ◽  
Marc Muick ◽  
Christian Neuner ◽  
Helfried Tschemmernegg

We present details of the calibration and validation procedure for UltraCam aerial camera systems. Results from the laboratory calibration and from validation flights are presented for both the large-format nadir cameras and the oblique cameras. In this contribution we therefore show results from the UltraCam Eagle and the UltraCam Falcon, both nadir mapping cameras, and from the UltraCam Osprey, our oblique camera system, which offers a mapping-grade nadir component together with four oblique camera heads. The geometric processing after the flight mission is covered by the UltraMap software product, so we present details of that workflow as well. The first part is the initial post-processing, which combines image information with camera parameters derived from the laboratory calibration. The second part, the traditional automated aerial triangulation (AAT), is the step from single images to blocks and enables an additional optimization process. We also present some special features of our software that are designed to help the operator analyze large blocks of aerial images and judge the quality of the photogrammetric set-up.


2020 ◽  
Vol 61 (82) ◽  
pp. 127-138
Author(s):  
Scott Sorensen ◽  
Vinit Veerendraveer ◽  
Wayne Treible ◽  
Andrew R. Mahoney ◽  
Chandra Kambhamettu

The Polar Sea Ice Topography REconstruction System, or PSITRES, is a 3D camera system designed to continuously monitor an area of ice and water adjacent to an ice-going vessel. Camera systems aboard ships in the polar regions are common; however, the application of computer vision techniques to extract high-level information from the imagery is infrequent. Many of the existing systems are built for human involvement throughout the process and lack the automation necessary for round-the-clock use. The PSITRES was designed with computer vision in mind. It can capture images continuously for days on end with limited oversight. We have applied the system in different ice observing scenarios. The PSITRES was deployed on three research expeditions in the Arctic and Subarctic, and we present applications in measuring ice concentration, melt pond fraction and the presence of algae. Systems like the PSITRES and the computer vision algorithms applied to their imagery represent steps toward automatically observing, evaluating and analyzing ice and the environment around ships in ice-covered waters.
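
As an illustration of the kind of automated measurement mentioned, a crude ice-concentration estimate from a single rectified sea-surface image is sketched below; the brightness threshold and the assumption that bright pixels are ice are placeholders, not the PSITRES algorithms, which the abstract does not detail.

```python
import cv2

def ice_concentration(image_bgr, brightness_threshold=170):
    """Crude ice-concentration estimate: fraction of bright pixels in a sea-surface image.
    The threshold is an assumed placeholder, not the PSITRES classifier."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    ice_mask = gray >= brightness_threshold
    return float(ice_mask.mean())

# Usage with any sea-surface frame captured by the camera system (hypothetical file name):
# frame = cv2.imread("frame_000123.png")
# print(f"ice concentration: {ice_concentration(frame):.2%}")
```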


2020 ◽  
Vol 12 (3) ◽  
pp. 394 ◽  
Author(s):  
Donatus Bapentire Angnuureng ◽  
Philip-Neri Jayson-Quashigah ◽  
Rafael Almar ◽  
Thomas Christian Stieglitz ◽  
Edward Jamal Anthony ◽  
...  

Video camera systems have been used for nearly three decades to monitor coastal dynamics. They facilitate a high-frequency analysis of spatiotemporal shoreline mobility. The use of video cameras to measure beach intertidal profile evolution has not been standardized globally, and obtaining accurate results requires validation against other techniques. Applications are mostly site-specific due to differences in installation. The present study examines the accuracy of intertidal topographic data derived from a video camera system compared to data acquired with unmanned aerial vehicle (UAV, or drone) surveys of a reflective beach. Using one year of 15-min video data and one year of monthly UAV observations, the intertidal profiles show good agreement. Underestimation of intertidal profile elevations by the camera-based method is possibly linked to the camera view angle, rectification and gaps in the data. The resolution of the video-derived intertidal topographic profiles nevertheless confirmed the suitability of the method for providing beach mobility surveys matching those required for a quantitative analysis of nearshore changes. Beach slopes were found to vary between 0.1 and 0.7, with a steep slope from May to July 2018 and a gentle slope in December 2018. Large but short-lived beach variations occurred between August 2018 and October 2018 and corresponded to relatively high wave events. In one year, this dynamic beach lost 7 m. At this rate, and as also observed at other beaches nearby, important coastal facilities and infrastructure will be prone to erosion. The data suggest that a low-cost shore-based camera, particularly when used in a network along the coast, can produce profile data for effective coastal management in West Africa and elsewhere.
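
A small sketch of how a beach slope in the reported 0.1 to 0.7 range could be computed once a video- or UAV-derived intertidal profile is available as cross-shore distance and elevation samples; the profile values below are illustrative only.

```python
import numpy as np

# Hypothetical intertidal profile: cross-shore distance (m) and elevation (m) samples
# extracted between the low- and high-tide shoreline positions (illustrative values).
cross_shore_m = np.array([0.0, 2.0, 4.0, 6.0, 8.0, 10.0])
elevation_m = np.array([0.0, 0.35, 0.72, 1.05, 1.42, 1.80])

# Beach slope as the linear trend of elevation with cross-shore distance (tan(beta)).
slope, intercept = np.polyfit(cross_shore_m, elevation_m, 1)
print(f"beach slope tan(beta) = {slope:.2f}")
```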

