Multi-Sensor Extrinsic Calibration Using an Extended Set of Pairwise Geometric Transformations

Sensors
2020
Vol 20 (23)
pp. 6717
Author(s):  
Vitor Santos ◽  
Daniela Rato ◽  
Paulo Dias ◽  
Miguel Oliveira

Systems composed of multiple sensors for exteroceptive perception, such as mobile robots or highly monitored spaces, are becoming increasingly common. However, to combine and fuse those sensors into a larger and more robust representation of the perceived scene, the sensors need to be properly registered with one another, that is, all relative geometric transformations must be known. This calibration procedure is challenging because, traditionally, human intervention is required to varying extents. This paper proposes a nearly automatic method in which the best set of geometric transformations among any number of sensors is obtained by processing and combining the individual pairwise transformations obtained from an experimental method. Besides eliminating experimental outliers with a standard criterion, the method exploits the possibility of obtaining better geometric transformations between all pairs of sensors by combining them, within some restrictions, into a more precise transformation, and thus a better calibration. Although other data sources are possible, in this approach each sensor acquires a 3D point cloud corresponding to the successive centers of a moving ball in its field of view. The method can be applied to any sensor able to detect the ball and the 3D position of its center, namely LIDARs, mono cameras (visual or infrared), stereo cameras, and TOF cameras. Results demonstrate that calibration is improved when compared to methods in previous works that do not address the outlier problem and, depending on the context, as explained in the results section, the multi-pairwise technique can be used in two different methodologies to reduce uncertainty in the calibration process.
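
The core combination idea can be illustrated in a few lines. The following is a minimal sketch, not the authors' implementation: it assumes each pairwise calibration is expressed as a 4x4 homogeneous transform and shows how an indirect estimate of one pair, obtained by chaining through a third sensor, can be compared against the direct estimate to flag outliers (all matrices below are hypothetical stand-ins).

```python
import numpy as np

def compose(t_ab: np.ndarray, t_bc: np.ndarray) -> np.ndarray:
    """Chain two 4x4 homogeneous transforms: points in A -> B -> C."""
    return t_bc @ t_ab

def translation_gap(t1: np.ndarray, t2: np.ndarray) -> float:
    """Euclidean distance between the translation parts of two transforms."""
    return float(np.linalg.norm(t1[:3, 3] - t2[:3, 3]))

# Hypothetical pairwise estimates, as obtained from a ball-detection experiment.
T_AB = np.eye(4); T_AB[:3, 3] = [1.00, 0.00, 0.20]
T_BC = np.eye(4); T_BC[:3, 3] = [0.50, 0.10, 0.00]
T_AC_direct = np.eye(4); T_AC_direct[:3, 3] = [1.52, 0.09, 0.21]

# Indirect estimate of A -> C obtained by chaining through sensor B.
T_AC_indirect = compose(T_AB, T_BC)

# If the direct and indirect estimates disagree strongly, one of the
# pairwise transforms is likely an outlier and can be discarded.
print(f"gap between direct and indirect A->C: "
      f"{translation_gap(T_AC_direct, T_AC_indirect):.3f} m")
```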

Author(s):  
T. Guo ◽  
A. Capra ◽  
M. Troyer ◽  
A. Gruen ◽  
A. J. Brooks ◽  
...  

Recent advances in the automation of photogrammetric 3D modelling software packages have stimulated interest in reconstructing highly accurate 3D object geometry in unconventional environments such as underwater, using simple and low-cost camera systems. The accuracy of underwater 3D modelling is affected by more parameters than in single-medium cases. This study is part of a larger project on 3D measurements of temporal change of coral cover in tropical waters. It compares the accuracies of 3D point clouds generated from images acquired with a system camera mounted in an underwater housing and with the popular GoPro cameras. A precisely measured calibration frame was placed in the target scene in order to provide accurate control information and to quantify the errors of the modelling procedure. In addition, several objects (cinder blocks) of various shapes were arranged in air and underwater, and 3D point clouds were generated by automated image matching. These were used to examine the relative accuracy of the point cloud generation by comparing the point clouds of the individual objects with the same objects measured by the system camera in air (the best possible values). Given a working distance of about 1.5 m, the GoPro camera can achieve a relative accuracy of 1.3 mm in air and 2.0 mm in water. The system camera achieved an accuracy of 1.8 mm in water, which meets our requirements for coral measurement in this system.
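
As an illustration of the relative-accuracy comparison described above, the sketch below computes the RMSE of nearest-neighbour distances from a test cloud to a reference cloud. This is one plausible way to quantify such deviations, run here on synthetic stand-in data rather than the study's coral measurements.

```python
import numpy as np
from scipy.spatial import cKDTree

def cloud_to_cloud_rmse(test_pts: np.ndarray, ref_pts: np.ndarray) -> float:
    """RMSE of nearest-neighbour distances from a test cloud to a reference."""
    tree = cKDTree(ref_pts)
    d, _ = tree.query(test_pts, k=1)
    return float(np.sqrt(np.mean(d ** 2)))

# Synthetic stand-in data: a reference object cloud and a noisy test cloud.
rng = np.random.default_rng(0)
reference = rng.uniform(0, 0.4, size=(5000, 3))          # e.g. in-air system-camera cloud
test = reference + rng.normal(0, 0.002, size=(5000, 3))  # e.g. underwater GoPro cloud

print(f"relative accuracy: {cloud_to_cloud_rmse(test, reference) * 1000:.2f} mm")
```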


Author(s):  
J. Otepka ◽  
G. Mandlburger ◽  
W. Karel ◽  
B. Wöhrer ◽  
C. Ressl ◽  
...  

Abstract. Modern data acquisition with active or passive photogrammetric imaging techniques generally results in 3D point clouds. Depending on the acquisition or processing method, the spacing of the individual points is either uniform or irregular. In the latter case, the neighbourhood definitions used for digital images (4- or 8-neighbourhood, etc.) cannot be applied. Instead, analysis requires a suitable definition of the local point neighbourhood. Conventional k-nearest-neighbour or fixed-distance searches often produce sub-optimal neighbourhoods that suffer from the inhomogeneous point distribution. In this article, we generalize the neighbourhood definition and present a generic spatial search framework which explicitly deals with arbitrary point patterns and aims at optimizing local point selection for specific processing tasks such as interpolation, surface normal estimation, point feature extraction, and spatial segmentation. The framework provides atomic 2D and 3D search strategies: (i) k-nearest neighbour, (ii) region query, (iii) cell-based selection, and (iv) quadrant/octant-based selection. The individual strategies can be freely combined to form complex, conditional search queries as well as specifically tailored point sub-selections. The benefits of such a comprehensive neighbourhood search approach are showcased for feature extraction and surface interpolation of irregularly distributed points.
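
To make the idea of combinable atomic strategies concrete, the following sketch (not the authors' framework, whose API is not given in the abstract) chains a k-nearest-neighbour query with an octant-based sub-selection, so that a strongly anisotropic point pattern cannot dominate the resulting neighbourhood.

```python
import numpy as np
from scipy.spatial import cKDTree

def octant_balanced_neighbours(points, query, k=32, per_octant=3):
    """k-NN search followed by an octant-based sub-selection: keep at most
    `per_octant` of the nearest neighbours in each of the 8 octants around
    the query point, to balance inhomogeneous point distributions."""
    tree = cKDTree(points)
    dist, idx = tree.query(query, k=k)
    selected, counts = [], {}
    for d, i in zip(dist, idx):
        rel = points[i] - query
        octant = tuple((rel >= 0).astype(int))   # one of 8 sign patterns
        if counts.get(octant, 0) < per_octant:
            counts[octant] = counts.get(octant, 0) + 1
            selected.append(i)
    return np.array(selected)

pts = np.random.default_rng(1).uniform(0, 10, size=(2000, 3))
print(octant_balanced_neighbours(pts, query=np.array([5.0, 5.0, 5.0])))
```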


2014
Vol 2014
pp. 1-12
Author(s):  
Wei Song ◽  
Seoungjae Cho ◽  
Yulong Xi ◽  
Kyungeun Cho ◽  
Kyhyun Um

A mobile robot mounted with multiple sensors is used to rapidly collect 3D point clouds and video images so as to allow accurate terrain modeling. In this study, we develop a real-time terrain storage generation and representation system comprising a nonground point database (PDB), a ground mesh database (MDB), and a texture database (TDB). A voxel-based flag map is proposed for incrementally registering large-scale point clouds in a terrain model in real time. We quantize the 3D point clouds into the 3D grid of the flag map, which serves as a lookup table for removing redundant points. We integrate the large-scale 3D point clouds into the nonground PDB and a node-based terrain mesh using the CPU. Subsequently, we program a graphics processing unit (GPU) to generate the TDB by mapping the triangles of the terrain mesh onto the captured video images. Finally, we produce a nonground voxel map and a ground textured mesh as the terrain reconstruction result. Our proposed methods were tested in an outdoor environment. The results show that the proposed system rapidly generates terrain storage and provides high-resolution terrain representation for mobile mapping services, as well as a graphical user interface between remote operators and mobile robots.
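
A minimal sketch of the voxel-based flag map idea follows, assuming the simplest possible realization: occupied voxel keys are stored in a set, and any incoming point whose voxel is already flagged is dropped as redundant. The voxel size and names below are illustrative, not taken from the paper.

```python
import numpy as np

def voxel_flag_filter(points: np.ndarray, voxel_size: float = 0.1) -> np.ndarray:
    """Keep only the first point that falls into each voxel.

    The set of occupied voxel keys acts as the 'flag map': points whose
    voxel is already flagged are treated as redundant and dropped, which
    keeps incremental registration of large point streams cheap.
    """
    flags = set()
    kept = []
    for p in points:
        key = tuple(np.floor(p / voxel_size).astype(int))
        if key not in flags:
            flags.add(key)
            kept.append(p)
    return np.asarray(kept)

stream = np.random.default_rng(2).uniform(0, 5, size=(100_000, 3))
print(voxel_flag_filter(stream).shape)   # far fewer points than the input
```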


Author(s):  
M. Hassanein ◽  
A. Moussa ◽  
N. El-Sheimy

In the last few years, multi-camera and LIDAR systems have drawn the attention of the mapping community. They have been deployed on different mobile mapping platforms. The different uses of these platforms, especially UAVs, have created new applications and developments which require fast and accurate results. The successful calibration of such systems is a key factor in achieving accurate results and in the successful processing of the system measurements, especially given the different types of measurements provided by the LIDAR and the cameras. System calibration aims to estimate the geometric relationships between the different system components. A number of applications require the systems to be ready for operation in a short time, especially disaster monitoring applications. Moreover, many present system calibration techniques are constrained by the need for special lab arrangements for the calibration procedures. In this paper, a new technique for the calibration of integrated LIDAR and multi-camera systems is presented. The proposed technique offers a calibration solution that overcomes the need for special labs for standard calibration procedures. In the proposed technique, 3D reconstruction of automatically detected and matched image points is used to generate a sparse image-driven point cloud; then, a registration between the LIDAR-generated 3D point cloud and the image-driven 3D point cloud takes place to estimate the geometric relationships between the cameras and the LIDAR. In the presented technique, a simple 3D artificial target is used to simplify the lab requirements for the calibration procedure. The target is composed of three intersecting plates. This target geometry was chosen to ensure enough conditions for the convergence of the registration between the 3D point clouds constructed by the two systems. The achieved results of the proposed approach prove its ability to provide an adequate and fully automated calibration without sophisticated calibration arrangement requirements. The proposed technique thus shows high potential for system calibration in many applications, especially those with critical logistic and time constraints such as disaster monitoring.
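
The registration step could look roughly like the sketch below, assuming the Open3D library is available and the image-driven cloud has already been scaled to metric units. The function name and parameters are hypothetical, and point-to-point ICP is used here as a generic stand-in for the registration described above.

```python
import numpy as np
import open3d as o3d  # assumption: Open3D is available for the registration step

def register_clouds(image_pts: np.ndarray, lidar_pts: np.ndarray,
                    init: np.ndarray, max_dist: float = 0.05) -> np.ndarray:
    """Estimate the rigid transform aligning the image-driven sparse cloud
    to the LIDAR cloud (camera-to-LIDAR extrinsics) via point-to-point ICP.
    Assumes the image-driven cloud is already in metric units."""
    src = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(image_pts))
    dst = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(lidar_pts))
    result = o3d.pipelines.registration.registration_icp(
        src, dst, max_dist, init,
        o3d.pipelines.registration.TransformationEstimationPointToPoint())
    return np.asarray(result.transformation)  # 4x4 camera -> LIDAR transform

# Usage (with real data): T = register_clouds(sfm_pts, lidar_pts, init=np.eye(4))
```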


2020
Vol 12 (7)
pp. 1137
Author(s):  
Balázs Nagy ◽  
Csaba Benedek

Sensor fusion is one of the main challenges in self-driving and robotics applications. In this paper, we propose an automatic, online and target-less camera-Lidar extrinsic calibration approach. We adopt a structure from motion (SfM) method to generate 3D point clouds from the camera data which can be matched to the Lidar point clouds; thus, we address the extrinsic calibration problem as a registration task in the 3D domain. The core step of the approach is a two-stage transformation estimation: first, we introduce an object-level coarse alignment algorithm operating in the Hough space to transform the SfM-based and the Lidar point clouds into a common coordinate system. Thereafter, we apply a control-point-based nonrigid transformation refinement step to register the point clouds more precisely. Finally, we calculate the correspondences between the 3D Lidar points and the pixels in the 2D camera domain. We evaluated the method in various real-life traffic scenarios in Budapest, Hungary. The results show that our proposed extrinsic calibration approach provides accurate and robust parameter settings on-the-fly.
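
The final correspondence step amounts to projecting Lidar points into the image with the estimated extrinsics. The sketch below shows a standard pinhole projection under that assumption; it is a generic formulation, not the authors' code, and the intrinsic matrix K is an illustrative stand-in.

```python
import numpy as np

def project_lidar_to_image(pts_lidar: np.ndarray, T_cam_lidar: np.ndarray,
                           K: np.ndarray) -> np.ndarray:
    """Project 3D Lidar points into the camera image with a pinhole model.
    T_cam_lidar is the 4x4 extrinsic transform estimated by the calibration;
    K is the 3x3 camera intrinsic matrix (assumed known)."""
    homo = np.hstack([pts_lidar, np.ones((len(pts_lidar), 1))])
    cam = (T_cam_lidar @ homo.T)[:3]          # points in the camera frame
    cam = cam[:, cam[2] > 0]                  # keep points in front of the camera
    uv = (K @ cam) / cam[2]                   # perspective division
    return uv[:2].T                           # (u, v) pixel coordinates

# Illustrative intrinsics and points; in practice both come from calibration data.
K = np.array([[800.0, 0, 640], [0, 800.0, 360], [0, 0, 1]])
pts = np.random.default_rng(5).uniform(-5, 5, size=(100, 3)) + [0, 0, 10]
print(project_lidar_to_image(pts, np.eye(4), K)[:3])
```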


Author(s):  
V. Walter ◽  
M. Kölle ◽  
D. Collmar ◽  
Y. Zhang

Abstract. In this article, we present a two-level approach for the crowd-based collection of vehicles from 3D point clouds. In the first level, crowdworkers are asked to identify the coarse positions of vehicles in 2D rasterized shadings derived from the 3D point cloud. In order to increase the quality of the results, we utilize the wisdom-of-the-crowd principle, which says that averaging multiple estimates from a group of individuals provides an outcome that is often better than most of the underlying estimates, or even better than the best single estimate. For this, each crowd job is duplicated 10 times and the multiple results are integrated with the DBSCAN clustering algorithm. In the second level, we use the integrated results as prior information for extracting small subsets of the 3D point cloud, which are then presented to crowdworkers for approximating the included vehicle by means of a Minimum Bounding Box (MBB). Again, the crowd jobs are duplicated 10 times and an average bounding box is calculated from the individual bounding boxes. We discuss the quality of the results of both steps and show that the wisdom of the crowd significantly improves the completeness as well as the geometric quality. With tenfold acquisition, we achieved a completeness of 93.3 percent and a geometric deviation of less than 1 m for 95 percent of the collected vehicles.
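
The first-level integration can be sketched as follows, assuming 2D click coordinates and illustrative DBSCAN parameters (the exact eps and min_samples values are not given in the abstract): clusters of repeated annotations are averaged into one position per vehicle, and isolated clicks are rejected as noise.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def integrate_crowd_clicks(clicks: np.ndarray, eps: float = 2.0,
                           min_samples: int = 5) -> np.ndarray:
    """Fuse repeated crowd annotations of vehicle positions.

    Clicks from the duplicated jobs that refer to the same vehicle form a
    dense cluster; averaging each cluster yields the integrated position,
    while sparse clicks (label -1) are rejected as noise."""
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(clicks)
    return np.array([clicks[labels == c].mean(axis=0)
                     for c in sorted(set(labels)) if c != -1])

# Synthetic stand-in: two vehicles, 10 noisy 2D clicks each, plus one stray click.
rng = np.random.default_rng(3)
clicks = np.vstack([rng.normal([10, 10], 0.5, (10, 2)),
                    rng.normal([40, 25], 0.5, (10, 2)),
                    [[70.0, 70.0]]])
print(integrate_crowd_clicks(clicks))   # approximately [[10, 10], [40, 25]]
```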


Author(s):  
P.-R. Hirt ◽  
L. Hoegner ◽  
U. Stilla

Abstract. In our daily lives, trees can be seen as the tallest and most noticeable representatives of the plant kingdom. Especially in urban areas, the individual tree is of high significance and responsible for manifold positive effects on the environment and residents. In the context of urban tree registers, and thus the monitoring of urban vegetation, we propose a general concept for the segmentation of trees from 3D point clouds. Mobile Laser Scanning (MLS) is introduced as the preferred sensor. Based on an analysis of earlier work in this field, we gather arguments and methods in order to embed segmentation in the bigger frame of a tree register workflow, including detailed modeling and change detection. Our concept for segmentation is based on a voxel structure. In a first step, region growing approaches are used for ground removal and rough segmentation. Later, graph-based optimization separates neighboring trees. For now, only the general concept is introduced; quantitative analysis and optimization of the individual steps will follow in future work.
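
As a rough illustration of the voxel-based region growing step, the sketch below flood-fills over the 26-neighbourhood of occupied voxels. The voxel size and seed choice are illustrative assumptions, and the graph-based separation of neighboring trees is not covered here.

```python
import numpy as np
from collections import deque

def grow_region(occupied: set, seed: tuple) -> set:
    """Voxel-based region growing: flood-fill over the 26-neighbourhood,
    starting from a seed voxel, as a rough connected segment."""
    offsets = [(dx, dy, dz) for dx in (-1, 0, 1) for dy in (-1, 0, 1)
               for dz in (-1, 0, 1) if (dx, dy, dz) != (0, 0, 0)]
    region, queue = {seed}, deque([seed])
    while queue:
        v = queue.popleft()
        for o in offsets:
            n = (v[0] + o[0], v[1] + o[1], v[2] + o[2])
            if n in occupied and n not in region:
                region.add(n)
                queue.append(n)
    return region

# Voxelize a synthetic MLS-like point cloud (0.25 m voxels) and grow from a seed.
points = np.random.default_rng(4).uniform(0, 4, size=(20_000, 3))
occupied = {tuple(k) for k in np.floor(points / 0.25).astype(int)}
print(len(grow_region(occupied, seed=next(iter(occupied)))))
```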

