On-the-Fly Camera and Lidar Calibration

2020 ◽  
Vol 12 (7) ◽  
pp. 1137
Author(s):  
Balázs Nagy ◽  
Csaba Benedek

Sensor fusion is one of the main challenges in self-driving and robotics applications. In this paper we propose an automatic, online and target-less camera-Lidar extrinsic calibration approach. We adopt a structure from motion (SfM) method to generate 3D point clouds from the camera data that can be matched to the Lidar point clouds; thus, we address the extrinsic calibration problem as a registration task in the 3D domain. The core step of the approach is a two-stage transformation estimation: first, we introduce an object-level coarse alignment algorithm operating in the Hough space to transform the SfM-based and the Lidar point clouds into a common coordinate system. Thereafter, we apply a control-point-based nonrigid transformation refinement step to register the point clouds more precisely. Finally, we calculate the correspondences between the 3D Lidar points and the pixels in the 2D camera domain. We evaluated the method in various real-life traffic scenarios in Budapest, Hungary. The results show that our proposed extrinsic calibration approach is able to provide accurate and robust parameter settings on-the-fly.
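The final step above, establishing correspondences between 3D Lidar points and 2D camera pixels, reduces to projecting each Lidar point through the estimated extrinsics and the camera intrinsics. A minimal sketch (the matrix values below are illustrative placeholders, not the paper's calibration results):

```python
import numpy as np

def project_lidar_to_image(points_lidar, R, t, K):
    """Project 3D Lidar points into the 2D image plane.

    R (3x3) and t (3,) are the extrinsic rotation/translation from the
    Lidar frame to the camera frame; K (3x3) is the intrinsic matrix.
    Returns pixel coordinates for points in front of the camera.
    """
    pts_cam = points_lidar @ R.T + t          # Lidar -> camera frame
    pts_cam = pts_cam[pts_cam[:, 2] > 0]      # keep points with positive depth
    pix = pts_cam @ K.T                       # apply intrinsics
    return pix[:, :2] / pix[:, 2:3]           # perspective division

# Hypothetical intrinsics and an identity extrinsic for illustration
K = np.array([[700.0, 0.0, 320.0],
              [0.0, 700.0, 240.0],
              [0.0, 0.0, 1.0]])
R, t = np.eye(3), np.zeros(3)
pts = np.array([[0.0, 0.0, 10.0]])            # a point 10 m straight ahead
print(project_lidar_to_image(pts, R, t, K))   # lands at the principal point
```

A point on the optical axis maps to the principal point regardless of depth, which is a quick sanity check for any estimated extrinsic pair.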

Electronics ◽  
2021 ◽  
Vol 10 (10) ◽  
pp. 1205
Author(s):  
Zhiyu Wang ◽  
Li Wang ◽  
Bin Dai

Object detection in 3D point clouds is still a challenging task in autonomous driving. Due to the inherent occlusion and density changes of the point cloud, the data distribution of the same object can change dramatically. In particular, incomplete data with sparsity or occlusion cannot represent the complete characteristics of the object. In this paper, we propose a novel strong–weak feature alignment algorithm between complete and incomplete objects for 3D object detection, which explores the correlations within the data. It is an end-to-end adaptive network that does not require additional data and can be easily applied to other object detection networks. Through a complete-object feature extractor, we achieve a robust feature representation of the object. It serves as a guiding feature to help the incomplete-object feature generator produce effective features. The strong–weak feature alignment algorithm reduces the gap between different states of the same object and enhances the ability to represent the incomplete object. The proposed adaptation framework is validated on the KITTI object benchmark and achieves about 6% improvement in detection average precision at 3D moderate difficulty compared to the base model. The results show that our adaptation method improves the detection performance of incomplete 3D objects.
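At its core, aligning a strong (complete-object) feature with a weak (incomplete-object) feature means penalizing their distance so the generator learns to close the gap. A minimal sketch of such an alignment loss, with hypothetical feature vectors standing in for the network's actual representations:

```python
import numpy as np

def alignment_loss(f_complete, f_incomplete):
    """Mean squared distance between the complete object's feature
    (treated as a fixed guide) and the feature generated for the
    incomplete object. Minimizing it pulls the two representations
    of the same object together."""
    return float(np.mean((f_complete - f_incomplete) ** 2))

f_strong = np.array([1.0, 0.5, -0.2])   # feature from the complete object
f_weak = np.array([0.8, 0.1, 0.1])      # feature from the sparse/occluded view
print(alignment_loss(f_strong, f_weak))
```

In training, this term would be added to the detection loss; the paper's actual alignment formulation may differ from this plain L2 version.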


Sensors ◽  
2021 ◽  
Vol 21 (10) ◽  
pp. 3313
Author(s):  
Łukasz Sobczak ◽  
Katarzyna Filus ◽  
Adam Domański ◽  
Joanna Domańska

With the emerging interest in autonomous driving at Levels 4 and 5 comes the necessity for accurate and versatile frameworks to evaluate the algorithms used in autonomous vehicles. There is a clear gap in the field of autonomous driving simulators: it covers the testing and parameter tuning of SLAM, a key component of autonomous driving systems; frameworks targeting off-road and safety-critical environments; and consideration of the non-idealistic nature of real-life sensors, their associated phenomena and measurement errors. We created a LiDAR simulator that delivers accurate 3D point clouds in real time. The point clouds are generated based on the sensor placement and the LiDAR type, which can be set using configurable parameters. We evaluate our solution by comparing results obtained with an actual device, a Velodyne VLP-16, on real-life tracks against the corresponding simulations. We measure the error values obtained using the Google Cartographer SLAM algorithm and the distance between the simulated and real point clouds to verify their accuracy. The results show that our simulation (which incorporates measurement errors and the rolling shutter effect) produces data that can successfully imitate real-life point clouds. Due to dedicated mechanisms, it is compatible with the Robot Operating System (ROS) and can be used interchangeably with data from actual sensors, which enables easy testing, SLAM algorithm parameter tuning and deployment.
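The essence of such a simulator is sweeping rays over the VLP-16's 16 fixed elevation channels and perturbing each true range with measurement noise. A simplified sketch, assuming a user-supplied `distance_fn` in place of real scene geometry (the noise model and channel layout here are illustrative, not the paper's exact parameters):

```python
import numpy as np

def simulate_lidar_scan(distance_fn, n_azimuth=360, sigma=0.01, seed=0):
    """Generate one rotation of a simplified 16-channel scan.

    distance_fn(azimuth, elevation) returns the true range in metres;
    Gaussian noise with std `sigma` models real measurement error.
    Channel elevations follow the VLP-16 layout (-15 to +15 degrees
    in 2-degree steps)."""
    rng = np.random.default_rng(seed)
    elevations = np.deg2rad(np.arange(-15, 16, 2))
    azimuths = np.deg2rad(np.linspace(0, 360, n_azimuth, endpoint=False))
    points = []
    for el in elevations:
        for az in azimuths:
            r = distance_fn(az, el) + rng.normal(0.0, sigma)
            points.append([r * np.cos(el) * np.cos(az),
                           r * np.cos(el) * np.sin(az),
                           r * np.sin(el)])
    return np.asarray(points)

# A featureless scene exactly 5 m away in every direction
cloud = simulate_lidar_scan(lambda az, el: 5.0)
print(cloud.shape)   # 16 channels x 360 azimuth steps
```

A real simulator would replace `distance_fn` with ray casting against scene geometry and add the rolling shutter effect by advancing the sensor pose per azimuth step.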


Sensors ◽  
2019 ◽  
Vol 19 (2) ◽  
pp. 349 ◽  
Author(s):  
Hang Liu ◽  
Hengyu Li ◽  
Xiahua Liu ◽  
Jun Luo ◽  
Shaorong Xie ◽  
...  

This paper presents a novel method to estimate the relative poses between RGB-D cameras with minimal overlapping fields of view. This calibration problem is relevant to applications such as indoor 3D mapping and robot navigation that can benefit from a wider field of view using multiple RGB-D cameras. The proposed approach relies on descriptor-based patterns to provide well-matched 2D keypoints in the case of a minimal overlapping field of view between cameras. Integrating the matched 2D keypoints with corresponding depth values, a set of 3D matched keypoints are constructed to calibrate multiple RGB-D cameras. Experiments validated the accuracy and efficiency of the proposed calibration approach.
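Constructing 3D matched keypoints from 2D matches and depth values is a standard back-projection through the camera intrinsics. A minimal sketch (the intrinsic matrix is a placeholder, not calibration data from the paper):

```python
import numpy as np

def backproject(keypoints_2d, depths, K):
    """Lift matched 2D keypoints to 3D using their depth values and the
    camera intrinsics K, as needed to build 3D correspondences between
    RGB-D cameras."""
    uv1 = np.hstack([keypoints_2d, np.ones((len(keypoints_2d), 1))])
    rays = uv1 @ np.linalg.inv(K).T          # normalized camera rays
    return rays * depths[:, None]            # scale by depth -> 3D points

K = np.array([[600.0, 0.0, 320.0],
              [0.0, 600.0, 240.0],
              [0.0, 0.0, 1.0]])
pts3d = backproject(np.array([[320.0, 240.0]]), np.array([2.0]), K)
print(pts3d)   # the principal point at depth 2 m lies on the optical axis
```

Given two such 3D keypoint sets from different cameras, the relative pose follows from a rigid least-squares fit over the correspondences.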


Author(s):  
H. Guo ◽  
K. Wang ◽  
W. Su ◽  
D. H. Zhu ◽  
W. L. Liu ◽  
...  

The shape of a live pig is an important indicator of its health and value, whether for breeding or for carcass quality. This paper implements a prototype system for 3D body-surface scanning of a single live pig based on two consumer depth cameras, utilizing 3D point cloud data. The cameras are calibrated in advance to share a common coordinate system. A live 3D point cloud stream of a moving pig is obtained by two Xtion Pro Live sensors from different viewpoints simultaneously. A novel detection method is proposed and applied to automatically detect the frames containing pigs with the correct posture in the point cloud stream, according to the geometric characteristics of the pig's shape. The proposed method is incorporated in a hybrid scheme that serves as the preprocessing step in a body-measurement framework for pigs. Experimental results show the portability of our scanning system and the effectiveness of our detection method. Furthermore, an updated version of this point cloud preprocessing software for livestock body measurements can be downloaded freely from <a href="https://github.com/LiveStockShapeAnalysis" target="_blank">https://github.com/LiveStockShapeAnalysis</a> by the livestock industry and research community, and can be used for monitoring livestock growth status.
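One way a geometric posture filter of this kind can work is to test how elongated the animal's point cloud is in the horizontal plane: a pig standing straight yields a clearly elongated footprint, while a turned or curled pose does not. This is a hypothetical heuristic in the spirit of the paper's detector, not its actual method; the threshold is an assumption:

```python
import numpy as np

def has_straight_posture(cloud, min_elongation=2.5):
    """Hypothetical posture check: PCA of the horizontal (x, y)
    coordinates gives the variances along the principal axes; the
    ratio of their square roots measures elongation of the body."""
    xy = cloud[:, :2] - cloud[:, :2].mean(axis=0)
    eigvals = np.linalg.eigvalsh(np.cov(xy.T))   # ascending order
    return float(np.sqrt(eigvals[-1] / eigvals[0])) >= min_elongation

rng = np.random.default_rng(1)
straight = rng.normal(size=(500, 3)) * [1.0, 0.2, 0.3]   # long, narrow body
curled = rng.normal(size=(500, 3)) * [0.5, 0.5, 0.3]     # roughly round
print(has_straight_posture(straight), has_straight_posture(curled))
```

Frames passing the check would then proceed to the body-measurement pipeline.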


Author(s):  
Darius Popovas ◽  
Maria Chizhova ◽  
Denys Gorkovchuk ◽  
Julia Gorkovchuk ◽  
Mona Hess ◽  
...  

We present a Terrestrial Laser Scanner simulator, a software tool that could be a valuable educational aid for geomatics and engineering students. The main goal of the VirScan3D project is to cover engineering digitisation; it is pursued through the development of a virtual system that allows users to create realistic data, in the absence of a real measuring device, within a modelled real-life environment (digital twin). The prototype implementation of the virtual laser scanner is realised within a game engine, which allows for fast and easy 3D visualisation and navigation. Real-life objects can be digitised, modelled and integrated into the simulator, thus creating a digital copy of a real-world environment. Within this environment, the user can freely navigate and define suitable scanning positions/stations. At each scanning station a simulated scan is performed, adapted to the technical specifications of a real scanner. The mathematical solution is based on 3D line intersection with the virtual 3D surface, including noise, colour and intensity simulation. As a result, 3D point clouds are generated for each station, which can be further processed for registration and modelling using standard software packages.
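The "3D line intersection with the virtual 3D surface" at the heart of each simulated measurement can be implemented with a standard ray/triangle intersection such as Möller–Trumbore, sketched below under the assumption that the virtual surface is triangulated (the geometry is illustrative):

```python
import numpy as np

def ray_triangle(origin, direction, v0, v1, v2, eps=1e-9):
    """Möller-Trumbore ray/triangle intersection: the geometric core of
    simulating one scanner measurement against a virtual 3D surface.
    Returns the hit distance along the ray, or None on a miss."""
    e1, e2 = v1 - v0, v2 - v0
    p = np.cross(direction, e2)
    det = e1 @ p
    if abs(det) < eps:
        return None                       # ray parallel to triangle plane
    s = origin - v0
    u = (s @ p) / det
    q = np.cross(s, e1)
    v = (direction @ q) / det
    if u < 0 or v < 0 or u + v > 1:
        return None                       # hit point outside the triangle
    t = (e2 @ q) / det
    return t if t > eps else None         # only hits in front of the origin

# A scan ray fired along +Z at a triangle 4 m in front of the scanner
hit = ray_triangle(np.zeros(3), np.array([0.0, 0.0, 1.0]),
                   np.array([-1.0, -1.0, 4.0]),
                   np.array([2.0, -1.0, 4.0]),
                   np.array([-1.0, 2.0, 4.0]))
print(hit)
```

The simulator would then perturb the returned range with noise and attach colour/intensity samples, per the scanner specification being emulated.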


2014 ◽  
Vol 14 (2) ◽  
pp. 145-167 ◽  
Author(s):  
Yelda Turkan ◽  
Frédéric Bosché ◽  
Carl T. Haas ◽  
Ralph Haas

Purpose – Previous research has shown that “Scan-vs-BIM” object recognition systems, which fuse three dimensional (3D) point clouds from terrestrial laser scanning (TLS) or digital photogrammetry with 4D project building information models (BIM), provide valuable information for tracking construction works. However, until now, the potential of these systems has been demonstrated for tracking progress of permanent structural works only; no work has been reported yet on tracking secondary or temporary structures. For structural concrete work, temporary structures include formwork, scaffolding and shoring, while secondary components include rebar. Together, they constitute most of the earned value in concrete work. The impact of tracking secondary and temporary objects would thus be added veracity and detail to earned value calculations, and subsequently better project control and performance. The paper aims to discuss these issues. Design/methodology/approach – Two techniques for recognizing concrete construction secondary and temporary objects in TLS point clouds are implemented and tested using real-life data collected from a reinforced concrete building construction site. Both techniques represent significant innovative extensions of existing “Scan-vs-BIM” object recognition frameworks. Findings – The experimental results show that it is feasible to recognise secondary and temporary objects in TLS point clouds with good accuracy using the two novel techniques; but it is envisaged that superior results could be achieved by using additional cues such as colour and 3D edge information. Originality/value – This article makes valuable contributions to the problem of detecting and tracking secondary and temporary objects in 3D point clouds. The power of Scan-vs-BIM object recognition approaches to address this problem is demonstrated, but their limitations are also highlighted.


2021 ◽  
Vol 13 (16) ◽  
pp. 3220
Author(s):  
Yanling Zou ◽  
Holger Weinacker ◽  
Barbara Koch

An accurate understanding of urban objects is critical for urban modeling, intelligent infrastructure planning and city management. The semantic segmentation of light detection and ranging (LiDAR) point clouds is a fundamental approach for urban scene analysis. Over the last years, several methods have been developed to segment urban furniture with point clouds. However, the traditional processing of large amounts of spatial data has become increasingly costly, both time-wise and financially. Recently, deep learning (DL) techniques have been increasingly used for 3D segmentation tasks. Yet, most of these deep neural networks (DNNs) were evaluated on benchmarks. It is, therefore, arguable whether DL approaches can achieve state-of-the-art performance in 3D point cloud segmentation in real-life scenarios. In this research, we apply an adapted DNN (ARandLA-Net) to directly process large-scale point clouds. In particular, we develop a new paradigm for training and validation, which presents a typical urban scene in central Europe (Munzingen, Freiburg, Baden-Württemberg, Germany). Our dataset consists of nearly 390 million dense points acquired by Mobile Laser Scanning (MLS), which has a considerably larger quantity of sample points in comparison to existing datasets and includes meaningful object categories that are particular to applications for smart cities and urban planning. We further assess the DNN on our dataset and investigate a number of key challenges from varying aspects, such as data preparation strategies, the advantage of color information and the unbalanced class distribution in the real world. The final segmentation model achieved a mean Intersection-over-Union (mIoU) score of 54.4% and an overall accuracy score of 83.9%. Our experiments indicated that different data preparation strategies influenced the model performance. Additional RGB information yielded an approximately 4% higher mIoU score. Our results also demonstrate that the use of weighted cross-entropy with inverse square root frequency loss led to better segmentation performance than when other losses were considered.
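The inverse-square-root-frequency weighting mentioned above assigns each class a weight proportional to 1/sqrt(frequency), so rare classes contribute more to the cross-entropy loss. A minimal sketch with hypothetical class counts (the normalization convention is an assumption):

```python
import numpy as np

def inv_sqrt_freq_weights(class_counts):
    """Class weights proportional to 1/sqrt(frequency), normalized so
    the mean weight is 1. Rare classes get larger weights, countering
    an unbalanced class distribution."""
    freq = np.asarray(class_counts, dtype=float)
    freq = freq / freq.sum()          # relative class frequencies
    w = 1.0 / np.sqrt(freq)
    return w / w.mean()

# Hypothetical counts: road points dominate, poles are rare
counts = [900_000, 90_000, 10_000]
print(inv_sqrt_freq_weights(counts))
```

These weights would then be passed as the per-class weight vector of a standard weighted cross-entropy loss.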


2013 ◽  
Vol 3 (1-2) ◽  
Author(s):  
Thuong Le-Tien ◽  
Marie Luong ◽  
Thai Phu Ho ◽  
Viet Dai Tran

Depth cameras such as the Microsoft Kinect are much cheaper than conventional 3D scanning devices and can thus be acquired easily by everyday users. However, the depth data captured by the Kinect over a certain distance is of low quality. In this work, we implement a set of algorithms allowing users to capture 3D surfaces using a handheld Kinect. As a classic alignment algorithm such as the Iterative Closest Point (ICP) is not effective at aligning point clouds with limited overlapping regions, a coarse alignment using the Sample Consensus Initial Alignment (SAC-IA) is incorporated into the registration process to improve the fitness of the 3D point clouds. Two robust reconstruction methods, namely Alpha Shapes and Grid Projection, are also implemented to reconstruct the 3D surface from the registered point clouds. The experimental results have shown the efficiency and applicability of our approach. The constructed system obtains acceptable results in a few minutes with a low-priced device, and thus may be a practical approach for avatar generation or online shopping.
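Once SAC-IA has provided a coarse initial pose, each ICP iteration solves a least-squares rigid alignment between matched points. The closed-form Kabsch/SVD solution at the heart of that step can be sketched as follows (the test transformation is illustrative):

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst:
    the closed-form (Kabsch/SVD) solution computed inside every ICP
    iteration once correspondences are fixed."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)             # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cd - R @ cs

# Recover a known 90-degree rotation about Z plus a shift
Rz = np.array([[0.0, -1.0, 0.0],
               [1.0, 0.0, 0.0],
               [0.0, 0.0, 1.0]])
src = np.random.default_rng(0).normal(size=(50, 3))
dst = src @ Rz.T + np.array([1.0, 2.0, 3.0])
R, t = best_rigid_transform(src, dst)
print(np.allclose(R, Rz), np.allclose(t, [1.0, 2.0, 3.0]))
```

A full ICP loop alternates this solve with nearest-neighbour correspondence updates until the alignment error stops decreasing.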


Sensors ◽  
2020 ◽  
Vol 20 (23) ◽  
pp. 6717
Author(s):  
Vitor Santos ◽  
Daniela Rato ◽  
Paulo Dias ◽  
Miguel Oliveira

Systems composed of multiple sensors for exteroceptive perception are becoming increasingly common, for example in mobile robots or highly monitored spaces. However, to combine and fuse those sensors to create a larger and more robust representation of the perceived scene, the sensors need to be properly registered with each other; that is, all relative geometric transformations must be known. This calibration procedure is challenging as, traditionally, human intervention is required to varying extents. This paper proposes a nearly automatic method in which the best set of geometric transformations among any number of sensors is obtained by processing and combining the individual pairwise transformations obtained from an experimental method. Besides eliminating some experimental outliers with a standard criterion, the method exploits the possibility of obtaining better geometric transformations between all pairs of sensors by combining them, within some restrictions, to obtain a more precise transformation, and thus a better calibration. Although other data sources are possible, in this approach 3D point clouds are obtained by each sensor, corresponding to the successive centers of a moving ball in its field of view. The method can be applied to any sensors able to detect the ball and the 3D position of its center, namely LIDARs, mono cameras (visual or infrared), stereo cameras, and TOF cameras. Results demonstrate that calibration is improved when compared to methods in previous works that do not address the outlier problem and that, depending on the context, as explained in the results section, the multi-pairwise technique can be used in two different methodologies to reduce uncertainty in the calibration process.
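The key idea of combining pairwise transformations is that the direct estimate between sensors A and C can be cross-checked against the estimate chained through a third sensor B, since homogeneous transformations compose by matrix multiplication. A minimal sketch with hypothetical translation-only transforms (real calibrations would include rotations and the paper's outlier criterion):

```python
import numpy as np

def compose(T_ab, T_bc):
    """Chain two 4x4 homogeneous transformations: frame c -> b -> a."""
    return T_ab @ T_bc

def translation_gap(T_direct, T_composed):
    """Disagreement (in metres) between a directly estimated sensor-pair
    transformation and the one obtained by chaining through a third
    sensor. Small gaps justify fusing the estimates for a better
    calibration."""
    return float(np.linalg.norm(T_direct[:3, 3] - T_composed[:3, 3]))

def make_T(t):
    T = np.eye(4)
    T[:3, 3] = t
    return T

# Hypothetical translations between three sensors (identity rotations)
T_ab = make_T([1.0, 0.0, 0.0])
T_bc = make_T([0.0, 2.0, 0.0])
T_ac_direct = make_T([1.02, 1.97, 0.0])       # independently measured, noisy
print(translation_gap(T_ac_direct, compose(T_ab, T_bc)))
```

With more sensors, many such chained paths exist per pair, and averaging consistent ones reduces uncertainty in the final calibration.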


2014 ◽  
Vol 13 (1) ◽  
pp. 4127-4145
Author(s):  
Madhushi Verma ◽  
Mukul Gupta ◽  
Bijeeta Pal ◽  
Prof. K. K. Shukla

Orienteering problem (OP) is an NP-Hard graph problem. The nodes of the graph are associated with scores or rewards and the edges with time delays. The goal is to obtain a Hamiltonian path connecting the two necessary check points, i.e. the source and the target, along with a set of control points such that the total collected score is maximized within a specified time limit. OP finds application in several fields like logistics, transportation networks, tourism industry, etc. Most of the existing algorithms for OP can only be applied on complete graphs that satisfy the triangle inequality. Real-life scenarios do not guarantee that there exists a direct link between all control-point pairs or that the triangle inequality is satisfied. To provide a more practical solution, we propose a stochastic greedy algorithm (RWS_OP) that uses the roulette wheel selection method, does not require that the triangle inequality condition is satisfied and is capable of handling both complete as well as incomplete graphs. Based on several experiments on standard benchmark data we show that RWS_OP is faster, more efficient in terms of time budget utilization and achieves a better performance in terms of the total collected score as compared to a recently reported algorithm for incomplete graphs.
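Roulette wheel selection, the stochastic step named above, picks each candidate with probability proportional to its score, so high-reward control points are favoured without being chosen deterministically. A minimal sketch (how RWS_OP derives the scores it feeds in is not shown here):

```python
import random

def roulette_wheel_select(candidates, scores, rng=random):
    """Pick a candidate with probability proportional to its score,
    by spinning a 'wheel' whose slot sizes are the scores."""
    total = sum(scores)
    r = rng.uniform(0, total)
    cumulative = 0.0
    for candidate, score in zip(candidates, scores):
        cumulative += score
        if r <= cumulative:
            return candidate
    return candidates[-1]               # numerical safety net

random.seed(42)
picks = [roulette_wheel_select(["a", "b", "c"], [1, 2, 7])
         for _ in range(1000)]
print(picks.count("c") / 1000)   # high-score nodes win most often
```

Compared with a pure greedy choice, this keeps the search stochastic, letting repeated runs explore different feasible paths on incomplete graphs.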

