CFNet: LiDAR-Camera Registration Using Calibration Flow Network

Sensors, 2021, Vol. 21 (23), pp. 8112
Author(s):  
Xudong Lv ◽  
Shuo Wang ◽  
Dong Ye

As an essential step in data fusion, LiDAR-camera calibration is critical for autonomous vehicles and robot navigation. Most calibration methods require laborious manual work, complicated environmental settings, and specific calibration targets. Targetless methods rely on complex optimization workflows that are time-consuming and require prior information. Convolutional neural networks (CNNs) can regress the six-degrees-of-freedom (6-DOF) extrinsic parameters from raw LiDAR and image data. However, existing CNN-based methods merely learn representations of the projected LiDAR and the image and ignore the correspondences at different locations; their performance is unsatisfactory and worse than that of non-CNN methods. In this paper, we propose a novel CNN-based LiDAR-camera extrinsic calibration algorithm named CFNet. We first introduce a correlation layer to provide explicit matching capabilities. Then, we define calibration flow to describe the deviation of the initial projection from the ground truth. Instead of directly predicting the extrinsic parameters, CFNet predicts the calibration flow. The Efficient Perspective-n-Point (EPnP) algorithm within a RANdom SAmple Consensus (RANSAC) scheme then estimates the extrinsic parameters from 2D–3D correspondences constructed from the calibration flow. Because it takes the geometric information into account, the proposed method outperforms state-of-the-art CNN-based methods on the KITTI datasets. We also tested the flexibility of our approach on the KITTI360 datasets.
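To make the final pose-recovery step concrete, the following is a minimal sketch (not the authors' released code) of estimating the extrinsics with EPnP inside RANSAC once a predicted calibration flow has corrected the initial projection; the flow layout, function name, and thresholds are assumptions.

```python
# Minimal sketch (not the authors' code): recover extrinsics from a predicted
# calibration flow by building 2D-3D correspondences and solving EPnP + RANSAC.
import cv2
import numpy as np

def extrinsics_from_calibration_flow(points_3d, init_uv, calib_flow, K):
    """points_3d: (N, 3) LiDAR points in the sensor frame.
    init_uv:     (N, 2) pixel locations of the initial (mis-calibrated) projection.
    calib_flow:  (H, W, 2) predicted per-pixel correction (du, dv), as assumed here.
    K:           (3, 3) camera intrinsics."""
    # Corrected 2D locations: initial projection shifted by the calibration flow.
    u = init_uv[:, 0].astype(int)
    v = init_uv[:, 1].astype(int)
    corrected_uv = init_uv + calib_flow[v, u]          # (N, 2) refined pixels

    # EPnP inside a RANSAC loop rejects correspondences with unreliable flow.
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        points_3d.astype(np.float64),
        corrected_uv.astype(np.float64),
        K.astype(np.float64), distCoeffs=None,
        flags=cv2.SOLVEPNP_EPNP,
        reprojectionError=2.0, iterationsCount=500)
    if not ok:
        raise RuntimeError("PnP failed")
    R, _ = cv2.Rodrigues(rvec)                          # rotation matrix from axis-angle
    T = np.eye(4); T[:3, :3] = R; T[:3, 3] = tvec.ravel()
    return T, inliers
```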

Author(s):  
Karl Ludwig Fetzer ◽  
Sergey G. Nersesov ◽  
Hashem Ashrafiuon

Abstract: In this paper, the authors derive backstepping control laws for tracking a time-based reference trajectory for a 3D model of an autonomous vehicle with two degrees of underactuation. Tracking all six degrees of freedom is made possible by a transformation that reduces the order of the error dynamics. Stability of the resulting error dynamics is proven and demonstrated in simulations.
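The vehicle model and control laws are not reproduced in the abstract; as a generic illustration of the backstepping construction (tracking error, virtual control, final control law), the following is a minimal sketch for a double integrator with assumed gains. It is not the paper's underactuated 3D vehicle model.

```python
# Generic backstepping tracking sketch for a double integrator x1' = x2, x2' = u.
# This is NOT the paper's 3D underactuated vehicle model; it only illustrates
# the backstepping construction (error variable, virtual control, final law).
import numpy as np

k1, k2 = 2.0, 2.0                      # assumed control gains
r   = lambda t: np.sin(t)              # reference trajectory
dr  = lambda t: np.cos(t)              # and its derivatives
ddr = lambda t: -np.sin(t)

def control(t, x1, x2):
    z1 = x1 - r(t)                     # tracking error
    alpha = dr(t) - k1 * z1            # virtual control for x2
    z2 = x2 - alpha                    # second error variable
    dalpha = ddr(t) - k1 * (x2 - dr(t))
    return dalpha - z1 - k2 * z2       # backstepping control law

# Simple Euler simulation of the closed loop.
x1, x2, dt = 1.0, 0.0, 1e-3
for i in range(10000):
    t = i * dt
    u = control(t, x1, x2)
    x1, x2 = x1 + dt * x2, x2 + dt * u
print("final tracking error:", x1 - r(10.0))
```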


Author(s):  
Punarjay Chakravarty ◽  
Tom Roussel ◽  
Gaurav Pandey ◽  
Tinne Tuytelaars

Abstract: We describe a Deep-Geometric Localizer that is able to estimate the full six-degrees-of-freedom (DoF) global pose of the camera from a single image in a previously mapped environment. Our map is a topo-metric one, with discrete topological nodes whose 6DOF poses are known. Each topo-node in our map also comprises a set of points, whose 2D features and 3D locations are stored as part of the mapping process. For the mapping phase, we utilise a stereo camera and a regular stereo visual SLAM pipeline. During the localization phase, we take a single camera image, localize it to a topological node using Deep Learning, and use a geometric algorithm (PnP) on the matched 2D features (and their 3D positions in the topo map) to determine the full 6DOF globally consistent pose of the camera. Our method decouples the mapping and localization algorithms and sensors (stereo and mono) and allows accurate 6DOF pose estimation in a previously mapped environment using a single camera. With results in simulated and real environments, our hybrid algorithm is particularly useful for autonomous vehicles (AVs) and shuttles that repeatedly traverse the same route.
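As a rough sketch of the localization phase described above, the following combines a hypothetical place-recognition network, descriptor matching against a topo-node's stored features, and PnP on the stored 3D points. ORB features, the map layout, and all names are stand-ins; the paper's own 2D features come from its stereo SLAM pipeline.

```python
# Sketch of the localization phase (placeholder names, not the authors' code):
# 1) deep place recognition selects the nearest topological node,
# 2) local features are matched to the node's stored 2D features,
# 3) PnP on the corresponding stored 3D points gives the global 6DOF pose.
import cv2
import numpy as np

def localize(image, topo_map, place_net, K):
    node_id = place_net.predict(image)              # hypothetical place-recognition CNN
    node = topo_map[node_id]                        # node stores descriptors and 3D points

    orb = cv2.ORB_create(2000)                      # stand-in for the map's own features
    kps, desc = orb.detectAndCompute(image, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(desc, node["descriptors"])

    img_pts = np.float32([kps[m.queryIdx].pt for m in matches])
    obj_pts = np.float32([node["points_3d"][m.trainIdx] for m in matches])

    ok, rvec, tvec, _ = cv2.solvePnPRansac(obj_pts, img_pts, K, None)
    if not ok:
        return None
    R, _ = cv2.Rodrigues(rvec)
    # Camera pose in the map frame: invert the world-to-camera transform from PnP.
    pose = np.eye(4)
    pose[:3, :3] = R.T
    pose[:3, 3] = (-R.T @ tvec).ravel()
    return pose
```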


2021
Author(s):  
Evan Seitz ◽  
Peter Schwander ◽  
Francisco J. Acosta-Reyes ◽  
Suvrajit Maji ◽  
Joachim Frank

This work is based on the manifold-embedding approach to the study of biological molecules exhibiting conformational changes in a continuum. Previous studies established a workflow capable of reconstructing atomic-level structures in the conformational continuum from cryo-EM images, so as to reveal the latent space of macromolecules undergoing multiple degrees of freedom. Here, we introduce a new approach informed by a detailed heuristic analysis of manifolds formed by simulated heterogeneous cryo-EM datasets. These simulated models were generated with increasing complexity to account for multiple motions, state occupancies, and the contrast transfer function (CTF) over a wide range of signal-to-noise ratios. Using these datasets as ground truth, we provide a detailed exposition of our findings on several conformational motions while exploring the available parameter space. Guided by these insights, we build a framework that leverages the high-dimensional geometric information obtained to reconstitute the quasi-continuum of conformational states in the form of an energy landscape and the corresponding 3D maps for all states therein. This framework offers substantial enhancements relative to previous work, and a direct comparison of the outputs is provided.
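The authors' cryo-EM pipeline is not reproduced here; as a generic illustration of the manifold-embedding idea the work builds on, the following is a minimal diffusion-maps sketch, where the kernel bandwidth and number of embedding dimensions are assumptions.

```python
# Minimal diffusion-maps sketch to illustrate the manifold-embedding idea
# (not the authors' cryo-EM pipeline; epsilon and dimensions are assumptions).
import numpy as np

def diffusion_map(X, n_components=3, epsilon=None):
    """X: (N, D) data matrix, e.g. flattened, preprocessed images."""
    # Pairwise squared Euclidean distances.
    sq = np.sum(X**2, axis=1)
    D2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    if epsilon is None:
        epsilon = np.median(D2)                      # common heuristic bandwidth

    K = np.exp(-D2 / epsilon)                        # Gaussian affinity kernel
    # Row-normalize to obtain a Markov transition matrix.
    P = K / K.sum(axis=1, keepdims=True)

    # Leading non-trivial eigenvectors give the low-dimensional embedding.
    vals, vecs = np.linalg.eig(P)
    order = np.argsort(-vals.real)
    vals, vecs = vals.real[order], vecs.real[:, order]
    return vecs[:, 1:n_components + 1] * vals[1:n_components + 1]
```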


Author(s):  
Toufik Al Khawli ◽  
Muddasar Anwar ◽  
Dongming Gan ◽  
Shafiqul Islam

This paper investigates the integration of a laser profile sensor with an industrial robotic arm to automate quality inspection in manufacturing processes that otherwise require labour-intensive manual work. The aim was to register the measurements of a laser profile sensor mounted on a six-degrees-of-freedom robot with respect to the robot base frame. The registration is based on a six-degrees-of-freedom calibration, an essential step both for automated manufacturing processes that require highly accurate tool positioning and alignment and for quality inspection systems that require flexibility and accurate measurements. The investigation comprises two calibration procedures, namely calibration using a sharp object and calibration using planar constraints. The solutions of the calibration procedures, estimated with both iterative and optimization solvers, are thoroughly discussed. Using a simulation platform that generates virtual data for the two procedures with additional levels of noise, the six-dimensional poses are estimated and compared to the ground truth. Finally, an experimental test with an Acuity laser profile sensor mounted on a Mitsubishi RV-6SDL manipulator is presented to investigate the measurement accuracy with four estimated laser poses. The calibration procedure using a sharp object shows the most accurate simulation and experimental results under the effect of noise.
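As a sketch of the planar-constraint idea (not the paper's exact solver), the following estimates the sensor-to-flange transform by least squares so that laser points, mapped into the robot base frame through the flange poses and the unknown transform, lie on a known reference plane; the parameterization and variable names are assumptions.

```python
# Sketch of the planar-constraint calibration idea: estimate the sensor-to-flange
# transform X so that sensor points, mapped to the robot base frame through
# T_flange_i @ X, lie on a known reference plane.
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def residuals(params, flange_poses, sensor_points, plane_n, plane_d):
    # params: 3 rotation (rotation vector) + 3 translation for the sensor pose X.
    X = np.eye(4)
    X[:3, :3] = Rotation.from_rotvec(params[:3]).as_matrix()
    X[:3, 3] = params[3:]
    res = []
    for T_f, pts in zip(flange_poses, sensor_points):
        pts_h = np.c_[pts, np.ones(len(pts))]        # homogeneous sensor points
        base_pts = (T_f @ X @ pts_h.T).T[:, :3]      # map into the robot base frame
        res.append(base_pts @ plane_n - plane_d)     # signed distance to the plane
    return np.concatenate(res)

# flange_poses: list of (4, 4) base->flange transforms from the robot controller;
# sensor_points: per-pose (N, 3) laser profile points; plane_n, plane_d define the plane.
# sol = least_squares(residuals, x0=np.zeros(6),
#                     args=(flange_poses, sensor_points, plane_n, plane_d))
```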


2019, Vol. 9 (11), pp. 2238
Author(s):  
Mario Claer ◽  
Alexander Ferrein ◽  
Stefan Schiffer

Perceiving its environment in 3D is an important ability for a modern robot. Today, this is often done with LiDARs, which, however, have a strongly limited field of view (FOV). To extend their FOV, the sensors are mounted on driving vehicles in several different ways. This allows 3D perception even with 2D LiDARs if a corresponding localization system or technique is available. Another popular way to get the most information out of the scanners is to mount them on a rotating carrier platform. In this way, their measurements in different directions can be collected and transformed into a common frame in order to achieve a nearly full spherical perception. However, this is only possible if the kinematic chains of the platforms are known exactly, that is, if the LiDAR pose w.r.t. its rotation center is well known. Measuring these chains manually is often very cumbersome or sometimes even impossible to do with the necessary precision. Our paper proposes a method to calibrate the extrinsic LiDAR parameters by decoupling the rotation from the full six-degrees-of-freedom transform and optimizing both separately. Thus, one error measure for the orientation and one for the translation with known orientation are minimized in succession, using a combination of a consecutive grid search and a gradient descent. Both error measures are inferred from spherical calibration targets. Our experiments suggest that the main influences on the calibration results are the distance to the calibration targets, the accuracy of their center-point estimation, and the search grid resolution. However, the proposed calibration method improves the extrinsic parameters even with unfavourable configurations and from inaccurate initial pose guesses.
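As a rough sketch of the decoupled scheme, the following runs a grid search over the mounting orientation followed by a gradient-based refinement of the translation. The kinematic model, the spread-of-sphere-centers error measure, and the grid ranges are simplifications and assumptions, not the paper's derived error measures.

```python
# Decoupled calibration sketch: the static sphere target should map to one fixed
# point in the platform base frame for the correct mounting pose, so we minimize
# the spread of the mapped centers; first over orientation, then over translation.
import itertools
import numpy as np
from scipy.optimize import minimize
from scipy.spatial.transform import Rotation

def center_spread(rpy, t, observations):
    """observations: list of (platform_angle, sphere_center_in_lidar_frame)."""
    R_mount = Rotation.from_euler("xyz", rpy).as_matrix()
    mapped = np.array([Rotation.from_euler("z", theta).as_matrix() @ (R_mount @ c + t)
                       for theta, c in observations])
    return np.sum(np.var(mapped, axis=0))            # spread of the mapped target centers

def calibrate(observations, grid_step_deg=2.0):
    t_init = np.zeros(3)
    # Grid search over roll, pitch, yaw with the translation held at its initial guess.
    grid = np.deg2rad(np.arange(-10.0, 10.0 + 1e-9, grid_step_deg))
    best_rpy, best_err = None, np.inf
    for rpy in itertools.product(grid, repeat=3):
        err = center_spread(np.array(rpy), t_init, observations)
        if err < best_err:
            best_rpy, best_err = np.array(rpy), err
    # Gradient-based refinement of the translation with the orientation fixed.
    res = minimize(lambda t: center_spread(best_rpy, t, observations),
                   x0=t_init, method="BFGS")
    return best_rpy, res.x
```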


2017, Vol. 6 (2)
Author(s):  
Anko Börner ◽  
Dirk Baumbach ◽  
Maximilian Buder ◽  
Andre Choinowski ◽  
Ines Ernst ◽  
...  

Abstract: Ego localization is an important prerequisite for several scientific, commercial, and statutory tasks. Only by knowing one's own position can guidance be provided, inspections be executed, and autonomous vehicles be operated. Localization becomes challenging if satellite-based navigation systems are not available or the data quality is not sufficient. To overcome this problem, a team at the German Aerospace Center (DLR) developed a multi-sensor system modeled on the human head and its navigation sensors, the eyes and the vestibular system. This system, called the integrated positioning system (IPS), contains a stereo camera and an inertial measurement unit for determining an ego pose in six degrees of freedom in a local coordinate system. IPS operates in real time and can be applied in indoor and outdoor scenarios without any external reference or prior knowledge. In this paper, the system and its key hardware and software components are introduced. The main issues in developing such complex multi-sensor measurement systems are identified and discussed, and the performance of this technology is demonstrated. The development team started from scratch and is now transferring the technology into a commercial product. The paper finishes with an outlook.


2020, pp. 67-73
Author(s):  
N.D. Yusubov ◽  
G.M. Abbasova

The accuracy of two-tool machining on automatic lathes is analyzed. The study uses full-factor models of distortions and scattering fields of the machined dimensions that account for the compliance of the technological system in six degrees of freedom, i.e., the angular displacements in the technological system. Possibilities for the design and control of two-tool setups are considered.
Keywords: turning, cutting mode, two-tool setup, full-factor model, accuracy, angular displacement, control, calculation


2019, Vol. 11 (10), pp. 1157
Author(s):  
Jorge Fuentes-Pacheco ◽  
Juan Torres-Olivares ◽  
Edgar Roman-Rangel ◽  
Salvador Cervantes ◽  
Porfirio Juarez-Lopez ◽  
...  

Crop segmentation is an important task in Precision Agriculture, where the use of aerial robots with an on-board camera has contributed to the development of new solution alternatives. We address the problem of fig plant segmentation in top-view RGB (Red-Green-Blue) images of a crop grown under difficult open-field conditions, with complex lighting and the non-ideal crop-maintenance practices of local farmers. We present a Convolutional Neural Network (CNN) with an encoder-decoder architecture that classifies each pixel as crop or non-crop using only raw colour images as input. Our approach achieves a mean accuracy of 93.85% despite the complexity of the background and the highly variable visual appearance of the leaves. We make our CNN code available to the research community, together with the aerial image dataset and a hand-made, pixel-precise ground-truth segmentation, to facilitate comparison among different algorithms.
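As an illustrative sketch (not the authors' exact network), the following is a tiny encoder-decoder that maps raw RGB images to per-pixel crop/non-crop logits and trains with binary cross-entropy; layer sizes and hyperparameters are assumptions.

```python
# Minimal encoder-decoder sketch for binary crop/non-crop segmentation
# (an illustrative architecture, not the authors' exact network).
import torch
import torch.nn as nn

class TinySegNet(nn.Module):
    def __init__(self):
        super().__init__()
        # Encoder: two downsampling stages over raw RGB input.
        self.enc = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        # Decoder: two upsampling stages back to full resolution.
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1),  # per-pixel crop logit
        )

    def forward(self, x):
        return self.dec(self.enc(x))

# Training-step sketch: per-pixel binary cross-entropy against the ground-truth mask.
model = TinySegNet()
criterion = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

images = torch.rand(2, 3, 128, 128)                  # dummy RGB batch
masks = torch.randint(0, 2, (2, 1, 128, 128)).float()
loss = criterion(model(images), masks)
optimizer.zero_grad(); loss.backward(); optimizer.step()
```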

