Continuous-Time Laser Frames Associating and Mapping via Multilayer Optimization

Sensors, 2020, Vol. 21 (1), pp. 97
Author(s): Shaoxing Hu, Shen Xiao, Aiwu Zhang, Yiming Deng, Bingke Wang

The ability to associate continuous-time laser frames is of vital importance, yet challenging, for hand-held and backpack simultaneous localization and mapping (SLAM). In this study, the complex association and mapping problem is investigated and modeled as a multilayer optimization problem to realize low-drift localization and point cloud map reconstruction without the assistance of GNSS/INS navigation systems. 3D point clouds are aligned among consecutive frames, submaps, and closed-loop frames using the normal distributions transform (NDT) algorithm and the iterative closest point (ICP) algorithm. Before alignment, ground points are extracted automatically, while non-ground points are automatically segmented into separate point clusters, with noisy clusters discarded. Through the three levels of interframe association, submap matching, and closed-loop optimization, the continuous-time laser frames can be accurately associated to guarantee the consistency of the 3D point cloud map. Finally, the proposed method was evaluated in different scenarios; the experimental results showed that it not only achieves accurate mapping even in complex scenes, but also handles sparse laser frames well, which is critical for scanners such as the Velodyne VLP-16.
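A minimal sketch of the interframe association step, assuming Open3D is available: the paper's NDT coarse alignment is not reproduced here (Open3D has no NDT), so only a point-to-plane ICP refinement is shown, and the automatic ground extraction is approximated by a simple height threshold. All parameter values are illustrative assumptions.

```python
import numpy as np
import open3d as o3d

def preprocess(pcd, ground_height=-1.5, voxel=0.2):
    """Drop approximate ground points, downsample, and cluster the rest."""
    pts = np.asarray(pcd.points)
    non_ground = pcd.select_by_index(np.where(pts[:, 2] > ground_height)[0].tolist())
    non_ground = non_ground.voxel_down_sample(voxel)
    non_ground.estimate_normals(
        o3d.geometry.KDTreeSearchParamHybrid(radius=1.0, max_nn=30))
    # Segment non-ground points into clusters; unclustered points are treated as noise.
    labels = np.array(non_ground.cluster_dbscan(eps=0.5, min_points=10))
    keep = np.where(labels >= 0)[0].tolist()
    return non_ground.select_by_index(keep)

def align_frames(source, target, init=np.eye(4)):
    """Interframe association: refine the relative pose between two laser frames."""
    src, tgt = preprocess(source), preprocess(target)
    result = o3d.pipelines.registration.registration_icp(
        src, tgt, max_correspondence_distance=1.0, init=init,
        estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPlane())
    return result.transformation  # 4x4 pose of the source frame in the target frame
```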

Author(s): Y. Yang, S. Song, C. Toth

Abstract. Place recognition, or loop closure, is a technique for recognizing landmarks and/or scenes previously visited by a mobile sensing platform in an area. It is a key function for robust Simultaneous Localization and Mapping (SLAM) in any environment, including global positioning system (GPS)-denied environments, because it enables global optimization to compensate for the drift of dead-reckoning navigation systems. Place recognition in 3D point clouds is a challenging task that is traditionally handled with the aid of other sensors, such as cameras and GPS. Unfortunately, visual place recognition techniques may be impacted by changes in illumination and texture, and GPS may perform poorly in urban areas. To mitigate this problem, state-of-the-art Convolutional Neural Network (CNN)-based 3D descriptors may be applied directly to 3D point clouds. In this work, we investigated the performance of different classification strategies utilizing a cutting-edge CNN-based 3D global descriptor (PointNetVLAD) for the place recognition task on the Oxford RobotCar dataset.
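A minimal sketch of descriptor-based place recognition of this kind, assuming `embed(points)` is a stand-in for a trained global descriptor network such as PointNetVLAD (not implemented here); retrieval is done with a scikit-learn nearest-neighbour index, and the distance threshold is an assumption.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def build_database(submaps, embed):
    """submaps: list of (N_i, 3) arrays; returns a retrieval index over their descriptors."""
    descriptors = np.stack([embed(p) for p in submaps])              # (M, D)
    index = NearestNeighbors(n_neighbors=1, metric="euclidean").fit(descriptors)
    return index

def recognize_place(query_points, index, embed, threshold=0.3):
    """Return (matched database id, distance), or (None, distance) if no place matches."""
    d = embed(query_points).reshape(1, -1)
    dist, idx = index.kneighbors(d)
    dist, idx = float(dist[0, 0]), int(idx[0, 0])
    return (idx, dist) if dist < threshold else (None, dist)
```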


Author(s): A. Nüchter, M. Bleier, J. Schauer, P. Janotta

This paper shows how to use the result of Google's SLAM solution, called Cartographer, to bootstrap our continuous-time SLAM algorithm. The presented approach optimizes the consistency of the global point cloud and thus improves on Google's results. We use the algorithms and data from Google as input for our continuous-time SLAM software. We also successfully applied our software to a similar backpack system, which delivers consistent 3D point clouds even in the absence of an IMU.
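A minimal sketch of the continuous-time idea, not of the authors' software: a discrete trajectory (for example, the poses exported by Cartographer) is interpolated so that every laser point can be transformed with the pose valid at its own timestamp. The variable names and input layout are assumptions.

```python
import numpy as np
from scipy.spatial.transform import Rotation, Slerp

def make_pose_interpolator(stamps, quats, trans):
    """stamps: (M,) seconds; quats: (M, 4) xyzw; trans: (M, 3) from the SLAM solution."""
    slerp = Slerp(stamps, Rotation.from_quat(quats))
    def pose_at(t):
        R = slerp(t).as_matrix()                                     # (len(t), 3, 3)
        p = np.column_stack([np.interp(t, stamps, trans[:, i]) for i in range(3)])
        return R, p
    return pose_at

def unwarp_scan(points, point_stamps, pose_at):
    """Transform each laser point with its own interpolated pose into the world frame."""
    R, p = pose_at(point_stamps)
    return np.einsum("nij,nj->ni", R, points) + p
```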


Author(s): Katashi Nagao, Menglong Yang, Yusuke Miyakawa

A method is presented that extends the real world into virtual space at the scale of entire buildings. This building-scale virtual reality (VR) method differs from augmented reality (AR) in that it uses automatically generated 3D point cloud maps of building interiors. It treats an entire indoor area as a pose tracking area by using data collected with an RGB-D camera mounted on a VR headset and using deep learning to build a model from the data. It modifies the VR space in accordance with its intended usage through segmentation and replacement of the 3D point clouds. This is difficult to do with AR but is essential if VR is to be used for actual real-world applications, such as disaster simulation, including simulation of fires and flooding in buildings. 3D pose tracking in the building-scale VR is more accurate than conventional RGB-D simultaneous localization and mapping.
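A minimal sketch of the segmentation-and-replacement step, assuming per-point semantic labels are already available from a learned segmentation model (not reproduced here); the class id and function names are illustrative assumptions.

```python
import numpy as np

def replace_class(points, labels, target_label, replacement_points):
    """Remove all points of one semantic class and splice in substitute geometry.

    points:             (N, 3) indoor map points
    labels:             (N,)   integer class per point
    target_label:       class id to remove (e.g. a hypothetical FURNITURE id)
    replacement_points: (M, 3) points of the virtual object to insert
    """
    mask = labels == target_label
    kept = points[~mask]
    if mask.any():
        # Place the virtual object at the centroid of the removed real object.
        offset = points[mask].mean(axis=0) - replacement_points.mean(axis=0)
        replacement_points = replacement_points + offset
    return np.vstack([kept, replacement_points])
```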


Sensors, 2021, Vol. 21 (4), pp. 1228
Author(s): Ting On Chan, Linyuan Xia, Yimin Chen, Wei Lang, Tingting Chen, ...

Ancient pagodas are usually part of popular tourist spots in many oriental countries due to their unique historical backgrounds. They are usually polygonal structures composed of multiple floors separated by eaves. In this paper, we propose a new method to investigate both the rotational and reflectional symmetry of such polygonal pagodas by developing novel geometric models that are fitted to 3D point clouds obtained from photogrammetric reconstruction. The geometric model consists of multiple polygonal pyramid/prism models sharing a common central axis. The method was verified on four datasets collected by an unmanned aerial vehicle (UAV) and a hand-held digital camera. The results indicate that the models fit accurately to the pagodas' point clouds. The symmetry was evaluated by rotating and reflecting the pagodas' point clouds after the point clouds were fully leveled using the estimated central axes. The results show RMSEs of 5.04 cm and 5.20 cm with respect to perfect (theoretical) rotational and reflectional symmetry, respectively, indicating that the examined pagodas are highly symmetric, both rotationally and reflectionally. The concept presented in the paper not only works for polygonal pagodas but can also readily be transferred to other pagoda-like objects, such as transmission towers.
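A minimal sketch of an n-fold rotational-symmetry check of this kind, assuming the cloud has already been leveled so that the estimated central axis is vertical and passes through `axis_xy`; the RMSE is computed from nearest-neighbour residuals after rotating the cloud by 2π/n. This is an illustration, not the authors' model-fitting procedure.

```python
import numpy as np
from scipy.spatial import cKDTree

def rotational_symmetry_rmse(points, axis_xy, n_fold):
    """points: (N, 3) leveled pagoda cloud; axis_xy: (2,) axis position in the XY plane."""
    theta = 2.0 * np.pi / n_fold
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    shift = np.array([axis_xy[0], axis_xy[1], 0.0])
    rotated = (points - shift) @ R.T + shift
    # Distance from every rotated point to its nearest original point.
    dist, _ = cKDTree(points).query(rotated)
    return float(np.sqrt(np.mean(dist ** 2)))
```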


2021, Vol. 13 (15), pp. 2868
Author(s): Yonglin Tian, Xiao Wang, Yu Shen, Zhongzheng Guo, Zilei Wang, ...

Three-dimensional information perception from point clouds is of vital importance for improving the ability of machines to understand the world, especially for autonomous driving and unmanned aerial vehicles. Data annotation for point clouds is one of the most challenging and costly tasks. In this paper, we propose a closed-loop, virtual-real interactive point cloud generation and model-upgrading framework called Parallel Point Clouds (PPCs). To the best of our knowledge, this is the first time that model training has been changed from an open-loop to a closed-loop mechanism. The feedback from the evaluation results is used to update the training dataset, benefiting from the flexibility of artificial scenes. Under this framework, a point-based LiDAR simulation model is proposed, which greatly simplifies the scanning operation. In addition, a group-based placing method is put forward to integrate hybrid point clouds by locating candidate positions for virtual objects in real scenes. Taking advantage of CAD models and mobile LiDAR devices, two hybrid point cloud datasets, i.e., ShapeKITTI and MobilePointClouds, are built for 3D detection tasks. With almost zero labor cost for annotating newly added objects, the models (PointPillars) trained with ShapeKITTI and MobilePointClouds achieved 78.6% and 60.0%, respectively, of the average precision of the model trained with real data on 3D detection.
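A minimal sketch of merging a virtual object into a real scan to form a hybrid point cloud, in the spirit of the placing step described above; the paper's group-based candidate search and LiDAR occlusion simulation are omitted, and all names and thresholds here are illustrative assumptions.

```python
import numpy as np

def place_virtual_object(scene_points, object_points, candidate_xy, radius=2.0):
    """Drop a CAD-sampled object onto the local ground at candidate_xy and merge it."""
    d_xy = np.linalg.norm(scene_points[:, :2] - candidate_xy, axis=1)
    local = scene_points[d_xy < radius]
    ground_z = np.percentile(local[:, 2], 5) if len(local) else 0.0   # local ground height
    target = np.array([candidate_xy[0], candidate_xy[1], ground_z])
    anchor = np.array([*object_points[:, :2].mean(axis=0), object_points[:, 2].min()])
    placed = object_points + (target - anchor)
    hybrid = np.vstack([scene_points, placed])
    # Labels come for free: 0 = real background, 1 = inserted (auto-annotated) object.
    labels = np.concatenate([np.zeros(len(scene_points), dtype=int),
                             np.ones(len(placed), dtype=int)])
    return hybrid, labels
```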


Geosciences, 2021, Vol. 11 (2), pp. 75
Author(s): Dario Carrea, Antonio Abellan, Marc-Henri Derron, Neal Gauvin, Michel Jaboyedoff

The use of 3D point clouds to improve the understanding of natural phenomena is currently applied in natural hazard investigations, including the quantification of rockfall activity. However, 3D point cloud treatment is typically accomplished using nondedicated (and not optimal) software. To fill this gap, we present an open-source, rockfall-specific package within an object-oriented toolbox developed in the MATLAB® environment. The proposed package offers a complete and semiautomatic 3D solution that spans from the extraction to the identification and volume estimation of rockfall sources, using state-of-the-art methods and newly implemented algorithms. To illustrate the capabilities of this package, we acquired a series of high-quality point clouds in a pilot study area, the La Cornalle cliff (western Switzerland), obtained robust volume estimations at different volumetric scales, and derived rockfall magnitude-frequency distributions, which assisted in the assessment of rockfall activity and long-term erosion rates. The case study shows the influence of the volume computation on the magnitude-frequency distribution and the ensuing interpretation of erosion processes.
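A minimal sketch of one such processing chain (not the toolbox's actual MATLAB implementation): change between two cliff scans is detected by cloud-to-cloud differencing, the changed points are clustered into individual rockfall sources, and a volume is estimated per cluster. All thresholds are illustrative assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree, ConvexHull
from sklearn.cluster import DBSCAN

def rockfall_volumes(epoch1, epoch2, change_thresh=0.10, eps=0.5, min_pts=30):
    """epoch1/epoch2: (N, 3) scans of the same cliff; returns per-event volumes in m^3."""
    # Points of the first epoch with no close counterpart in the second epoch.
    dist, _ = cKDTree(epoch2).query(epoch1)
    changed = epoch1[dist > change_thresh]
    if len(changed) < min_pts:
        return []
    labels = DBSCAN(eps=eps, min_samples=min_pts).fit_predict(changed)
    volumes = []
    for lbl in set(labels) - {-1}:                    # -1 is DBSCAN noise
        cluster = changed[labels == lbl]
        try:
            volumes.append(float(ConvexHull(cluster).volume))
        except Exception:                             # degenerate (e.g. coplanar) clusters
            pass
    return volumes
```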


Sensors, 2020, Vol. 21 (1), pp. 201
Author(s): Michael Bekele Maru, Donghwan Lee, Kassahun Demissie Tola, Seunghee Park

Modeling a structure in the virtual world using three-dimensional (3D) information enhances our understanding of how a structure reacts to any disturbance and aids in visualizing that behavior. Generally, 3D point clouds are used for determining structural behavioral changes. Light detection and ranging (LiDAR) is one of the crucial ways by which a 3D point cloud dataset can be generated. Additionally, 3D cameras are commonly used to develop a point cloud containing many points on the external surface of an object. The main objective of this study was to compare the performance of optical sensors, namely a depth camera (DC) and a terrestrial laser scanner (TLS), in estimating structural deflection. We also applied bilateral filtering techniques, which are commonly used in image processing, to the point cloud data to enhance their accuracy and increase the application prospects of these sensors in structural health monitoring. The results from these sensors were validated by comparing them with the outputs of a linear variable differential transformer sensor mounted on the beam during an indoor experiment. The results showed that the datasets obtained from both sensors were acceptable for nominal deflections of 3 mm and above because the error range was less than ±10%. However, the results obtained from the TLS were better than those obtained from the DC.
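A minimal sketch of a bilateral filter adapted to point clouds, in the spirit of the denoising step described above (the paper's exact formulation is not reproduced): each point is moved along its local normal by a weighted average of neighbour offsets, with spatial and range Gaussian weights. Parameter values are assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree

def bilateral_filter(points, radius=0.05, sigma_d=0.03, sigma_n=0.01):
    """points: (N, 3) noisy surface scan; returns a smoothed copy of the cloud."""
    tree = cKDTree(points)
    filtered = points.copy()
    for i, p in enumerate(points):
        idx = tree.query_ball_point(p, radius)
        if len(idx) < 5:
            continue
        nbrs = points[idx] - p
        # Local normal from PCA: eigenvector of the smallest eigenvalue.
        _, vecs = np.linalg.eigh(np.cov(nbrs.T))
        normal = vecs[:, 0]
        d_spatial = np.linalg.norm(nbrs, axis=1)      # spatial distance weight
        d_normal = nbrs @ normal                      # offset along the normal ("range" term)
        w = (np.exp(-d_spatial**2 / (2 * sigma_d**2))
             * np.exp(-d_normal**2 / (2 * sigma_n**2)))
        filtered[i] = p + normal * (np.sum(w * d_normal) / np.sum(w))
    return filtered
```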


Aerospace, 2018, Vol. 5 (3), pp. 94
Author(s): Hriday Bavle, Jose Sanchez-Lopez, Paloma Puente, Alejandro Rodriguez-Ramos, Carlos Sampedro, ...

This paper presents a fast and robust approach for estimating the flight altitude of multirotor Unmanned Aerial Vehicles (UAVs) using 3D point cloud sensors in cluttered, unstructured, and dynamic indoor environments. The objective is to present a flight altitude estimation algorithm that replaces conventional sensors such as laser altimeters, barometers, or accelerometers, which have several limitations when used individually. The proposed algorithm includes two stages: in the first stage, a fast clustering of the measured 3D point cloud data is performed, along with the segmentation of the clustered data into horizontal planes. In the second stage, these segmented horizontal planes are mapped based on their vertical distance with respect to the point cloud sensor's frame of reference, in order to provide a robust flight altitude estimate even in the presence of several static and dynamic ground obstacles. We validate our approach using the IROS 2011 Kinect dataset available in the literature, estimating the altitude of the RGB-D camera from the provided 3D point clouds. We further validate our approach using a point cloud sensor on board a UAV in several autonomous real flights, closing the altitude control loop with the flight altitude estimated by our proposed method in the presence of various static and dynamic ground obstacles. In addition, the implementation of our approach has been integrated into our open-source software framework for aerial robotics, called Aerostack.
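A minimal sketch of the two-stage idea, with simplifications: points below the sensor are grouped into horizontal "planes" by 1-D clustering of their vertical coordinate, and the altitude is taken as the distance to the lowest well-supported plane (the floor). Parameter names and thresholds are assumptions, not the authors' implementation.

```python
import numpy as np

def estimate_altitude(points_sensor_frame, bin_size=0.05, min_support=200):
    """points_sensor_frame: (N, 3) with z pointing up and origin at the sensor."""
    z = points_sensor_frame[:, 2]
    below = z[z < 0.0]                        # only surfaces below the UAV matter
    if below.size == 0:
        return None
    # Histogram the heights; each well-populated bin approximates one horizontal plane.
    edges = np.arange(below.min(), 0.0 + bin_size, bin_size)
    counts, _ = np.histogram(below, bins=edges)
    supported = np.where(counts >= min_support)[0]
    if supported.size == 0:
        return None
    lowest = supported[0]                     # lowest plane with enough points = floor
    plane_z = below[(below >= edges[lowest]) & (below < edges[lowest + 1])].mean()
    return float(-plane_z)                    # altitude above the floor plane
```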


Author(s): Uzair Nadeem, Mohammad A. A. K. Jalwana, Mohammed Bennamoun, Roberto Togneri, Ferdous Sohel
