Automatic Generation of Labeled 3D Point Clouds of Natural Environments with Gazebo

2020 ◽
Vol 10 (3) ◽
pp. 1140
Author(s):
Manuel Sánchez ◽
Jorge L. Martínez ◽
Jesús Morales ◽
Alfredo Robles ◽
Mariano Morán

Autonomous navigation of ground vehicles in natural environments requires continuously identifying traversable terrain. This paper develops traversability classifiers for the three-dimensional (3D) point clouds acquired by the mobile robot Andabata on non-slippery solid ground. To this end, different supervised learning techniques from the Python library Scikit-learn are employed. Training and validation are performed with synthetic 3D laser scans that were labelled automatically, point by point, with the robotic simulator Gazebo. Good prediction results are obtained for most of the developed classifiers, which have also been tested successfully on real 3D laser scans acquired by Andabata in motion.
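The abstract names Scikit-learn but not a specific model, so the following is a minimal sketch of point-wise traversability classification, assuming hand-crafted geometric features and a random-forest classifier; the feature files and their contents are hypothetical placeholders, not the authors' exact setup.

```python
# Minimal sketch of point-wise traversability classification with Scikit-learn.
# The feature set and the choice of RandomForestClassifier are assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Hypothetical arrays: one row of geometric features per 3D point
# (e.g., local slope, roughness, height variance) and a binary
# traversable/non-traversable label produced in simulation.
X = np.load("scan_features.npy")   # shape (n_points, n_features)
y = np.load("scan_labels.npy")     # shape (n_points,)

X_train, X_val, y_train, y_val = train_test_split(
    X, y, test_size=0.2, random_state=0)

clf = RandomForestClassifier(n_estimators=100, n_jobs=-1, random_state=0)
clf.fit(X_train, y_train)

# Validate on held-out synthetic points before applying to real scans.
print(classification_report(y_val, clf.predict(X_val)))
```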


Author(s):  
Andreas Kuhn ◽  
Hai Huang ◽  
Martin Drauschke ◽  
Helmut Mayer

High-resolution consumer cameras on Unmanned Aerial Vehicles (UAVs) allow for the cheap acquisition of highly detailed images, e.g., of urban regions. Via image registration by means of Structure from Motion (SfM) and Multi-View Stereo (MVS), the automatic generation of huge numbers of 3D points with a relative accuracy in the centimeter range is possible. Applications such as semantic classification need accurate 3D point clouds but do not benefit from an extremely high resolution/density. In this paper, we therefore propose a fast fusion of high-resolution 3D point clouds based on occupancy grids, and use the result for semantic classification. In contrast to state-of-the-art classification methods, we accept a certain percentage of outliers, arguing that they can be considered in the classification process when a per-point belief is determined during fusion. To this end, we employ an octree-based fusion which allows for the derivation of outlier probabilities. These probabilities give a belief for every 3D point, which is essential for the semantic classification to account for measurement noise. For an example point cloud with half a billion 3D points (cf. Figure 1), we show that our method reduces runtime, improves classification accuracy, and offers high scalability for large datasets.
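A simplified illustration of the fusion idea: the sketch below bins points into a flat voxel grid (a stand-in for the paper's octree) and derives a per-point belief from the occupancy of each cell, so sparsely supported points receive low belief. The voxel size and the belief formula are illustrative assumptions, not the authors' method.

```python
# Voxel-grid fusion with per-point belief, a simplified stand-in for
# the octree-based fusion described in the abstract.
import numpy as np

def fuse_with_belief(points: np.ndarray, voxel: float = 0.1):
    """points: (n, 3) array; returns one fused point per occupied voxel
    plus a belief in [0, 1] for every input point."""
    keys = np.floor(points / voxel).astype(np.int64)
    # Group points by voxel key.
    _, inverse, counts = np.unique(
        keys, axis=0, return_inverse=True, return_counts=True)
    support = counts[inverse]              # points sharing each point's voxel
    belief = 1.0 - 1.0 / (1.0 + support)   # more support -> higher belief
    # Fused cloud: centroid of each occupied voxel.
    fused = np.zeros((counts.size, 3))
    np.add.at(fused, inverse, points)
    fused /= counts[:, None]
    return fused, belief
```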


Author(s):  
W. Nguatem ◽  
M. Drauschke ◽  
H. Mayer

We present a workflow for the automatic generation of building models with levels of detail (LOD) 1 to 3 according to the CityGML standard (Gröger et al., 2012). We start by orienting unsorted image sets employing the approach of Mayer et al. (2012), compute depth maps using semi-global matching (SGM) (Hirschmüller, 2008), and fuse these depth maps to reconstruct dense 3D point clouds (Kuhn et al., 2014). Based on planes segmented from these point clouds, we have developed a stochastic method for roof model selection (Nguatem et al., 2013) and window model selection (Nguatem et al., 2014). We demonstrate our workflow up to the export into CityGML.
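The plane-segmentation step that precedes roof model selection could be realized, for instance, with a textbook RANSAC plane fit such as the sketch below; this is a generic illustration, not the authors' implementation.

```python
# Generic RANSAC plane fit of the kind used to segment roof planes
# from a dense point cloud.
import numpy as np

def ransac_plane(points: np.ndarray, n_iter: int = 500, tol: float = 0.05):
    """Return (normal, d, inlier_mask) for the best plane n.x + d = 0."""
    rng = np.random.default_rng(0)
    best_mask = np.zeros(len(points), dtype=bool)
    best_model = None
    for _ in range(n_iter):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(n)
        if norm < 1e-9:          # degenerate (collinear) sample, resample
            continue
        n /= norm
        d = -n.dot(p0)
        mask = np.abs(points @ n + d) < tol   # inliers within tolerance
        if mask.sum() > best_mask.sum():
            best_mask, best_model = mask, (n, d)
    if best_model is None:
        raise ValueError("no valid plane hypothesis found")
    return best_model[0], best_model[1], best_mask
```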


2020 ◽  
Vol 9 (9) ◽  
pp. 521
Author(s):  
Gilles-Antoine Nys ◽  
Florent Poux ◽  
Roland Billen

The insights provided by 3D city models greatly improve Smart Cities and their management policies. In the urban built environment, buildings are the most frequently studied and modeled features. The CityJSON format offers a lightweight and developer-friendly alternative to CityGML. This paper improves the usability of 3D models by providing an automatic generation method for CityJSON that ensures compactness, expressivity, and interoperability. In addition to a compliance rate in excess of 92% for geometry and topology, the generated model handles contextual information, such as metadata and refined levels of detail (LoD), in a built-in manner. By breaking down the building-generation process, it creates consistent building objects from the single source of Light Detection and Ranging (LiDAR) point clouds.
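For readers unfamiliar with CityJSON, the following hand-written fragment shows the kind of object such a pipeline emits: a single LoD1 building encoded as a Solid over a shared, quantized vertex list. The coordinates and attributes are illustrative, not output of the described method.

```python
# Minimal CityJSON 1.1 document with one LoD1 building as a Solid.
import json

vertices = [  # integer vertices, decoded via "transform" below
    [0, 0, 0], [1000, 0, 0], [1000, 1000, 0], [0, 1000, 0],
    [0, 0, 500], [1000, 0, 500], [1000, 1000, 500], [0, 1000, 500],
]
model = {
    "type": "CityJSON",
    "version": "1.1",
    "transform": {"scale": [0.01, 0.01, 0.01], "translate": [0.0, 0.0, 0.0]},
    "CityObjects": {
        "building-1": {
            "type": "Building",
            "attributes": {"source": "LiDAR"},
            "geometry": [{
                "type": "Solid",
                "lod": "1",
                "boundaries": [[  # one shell: bottom, top, four walls
                    [[3, 2, 1, 0]], [[4, 5, 6, 7]],
                    [[0, 1, 5, 4]], [[1, 2, 6, 5]],
                    [[2, 3, 7, 6]], [[3, 0, 4, 7]],
                ]],
            }],
        }
    },
    "vertices": vertices,
}
print(json.dumps(model, indent=2))
```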


Author(s):  
L. S. Runceanu ◽  
N. Haala

This work addresses the automatic reconstruction of objects useful for BIM, such as walls, floors, and ceilings, from meshed and texture-mapped 3D point clouds of indoor scenes. We therefore focus on the semantic segmentation of 3D indoor meshes as the initial step for the automatic generation of BIM models. Our investigations are based on the benchmark dataset ScanNet, which aims at the interpretation of 3D indoor scenes and provides 3D meshed representations as collected from low-cost range cameras. In our opinion, such RGB-D data has great potential for the automated reconstruction of BIM objects.
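As a rough illustration of how a semantic segmentation feeds BIM reconstruction, the sketch below filters the structural classes (walls, floors, ceilings) out of a hypothetical per-vertex label array; the label ids and file names are invented placeholders, not ScanNet's actual conventions.

```python
# Sketch of the segmentation-to-BIM step: keep only the structural
# classes relevant for BIM from per-vertex semantic labels.
import numpy as np

BIM_CLASSES = {1: "wall", 2: "floor", 3: "ceiling"}  # assumed id mapping

labels = np.load("scene_vertex_labels.npy")   # (n_vertices,) semantic ids
vertices = np.load("scene_vertices.npy")      # (n_vertices, 3) positions

for class_id, name in BIM_CLASSES.items():
    mask = labels == class_id
    print(f"{name}: {mask.sum()} vertices")
    # Per-class point sets serve as input for BIM object reconstruction.
    np.save(f"{name}_points.npy", vertices[mask])
```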


Author(s):  
A. Cefalu ◽  
N. Haala ◽  
S. Schmohl ◽  
I. Neumann ◽  
T. Genz

Photogrammetric data capture of complex 3D objects using UAV imagery has become commonplace. Software tools based on algorithms like Structure-from-Motion and multi-view stereo image matching enable the fully automatic generation of densely meshed 3D point clouds. In contrast, planning a suitable image network usually requires considerable effort from a human expert, since this step directly influences the precision and completeness of the resulting point cloud. Planning suitable camera stations can be rather complex, in particular for objects like buildings, bridges, and monuments, which frequently feature strong depth variations that must be acquired with high-resolution images at a short distance. In this paper, we present an automatic flight-mission planning tool which generates flight lines while aiming at camera configurations that maintain a roughly constant object distance, provide sufficient image overlap, and avoid unnecessary stations. Planning is based on a coarse Digital Surface Model and an approximate building outline. As a proof of concept, we use the tool within our research project MoVEQuaD, which aims at the reconstruction of building geometry at sub-centimetre accuracy.
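The core planning arithmetic can be illustrated with a back-of-the-envelope calculation: for a given object distance and desired overlaps, the image footprint fixes the exposure spacing along a flight line and the spacing between neighbouring lines. The camera parameters below are example values, not those of the project.

```python
# Exposure and flight-line spacing from object distance and overlap.
def footprint(sensor_mm: float, focal_mm: float, distance_m: float) -> float:
    """Image footprint on the object (m) for one sensor dimension."""
    return sensor_mm / focal_mm * distance_m

sensor_w, sensor_h, focal = 36.0, 24.0, 50.0  # full-frame camera, 50 mm lens
distance = 20.0                               # constant object distance (m)
forward_overlap, side_overlap = 0.8, 0.6

# Spacing shrinks as the required overlap grows.
base = footprint(sensor_h, focal, distance) * (1 - forward_overlap)
line_spacing = footprint(sensor_w, focal, distance) * (1 - side_overlap)
print(f"trigger every {base:.1f} m, lines {line_spacing:.1f} m apart")
```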

