FIRST STEPS TO AUTOMATED INTERIOR RECONSTRUCTION FROM SEMANTICALLY ENRICHED POINT CLOUDS AND IMAGERY

Author(s):  
L. S. Obrock ◽  
E. Gülch

The automated generation of a BIM model from sensor data is a huge challenge for the modeling of existing buildings. Currently, the measurements and analyses are time-consuming, allow little automation and require expensive equipment. We still lack an automated acquisition of semantic information about objects in a building. We present first results of our approach, based on imagery and derived products, aiming at a more automated modeling of the interior for a BIM building model. We examine the building parts and objects visible in the collected images using Deep Learning methods based on Convolutional Neural Networks. For the localization and classification of building parts we apply the FCN8s model for pixel-wise semantic segmentation. So far, we reach a pixel accuracy of 77.2 % and a mean Intersection over Union of 44.2 %. We then use the network for further reasoning on the images of the interior room. We combine the segmented images with the original images and use photogrammetric methods to produce a three-dimensional point cloud. We encode the extracted object types as colours of the 3D points and are thus able to uniquely classify the points in three-dimensional space. We also preliminarily investigate a simple extraction method for the colour and material of building parts. It is shown that the combined images are very well suited to extracting further semantic information for the BIM model. With the presented methods we see a sound basis for further automation of the acquisition and modeling of semantic and geometric information of interior rooms for a BIM model.
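
As a rough illustration of the reported metrics and the colour-coded classes, the sketch below (in Python; function names, class list and palette are hypothetical, not from the paper) computes pixel accuracy and mean IoU from dense label maps and maps per-point class indices to RGB colours:

```python
import numpy as np

def segmentation_metrics(pred, gt, num_classes):
    """Pixel accuracy and mean IoU from dense label maps (H x W integer arrays)."""
    # Confusion matrix: rows = ground truth, columns = prediction
    cm = np.bincount(num_classes * gt.ravel() + pred.ravel(),
                     minlength=num_classes ** 2).reshape(num_classes, num_classes)
    pixel_acc = np.diag(cm).sum() / cm.sum()
    iou = np.diag(cm) / (cm.sum(1) + cm.sum(0) - np.diag(cm) + 1e-9)
    return pixel_acc, iou.mean()

# Hypothetical palette: encode predicted building-part classes as point colours
PALETTE = np.array([[128, 64, 128],   # floor
                    [70, 70, 70],     # wall
                    [153, 153, 153],  # ceiling
                    [107, 142, 35]])  # other building parts

def colourize_points(point_labels):
    """Map per-point class indices to RGB so the class survives in the 3D point cloud."""
    return PALETTE[point_labels]
```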

2021 ◽  
Author(s):  
Simone Müller ◽  
Dieter Kranzlmüller

Based on the depth perception of individual stereo cameras, spatial structures can be derived as point clouds. The quality of such three-dimensional data is technically restricted by sensor limitations, recording latency, and insufficient object reconstruction caused by the surface representation. Additionally, external physical effects such as lighting conditions, material properties, and reflections can lead to deviations between real and virtual object perception. Such physical influences appear in rendered point clouds as geometrical imaging errors on surfaces and edges. We propose the simultaneous use of multiple, dynamically arranged cameras. The increased information density leads to more detail in the detection of the surroundings and in object representation. During a pre-processing phase, the collected data are merged and prepared. Subsequently, a logical analysis step examines the captured images and allocates them to three-dimensional space. For this purpose, it is necessary to create a new metadata set consisting of image and localisation data. The post-processing reworks and matches the locally assigned images. As a result, the dynamically moving images become comparable, so that a more accurate point cloud can be generated. For evaluation and better comparability we decided to use synthetically generated data sets. Our approach builds the foundation for the dynamic and real-time generation of digital twins with the aid of real sensor data.
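
A minimal sketch of the core merging idea, assuming each camera contributes a local point cloud and a camera-to-world pose as part of the metadata set (names and the 4×4 pose convention are assumptions, not the authors' implementation):

```python
import numpy as np

def merge_clouds(clouds, poses):
    """Fuse per-camera point clouds into one world-frame cloud.

    clouds: list of (N_i, 3) arrays in each camera's local frame
    poses:  list of (4, 4) camera-to-world transforms (the 'localisation data'
            part of the metadata set; the convention here is assumed)
    """
    merged = []
    for pts, T in zip(clouds, poses):
        homo = np.hstack([pts, np.ones((pts.shape[0], 1))])   # to homogeneous coords
        merged.append((homo @ T.T)[:, :3])                    # apply rigid transform
    return np.vstack(merged)
```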


2019 ◽  
Vol 8 (5) ◽  
pp. 213 ◽  
Author(s):  
Florent Poux ◽  
Roland Billen

Automation in point cloud data processing is central to knowledge discovery within decision-making systems. The definition of relevant features is often key for segmentation and classification, with automated workflows presenting the main challenge. In this paper, we propose a voxel-based feature engineering approach that better characterizes point clusters and provides strong support for supervised or unsupervised classification. We provide different levels of feature generalization to permit interoperable frameworks. First, we recommend a shape-based feature set (SF1) that leverages only the raw X, Y, Z attributes of any point cloud. Afterwards, we derive relationships and topology between voxel entities to obtain a three-dimensional (3D) structural connectivity feature set (SF2). We also provide a knowledge-based decision tree to permit infrastructure-related classification. We study the SF1/SF2 synergy within a new semantic segmentation framework for the constitution of a higher semantic representation of point clouds in relevant clusters. Finally, we benchmark the approach against novel and best-performing deep-learning methods using the full S3DIS dataset. We highlight good performance, easy integration, and high F1-scores (> 85 %) for planar-dominant classes that are comparable to state-of-the-art deep learning.
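
For illustration, a sketch of shape-only features computed per voxel from raw X, Y, Z using covariance eigenvalues; these are commonly used descriptors and stand in for the paper's SF1 set, whose exact definition may differ:

```python
import numpy as np

def shape_features(points):
    """Covariance/eigenvalue descriptors for the points falling inside one voxel.
    Commonly used shape-only features from raw X, Y, Z; the paper's SF1 may differ."""
    centred = points - points.mean(axis=0)
    cov = centred.T @ centred / max(len(points) - 1, 1)
    eigval, eigvec = np.linalg.eigh(cov)          # eigenvalues in ascending order
    l3, l2, l1 = eigval                           # so that l1 >= l2 >= l3
    eps = 1e-9
    normal = eigvec[:, 0]                         # direction of smallest variance
    return {
        "linearity":   (l1 - l2) / (l1 + eps),
        "planarity":   (l2 - l3) / (l1 + eps),
        "sphericity":  l3 / (l1 + eps),
        "verticality": 1.0 - abs(normal[2]),      # 0 for horizontal planar patches
    }
```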


2018 ◽  
Vol 25 (2) ◽  
pp. 47-56 ◽  
Author(s):  
Marek Kulawiak ◽  
Zbigniew Łubniewski

Abstract The technologies of sonar and laser scanning are an efficient and widely used source of spatial information on the underwater and above-ground environment, respectively. The measurement data are usually available in the form of groups of separate points located irregularly in three-dimensional space, known as point clouds. This data model has known disadvantages; therefore, in many applications a different form of representation, i.e. 3D surfaces composed of edges and facets, is preferred for modelling terrain or seabed relief as well as the shape of various objects. In the paper, the authors propose a new approach to 3D shape reconstruction from both multibeam and LiDAR measurements. It is based on a multi-step and, to some extent, adaptive process, in which the chosen set and sequence of stages may depend on the type and characteristic features of the processed data. The processing scheme includes: 1) pre-processing, which may include noise reduction, rasterization and pre-classification, 2) detection and separation of objects for dedicated processing (e.g. steep walls, masts), and 3) surface reconstruction in 3D by point cloud triangulation with the aid of several dedicated procedures. The benefits of using the proposed methods, including algorithms for detecting various features and improving the regularity of the data structure, are presented and discussed. Several different shape reconstruction algorithms were tested in combination with the proposed data processing methods, and the strengths and weaknesses of each algorithm were highlighted.
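
A minimal sketch of the generic denoise-and-triangulate stages using the Open3D library (file names, search radii and pivoting radii are illustrative assumptions, not the authors' tuned parameters):

```python
import open3d as o3d

# Denoise, estimate normals, triangulate: a generic stand-in for the paper's pipeline
pcd = o3d.io.read_point_cloud("survey.ply")                       # hypothetical input file
pcd, _ = pcd.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)   # noise reduction
pcd.estimate_normals(
    search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.5, max_nn=30))
mesh = o3d.geometry.TriangleMesh.create_from_point_cloud_ball_pivoting(
    pcd, o3d.utility.DoubleVector([0.3, 0.6, 1.2]))               # pivoting radii in metres
o3d.io.write_triangle_mesh("reconstructed_surface.ply", mesh)
```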


Author(s):  
E. Nocerino ◽  
F. Lago ◽  
D. Morabito ◽  
F. Remondino ◽  
L. Porzi ◽  
...  

During the last two decades we have witnessed great improvements in ICT hardware and software technologies. Three-dimensional content is now becoming commonplace in many applications. Although for many years 3D technologies have been used by researchers and experts for asset generation, nowadays these tools are becoming commercially available to every citizen. This is especially the case for smartphones, which are powerful enough and sufficiently widespread to support a huge variety of activities (e.g. payment, calling, communication, photography, navigation, localization), including, very recently, the possibility of running 3D reconstruction pipelines. The REPLICATE project tackles this particular issue, with the ambitious vision of enabling ubiquitous 3D creativity via the development of tools for mobile 3D-asset generation on smartphones and tablets. This article presents the REPLICATE project's concept and some of the ongoing activities, with particular attention paid to advances made in the first year of work. The article thus focuses on the definition of the system architecture, the selection of optimal frames for 3D cloud reconstruction, the automated generation of sparse and dense point clouds, mesh modelling techniques and post-processing actions. Experiments so far have concentrated on indoor objects and some simple heritage artefacts; however, in the long term we will target a larger variety of scenarios and communities.
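
One simple way to realise the selection of optimal frames is a sharpness filter; the sketch below uses the variance of the Laplacian in OpenCV as an assumed criterion, which is only one of several plausible selection rules (overlap, exposure, pose diversity), not necessarily the project's own:

```python
import cv2

def select_sharp_frames(frame_paths, blur_threshold=100.0):
    """Keep only frames sharp enough for 3D reconstruction.
    The threshold value is an assumption for illustration."""
    selected = []
    for path in frame_paths:
        gray = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2GRAY)
        sharpness = cv2.Laplacian(gray, cv2.CV_64F).var()   # variance of the Laplacian
        if sharpness >= blur_threshold:
            selected.append(path)
    return selected
```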


Author(s):  
M. Soilán ◽  
A. Nóvoa ◽  
A. Sánchez-Rodríguez ◽  
B. Riveiro ◽  
P. Arias

Abstract. Transport infrastructure monitoring has lately attracted increasing attention due to the rise in extreme natural hazards posed by climate change. Mobile Mapping Systems gather information regarding the state of the assets, which allows for more efficient decision-making. These systems provide information in the form of three-dimensional point clouds. Point cloud analysis through deep learning has emerged as a focal research area due to its wide application in areas such as autonomous driving. This paper applies the pioneering PointNet and the current state-of-the-art KPConv architecture to scene segmentation of railway tunnels, in order to validate their applicability compared with heuristic classification methods. The approach performs a multi-class classification of the most relevant tunnel components: ground, lining, wiring and rails. Both architectures are trained from scratch with heuristically classified point clouds of two different railway tunnels. Results show that, while both architectures are suitable for the proposed classification task, KPConv outperforms PointNet, with F1-scores over 97 % for the ground, lining and wiring classes, and over 90 % for rails. In addition, KPConv is tested using transfer learning, which gives F1-scores slightly lower than those of the model trained from scratch but shows better generalization capabilities.
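
The per-class F1-scores reported above can be reproduced from predicted and reference labels with a short evaluation helper (the class order and integer label encoding are assumptions):

```python
from sklearn.metrics import f1_score

CLASSES = ["ground", "lining", "wiring", "rails"]   # assumed label order

def per_class_f1(y_true, y_pred):
    """Per-class F1 for a segmented tunnel point cloud (labels as integer arrays)."""
    scores = f1_score(y_true, y_pred, labels=list(range(len(CLASSES))), average=None)
    return dict(zip(CLASSES, scores))
```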


Mathematics ◽  
2021 ◽  
Vol 9 (20) ◽  
pp. 2589
Author(s):  
Artyom Makovetskii ◽  
Sergei Voronin ◽  
Vitaly Kober ◽  
Aleksei Voronin

The registration of point clouds in three-dimensional space is an important task in many areas of computer vision, including robotics and autonomous driving. The purpose of registration is to find a rigid geometric transformation that aligns two point clouds. The registration problem can be affected by noise and partiality (the two point clouds overlap only partially). The Iterative Closest Point (ICP) algorithm is a common method for solving the registration problem. Recently, artificial neural networks have also been used for point cloud registration. The drawback of ICP and other registration algorithms is possible convergence to a local minimum; thus, an important characteristic of a registration algorithm is its ability to avoid local minima. In this paper, we propose an ICP-type registration algorithm (λ-ICP) that uses a multiparameter functional (λ-functional). The proposed λ-ICP algorithm generalizes the NICP (normal ICP) algorithm. The application of the λ-functional requires a consistent choice of the eigenvectors of the covariance matrices of the two point clouds, so the paper also proposes an algorithm for choosing the directions of the eigenvectors. The performance of the proposed λ-ICP algorithm is compared with that of a standard point-to-point ICP and the Deep Closest Point (DCP) neural network.
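
For reference, the point-to-point ICP baseline used in the comparison can be sketched with an SVD-based (Kabsch) transform estimate at each iteration; this is a generic implementation, not the proposed λ-ICP:

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_point_to_point(src, dst, iters=50):
    """Baseline point-to-point ICP aligning src (N x 3) onto dst (M x 3)."""
    R, t = np.eye(3), np.zeros(3)
    tree = cKDTree(dst)
    cur = src.copy()
    for _ in range(iters):
        _, idx = tree.query(cur)                       # closest-point correspondences
        matched = dst[idx]
        mu_s, mu_d = cur.mean(0), matched.mean(0)
        H = (cur - mu_s).T @ (matched - mu_d)          # cross-covariance
        U, _, Vt = np.linalg.svd(H)
        R_step = Vt.T @ U.T
        if np.linalg.det(R_step) < 0:                  # avoid reflections
            Vt[-1] *= -1
            R_step = Vt.T @ U.T
        t_step = mu_d - R_step @ mu_s
        cur = cur @ R_step.T + t_step
        R, t = R_step @ R, R_step @ t + t_step         # accumulate the rigid transform
    return R, t
```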


Author(s):  
Philipp-Roman Hirt ◽  
Yusheng Xu ◽  
Ludwig Hoegner ◽  
Uwe Stilla

Abstract Trees play an important role in the complex system of urban environments, and their benefits to the environment and to health are manifold. Yet, especially near streets, traffic can be impaired by limited clearance, and breaking tree parts can even cause injuries. Hence, it is important to capture the trees in a tree cadastre and to ensure regular monitoring. Mobile laser scanning (MLS) can be used for data acquisition, followed by an automated analysis of the point clouds acquired over time. The presented approach uses occupancy grids with a grid size of 10 cm, which enable the comparison of several epochs in three-dimensional space. Prior to that, a segmentation into single tree objects is conducted: after cylinder-based trunk localisation, closely neighbouring tree crowns are separated using weights derived from local point densities. Changes can therefore be derived for every single tree with regard to its parameters and its point cloud. The test area lies along an urban street in Munich, Germany, using the publicly available benchmark data sets TUM-MLS-2016/2018. For the evaluation, tree objects are geo-referenced and mapped in 2D, and the tree parameters height and diameter at breast height are derived. The geometric evaluation of the change analysis facilitates not only the detection of stock changes, but also the detection of shape changes of the tree objects.
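
A minimal sketch of the occupancy-grid comparison between epochs, assuming single-tree point clouds and the stated 10 cm cell size (function names and the common-origin handling are assumptions):

```python
import numpy as np

def occupancy_grid(points, cell=0.1, origin=None):
    """Voxelise a tree point cloud into a set of occupied cells (default 10 cm)."""
    origin = points.min(axis=0) if origin is None else origin
    idx = np.floor((points - origin) / cell).astype(int)
    return set(map(tuple, idx)), origin

def epoch_changes(points_2016, points_2018, cell=0.1):
    """Cells gained and lost between two epochs, indexed against a common origin."""
    origin = np.minimum(points_2016.min(axis=0), points_2018.min(axis=0))
    occ_a, _ = occupancy_grid(points_2016, cell, origin)
    occ_b, _ = occupancy_grid(points_2018, cell, origin)
    return occ_b - occ_a, occ_a - occ_b     # grown cells, removed cells
```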


2021 ◽  
Vol 15 (03) ◽  
pp. 293-312
Author(s):  
Fabian Duerr ◽  
Hendrik Weigel ◽  
Jürgen Beyerer

One of the key tasks for autonomous vehicles or robots is a robust perception of their 3D environment, which is why they are equipped with a wide range of different sensors. Building upon a robust sensor setup, understanding and interpreting the 3D environment is the next important step. Semantic segmentation of 3D sensor data, e.g. point clouds, provides valuable information for this task and is often seen as a key enabler for 3D scene understanding. This work presents an iterative deep fusion architecture for semantic segmentation of 3D point clouds, which builds upon a range image representation of the point clouds and additionally exploits camera features to increase accuracy and robustness. In contrast to other approaches, which fuse lidar and camera features only once, the proposed fusion strategy iteratively combines and refines lidar and camera features at different scales inside the network architecture. Additionally, the proposed approach can deal with camera failure and can jointly predict lidar and camera segmentation. We demonstrate the benefits of the presented iterative deep fusion approach on two challenging datasets, outperforming all range image-based lidar and fusion approaches. An in-depth evaluation underlines the effectiveness of the proposed fusion strategy and the potential of camera features for 3D semantic segmentation.
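
The range image representation underlying the fusion architecture can be sketched as a spherical projection of a lidar scan; the image size and vertical field-of-view values below are typical for a rotating scanner and are assumed, not taken from the paper:

```python
import numpy as np

def range_image(points, h=64, w=1024, fov_up=3.0, fov_down=-25.0):
    """Project a lidar scan (N x 3) onto an h x w range image via spherical coordinates."""
    r = np.linalg.norm(points, axis=1) + 1e-9
    yaw = np.arctan2(points[:, 1], points[:, 0])
    pitch = np.arcsin(points[:, 2] / r)
    fov_up, fov_down = np.radians(fov_up), np.radians(fov_down)
    u = ((1.0 - (yaw / np.pi + 1.0) / 2.0) * w).astype(int) % w           # column index
    v = ((fov_up - pitch) / (fov_up - fov_down) * h).clip(0, h - 1).astype(int)  # row index
    img = np.full((h, w), -1.0)          # -1 marks empty pixels
    img[v, u] = r                        # later points overwrite earlier ones
    return img
```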


Author(s):  
E. S. Malinverni ◽  
R. Pierdicca ◽  
M. Paolanti ◽  
M. Martini ◽  
C. Morbidoni ◽  
...  

<p><strong>Abstract.</strong> Cultural Heritage is a testimony of past human activity, and, as such, its objects exhibit great variety in their nature, size and complexity; from small artefacts and museum items to cultural landscapes, from historical building and ancient monuments to city centers and archaeological sites. Cultural Heritage around the globe suffers from wars, natural disasters and human negligence. The importance of digital documentation is well recognized and there is an increasing pressure to document our heritage both nationally and internationally. For this reason, the three-dimensional scanning and modeling of sites and artifacts of cultural heritage have remarkably increased in recent years. The semantic segmentation of point clouds is an essential step of the entire pipeline; in fact, it allows to decompose complex architectures in single elements, which are then enriched with meaningful information within Building Information Modelling software. Notwithstanding, this step is very time consuming and completely entrusted on the manual work of domain experts, far from being automatized. This work describes a method to label and cluster automatically a point cloud based on a supervised Deep Learning approach, using a state-of-the-art Neural Network called PointNet++. Despite other methods are known, we have choose PointNet++ as it reached significant results for classifying and segmenting 3D point clouds. PointNet++ has been tested and improved, by training the network with annotated point clouds coming from a real survey and to evaluate how performance changes according to the input training data. It can result of great interest for the research community dealing with the point cloud semantic segmentation, since it makes public a labelled dataset of CH elements for further tests.</p>

