Parametric Point Cloud Slicing for Facade Retrofitting

Author(s):  
Oscar Gámez Bohórquez ◽  
William Derigent ◽  
Hind Bril El Haouzi

Current commitments by European governments seek to reduce energy consumption as a means to cut carbon emissions from the building stock by 2050. In this context, retrieving reliable three-dimensional contours from point clouds becomes an important step in developing facade retrofitting solutions, since such projects often rely on as-built 3D models to reduce inaccuracies by narrowing interpretation and measurement errors. This work provides a method that uses topology-based parametric modelling to reconstruct building envelopes from point clouds. Through a semi-automated process with permanent visual feedback, the user adjusts parameters to custom standards of acceptability. A solution in the form of a Grasshopper definition delivers building-envelope 3D contours in various file formats as a means of increasing interoperability. The main contributions of this work are a parametric reconstruction workflow capable of solving building topology to retrieve 3D contours, a strategy to bypass point cloud occlusion, and a strategy for converting those contours into an IFC model directly from the parametric modelling environment.
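The slicing idea behind the title can be illustrated outside the authors' Grasshopper environment. The following is a minimal sketch, assuming the facade point cloud is available as an x/y/z text file (file name, slice height and band thickness are illustrative, and a convex hull only recovers convex outlines, unlike the topology-aware contours described above):

```python
# Minimal sketch (not the authors' Grasshopper definition): slice a facade
# point cloud at a given height and recover a rough 2D contour of the slice.
import numpy as np
from scipy.spatial import ConvexHull

points = np.loadtxt("facade.xyz")           # assumed N x 3 array of x, y, z
z0, half_thickness = 1.50, 0.05             # slice at 1.5 m, +/- 5 cm band

band = points[np.abs(points[:, 2] - z0) < half_thickness]   # points in the slice
hull = ConvexHull(band[:, :2])                               # 2D outline of the slice
contour = band[hull.vertices, :2]                            # ordered contour vertices

print(f"{len(band)} points in slice, contour with {len(contour)} vertices")
```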

Author(s):  
B. Sirmacek ◽  
R. Lindenbergh

Low-cost sensor generated 3D models can be useful for quick 3D urban model updating, yet the quality of the models is questionable. In this article, we evaluate the reliability of an automatic point cloud generation method using multi-view iPhone images or an iPhone video file as input. We register such an automatically generated point cloud onto a TLS point cloud of the same object to discuss the accuracy, advantages and limitations of the iPhone-generated point clouds. For the chosen showcase example, we classified 1.23% of the iPhone point cloud points as outliers and calculated the mean of the point-to-point distances to the TLS point cloud as 0.11 m. Since a TLS point cloud might also include measurement errors and noise, we computed local noise values for the point clouds from both sources. The mean (μ) and standard deviation (σ) of the roughness histograms are (μ₁ = 0.44 m, σ₁ = 0.071 m) and (μ₂ = 0.025 m, σ₂ = 0.037 m) for the iPhone and TLS point clouds, respectively. Our experimental results indicate possible usage of the proposed automatic 3D model generation framework for 3D urban map updating, fusion and detail enhancement, and quick, real-time change detection. However, further insights should first be obtained on the circumstances needed to guarantee successful point cloud generation from smartphone images.
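The evaluation step can be sketched as follows, assuming the iPhone cloud has already been registered onto the TLS reference; file names and the 0.5 m outlier threshold are assumptions, not the paper's settings:

```python
# Hedged sketch of the comparison only: point-to-point distances from a
# registered iPhone-derived cloud to a TLS reference, plus outlier fraction
# and mean/std statistics of the remaining distances.
import numpy as np
from scipy.spatial import cKDTree

iphone = np.loadtxt("iphone_cloud.xyz")   # N x 3, already registered
tls = np.loadtxt("tls_cloud.xyz")         # M x 3 reference cloud

dists, _ = cKDTree(tls).query(iphone)     # nearest TLS point for each iPhone point
outlier_mask = dists > 0.5                # assumed outlier threshold in metres

print(f"outliers: {100 * outlier_mask.mean():.2f} %")
print(f"mean point-to-point distance: {dists[~outlier_mask].mean():.3f} m")
print(f"std of distances: {dists[~outlier_mask].std():.3f} m")
```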


2021 ◽  
Vol 11 (13) ◽  
pp. 5941
Author(s):  
Mun-yong Lee ◽  
Sang-ha Lee ◽  
Kye-dong Jung ◽  
Seung-hyun Lee ◽  
Soon-chul Kwon

Computer-based data processing capabilities have evolved to handle very large volumes of information. As such, the complexity of three-dimensional (3D) models (e.g., animations or real-time voxels) containing large volumes of information has increased exponentially. This rapid increase in complexity has led to problems with recording and transmission. In this study, we propose a method for efficiently managing and compressing animation information stored in 3D point-cloud sequences. A compressed point cloud is created by reconfiguring the points based on their voxels. Compared with the original point cloud, noise caused by errors is removed, and a preprocessing procedure is proposed that achieves high performance in the subsequent redundancy-removal algorithm. The results of experiments and rendering demonstrate an average file-size reduction of 40% using the proposed algorithm. Moreover, 13% of the overlapping data are extracted and removed, further reducing the file size.
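A voxel-based reconfiguration of this kind can be illustrated with a short sketch; this is not the paper's codec, and the 1 cm voxel size and file name are assumptions:

```python
# Illustrative sketch of the voxel idea: snap points to a voxel grid, merge
# points that fall into the same voxel, and report the size reduction.
import numpy as np

points = np.loadtxt("frame_0001.xyz")          # one frame of the sequence, N x 3
voxel = 0.01                                   # assumed 1 cm voxel size

indices = np.floor(points / voxel).astype(np.int64)       # voxel index per point
voxels, inverse = np.unique(indices, axis=0, return_inverse=True)
inverse = inverse.ravel()

# Average the points of each occupied voxel to form the compressed frame.
compressed = np.zeros((voxels.shape[0], 3))
np.add.at(compressed, inverse, points)
compressed /= np.bincount(inverse)[:, None]

print(f"{points.shape[0]} points -> {compressed.shape[0]} voxels "
      f"({100 * (1 - compressed.shape[0] / points.shape[0]):.1f} % fewer)")
```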


2021 ◽  
Vol 11 (22) ◽  
pp. 10993
Author(s):  
Domenica Costantino ◽  
Gabriele Vozza ◽  
Vincenzo Saverio Alfio ◽  
Massimiliano Pepe

This paper presents a data-driven free-form modelling method dedicated to the parametric modelling of buildings with complex shapes located in particularly valuable Old Town Centres, using Airborne LiDAR Scanning (ALS) data and aerial imagery. The method aims to reconstruct and preserve the input point cloud based on the relative density of the data, and is built on geometric operations, iterative transformations between point clouds and meshes, and shape identification. The method was applied to a few buildings located in the Old Town Centre of Bordeaux (France). The 3D model produced shows a mean distance to the point cloud of 0.058 m and a standard deviation of 0.664 m. In addition, the influence of building footprint segmentation techniques on automatic and interactive model-driven modelling was investigated and, in order to identify the best approach, six different segmentation methods were tested. The segmentation was performed based on footprints derived from a Digital Surface Model (DSM), the point cloud, nadir images, and OpenStreetMap (OSM). The comparison between the models shows that the segmentation producing the most accurate and precise model is the interactive segmentation based on nadir images. This research also shows that, in modelling complex structures, the model-driven method can achieve high levels of accuracy when an interactive editing phase is included in building the 3D models.
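The reported accuracy figures (mean distance and standard deviation of model-to-cloud deviations) can be reproduced in principle with a short sketch; this assumes the reconstructed model is exported as a mesh file and the ALS data as an x/y/z file (names are illustrative, and this is not the authors' evaluation code):

```python
# Minimal sketch of a model-to-cloud accuracy check: distance from every
# LiDAR point to the nearest model surface, summarised as mean / std.
import numpy as np
import trimesh

mesh = trimesh.load("building_model.obj", force="mesh")
cloud = np.loadtxt("als_points.xyz")[:, :3]

_, distances, _ = mesh.nearest.on_surface(cloud)   # point-to-surface distances

print(f"mean model-to-cloud distance: {distances.mean():.3f} m")
print(f"standard deviation:           {distances.std():.3f} m")
```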


Sensors ◽  
2021 ◽  
Vol 21 (3) ◽  
pp. 884
Author(s):  
Chia-Ming Tsai ◽  
Yi-Horng Lai ◽  
Yung-Da Sun ◽  
Yu-Jen Chung ◽  
Jau-Woei Perng

Numerous sensors can obtain images or point cloud data on land; underwater, however, the rapid attenuation of electromagnetic signals and the lack of light restrict sensing functions. This study expands the use of two- and three-dimensional detection technologies to the underwater detection of abandoned tires. A three-dimensional acoustic sensor, the BV5000, is used to collect underwater point cloud data. Pre-processing steps are proposed to remove noise and the seabed from the raw data. The point clouds are then processed to obtain two data types: a 2D image and a 3D point cloud. Deep learning methods of different dimensionality are used to train the models. In the two-dimensional method, the point cloud is transformed into a bird's-eye-view image, and the Faster R-CNN and YOLOv3 network architectures are used to detect tires. In the three-dimensional method, the point cloud associated with a tire is cut out from the raw data and used as training data, and the PointNet and PointConv network architectures are then used for tire classification. The results show that both approaches provide good accuracy.
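The 2D branch hinges on rasterising the sonar point cloud into a bird's-eye-view image. A minimal sketch of that conversion follows; grid resolution and file name are assumptions, not the paper's parameters:

```python
# Hedged sketch: rasterise a (pre-processed, seabed-free) sonar point cloud
# into a bird's-eye-view image that a detector such as Faster R-CNN or
# YOLOv3 could consume.
import numpy as np

points = np.loadtxt("bv5000_scan.xyz")        # N x 3 points in metres
res = 0.05                                    # assumed 5 cm per pixel
x_min, y_min = points[:, 0].min(), points[:, 1].min()

cols = ((points[:, 0] - x_min) / res).astype(int)
rows = ((points[:, 1] - y_min) / res).astype(int)

bev = np.zeros((rows.max() + 1, cols.max() + 1), dtype=np.float32)
# Keep the highest point per cell so protruding objects such as tires stand out.
np.maximum.at(bev, (rows, cols), points[:, 2] - points[:, 2].min())

bev_image = (255 * bev / bev.max()).astype(np.uint8)   # 8-bit image for the detector
```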


Sensors ◽  
2020 ◽  
Vol 21 (1) ◽  
pp. 201
Author(s):  
Michael Bekele Maru ◽  
Donghwan Lee ◽  
Kassahun Demissie Tola ◽  
Seunghee Park

Modeling a structure in the virtual world using three-dimensional (3D) information enhances our understanding of how the structure reacts to a disturbance, while also aiding its visualization. Generally, 3D point clouds are used for determining structural behavioral changes. Light detection and ranging (LiDAR) is one of the principal ways by which a 3D point cloud dataset can be generated; 3D cameras are also commonly used to produce a point cloud consisting of many points on the external surface of an object. The main objective of this study was to compare the performance of two optical sensors, a depth camera (DC) and a terrestrial laser scanner (TLS), in estimating structural deflection. We also applied bilateral filtering, a technique commonly used in image processing, to the point cloud data to enhance their accuracy and increase the application prospects of these sensors in structural health monitoring. The results from these sensors were validated by comparing them with the output of a linear variable differential transformer sensor mounted on the beam during an indoor experiment. The results showed that the datasets obtained from both sensors were acceptable for nominal deflections of 3 mm and above, because the error range was less than ±10%. However, the results obtained from the TLS were better than those obtained from the DC.
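As a rough illustration of the filtering step, the sketch below applies an edge-preserving bilateral filter to a depth-camera frame before it is converted to a point cloud; this is a stand-in for the point cloud filtering described above, and the parameter values are assumptions rather than the paper's tuned settings:

```python
# Illustrative sketch: bilateral filtering of a depth frame so that edges are
# preserved while measurement noise is smoothed.
import numpy as np
import cv2

depth = np.load("depth_frame.npy").astype(np.float32)   # assumed H x W depth map in metres

# d = pixel neighbourhood diameter; sigmaColor limits how different two depth
# values may be and still be averaged; sigmaSpace controls the spatial reach.
filtered = cv2.bilateralFilter(depth, d=7, sigmaColor=0.03, sigmaSpace=5)

np.save("depth_frame_filtered.npy", filtered)
```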


Author(s):  
J. Zhu ◽  
Y. Xu ◽  
L. Hoegner ◽  
U. Stilla

Abstract. In this work, we discuss how to directly combine thermal infrared (TIR) images and point clouds without additional assistance from GCPs or 3D models. Specifically, we propose a point-based co-registration process for combining TIR images and the point cloud of buildings. Keypoints are extracted from the images and point clouds via primitive segmentation and corner detection; pairs of corresponding points are then identified manually. After that, the camera pose is estimated with the EPnP algorithm. Finally, a point cloud enriched with the thermal information provided by the IR images is generated, which is helpful in tasks such as energy inspection, leakage detection, and abnormal condition monitoring. This paper provides more insight into the possibility of, and ideas for, combining TIR images and point clouds.
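The pose-estimation step can be sketched with OpenCV's EPnP solver, assuming the manually matched 3D/2D keypoint pairs are stored in files; the intrinsic matrix values below are placeholders, not the authors' calibration:

```python
# Minimal sketch: recover the TIR camera pose from matched 3D keypoints
# (point cloud) and 2D keypoints (TIR image) with the EPnP solver.
import numpy as np
import cv2

object_points = np.load("cloud_keypoints.npy").astype(np.float32)   # K x 3
image_points = np.load("tir_keypoints.npy").astype(np.float32)      # K x 2

K = np.array([[800.0, 0.0, 320.0],        # assumed focal lengths / principal point
              [0.0, 800.0, 256.0],
              [0.0, 0.0, 1.0]], dtype=np.float32)
dist_coeffs = np.zeros(5, dtype=np.float32)

ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, dist_coeffs,
                              flags=cv2.SOLVEPNP_EPNP)
R, _ = cv2.Rodrigues(rvec)   # rotation matrix of the TIR camera
print("pose found:", ok, "\nR =", R, "\nt =", tvec.ravel())
```

With the pose known, each point of the cloud can be projected into the TIR image to sample a temperature value, yielding the thermally enriched point cloud described above.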


Author(s):  
J. Boehm ◽  
K. Liu ◽  
C. Alis

In the geospatial domain we have now reached the point where the data volumes we handle have clearly grown beyond the capacity of most desktop computers. This is particularly true in the area of point cloud processing. It is therefore natural to explore established big data frameworks for big geospatial data. The very first hurdle is the import of geospatial data into big data frameworks, commonly referred to as data ingestion. Geospatial data is typically encoded in specialised binary file formats, which are not natively supported by existing big data frameworks. Instead, such file formats are supported by software libraries that are restricted to single-CPU execution. We present an approach that allows the use of existing point cloud file format libraries on the Apache Spark big data framework and demonstrate the ingestion of large volumes of point cloud data into a compute cluster. The approach uses a map function to distribute the data ingestion across the nodes of a cluster. We test the capability of the proposed method to load billions of points into a commodity-hardware compute cluster and discuss the implications for scalability and performance. The performance is benchmarked against an existing native Apache Spark data import implementation.
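The general pattern (not the authors' implementation) can be sketched in PySpark, assuming the LAS tiles sit on a filesystem visible to every worker and using the single-threaded laspy library inside each map task; paths and tile count are illustrative:

```python
# Hedged sketch: distribute the reading of LAS tiles across a Spark cluster
# with a map-style transformation, using an ordinary point cloud library
# (laspy) inside each task.
import numpy as np
import laspy
from pyspark import SparkContext

def read_tile(path):
    """Read one LAS file and return its points as an N x 3 array."""
    las = laspy.read(path)
    return [(path, np.vstack((las.x, las.y, las.z)).T)]

sc = SparkContext(appName="pointcloud-ingest")
# Assumed tile list on a shared filesystem reachable from every worker node.
tiles = ["/data/tiles/tile_%04d.las" % i for i in range(1024)]

points = sc.parallelize(tiles, numSlices=len(tiles)).flatMap(read_tile)
total = points.map(lambda kv: kv[1].shape[0]).sum()
print("ingested points:", total)
```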


Author(s):  
W. Ostrowski ◽  
M. Pilarska ◽  
J. Charyton ◽  
K. Bakuła

Creating 3D building models at a large scale is becoming more popular and finds many applications. Nowadays, the broad term "3D building models" can be applied to several types of products: the well-known CityGML solid models (available at several Levels of Detail), which are mainly generated from Airborne Laser Scanning (ALS) data, as well as 3D mesh models that can be created from both nadir and oblique aerial images. City authorities and national mapping agencies are interested in obtaining 3D building models. Apart from the completeness of the models, the accuracy aspect is also important. The final accuracy of a building model depends on various factors (accuracy of the source data, complexity of the roof shapes, etc.). In this paper, a methodology for the inspection of datasets containing 3D models is presented. The proposed approach checks every building in the dataset against ALS point clouds, testing both accuracy and level of detail. Analysing statistical parameters of the normal heights between the reference point cloud and the tested planes, together with segmentation of the point cloud, provides a tool that can indicate which buildings and which roof planes do not fulfil the requirements of model accuracy and detail correctness. The proposed method was tested on two datasets: a solid model and a mesh model.
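A per-plane check of this kind can be illustrated with a short sketch (not the authors' full pipeline): fit a plane to the ALS points assigned to one roof face and flag the face if the residual statistics exceed a tolerance. The 0.10 m tolerance and file name are assumptions:

```python
# Illustrative sketch: least-squares plane fit to the ALS points of one roof
# face, followed by a simple accuracy check on the point-to-plane residuals.
import numpy as np

roof_points = np.loadtxt("roof_face_points.xyz")       # ALS points of one roof plane
tolerance = 0.10                                       # assumed accuracy requirement, metres

centroid = roof_points.mean(axis=0)
# The plane normal is the singular vector of the smallest singular value.
_, _, vt = np.linalg.svd(roof_points - centroid)
normal = vt[-1]

residuals = (roof_points - centroid) @ normal          # signed point-to-plane distances
rmse = np.sqrt(np.mean(residuals ** 2))

print(f"RMSE = {rmse:.3f} m ->", "OK" if rmse <= tolerance else "check this roof plane")
```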


Author(s):  
Y. Hori ◽  
T. Ogawa

The implementation of laser scanning in the field of archaeology provides us with an entirely new dimension in research and surveying. It allows us to digitally recreate individual objects, or entire cities, using millions of three-dimensional points grouped together in what are referred to as "point clouds". In addition, the visualization of the point cloud data, which can be used in the final report by archaeologists and architects, is usually produced as a JPG or TIFF file. Beyond the visualization of point cloud data, the re-examination of older data and new surveys of Roman construction using remote-sensing technology for precise and detailed measurements yield new information that may lead to revised drawings of ancient buildings, drawings which had previously been adduced as evidence without any consideration of their degree of accuracy, and can ultimately open new lines of research on ancient buildings. We used laser scanners in the field because of their speed, comprehensive coverage, accuracy and flexibility of data manipulation. We therefore "skipped" much of the post-processing and focused on the images created from the metadata, simply aligned using a tool that extends an automatic feature-matching algorithm and a popular renderer that can provide graphic results.


2018 ◽  
Vol 8 (2) ◽  
pp. 20170048 ◽  
Author(s):  
M. I. Disney ◽  
M. Boni Vicari ◽  
A. Burt ◽  
K. Calders ◽  
S. L. Lewis ◽  
...  

Terrestrial laser scanning (TLS) is providing exciting new ways to quantify tree and forest structure, particularly above-ground biomass (AGB). We show how TLS can address some of the key uncertainties and limitations of current approaches to estimating AGB based on empirical allometric scaling equations (ASEs) that underpin all large-scale estimates of AGB. TLS provides extremely detailed non-destructive measurements of tree form independent of tree size and shape. We show examples of three-dimensional (3D) TLS measurements from various tropical and temperate forests and describe how the resulting TLS point clouds can be used to produce quantitative 3D models of branch and trunk size, shape and distribution. These models can drastically improve estimates of AGB, provide new, improved large-scale ASEs, and deliver insights into a range of fundamental tree properties related to structure. Large quantities of detailed measurements of individual 3D tree structure also have the potential to open new and exciting avenues of research in areas where difficulties of measurement have until now prevented statistical approaches to detecting and understanding underlying patterns of scaling, form and function. We discuss these opportunities and some of the challenges that remain to be overcome to enable wider adoption of TLS methods.

