Geometry Extraction for High Resolution Building Energy Modelling Applications from Point Cloud Data: A Case Study of a Factory Facility

2017 ◽  
Vol 142 ◽  
pp. 1805-1810 ◽  
Author(s):  
Tom Lloyd Garwood ◽  
Ben Richard Hughes ◽  
Dominic O’Connor ◽  
John K Calautit ◽  
Michael R Oates ◽  
...  
Author(s):  
Romina Dastoorian ◽  
Ahmad E. Elhabashy ◽  
Wenmeng Tian ◽  
Lee J. Wells ◽  
Jaime A. Camelio

With the latest advancements in three-dimensional (3D) measurement technologies, obtaining 3D point cloud data for inspection purposes in manufacturing is becoming more common. While 3D point cloud data allow for better inspection capabilities, their analysis is typically challenging. Especially with unstructured 3D point cloud data, containing coordinates at random locations, the challenges increase with higher levels of noise and larger volumes of data. Hence, the objective of this paper is to extend the previously developed Adaptive Generalized Likelihood Ratio (AGLR) approach to handle unstructured 3D point cloud data used for automated surface defect inspection in manufacturing. More specifically, the AGLR approach was implemented in a practical case study to inspect twenty-seven samples, each with a unique fault. These faults were designed to cover an array of possibilities, spanning three sizes, three magnitudes, and three locations. The results show that the AGLR approach can indeed differentiate between non-faulty surfaces and a varying range of faulty surfaces while pinpointing the fault location. This work also serves as a validation of the previously developed AGLR approach in a practical scenario.
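
The abstract does not spell out the test statistic, but the flavour of window-based defect screening on unstructured point cloud deviations can be sketched as below. This is a deliberately simplified, hypothetical illustration in Python, not the authors' AGLR implementation: the function name, the square windowing, the global noise estimate, and the 4-sigma threshold are all assumptions made for the example.

import numpy as np

def flag_defective_windows(points, deviations, window=5.0, threshold_sigma=4.0):
    """Bin unstructured (x, y) points into square windows and flag windows
    whose mean deviation from the nominal surface is anomalously large.

    points          : (N, 2) array of x, y coordinates (unstructured).
    deviations      : (N,) signed distances from the nominal surface.
    window          : window edge length, same units as the coordinates.
    threshold_sigma : threshold in units of the estimated noise std dev.
    """
    ij = np.floor(points / window).astype(int)           # window index per point
    keys, inverse = np.unique(ij, axis=0, return_inverse=True)
    inverse = inverse.ravel()
    sigma = np.std(deviations)                            # crude global noise estimate
    flagged = []
    for k in range(len(keys)):
        d = deviations[inverse == k]
        if len(d) < 5:
            continue                                      # too few points to judge
        z = abs(d.mean()) * np.sqrt(len(d)) / sigma       # standardized window mean
        if z > threshold_sigma:
            flagged.append((tuple(keys[k] * window), float(z)))
    return flagged                                        # (window origin, score)

# Example: a flat nominal surface with one small raised "fault" injected
rng = np.random.default_rng(0)
pts = rng.uniform(0, 100, size=(20000, 2))
dev = rng.normal(0, 0.05, size=20000)
fault = (np.abs(pts[:, 0] - 30) < 3) & (np.abs(pts[:, 1] - 70) < 3)
dev[fault] += 0.3
print(flag_defective_windows(pts, dev))                   # reports windows near (30, 70)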


Author(s):  
S. D. Jawak ◽  
S. N. Panditrao ◽  
A. J. Luis

This work uses a canopy height model (CHM) based workflow for individual tree crown delineation and a 3D feature extraction approach (Overwatch Geospatial's proprietary algorithm) for building feature delineation from high-density light detection and ranging (LiDAR) point cloud data in an urban environment, and evaluates its accuracy using very high-resolution panchromatic (PAN) and 8-band multispectral WorldView-2 (WV-2) imagery. LiDAR point cloud data over San Francisco, California, USA, recorded in June 2010, was used to detect tree and building features by classifying point elevation values. The workflow includes resampling of the LiDAR point cloud to generate a raster surface or digital terrain model (DTM), generation of a hill-shade image and an intensity image, extraction of a digital surface model (DSM), generation of a bare-earth digital elevation model (DEM), and extraction of tree and building features. First, the optical WV-2 data and the LiDAR intensity image were co-registered using ground control points (GCPs). The WV-2 rational polynomial coefficients (RPC) model was executed in the ERDAS Leica Photogrammetry Suite (LPS) using a supplementary *.RPB file, and ortho-rectification was then carried out in ERDAS LPS by incorporating well-distributed GCPs; the root mean square error (RMSE) for the WV-2 imagery was estimated to be 0.25 m using more than 10 well-distributed GCPs. In the second stage, we generated the bare-earth DEM from the LiDAR point cloud data. In most cases a bare-earth DEM does not represent true ground elevation, so the model was edited to obtain the most accurate DEM/DTM possible, and the LiDAR point cloud was normalized with respect to the DTM to reduce the effect of undulating terrain. The vegetation point cloud values were normalized by subtracting the ground surface (DEM) from the LiDAR point cloud, and a normalized digital surface model (nDSM), or CHM, was calculated from the LiDAR data by subtracting the DEM from the DSM. The CHM, or normalized DSM, represents the absolute height of all above-ground urban features relative to the ground; after normalization, the elevation value of a point indicates its height above the ground. The above-ground points were used for tree feature and building footprint extraction. For individual tree extraction, first and last return point clouds were used along with the bare-earth and building footprint models discussed above. In this study, scene-dependent extraction criteria were employed to improve the 3D feature extraction process, and the LiDAR-based refining/filtering techniques used for bare-earth layer extraction were crucial for improving the subsequent extraction of 3D features (trees and buildings). The PAN-sharpened WV-2 image (0.5 m spatial resolution) was used to assess the accuracy of the LiDAR-based 3D feature extraction. Our analysis yielded an accuracy of 98 % for tree feature extraction and 96 % for building feature extraction from the LiDAR data. The study extracted a total of 15,143 tree features using the CHM method, of which 14,841 were visually interpreted on the PAN-sharpened WV-2 image data; the extracted tree features included both shadowed (13,830) and non-shadowed (1,011) trees. The CHM method overestimated 302 tree features that were not observed on the WV-2 image, and one potential source of this overestimation was tree features adjacent to buildings.

For building feature extraction, the algorithm extracted a total of 6,117 building features that were interpreted on the WV-2 image, even capturing buildings under trees (605) and buildings under shadow (112). Overestimation of tree and building features was observed to be a limiting factor in the 3D feature extraction process, caused by incorrect filtering of the point cloud in those areas. One potential source of overestimation was man-made structures, including skyscrapers and bridges, which were confounded with and extracted as buildings. This can be attributed to the low point density at building edges and on flat roofs, and to occlusions, because of which LiDAR cannot provide planimetric accuracy as precise as photogrammetric techniques (in segmentation), as well as to the lack of optimal use of textural and contextual information (especially at walls away from the roof) in the automatic extraction algorithm. In addition, there were no separate classes for bridges or for features lying in water, and multiple water height levels were not considered. Based on these inferences, we conclude that LiDAR-based 3D feature extraction supplemented by high-resolution satellite data is a promising application for understanding and characterizing urban environments.
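
The normalization step at the core of this workflow, CHM (or nDSM) = DSM − DEM, is simple enough to sketch directly. The snippet below is a minimal illustration assuming two co-registered raster arrays of identical shape and a common nodata value; reading the rasters from disk (e.g. with a GIS library) is omitted, and the function name is ours.

import numpy as np

def canopy_height_model(dsm, dem, nodata=-9999.0):
    """Return CHM = DSM - DEM, masking nodata cells and clipping small
    negative values caused by interpolation noise to zero."""
    dsm = np.asarray(dsm, dtype=float)
    dem = np.asarray(dem, dtype=float)
    chm = np.where((dsm == nodata) | (dem == nodata), nodata, dsm - dem)
    chm[(chm != nodata) & (chm < 0)] = 0.0   # above-ground height cannot be negative
    return chm

# Toy example: a 3x3 surface with one 12 m tree over flat 5 m terrain
dsm = np.full((3, 3), 5.0); dsm[1, 1] = 17.0
dem = np.full((3, 3), 5.0)
print(canopy_height_model(dsm, dem))         # centre cell -> 12.0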


2020 ◽  
Author(s):  
Meiert W. Grootes ◽  
Christiaan Meijer ◽  
Zsofia Koma ◽  
Bouwe Andela ◽  
Elena Ranguelova ◽  
...  

LiDAR, as a remote sensing technology enabling the rapid 3D characterization of an area from an air- or spaceborne platform, has become a mainstream tool in the (bio)geosciences and related disciplines. For instance, LiDAR-derived metrics are used for characterizing vegetation type, structure, and prevalence, and are widely employed across ecosystem research, forestry, and ecology/biology. Furthermore, these types of metrics are key candidates in the quest for Essential Biodiversity Variables (EBVs) suited to quantifying habitat structure, reflecting the importance of this property in assessing and monitoring the biodiversity of flora and fauna, and consequently in informing policy to safeguard it in the light of climate change and human impact.

In all these use cases, the power of LiDAR point cloud datasets resides in the information encoded within the spatial distribution of LiDAR returns, which can be extracted by calculating domain-specific statistical/ensemble properties of well-defined subsets of points.

Facilitated by technological advances, the volume of point cloud data provided by LiDAR has steadily increased, with modern airborne laser scanning surveys now providing high-resolution, (super-)national scale datasets, tens to hundreds of terabytes in size and encompassing hundreds of billions of individual points, many of which are available as open data.

Representing a trove of data and, for the first time, enabling the study of ecosystem structure at meter resolution over extents of tens to hundreds of kilometers, these datasets are highly valuable new resources. However, their scientific exploitation is hindered by the scarcity of Free Open Source Software (FOSS) tools capable of handling the challenges of accessing, processing, and extracting meaningful information from massive multi-terabyte datasets, as well as by the domain-specificity of existing tools.

Here we present Laserchicken, a FOSS, user-extendable, cross-platform Python tool for extracting user-defined statistical properties of flexibly defined subsets of point cloud data, aimed at enabling efficient, scalable, and distributed processing of multi-terabyte datasets. Laserchicken can be seamlessly employed on computing architectures ranging from desktop systems to distributed clusters, and supports standard point cloud and geo-data formats (LAS/LAZ, PLY, GeoTIFF, etc.), making it compatible with a wide range of (FOSS) tools for geoscience.

The Laserchicken feature extraction tool is complemented by a FOSS Python processing pipeline tailored to the scientific exploitation of massive nation-scale point cloud datasets, together forming the Laserchicken framework.

The ability of the Laserchicken framework to unlock nation-scale LiDAR point cloud datasets is demonstrated by its use in the eEcoLiDAR project, a collaborative project between the University of Amsterdam and the Netherlands eScience Center. Within the eEcoLiDAR project, Laserchicken has been instrumental in defining classification methods for wetland habitats, as well as in facilitating the use of high-resolution vegetation structure metrics in modelling species distributions at national scales, with preliminary results highlighting the importance of including this information.

The Laserchicken framework rests on FOSS, including the GDAL and PDAL libraries as well as numerous packages hosted on the open source Python Package Index (PyPI), and is itself available as FOSS (https://pypi.org/project/laserchicken/ and https://github.com/eEcoLiDAR/).
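
To give a concrete sense of the kind of computation involved, the sketch below derives one simple vegetation-structure feature (the standard deviation of point heights) for every cell of a regular target grid using plain NumPy. It illustrates the statistic-over-point-subsets idea described above; it is not the Laserchicken API, and the cell size and chosen statistic are arbitrary assumptions.

import numpy as np

def grid_feature(xyz, cell=10.0, statistic=np.std):
    """Compute `statistic` of point heights per (cell x cell) grid cell.

    xyz : (N, 3) array of LiDAR point coordinates.
    Returns a dict mapping (col, row) cell indices to the feature value.
    """
    cells = np.floor(xyz[:, :2] / cell).astype(int)
    keys, inverse = np.unique(cells, axis=0, return_inverse=True)
    inverse = inverse.ravel()
    return {tuple(keys[k]): float(statistic(xyz[inverse == k, 2]))
            for k in range(len(keys))}

# Example: height roughness (std of z) on synthetic points
rng = np.random.default_rng(1)
pts = np.column_stack([rng.uniform(0, 50, 5000),
                       rng.uniform(0, 50, 5000),
                       rng.normal(2.0, 0.8, 5000)])
features = grid_feature(pts, cell=10.0, statistic=np.std)
print(len(features), "cells computed")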


2013 ◽  
Vol 331 ◽  
pp. 631-635
Author(s):  
Ci Zhang ◽  
Guo Fan Hu ◽  
Xu Bing Chen

In reverse engineering, data pre-processing plays an increasingly important role in rebuilding the original 3D model. However, it is usually complex, time-consuming, and difficult to realize, as huge amounts of redundant 3D data exist in the acquired point cloud. To address this issue, point cloud data processing and streamlining technologies are reviewed first. Secondly, a novel pre-processing approach is proposed in three steps: point cloud registration, regional 3D triangular mesh construction, and point cloud filtering. The projected hexagonal area and the closest projected point are then defined. Finally, a parabolic antenna model is employed as a case study. After pre-processing, the number of points decreased from 4,066,282 to 449,806 with the triangular grid size h set to 2 mm, i.e., about 1/9 the size of the original point cloud. The result demonstrates the feasibility and efficiency of the approach.
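
As a rough analogue of the streamlining idea (not the authors' exact triangular/hexagonal construction), the sketch below partitions the projected (x, y) plane into square cells of size h and keeps, per cell, only the point closest to the cell centre; this is how such a size constraint drives down the point count.

import numpy as np

def decimate_closest_to_centre(points, h=2.0):
    """points : (N, 3) array; h : cell size in the same units (e.g. mm).
    Returns the reduced (M, 3) array with one representative point per cell."""
    cells = np.floor(points[:, :2] / h).astype(int)
    keys, inverse = np.unique(cells, axis=0, return_inverse=True)
    inverse = inverse.ravel()
    centres = (keys + 0.5) * h
    keep = np.empty(len(keys), dtype=int)
    for k in range(len(keys)):
        idx = np.flatnonzero(inverse == k)
        d2 = np.sum((points[idx, :2] - centres[k]) ** 2, axis=1)
        keep[k] = idx[np.argmin(d2)]
    return points[keep]

# Example: a random cloud reduced by keeping one point per 2 mm cell
rng = np.random.default_rng(2)
cloud = rng.uniform(0, 100, size=(100000, 3))
print(cloud.shape, "->", decimate_closest_to_centre(cloud, h=2.0).shape)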


2021 ◽  
Vol 2021 ◽  
pp. 1-17
Author(s):  
Peerasit Mahasuwanchai ◽  
Chainarong Athisakul ◽  
Phasu Sairuamyat ◽  
Weerachart Tangchirapat ◽  
Sutat Leelataviwat ◽  
...  

This article presents an alternative method for the long-term monitoring of heritage pagodas in Thailand. In this method, terrestrial laser scanning (TLS) is used in combination with permanent survey markers. The Wat (temple) Krachee in the Ayutthaya Province of Thailand was chosen as a case study. This temple has several remarkable elements, including an inverted bell-shaped pagoda, two intertwined trees growing within it, and a chamber inside the pagoda. The preservation team working on the pagoda faced a challenging decision of whether or not to trim the tree, since this has a long-term effect on the pagoda's structural stability. A high-accuracy terrestrial laser scanner was used to collect three-dimensional point cloud data, and permanent survey markers were constructed in 2018 for use in long-term monitoring. The 3D surveying of the temple and the monitoring of the pagoda were carried out in five sessions during a period ending in 2020. Point cloud data analysis was performed to obtain the current dimensions, the displacements, and the pagoda's leaning angle. The results revealed that the terrestrial laser scanner is a high-performance instrument offering efficient evaluation and long-term monitoring; however, in this study, permanent survey markers were also required as a benchmark for constraining each monitoring session. The 3D point cloud models could be matched with assumed model elements to evaluate the damaged shape and to determine the original form, and the significant elements of the inverted bell-shaped pagoda were investigated. Trimming the tree was found to decrease the leaning angle of the pagoda, and an equation was developed for predicting the leaning angle of the Wat Krachee pagoda for future preservation and restoration planning. Based on the results of this study, it is recommended that periodic monitoring continue in order to preserve Thai pagodas in their original forms.
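
As an illustration of how a leaning angle can be derived from point cloud data, the sketch below takes the centroids of horizontal slices of a scanned structure and measures how far the best-fit line through them tilts from the vertical. This is a generic, hypothetical procedure offered for the example only, not the analysis method used in the study; the slice count is an assumption.

import numpy as np

def leaning_angle_deg(points, n_slices=20):
    """points : (N, 3) point cloud of the structure (x, y, z in metres).
    Returns the tilt of the slice-centroid axis from vertical, in degrees."""
    z = points[:, 2]
    edges = np.linspace(z.min(), z.max(), n_slices + 1)
    centroids = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        sl = points[(z >= lo) & (z < hi)]
        if len(sl) > 0:
            centroids.append(sl.mean(axis=0))
    c = np.array(centroids)
    # direction of the best-fit line through the slice centroids (principal axis)
    d = np.linalg.svd(c - c.mean(axis=0))[2][0]
    cos_tilt = abs(d[2]) / np.linalg.norm(d)
    return float(np.degrees(np.arccos(cos_tilt)))

# Example: a synthetic column tilted about 2 degrees from vertical
rng = np.random.default_rng(3)
zs = rng.uniform(0, 20, 20000)
tilt = np.tan(np.radians(2.0))
col = np.column_stack([zs * tilt + rng.normal(0, 0.05, 20000),
                       rng.normal(0, 0.05, 20000), zs])
print(round(leaning_angle_deg(col), 2), "degrees")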


Symmetry ◽  
2020 ◽  
Vol 12 (12) ◽  
pp. 1955
Author(s):  
Emil Dumic ◽  
Luis A. da Silva Cruz

This paper presents a summary of recent progress in compression, subjective assessment, and objective quality measures of point cloud representations of three-dimensional visual information. Different existing point cloud datasets are reviewed, and the protocols that have been proposed to evaluate the subjective quality of point cloud data are discussed. Several geometry and attribute point cloud objective quality measures are also presented and described. A case study on the evaluation of the subjective quality of point clouds in two laboratories is presented: six original point clouds, degraded with G-PCC and V-PCC point cloud compression at five degradation levels, were subjectively evaluated, showing high inter-laboratory correlation. Furthermore, the performance of several geometry-based objective quality measures applied to the same data is described, concluding that the highest correlation with subjective scores is obtained using point-to-plane measures. Finally, several current challenges and future research directions in point cloud compression and quality evaluation are discussed.
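
For reference, one widely used formulation of a point-to-plane geometry measure is sketched below: each point of the degraded cloud is paired with its nearest reference point, the error vector is projected onto the reference normal, and the mean squared projection is reported. This common formulation is offered only as an illustration; it is not necessarily the exact measure evaluated in the study, and the function name is ours.

import numpy as np
from scipy.spatial import cKDTree

def point_to_plane_mse(degraded, reference, ref_normals):
    """degraded : (M, 3), reference : (N, 3), ref_normals : (N, 3) unit normals."""
    tree = cKDTree(reference)
    _, idx = tree.query(degraded)                        # nearest reference point
    err = degraded - reference[idx]                      # geometric error vectors
    proj = np.einsum('ij,ij->i', err, ref_normals[idx])  # signed point-to-plane distance
    return float(np.mean(proj ** 2))

# Example: a flat reference patch with injected noise; normals all point along +z
rng = np.random.default_rng(4)
ref = np.column_stack([rng.uniform(0, 1, 1000), rng.uniform(0, 1, 1000), np.zeros(1000)])
nrm = np.tile([0.0, 0.0, 1.0], (1000, 1))
deg = ref + rng.normal(0, 0.01, ref.shape)
print(point_to_plane_mse(deg, ref, nrm))                 # ~1e-4, the z-noise variance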


Author(s):  
G. Tucci ◽  
S. Rihal ◽  
M. Betti ◽  
A. Conti ◽  
L. Fiorini ◽  
...  

Abstract. The paper presents the case study of the survey of the sail vaults of the main building of the Forest Research Institute in Dehradun. The building was acquired with photogrammetric and laser scanner techniques during the Ground Based 3D Modelling (Photogrammetry and TLS) tutorial at the ISPRS TC V Mid-term Symposium held in Dehradun, India, in November 2018. The acquired data were then used for a structural evaluation of the masonry vaults. The 3D model, built from the point cloud data, was used in open-source finite element analysis software to develop a numerical model, and comparative analyses were carried out. The objective of the numerical analysis is to assess both the benefits of structural meshes generated directly from point cloud data and the structural behaviour of the masonry vaults.


Author(s):  
P. M. Mat Zam ◽  
N. A. Fuad ◽  
A. R. Yusoff ◽  
Z. Majid

Abstract. Terrestrial Laser Scanning (TLS) technology is gaining popularity in monitoring and predicting landslide movement because it captures data at high speed without requiring direct contact with the monitored surface. It provides very high-density point cloud data at high resolution and can be an effective tool for detecting surface movement in landslide areas. The aim of this research is to determine the optimal scanning resolution for landslide monitoring using TLS. A Topcon Geodetic Laser Scanner (GLS) 2000 was used to obtain three-dimensional (3D) point cloud data of the landslide area. Four scanning resolutions were used: very high, high, medium, and low. After data collection, the point cloud datasets underwent registration and filtering using ScanMaster software, and the registered datasets were then analysed using CloudCompare software. Based on the results, the accuracy of the TLS point cloud data, comparing manually picked points with those computed automatically by the ScanMaster software, showed maximum Root Mean Square (RMS) values of coordinate differences of 0.013 m at very high resolution, 0.017 m at high resolution, 0.031 m at medium resolution, and 0.052 m at low resolution. Meanwhile, the accuracy of the TLS point cloud data, comparing manually picked points with total station data obtained using the intersection method, showed maximum RMS values of coordinate differences of 0.013 m at very high resolution, 0.018 m at high resolution, 0.033 m at medium resolution, and 0.054 m at low resolution. Hence, it can be concluded that high or very high resolution is needed for landslide monitoring with the Topcon GLS-2000, as it provides more accurate slope results, while the low and medium resolutions are not suitable because the accuracy of the TLS point cloud data decreases as the scanning resolution is reduced.
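
The accuracy figures above are root mean square values of coordinate differences between corresponding check points. A minimal sketch of that computation is given below; the coordinates in the example are invented purely for illustration.

import numpy as np

def rms_coordinate_difference(a, b):
    """a, b : (N, 3) corresponding coordinates; returns the scalar RMS in metres."""
    diff = np.asarray(a, float) - np.asarray(b, float)
    return float(np.sqrt(np.mean(np.sum(diff ** 2, axis=1))))

# Example: two check points picked on the TLS cloud vs. total station coordinates
tls    = np.array([[10.012, 5.003, 2.001], [20.010, 7.996, 3.004]])
survey = np.array([[10.000, 5.000, 2.000], [20.000, 8.000, 3.000]])
print(round(rms_coordinate_difference(tls, survey), 3), "m")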

