Modeling and Processing of Smart Point Clouds of Cultural Relics with Complex Geometries

2021 ◽  
Vol 10 (9) ◽  
pp. 617
Author(s):  
Su Yang ◽  
Miaole Hou ◽  
Ahmed Shaker ◽  
Songnian Li

The digital documentation of cultural relics plays an important role in archiving, protection, and management. In the field of cultural heritage, three-dimensional (3D) point cloud data are effective at expressing the complex geometric structures and surface details of cultural relics, but lack semantic information. To elaborate the geometric information of cultural relics and add meaningful semantic information, we propose a modeling and processing method for smart point clouds of cultural relics with complex geometries. An information modeling framework for cultural relics with complex geometries was designed based on the concept of smart point clouds, in which 3D point cloud data are organized through the time dimension and across spatial scales representing different levels of geometric detail. The proposed model allows a smart point cloud, or any subset of it, to be linked with semantic information or related documents. As such, the framework not only expresses the complex geometric structure of a relic and the geometric details on its surface, but also carries rich semantic information and can be associated with supporting documents. A case study of the Dazu Thousand-Hand Bodhisattva Statue, which is characterized by a variety of complex geometries, shows that the proposed framework models and processes the statue with good applicability and extensibility. This work provides insights into the sustainable development of cultural heritage protection globally.
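A minimal sketch, not the authors' implementation, of how a "smart point cloud" record could organize subsets of a relic's points by acquisition time and spatial scale while linking them to semantic tags and documents. All class and field names here are hypothetical illustrations of the described concept.

```python
# Hypothetical data structure illustrating the smart point cloud concept:
# subsets of 3D points indexed by epoch and scale, linked to semantics/documents.
from dataclasses import dataclass, field
from typing import Dict, List
import numpy as np


@dataclass
class SmartPointCloudNode:
    """One subset of the relic's point cloud at a given scale and epoch."""
    points: np.ndarray                    # (N, 3) XYZ coordinates
    epoch: str                            # acquisition time, e.g. "2015-06"
    scale: int                            # 0 = coarsest, higher = finer detail
    semantics: Dict[str, str] = field(default_factory=dict)  # e.g. {"part": "hand"}
    documents: List[str] = field(default_factory=list)       # linked reports, images


class SmartPointCloud:
    """Container indexing nodes by epoch, scale and semantic tags."""
    def __init__(self):
        self.nodes: List[SmartPointCloudNode] = []

    def add(self, node: SmartPointCloudNode) -> None:
        self.nodes.append(node)

    def query(self, epoch=None, scale=None, **semantic_filters):
        """Return nodes matching the requested epoch, scale and semantic tags."""
        hits = []
        for n in self.nodes:
            if epoch is not None and n.epoch != epoch:
                continue
            if scale is not None and n.scale != scale:
                continue
            if any(n.semantics.get(k) != v for k, v in semantic_filters.items()):
                continue
            hits.append(n)
        return hits


if __name__ == "__main__":
    spc = SmartPointCloud()
    spc.add(SmartPointCloudNode(points=np.random.rand(1000, 3),
                                epoch="2015-06", scale=2,
                                semantics={"part": "hand", "material": "gilded"},
                                documents=["restoration_report_2015.pdf"]))
    print(len(spc.query(epoch="2015-06", part="hand")))  # -> 1
```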

Author(s):  
M. Samie Tootooni ◽  
Ashley Dsouza ◽  
Ryan Donovan ◽  
Prahalad K. Rao ◽  
Zhenyu (James) Kong ◽  
...  

This work proposes a novel approach for geometric integrity assessment of additively manufactured (AM, 3D printed) components, exemplified by acrylonitrile butadiene styrene (ABS) polymer parts made using the fused filament fabrication (FFF) process. The following two research questions are addressed: (1) what is the effect of FFF process parameters, specifically infill percentage (If) and extrusion temperature (Te), on the geometric integrity of ABS parts?; and (2) what approach is required to differentiate AM parts with respect to their geometric integrity based on sparse sampling from a large (∼ 2 million data points) laser-scanned point cloud dataset? To answer the first question, ABS parts are produced by varying the two FFF parameters, infill percentage (If) and extrusion temperature (Te), through design of experiments. Part geometric integrity is assessed with respect to key geometric dimensioning and tolerancing (GD&T) features, such as flatness, circularity, cylindricity, root mean square deviation, and in-tolerance percentage. These GD&T parameters are obtained by laser scanning of the FFF parts; concurrently, coordinate measurements of the part geometry in the form of 3D point cloud data are also acquired. Response surface statistical analysis of these experimental data showed that discrimination of geometric integrity between FFF parts based on GD&T parameters and process inputs alone was unsatisfactory (regression R2 < 50%), which directly motivates the second question. Accordingly, a data-driven analytical approach is proposed to classify the geometric integrity of FFF parts using a minimal number (< 2% of the total) of laser-scanned 3D point cloud data points. The approach uses spectral graph theoretic Laplacian eigenvalues extracted from the 3D point cloud data, in conjunction with a modeling framework called sparse representation, to classify FFF part quality contingent on geometric integrity. The practical outcome of this work is a method that can quickly classify part geometric integrity from minimal point cloud data with high classification fidelity (F-score > 95%), bypassing tedious coordinate measurement.
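A hedged sketch of the spectral-graph feature step described above: sample a small fraction of a laser-scanned point cloud, build a k-nearest-neighbour graph, and extract the smallest Laplacian eigenvalues as a compact signature. The sparse-representation classifier itself is not reproduced, and the parameter values (k, sample size, number of eigenvalues) are illustrative assumptions rather than the paper's settings.

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import laplacian


def laplacian_signature(points, sample_size=2000, k=10, n_eigs=20, seed=0):
    """Return the n_eigs smallest eigenvalues of the k-NN graph Laplacian."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(points), size=min(sample_size, len(points)), replace=False)
    sample = points[idx]

    # Build a symmetric k-NN adjacency matrix over the sampled points.
    tree = cKDTree(sample)
    _, nbrs = tree.query(sample, k=k + 1)          # first neighbour is the point itself
    rows = np.repeat(np.arange(len(sample)), k)
    cols = nbrs[:, 1:].ravel()
    adj = csr_matrix((np.ones(len(rows)), (rows, cols)), shape=(len(sample),) * 2)
    adj = adj.maximum(adj.T)                       # symmetrize

    # Smallest eigenvalues of the (unnormalized) graph Laplacian as a signature.
    lap = laplacian(adj).toarray()
    vals = np.linalg.eigvalsh(lap)
    return vals[:n_eigs]


if __name__ == "__main__":
    cloud = np.random.rand(200_000, 3)             # stand-in for a scanned part
    print(laplacian_signature(cloud)[:5])
```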


2020 ◽  
Vol 12 (11) ◽  
pp. 1800 ◽  
Author(s):  
Maarten Bassier ◽  
Maarten Vergauwen

The processing of remote sensing measurements into Building Information Modeling (BIM) is a popular subject in current literature. An important step in the process is the enrichment of the geometry with the topology of the wall observations to create a logical model. However, this remains an unsolved task, as existing methods struggle to deal with the noise, incompleteness and complexity of point cloud data of building scenes. Current methods impose severe abstractions, such as Manhattan-world assumptions and single-story procedures, to overcome these obstacles, and as a result a general data processing approach is still missing. In this paper, we propose a method that overcomes these shortcomings and creates a logical BIM model in an unsupervised manner. More specifically, we propose a connection evaluation framework that takes as input a set of preprocessed point clouds of a building’s wall observations and computes the best-fit topology between them. We transcend the current state of the art by processing point clouds of straight, curved and polyline-based walls. Also, we consider multiple connection types in a novel reasoning framework that decides which operations are best suited to reconstruct the topology of the walls. The geometry and topology produced by our method are directly usable by BIM processes, as they are structured in conformance with the IFC data structure. The experimental results on the Stanford 2D-3D-Semantics dataset (2D-3D-S) show that the proposed method is a promising framework for reconstructing complex multi-story wall elements in an unsupervised manner.
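A simplified sketch of the kind of connection reasoning described above, restricted to straight wall axes in 2D: for a pair of walls, the supporting lines are intersected and the candidate connection is labelled as an L- or T-junction or as unconnected. The thresholds and labels are illustrative assumptions; the paper's framework additionally handles curved and polyline-based walls.

```python
import numpy as np


def line_intersection(p1, p2, q1, q2):
    """Intersection of the infinite lines through segments (p1,p2) and (q1,q2)."""
    d1, d2 = p2 - p1, q2 - q1
    denom = d1[0] * d2[1] - d1[1] * d2[0]
    if abs(denom) < 1e-9:                      # parallel wall axes
        return None
    t = ((q1[0] - p1[0]) * d2[1] - (q1[1] - p1[1]) * d2[0]) / denom
    return p1 + t * d1


def param_on_segment(pt, a, b):
    """Normalized position of pt projected onto segment ab (0 = a, 1 = b)."""
    ab = b - a
    return float(np.dot(pt - a, ab) / np.dot(ab, ab))


def classify_connection(wall_a, wall_b, end_tol=0.15):
    """Label the topological relation between two straight wall axes."""
    x = line_intersection(*wall_a, *wall_b)
    if x is None:
        return "parallel / no connection"
    ta, tb = param_on_segment(x, *wall_a), param_on_segment(x, *wall_b)
    near_end_a = min(abs(ta), abs(ta - 1)) < end_tol
    near_end_b = min(abs(tb), abs(tb - 1)) < end_tol
    inside_a, inside_b = 0 <= ta <= 1, 0 <= tb <= 1
    if near_end_a and near_end_b:
        return "L-junction"
    if (near_end_a and inside_b) or (near_end_b and inside_a):
        return "T-junction"
    return "no connection"


if __name__ == "__main__":
    a = (np.array([0.0, 0.0]), np.array([4.0, 0.0]))
    b = (np.array([4.0, 0.0]), np.array([4.0, 3.0]))
    print(classify_connection(a, b))           # -> L-junction
```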


Author(s):  
M. Bassier ◽  
R. Klein ◽  
B. Van Genechten ◽  
M. Vergauwen

The automated reconstruction of Building Information Modeling (BIM) objects from point cloud data is still ongoing research. A key aspect is the creation of accurate wall geometry, as it forms the basis for further reconstruction of objects in a BIM. After segmenting and classifying the initial point cloud, the labelled segments are processed and the wall topology is reconstructed. However, the procedure is challenging due to noise, occlusions and the complexity of the input data.
In this work, a method is presented to automatically reconstruct consistent wall geometry from point clouds. More specifically, the use of room information is proposed to aid the wall topology creation. First, a set of partial walls is constructed based on classified planar primitives. Next, the rooms are identified using the retrieved wall information along with the floors and ceilings. The wall topology is computed by intersecting the partial walls conditioned on the room information. The final wall geometry is defined by creating IfcWallStandardCase objects conforming to the IFC4 standard. The result is a set of walls according to the as-built conditions of a building. The experiments prove that the proposed method is a reliable framework for wall reconstruction from unstructured point cloud data. Also, the use of room information reduces the rate of false positives in the wall topology. Given the walls, ceilings and floors, 94% of the rooms are correctly identified. A key advantage of the proposed method is that it deals with complex rooms and is not bound to single storeys.
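A minimal sketch of one geometric step in a pipeline like the one above: intersecting two partial wall planes that bound the same room to obtain the line along which their geometry could be trimmed. Room identification and IfcWallStandardCase export are omitted; the plane representation (unit normal n and offset d with n·x = d) and the room bookkeeping are assumptions for illustration only.

```python
import numpy as np


def plane_intersection(n1, d1, n2, d2):
    """Return a point on and the direction of the intersection line of two planes."""
    direction = np.cross(n1, n2)
    if np.linalg.norm(direction) < 1e-9:
        return None                      # parallel walls never intersect
    # Solve for a point lying on both planes; the third row fixes a unique point.
    A = np.vstack([n1, n2, direction])
    b = np.array([d1, d2, 0.0])
    point = np.linalg.solve(A, b)
    return point, direction / np.linalg.norm(direction)


def shared_room(walls_to_rooms, wall_a, wall_b):
    """Only walls bounding a common room are candidates for a topological joint."""
    return bool(walls_to_rooms[wall_a] & walls_to_rooms[wall_b])


if __name__ == "__main__":
    rooms = {"wall_1": {"room_A"}, "wall_2": {"room_A", "room_B"}}
    if shared_room(rooms, "wall_1", "wall_2"):
        # Two vertical walls meeting at a right angle (planes x = 2 and y = 5).
        print(plane_intersection(np.array([1.0, 0.0, 0.0]), 2.0,
                                 np.array([0.0, 1.0, 0.0]), 5.0))
```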


2010 ◽  
Vol 22 (2) ◽  
pp. 158-166 ◽  
Author(s):  
Taro Suzuki ◽  
Yoshiharu Amano ◽  
Takumi Hashizume

This paper describes outdoor localization for a mobile robot using a laser scanner and three-dimensional (3D) point cloud data. A Mobile Mapping System (MMS) measures outdoor 3D point clouds easily and precisely. The full six-dimensional state of the mobile robot is estimated by combining dead reckoning with the 3D point cloud data. Two-dimensional (2D) position and orientation are extended to 3D using the 3D point clouds, assuming that the mobile robot remains in continuous contact with the road surface. Our approach applies a particle filter to correct the position error, using a laser measurement model evaluated in 3D point cloud space. Field experiments were conducted to evaluate the accuracy of the proposed method, confirming that a localization precision of 0.2 m (RMS) is achievable.
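A hedged, simplified sketch of the particle-filter correction idea described above: particles representing a horizontal pose (x, y, yaw) are propagated by dead reckoning and weighted by how well the current laser scan, transformed into the map frame, matches a prior point cloud map. The reduction to a 3-DOF pose, the noise levels and the toy map are illustrative assumptions, not the paper's implementation.

```python
import numpy as np
from scipy.spatial import cKDTree


def predict(particles, d_dist, d_yaw, rng, sigma_d=0.05, sigma_yaw=0.01):
    """Propagate particles with the dead-reckoning increment plus noise."""
    yaw = particles[:, 2] + d_yaw + rng.normal(0, sigma_yaw, len(particles))
    dist = d_dist + rng.normal(0, sigma_d, len(particles))
    particles[:, 0] += dist * np.cos(yaw)
    particles[:, 1] += dist * np.sin(yaw)
    particles[:, 2] = yaw
    return particles


def update(particles, scan_xy, map_tree, sigma_hit=0.2):
    """Weight each particle by the likelihood of the scan given the map."""
    weights = np.empty(len(particles))
    for i, (x, y, yaw) in enumerate(particles):
        c, s = np.cos(yaw), np.sin(yaw)
        pts = scan_xy @ np.array([[c, -s], [s, c]]).T + np.array([x, y])
        d, _ = map_tree.query(pts)                     # nearest map point distances
        weights[i] = np.exp(-0.5 * np.mean(d ** 2) / sigma_hit ** 2)
    weights /= weights.sum()
    return weights


def resample(particles, weights, rng):
    """Multinomial resampling proportional to the weights."""
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx].copy()


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    map_xy = np.column_stack([np.linspace(0, 50, 5000), np.zeros(5000) + 2.0])
    tree = cKDTree(map_xy)                             # 2D slice of a 3D map
    particles = rng.normal([0, 0, 0], [0.5, 0.5, 0.05], (200, 3))
    scan = np.column_stack([np.linspace(-1, 1, 50), np.full(50, 2.0)])
    particles = predict(particles, d_dist=0.1, d_yaw=0.0, rng=rng)
    w = update(particles, scan, tree)
    particles = resample(particles, w, rng)
    print(particles.mean(axis=0))                      # pose estimate
```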


Author(s):  
M. Bassier ◽  
M. Vergauwen

Abstract. The automated reconstruction of Building Information Modeling (BIM) objects from point cloud data is still ongoing research. A key aspect is retrieving the proper observations for each object. After segmenting and classifying the initial point cloud, the labeled segments should be clustered according to their respective objects. However, this procedure is challenging due to noise, occlusions and the associativity between different objects. This is especially important for wall geometry, as it forms the basis for further BIM reconstruction.

In this work, a method is presented to automatically group wall segments derived from point clouds according to the proper walls of a building. More specifically, a Conditional Random Field is employed that evaluates the context of each observation in order to determine which wall it belongs to. The emphasis is on the clustering of highly associative walls, as this topic is currently a gap in the body of knowledge. First, a set of classified planar primitives is obtained using algorithms developed in prior work. Next, both local and contextual features are extracted based on the nearest neighbors and a number of heuristically determined seeds. The final wall clusters are then computed by decoding the graph, i.e. finding the most likely configuration of the observations. The experiments prove that the proposed method is a promising framework for wall clustering from unstructured point cloud data. Compared to a conventional region growing method, the proposed method significantly reduces the rate of false positives, resulting in better wall clusters. A key advantage of the proposed method is its capability of dealing with complex wall geometry in entire buildings, as opposed to the methods presented in current literature.
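A rough sketch of the feature side of an approach like the one above: for each planar wall segment, local descriptors and contextual descriptors relative to its nearest neighbours are computed. The CRF graph construction and decoding step is not reproduced; the segment attributes and feature choices are assumptions for illustration only.

```python
import numpy as np


def segment_features(centers, normals, k=3):
    """Local + contextual features per planar segment (verticality, coplanarity, proximity)."""
    n = len(centers)
    feats = []
    for i in range(n):
        d = np.linalg.norm(centers - centers[i], axis=1)
        nbrs = np.argsort(d)[1:k + 1]                  # k nearest other segments
        # Local feature: how vertical the segment is (walls are near-vertical).
        verticality = 1.0 - abs(normals[i] @ np.array([0.0, 0.0, 1.0]))
        # Contextual features: alignment and offset w.r.t. neighbouring segments.
        coplanarity = np.mean([abs(normals[i] @ normals[j]) for j in nbrs])
        mean_gap = float(np.mean(d[nbrs]))
        feats.append([verticality, coplanarity, mean_gap])
    return np.asarray(feats)


if __name__ == "__main__":
    rng = np.random.default_rng(1)
    centers = rng.uniform(0, 10, (6, 3))
    normals = np.tile([1.0, 0.0, 0.0], (6, 1))         # segments of one planar wall
    print(segment_features(centers, normals))
```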


2019 ◽  
Vol 11 (22) ◽  
pp. 2715 ◽  
Author(s):  
Chuyen Nguyen ◽  
Michael J. Starek ◽  
Philippe Tissot ◽  
James Gibeaut

Dense three-dimensional (3D) point cloud data sets generated by Terrestrial Laser Scanning (TLS) and Unmanned Aircraft System based Structure-from-Motion (UAS-SfM) photogrammetry have different characteristics and provide different representations of the underlying land cover. While there are differences, a common challenge associated with these technologies is how to best take advantage of these large data sets, often several hundred million points, to efficiently extract relevant information. Given their size and complexity, the data sets cannot be efficiently and consistently separated into homogeneous features without the use of automated segmentation algorithms. This research aims to evaluate the performance and generalizability of an unsupervised clustering method, originally developed for segmentation of TLS point cloud data in marshes, by extending it to UAS-SfM point clouds. Two sets of features are extracted from both datasets: “core” features that can be extracted from any 3D point cloud and “sensor specific” features unique to the imaging modality. Comparisons of segmented results based on producer’s and user’s accuracies allow for identifying the advantages and limitations of each dataset and determining the generalizability of the clustering method. The producer’s accuracies suggest that UAS-SfM (94.7%) better represents tidal flats, while TLS (99.5%) is slightly more suitable for vegetated areas. The user’s accuracies suggest that UAS-SfM outperforms TLS in vegetated areas, with 98.6% of the points identified as vegetation actually falling in vegetated areas, whereas TLS outperforms UAS-SfM in tidal flat areas with 99.2% user’s accuracy. Results demonstrate that the clustering method initially developed for TLS point cloud data transfers well to UAS-SfM point cloud data, enabling consistent and accurate segmentation of marsh land cover via an unsupervised method.
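A hedged sketch of the "core" feature plus unsupervised clustering idea: per-point covariance eigenvalue descriptors (linearity, planarity, sphericity) and relative height, computable from any 3D point cloud, are fed to k-means. Sensor-specific features (e.g. intensity or RGB) would be appended when available; the neighbourhood size, cluster count and toy data are illustrative assumptions, not the study's configuration.

```python
import numpy as np
from scipy.spatial import cKDTree
from sklearn.cluster import KMeans


def core_features(points, k=20):
    """Eigenvalue-based shape descriptors + relative height per point."""
    tree = cKDTree(points)
    _, nbrs = tree.query(points, k=k)
    z_min = points[:, 2].min()
    feats = np.empty((len(points), 4))
    for i, idx in enumerate(nbrs):
        ev = np.linalg.eigvalsh(np.cov(points[idx].T))  # ascending e1 <= e2 <= e3
        e1, e2, e3 = np.maximum(ev, 1e-12)
        feats[i] = [(e3 - e2) / e3,                     # linearity
                    (e2 - e1) / e3,                     # planarity
                    e1 / e3,                            # sphericity
                    points[i, 2] - z_min]               # height above minimum
    return feats


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    flat = np.column_stack([rng.uniform(0, 10, (500, 2)), rng.normal(0, 0.02, 500)])
    veg = rng.uniform([0, 0, 0], [10, 10, 1.5], (500, 3))
    cloud = np.vstack([flat, veg])                      # toy tidal flat + vegetation
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(core_features(cloud))
    print(np.bincount(labels))
```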


Author(s):  
R. Boerner ◽  
M. Kröhnert

3D point clouds, acquired by state-of-the-art terrestrial laser scanning (TLS) techniques, provide spatial information with accuracies of up to several millimetres. Unfortunately, common TLS data has no spectral information about the covered scene. However, the matching of TLS data with images is important for monoplotting purposes and point cloud colouration. Well-established methods solve this issue by matching close-range images with point cloud data, either by fitting optical camera systems on top of laser scanners or by using ground control points.

The approach addressed in this paper aims at matching 2D image and 3D point cloud data from a freely moving camera within an environment covered by a large 3D point cloud, e.g. a 3D city model. The key advantage of the free movement benefits augmented reality applications and real-time measurements. Therefore, a so-called real image, captured by a smartphone camera, has to be matched with a so-called synthetic image, which consists of 3D point cloud data reverse-projected to a synthetic projection centre whose exterior orientation parameters match those of the image, assuming an ideal distortion-free camera.
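A minimal sketch of the synthetic-image generation step described above: 3D points are transformed into a camera frame defined by assumed exterior orientation parameters (rotation R, projection centre C) and projected with an ideal distortion-free pinhole model into a depth image. The intrinsics, image size and toy cloud are placeholder values, not those of an actual smartphone camera.

```python
import numpy as np


def render_synthetic_image(points, R, C, f=1000.0, width=640, height=480):
    """Project 3D points into a depth image seen from the synthetic projection centre."""
    cam = (points - C) @ R.T                       # world -> camera coordinates
    in_front = cam[:, 2] > 0.1
    cam = cam[in_front]
    u = (f * cam[:, 0] / cam[:, 2] + width / 2).astype(int)
    v = (f * cam[:, 1] / cam[:, 2] + height / 2).astype(int)
    valid = (u >= 0) & (u < width) & (v >= 0) & (v < height)
    depth = np.full((height, width), np.inf)
    # Keep the nearest point per pixel (a crude z-buffer).
    for ui, vi, zi in zip(u[valid], v[valid], cam[valid, 2]):
        if zi < depth[vi, ui]:
            depth[vi, ui] = zi
    return depth


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    cloud = rng.uniform([-5, -5, 5], [5, 5, 15], (50_000, 3))   # toy city block
    depth = render_synthetic_image(cloud, R=np.eye(3), C=np.zeros(3))
    print(np.isfinite(depth).mean())               # fraction of filled pixels
```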


Author(s):  
C. Beil ◽  
T. Kutzner ◽  
B. Schwab ◽  
B. Willenborg ◽  
A. Gawronski ◽  
...  

Abstract. A range of different and increasingly accessible acquisition methods, the possibility of frequent data updates for large areas, and a simple data structure are some of the reasons for the popularity of three-dimensional (3D) point cloud data. While there are multiple techniques for segmenting and classifying point clouds, the capabilities of common data formats such as LAS for providing semantic information are mostly limited to assigning points to a certain category (classification). However, several fields of application, such as digital urban twins used for simulations and analyses, require more detailed semantic knowledge. This can be provided by semantic 3D city models containing hierarchically structured semantic and spatial information. Although semantic models are often reconstructed from point clouds, they are usually geometrically less accurate due to generalization processes. First, point cloud data structures and formats are discussed with respect to their semantic capabilities. Then, a new approach for integrating point clouds with semantic 3D city models is presented, combining the respective advantages of both data types. In addition to elaborate (and established) semantic concepts for several thematic areas, the new version 3.0 of the international Open Geospatial Consortium (OGC) standard CityGML also provides a PointCloud module. In this paper, a scheme is shown for how CityGML 3.0 can be used to provide semantic structures for point clouds (stored directly or in a separate LAS file). Methods and metrics to automatically assign points to corresponding Level of Detail (LoD) 2 or LoD3 models are presented. Subsequently, example datasets implementing these concepts are provided for download.
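A hedged sketch of one possible point-to-model assignment metric: each model surface (here given as a triangle with a surface id) is densely sampled, and every point of the cloud is attributed to the surface of its nearest sample if that sample lies within a distance tolerance. This is an illustrative stand-in for the assignment methods discussed in the paper, not their exact metric; the sampling density, tolerance and toy surfaces are assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree


def sample_triangle(tri, n=2000, rng=None):
    """Uniformly sample n points on a triangle given by its 3x3 vertex array."""
    rng = rng or np.random.default_rng(0)
    r1, r2 = rng.random(n), rng.random(n)
    swap = r1 + r2 > 1
    r1[swap], r2[swap] = 1 - r1[swap], 1 - r2[swap]
    a, b, c = tri
    return a + np.outer(r1, b - a) + np.outer(r2, c - a)


def assign_points(cloud, surfaces, tol=0.25):
    """Return, per cloud point, the id of the nearest model surface or -1 if none is close."""
    samples, ids = [], []
    for sid, tri in surfaces.items():
        s = sample_triangle(np.asarray(tri, dtype=float))
        samples.append(s)
        ids.extend([sid] * len(s))
    tree = cKDTree(np.vstack(samples))
    dist, idx = tree.query(cloud)
    ids = np.asarray(ids)
    return np.where(dist <= tol, ids[idx], -1)


if __name__ == "__main__":
    surfaces = {0: [[0, 0, 0], [10, 0, 0], [0, 0, 6]],    # a wall surface
                1: [[0, 0, 6], [10, 0, 6], [5, 5, 9]]}    # a roof surface
    cloud = np.array([[3.0, 0.02, 1.0], [4.0, 4.0, 0.0]])
    print(assign_points(cloud, surfaces))                 # expected: [0, -1]
```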


2020 ◽  
Vol 12 (14) ◽  
pp. 2224 ◽  
Author(s):  
Maarten Bassier ◽  
Maarten Vergauwen ◽  
Florent Poux

Interpreting 3D point cloud data of the interior and exterior of buildings is essential for automated navigation, interaction and 3D reconstruction. However, the direct exploitation of the geometry is challenging due to inherent obstacles such as noise, occlusions, sparsity or variance in the density. Alternatively, 3D mesh geometries derived from point clouds benefit from preprocessing routines that can surmount these obstacles and potentially result in more refined geometry and topology descriptions. In this article, we provide a rigorous comparison of both geometries for scene interpretation. We present an empirical study on the suitability of both geometries for feature extraction and classification. More specifically, we study the impact on the retrieval of structural building components in a realistic environment, which is a major endeavor in Building Information Modeling (BIM) reconstruction. The study relies on a segment-based structuration of both geometries and shows that both achieve recognition rates of over 75% F1 score when suitable features are used.
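A small sketch of the evaluation protocol implied above: the same classifier is trained on segment descriptors derived from the point cloud and, separately, from the mesh, and the two are compared by F1 score. The feature matrices here are synthetic placeholders; real descriptors would come from the segment geometry of each representation, and the classifier choice is an assumption.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split


def evaluate(features, labels, seed=0):
    """Train/test split, fit a random forest, return the weighted F1 score."""
    X_tr, X_te, y_tr, y_te = train_test_split(features, labels, test_size=0.3,
                                              random_state=seed, stratify=labels)
    clf = RandomForestClassifier(n_estimators=200, random_state=seed).fit(X_tr, y_tr)
    return f1_score(y_te, clf.predict(X_te), average="weighted")


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    labels = rng.integers(0, 4, 500)                 # e.g. wall / floor / ceiling / clutter
    point_feats = rng.normal(labels[:, None], 1.0, (500, 8))   # placeholder descriptors
    mesh_feats = rng.normal(labels[:, None], 0.8, (500, 8))
    print("point cloud F1:", round(evaluate(point_feats, labels), 3))
    print("mesh F1:", round(evaluate(mesh_feats, labels), 3))
```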

