COMPARATIVE ANALYSIS OF DATA STRUCTURES FOR STORING MASSIVE TINS IN A DBMS

Author(s):  
K. Kumar ◽  
H. Ledoux ◽  
J. Stoter

Point cloud data are an important source for 3D geoinformation. Modern 3D data acquisition and processing techniques such as airborne laser scanning and multi-beam echosounding generate billions of 3D points for an area of just a few square kilometres. With point clouds exceeding the billion mark even for small areas, there is a need for their efficient storage and management. These point clouds are sometimes associated with attributes and constraints as well. Storing billions of 3D points is currently possible, as confirmed by initial implementations such as Oracle Spatial SDO_PC and the PostgreSQL Point Cloud extension. But to analyse and extract useful information from point clouds, we need more than just points: we require the surface defined by these points in space. There are different ways to represent surfaces in GIS, including grids, TINs and boundary representations. In this study, we investigate database solutions for the storage and management of massive TINs. The classical (face- and edge-based) and compact (star-based) data structures are discussed at length with reference to their structure, advantages and limitations in handling massive triangulations, and are compared with the current solution of PostGIS Simple Feature. The main test dataset is the TIN generated from the third national elevation model of the Netherlands (AHN3), with a point density of over 10 points/m². The PostgreSQL/PostGIS DBMS is used for storing the generated TIN. The data structures are tested with the generated TIN models to account for their geometry, topology, storage, indexing, and loading time in a database. Our study is useful in identifying the limitations of existing data structures for storing massive TINs and what is required to optimise these structures for managing massive triangulations in a database.
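
To make the star-based idea concrete, here is a minimal, illustrative sketch (ours, not the paper's implementation): each vertex stores the ordered ring of its neighbours, its "star", so triangles are stored implicitly and reconstructed on demand; practical star-based structures typically add boundary handling and compression on top of this.

```python
# Illustrative star-based TIN encoding for a square split by one diagonal:
# two triangles (0,1,2) and (0,2,3). Boundary stars are open, CCW-ordered paths.
stars = {
    0: [1, 2, 3],
    1: [2, 0],
    2: [3, 0, 1],
    3: [0, 2],
}

def triangles_from_stars(stars):
    """Recover the unique triangles implicitly encoded by the vertex stars."""
    tris = set()
    for v, ring in stars.items():
        for a, b in zip(ring, ring[1:]):
            # each consecutive neighbour pair (a, b) in the star of v closes the
            # triangle (v, a, b); sorting deduplicates the three copies that the
            # triangle's three vertices each contribute
            tris.add(tuple(sorted((v, a, b))))
    return tris

print(triangles_from_stars(stars))   # {(0, 1, 2), (0, 2, 3)}
```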



2021 ◽  
Vol 13 (11) ◽  
pp. 2195
Author(s):  
Shiming Li ◽  
Xuming Ge ◽  
Shengfu Li ◽  
Bo Xu ◽  
Zhendong Wang

Today, mobile laser scanning (MLS) and oblique photogrammetry are two standard urban remote sensing acquisition methods, and the cross-source point-cloud data obtained using these methods have significant differences and complementarity. Accurate co-registration can make up for the limitations of a single data source, but many existing registration methods face critical challenges. Therefore, in this paper, we propose a systematic incremental registration method that can successfully register MLS and photogrammetric point clouds in the presence of large amounts of missing data, large variations in point density, and scale differences. The robustness of this method is due to its elimination of noise in the extracted linear features and its 2D incremental registration strategy. There are three main contributions of our work: (1) the development of an end-to-end automatic cross-source point-cloud registration method; (2) a way to effectively extract linear features and restore the scale; and (3) an incremental registration strategy that simplifies the complex registration process. The experimental results show that this method can successfully achieve cross-source data registration, while other methods have difficulty obtaining satisfactory registration results efficiently. Moreover, this method can be extended to more point-cloud sources.
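
The abstract does not give the authors' equations; as a hedged illustration of the scale-restoration step that a 2D registration with scale differences implies, the sketch below uses the standard Umeyama similarity estimate on matched 2D feature points (all names are ours, not the authors').

```python
import numpy as np

def similarity_2d(src, dst):
    """Least-squares similarity transform (Umeyama): dst ≈ s * R @ src + t.
    src, dst: (N, 2) arrays of matched 2D feature points."""
    n = len(src)
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    A, B = src - mu_s, dst - mu_d
    Sigma = B.T @ A / n                                 # cross-covariance of the matches
    U, S, Vt = np.linalg.svd(Sigma)
    D = np.diag([1.0, np.sign(np.linalg.det(U @ Vt))])  # guard against reflections
    R = U @ D @ Vt
    s = np.trace(np.diag(S) @ D) / (A ** 2).sum() * n   # recovered scale
    t = mu_d - s * R @ mu_s
    return s, R, t

# toy usage: recover a known 30-degree rotation, 0.5 scale, and shift
rng = np.random.default_rng(0)
src = rng.random((50, 2))
theta = np.radians(30)
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
dst = 0.5 * src @ R_true.T + np.array([2.0, -1.0])
print(similarity_2d(src, dst))
```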


Author(s):  
Y. Hori ◽  
T. Ogawa

The implementation of laser scanning in the field of archaeology provides us with an entirely new dimension in research and surveying. It allows us to digitally recreate individual objects, or entire cities, using millions of three-dimensional points grouped together into what are referred to as "point clouds". The visualization of the point cloud data, which can be used in the final report by archaeologists and architects, is usually produced as a JPG or TIFF file. Beyond visualization, the re-examination of older data and new surveys of Roman construction using remote-sensing technology for precise and detailed measurements yield new information; this can lead to revised drawings of ancient buildings that had previously been adduced as evidence without any consideration of their degree of accuracy, and ultimately supports new research on these buildings. We used laser scanners in the field because of their speed, comprehensive coverage, accuracy and flexibility of data manipulation. We therefore "skipped" much of the post-processing and focused on the images created from the metadata, simply aligned using a tool that extends an automatic feature-matching algorithm, and a popular renderer that can provide graphic results.


Author(s):  
Gülhan Benli

Since the 2000s, terrestrial laser scanning, as one of the methods used to document historical edifices in protected areas, has taken on greater importance because it mitigates the difficulties associated with working on large areas and saves time, while also making it possible to better understand all the particularities of the area. Through this technology, comprehensive point data (point clouds) about the surface of an object can be generated in a highly accurate three-dimensional manner. Furthermore, with the proper software, this three-dimensional point cloud data can be transformed into three-dimensional renderings/mappings/models and quantitative orthophotographs. In this chapter, the study presents the results of terrestrial laser scanning and surveying used to obtain three-dimensional point clouds through three-dimensional survey measurements and scans of street silhouettes in Fatih, on the Historic Peninsula in Istanbul, which were then transposed into survey images and drawings. The study also cites examples of facade mapping using terrestrial laser scanning data in the Istanbul Historic Peninsula Project.


2021 ◽  
Vol 13 (13) ◽  
pp. 2476
Author(s):  
Hiroshi Masuda ◽  
Yuichiro Hiraoka ◽  
Kazuto Saito ◽  
Shinsuke Eto ◽  
Michinari Matsushita ◽  
...  

With the use of terrestrial laser scanning (TLS) in forest stands, surveys can now obtain dense point cloud data. However, the data size, i.e., the number of points, often reaches the billions or higher, exceeding the random access memory (RAM) limits of common computers, and the processing time often extends beyond acceptable lengths. Thus, in this paper, we present a new method for efficiently extracting stem traits from huge point cloud data obtained by TLS, without subdividing or downsampling the point clouds. In this method, each point cloud is converted into a wireframe model by connecting neighboring points on the same continuous surface, and three-dimensional points on stems are resampled as cross-sectional points of the wireframe model in an out-of-core manner. Since the data size of the section points is much smaller than that of the original point clouds, stem traits can be calculated from the section points on a common computer. With this method, the stem traits of 1381 trees were calculated from 3.6 billion points in ~20 min on a common computer. To evaluate the accuracy of the method, eight targeted trees were cut down and sliced at 1-m intervals; the actual stem traits were then compared to those calculated from the point clouds. The experimental results showed that the efficiency and accuracy of the proposed method are sufficient for practical use in various fields, including forest management and forest research.
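
The stem traits reported (e.g., diameters along the stem) are computed from the resampled cross-section points; one standard way to obtain a diameter from such a slice, sketched here under our own assumptions rather than as the authors' exact procedure, is an algebraic (Kasa) circle fit.

```python
import numpy as np

def fit_stem_circle(xy):
    """Algebraic (Kasa) circle fit to one resampled cross-section.
    xy: (N, 2) slice points; returns centre (a, b) and radius r.
    Uses the linearisation x^2 + y^2 = 2ax + 2by + (r^2 - a^2 - b^2)."""
    x, y = xy[:, 0], xy[:, 1]
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    rhs = x ** 2 + y ** 2
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    r = np.sqrt(c + a ** 2 + b ** 2)
    return (a, b), r

# toy usage: noisy points sampled on a 0.25 m-diameter stem section
rng = np.random.default_rng(1)
ang = rng.uniform(0, 2 * np.pi, 200)
pts = np.column_stack([0.125 * np.cos(ang), 0.125 * np.sin(ang)])
pts += rng.normal(0, 0.003, (200, 2))
(cx, cy), r = fit_stem_circle(pts)
print(f"diameter ~ {2 * r:.3f} m")   # close to 0.25 m
```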


2021 ◽  
Vol 30 ◽  
pp. 126-130
Author(s):  
Jan Voříšek ◽  
Bořek Patzák ◽  
Edita Dvořáková ◽  
Daniel Rypl

Laser scanning is widely used in architecture and construction to document existing buildings, providing accurate data for creating a 3D model. The output is a set of data points in space, a so-called point cloud. While point clouds can be directly rendered and inspected, they do not carry any semantics. Typically, engineers manually derive floor plans, structural models, or the whole BIM model, which is a very time-consuming task for large building projects. In this contribution, we present the design and concept of the PointCloud2BIM library [1]. It provides a set of algorithms for automated or user-assisted detection of fundamental entities in scanned point cloud data sets, such as floors, rooms, walls, and openings, and for identification of the mutual relationships between them. The entity detection relies on a reasonable degree of human input (e.g., the expected wall thickness). The results reside in a platform-agnostic JSON database, allowing future integration into any existing BIM software.
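
The library's internal API is not described in the abstract; as a sketch of the generic building block that wall and floor detection of this kind typically rests on, here is a plain RANSAC plane fit whose distance tolerance plays the role of the expected wall thickness (function and parameter names are hypothetical, not PointCloud2BIM's API).

```python
import numpy as np

def ransac_plane(points, n_iter=500, dist_tol=0.02, rng=None):
    """Detect the dominant plane (e.g., a wall or floor) in an (N, 3) cloud.
    Returns the unit normal n, offset d (n . p = d), and the inlier mask;
    dist_tol is half the expected wall thickness, in the cloud's units."""
    rng = rng or np.random.default_rng()
    best_inliers, best = None, (None, None)
    for _ in range(n_iter):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(n)
        if norm < 1e-9:                 # degenerate (collinear) sample, retry
            continue
        n = n / norm
        d = n @ p0
        inliers = np.abs(points @ n - d) < dist_tol
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers, best = inliers, (n, d)
    return best[0], best[1], best_inliers
```

Iterating the fit on the points left after removing each detected plane's inliers yields the full set of candidate walls and floors.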


Author(s):  
Hoang Long Nguyen ◽  
David Belton ◽  
Petra Helmholz

The demand for accurate spatial data has been increasing rapidly in recent years. Mobile laser scanning (MLS) systems have become a mainstream technology for measuring 3D spatial data. In an MLS point cloud, the point density of the captured features can vary: it can be sparse and heterogeneous, or it can be dense. This is caused by several factors, such as the speed of the carrier vehicle and the specifications of the laser scanner(s). MLS point cloud data need to be processed to obtain meaningful information; for example, segmentation can be used to find meaningful features (planes, corners, etc.) that serve as inputs for many processing steps (e.g., registration, modelling) that are more difficult when using just the raw point cloud. Planar features dominate man-made environments and are widely used in point cloud registration and calibration processes. Several approaches for the segmentation and extraction of planar objects are available; however, the existing methods do not focus on properly segmenting MLS point clouds automatically while accounting for the varying point densities. This research presents an extension of a segmentation method based on the planarity of features. The proposed method was verified using both simulated and real MLS point cloud datasets. The results show that planar objects in MLS point clouds can be properly segmented and extracted by the proposed segmentation method.
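
The paper's exact planarity criterion is not reproduced in the abstract; a common measure it plausibly builds on derives per-point planarity from the eigenvalues of the local covariance, with k-nearest neighbourhoods (rather than a fixed radius) adapting naturally to the varying point densities the method targets.

```python
import numpy as np
from scipy.spatial import cKDTree

def planarity(points, k=20):
    """Per-point planarity from the local covariance eigenvalues
    l1 >= l2 >= l3: planarity = (l2 - l3) / l1, near 1 on flat patches.
    k-nearest neighbourhoods shrink/grow with the local point density."""
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k)
    out = np.empty(len(points))
    for i, nbrs in enumerate(idx):
        P = points[nbrs] - points[nbrs].mean(axis=0)
        l = np.linalg.eigvalsh(P.T @ P / k)[::-1]   # descending eigenvalues
        out[i] = (l[1] - l[2]) / max(l[0], 1e-12)
    return out
```

Thresholding this score and region-growing over high-planarity points is one straightforward route from the measure to segmented planar objects.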


2019 ◽  
Vol 11 (7) ◽  
pp. 836 ◽  
Author(s):  
Erzhuo Che ◽  
Michael Olsen

Mobile laser scanning (MLS, or mobile lidar) is a 3D data acquisition technique that has been widely used in a variety of applications in recent years due to its high accuracy and efficiency. However, given the large data volume and the complexity of the point clouds, processing MLS data remains challenging with respect to effectiveness, efficiency, and versatility. This paper proposes an efficient, general-purpose MLS data processing framework consisting of three main steps: trajectory reconstruction, scan pattern grid generation, and Mo-norvana (Mobile Normal Variation Analysis) segmentation. We present a novel approach to reconstructing the scanner trajectory, which can then be used to structure the point cloud data into a scan pattern grid. By exploiting the scan pattern grid, point cloud segmentation can be performed using Mo-norvana, which builds on our previous work on normal variation analysis (Norvana) for processing Terrestrial Laser Scanning (TLS) data. With an unorganized MLS point cloud as input, the proposed framework can complete various tasks desired in many applications, including trajectory reconstruction, data structuring, data visualization, edge detection, feature extraction, normal estimation, and segmentation. The performance of the proposed procedures is evaluated experimentally, both qualitatively and quantitatively, on multiple MLS datasets via the results of trajectory reconstruction, visualization, and segmentation. The proposed method is demonstrated to handle large datasets stably and quickly (about 1 million pts/sec with 8 threads) by taking advantage of parallel programming.
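
Mo-norvana itself is specified in the cited papers; the toy sketch below only illustrates the underlying idea of normal variation analysis on a scan-pattern grid: flag a cell as a candidate edge where its normal diverges sharply from a grid neighbour's (layout and threshold are our assumptions, not the authors' values).

```python
import numpy as np

def normal_variation_edges(normals, grid_shape, angle_thresh_deg=30.0):
    """Given unit normals organised on a scan-pattern grid, flag cells whose
    normal deviates sharply from the right or lower neighbour -- a crude
    stand-in for Norvana-style edge detection on structured scan data."""
    H, W = grid_shape
    N = normals.reshape(H, W, 3)
    edges = np.zeros((H, W), dtype=bool)
    cos_t = np.cos(np.radians(angle_thresh_deg))
    # dot products with the right and lower grid neighbours; a small dot
    # product means a large angle between the normals, i.e. a sharp edge
    edges[:, :-1] |= np.einsum('ijk,ijk->ij', N[:, :-1], N[:, 1:]) < cos_t
    edges[:-1, :] |= np.einsum('ijk,ijk->ij', N[:-1, :], N[1:, :]) < cos_t
    return edges
```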


2019 ◽  
Vol 8 (10) ◽  
pp. 460
Author(s):  
Gracchi ◽  
Gigli ◽  
Noël ◽  
Jaboyedoff ◽  
Madiai ◽  
...  

In this paper, a MATLAB tool for automatically detecting the best locations at which to install a wireless sensor network (WSN) is presented. The implemented code works directly on high-resolution 3D point clouds and aims to help position sensors that are part of a network requiring inter-visibility, namely, a clear line of sight (LOS). Indeed, with the development of LiDAR and Structure from Motion technologies, it is now possible to perform visibility analyses directly on 3D point cloud data, bypassing many disadvantages of traditional modelling and analysis methods. The algorithm determines the optimal deployment of devices following two main criteria: inter-visibility (using a modified version of the Hidden Point Removal operator) and inter-distance. Furthermore, an option to prioritize significant areas is provided. The proposed method was first validated on an artificial 3D model, and then on a landslide 3D point cloud acquired by terrestrial laser scanning for the real positioning of an ultrawide-band WSN installed in 2016. The comparison with data acquired by a WSN installed following traditional patterns demonstrates the method's ability to optimally deploy a WSN requiring inter-visibility.
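
The tool uses a modified Hidden Point Removal (HPR) operator; the modification is not detailed here, but the plain HPR operator of Katz et al. (2007), spherical flipping followed by a convex hull, can be sketched compactly (assuming the viewpoint does not coincide with any cloud point).

```python
import numpy as np
from scipy.spatial import ConvexHull

def hpr_visible(points, viewpoint, radius_factor=100.0):
    """Indices of the points of an (N, 3) cloud visible from `viewpoint`,
    via Hidden Point Removal: flip each point across a large sphere centred
    on the viewpoint, then keep the points landing on the convex hull."""
    P = points - viewpoint                       # cloud in viewpoint coordinates
    norms = np.linalg.norm(P, axis=1, keepdims=True)
    R = norms.max() * radius_factor              # flipping-sphere radius
    flipped = P + 2 * (R - norms) * P / norms    # spherical flipping
    hull = ConvexHull(np.vstack([flipped, [[0.0, 0.0, 0.0]]]))
    visible = np.unique(hull.vertices)
    return visible[visible < len(points)]        # drop the appended viewpoint
```

Running this from each candidate sensor location toward the others gives the pairwise LOS matrix that an inter-visibility placement criterion needs.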


Sensors ◽  
2019 ◽  
Vol 19 (16) ◽  
pp. 3466 ◽  
Author(s):  
Balado ◽  
Martínez-Sánchez ◽  
Arias ◽  
Novo

In the near future, communication between autonomous cars will produce a network of sensors that will allow us to know the state of the roads in real time. Lidar technology, on which most autonomous cars rely, allows the acquisition of 3D geometric information of the environment. The objective of this work is to use point clouds acquired by Mobile Laser Scanning (MLS) to segment the main elements of the road environment (road surface, ditches, guardrails, fences, embankments, and borders) using PointNet. Beforehand, the point cloud is automatically divided into sections so that the semantic segmentation scales to different case studies, regardless of their shape or length. An overall accuracy of 92.5% has been obtained, but with large variations between classes: elements with a greater number of points were segmented more effectively than the others. In comparison with other point-wise extraction and ANN-based classification techniques, the same success rates were obtained for road surfaces and fences, and better results for guardrails. Semantic segmentation with PointNet is suitable when segmenting the scene as a whole; however, if certain classes are of particular interest, there are other alternatives that do not require a high training cost.
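
How the authors divide the cloud into sections is not specified in the abstract; one simple scheme that achieves the stated independence from scene shape and length, offered only as an illustrative assumption, is to cut fixed-length sections along the cloud's first principal axis.

```python
import numpy as np

def split_into_sections(points, section_len=20.0):
    """Split an (N, 3) road point cloud into fixed-length sections along its
    first principal axis, so each PointNet input covers a comparable road
    stretch. Returns one index array per section (illustrative sketch)."""
    centered = points - points.mean(axis=0)
    # the first principal direction approximates the road's driving direction
    _, _, Vt = np.linalg.svd(centered, full_matrices=False)
    s = centered @ Vt[0]                        # 1-D coordinate along the road
    bins = np.floor((s - s.min()) / section_len).astype(int)
    return [np.where(bins == b)[0] for b in np.unique(bins)]
```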

