Dense 3D displacement vector fields for point cloud-based landslide monitoring

Landslides ◽  
2021 ◽  
Author(s):  
Zan Gojcic ◽  
Lorenz Schmid ◽  
Andreas Wieser

Abstract. We propose a novel fully automated deformation analysis pipeline capable of estimating real 3D displacement vectors from point cloud data. Different from the traditional methods that establish displacements based on the proximity in the Euclidean space, our approach estimates dense 3D displacement vector fields by searching for corresponding points across the epochs in the space of 3D local feature descriptors. Due to this formulation, our method is also sensitive to motion and deformations that occur parallel to the underlying surface. By enabling efficient parallel processing, the proposed method can be applied to point clouds of arbitrary size. We compare our approach to the traditional methods on point cloud data of two landslides and show that while the traditional methods often underestimate the displacements, our method correctly estimates full 3D displacement vectors.
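
The core idea of searching for correspondences in descriptor space rather than Euclidean space can be sketched as follows; the descriptors here are random stand-ins (the paper uses learned 3D local feature descriptors), and all names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Epoch 1: surface points; epoch 2: the same points shifted parallel to the surface.
pts_t0 = rng.uniform(0.0, 10.0, size=(100, 3))
displacement = np.array([0.5, 0.0, 0.0])   # motion a proximity-based method would miss
pts_t1 = pts_t0 + displacement

# Hypothetical per-point descriptors; identical across epochs so that each point
# keeps its signature, which is what a good local feature descriptor approximates.
feat_t0 = rng.normal(size=(100, 32))
feat_t1 = feat_t0.copy()

# Nearest-neighbour search in descriptor space, not in XYZ space.
dists = np.linalg.norm(feat_t0[:, None, :] - feat_t1[None, :, :], axis=2)
idx = dists.argmin(axis=1)

# Displacement vectors follow from the matched pairs.
vectors = pts_t1[idx] - pts_t0
print(vectors.mean(axis=0))
```

A Euclidean nearest-neighbour search on `pts_t0` and `pts_t1` would match each point to its own shifted neighbourhood and underestimate the surface-parallel motion, which is exactly the failure mode the abstract describes.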

Author(s):  
P. M. Mat Zam ◽  
N. A. Fuad ◽  
A. R. Yusoff ◽  
Z. Majid

Abstract. Nowadays, Terrestrial Laser Scanning (TLS) technology is gaining popularity in monitoring and predicting landslide movement because it captures data at high speed without requiring direct contact with the monitored surface. It delivers very dense, high-resolution point cloud data and can be an effective tool for detecting surface movement in a landslide area. The aim of this research is to determine the optimal scanning resolution for landslide monitoring using TLS. The Topcon Geodetic Laser Scanner (GLS) 2000 was used in this research to obtain three-dimensional (3D) point cloud data of the landslide area. Four scanning resolutions were used: very high, high, medium, and low. After data collection, the point cloud datasets underwent registration and filtering in ScanMaster software, and the registered datasets were then analyzed in CloudCompare software. Based on the results obtained, comparing manually picked points with those computed automatically by ScanMaster, the maximum Root Mean Square (RMS) values of the coordinate differences were 0.013 m at very high resolution, 0.017 m at high resolution, 0.031 m at medium resolution, and 0.052 m at low resolution. Meanwhile, comparing manually picked points with total station data obtained by the intersection method, the maximum RMS values of the coordinate differences were 0.013 m at very high resolution, 0.018 m at high resolution, 0.033 m at medium resolution, and 0.054 m at low resolution. Hence, it can be concluded that high or very high resolution is needed for landslide monitoring with the Topcon GLS-2000, as it provides more accurate slope data, while the low and medium resolutions are not suitable for landslide monitoring because the accuracy of the TLS point cloud data decreases as the scanning resolution decreases.
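
The reported accuracy figures are RMS values of coordinate differences between check points; a minimal sketch of that computation (the coordinates below are made up for illustration):

```python
import numpy as np

# Hypothetical check-point coordinates: TLS-derived vs. total-station reference (metres).
tls = np.array([[10.013, 5.002, 2.001],
                [12.505, 6.498, 2.250],
                [15.001, 8.003, 2.499]])
ref = np.array([[10.000, 5.000, 2.000],
                [12.500, 6.500, 2.250],
                [15.000, 8.000, 2.500]])

diff = tls - ref
# RMS per coordinate axis, as reported per resolution level in the study.
rms = np.sqrt((diff ** 2).mean(axis=0))
print(rms)
```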



Sensors ◽  
2021 ◽  
Vol 21 (3) ◽  
pp. 884
Author(s):  
Chia-Ming Tsai ◽  
Yi-Horng Lai ◽  
Yung-Da Sun ◽  
Yu-Jen Chung ◽  
Jau-Woei Perng

Numerous sensors can obtain images or point cloud data on land; in water, however, the rapid attenuation of electromagnetic signals and the lack of light restrict sensing functions. This study extends two- and three-dimensional detection technologies to an underwater application: detecting abandoned tires. A three-dimensional acoustic sensor, the BV5000, is used in this study to collect underwater point cloud data. Several pre-processing steps are proposed to remove noise and the seabed from the raw data. The point clouds are then processed into two data types: a 2D image and a 3D point cloud. Deep learning methods of different dimensions are used to train the models. In the two-dimensional method, the point cloud is transformed into a bird's-eye-view image, and the Faster R-CNN and YOLOv3 network architectures are used to detect tires. In the three-dimensional method, the point cloud associated with a tire is cut out from the raw data and used as training data, and the PointNet and PointConv network architectures are used for tire classification. The results show that both approaches provide good accuracy.
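
The 2D branch rasterizes the point cloud into a bird's-eye-view image before detection; a minimal sketch of such a projection, with ranges and resolution chosen arbitrarily and synthetic points standing in for the sonar data:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical sonar point cloud after noise/seabed removal (x, y, z in metres).
points = rng.uniform([-5, -5, 0], [5, 5, 2], size=(1000, 3))

def birds_eye_view(pts, x_range=(-5, 5), y_range=(-5, 5), res=0.1):
    """Rasterize points into a top-down occupancy image (single channel)."""
    w = int((x_range[1] - x_range[0]) / res)
    h = int((y_range[1] - y_range[0]) / res)
    ix = ((pts[:, 0] - x_range[0]) / res).astype(int).clip(0, w - 1)
    iy = ((pts[:, 1] - y_range[0]) / res).astype(int).clip(0, h - 1)
    img = np.zeros((h, w), dtype=np.uint8)
    img[iy, ix] = 255          # mark occupied cells
    return img

bev = birds_eye_view(points)
print(bev.shape, bev.max())
```

An image like `bev` (optionally with extra channels for height or intensity) is what a 2D detector such as Faster R-CNN or YOLOv3 would consume.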


Author(s):  
Y. Hori ◽  
T. Ogawa

The implementation of laser scanning in the field of archaeology provides an entirely new dimension in research and surveying. It allows us to digitally recreate individual objects, or entire cities, using millions of three-dimensional points grouped together in what are referred to as "point clouds". The visualization of the point cloud data, which can be used in the final report by archaeologists and architects, is usually produced as a JPG or TIFF file. Beyond visualization, the re-examination of older data and new surveys of Roman building construction using remote-sensing technology for precise and detailed measurements yield new information that may lead to revised drawings of ancient buildings, which had previously been adduced as evidence without any consideration of their degree of accuracy, and can ultimately open new lines of research on ancient buildings. We used laser scanners in the field because of their speed, comprehensive coverage, accuracy, and flexibility of data manipulation. We therefore skipped much of the post-processing and focused on images created from the metadata, aligned simply with a tool that extends an automatic feature-matching algorithm and rendered with a popular renderer that provides graphic results.


2021 ◽  
Vol 10 (9) ◽  
pp. 617
Author(s):  
Su Yang ◽  
Miaole Hou ◽  
Ahmed Shaker ◽  
Songnian Li

The digital documentation of cultural relics plays an important role in archiving, protection, and management. In the field of cultural heritage, three-dimensional (3D) point cloud data are effective at expressing complex geometric structures and geometric details on the surface of cultural relics, but lack semantic information. To elaborate the geometric information of cultural relics and add meaningful semantic information, we propose a modeling and processing method for smart point clouds of cultural relics with complex geometries. An information modeling framework for geometrically complex cultural relics was designed based on the concept of smart point clouds, in which 3D point cloud data are organized along the time dimension and across spatial scales that represent different levels of geometric detail. The proposed model allows smart point clouds, or a subset of them, to be linked with semantic information or related documents, so the framework can express both the complex geometric structure of cultural relics, including the geometric details on the surface, and rich, document-linked semantics. A case study of the Dazu Thousand-Hand Bodhisattva Statue, which is characterized by a variety of complex geometries, shows that the proposed framework models and processes the statue with excellent applicability and expansibility. This work provides insights into the sustainable development of cultural heritage protection globally.
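
One way to picture the smart point cloud idea is a record that links a point subset to a spatial scale, an epoch, semantics, and related documents. This sketch is our own illustration, not the paper's actual data model; every field name and value here is hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class SmartPointSubset:
    """A subset of a point cloud linked to semantics (all names illustrative)."""
    point_ids: list            # indices into the full point cloud
    scale: str                 # spatial scale, e.g. "statue", "hand", "finger"
    epoch: str                 # time dimension of the acquisition
    semantics: dict = field(default_factory=dict)
    documents: list = field(default_factory=list)

# A hypothetical subset covering one hand of the statue.
hand = SmartPointSubset(
    point_ids=list(range(1000, 2000)),
    scale="hand",
    epoch="2015-survey",
    semantics={"material": "gilded stone", "condition": "restored"},
    documents=["restoration_report_2015.pdf"],
)
print(hand.semantics["condition"])
```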


2021 ◽  
Vol 65 (1) ◽  
pp. 10501-1-10501-9
Author(s):  
Jiayong Yu ◽  
Longchen Ma ◽  
Maoyi Tian ◽  
Xiushan Lu

Abstract. The unmanned aerial vehicle (UAV)-mounted mobile LiDAR system (ULS) is widely used in geomatics owing to its efficient data acquisition and convenient operation. However, due to the limited carrying capacity of a UAV, the sensors integrated in a ULS must be small and lightweight, which reduces the density of the collected scanning points and hinders registration between image data and point cloud data. To address this issue, the authors propose a method for registering and fusing ULS sequence images and laser point clouds that converts the problem of registering point cloud data and image data into one of matching feature points between two images. First, a point cloud is selected to produce an intensity image. Subsequently, the corresponding feature points of the intensity image and the optical image are matched, and the exterior orientation parameters are solved using a collinearity equation based on image position and orientation. Finally, the sequence images are fused with the laser point cloud, based on the Global Navigation Satellite System (GNSS) time index of the optical image, to generate a true-color point cloud. The experimental results show that the proposed method achieves higher registration accuracy and fusion speed, demonstrating its accuracy and effectiveness.
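
The first step, producing an intensity image from the point cloud, can be sketched as a rasterization that averages the recorded laser intensity per grid cell; the grid extent and resolution here are arbitrary and the data synthetic, and 2D feature matching against the optical image would follow on the resulting raster:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical ULS points: planimetric x, y (metres) and recorded laser intensity.
xy = rng.uniform(0.0, 50.0, size=(5000, 2))
intensity = rng.uniform(0.0, 1.0, size=5000)

def intensity_image(xy, inten, res=0.5, extent=50.0):
    """Average laser intensity per cell to form a grayscale raster; a 2D
    feature detector can then match it against the optical image."""
    n = int(extent / res)
    acc = np.zeros((n, n), dtype=np.float64)
    cnt = np.zeros((n, n), dtype=np.int64)
    ix = (xy[:, 0] / res).astype(int).clip(0, n - 1)
    iy = (xy[:, 1] / res).astype(int).clip(0, n - 1)
    np.add.at(acc, (iy, ix), inten)    # unbuffered accumulation per cell
    np.add.at(cnt, (iy, ix), 1)
    return np.where(cnt > 0, acc / np.maximum(cnt, 1), 0.0)

img = intensity_image(xy, intensity)
print(img.shape)
```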


2020 ◽  
Vol 9 (2) ◽  
pp. 72 ◽  
Author(s):  
Sami El-Mahgary ◽  
Juho-Pekka Virtanen ◽  
Hannu Hyyppä

The importance of being able to separate the semantics from the actual (X, Y, Z) coordinates in a point cloud has been actively brought up in recent research. However, there is still no widely used or accepted data layout paradigm for efficiently storing and managing such semantic point cloud data. In this paper, we present a simple data layout that makes use of the semantics and allows for quick queries. The underlying idea is especially suited to a programming approach (e.g., queries programmed via Python), but we also present an even simpler implementation of the underlying technique on a well-known relational database management system (RDBMS), namely PostgreSQL. The obtained query results suggest that the presented approach can successfully handle point and range queries on large point clouds.
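
The gist of such a layout, keeping semantics in a column separate from the coordinates so a query can filter on class before touching XYZ, can be sketched in plain NumPy (the paper's implementations use Python and PostgreSQL; the data and class codes here are synthetic):

```python
import numpy as np

rng = np.random.default_rng(3)

# Coordinates and semantics stored as separate columns: a query filters on the
# class column first and reads XYZ only for the surviving rows.
xyz = rng.uniform(0.0, 100.0, size=(10000, 3))
classes = rng.integers(0, 5, size=10000)    # hypothetical semantic class codes

def range_query(cls, lo, hi):
    """Return points of a given semantic class inside an axis-aligned box."""
    mask = classes == cls
    box = np.all((xyz >= lo) & (xyz <= hi), axis=1)
    return xyz[mask & box]

hits = range_query(2, lo=[10, 10, 10], hi=[40, 40, 40])
print(len(hits))
```

In the RDBMS variant the same filter would be a `WHERE` clause over a class column and three coordinate columns.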


2020 ◽  
Vol 12 (11) ◽  
pp. 1800 ◽  
Author(s):  
Maarten Bassier ◽  
Maarten Vergauwen

The processing of remote sensing measurements into Building Information Modeling (BIM) is a popular subject in current literature. An important step in the process is the enrichment of the geometry with the topology of the wall observations to create a logical model. However, this remains an unsolved task, as methods struggle to deal with the noise, incompleteness, and complexity of point cloud data of building scenes. Current methods impose severe abstractions, such as Manhattan-world assumptions and single-story procedures, to overcome these obstacles, so a general data processing approach is still missing. In this paper, we propose a method that overcomes these shortcomings and creates a logical BIM model in an unsupervised manner. More specifically, we propose a connection evaluation framework that takes as input a set of preprocessed point clouds of a building's wall observations and computes the best-fit topology between them. We transcend the current state of the art by processing point clouds of straight, curved, and polyline-based walls. We also consider multiple connection types in a novel reasoning framework that decides which operations are best suited to reconstruct the topology of the walls. The geometry and topology produced by our method are directly usable by BIM processes, as they are structured in conformance with the IFC data structure. Experimental results on the Stanford 2D-3D-Semantics dataset (2D-3D-S) show that the proposed method is a promising framework for reconstructing complex multi-story wall elements in an unsupervised manner.


Author(s):  
M. Bassier ◽  
R. Klein ◽  
B. Van Genechten ◽  
M. Vergauwen

The automated reconstruction of Building Information Modeling (BIM) objects from point cloud data is still ongoing research. A key aspect is the creation of accurate wall geometry, as it forms the basis for the further reconstruction of objects in a BIM. After segmenting and classifying the initial point cloud, the labelled segments are processed and the wall topology is reconstructed. However, the procedure is challenging due to noise, occlusions, and the complexity of the input data.
In this work, a method is presented to automatically reconstruct consistent wall geometry from point clouds. More specifically, the use of room information is proposed to aid the wall topology creation. First, a set of partial walls is constructed based on classified planar primitives. Next, the rooms are identified using the retrieved wall information along with the floors and ceilings. The wall topology is computed by intersecting the partial walls, conditioned on the room information. The final wall geometry is defined by creating IfcWallStandardCase objects conforming to the IFC4 standard. The result is a set of walls matching the as-built conditions of the building. The experiments show that the method is a reliable framework for wall reconstruction from unstructured point cloud data. Moreover, the use of room information reduces the rate of false positives in the wall topology. Given the walls, ceilings, and floors, 94% of the rooms are correctly identified. A key advantage of the proposed method is that it deals with complex rooms and is not bound to single storeys.
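
In plan view, the topology computation reduces to intersecting the axes of partial walls at candidate corners; a minimal 2D sketch of that geometric step (not the paper's full room-conditioned procedure, and the coordinates are made up):

```python
# Two partial wall axes in plan (2D), each given as a point plus a direction.
# Their intersection fixes the corner where the wall topology connects them,
# analogous to intersecting walls derived from classified planar primitives.
def intersect_lines(p1, d1, p2, d2):
    """Intersection of two 2D lines (point + direction); assumes non-parallel."""
    # Solve p1 + t*d1 == p2 + s*d2 for t via the 2D cross product.
    denom = d1[0] * d2[1] - d1[1] * d2[0]
    t = ((p2[0] - p1[0]) * d2[1] - (p2[1] - p1[1]) * d2[0]) / denom
    return (p1[0] + t * d1[0], p1[1] + t * d1[1])

# A wall running along x and a wall running along y meet at (4.0, 2.5).
corner = intersect_lines((0.0, 2.5), (1.0, 0.0), (4.0, 0.0), (0.0, 1.0))
print(corner)   # (4.0, 2.5)
```

The room information in the paper then decides which of these candidate intersections are kept as actual wall connections.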


Author(s):  
F. Poux ◽  
R. Neuville ◽  
P. Hallot ◽  
R. Billen

This paper proposes an interoperable model for managing high-dimensional point clouds while integrating semantics. Point clouds from sensors are a direct source of information physically describing the 3D state of the recorded environment. As such, they are an exhaustive representation of the real world at every scale: 3D reality-based spatial data. Their generation is increasingly fast, but processing routines and data models lack the knowledge needed to reason from information extraction rather than interpretation. The enhanced Smart Point Cloud model developed here brings intelligence to point clouds via three connected meta-models, linking available knowledge and classification procedures to permit semantic injection. Interoperability drives the model's adaptation to potentially many applications through specialized domain ontologies. A first prototype, implemented in Python on a PostgreSQL database, allows semantic and spatial concepts to be combined for basic hybrid queries on different point clouds.

