A Workflow to Extract the Geometry and Type of Vegetated Landscape Elements from Airborne LiDAR Point Clouds

2021 ◽  
Vol 13 (20) ◽  
pp. 4031
Author(s):  
Ine Rosier ◽  
Jan Diels ◽  
Ben Somers ◽  
Jos Van Orshoven

Rural European landscapes are characterized by a variety of vegetated landscape elements. Although it is often not their main function, they have the potential to affect river discharge and the frequency, extent, depth, and duration of floods downstream by creating both hydrological discontinuities and connections across the landscape. Information about the extent to which individual landscape elements and their spatial location affect peak river discharge and flood frequency and severity in agricultural catchments under specific meteorological conditions is limited. This knowledge gap can partly be explained by the lack of exhaustive inventories of the presence, geometry, and hydrological traits of vegetated landscape elements (vLEs), which in turn is due to the lack of appropriate techniques and source data to produce such inventories and keep them up to date. In this paper, a multi-step methodology is proposed to delineate and classify vLEs based on LiDAR point cloud data in three study areas in Flanders, Belgium. We classified the LiDAR points into the classes ‘vegetated landscape element point’ and ‘other’ using a Random Forest model with a classification accuracy ranging between 0.92 and 0.97. The landscape element objects were further classified into the classes ‘tree object’ and ‘shrub object’ using a Logistic Regression model with an area-based accuracy ranging between 0.34 and 0.95.
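As a rough illustration of this kind of two-stage pipeline, the sketch below classifies individual LiDAR points with a Random Forest and then separates tree from shrub objects with Logistic Regression. The feature sets, the random placeholder data, and all parameter values are assumptions for demonstration only, not the authors' actual setup.

```python
# Minimal sketch of a two-stage point/object classification, assuming hypothetical features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Hypothetical per-point features: height above ground, intensity, return number, local planarity
X = np.random.rand(10000, 4)
y = np.random.randint(0, 2, 10000)          # 1 = vegetated landscape element point, 0 = other

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Stage 1: point-wise classification with a Random Forest
rf = RandomForestClassifier(n_estimators=200, random_state=0)
rf.fit(X_tr, y_tr)
print("point classification accuracy:", accuracy_score(y_te, rf.predict(X_te)))

# Stage 2: object-level tree/shrub separation with Logistic Regression
# Hypothetical object features: object height, footprint area, height-to-width ratio
obj_X = np.random.rand(500, 3)
obj_y = np.random.randint(0, 2, 500)        # 1 = tree object, 0 = shrub object
lr = LogisticRegression().fit(obj_X, obj_y)
```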


Sensors ◽  
2021 ◽  
Vol 21 (3) ◽  
pp. 884
Author(s):  
Chia-Ming Tsai ◽  
Yi-Horng Lai ◽  
Yung-Da Sun ◽  
Yu-Jen Chung ◽  
Jau-Woei Perng

Numerous sensors can obtain images or point cloud data on land; in water, however, the rapid attenuation of electromagnetic signals and the lack of light restrict sensing functions. This study expands the use of two- and three-dimensional detection technologies to underwater applications in order to detect abandoned tires. A three-dimensional acoustic sensor, the BV5000, is used in this study to collect underwater point cloud data. Pre-processing steps are proposed to remove noise and the seabed from the raw data. The point clouds are then processed to obtain two data types: a 2D image and a 3D point cloud. Deep learning methods of different dimensions are used to train the models. In the two-dimensional method, the point cloud is transformed into a bird’s-eye-view image, and the Faster R-CNN and YOLOv3 network architectures are used to detect tires. In the three-dimensional method, the point cloud associated with a tire is cut out from the raw data and used as training data, and the PointNet and PointConv network architectures are then used for tire classification. The results show that both approaches provide good accuracy.
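The 2D branch depends on rasterizing the sonar point cloud into a bird's-eye-view image before feeding it to a detector. The sketch below shows one common way to do this; the grid extent, the resolution, and the use of per-cell maximum height as pixel intensity are illustrative assumptions rather than the authors' exact encoding.

```python
# Illustrative rasterization of a (noise- and seabed-filtered) point cloud into a BEV image.
import numpy as np

def point_cloud_to_bev(points, x_range=(0.0, 20.0), y_range=(-10.0, 10.0), res=0.05):
    """points: (N, 3) array of x, y, z coordinates; returns a 2D height image."""
    w = int((x_range[1] - x_range[0]) / res)
    h = int((y_range[1] - y_range[0]) / res)
    bev = np.zeros((h, w), dtype=np.float32)

    # Keep only points that fall inside the grid
    mask = (
        (points[:, 0] >= x_range[0]) & (points[:, 0] < x_range[1]) &
        (points[:, 1] >= y_range[0]) & (points[:, 1] < y_range[1])
    )
    pts = points[mask]
    col = ((pts[:, 0] - x_range[0]) / res).astype(int)
    row = ((pts[:, 1] - y_range[0]) / res).astype(int)

    # Encode the maximum z value per cell as the pixel intensity
    np.maximum.at(bev, (row, col), pts[:, 2])
    return bev
```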


Author(s):  
Y. Hori ◽  
T. Ogawa

The implementation of laser scanning in the field of archaeology provides an entirely new dimension in research and surveying. It allows us to digitally recreate individual objects, or entire cities, using millions of three-dimensional points grouped together in what are referred to as "point clouds". The visualization of the point cloud data, which can be used in the final report by archaeologists and architects, is usually produced as a JPG or TIFF file. Beyond visualization, re-examining older data and carrying out new surveys of Roman constructions with remote-sensing technology for precise and detailed measurements yields new information. This may lead to revised drawings of ancient buildings that had previously been adduced as evidence without any consideration of their degree of accuracy, and it ultimately opens new lines of research on ancient buildings. We used laser scanners in the field because of their speed, comprehensive coverage, accuracy, and flexibility of data manipulation. We therefore skipped much of the post-processing and focused on the images created from the metadata, which were simply aligned using a tool that extends an automatic feature-matching algorithm and rendered with a popular renderer that provides graphic results.


2021 ◽  
Vol 10 (9) ◽  
pp. 617
Author(s):  
Su Yang ◽  
Miaole Hou ◽  
Ahmed Shaker ◽  
Songnian Li

The digital documentation of cultural relics plays an important role in archiving, protection, and management. In the field of cultural heritage, three-dimensional (3D) point cloud data are effective at expressing complex geometric structures and geometric details on the surface of cultural relics, but lack semantic information. To elaborate the geometric information of cultural relics and add meaningful semantic information, we propose a modeling and processing method for smart point clouds of cultural relics with complex geometries. An information modeling framework for complex geometric cultural relics was designed based on the concept of smart point clouds, in which 3D point cloud data are organized along the time dimension and across spatial scales that represent different levels of geometric detail. The model allows a smart point cloud, or any subset of it, to be linked with semantic information or related documents, so the framework can describe rich semantic information alongside high levels of geometric detail. A case study of the Dazu Thousand-Hand Bodhisattva Statue, which is characterized by a variety of complex geometries, shows that the proposed framework can model and process the statue with excellent applicability and expansibility. This work provides insights into the sustainable development of cultural heritage protection globally.
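To make the idea concrete, here is a toy data structure illustrating the spirit of a "smart point cloud": point subsets organized by spatial scale and acquisition time, each linkable to semantic labels and related documents. The class, field names, and the example values are assumptions, not the paper's actual schema.

```python
# Toy "smart point cloud" node: geometry plus time, scale, semantics, and linked documents.
from dataclasses import dataclass, field
from typing import Dict, List
import numpy as np

@dataclass
class SmartPointCloudNode:
    points: np.ndarray                                         # (N, 3) coordinates at this level of detail
    scale: int                                                 # spatial scale / level of geometric detail
    acquired: str                                              # acquisition date (time dimension)
    semantics: Dict[str, str] = field(default_factory=dict)    # e.g. {"part": "hand", "material": "stone"}
    documents: List[str] = field(default_factory=list)         # links to reports, photos, restoration records
    children: List["SmartPointCloudNode"] = field(default_factory=list)

# Example: a coarse scan of a statue with a finer-scale subset covering one hand
statue = SmartPointCloudNode(np.zeros((1000, 3)), scale=0, acquired="2019-05")
hand = SmartPointCloudNode(np.zeros((200, 3)), scale=1, acquired="2019-05",
                           semantics={"part": "hand"},
                           documents=["restoration_report_2019.pdf"])   # hypothetical document
statue.children.append(hand)
```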


2021 ◽  
Vol 65 (1) ◽  
pp. 10501-1-10501-9
Author(s):  
Jiayong Yu ◽  
Longchen Ma ◽  
Maoyi Tian ◽  
Xiushan Lu

The unmanned aerial vehicle (UAV)-mounted mobile LiDAR system (ULS) is widely used for geomatics owing to its efficient data acquisition and convenient operation. However, because of the limited carrying capacity of a UAV, the sensors integrated in the ULS must be small and lightweight, which reduces the density of the collected scanning points and complicates registration between image data and point cloud data. To address this issue, the authors propose a method for registering and fusing ULS sequence images and laser point clouds, converting the problem of registering point cloud data and image data into one of matching feature points between two images. First, a point cloud is selected to produce an intensity image. Subsequently, the corresponding feature points of the intensity image and the optical image are matched, and the exterior orientation parameters are solved using a collinearity equation based on image position and orientation. Finally, the sequence images are fused with the laser point cloud, based on the Global Navigation Satellite System (GNSS) time index of the optical image, to generate a true-color point cloud. The experimental results show that the proposed method achieves higher registration accuracy and fusion speed, demonstrating its accuracy and effectiveness.
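The core trick is to reduce 2D-to-3D registration to image-to-image matching between the LiDAR intensity image and the optical image. The sketch below shows that matching step with ORB features in OpenCV; the paper does not specify which detector or matcher it uses, so those choices are illustrative assumptions.

```python
# Sketch of matching a LiDAR-derived intensity image against an optical image (ORB assumed).
import cv2
import numpy as np

def match_intensity_to_optical(intensity_img, optical_img, max_matches=100):
    """Both inputs are 8-bit grayscale images; returns corresponding point pairs."""
    orb = cv2.ORB_create(nfeatures=2000)
    kp1, des1 = orb.detectAndCompute(intensity_img, None)
    kp2, des2 = orb.detectAndCompute(optical_img, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:max_matches]

    pts_intensity = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts_optical = np.float32([kp2[m.trainIdx].pt for m in matches])
    return pts_intensity, pts_optical
```

The matched pairs would then feed the collinearity-equation solution for the exterior orientation parameters, which is not reproduced here.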


2019 ◽  
Vol 11 (23) ◽  
pp. 2737 ◽  
Author(s):  
Minsu Kim ◽  
Seonkyung Park ◽  
Jeffrey Danielson ◽  
Jeffrey Irwin ◽  
Gregory Stensaas ◽  
...  

The traditional practice for assessing accuracy in lidar data involves calculating RMSEz (root mean square error of the vertical component). Accuracy assessment of lidar point clouds in full 3D (three dimensions) is not routinely performed. The main challenge in assessing accuracy in full 3D is identifying a conjugate point of a ground-surveyed checkpoint in the lidar point cloud with the smallest possible uncertainty. Relatively coarse point spacing in airborne lidar data makes it challenging to determine a conjugate point accurately, so a substantial unwanted error is added to the inherent positional uncertainty of the lidar data. Unless this additional error is kept small enough, the 3D accuracy assessment will not properly represent the inherent uncertainty. We call this added error “external uncertainty,” which is associated with conjugate point identification. This research developed a general external uncertainty model based on three-plane intersections that accounts for several factors (sensor precision, feature dimension, and point density). The method can be used for lidar point cloud data from a wide range of sensor qualities, point densities, and sizes of the features of interest. The external uncertainty model was derived as a semi-analytical function that takes the number of points on a plane as input. It is a normalized general function that can be scaled by the smooth surface precision (SSP) of a lidar system. This general uncertainty model provides a quantitative guideline on the required conditions for the conjugate point based on geometric features. Applications of the external uncertainty model were demonstrated using various lidar point cloud data from the U.S. Geological Survey (USGS) 3D Elevation Program (3DEP) library to determine the valid conditions for a conjugate point from three-plane modeling.
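The geometric core of the approach is deriving a conjugate point from the intersection of three planes fit to lidar returns. A minimal sketch of that computation is shown below; the least-squares plane fit and the direct 3x3 solve are simplified assumptions and do not reproduce the paper's uncertainty propagation.

```python
# Sketch: fit three planes to lidar returns and intersect them to get a conjugate point.
import numpy as np

def fit_plane(points):
    """Least-squares plane through (N, 3) points; returns unit normal n and offset d, with n·x = d."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    n = vt[-1]                        # normal = direction of smallest variance
    return n, n @ centroid

def three_plane_intersection(points_a, points_b, points_c):
    """Each argument is the set of lidar points on one roof/wall facet."""
    normals, offsets = zip(*(fit_plane(p) for p in (points_a, points_b, points_c)))
    A = np.vstack(normals)            # 3x3 system  A x = d
    d = np.array(offsets)
    return np.linalg.solve(A, d)      # the conjugate point (fails if the planes are near-parallel)
```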


2020 ◽  
Vol 9 (2) ◽  
pp. 72 ◽  
Author(s):  
Sami El-Mahgary ◽  
Juho-Pekka Virtanen ◽  
Hannu Hyyppä

The importance of being able to separate the semantics from the actual (X,Y,Z) coordinates in a point cloud has been actively brought up in recent research. However, there is still no widely used or accepted data layout paradigm for efficiently storing and managing such semantic point cloud data. In this paper, we present a simple data layout that makes use of the semantics and allows for quick queries. The underlying idea is especially suited to a programming approach (e.g., queries programmed via Python), but we also present an even simpler implementation of the underlying technique on a well-known relational database management system (RDBMS), namely PostgreSQL. The obtained query results suggest that the presented approach can be successfully used to handle point and range queries on large point clouds.
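As one possible concretization of separating semantics from coordinates in PostgreSQL, the sketch below stores class labels in a side table and runs a range query restricted to one semantic class from Python. The table layout, column names, and connection settings are assumptions, not the authors' published schema.

```python
# Minimal sketch of a semantics-aware point table in PostgreSQL, queried via psycopg2.
import psycopg2

conn = psycopg2.connect("dbname=pointclouds user=postgres")   # hypothetical connection settings
cur = conn.cursor()

cur.execute("""
    CREATE TABLE IF NOT EXISTS points (
        id       bigserial PRIMARY KEY,
        x        double precision,
        y        double precision,
        z        double precision,
        class_id smallint              -- semantic class kept apart from the geometry columns
    );
    CREATE TABLE IF NOT EXISTS classes (
        class_id smallint PRIMARY KEY,
        name     text
    );
""")
conn.commit()

# Range query limited to one semantic class, e.g. all 'building' points in a bounding box
cur.execute("""
    SELECT p.x, p.y, p.z
    FROM points p JOIN classes c USING (class_id)
    WHERE c.name = %s
      AND p.x BETWEEN %s AND %s
      AND p.y BETWEEN %s AND %s;
""", ("building", 0.0, 100.0, 0.0, 100.0))
rows = cur.fetchall()
```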


2020 ◽  
Vol 12 (11) ◽  
pp. 1800 ◽  
Author(s):  
Maarten Bassier ◽  
Maarten Vergauwen

The processing of remote sensing measurements into Building Information Modeling (BIM) is a popular subject in the current literature. An important step in the process is the enrichment of the geometry with the topology of the wall observations to create a logical model. However, this remains an unsolved task, as methods struggle to deal with the noise, incompleteness, and complexity of point cloud data of building scenes. Current methods impose severe abstractions such as Manhattan-world assumptions and single-story procedures to overcome these obstacles, so a general data processing approach is still missing. In this paper, we propose a method that addresses these shortcomings and creates a logical BIM model in an unsupervised manner. More specifically, we propose a connection evaluation framework that takes as input a set of preprocessed point clouds of a building’s wall observations and computes the best-fit topology between them. We transcend the current state of the art by processing point clouds of straight, curved, and polyline-based walls. We also consider multiple connection types in a novel reasoning framework that decides which operations are best suited to reconstruct the topology of the walls. The geometry and topology produced by our method are directly usable by BIM processes, as they are structured in conformance with the IFC data structure. Experimental results on the Stanford 2D-3D-Semantics dataset (2D-3D-S) show that the proposed method is a promising framework for reconstructing complex multi-story wall elements in an unsupervised manner.
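To give a feel for what evaluating a connection between two wall observations might involve, the toy sketch below intersects two wall axis segments and labels the candidate joint. The joint taxonomy, the thresholds, and the decision rule are illustrative assumptions and are much simpler than the paper's reasoning framework.

```python
# Toy connection evaluation between two straight wall axes (2D), with an assumed joint taxonomy.
import numpy as np

def line_intersection(p1, d1, p2, d2):
    """Intersect lines p1 + t1*d1 and p2 + t2*d2; returns (t1, t2) or None if parallel."""
    A = np.array([d1, -d2]).T
    if abs(np.linalg.det(A)) < 1e-9:
        return None
    return np.linalg.solve(A, p2 - p1)

def evaluate_connection(seg_a, seg_b, tol=0.5):
    """seg = (start, end) as 2D arrays; returns a candidate joint type and its location."""
    p1, q1 = map(np.asarray, seg_a)
    p2, q2 = map(np.asarray, seg_b)
    d1, d2 = q1 - p1, q2 - p2
    params = line_intersection(p1, d1, p2, d2)
    if params is None:
        return ("parallel", None)
    t1, t2 = params
    joint = p1 + t1 * d1
    # Inside both segments -> crossing joint; near an endpoint of both -> corner joint; else no connection.
    if 0 <= t1 <= 1 and 0 <= t2 <= 1:
        return ("X-joint", joint)
    if min(abs(t1), abs(t1 - 1)) * np.linalg.norm(d1) < tol and \
       min(abs(t2), abs(t2 - 1)) * np.linalg.norm(d2) < tol:
        return ("L-joint", joint)
    return ("no connection", joint)
```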

