A REGISTRATION METHOD OF POINT CLOUDS COLLECTED BY MOBILE LIDAR USING SOLELY STANDARD LAS FILES INFORMATION

Author(s):  
L. Gézero ◽  
C. Antunes

In the last few years, LiDAR sensors installed in terrestrial vehicles have proven to be an efficient means of collecting very dense 3D georeferenced information. The possibility of creating very dense point clouds representing the surface surrounding the sensor at a given moment, in a fast, detailed and easy way, shows the potential of this technology for large-scale cartography and digital terrain model production. However, some limitations are still associated with the use of this technology. When several acquisitions of the same area are made with the same device, differences between the clouds can be observed. These differences can range from a few centimetres to several tens of centimetres, mainly in urban and high-vegetation areas, where occlusion of the GNSS signal degrades the georeferenced trajectory. In this article, a different point cloud registration method is proposed. In addition to its efficiency and speed of execution, the main advantage of the method is that the adjustment is made continuously along the trajectory, based on GPS time. The process is fully automatic and uses only the information recorded in standard LAS files, without the need for any auxiliary information, in particular regarding the trajectory.
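Everything such a method needs (coordinates and GPS time) is present in standard LAS point records, readable with a library such as laspy. The toy sketch below conveys the flavour of a continuous, time-indexed adjustment by matching centroids of overlapping time slices; it is an invented illustration, not the authors' algorithm, and all names are assumptions:

```python
import numpy as np

def time_binned_adjustment(points, gps_time, ref_points, ref_time, bin_width=1.0):
    """Illustrative sketch only: estimate a per-time-bin translation between
    two overlapping clouds using nothing but coordinates and GPS time, both
    stored in standard LAS files. A real method would match features; this
    toy version simply aligns the centroids of matching time slices."""
    corrected = points.copy()
    t0 = min(gps_time.min(), ref_time.min())
    bins = ((gps_time - t0) // bin_width).astype(int)
    ref_bins = ((ref_time - t0) // bin_width).astype(int)
    for b in np.unique(bins):
        mask, ref_mask = bins == b, ref_bins == b
        if ref_mask.any():
            # Shift this time slice so its centroid matches the reference slice.
            corrected[mask] += ref_points[ref_mask].mean(axis=0) - points[mask].mean(axis=0)
    return corrected
```

Because the correction is indexed by GPS time rather than by space, it follows the trajectory continuously, which is the property the abstract emphasises.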

Author(s):  
L. Gézero ◽  
C. Antunes

Digital terrain models (DTMs) play an essential role in all types of road maintenance, water supply and sanitation projects. The demand for such information is greater in developing countries, where the lack of infrastructure is more severe. In recent years, Mobile LiDAR Systems (MLS) have proved to be a very efficient technique for acquiring precise and dense point clouds. These point clouds can provide the data for DTM production in remote areas, owing mainly to the safety, precision, speed of acquisition and detail of the information gathered. However, filtering point clouds and separating “terrain points” from “non-terrain points” quickly and consistently remains a challenge that has caught the interest of researchers. This work presents a method to create a DTM from point clouds collected by MLS. The method is based on two iterative steps. The first step reduces the point cloud to a set of points that represent the terrain's shape, with the distance between points inversely proportional to the terrain variation. The second step is a Delaunay triangulation of the points resulting from the first step. The achieved results encourage wider use of this technology as a solution for large-scale DTM production in remote areas.
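The two-step structure can be sketched in a simplified form: a crude terrain-shape subset (here, the lowest point per grid cell, a stand-in for the paper's variation-adaptive reduction) followed by a Delaunay triangulation. All names and the cell-based reduction are illustrative, not the authors' exact algorithm:

```python
import numpy as np
from scipy.spatial import Delaunay

def simple_dtm(points, cell=1.0):
    """Hypothetical two-step DTM sketch.
    Step 1: keep the lowest point per grid cell as a crude bare-ground subset.
    Step 2: triangulate the survivors into a TIN with a Delaunay triangulation."""
    keys = np.floor(points[:, :2] / cell).astype(int)
    lowest = {}
    for key, p in zip(map(tuple, keys), points):
        if key not in lowest or p[2] < lowest[key][2]:
            lowest[key] = p  # lowest return in this cell approximates ground
    ground = np.array(list(lowest.values()))
    tin = Delaunay(ground[:, :2])  # 2D triangulation of ground points
    return ground, tin
```

In the paper the point spacing adapts to terrain variation rather than using a fixed cell, but the pipeline shape (reduce, then triangulate) is the same.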


2019 ◽  
pp. 1175-1196
Author(s):  
Dion J. Wiseman ◽  
Jurjen van der Sluijs

Digital terrain models are invaluable datasets that are frequently used for visualizing, modeling, and analyzing Earth surface processes. Accurate models covering local-scale landscape features are often very expensive and have poor temporal resolution. This research investigates the utility of UAV-acquired imagery for generating high-resolution terrain models and provides a detailed accuracy assessment according to recommended protocols. High-resolution UAV imagery was acquired over a localized dune complex in southwestern Manitoba, Canada, and two alternative workflows were evaluated for extracting point clouds. UAV-derived data points were then compared to reference datasets acquired using mapping-grade GPS receivers and a total station. Results indicated that the UAV imagery was capable of producing dense point clouds and high-resolution terrain models, with mean errors as low as -0.15 m and RMSE values of 0.42 m depending on the resolution of the image dataset and workflow employed.
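The two error statistics quoted (mean error and RMSE) are straightforward to compute once model elevations are paired with check points; the snippet below is a generic illustration, and the numbers in its test are made up rather than the study's data:

```python
import numpy as np

def vertical_accuracy(model_z, reference_z):
    """Mean error and RMSE of modelled elevations against check-point
    elevations, the two accuracy statistics reported in such assessments."""
    err = np.asarray(model_z, dtype=float) - np.asarray(reference_z, dtype=float)
    return err.mean(), np.sqrt((err ** 2).mean())
```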


2020 ◽  
Vol 10 (4) ◽  
pp. 1275
Author(s):  
Zizhuang Wei ◽  
Yao Wang ◽  
Hongwei Yi ◽  
Yisong Chen ◽  
Guoping Wang

Semantic modeling is a challenging task that has received widespread attention in recent years. With the help of mini Unmanned Aerial Vehicles (UAVs), multi-view high-resolution aerial images of large-scale scenes can be conveniently collected. In this paper, we propose a semantic Multi-View Stereo (MVS) method to reconstruct 3D semantic models from 2D images. Firstly, a 2D semantic probability distribution is obtained by a Convolutional Neural Network (CNN). Secondly, calibrated camera poses are determined by Structure from Motion (SfM), while depth maps are estimated by learning-based MVS. Combining the 2D segmentation and 3D geometry information, dense point clouds with semantic labels are generated by a probability-based semantic fusion method. In the final stage, the coarse 3D semantic point cloud is optimized by both local and global refinements. By making full use of multi-view consistency, the proposed method efficiently produces a fine-level 3D semantic point cloud. The experimental results, evaluated by re-projection maps, achieve 88.4% pixel accuracy on the Urban Drone Dataset (UDD). In conclusion, our graph-based semantic fusion procedure and refinement based on local and global information can suppress and reduce the re-projection error.
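The probability-based fusion step can be illustrated with a toy example: the per-view class probabilities observed for each 3D point are accumulated and the most probable class is kept. This is only a sketch of the general idea; the array shapes and names are assumptions, not the paper's implementation:

```python
import numpy as np

def fuse_semantics(prob_maps):
    """Toy multi-view semantic fusion.
    prob_maps: array of shape (n_views, n_points, n_classes) holding the
    CNN class probabilities each view assigns to each 3D point."""
    fused = np.asarray(prob_maps).sum(axis=0)           # accumulate evidence
    labels = fused.argmax(axis=1)                       # most probable class
    confidence = fused.max(axis=1) / fused.sum(axis=1)  # normalised support
    return labels, confidence
```

The paper additionally refines this coarse labelling with local and global (graph-based) optimization, which a sketch this small does not attempt.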


Author(s):  
Jian Wu ◽  
Qingxiong Yang

In this paper, we study the semantic segmentation of 3D LiDAR point cloud data in urban environments for autonomous driving and propose a method that exploits the surface information of the ground plane. In practice, the resolution of a LiDAR sensor installed in a self-driving vehicle is relatively low, so the acquired point cloud is quite sparse. While recent work on dense point cloud segmentation has achieved promising results, performance drops considerably when such methods are applied directly to sparse point clouds. This paper focuses on the semantic segmentation of sparse point clouds obtained from a 32-channel LiDAR sensor using deep neural networks. The main contribution is the integration of ground information, which is used to group ground points that are far away from each other. Qualitative and quantitative experiments on two large-scale point cloud datasets show that the proposed method outperforms the current state of the art.
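A common way to obtain the ground-plane information such a method relies on is a RANSAC plane fit; the paper's actual estimator may differ, so treat this as an illustrative sketch with invented names:

```python
import numpy as np

def ransac_ground_plane(points, iters=200, thresh=0.2, seed=0):
    """Minimal RANSAC plane fit. Returns (normal, d, n_inliers) for the
    plane n·p + d = 0 supported by the most points within `thresh`."""
    rng = np.random.default_rng(seed)
    best = (None, 0.0, 0)
    for _ in range(iters):
        a, b, c = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(b - a, c - a)
        if np.linalg.norm(n) < 1e-9:
            continue  # degenerate (collinear) sample
        n = n / np.linalg.norm(n)
        d = -n.dot(a)
        inliers = int(np.sum(np.abs(points @ n + d) < thresh))
        if inliers > best[2]:
            best = (n, d, inliers)
    return best
```

Points close to the recovered plane can then be treated as ground, even when they are spatially far apart, which is the grouping effect the abstract describes.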




2016 ◽  
Vol 10 (2) ◽  
pp. 163-171 ◽  
Author(s):  
Takuma Watanabe ◽  
Takeru Niwa ◽  
Hiroshi Masuda
We propose a registration method for aligning short-range point-clouds captured using a portable laser scanner (PLS) to a large-scale point-cloud captured using a terrestrial laser scanner (TLS). As a PLS covers a very limited region, it often fails to provide sufficient features for registration. In our method, the system analyzes the large-scale point-cloud captured using a TLS and indicates candidate regions to be measured using a PLS. When the user measures a suggested region, the system aligns the captured short-range point-cloud to the large-scale point-cloud. Our experiments show that the registration method can adequately align point-clouds captured using a TLS and a PLS.
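Once a candidate region has been measured, the local alignment itself is typically an ICP-style refinement. The sketch below shows one nearest-neighbour ICP iteration using the Kabsch/SVD rigid fit; it is a generic illustration, not the authors' exact procedure:

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_step(source, target):
    """One ICP iteration: match each source point to its nearest target
    point, then compute the best-fit rotation R and translation t (Kabsch)."""
    _, idx = cKDTree(target).query(source)
    matched = target[idx]
    sc, tc = source.mean(axis=0), matched.mean(axis=0)
    U, _, Vt = np.linalg.svd((source - sc).T @ (matched - tc))
    R = (U @ Vt).T
    if np.linalg.det(R) < 0:  # guard against reflections
        Vt[-1] *= -1
        R = (U @ Vt).T
    t = tc - R @ sc
    return source @ R.T + t, R, t
```

In practice this step is repeated until the mean residual stops improving; the paper's contribution is less the alignment itself than choosing regions where such an alignment has enough features to succeed.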


2021 ◽  
Vol 13 (11) ◽  
pp. 2195
Author(s):  
Shiming Li ◽  
Xuming Ge ◽  
Shengfu Li ◽  
Bo Xu ◽  
Zhendong Wang

Today, mobile laser scanning and oblique photogrammetry are two standard urban remote sensing acquisition methods, and the cross-source point-cloud data obtained with these methods differ significantly yet are complementary. Accurate co-registration can make up for the limitations of a single data source, but many existing registration methods face critical challenges. Therefore, in this paper, we propose a systematic incremental registration method that can successfully register MLS and photogrammetric point clouds in the presence of a large amount of missing data, large variations in point density, and scale differences. The robustness of this method is due to its elimination of noise in the extracted linear features and its 2D incremental registration strategy. Our work makes three main contributions: (1) an end-to-end automatic cross-source point-cloud registration method; (2) an effective way to extract linear features and restore the scale; and (3) an incremental registration strategy that simplifies the complex registration process. The experimental results show that this method can successfully achieve cross-source data registration, while other methods struggle to obtain satisfactory results efficiently. Moreover, this method can be extended to more point-cloud sources.
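Restoring the scale between photogrammetric and MLS data amounts to estimating a similarity transform from matched features. As a generic illustration (not the paper's algorithm), Umeyama's closed form in 2D recovers scale, rotation and translation from matched point pairs:

```python
import numpy as np

def similarity_2d(src, dst):
    """Closed-form 2D similarity fit (Umeyama): find s, R, t minimising
    ||dst - (s * R @ src + t)|| over matched point pairs."""
    sc, dc = src.mean(axis=0), dst.mean(axis=0)
    P, Q = src - sc, dst - dc
    U, S, Vt = np.linalg.svd(P.T @ Q)
    R = (U @ Vt).T
    if np.linalg.det(R) < 0:  # correct an improper (reflecting) solution
        Vt[-1] *= -1
        R = (U @ Vt).T
        S[-1] *= -1
    s = S.sum() / (P ** 2).sum()
    t = dc - s * R @ sc
    return s, R, t
```

In the paper the correspondences come from extracted linear features rather than raw points, and the registration proceeds incrementally, but the underlying scale-recovery problem is of this form.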


2018 ◽  
Vol 7 (9) ◽  
pp. 342 ◽  
Author(s):  
Adam Salach ◽  
Krzysztof Bakuła ◽  
Magdalena Pilarska ◽  
Wojciech Ostrowski ◽  
Konrad Górski ◽  
...  

In this paper, the vertical accuracy of digital terrain models generated in an experiment was assessed. The models were based on two techniques: LiDAR and photogrammetry. The data were acquired using an ultralight laser scanner dedicated to Unmanned Aerial Vehicle (UAV) platforms, which provides very dense point clouds (180 points per square metre), and an RGB digital camera that collects data at very high resolution (a ground sampling distance of 2 cm). The vertical error of the digital terrain models (DTMs) was evaluated against surveying data measured in the field and compared to airborne laser scanning collected with a manned plane. The data were acquired in summer during a corridor flight mission over levees and their surroundings, where various types of land cover were observed. The experiment results showed unequivocally that the terrain models obtained using LiDAR technology were more accurate. An attempt to assess the accuracy and penetration capability of the point cloud from the image-based approach, with reference to various types of land cover, was conducted based on Real Time Kinematic Global Navigation Satellite System (GNSS-RTK) measurements and compared to archival airborne laser scanning data. The vertical accuracy of the DTMs was evaluated separately for uncovered and vegetated areas, providing information about the influence of vegetation height on bare-ground extraction and DTM generation. In uncovered and low-vegetation areas (0–20 cm), the vertical accuracies of digital terrain models generated from the different data sources were quite similar: the RMSE was 0.11 m for the UAV Laser Scanning (ULS) data and 0.14 m for the image-based data collected with the UAV platform, whereas for medium vegetation (higher than 60 cm) the RMSEs from these two data sources were 0.11 m and 0.36 m, respectively. A decrease in accuracy of 0.10 m for every 20 cm of vegetation height was observed for the photogrammetric data; no such dependency was noticed for models created from the ULS data.


Author(s):  
T. Fiolka ◽  
F. Rouatbi ◽  
D. Bender

3D terrain models are an important instrument in areas like geology, agriculture and reconnaissance. Using an automated UAS with a line-based LiDAR, terrain models can be created quickly and easily, even of large areas. But the resulting point cloud may contain holes and therefore be incomplete. This might happen due to occlusions, deviations from the planned flight route caused by wind, or simply changes in the ground height, which alter the swath of the LiDAR system. This paper proposes a method to detect holes in 3D point clouds generated during the flight and to adjust the course in order to close them. First, a grid-based search for holes in the horizontal ground plane is performed. Then a check for vertical holes, mainly created by building walls, is done. Due to occlusions and steep LiDAR angles, closing the vertical gaps may be difficult or even impossible. Therefore, the current approach deals with holes in the ground plane and only marks the vertical holes so that the operator can decide on further actions regarding them. The aim is to efficiently create point clouds which can be used for the generation of complete 3D terrain models.
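The grid-based search for ground-plane holes can be sketched as follows: rasterise the cloud onto a horizontal grid over the survey area and flag cells with too few returns, whose centres could then serve as re-fly waypoints. Function and parameter names are illustrative, not from the paper:

```python
import numpy as np

def find_ground_holes(points, bounds, cell=1.0, min_points=1):
    """Flag grid cells in the horizontal ground plane with fewer than
    `min_points` LiDAR returns; return the hole cell-centre coordinates."""
    (xmin, xmax), (ymin, ymax) = bounds
    nx = int(np.ceil((xmax - xmin) / cell))
    ny = int(np.ceil((ymax - ymin) / cell))
    ix = ((points[:, 0] - xmin) / cell).astype(int).clip(0, nx - 1)
    iy = ((points[:, 1] - ymin) / cell).astype(int).clip(0, ny - 1)
    counts = np.zeros((nx, ny), dtype=int)
    np.add.at(counts, (ix, iy), 1)       # returns per cell
    holes = np.argwhere(counts < min_points)
    # Cell-centre coordinates of the holes, e.g. as candidate waypoints.
    return xmin + (holes[:, 0] + 0.5) * cell, ymin + (holes[:, 1] + 0.5) * cell
```

Detecting vertical holes along building walls requires reasoning about scan angles and occlusion, which this horizontal raster deliberately leaves out, mirroring the paper's decision to only mark such gaps for the operator.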

