PERFORMANCE EVALUATION OF sUAS EQUIPPED WITH VELODYNE HDL-32E LiDAR SENSOR

Author(s):  
G. Jozkow ◽  
P. Wieczorek ◽  
M. Karpina ◽  
A. Walicka ◽  
A. Borkowski

The Velodyne HDL-32E laser scanner is increasingly used as the main mapping sensor in small commercial UASs. However, there is still little information about the actual accuracy of point clouds collected with such UASs. This work empirically evaluates the accuracy of the point cloud collected with such a UAS. The accuracy assessment covered four aspects: the impact of the sensors on the theoretical point cloud accuracy, the quality of trajectory reconstruction, and the internal and absolute point cloud accuracies. Theoretical point cloud accuracy was evaluated by propagating the known sensor errors into a 3D position error. The quality of trajectory reconstruction was assessed by comparing the position and attitude differences between the forward and reverse EKF solutions. Internal and absolute accuracies were evaluated by fitting planes to eight point cloud samples extracted over planar surfaces. In addition, absolute accuracy was also determined by calculating 3D point distances between the LiDAR UAS and reference TLS point clouds. The test data consisted of point clouds collected in two separate flights over the same area. The experiments showed that, for the tested UAS, the trajectory reconstruction, especially the attitude, has a significant impact on point cloud accuracy. The estimated absolute accuracy of the point clouds collected during both test flights was better than 10 cm, so the investigated UAS fits the mapping-grade category.
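
As a minimal sketch of the plane-based internal accuracy check described above, the snippet below fits a best-fit plane to one point cloud sample by PCA and reports the RMSE of the orthogonal residuals. The file name and the use of NumPy are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def plane_fit_rmse(points: np.ndarray) -> float:
    """Fit a plane to an (N, 3) sample by PCA and return the RMSE of the
    orthogonal point-to-plane residuals (a proxy for internal accuracy)."""
    centered = points - points.mean(axis=0)
    # The right singular vector with the smallest singular value is the plane normal.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    residuals = centered @ vt[-1]          # signed point-to-plane distances
    return float(np.sqrt(np.mean(residuals ** 2)))

# Hypothetical usage on one of the eight planar samples (e.g. a roof patch):
# sample = np.loadtxt("roof_patch.xyz")    # columns: X, Y, Z in metres
# print(f"plane-fit RMSE: {plane_fit_rmse(sample):.3f} m")
```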

Author(s):  
G. Jozkow ◽  
C. Toth ◽  
D. Grejner-Brzezinska

Unmanned Aerial System (UAS) technology is now widely used in small-area topographic mapping due to its low cost and the good quality of the derived products. Since cameras typically used with UASs have limitations, e.g. they cannot penetrate vegetation, LiDAR sensors are attracting increasing attention in UAS mapping. Sensor development has reached the point where cost and size suit the UAS platform; nevertheless, LiDAR UAS is still an emerging technology. One issue with using LiDAR sensors on UASs is the limited performance of the navigation sensors on UAS platforms. Therefore, various hardware and software solutions are being investigated to increase the quality of UAS LiDAR point clouds. This work analyses several aspects of UAS LiDAR point cloud generation performance based on UAS flights conducted with a Velodyne laser scanner and cameras. Attention was paid primarily to the trajectory reconstruction performance, which is essential for accurate point cloud georeferencing. Since the navigation sensors, especially Inertial Measurement Units (IMUs), may not perform sufficiently well, the estimated camera poses can increase the robustness of the estimated trajectory and, subsequently, the accuracy of the point cloud. The accuracy of the final UAS LiDAR point cloud was evaluated on the basis of the generated DSM, including a comparison with point clouds obtained from dense image matching. The results showed the need for further investigation of the MEMS IMU sensors used for UAS trajectory reconstruction. The accuracy of the UAS LiDAR point cloud, though lower than that of the image-based point cloud, may still be sufficient for certain mapping applications where optical imagery is not useful.
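
A minimal sketch of the DSM-based evaluation mentioned above, assuming both DSMs are already co-registered and gridded to the same extent and cell size; the file names and the use of rasterio are illustrative assumptions, not the authors' workflow.

```python
import numpy as np
import rasterio

# Assumed inputs: LiDAR and image-matching DSMs as GeoTIFFs on the same grid.
with rasterio.open("dsm_lidar.tif") as src_lidar, \
     rasterio.open("dsm_image.tif") as src_image:
    dsm_lidar = src_lidar.read(1, masked=True)
    dsm_image = src_image.read(1, masked=True)

diff = dsm_lidar - dsm_image        # per-cell elevation differences
valid = diff.compressed()           # drop nodata cells
print(f"mean dZ: {valid.mean():+.3f} m")
print(f"RMSE dZ: {np.sqrt(np.mean(valid ** 2)):.3f} m")
```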


Author(s):  
F. Alidoost ◽  
H. Arefi

Nowadays, Unmanned Aerial System (UAS)-based photogrammetry offers an affordable, fast and effective approach to real-time acquisition of high-resolution geospatial information and automatic 3D modelling of objects for numerous applications such as topographic mapping, 3D city modelling, orthophoto generation, and cultural heritage preservation. In this paper, the capability of four state-of-the-art software packages, namely 3DSurvey, Agisoft Photoscan, Pix4Dmapper Pro and SURE, is examined for generating high-density point clouds as well as Digital Surface Models (DSMs) over a historical site. The main steps of this study are image acquisition, point cloud generation, and accuracy assessment. The overlapping images are first captured using a quadcopter and then processed by the different software packages to generate point clouds and DSMs. To evaluate the accuracy and quality of the point clouds and DSMs, both visual and geometric assessments are carried out and the comparison results are reported.
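
One common form of geometric assessment, not necessarily the exact procedure used in the paper, is a cloud-to-cloud comparison: for every point of one cloud, find the nearest neighbour in the other and summarize the distances. A sketch with hypothetical file names:

```python
import numpy as np
from scipy.spatial import cKDTree

# Hypothetical inputs: point clouds from two packages as plain XYZ text files.
cloud_a = np.loadtxt("photoscan_cloud.xyz")[:, :3]
cloud_b = np.loadtxt("pix4d_cloud.xyz")[:, :3]

tree = cKDTree(cloud_b)
dist, _ = tree.query(cloud_a, k=1)      # nearest-neighbour distance per point

print(f"mean C2C distance:   {dist.mean():.3f} m")
print(f"median C2C distance: {np.median(dist):.3f} m")
print(f"95th percentile:     {np.percentile(dist, 95):.3f} m")
```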


Author(s):  
Beril Sirmacek ◽  
Yueqian Shen ◽  
Roderik Lindenbergh ◽  
Sisi Zlatanova ◽  
Abdoulaye Diakite

We present a comparison of point cloud generation and data quality for the Zebedee (Zeb1) and Leica C10 devices, used in the same building interior. Both sensors come with different practical and technical advantages and, as could be expected, these advantages come with some drawbacks. Therefore, depending on the requirements of the project, it is important to know what to expect from different sensors. In this paper, we provide a detailed analysis of point clouds of the same room interior acquired with the Zeb1 and Leica C10 sensors. First, it is visually assessed how different features appear in both the Zeb1 and Leica C10 point clouds. Next, a quantitative analysis is given by comparing local point density, local noise level and stability of local normals. Finally, a simple 3D room plan is extracted from both the Zeb1 and the Leica C10 point clouds, and the lengths of constructed line segments connecting corners of the room are compared. The results show that the Zeb1 is far superior in ease of data acquisition: no heavy handling, hardly any measurement planning and no point cloud registration is required from the operator. The resulting point cloud has a quality in the order of centimeters, which is fine for generating a 3D interior model of a building. Our results also clearly show that fine details, for example of ornaments, are invisible in the Zeb1 data. If point clouds with a quality in the order of millimeters are required, a high-end laser scanner like the Leica C10 is still needed, in combination with a more sophisticated, time-consuming and elaborate data acquisition and processing approach.
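
The local quality metrics mentioned above can be approximated per point from a k-nearest-neighbour neighbourhood: density from the radius that encloses the neighbours, and noise as the RMS distance to a locally fitted plane. A rough sketch under those assumptions (not the authors' code):

```python
import numpy as np
from scipy.spatial import cKDTree

def local_density_and_noise(points: np.ndarray, k: int = 20):
    """Per-point local density (points per m^2, assuming a roughly planar
    neighbourhood) and local noise (RMS distance to the local best-fit plane)."""
    tree = cKDTree(points)
    dist, idx = tree.query(points, k=k + 1)      # first neighbour is the point itself
    radius = dist[:, -1]
    density = k / (np.pi * radius ** 2)

    noise = np.empty(len(points))
    for i, nb in enumerate(idx):
        nbh = points[nb] - points[nb].mean(axis=0)
        normal = np.linalg.svd(nbh, full_matrices=False)[2][-1]
        noise[i] = np.sqrt(np.mean((nbh @ normal) ** 2))
    return density, noise
```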


Author(s):  
F. He ◽  
A. Habib ◽  
A. Al-Rawabdeh

In this paper, we propose a new refinement procedure for semi-global dense image matching. To remove outliers and improve the disparity image derived from the semi-global algorithm, both a local smoothness constraint and point cloud segments are utilized. Compared with current refinement techniques, which usually assume correspondences between planar surfaces and 2D image segments, our proposed approach can effectively deal with objects having both planar and curved surfaces. Moreover, since 3D point clouds contain more precise geometric information about the reconstructed objects, the planar surfaces identified by our approach can be more accurate. To illustrate the feasibility of our approach, several experiments are conducted on both the Middlebury benchmark and real UAV-image datasets. The results demonstrate that our approach performs well in improving the quality of the derived dense image-based point cloud.
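
For orientation only, the sketch below shows a generic planar-segment disparity refinement: fit a disparity plane to the valid pixels of each image segment and resample the segment from it. The paper's procedure additionally exploits 3D point cloud segments and handles curved surfaces, which this simplified sketch does not.

```python
import numpy as np

def refine_disparity_by_segments(disparity: np.ndarray,
                                 segments: np.ndarray,
                                 valid: np.ndarray) -> np.ndarray:
    """Replace each segment's disparities with a least-squares plane
    d = a*x + b*y + c fitted to its valid pixels."""
    refined = disparity.astype(float).copy()
    ys, xs = np.mgrid[0:disparity.shape[0], 0:disparity.shape[1]]
    for label in np.unique(segments):
        mask = (segments == label) & valid
        if mask.sum() < 3:
            continue
        A = np.column_stack([xs[mask], ys[mask], np.ones(mask.sum())])
        a, b, c = np.linalg.lstsq(A, disparity[mask], rcond=None)[0]
        seg = segments == label
        refined[seg] = a * xs[seg] + b * ys[seg] + c
    return refined
```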


Sensors ◽  
2019 ◽  
Vol 19 (3) ◽  
pp. 700 ◽  
Author(s):  
Anna Fryskowska

Three-dimensional (3D) mapping of power lines is very important for power line inspection. Many remotely sensed data products, such as light detection and ranging (LiDAR), have already been studied for power line surveys. More and more data are being obtained via photogrammetric measurements, which increases the need for advanced processing techniques. In recent years, there have been several developments in visualisation techniques using UAV (unmanned aerial vehicle) platform photography. The most modern of such imaging systems are able to generate dense point clouds. However, the accuracy of image-based point clouds is often unstable and depends on the radiometric quality of the images and the efficiency of the image processing algorithms. The main factor influencing point cloud quality is noise, and such problems usually arise with data obtained from low-cost UAV platforms. Therefore, the generated point clouds representing power lines are usually incomplete and noisy. To obtain a complete and accurate 3D model of power lines and towers, improved data processing algorithms need to be developed. The experiment tested the algorithms on power lines of different voltages. This paper presents a wavelet-based method for processing data acquired with a low-cost UAV camera. The proposed, original method involves the application of algorithms for coarse filtering and precise filtering. In addition, a new way of calculating the recommended flight height is proposed. Finally, the accuracy of this two-stage filtering process is assessed; for this, point quality indices are proposed. The experimental results show that the proposed algorithm improves the quality of low-cost point clouds. The proposed methods improve the accuracy of determining the line parameters by more than a factor of two, and about 10% of the noise is removed by the wavelet-based approach.
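
As an illustration of wavelet-based denoising (generic soft thresholding, not the paper's specific two-stage coarse/precise filtering), the sketch below smooths a 1D profile, e.g. the Z coordinates of points ordered along a power line span, with PyWavelets:

```python
import numpy as np
import pywt

def wavelet_denoise(signal: np.ndarray, wavelet: str = "db4", level: int = 4) -> np.ndarray:
    """Soft-threshold the detail coefficients of a 1D signal and reconstruct it."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    # Universal threshold estimated from the finest detail level.
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thr = sigma * np.sqrt(2 * np.log(len(signal)))
    coeffs[1:] = [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(signal)]
```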


Drones ◽  
2021 ◽  
Vol 5 (4) ◽  
pp. 145
Author(s):  
Alessandra Capolupo

A proper classification of 3D point clouds allows the full potential of the data to be exploited in assessing and preserving cultural heritage. Point cloud classification workflows are commonly based on the selection and extraction of geometric features. Although several research activities have investigated the impact of geometric features on the accuracy of classification outcomes, only a few works have focused on their own accuracy and reliability. This paper investigates the accuracy of 3D point cloud geometric features through a statistical analysis based on their corresponding eigenvalues and covariance, with the aim of exploiting their effectiveness for cultural heritage classification. The proposed approach was applied separately to two high-quality 3D point clouds of the All Saints’ Monastery of Cuti (Bari, Southern Italy), generated using two competing survey techniques: Remotely Piloted Aircraft System (RPAS) Structure from Motion (SfM) and Multi-View Stereo (MVS), and Terrestrial Laser Scanner (TLS). Point cloud compatibility was guaranteed through re-alignment and co-registration of the data. The accuracy of the geometric features obtained from the RPAS digital photogrammetric and TLS models was then analyzed and presented. Lastly, a discussion of the convergences and divergences of these results is provided.
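
The eigenvalue-based features referred to above are typically computed from the covariance matrix of each point's local neighbourhood; a common formulation (linearity, planarity, sphericity), not necessarily the exact feature set used in the paper, is sketched below:

```python
import numpy as np
from scipy.spatial import cKDTree

def covariance_features(points: np.ndarray, k: int = 30) -> np.ndarray:
    """Per-point eigenvalue features of the local covariance matrix:
    linearity, planarity and sphericity."""
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k + 1)
    feats = np.empty((len(points), 3))
    for i, nb in enumerate(idx):
        cov = np.cov(points[nb].T)
        l3, l2, l1 = np.linalg.eigvalsh(cov)      # ascending: l3 <= l2 <= l1
        feats[i] = ((l1 - l2) / l1,               # linearity
                    (l2 - l3) / l1,               # planarity
                    l3 / l1)                      # sphericity
    return feats
```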


2021 ◽  
Vol 13 (13) ◽  
pp. 2494
Author(s):  
Gaël Kermarrec ◽  
Niklas Schild ◽  
Jan Hartmann

T-splines have recently been introduced to represent objects of arbitrary shape using a smaller number of control points than the conventional non-uniform rational B-splines (NURBS) or B-spline representations in computer-aided design, computer graphics and reverse engineering. They are flexible in representing complex surface shapes and economical in terms of parameters, as they enable local refinement. This property is a great advantage when dense, scattered and noisy point clouds, such as those from a terrestrial laser scanner (TLS), are approximated using least-squares fitting. Unfortunately, when assessing the goodness of fit of the surface approximation on a real dataset, only a noisy point cloud is available: (i) a low root mean squared error (RMSE) can be linked with overfitting, i.e., a fitting of the noise, and should therefore be avoided, and (ii) a high RMSE is synonymous with a lack of detail. To address the challenge of judging the approximation, the reference surface should be entirely known. This can be achieved by printing a mathematically defined T-spline reference surface in three dimensions (3D) and modeling the artefacts induced by the 3D printing. Once scanned under different configurations, it is possible to assess the goodness of fit of the approximation for a noisy and potentially gappy point cloud and to compare it with the traditional but less flexible NURBS. The advantages of T-spline local refinement open the door to further applications in a geodetic context, such as rigorous statistical testing of deformation. Two different scans of a slightly deformed object were approximated; we found that more than 40% of the computational time could be saved, without affecting the goodness of fit of the surface approximation, by using the same mesh for the two epochs.
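
T-spline and NURBS fitting is not available in common scientific Python libraries, but the underlying idea of the goodness-of-fit assessment, i.e. least-squares surface approximation of noisy scattered points and RMSE against the data versus against a known reference, can be sketched with a simpler bivariate smoothing spline and a synthetic reference surface (all names and parameters below are illustrative assumptions):

```python
import numpy as np
from scipy.interpolate import SmoothBivariateSpline

# Synthetic "known" reference surface sampled with scattered, noisy points,
# mimicking a scan of a mathematically defined object.
rng = np.random.default_rng(0)
x, y = rng.uniform(0.0, 1.0, (2, 5000))
z_true = np.sin(2 * np.pi * x) * np.cos(np.pi * y)
z = z_true + rng.normal(0.0, 0.01, x.shape)

# Least-squares smoothing spline; s controls the smoothing/overfitting trade-off.
spline = SmoothBivariateSpline(x, y, z, kx=3, ky=3, s=len(z) * 0.01 ** 2)
z_fit = spline.ev(x, y)

print(f"RMSE vs noisy data:    {np.sqrt(np.mean((z - z_fit) ** 2)):.4f}")
print(f"RMSE vs true surface:  {np.sqrt(np.mean((z_fit - z_true) ** 2)):.4f}")
```

A low RMSE against the noisy data alone cannot distinguish a good fit from overfitting; only the comparison against the known reference surface can, which is the motivation for the 3D-printed reference object.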


Sensors ◽  
2020 ◽  
Vol 21 (1) ◽  
pp. 201
Author(s):  
Michael Bekele Maru ◽  
Donghwan Lee ◽  
Kassahun Demissie Tola ◽  
Seunghee Park

Modeling a structure in the virtual world using three-dimensional (3D) information enhances our understanding of, and aids in visualizing, how the structure reacts to a disturbance. Generally, 3D point clouds are used to determine structural behavioral changes. Light detection and ranging (LiDAR) is one of the main ways in which a 3D point cloud dataset can be generated. Additionally, 3D cameras are commonly used to build a point cloud containing many points on the external surface of an object. The main objective of this study was to compare the performance of two optical sensors, namely a depth camera (DC) and a terrestrial laser scanner (TLS), in estimating structural deflection. We also applied bilateral filtering techniques, which are commonly used in image processing, to the point cloud data to enhance their accuracy and increase the application prospects of these sensors in structural health monitoring. The results from these sensors were validated by comparing them with the output of a linear variable differential transformer (LVDT) sensor mounted on the beam during an indoor experiment. The results showed that the datasets obtained from both sensors were acceptable for nominal deflections of 3 mm and above, because the error range was less than ±10%. However, the results obtained from the TLS were better than those obtained from the DC.
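
A generic bilateral filter for point clouds (not necessarily the variant used in the study) moves each point along its normal by a weighted average of its neighbours' normal offsets, with spatial and range Gaussian weights; normals are assumed to be precomputed, e.g. by local PCA:

```python
import numpy as np
from scipy.spatial import cKDTree

def bilateral_filter_points(points: np.ndarray, normals: np.ndarray,
                            sigma_s: float = 0.02, sigma_r: float = 0.005,
                            k: int = 20) -> np.ndarray:
    """Bilateral smoothing of a point cloud: spatial weight sigma_s and
    range weight sigma_r are in the same units as the coordinates (metres)."""
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k + 1)
    filtered = points.copy()
    for i, nb in enumerate(idx):
        offsets = points[nb] - points[i]
        d_spatial = np.linalg.norm(offsets, axis=1)
        d_range = offsets @ normals[i]             # offset component along the normal
        w = (np.exp(-d_spatial ** 2 / (2 * sigma_s ** 2))
             * np.exp(-d_range ** 2 / (2 * sigma_r ** 2)))
        filtered[i] = points[i] + normals[i] * (np.sum(w * d_range) / np.sum(w))
    return filtered
```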

