Concept and Performance Evaluation of a Novel UAV-Borne Topo-Bathymetric LiDAR Sensor

2020
Vol 12 (6)
pp. 986
Author(s):
Gottfried Mandlburger
Martin Pfennigbauer
Roland Schwarz
Sebastian Flöry
Lukas Nussbaumer

We present the sensor concept and first performance and accuracy assessment results of a novel lightweight topo-bathymetric laser scanner designed for integration on Unmanned Aerial Vehicles (UAVs), light aircraft, and helicopters. The instrument is particularly well suited for capturing river bathymetry in high spatial resolution as a consequence of (i) the low nominal flying altitude of 50–150 m above ground level resulting in a laser footprint diameter on the ground of typically 10–30 cm and (ii) the high pulse repetition rate of up to 200 kHz yielding a point density on the ground of approximately 20–50 points/m². The instrument features online waveform processing and additionally stores the full waveform within the entire range gate for waveform analysis in post-processing. The sensor was tested in a real-world environment by acquiring data from two freshwater ponds and a 500 m section of the pre-Alpine Pielach River (Lower Austria). The captured underwater points featured a maximum penetration of two times the Secchi depth. On dry land, the 3D point clouds exhibited (i) a measurement noise in the range of 1–3 mm; (ii) a fitting precision of redundantly captured flight strips of 1 cm; and (iii) an absolute accuracy of 2–3 cm compared to terrestrially surveyed checkerboard targets. A comparison of the refraction corrected LiDAR point cloud with independent underwater checkpoints exhibited a maximum deviation of 7.8 cm and revealed a systematic depth-dependent error when using a refraction coefficient of n = 1.36 for time-of-flight correction. The bias is attributed to multi-path effects in the turbid water column (Secchi depth: 1.1 m) caused by forward scattering of the laser signal at suspended particles.
Due to the high spatial resolution, good depth performance, and accuracy, the sensor shows a high potential for applications in hydrology, fluvial morphology, and hydraulic engineering, including flood simulation, sediment transport modeling, and habitat mapping.
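The depth-dependent bias discussed above arises from how raw ranges must be corrected for the slower speed of light in water and for ray bending at the air–water interface. A minimal sketch of such a time-of-flight and Snell's-law correction, assuming a flat water surface and the single refraction coefficient n = 1.36 used in the study (the function name and interface are illustrative, not the authors' implementation):

```python
import math

def refraction_correct(apparent_range, incidence_deg, n_water=1.36):
    """Correct an underwater slant range for refraction at the air-water
    interface and the slower speed of light in water.

    apparent_range: range measured below the water surface, assuming
        in-air propagation speed.
    incidence_deg: beam angle from the vertical at the water surface.
    Returns the corrected vertical depth.
    """
    # True in-water range: light travels slower by a factor n
    true_range = apparent_range / n_water
    # Refracted angle from Snell's law: sin(theta_w) = sin(theta_a) / n
    theta_a = math.radians(incidence_deg)
    theta_w = math.asin(math.sin(theta_a) / n_water)
    # Vertical component of the in-water path
    return true_range * math.cos(theta_w)
```

At nadir (0° incidence) the correction reduces to dividing the apparent range by n; off-nadir, the refracted ray is additionally steepened toward the vertical.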

Author(s):
T. Guo
A. Capra
M. Troyer
A. Gruen
A. J. Brooks
...  

Recent advances in the automation of photogrammetric 3D modelling software packages have stimulated interest in reconstructing highly accurate 3D object geometry in unconventional environments such as underwater, utilizing simple and low-cost camera systems. The accuracy of underwater 3D modelling is affected by more parameters than in single-medium cases. This study is part of a larger project on 3D measurements of temporal change of coral cover in tropical waters. It compares the accuracies of 3D point clouds generated using images acquired from a system camera mounted in an underwater housing and from popular GoPro cameras, respectively. A precisely measured calibration frame was placed in the target scene to provide accurate control information and to quantify the errors of the modelling procedure. In addition, several objects (cinder blocks) with various shapes were arranged in air and underwater, and 3D point clouds were generated by automated image matching. These were further used to examine the relative accuracy of the point cloud generation by comparing the point clouds of the individual objects with the objects measured by the system camera in air (the best possible values). Given a working distance of about 1.5 m, the GoPro camera can achieve a relative accuracy of 1.3 mm in air and 2.0 mm in water. The system camera achieved an accuracy of 1.8 mm in water, which meets our requirements for coral measurement in this system.
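A relative-accuracy comparison between two point clouds of the same object, as described above, can be sketched as a nearest-neighbour distance computation. The following illustration uses SciPy's k-d tree; the function name is our own, and the paper does not specify which comparison software was used:

```python
import numpy as np
from scipy.spatial import cKDTree

def cloud_to_cloud_rms(test_cloud, reference_cloud):
    """RMS of nearest-neighbour distances from each test point to the
    reference cloud: a simple relative-accuracy measure between two
    point clouds of the same object."""
    tree = cKDTree(np.asarray(reference_cloud, dtype=float))
    dists, _ = tree.query(np.asarray(test_cloud, dtype=float))
    return np.sqrt(np.mean(dists ** 2))
```

Here the reference cloud would be the in-air system-camera model (the best possible values), and the test cloud the GoPro or underwater reconstruction.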


Drones
2020
Vol 4 (1)
pp. 6
Author(s):
Ryan G. Howell
Ryan R. Jensen
Steven L. Petersen
Randy T. Larsen

In situ measurements of sagebrush have traditionally been expensive and time consuming. Improvements in small Unmanned Aerial System (sUAS) technology now make it possible to quantify sagebrush morphology and community structure with high resolution imagery on western rangelands, especially in sensitive habitat of the Greater sage-grouse (Centrocercus urophasianus). The emergence of photogrammetry algorithms that generate 3D point clouds from true color imagery can potentially increase the efficiency and accuracy of measuring shrub height in sage-grouse habitat. Our objective was to determine optimal parameters for measuring sagebrush height, including flight altitude, single- vs. double-pass, and continuous vs. paused flight. We acquired imagery using a DJI Mavic Pro 2 multi-rotor Unmanned Aerial Vehicle (UAV) equipped with an RGB camera, flown at 30.5, 45, 75, and 120 m, implementing single-pass and double-pass methods with both continuous and paused flight for each photo method. We generated a Digital Surface Model (DSM) from which we derived plant height, and then performed an accuracy assessment using ground measurements taken at the time of flight. We found high correlation between field-measured and estimated heights, with a mean difference of approximately 10 cm (SE = 0.4 cm) and little variability in accuracy between flights with different altitudes and other parameters after statistical correction using linear regression. We conclude that higher altitude flights using a single-pass method are optimal for measuring sagebrush height due to lower data storage and processing time requirements.
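The statistical correction mentioned above, removing the systematic offset between DSM-derived and field-measured heights by linear regression, can be sketched as follows. The height values are made-up illustrations, not data from the study:

```python
import numpy as np

# Hypothetical field-measured vs DSM-derived sagebrush heights (m);
# the DSM systematically underestimates by roughly 10 cm.
field_h = np.array([0.42, 0.55, 0.61, 0.70, 0.83, 0.95])
dsm_h = np.array([0.31, 0.46, 0.50, 0.62, 0.72, 0.86])

# Fit field = a * dsm + b to absorb the systematic offset
a, b = np.polyfit(dsm_h, field_h, 1)
corrected = a * dsm_h + b

bias_before = np.mean(field_h - dsm_h)     # systematic underestimate
bias_after = np.mean(field_h - corrected)  # ~0 after regression correction
```

Because the fit includes an intercept, the mean residual after correction is zero by construction; what remains is the scatter around the regression line.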


Author(s):
Xiaoliang Zou
Guihua Zhao
Jonathan Li
Yuanxi Yang
Yong Fang

With the rapid development of sensor technology, high spatial resolution imagery and airborne Lidar point clouds can now be captured, making the classification, extraction, evaluation and analysis of a broad range of object features possible. High resolution imagery, Lidar datasets and parcel maps can be widely used as information carriers for classification, enabling a refined classification of urban land cover. The paper presents an object based image analysis (OBIA) approach combining high spatial resolution imagery and airborne Lidar point clouds. The workflow for urban land cover is designed with four components. Firstly, the colour-infrared TrueOrtho photo and the laser point clouds are pre-processed to derive the parcel map of water bodies and the nDSM, respectively. Secondly, image objects are created via multi-resolution image segmentation, integrating the scale parameter and the colour and shape properties with a compactness criterion, so that the image is subdivided into separate object regions. Thirdly, image object classification is performed on the basis of the segmentation and a knowledge-based decision-tree rule set. The image objects are classified into six classes: water bodies, low vegetation/grass, trees, low buildings, high buildings and roads. Finally, to assess the validity of the classification results for the six classes, an accuracy assessment is performed by comparing randomly distributed reference points of the TrueOrtho imagery with the classification results, forming the confusion matrix and calculating the overall accuracy and Kappa coefficient. The study area is the Vaihingen/Enz test site, with a patch of test data from the ISPRS WG III/4 benchmark project. The classification results show high overall accuracy for most urban land cover types: the overall accuracy is 89.5% and the Kappa coefficient equals 0.865. The OBIA approach provides an effective and convenient way to combine high resolution imagery and Lidar ancillary data for classification of urban land cover.
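The accuracy assessment in the final step (overall accuracy and Kappa coefficient from a confusion matrix) reduces to a few lines. A generic sketch, not tied to the paper's six-class matrix:

```python
import numpy as np

def overall_accuracy_and_kappa(cm):
    """Overall accuracy and Cohen's kappa from a square confusion matrix
    (rows: reference class, columns: predicted class)."""
    cm = np.asarray(cm, dtype=float)
    n = cm.sum()
    po = np.trace(cm) / n                        # observed agreement
    pe = (cm.sum(0) * cm.sum(1)).sum() / n ** 2  # chance agreement
    kappa = (po - pe) / (1.0 - pe)
    return po, kappa
```

Kappa discounts the agreement that would occur by chance given the class proportions, which is why it is reported alongside the raw overall accuracy.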


Author(s):
Xiaoliang Zou
Guihua Zhao
Jonathan Li
Yuanxi Yang
Yong Fang

A multispectral Lidar system emits simultaneous laser pulses at different wavelengths. The reflected multispectral energy is captured by the sensor's receiver, and the return signal is recorded together with the position and orientation information of the sensor. These recorded data are processed with GNSS/IMU data in post-processing, forming high density multispectral 3D point clouds. As the first commercial multispectral airborne Lidar sensor, the Optech Titan system is capable of collecting point cloud data in three channels: 532 nm visible (green), 1064 nm near infrared (NIR) and 1550 nm intermediate infrared (IR). It has become a new source of data for 3D land cover classification. The paper presents an Object Based Image Analysis (OBIA) approach that uses only multispectral Lidar point cloud datasets for 3D land cover classification. The approach consists of three steps. Firstly, multispectral intensity images are segmented into image objects on the basis of multi-resolution segmentation integrating different scale parameters. Secondly, intensity objects are classified into nine categories using customized classification indexes and a combination of the multispectral reflectance with the vertical distribution of object features. Finally, accuracy assessment is conducted by comparing random reference sample points from Google imagery tiles with the classification results. The classification results show high overall accuracy for most of the land cover types; over 90% overall accuracy is achieved using multispectral Lidar point clouds for 3D land cover classification.
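One common way to build a classification index from multispectral Lidar intensities is a normalized difference of the NIR and green channels, analogous to NDVI; this is a frequently used feature for Titan data, sketched here as an assumption rather than the authors' exact feature set:

```python
import numpy as np

def pseudo_ndvi(nir_1064, green_532, eps=1e-9):
    """Normalized difference index from multispectral Lidar intensities,
    analogous to NDVI: vegetation reflects strongly at 1064 nm (NIR)
    and weakly at 532 nm (green), so vegetated returns score high."""
    nir = np.asarray(nir_1064, dtype=float)
    grn = np.asarray(green_532, dtype=float)
    return (nir - grn) / (nir + grn + eps)
```

Combined with per-object height statistics (the vertical distribution of points), such indexes allow separating, for example, trees from buildings of similar height.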


Drones
2021
Vol 5 (4)
pp. 145
Author(s):
Alessandra Capolupo

A proper classification of 3D point clouds allows the full potential of the data to be exploited in assessing and preserving cultural heritage. Point cloud classification workflows are commonly based on the selection and extraction of geometric features. Although several research activities have investigated the impact of geometric features on classification accuracy, only a few works have focused on the accuracy and reliability of the features themselves. This paper investigates the accuracy of 3D point cloud geometric features through a statistical analysis based on their corresponding eigenvalues and covariance, with the aim of exploiting their effectiveness for cultural heritage classification. The proposed approach was applied separately to two high-quality 3D point clouds of the All Saints' Monastery of Cuti (Bari, Southern Italy), generated using two competing survey techniques: Remotely Piloted Aircraft System (RPAS) photogrammetry, based on Structure from Motion (SfM) and Multi-View Stereo (MVS) techniques, and Terrestrial Laser Scanning (TLS). Point cloud compatibility was guaranteed through re-alignment and co-registration of the data. The accuracy of the geometric features obtained from the RPAS digital photogrammetric and TLS models is then analyzed and presented. Lastly, a discussion of the convergences and divergences of these results is provided.
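Eigenvalue-based geometric features of the kind analysed here are typically derived from the covariance matrix of a local point neighbourhood. A minimal sketch of the standard linearity/planarity/sphericity definitions (the exact feature set and neighbourhood definition used in the paper are not reproduced here):

```python
import numpy as np

def geometric_features(neighbourhood):
    """Linearity, planarity and sphericity of a point neighbourhood,
    computed from the eigenvalues l1 >= l2 >= l3 of its 3x3 covariance
    matrix. neighbourhood: array-like of shape (n_points, 3)."""
    pts = np.asarray(neighbourhood, dtype=float)
    cov = np.cov(pts.T)  # 3x3 covariance of x, y, z coordinates
    l1, l2, l3 = sorted(np.linalg.eigvalsh(cov), reverse=True)
    linearity = (l1 - l2) / l1
    planarity = (l2 - l3) / l1
    sphericity = l3 / l1
    return linearity, planarity, sphericity
```

A neighbourhood sampled from a wall or floor yields high planarity, an edge or cable high linearity, and vegetation-like clutter high sphericity, which is what makes these features useful for classification.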


2022
pp. 4195-4207
Author(s):
Marwa Mohamed
Zahra Ezz El Din
Laila Qais

A three-dimensional (3D) model is the best way to reflect reality in full detail, which explains the tendency of many scientific disciplines to make measurements, calculations and observations in various fields using such models. Although there are many ways to produce a 3D model, such as images, integration techniques and laser scanning, the quality of their products is not the same in terms of accuracy and detail. This article aims to assess the accuracy of 3D point cloud models derived from close-range images and laser scan data, using Agisoft PhotoScan and CloudCompare software, to determine the compatibility of both datasets for several applications. The College of Science, Departments of Mathematics and Computer, on the University of Baghdad campus was used to create the proposed 3D model, as this location, one of the distinctive features of the university, allows measurements to be made freely from all sides. The results of this study are supported by statistical analysis, including a two-sample t-test and RMSE calculation, in addition to visual comparison; the differences were computed at all reference points. This research shows that the laser 3D model provides many points in a short time, reducing the field work, and that its data are faster to process into a reliable model of the scanned area than data derived from photogrammetry.
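The RMSE calculation used in the statistical comparison can be sketched as follows; the checkpoint coordinates are illustrative, and the function name is our own:

```python
import numpy as np

def rmse(reference, measured):
    """Root-mean-square error of the 3D coordinate differences at common
    checkpoints. Both inputs: array-like of shape (n_points, 3)."""
    d = np.asarray(reference, dtype=float) - np.asarray(measured, dtype=float)
    # Per-point 3D error norm, then RMS over all checkpoints
    return np.sqrt(np.mean(np.sum(d ** 2, axis=1)))
```

In this study the reference coordinates would come from one dataset (e.g. the laser scan) and the measured coordinates from the photogrammetric model at the same checkpoints.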

