USING UNMANNED AERIAL SYSTEMS (UAS) TO PRODUCE POINT CLOUDS OF QUARRY WALLS FOR TESTING CHANGE DETECTION SOFTWARE AND THEIR APPLICATION TO STUDYING HIGHWAY ROCKFALL RATES

2019 ◽  
Author(s):  
Amanda R. Thomason ◽  
Samantha L. Farmer ◽  
Chester F. Watts ◽  
George C. Stephenson
Drones ◽  
2020 ◽  
Vol 4 (1) ◽  
pp. 6 ◽  
Author(s):  
Ryan G. Howell ◽  
Ryan R. Jensen ◽  
Steven L. Petersen ◽  
Randy T. Larsen

In situ measurements of sagebrush have traditionally been expensive and time consuming. Recent improvements in small Unmanned Aerial Systems (sUAS) technology make it possible to quantify sagebrush morphology and community structure with high-resolution imagery on western rangelands, especially in sensitive habitat of the Greater sage-grouse (Centrocercus urophasianus). The emergence of photogrammetry algorithms that generate 3D point clouds from true-color imagery can potentially increase the efficiency and accuracy of measuring shrub height in sage-grouse habitat. Our objective was to determine optimal parameters for measuring sagebrush height, including flight altitude, single- vs. double-pass flight, and continuous vs. paused capture. We acquired imagery using a DJI Mavic Pro 2 multi-rotor Unmanned Aerial Vehicle (UAV) equipped with an RGB camera, flown at 30.5, 45, 75, and 120 m, implementing single-pass and double-pass methods with both continuous and paused flight for each photo method. We generated a Digital Surface Model (DSM) from which we derived plant height, and then performed an accuracy assessment using on-the-ground measurements taken at the time of flight. We found high correlation between field-measured and estimated heights, with a mean difference of approximately 10 cm (SE = 0.4 cm) and little variability in accuracy between flights with different altitudes and other parameters after statistical correction using linear regression. We conclude that higher-altitude flights using a single-pass method are optimal for measuring sagebrush height because of their lower data-storage and processing-time requirements.
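The statistical correction step mentioned above is not detailed in the abstract; the following is a minimal sketch of how such a linear-regression bias correction could look, using entirely synthetic heights and a hypothetical error model:

```python
import numpy as np

# Synthetic example: field-measured sagebrush heights (m) and UAS/DSM-derived
# estimates with an assumed systematic bias (all values hypothetical).
rng = np.random.default_rng(42)
field_h = rng.uniform(0.3, 1.2, size=50)                       # ground truth
uas_h = 0.9 * field_h - 0.05 + rng.normal(0, 0.02, size=50)    # biased estimates

# Fit a linear correction: field height as a function of UAS-derived height.
slope, intercept = np.polyfit(uas_h, field_h, deg=1)
corrected = slope * uas_h + intercept

# Mean difference and its standard error before and after correction.
def mean_diff_se(est, truth):
    d = est - truth
    return d.mean(), d.std(ddof=1) / np.sqrt(d.size)

raw_bias, raw_se = mean_diff_se(uas_h, field_h)
cor_bias, cor_se = mean_diff_se(corrected, field_h)
print(f"raw bias {raw_bias:.3f} m, corrected bias {cor_bias:.3f} m")
```

The least-squares fit absorbs any constant and proportional bias, which is why accuracy varied little across flight parameters once corrected.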


2017 ◽  
Vol 38 (8-10) ◽  
pp. 2623-2638 ◽  
Author(s):  
S. N. Longmore ◽  
R. P. Collins ◽  
S. Pfeifer ◽  
S. E. Fox ◽  
M. Mulero-Pázmány ◽  
...  

Forests ◽  
2019 ◽  
Vol 10 (3) ◽  
pp. 284 ◽  
Author(s):  
Luke Wallace ◽  
Chris Bellman ◽  
Bryan Hally ◽  
Jaime Hernandez ◽  
Simon Jones ◽  
...  

Point clouds captured from Unmanned Aerial Systems are increasingly relied upon to provide information describing the structure of forests. The quality of the information derived from these point clouds depends on a range of variables, including the type and structure of the forest, weather conditions and flying parameters. A key requirement for achieving accurate estimates of height-based metrics describing forest structure is a source of ground information. This study explores the availability and reliability of ground surface points within point clouds captured in six forests of different structure (canopy cover and height), using three image capture and processing strategies, consisting of nadir, oblique and composite nadir/oblique image networks. The ground information was extracted through manual segmentation of the point clouds as well as through the use of two commonly used ground filters, LAStools lasground and the Cloth Simulation Filter. The outcomes of these strategies were assessed against ground control captured with a Total Station. Results indicate that a small increase in the number of ground points captured (between 0 and 5% of a 10 m radius plot) can be achieved through the use of a composite image network. In the case of manually identified ground points, this reduced the root mean square error (RMSE) of the terrain model by between 1 and 11 cm, with greater reductions seen in plots with high canopy cover. The ground filters trialled were not able to exploit the extra information in the point clouds, and inconsistent terrain RMSE results were obtained across the various plots and imaging network configurations. The use of a composite network also provided greater penetration into the canopy, which is likely to improve the representation of mid-canopy elements.
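The terrain-model assessment described above can be illustrated with a small sketch: interpolate a terrain surface from the recovered ground points and compute its RMSE at control-point locations. The terrain function, point counts, and inverse-distance interpolator below are all assumptions for illustration, not the study's actual workflow:

```python
import numpy as np

rng = np.random.default_rng(0)

def true_ground(x, y):
    # Smooth synthetic terrain standing in for the real plot surface.
    return 0.05 * x + 0.02 * y + 0.5 * np.sin(0.3 * x)

# Ground points recovered from the point cloud, with a few cm of noise.
gx, gy = rng.uniform(0, 20, 200), rng.uniform(0, 20, 200)
gz = true_ground(gx, gy) + rng.normal(0, 0.03, 200)

def idw(px, py, k=8, power=2.0):
    """Inverse-distance-weighted elevation at (px, py) from the ground points."""
    d = np.hypot(gx - px, gy - py)
    idx = np.argsort(d)[:k]
    w = 1.0 / np.maximum(d[idx], 1e-6) ** power
    return np.sum(w * gz[idx]) / np.sum(w)

# Total-station control points (treated as error-free in this sketch).
cx, cy = rng.uniform(2, 18, 30), rng.uniform(2, 18, 30)
cz = true_ground(cx, cy)

est = np.array([idw(x, y) for x, y in zip(cx, cy)])
rmse = np.sqrt(np.mean((est - cz) ** 2))
print(f"terrain RMSE: {rmse * 100:.1f} cm")
```

More ground points under the canopy shrink the interpolation distances, which is why the composite network's extra points reduced terrain RMSE most in high-cover plots.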


2019 ◽  
Vol 11 (1) ◽  
pp. 84 ◽  
Author(s):  
Alexander Graham ◽  
Nicholas Coops ◽  
Michael Wilcox ◽  
Andrew Plowright

Detailed vertical forest structure information can be remotely sensed by combining technologies of unmanned aerial systems (UAS) and digital aerial photogrammetry (DAP). A key limitation in the application of DAP methods, however, is the inability to produce accurate digital elevation models (DEM) in areas of dense vegetation. This study investigates the terrain modeling potential of UAS-DAP methods within a temperate conifer forest in British Columbia, Canada. UAS-acquired images were photogrammetrically processed to produce high-resolution DAP point clouds. To evaluate the terrain modeling ability of DAP, first, a sensitivity analysis was conducted to estimate optimal parameters of three ground-point classification algorithms designed for airborne laser scanning (ALS). Algorithms tested include progressive triangulated irregular network (TIN) densification (PTD), hierarchical robust interpolation (HRI) and the simple morphological filter (SMRF). Ground points classified from the ALS data served as ground truth against which the UAS-DAP-derived DEMs were compared. The proportions of area with root mean square error (RMSE) <1.5 m were 56.5%, 51.6% and 52.3% for the PTD, HRI and SMRF methods respectively. To assess the influence of terrain slope and canopy cover, error values of DAP-DEMs produced using optimal parameters were compared to stratified classes of canopy cover and slope generated from ALS point clouds. Results indicate that canopy cover was approximately three times more influential on RMSE than terrain slope.
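The stratified error analysis in the last step amounts to binning DEM errors by cover and slope class and computing RMSE per bin. A sketch with synthetic numbers (the error model, class edges, and the three-to-one sensitivity are assumptions, not the study's data):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000
cover = rng.uniform(0, 100, n)   # ALS-derived canopy cover (%)
slope = rng.uniform(0, 45, n)    # ALS-derived terrain slope (degrees)
# Hypothetical error model in which cover drives error more strongly than slope.
err = rng.normal(0.0, 0.2 + 0.03 * cover + 0.01 * slope)

def rmse_by_class(values, edges):
    """RMSE of DEM error within each stratification class."""
    cls = np.digitize(values, edges)
    return [np.sqrt(np.mean(err[cls == i] ** 2)) for i in range(len(edges) + 1)]

cover_rmse = rmse_by_class(cover, [25, 50, 75])   # four cover classes
slope_rmse = rmse_by_class(slope, [10, 20, 30])   # four slope classes
print("RMSE by cover class:", [f"{r:.2f}" for r in cover_rmse])
print("RMSE by slope class:", [f"{r:.2f}" for r in slope_rmse])
```

Comparing how steeply RMSE rises across the cover classes versus the slope classes gives the relative influence of the two factors.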


Author(s):  
I. S. G. Campos

Abstract. In this paper I present a new MAVLink command enabling oblique aerial surveys, along with its implementation in the major open-source flight stacks (PX4 and ArduPilot) and ground control station (QGroundControl). A key advantage of this approach is that it enables vehicles with a typical gimbaled camera to capture oblique photos in the same pass as nadir photos, without the need for heavier and more expensive alternatives that feature multiple cameras at fixed angles in a rigid mount and are thus unsuitable for lightweight platforms. It also allows flexibility in the configuration of the camera angles. The principle is quite simple: the command combines camera triggering with mount actuation in a synchronized cycle along the flight traverses through the region of interest. Oblique photos have also been shown to increase data accuracy and to help fill holes in point clouds and related outputs of surveys with vertical components. To provide evidence of its benefits, I compare the results of several missions, in simulated and field experiments, flown as nadir-only surveys versus oblique surveys with different camera configurations. In both cases, ground control and check points were used to evaluate the accuracy of the surveys. The field experiments show the vehicle had to fly 44% less with the oblique survey to cover the same area as the nadir survey, which could translate into an 80% gain in efficiency in coverage area per flight. Furthermore, this new command enhances the functionality of Unmanned Aerial Systems (UASs) without any additional hardware, so its adoption should be straightforward.
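The coverage figures quoted above follow from simple arithmetic: if the same area needs 44% less flight distance, coverage per unit of flight distance (and hence per battery charge) scales by the reciprocal of the remaining fraction.

```python
# Checking the abstract's coverage arithmetic: 44% less flight distance for
# the same area implies a ~79% (reported as "80%") gain in area per flight.
distance_reduction = 0.44
relative_distance = 1.0 - distance_reduction        # 0.56 of the nadir distance
efficiency_gain = 1.0 / relative_distance - 1.0     # ~0.79
print(f"coverage gain per flight: {efficiency_gain:.0%}")
```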


Author(s):  
S. Ostrowski ◽  
G. Jóźków ◽  
C. Toth ◽  
B. Vander Jagt

Unmanned Aerial Systems (UAS) allow for the collection of low-altitude aerial images, along with other geospatial information from a variety of companion sensors. The images can then be processed using sophisticated algorithms from the Computer Vision (CV) field, guided by the traditional and established procedures of photogrammetry. Based on highly overlapped images, new software packages developed specifically for UAS technology can easily create ground models, such as Point Clouds (PC), Digital Surface Models (DSM), orthoimages, etc. The goal of this study is to compare the performance of three different software packages, focusing on the accuracy of the 3D products they produce. Using a Nikon D800 camera installed on an octocopter UAS platform, images were collected during subsequent field tests conducted over the Olentangy River, north of the Ohio State University campus. Two areas around bike bridges on the Olentangy River Trail were selected because of the challenge the packages would have in creating accurate products; matching pixels over the river and dense canopy on the shore presents difficult scenarios to model. Ground Control Points (GCP) were gathered at each site to tie the models to a local coordinate system and help assess the absolute accuracy of each package. In addition, the models were also compared relative to each other using their PCs.
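A common way to perform the relative point-cloud comparison mentioned above is a nearest-neighbour cloud-to-cloud (C2C) distance computed with a spatial index. The clouds, offset, and noise levels below are synthetic stand-ins, not the study's actual outputs:

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(7)
# Hypothetical: two packages' point clouds of the same scene; cloud_b carries
# a small vertical offset plus per-point noise relative to cloud_a.
cloud_a = rng.uniform(0, 50, size=(2000, 3))
cloud_b = cloud_a + rng.normal(0, 0.02, size=cloud_a.shape)
cloud_b[:, 2] += 0.05

# For every point in A, distance to the closest point in B.
dist, _ = cKDTree(cloud_b).query(cloud_a)
print(f"median C2C distance: {np.median(dist):.3f} m")
```

The distribution of these distances (median, spread, spatial pattern) reveals systematic offsets between packages even where no GCPs are available, e.g. over water or under canopy.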


Author(s):  
R. C. Anderson ◽  
P. C. Shanks ◽  
L. A. Kritis ◽  
M. G. Trani

We describe several remote sensing research projects supported with small Unmanned Aerial Systems (sUAS) operated by the NGA Basic and Applied Research Office. These sUAS collections provide data supporting Small Business Innovative Research (SBIR), NGA University Research Initiative (NURI), and Cooperative Research And Development Agreements (CRADA) efforts in addition to in-house research. Some preliminary results related to 3D electro-optical point clouds are presented, and some research goals are discussed. Additional details related to the autonomous operational mode of both our multi-rotor and fixed-wing small Unmanned Aerial System (sUAS) platforms are presented.


2020 ◽  
Vol 8 (1) ◽  
pp. 57-74 ◽  
Author(s):  
Orrin Thomas ◽  
Christian Stallings ◽  
Benjamin Wilkinson

Structure from motion (SfM) and imagery-derived point clouds (IDPC) are excellent tools for collecting spatial data. However, reported accuracies from unmanned aerial systems (UAS) commonly fall short of their theoretical potential. The research presented here, using a DJI Inspire 2 with post-processed kinematic direct geopositioning, demonstrates that UAS mapping can be consistently accurate enough for use in place of, or in concert with, terrestrial methods (2 cm vertical root mean squared error). We further demonstrate that features that are missing or distorted in IDPC (e.g., roof edges, break lines, and above-ground utilities) can be collected from UAS-imagery stereo models with similar accuracy. Accuracy in the experiments was verified by comparison to data from a total station and terrestrial laser scanner (TLS). Use of the recommended hardware and stereo compilation reduced mapping costs by 40%–75% on three test projects.


2017 ◽  
Author(s):  
Francesco Avanzi ◽  
Alberto Bianchi ◽  
Alberto Cina ◽  
Carlo De Michele ◽  
Paolo Maschio ◽  
...  

Abstract. Photogrammetric surveys using Unmanned Aerial Systems (UAS) may represent an alternative to existing methods for measuring the distribution of snow, but additional efforts are still needed to establish this technique as a low-cost, yet precise tool. Importantly, existing works have mainly used sparse evaluation datasets that limit the insight into UAS performance at high spatial resolutions. Here, we compare a UAS-based photogrammetric map of snow depth with data acquired with a MultiStation and with manual probing over a sample plot. The relatively high density of manual data (135 points over 6700 m², i.e., 2 points per 100 m²) makes it possible to assess the performance of UAS in capturing the marked spatial variability of snow. The use of a MultiStation, which exploits a scanning principle, also allows the UAS data on snow to be compared with an instrument frequently used in high-resolution applications. Results show that the Root Mean Square Error (RMSE) between UAS and MultiStation data on snow is equal to 0.036 m when comparing the two point clouds. A large fraction of this difference may, however, be due to spurious differences between datasets caused by simultaneous snowmelt, as the RMSE on bare soil is equal to 0.02 m. When comparing UAS data with manual probing, the RMSE is equal to 0.31 m, whereas the median difference is equal to 0.12 m. The RMSE decreases to 0.17 m when areas of likely water accumulation in snow and ice layers are excluded. These results suggest that UAS represent a competitive choice among existing techniques for high-precision, high-resolution remote sensing of snow.
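The underlying depth-mapping principle is surface differencing: snow depth is the cell-wise difference between a snow-on surface model and a snow-free reference, validated at probe locations. A minimal sketch with synthetic grids (the elevations, depth distribution, and noise level are assumptions, not the study's data):

```python
import numpy as np

rng = np.random.default_rng(3)
shape = (100, 100)                          # 1 m grid cells over a sample plot
bare = rng.uniform(1200, 1205, shape)       # snow-free terrain elevations (m)
true_depth = np.clip(rng.normal(0.8, 0.3, shape), 0, None)
snow_on = bare + true_depth + rng.normal(0, 0.03, shape)  # noisy UAS surface

# Snow depth map: snow-on surface minus snow-free reference.
depth_map = snow_on - bare

# Manual probes at random cells (treated as error-free in this sketch).
rows, cols = rng.integers(0, 100, 25), rng.integers(0, 100, 25)
diff = depth_map[rows, cols] - true_depth[rows, cols]
rmse = np.sqrt(np.mean(diff ** 2))
print(f"RMSE vs probing: {rmse:.3f} m, median diff: {np.median(diff):.3f} m")
```

In the field, probe-versus-map disagreement also absorbs real physical effects (water accumulation, ice layers, probe penetration into soft ground), which is why excluding such areas lowered the reported RMSE.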

