Measuring Height Characteristics of Sagebrush (Artemisia sp.) Using Imagery Derived from Small Unmanned Aerial Systems (sUAS)

Drones ◽  
2020 ◽  
Vol 4 (1) ◽  
pp. 6 ◽  
Author(s):  
Ryan G. Howell ◽  
Ryan R. Jensen ◽  
Steven L. Petersen ◽  
Randy T. Larsen

In situ measurements of sagebrush have traditionally been expensive and time consuming. Recent improvements in small Unmanned Aerial Systems (sUAS) technology make it possible to quantify sagebrush morphology and community structure with high-resolution imagery on western rangelands, especially in sensitive habitat of the Greater sage-grouse (Centrocercus urophasianus). The emergence of photogrammetry algorithms that generate 3D point clouds from true color imagery can potentially increase the efficiency and accuracy of measuring shrub height in sage-grouse habitat. Our objective was to determine optimal parameters for measuring sagebrush height, including flight altitude, single- vs. double-pass flights, and continuous vs. paused photo capture. We acquired imagery using a DJI Mavic Pro 2 multi-rotor Unmanned Aerial Vehicle (UAV) equipped with an RGB camera, flown at 30.5, 45, 75, and 120 m, implementing single-pass and double-pass methods with continuous and paused flight for each photo method. We generated a Digital Surface Model (DSM) from which we derived plant height, and then performed an accuracy assessment using on-the-ground measurements taken at the time of flight. We found high correlation between field-measured heights and estimated heights, with a mean difference of approximately 10 cm (SE = 0.4 cm) and little variability in accuracy between flights with different altitudes and other parameters after statistical correction using linear regression. We conclude that higher-altitude flights using a single-pass method are optimal for measuring sagebrush height because of their lower demands on data storage and processing time.
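The statistical correction described above can be sketched as an ordinary least-squares adjustment of DSM-derived heights against field measurements. The function name and toy numbers below are illustrative, not taken from the study:

```python
import numpy as np

def correct_heights(field_cm, estimated_cm):
    # Fit field = a * estimated + b by ordinary least squares, then
    # return corrected estimates plus the mean bias before/after.
    field = np.asarray(field_cm, dtype=float)
    est = np.asarray(estimated_cm, dtype=float)
    a, b = np.polyfit(est, field, 1)      # slope, intercept
    corrected = a * est + b
    bias_before = float(np.mean(field - est))
    bias_after = float(np.mean(field - corrected))
    return corrected, bias_before, bias_after

# Toy data: DSM-derived heights that under-read field heights by ~10 cm
field = [52.0, 61.0, 70.0, 83.0, 95.0]
est = [42.0, 50.0, 61.0, 72.0, 86.0]
corrected, before, after = correct_heights(field, est)
```

Because the fit includes an intercept, the residuals of the corrected estimates average to zero, removing the systematic ~10 cm offset reported in the abstract.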

Author(s):  
S. Ostrowski ◽  
G. Jóźków ◽  
C. Toth ◽  
B. Vander Jagt

Unmanned Aerial Systems (UAS) allow for the collection of low-altitude aerial images, along with other geospatial information from a variety of companion sensors. The images can then be processed using sophisticated algorithms from the Computer Vision (CV) field, guided by the traditional and established procedures of photogrammetry. Based on highly overlapped images, new software packages developed specifically for UAS technology can easily create ground models such as Point Clouds (PC), Digital Surface Models (DSM), orthoimages, etc. The goal of this study is to compare the performance of three different software packages, focusing on the accuracy of the 3D products they produce. Using a Nikon D800 camera installed on an octocopter UAS platform, images were collected during subsequent field tests conducted over the Olentangy River, north of the Ohio State University campus. Two areas around bike bridges on the Olentangy River Trail were selected because of the challenge the packages would have in creating accurate products; matching pixels over the river and dense canopy on the shore present difficult scenarios to model. Ground Control Points (GCP) were gathered at each site to tie the models to a local coordinate system and help assess the absolute accuracy of each package. In addition, the models were also compared relative to each other using their PCs.
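A common way to compare point clouds relative to each other, as described above, is a cloud-to-cloud nearest-neighbour distance metric. The helper below is a minimal brute-force sketch (fine for small demos; real tools use spatial indexing), not the software the study evaluated:

```python
import numpy as np

def cloud_to_cloud_rmse(reference, test):
    # For each test point, find its nearest neighbour in the reference
    # cloud and return the RMSE of those distances -- a simple relative
    # accuracy measure between two photogrammetric point clouds.
    ref = np.asarray(reference, dtype=float)
    tst = np.asarray(test, dtype=float)
    # Pairwise distance matrix of shape (n_test, n_ref)
    d = np.linalg.norm(tst[:, None, :] - ref[None, :, :], axis=2)
    nearest = d.min(axis=1)
    return float(np.sqrt(np.mean(nearest ** 2)))

# Identical clouds differ by zero; a 5 cm vertical offset shows up directly
ref = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0]], dtype=float)
shifted = ref + [0.0, 0.0, 0.05]
rmse = cloud_to_cloud_rmse(ref, shifted)
```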


Author(s):  
T. Guo ◽  
A. Capra ◽  
M. Troyer ◽  
A. Gruen ◽  
A. J. Brooks ◽  
...  

Recent advances in automation of photogrammetric 3D modelling software packages have stimulated interest in reconstructing highly accurate 3D object geometry in unconventional environments such as underwater utilizing simple and low-cost camera systems. The accuracy of underwater 3D modelling is affected by more parameters than in single-medium cases. This study is part of a larger project on 3D measurements of temporal change of coral cover in tropical waters. It compares the accuracies of 3D point clouds generated by using images acquired from a system camera mounted in an underwater housing and the popular GoPro cameras, respectively. A precisely measured calibration frame was placed in the target scene in order to provide accurate control information and also quantify the errors of the modelling procedure. In addition, several objects (cinder blocks) with various shapes were arranged in air and underwater, and 3D point clouds were generated by automated image matching. These were further used to examine the relative accuracy of the point cloud generation by comparing the point clouds of the individual objects with the objects measured by the system camera in air (the best possible values). Given a working distance of about 1.5 m, the GoPro camera can achieve a relative accuracy of 1.3 mm in air and 2.0 mm in water. The system camera achieved an accuracy of 1.8 mm in water, which meets our requirements for coral measurement in this system.


Sensors ◽  
2018 ◽  
Vol 18 (7) ◽  
pp. 2245 ◽  
Author(s):  
Karel Kuželka ◽  
Peter Surový

We evaluated two unmanned aerial systems (UASs), namely the DJI Phantom 4 Pro and DJI Mavic Pro, for 3D forest structure mapping of the forest stand interior with the use of close-range photogrammetry techniques. Assisted flights were performed within two research plots established in mature pure Norway spruce (Picea abies (L.) H. Karst.) and European beech (Fagus sylvatica L.) forest stands. Geotagged images were used to produce georeferenced 3D point clouds representing tree stem surfaces. With a flight height of 8 m above the ground, the stems were precisely modeled up to a height of 10 m, which represents a considerably larger portion of the stem when compared with terrestrial close-range photogrammetry. Accuracy of the point clouds was evaluated by comparing field-measured tree diameters at breast height (DBH) with diameter estimates derived from the point cloud using four different fitting methods, including the bounding circle, convex hull, least squares circle, and least squares ellipse methods. The accuracy of DBH estimation varied with the UAS model and the diameter fitting method utilized. With the Phantom 4 Pro and the least squares ellipse method to estimate diameter, the mean error of diameter estimates was −1.17 cm (−3.14%) and 0.27 cm (0.69%) for spruce and beech stands, respectively.
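The least squares circle method named above can be illustrated with a generic algebraic (Kåsa) circle fit to a stem cross-section extracted from the point cloud at breast height. This is a sketch of the general technique, not the authors' implementation:

```python
import numpy as np

def fit_circle(points):
    # Algebraic (Kasa) least-squares circle fit: solve the linear system
    # 2*cx*x + 2*cy*y + c = x^2 + y^2, where c = r^2 - cx^2 - cy^2.
    # Returns the circle centre and diameter (the DBH estimate).
    pts = np.asarray(points, dtype=float)
    x, y = pts[:, 0], pts[:, 1]
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    rhs = x ** 2 + y ** 2
    (cx, cy, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    r = np.sqrt(c + cx ** 2 + cy ** 2)
    return cx, cy, 2 * r

# Noise-free points on a 30 cm stem centred at (5, 3) recover the diameter
theta = np.linspace(0, 2 * np.pi, 40, endpoint=False)
pts = np.column_stack([0.15 * np.cos(theta) + 5.0,
                       0.15 * np.sin(theta) + 3.0])
cx, cy, dbh = fit_circle(pts)
```

With real stem slices the points cover only part of the circumference and carry matching noise, which is why the abstract compares this fit against convex hull, bounding circle, and ellipse alternatives.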


2019 ◽  
Vol 11 (10) ◽  
pp. 1204 ◽  
Author(s):  
Yue Pan ◽  
Yiqing Dong ◽  
Dalei Wang ◽  
Airong Chen ◽  
Zhen Ye

Three-dimensional (3D) digital technology is essential to the maintenance and monitoring of cultural heritage sites. In the field of bridge engineering, 3D models generated from point clouds of existing bridges are drawing increasing attention. Currently, the widespread use of the unmanned aerial vehicle (UAV) provides a practical solution for generating 3D point clouds as well as models, which can drastically reduce the manual effort and cost involved. In this study, we present a semi-automated framework for generating structural surface models of heritage bridges. To be specific, we propose to tackle this challenge via a novel top-down method for segmenting main bridge components, combined with rule-based classification, to produce labeled 3D models from UAV photogrammetric point clouds. The point clouds of the heritage bridge are generated from the captured UAV images through the structure-from-motion workflow. A segmentation method is developed based on the supervoxel structure and global graph optimization, which can effectively separate bridge components based on geometric features. Then, recognition by the use of a classification tree and bridge geometry is utilized to recognize different structural elements from the obtained segments. Finally, surface modeling is conducted to generate surface models of the recognized elements. Experiments using two bridges in China demonstrate the potential of the presented structural model reconstruction method using UAV photogrammetry and point cloud processing in 3D digital documentation of heritage bridges. By using given markers, the reconstruction error of point clouds can be as small as 0.4%. Moreover, the precision and recall of segmentation results using testing data are better than 0.8, and a recognition accuracy better than 0.8 is achieved.
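The precision and recall figures quoted above are standard point-wise metrics: of the points a segment claims, how many truly belong to the component (precision), and of the component's true points, how many the segment captured (recall). A minimal sketch over hypothetical point-index sets:

```python
def precision_recall(predicted, truth):
    # predicted / truth are collections of point indices assigned to one
    # bridge component by the segmentation and by ground truth, respectively.
    pred, true = set(predicted), set(truth)
    tp = len(pred & true)                       # true positives
    precision = tp / len(pred) if pred else 0.0
    recall = tp / len(true) if true else 0.0
    return precision, recall

# 4 of 5 predicted points are correct, and 4 of 5 true points were found
p, r = precision_recall({1, 2, 3, 4, 5}, {2, 3, 4, 5, 6})
```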


Author(s):  
J. Li-Chee-Ming ◽  
C. Armenakis

This paper presents the ongoing development of a small unmanned aerial mapping system (sUAMS) that in the future will track its trajectory and perform 3D mapping in near-real time. As both mapping and tracking algorithms require powerful computational capabilities and large data storage facilities, we propose to use the RoboEarth Cloud Engine (RCE) to offload heavy computation and store data in secure computing environments in the cloud. While the RCE's capabilities have been demonstrated with terrestrial robots in indoor environments, this paper explores the feasibility of using the RCE for mapping and tracking applications in outdoor environments by small UAMS.

The experiments presented in this work assess the data processing strategies and evaluate the attainable tracking and mapping accuracies using the data obtained by the sUAMS. Testing was performed with an Aeryon Scout quadcopter. It flew over York University, up to approximately 40 m above the ground. The quadcopter was equipped with a single-frequency GPS receiver providing positioning to about 3 m accuracy, an AHRS (Attitude and Heading Reference System) estimating the attitude to about 3 degrees, and an FPV (First Person Viewing) camera. Video images captured from the onboard camera were processed using VisualSFM and SURE, which are being restructured as an Application-as-a-Service via the RCE. The 3D virtual building model of York University was used as a known environment to georeference the point cloud generated from the sUAMS' sensor data. The estimated position and orientation parameters of the video camera showed increased accuracy compared with the sUAMS' autopilot solution derived from the onboard GPS and AHRS. The paper presents the proposed approach and the results, along with their accuracies.


2020 ◽  
Vol 2020 ◽  
pp. 1-10
Author(s):  
Monica Herrero-Huerta ◽  
Alexander Bucksch ◽  
Eetu Puttonen ◽  
Katy M. Rainey

Cost-effective phenotyping methods are urgently needed to advance crop genetics in order to meet the food, fuel, and fiber demands of the coming decades. In particular, characterizing plot-level traits in fields is of special interest. Recent developments in high-resolution imaging sensors for UAS (unmanned aerial systems) focused on collecting detailed phenotypic measurements are a potential solution. We introduce canopy roughness as a new plot-level plant trait. We tested its usability for estimating biomass in soybean from optical data collected by UAS. We validated canopy roughness on a panel of 108 soybean [Glycine max (L.) Merr.] recombinant inbred lines in a multienvironment trial during the R2 growth stage. A senseFly eBee UAS platform obtained aerial images with a senseFly S.O.D.A. compact digital camera. Using a structure from motion (SfM) technique, we reconstructed 3D point clouds of the soybean experiment. A novel pipeline for feature extraction was developed to compute canopy roughness from the point clouds. We used regression analysis to correlate canopy roughness with field-measured aboveground biomass (AGB), with leave-one-out cross-validation. Overall, our models achieved a coefficient of determination (R2) greater than 0.5 in all trials. Moreover, we found that canopy roughness can discern AGB variations among different genotypes. Our test trials demonstrate the potential of canopy roughness as a reliable trait for high-throughput phenotyping to estimate AGB. As such, canopy roughness provides practical information to breeders in order to select phenotypes on the basis of UAS data.
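The leave-one-out cross-validated R² used to judge the roughness-to-AGB regression can be sketched as follows. The linear model form and the toy roughness/biomass numbers are assumptions for illustration, not the study's pipeline or data:

```python
import numpy as np

def loocv_r2(x, y):
    # Leave-one-out cross-validation for a simple linear model y ~ a*x + b:
    # refit on all-but-one sample, predict the held-out sample, and score
    # the pooled predictions with a coefficient of determination.
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    preds = np.empty_like(y)
    for i in range(len(y)):
        mask = np.arange(len(y)) != i
        a, b = np.polyfit(x[mask], y[mask], 1)
        preds[i] = a * x[i] + b
    ss_res = np.sum((y - preds) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

# Hypothetical plot-level canopy roughness vs. aboveground biomass (g/plot)
rough = np.array([0.8, 1.1, 1.4, 1.9, 2.3, 2.8, 3.1, 3.6])
agb = np.array([210, 260, 320, 400, 470, 560, 610, 700])
r2 = loocv_r2(rough, agb)
```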


2018 ◽  
Vol 10 (9) ◽  
pp. 1345 ◽  
Author(s):  
Kotaro Iizuka ◽  
Kazuo Watanabe ◽  
Tsuyoshi Kato ◽  
Niken Putri ◽  
Sisva Silsigia ◽  
...  

The high demand for unmanned aerial systems (UASs) reflects the notable impact that these systems have had on the remote sensing field in recent years. Such systems can be used to discover new findings and develop strategic plans in related scientific fields. In this work, a case study is performed to describe a novel approach that uses a UAS with two different sensors and assesses the possibility of monitoring peatland in a small area of a plantation forest in West Kalimantan, Indonesia. First, a multicopter drone with an onboard camera was used to collect aerial images of the study area. The structure from motion (SfM) method was implemented to generate a mosaic image. A digital surface model (DSM) and digital terrain model (DTM) were used to compute a canopy height model (CHM) and explore the vegetation height. Second, a multicopter drone combined with a thermal infrared camera (Zenmuse-XT) was utilized to collect both spatial and temporal thermal data from the study area. The temperature is an important factor that controls the oxidation of tropical peats by microorganisms, root respiration, the soil water content, and so forth. In turn, these processes can alter the greenhouse gas (GHG) flux in the area. Using principal component analysis (PCA), the thermal data were processed to visualize the thermal characteristics of the study site, and the PCA successfully extracted different feature areas. The trends in the thermal information clearly show the differences among land cover types, and the heating and cooling of the peat varies throughout the study area. This study shows the potential for using UAS thermal remote sensing to interpret the characteristics of thermal trends in peatland environments, and the proposed method can be used to guide strategical approaches for monitoring the peatlands in Indonesia.
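The canopy height model mentioned above is the per-cell difference between the DSM (top of canopy) and DTM (bare terrain) rasters. A minimal sketch; the array names and nodata convention are assumptions, not the study's processing chain:

```python
import numpy as np

def canopy_height_model(dsm, dtm, nodata=-9999.0):
    # CHM = DSM - DTM. Small negative differences (matching noise) are
    # clipped to zero, and nodata cells in either raster are propagated.
    dsm = np.asarray(dsm, dtype=float)
    dtm = np.asarray(dtm, dtype=float)
    chm = np.clip(dsm - dtm, 0.0, None)
    return np.where((dsm == nodata) | (dtm == nodata), nodata, chm)

# 2x2 toy rasters: 2 m and 5 m of vegetation, one noisy cell, one gap
dsm = np.array([[12.0, 15.5], [10.2, -9999.0]])
dtm = np.array([[10.0, 10.5], [10.4, 10.0]])
chm = canopy_height_model(dsm, dtm)
```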


Author(s):  
F. Alidoost ◽  
H. Arefi

Nowadays, Unmanned Aerial System (UAS)-based photogrammetry offers an affordable, fast and effective approach to real-time acquisition of high-resolution geospatial information and automatic 3D modelling of objects for numerous applications such as topographic mapping, 3D city modelling, orthophoto generation, and cultural heritage preservation. In this paper, the capability of four different state-of-the-art software packages, namely 3DSurvey, Agisoft Photoscan, Pix4Dmapper Pro and SURE, is examined to generate a high-density point cloud as well as a Digital Surface Model (DSM) over a historical site. The main steps of this study are image acquisition, point cloud generation, and accuracy assessment. The overlapping images are first captured using a quadcopter and are then processed by the different software packages to generate point clouds and DSMs. In order to evaluate the accuracy and quality of the point clouds and DSMs, both visual and geometric assessments are carried out and the comparison results are reported.


Author(s):  
G. Stavropoulou ◽  
G. Tzovla ◽  
A. Georgopoulos

Over the past decade, large-scale photogrammetric products have been extensively used for the geometric documentation of cultural heritage monuments, as they combine metric information with the qualities of an image document. Additionally, the rising technology of terrestrial laser scanning has enabled the easier and faster production of accurate digital surface models (DSM), which have in turn contributed to the documentation of heavily textured monuments. However, due to the required accuracy of control points, the photogrammetric methods are always applied in combination with surveying measurements and hence are dependent on them. Along this line of thought, this paper explores the possibility of limiting the surveying measurements and the field work necessary for the production of large-scale photogrammetric products and proposes an alternative method in which the necessary control points, instead of being measured with surveying procedures, are chosen from a dense and accurate point cloud. Using this point cloud also as a surface model, the only field work necessary is the scanning of the object and image acquisition, which need not be subject to strict planning. To evaluate the proposed method, an algorithm and a complementary interface were produced that allow the parallel manipulation of 3D point clouds and images and through which single-image procedures take place. The paper concludes by presenting the results of a case study on the ancient temple of Hephaestus in Athens and by providing a set of guidelines for effectively implementing the method.

