Analysis of Point Cloud Generation from UAS Images

Author(s):  
S. Ostrowski ◽  
G. Jóźków ◽  
C. Toth ◽  
B. Vander Jagt

Unmanned Aerial Systems (UAS) allow for the collection of low altitude aerial images, along with other geospatial information from a variety of companion sensors. The images can then be processed using sophisticated algorithms from the Computer Vision (CV) field, guided by traditional and established procedures from photogrammetry. Based on highly overlapped images, new software packages developed specifically for UAS technology can easily create ground models, such as Point Clouds (PC), Digital Surface Models (DSM), orthoimages, etc. The goal of this study is to compare the performance of three different software packages, focusing on the accuracy of the 3D products they produce. Using a Nikon D800 camera installed on an octocopter UAS platform, images were collected during subsequent field tests conducted over the Olentangy River, north of the Ohio State University campus. Two areas around bike bridges on the Olentangy River Trail were selected because of the challenge the packages would face in creating accurate products; matching pixels over the river and dense canopy on the shore presents difficult scenarios to model. Ground Control Points (GCP) were gathered at each site to tie the models to a local coordinate system and to help assess the absolute accuracy of each package. In addition, the models were also compared relative to each other using their PCs.
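The absolute-accuracy assessment against GCPs described above amounts to comparing model-derived 3D points with surveyed coordinates. A minimal sketch, with fabricated coordinates purely for illustration:

```python
import numpy as np

# Hypothetical surveyed GCP coordinates (m) and the corresponding
# points extracted from a photogrammetric point cloud. Values are
# illustrative, not from the study.
gcp = np.array([[10.0, 20.0, 5.0],
                [30.0, 25.0, 5.5],
                [15.0, 40.0, 6.0]])
model = np.array([[10.1, 19.9, 5.2],
                  [29.8, 25.1, 5.4],
                  [15.2, 40.1, 6.3]])

errors = np.linalg.norm(model - gcp, axis=1)  # per-point 3D error
rmse = np.sqrt(np.mean(errors ** 2))          # overall 3D RMSE
print(f"3D RMSE: {rmse:.3f} m")
```

The same residuals can be split per axis when horizontal and vertical accuracy are reported separately.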

Drones ◽  
2020 ◽  
Vol 4 (1) ◽  
pp. 6 ◽  
Author(s):  
Ryan G. Howell ◽  
Ryan R. Jensen ◽  
Steven L. Petersen ◽  
Randy T. Larsen

In situ measurements of sagebrush have traditionally been expensive and time consuming. Currently, improvements in small Unmanned Aerial Systems (sUAS) technology can be used to quantify sagebrush morphology and community structure with high resolution imagery on western rangelands, especially in sensitive habitat of the Greater sage-grouse (Centrocercus urophasianus). The emergence of photogrammetry algorithms to generate 3D point clouds from true color imagery can potentially increase the efficiency and accuracy of measuring shrub height in sage-grouse habitat. Our objective was to determine optimal parameters for measuring sagebrush height, including flight altitude, single- vs. double-pass, and continuous vs. paused flight. We acquired imagery using a DJI Mavic Pro 2 multi-rotor Unmanned Aerial Vehicle (UAV) equipped with an RGB camera, flown at 30.5, 45, 75, and 120 m, implementing single-pass and double-pass methods with continuous and paused flight for each method. We generated a Digital Surface Model (DSM) from which we derived plant height, and then performed an accuracy assessment using ground measurements taken at the time of flight. We found high correlation between field-measured heights and estimated heights, with a mean difference of approximately 10 cm (SE = 0.4 cm) and little variability in accuracy between flights at different altitudes and with other parameters after statistical correction using linear regression. We conclude that higher altitude flights using a single-pass method are optimal to measure sagebrush height due to lower requirements in data storage and processing time.
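The regression-based statistical correction mentioned above can be sketched as a simple linear fit of field heights against UAV-estimated heights; the values below are fabricated for illustration, not from the study:

```python
import numpy as np

# Hypothetical paired samples (m): UAV-estimated vs. field-measured
# sagebrush heights. Values are illustrative only.
uav_est = np.array([0.52, 0.68, 0.80, 0.95, 1.10, 1.30])
field = np.array([0.45, 0.60, 0.71, 0.83, 1.00, 1.18])

# Fit a linear correction field ~ a * uav_est + b and apply it.
a, b = np.polyfit(uav_est, field, 1)
corrected = a * uav_est + b

# Mean absolute difference before and after the correction.
before = np.mean(np.abs(uav_est - field))
after = np.mean(np.abs(corrected - field))
print(f"mean abs. difference before: {before:.3f} m, after: {after:.3f} m")
```

In practice the fit would be estimated on a calibration subset and applied to the remaining heights.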


Author(s):  
F. Alidoost ◽  
H. Arefi

Nowadays, Unmanned Aerial System (UAS)-based photogrammetry offers an affordable, fast and effective approach to real-time acquisition of high resolution geospatial information and automatic 3D modelling of objects for numerous applications such as topographic mapping, 3D city modelling, orthophoto generation, and cultural heritage preservation. In this paper, the capability of four different state-of-the-art software packages, namely 3DSurvey, Agisoft Photoscan, Pix4Dmapper Pro and SURE, is examined to generate a high density point cloud as well as a Digital Surface Model (DSM) over a historical site. The main steps of this study are image acquisition, point cloud generation, and accuracy assessment. The overlapping images are first captured using a quadcopter and then processed by the different software packages to generate point clouds and DSMs. In order to evaluate the accuracy and quality of the point clouds and DSMs, both visual and geometric assessments are carried out and the comparison results are reported.


Sensors ◽  
2021 ◽  
Vol 21 (8) ◽  
pp. 2834
Author(s):  
Billur Kazaz ◽  
Subhadipto Poddar ◽  
Saeed Arabi ◽  
Michael A. Perez ◽  
Anuj Sharma ◽  
...  

Construction activities typically create large amounts of ground disturbance, which can lead to increased rates of soil erosion. Construction stormwater practices are used on active jobsites to protect downstream waterbodies from offsite sediment transport. Federal and state regulations require routine pollution prevention inspections to ensure that temporary stormwater practices are in place and performing as intended. This study addresses the existing challenges and limitations in construction stormwater inspections and presents a unique approach for performing unmanned aerial system (UAS)-based inspections. Deep learning-based object detection principles were applied to identify and locate practices installed on active construction sites. The system integrates a post-processing stage that clusters the detection results. The developed framework consists of data preparation with aerial inspections, model training, validation of the model, and testing for accuracy. The developed model was created from 800 aerial images and was used to detect four different types of construction stormwater practices, achieving 100% mean average precision (mAP) with minimal false positive detections. Results indicate that object detection could be implemented on UAS-acquired imagery as a novel approach to construction stormwater inspections and provide accurate results for site plan comparisons by rapidly detecting the quantity and location of field-installed stormwater practices.
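The post-processing stage that clusters detection results can be sketched as a greedy, distance-based grouping of detection centers; the coordinates and threshold below are illustrative assumptions, not the paper's parameters:

```python
# Greedy distance-based clustering of detection centers, merging
# repeated detections of the same stormwater practice. Coordinates
# are in arbitrary image units; max_dist is an assumed threshold.
def cluster(points, max_dist=5.0):
    clusters = []
    for p in points:
        for c in clusters:
            cx = sum(q[0] for q in c) / len(c)  # current cluster centroid
            cy = sum(q[1] for q in c) / len(c)
            if ((p[0] - cx) ** 2 + (p[1] - cy) ** 2) ** 0.5 <= max_dist:
                c.append(p)
                break
        else:
            clusters.append([p])
    return clusters

detections = [(10, 10), (11, 12), (50, 50), (51, 49), (100, 5)]
merged = cluster(detections)
print(len(merged))  # → 3
```

Each surviving cluster then stands for one field-installed practice when counting quantity and location.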


Aerospace ◽  
2020 ◽  
Vol 7 (11) ◽  
pp. 158
Author(s):  
Andrew Weinert

As unmanned aerial systems (UASs) increasingly integrate into the US national airspace system, there is a growing need to characterize how commercial and recreational UASs may encounter each other. To inform the development and evaluation of safety critical technologies, we demonstrate a methodology to analytically calculate all potential relative geometries between different UAS operations performing inspection missions. This method is based on a previously demonstrated technique that leverages open source geospatial information to generate representative unmanned aircraft trajectories. Using open source data and parallel processing techniques, we performed trillions of calculations to estimate the relative horizontal distance between geospatial points across sixteen locations.
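The core pairwise calculation, the horizontal distance between two geospatial points, is commonly computed with the haversine formula; a minimal sketch with arbitrary example coordinates (not from the study):

```python
import math

# Great-circle (haversine) distance between two latitude/longitude
# points, in meters. Coordinates below are arbitrary examples.
def haversine_m(lat1, lon1, lat2, lon2):
    r = 6371000.0  # mean Earth radius, m
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

d = haversine_m(40.000, -83.000, 40.001, -83.000)
print(f"{d:.1f} m")  # roughly 111 m per 0.001 degree of latitude
```

At the scale of trillions of pairs, this per-pair function would be vectorized and distributed across workers rather than called in a loop.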


Author(s):  
Leena Matikainen ◽  
Juha Hyyppä ◽  
Paula Litkey

During the last 20 years, airborne laser scanning (ALS), often combined with multispectral information from aerial images, has shown its high feasibility for automated mapping processes. Recently, the first multispectral airborne laser scanners have been launched, and multispectral information is for the first time directly available for 3D ALS point clouds. This article discusses the potential of this new single-sensor technology in map updating, especially in automated object detection and change detection. For our study, Optech Titan multispectral ALS data over a suburban area in Finland were acquired. Results from a random forests analysis suggest that the multispectral intensity information is useful for land cover classification, including for ground surface objects and classes such as roads. An out-of-bag estimate for classification error was about 3% for separating the classes asphalt, gravel, rocky areas and low vegetation from each other. For buildings and trees, it was under 1%. According to feature importance analyses, multispectral features based on several channels were more useful than those based on one channel. Automatic change detection utilizing the new multispectral ALS data, an old digital surface model (DSM) and old building vectors was also demonstrated. Overall, our first analyses suggest that the new data are very promising for further increasing the automation level in mapping. The multispectral ALS technology is independent of external illumination conditions, and intensity images produced from the data do not include shadows. These are significant advantages when the development of automated classification and change detection procedures is considered.


2019 ◽  
Vol 7 (1) ◽  
pp. 1-20
Author(s):  
Fotis Giagkas ◽  
Petros Patias ◽  
Charalampos Georgiadis

The purpose of this study is the photogrammetric survey of a forested area using unmanned aerial vehicles (UAV), and the estimation of the digital terrain model (DTM) of the area, based on the photogrammetrically produced digital surface model (DSM). Furthermore, through the classification of the height difference between the DSM and the DTM, a vegetation height model is estimated and a vegetation type map is produced. Finally, the generated DTM was used in a hydrological analysis study to determine its suitability compared to the usage of the DSM. The selected study area was the forest of Seih-Sou (Thessaloniki). The DTM extraction methodology applies classification and filtering of point clouds, and aims to produce a surface model including only terrain points (DTM). The method yielded a DTM that functioned satisfactorily as a basis for the hydrological analysis. Also, by classifying the DSM–DTM difference, a vegetation height model was generated. For the photogrammetric survey, 495 aerial images were used, taken by a UAV from a height of ∼200 m. A total of 44 ground control points were measured with an accuracy of 5 cm. The accuracy of the aerial triangulation was approximately 13 cm. The produced dense point cloud contained 146,593,725 points.
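The vegetation height model described above is the raster difference DSM − DTM, binned into height classes. A minimal sketch with toy rasters and assumed class thresholds (the thresholds are not the paper's):

```python
import numpy as np

# Toy 4x4 elevation rasters (m): DSM (surface incl. canopy) and DTM
# (bare earth). Values are fabricated for illustration.
dsm = np.array([[102.0, 103.5, 101.2, 100.8],
                [104.1, 106.0, 102.3, 100.9],
                [101.5, 101.7, 101.0, 100.5],
                [100.4, 100.6, 100.3, 100.2]])
dtm = np.array([[100.0, 100.2, 100.1, 100.0],
                [100.3, 100.5, 100.2, 100.1],
                [100.1, 100.2, 100.0, 100.0],
                [100.0, 100.1, 100.0, 100.0]])

chm = dsm - dtm  # vegetation height model (canopy height)

# Bin heights into illustrative vegetation classes:
# 0 = ground, 1 = low vegetation, 2 = shrub, 3 = tree.
classes = np.digitize(chm, bins=[0.5, 2.0, 5.0])
print(classes)
```

A real workflow would apply the same subtraction and binning to the full co-registered photogrammetric rasters.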


2020 ◽  
Vol 12 (18) ◽  
pp. 3030
Author(s):  
Ram Avtar ◽  
Stanley Anak Suab ◽  
Mohd Shahrizan Syukur ◽  
Alexius Korom ◽  
Deha Agus Umarhadi ◽  
...  

The information on biophysical parameters—such as height, crown area, and vegetation indices such as the normalized difference vegetation index (NDVI) and normalized difference red edge index (NDRE)—is useful to monitor health conditions and the growth of oil palm trees in precision agriculture practices. The use of multispectral sensors mounted on unmanned aerial vehicles (UAV) provides high spatio-temporal resolution data to study plant health. However, the influence of UAV altitude when extracting biophysical parameters of oil palm from a multispectral sensor has not yet been well explored. Therefore, this study utilized the MicaSense RedEdge sensor mounted on a DJI Phantom–4 UAV platform for aerial photogrammetry. Three different close-range multispectral aerial image sets were acquired at flight altitudes of 20 m, 60 m, and 80 m above ground level (AGL) over a young oil palm plantation area in Malaysia. The images were processed using the structure from motion (SfM) technique in Pix4DMapper software, producing multispectral orthomosaic aerial images, a digital surface model (DSM), and point clouds. Meanwhile, canopy height models (CHM) were generated by subtracting the digital elevation model (DEM) from the DSM. Oil palm tree heights and crown projected area (CPA) were extracted from the CHM and the orthomosaic. NDVI and NDRE were calculated using the red, red-edge, and near-infrared spectral bands of the orthomosaic data. The accuracy of the extracted height and CPA was evaluated by comparing the estimates from each UAV flight altitude with ground-measured CPA and height. Correlations, root mean square deviation (RMSD), and central tendency were used to compare UAV-extracted biophysical parameters with ground data. Based on our results, flying at an altitude of 60 m is the optimal flight altitude for estimating biophysical parameters, followed by the 80 m altitude. The 20 m UAV altitude showed a tendency to overestimate the biophysical parameters of young oil palm and was the least consistent when extracting parameters. The methodology and results are a step toward precision agriculture in the oil palm plantation area.
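The two indices used above follow standard band-ratio definitions: NDVI = (NIR − Red)/(NIR + Red) and NDRE = (NIR − RedEdge)/(NIR + RedEdge). A minimal sketch with synthetic reflectance values (not from the MicaSense data):

```python
import numpy as np

# Synthetic per-pixel reflectance samples (fractions) for three pixels.
red = np.array([0.08, 0.10, 0.20])
red_edge = np.array([0.25, 0.28, 0.30])
nir = np.array([0.55, 0.60, 0.40])

# Standard normalized-difference indices; both lie in [-1, 1].
ndvi = (nir - red) / (nir + red)
ndre = (nir - red_edge) / (nir + red_edge)
print(np.round(ndvi, 3), np.round(ndre, 3))
```

On real orthomosaics the same arithmetic is applied band-wise over whole raster arrays.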


Sensors ◽  
2020 ◽  
Vol 20 (21) ◽  
pp. 6051
Author(s):  
Piyush Garg ◽  
Roya Nasimi ◽  
Ali Ozdagli ◽  
Su Zhang ◽  
David Dennis Lee Mascarenas ◽  
...  

Measurement of bridge displacements is important for ensuring the safe operation of railway bridges. Traditionally, contact sensors such as Linear Variable Differential Transformers (LVDT) and accelerometers have been used to measure the displacement of railway bridges. However, these sensors require significant effort in installation and maintenance. Therefore, railroad management agencies are interested in new means of measuring bridge displacements. This research focuses on mounting a Laser Doppler Vibrometer (LDV) on an Unmanned Aerial System (UAS) to enable contact-free measurement of the transverse dynamic displacement of railroad bridges. Researchers conducted three field tests by flying the Unmanned Aerial System Laser Doppler Vibrometer (UAS-LDV) 1.5 m above the ground and measured the displacement of a moving target at various distances. The accuracy of the UAS-LDV measurements was compared to LVDT measurements. The results of the three field tests showed that the proposed system could measure non-contact, reference-free dynamic displacement with average peak and root mean square (RMS) errors over the three experiments of 10% and 8%, respectively, compared to the LVDT. Such errors are acceptable for field measurements in railroads, since before a new monitoring approach is implemented, the main interest is demonstrating consistently similar performance across different flights, as the three tests show. This study also identified barriers to industrial adoption of this technology and proposed operational development practices for both technically sound and cost-effective implementation.
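The peak and RMS error metrics used to compare the UAS-LDV against the LVDT reference can be sketched as follows; the displacement traces are synthetic stand-ins, not field data:

```python
import numpy as np

# Synthetic displacement traces (mm): "lvdt" plays the reference
# sensor, "ldv" a measurement with a small gain and offset error.
t = np.linspace(0, 2, 200)
lvdt = 1.5 * np.sin(2 * np.pi * 2 * t)  # 2 Hz reference displacement
ldv = lvdt * 1.05 + 0.02                # simulated measurement error

# Percent error of the peak value and of the RMS of the signal.
peak_err = abs(ldv.max() - lvdt.max()) / abs(lvdt.max()) * 100
rms_err = np.sqrt(np.mean((ldv - lvdt) ** 2)) / np.sqrt(np.mean(lvdt ** 2)) * 100
print(f"peak error: {peak_err:.1f}%, RMS error: {rms_err:.1f}%")
```

With real data, the two traces would first be time-aligned before the residuals are formed.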


AI ◽  
2020 ◽  
Vol 1 (2) ◽  
pp. 166-179 ◽  
Author(s):  
Ziyang Tang ◽  
Xiang Liu ◽  
Hanlin Chen ◽  
Joseph Hupy ◽  
Baijian Yang

Unmanned Aerial Systems, hereafter referred to as UAS, are of great use in hazard events such as wildfires due to their ability to provide high-resolution video imagery over areas deemed too dangerous for manned aircraft and ground crews. This aerial perspective allows for the identification of ground-based hazards such as spot fires and fire lines, and for communicating this information to firefighting crews. Current technology relies on visual interpretation of UAS imagery, with little to no computer-assisted automatic detection. With the help of big labeled data and the significant increase in computing power, deep learning has seen great success in detecting objects with fixed patterns, such as people and vehicles. However, little has been done for objects with amorphous and irregular shapes, such as spot fires. Additional challenges arise when data are collected via UAS as high-resolution aerial images or videos; a viable solution must provide reasonable accuracy with low delays. In this paper, we examined 4K (3840 × 2160) videos collected by UAS from a controlled burn and created a set of labeled video sets to be shared for public use. We introduce a coarse-to-fine framework to auto-detect wildfires that are sparse, small, and irregularly shaped. The coarse detector adaptively selects the sub-regions that are likely to contain the objects of interest, while the fine detector passes only the details of those sub-regions, rather than the entire 4K region, for further scrutiny. The proposed two-phase learning therefore greatly reduces time overhead and is capable of maintaining high accuracy. Compared against the real-time one-stage object detection backbone YoloV3, the proposed method improved the mean average precision (mAP) from 0.29 to 0.67, with an average inference speed of 7.44 frames per second. Limitations and future work are discussed with regard to the design and the experimental results.
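The coarse-to-fine idea can be sketched as tiling a 4K frame, scoring each tile cheaply, and reserving expensive detection for candidate tiles only. The intensity test below stands in for the paper's coarse network, and the frame is synthetic:

```python
import numpy as np

# Synthetic 4K frame with one small bright region, e.g. a spot fire.
H, W, tile = 2160, 3840, 240
frame = np.zeros((H, W), dtype=np.float32)
frame[100:140, 200:260] = 1.0

# Coarse stage: a cheap per-tile score selects candidate sub-regions.
candidates = []
for ty in range(0, H, tile):
    for tx in range(0, W, tile):
        patch = frame[ty:ty + tile, tx:tx + tile]
        if patch.mean() > 1e-4:  # stand-in for the coarse detector
            candidates.append((ty, tx))

# The fine stage would run a full detector on each candidate tile only.
print(f"{len(candidates)} of {(H // tile) * (W // tile)} tiles selected")
```

The speedup comes from the fine detector touching only the few selected tiles instead of the whole 3840 × 2160 frame.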


Author(s):  
D. Frommholz ◽  
M. Linkiewicz ◽  
A. M. Poznanska

This paper proposes an in-line method for the simplified reconstruction of city buildings from nadir and oblique aerial images that are simultaneously used for multi-source texture mapping with minimal resampling. Further, the resulting unrectified texture atlases are analyzed for façade elements such as windows, which are reintegrated into the original 3D models. Tests on real-world data of Heligoland, Germany, comprising more than 800 buildings exposed a median positional deviation of 0.31 m at the façades compared to the cadastral map, a correctness of 67% for the detected windows, and good visual quality when rendered with GPU-based perspective correction. As part of the process, building reconstruction takes the oriented input images and transforms them into dense point clouds by semi-global matching (SGM). The point sets undergo local RANSAC-based regression and topology analysis to detect adjacent planar surfaces and determine their semantics. Based on this information, the roof, wall and ground surfaces found are intersected and limited in their extension to form a closed 3D building hull. For texture mapping, the hull polygons are projected into each possible input bitmap to find suitable color sources regarding coverage and resolution. Occlusions are detected by ray-casting a full-scale digital surface model (DSM) of the scene and stored in pixel-precise visibility maps. These maps are used to derive overlap statistics and radiometric adjustment coefficients to be applied when the visible image parts for each building polygon are copied into a compact texture atlas, without resampling whenever possible. The atlas bitmap is passed to a commercial object-based image analysis (OBIA) tool running a custom rule set to identify windows on the contained façade patches.
Following multi-resolution segmentation and classification based on brightness and contrast differences, potential window objects are evaluated against geometric constraints and conditionally grown, fused and filtered morphologically. The output polygons are vectorized and reintegrated into the previously reconstructed buildings by sparsely ray-tracing their vertices. Finally, the enhanced 3D models are stored as textured geometry for visualization and as semantically annotated "LOD-2.5" CityGML objects for GIS applications.
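The local RANSAC-based plane regression used to detect planar roof and wall surfaces can be sketched on a synthetic point cloud; the data, iteration count, and inlier threshold are illustrative assumptions, not the paper's settings:

```python
import numpy as np

# Synthetic point cloud: a noisy plane z = 0.5x + 2 plus 10% outliers.
rng = np.random.default_rng(0)
n = 200
x = rng.uniform(0, 10, n)
y = rng.uniform(0, 10, n)
z = 0.5 * x + 2.0 + rng.normal(0, 0.02, n)
pts = np.column_stack([x, y, z])
pts[:20, 2] += rng.uniform(2, 5, 20)  # inject gross outliers

best_inliers, best_model = 0, None
for _ in range(100):
    i = rng.choice(n, 3, replace=False)       # minimal sample: 3 points
    p0, p1, p2 = pts[i]
    normal = np.cross(p1 - p0, p2 - p0)
    if np.linalg.norm(normal) < 1e-9:         # degenerate (collinear) sample
        continue
    normal /= np.linalg.norm(normal)
    dist = np.abs((pts - p0) @ normal)        # point-to-plane distances
    inliers = int((dist < 0.1).sum())
    if inliers > best_inliers:
        best_inliers, best_model = inliers, (normal, p0)

print(f"best plane supports {best_inliers} of {n} points")
```

A full pipeline would refit the plane to all inliers by least squares and repeat the search on the remaining points to extract further surfaces.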

