Improvement of 3D Power Line Extraction from Multiple Low-Cost UAV Imagery Using Wavelet Analysis

Sensors ◽  
2019 ◽  
Vol 19 (3) ◽  
pp. 700 ◽  
Author(s):  
Anna Fryskowska

Three-dimensional (3D) mapping of power lines is essential for power line inspection. Remotely sensed data products such as light detection and ranging (LiDAR) have already been studied for power line surveys, and an increasing share of data is now obtained through photogrammetric measurements, which increases the need for advanced processing techniques. In recent years there have been several developments in imaging with UAV (unmanned aerial vehicle) platforms, and the most modern of these systems can generate dense point clouds. However, the accuracy of image-based point clouds is often unstable, depending on the radiometric quality of the images and the efficiency of the image processing algorithms, and the main factor degrading point cloud quality is noise. These problems typically arise with data obtained from low-cost UAV platforms, so the generated point clouds representing power lines are usually incomplete and noisy. Obtaining a complete and accurate 3D model of power lines and towers therefore requires improved data processing algorithms. This paper presents a wavelet-based method for processing data acquired with a low-cost UAV camera; the algorithms were tested on power lines with different voltages. The proposed, original method combines coarse filtration with precise filtering. In addition, a new way of calculating the recommended flight height is proposed. Finally, the accuracy of this two-stage filtration process is assessed using proposed point quality indices. The experimental results show that the algorithm improves the quality of low-cost point clouds: the accuracy of the estimated line parameters improves more than twofold, and about 10% of the noise is removed by the wavelet-based approach.
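
The abstract does not detail the coarse and precise filtering steps, but the core idea of wavelet-based noise suppression can be sketched as follows. This is a minimal illustration, not the authors' algorithm: it assumes the conductor points have been ordered along the span and denoises their height profile with PyWavelets by shrinking the detail coefficients before reconstruction.

```python
# Minimal sketch of wavelet-based denoising of a power line height profile.
# Not the paper's method: it assumes points are already ordered along the span
# and that noise lives mainly in the fine-scale wavelet detail coefficients.
import numpy as np
import pywt

rng = np.random.default_rng(0)

# Synthetic catenary-like profile of one conductor, sampled along the span (metres).
x = np.linspace(0.0, 100.0, 512)
z_true = 25.0 + 0.004 * (x - 50.0) ** 2                   # idealised sag curve
z_noisy = z_true + rng.normal(scale=0.15, size=x.size)    # image-matching noise

# Multi-level discrete wavelet decomposition of the height profile.
coeffs = pywt.wavedec(z_noisy, "db4", level=4)

# Universal threshold estimated from the finest detail level (robust sigma via MAD).
sigma = np.median(np.abs(coeffs[-1])) / 0.6745
thresh = sigma * np.sqrt(2.0 * np.log(z_noisy.size))

# Soft-threshold all detail coefficients, keep the approximation untouched.
denoised_coeffs = [coeffs[0]] + [pywt.threshold(c, thresh, mode="soft") for c in coeffs[1:]]
z_denoised = pywt.waverec(denoised_coeffs, "db4")[: z_noisy.size]

print(f"RMS error before filtering: {np.sqrt(np.mean((z_noisy - z_true) ** 2)):.3f} m")
print(f"RMS error after filtering:  {np.sqrt(np.mean((z_denoised - z_true) ** 2)):.3f} m")
```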

Author(s):  
T. Guo ◽  
A. Capra ◽  
M. Troyer ◽  
A. Gruen ◽  
A. J. Brooks ◽  
...  

Recent advances in the automation of photogrammetric 3D modelling software have stimulated interest in reconstructing highly accurate 3D object geometry in unconventional environments, such as underwater, using simple and low-cost camera systems. The accuracy of underwater 3D modelling is affected by more parameters than in single-medium cases. This study is part of a larger project on 3D measurement of temporal change in coral cover in tropical waters. It compares the accuracies of 3D point clouds generated from images acquired with a system camera mounted in an underwater housing and with the popular GoPro cameras. A precisely measured calibration frame was placed in the target scene to provide accurate control information and to quantify the errors of the modelling procedure. In addition, several objects (cinder blocks) of various shapes were arranged in air and underwater, and 3D point clouds were generated by automated image matching. These were used to examine the relative accuracy of point cloud generation by comparing the point clouds of the individual objects with the same objects measured by the system camera in air (the best possible values). At a working distance of about 1.5 m, the GoPro camera achieves a relative accuracy of 1.3 mm in air and 2.0 mm in water. The system camera achieved an accuracy of 1.8 mm in water, which meets our requirements for coral measurement in this system.
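
The relative-accuracy comparison described above amounts to measuring distances between a reconstructed object point cloud and its reference counterpart. A hedged sketch of such a cloud-to-reference check is given below; the arrays `reconstructed` and `reference` are placeholders for co-registered clouds, not the study's data.

```python
# Sketch of a cloud-to-reference comparison: for each reconstructed point,
# find the nearest reference point and summarise the distances.
# Assumes both clouds are already in the same coordinate frame (co-registered).
import numpy as np
from scipy.spatial import cKDTree

def cloud_to_reference_stats(reconstructed: np.ndarray, reference: np.ndarray):
    """Return mean, RMS and 95th-percentile nearest-neighbour distance (same units as input)."""
    tree = cKDTree(reference)
    dists, _ = tree.query(reconstructed, k=1)
    return dists.mean(), np.sqrt(np.mean(dists ** 2)), np.percentile(dists, 95)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    reference = rng.uniform(0.0, 0.4, size=(5000, 3))                          # stand-in in-air cloud
    reconstructed = reference + rng.normal(scale=0.002, size=reference.shape)  # noisy copy
    mean_d, rms_d, p95_d = cloud_to_reference_stats(reconstructed, reference)
    print(f"mean {mean_d*1000:.2f} mm, RMS {rms_d*1000:.2f} mm, 95% {p95_d*1000:.2f} mm")
```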


Author(s):  
B. Sirmacek ◽  
R. Lindenbergh

3D models generated from low-cost sensors can be useful for quick 3D urban model updating, yet their quality is questionable. In this article, we evaluate the reliability of an automatic point cloud generation method that uses multi-view iPhone images or an iPhone video file as input. We register the automatically generated point cloud onto a TLS point cloud of the same object to discuss the accuracy, advantages and limitations of iPhone-generated point clouds. For the chosen showcase, we classified 1.23% of the iPhone point cloud points as outliers and calculated the mean point-to-point distance to the TLS point cloud as 0.11 m. Since a TLS point cloud may also include measurement errors and noise, we computed local noise values for the point clouds from both sources. The mean (μ) and standard deviation (σ) of the roughness histograms are (μ1 = 0.44 m, σ1 = 0.071 m) for the iPhone point cloud and (μ2 = 0.025 m, σ2 = 0.037 m) for the TLS point cloud. Our experimental results indicate that the proposed automatic 3D model generation framework could be used for 3D urban map updating, fusion, detail enhancement, and quick or real-time change detection. However, further insight is needed into the circumstances required to guarantee successful point cloud generation from smartphone images.
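
The local noise (roughness) figures quoted above can, in principle, be reproduced by fitting a small plane to each point's neighbourhood and recording the point's distance to that plane. The sketch below assumes that definition of roughness; the exact estimator used in the article may differ.

```python
# Sketch of a per-point roughness estimate: distance of each point to the
# best-fit plane of its k nearest neighbours. Only one common choice of
# roughness definition; the article's estimator may differ.
import numpy as np
from scipy.spatial import cKDTree

def local_roughness(points: np.ndarray, k: int = 16) -> np.ndarray:
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k)
    rough = np.empty(len(points))
    for i, nbrs in enumerate(idx):
        nbr_pts = points[nbrs]
        centroid = nbr_pts.mean(axis=0)
        # Plane normal = eigenvector of the smallest eigenvalue of the local covariance.
        cov = np.cov((nbr_pts - centroid).T)
        eigvals, eigvecs = np.linalg.eigh(cov)
        normal = eigvecs[:, 0]
        rough[i] = abs(np.dot(points[i] - centroid, normal))
    return rough

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    # Noisy planar patch as a stand-in for a building facade.
    xy = rng.uniform(0.0, 5.0, size=(3000, 2))
    z = 0.02 * rng.normal(size=3000)                 # ~2 cm surface noise
    cloud = np.column_stack([xy, z])
    r = local_roughness(cloud)
    print(f"roughness mean = {r.mean():.3f} m, std = {r.std():.3f} m")
```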


Author(s):  
F. Alidoost ◽  
H. Arefi

Nowadays, Unmanned Aerial System (UAS)-based photogrammetry offers an affordable, fast and effective approach to real-time acquisition of high-resolution geospatial information and automatic 3D modelling of objects for numerous applications such as topographic mapping, 3D city modelling, orthophoto generation, and cultural heritage preservation. In this paper, the capability of four state-of-the-art software packages, namely 3DSurvey, Agisoft Photoscan, Pix4Dmapper Pro and SURE, is examined for generating high-density point clouds as well as a Digital Surface Model (DSM) over a historical site. The main steps of this study are image acquisition, point cloud generation, and accuracy assessment. The overlapping images are first captured using a quadcopter and are then processed by the different software packages to generate point clouds and DSMs. In order to evaluate the accuracy and quality of the point clouds and DSMs, both visual and geometric assessments are carried out, and the comparison results are reported.
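
One simple geometric assessment of the kind mentioned above is to difference two DSM rasters produced by different packages over the same grid and summarise the discrepancies. The sketch below assumes both DSMs are already resampled onto a common grid with NoData marked as NaN; it illustrates the idea, not the paper's exact evaluation procedure.

```python
# Sketch of a DSM-vs-DSM comparison: per-cell height differences and summary
# statistics. Assumes both rasters share the same grid and NoData cells are NaN.
import numpy as np

def dsm_difference_stats(dsm_a: np.ndarray, dsm_b: np.ndarray) -> dict:
    diff = dsm_a - dsm_b
    valid = diff[np.isfinite(diff)]
    return {
        "mean": float(valid.mean()),
        "std": float(valid.std()),
        "rmse": float(np.sqrt(np.mean(valid ** 2))),
        "max_abs": float(np.max(np.abs(valid))),
    }

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    reference_dsm = rng.uniform(100.0, 120.0, size=(200, 200))           # stand-in DSM (m)
    test_dsm = reference_dsm + rng.normal(scale=0.05, size=(200, 200))   # ~5 cm noise
    test_dsm[0:10, 0:10] = np.nan                                        # simulated NoData hole
    print(dsm_difference_stats(test_dsm, reference_dsm))
```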


Author(s):  
S. Altman ◽  
W. Xiao ◽  
B. Grayson

Terrestrial photogrammetry is an accessible method of 3D digital modelling and can be performed with low-cost, consumer-grade equipment. Globally there are many undocumented buildings, particularly in the developing world, that could benefit from 3D modelling for documentation, redesign or restoration; areas with buildings at risk of destruction by natural disaster or war could especially benefit. This study considers a range of variables that affect the quality of photogrammetric results. Different point clouds of the same building are produced under different variable settings and are systematically tested to see how the output is affected, by geometrically comparing them to a laser-scanned point cloud of the same building. Finally, the study considers how the best results can be achieved for different applications, how to mitigate negative effects, and the limits of this technique.


2020 ◽  
Vol 9 (11) ◽  
pp. 656
Author(s):  
Muhammad Hamid Chaudhry ◽  
Anuar Ahmad ◽  
Qudsia Gulzar

Unmanned Aerial Vehicles (UAVs) as a surveying tool are mainly characterized by a large amount of data and high computational cost. This research investigates the use of a smaller amount of data, at lower computational cost, to obtain more accurate three-dimensional (3D) photogrammetric products by manipulating UAV surveying parameters such as the flight line pattern and the image overlap percentages. Sixteen photogrammetric projects with perpendicular flight plans and side and forward overlaps varying from 55% to 85% were processed in Pix4DMapper. For UAV data georeferencing and accuracy assessment, 10 Ground Control Points (GCPs) and 18 Check Points (CPs) were used. A comparative analysis was carried out using the median of the tie points, the number of 3D point cloud points, the horizontal and vertical Root Mean Square Error (RMSE), and large-scale topographic variations. The results show that an increased forward overlap also increases the median of the tie points, and that increasing both side and forward overlap increases the number of point cloud points. The horizontal accuracy of the 16 projects varies from ±0.13 m to ±0.17 m, whereas the vertical accuracy varies from ±0.09 m to ±0.32 m; however, the lowest vertical RMSE was not obtained at the highest overlap percentage. A trade-off among the UAV surveying parameters can therefore yield highly accurate products at lower computational cost.
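
The horizontal and vertical accuracies quoted above come from comparing photogrammetrically derived check-point coordinates against their surveyed values. A hedged sketch of that RMSE computation is given below; the coordinate arrays are placeholders, not the study's measurements.

```python
# Sketch of check-point RMSE computation: horizontal and vertical root mean
# square error between surveyed and photogrammetrically derived coordinates.
import numpy as np

def checkpoint_rmse(surveyed: np.ndarray, derived: np.ndarray):
    """surveyed, derived: (n, 3) arrays of E, N, H coordinates in metres."""
    d = derived - surveyed
    rmse_e = np.sqrt(np.mean(d[:, 0] ** 2))
    rmse_n = np.sqrt(np.mean(d[:, 1] ** 2))
    rmse_h = np.sqrt(np.mean(d[:, 2] ** 2))
    rmse_horizontal = np.sqrt(rmse_e ** 2 + rmse_n ** 2)
    return rmse_horizontal, rmse_h

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    surveyed = rng.uniform(0.0, 500.0, size=(18, 3))                        # 18 check points (stand-in)
    derived = surveyed + rng.normal(scale=[0.10, 0.10, 0.20], size=(18, 3)) # simulated errors
    rmse_xy, rmse_z = checkpoint_rmse(surveyed, derived)
    print(f"horizontal RMSE = ±{rmse_xy:.2f} m, vertical RMSE = ±{rmse_z:.2f} m")
```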


2020 ◽  
Vol 12 (8) ◽  
pp. 1240 ◽  
Author(s):  
Xabier Blanch ◽  
Antonio Abellan ◽  
Marta Guinau

The emerging use of photogrammetric point clouds in three-dimensional (3D) monitoring processes has revealed some constraints with respect to the use of LiDAR point clouds. Point clouds (PCs) obtained by time-lapse photogrammetry often have lower density and precision, especially when Ground Control Points (GCPs) are not available or the camera system cannot be properly calibrated. This paper presents a new workflow called Point Cloud Stacking (PCStacking) that overcomes these restrictions by making the most of the iterative solutions for camera position estimation and internal calibration parameters obtained during bundle adjustment. The basic principle of the stacking algorithm is straightforward: it computes the median of the Z coordinates of each point over multiple photogrammetric models, giving a resulting PC with greater precision than any of the individual PCs. The different models are reconstructed from images taken simultaneously from at least five points of view, reducing the systematic errors associated with the photogrammetric reconstruction workflow. The algorithm was tested using both a synthetic point cloud and a real 3D dataset from a rock cliff. The synthetic data were created using mathematical functions that emulate photogrammetric models; the real data were obtained with very low-cost photogrammetric systems developed specifically for this experiment. The resulting point clouds improved when the algorithm was applied in both the synthetic and real experiments, e.g., the 25th and 75th error percentiles were reduced from 3.2 cm to 1.4 cm in the synthetic tests and from 1.5 cm to 0.5 cm under real conditions.
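
The core of the PCStacking idea, as described in the abstract, is a per-point median of Z values across several co-registered photogrammetric models. The sketch below assumes the stacked models have already been resampled so that corresponding rows share the same planimetric position; the authors' implementation will differ in detail.

```python
# Sketch of the point cloud stacking principle: take the per-point median of
# the Z coordinates across several co-registered photogrammetric models.
# Assumes the models are resampled to a common XY grid so that row i of every
# model refers to the same planimetric location.
import numpy as np

def stack_point_clouds(models: list) -> np.ndarray:
    """models: list of (n, 3) arrays with identical XY per row; returns the stacked (n, 3) cloud."""
    xy = models[0][:, :2]
    z_stack = np.stack([m[:, 2] for m in models], axis=1)   # shape (n, n_models)
    z_median = np.median(z_stack, axis=1)
    return np.column_stack([xy, z_median])

if __name__ == "__main__":
    rng = np.random.default_rng(5)
    xy = rng.uniform(0.0, 10.0, size=(20000, 2))
    z_true = np.sin(xy[:, 0]) + 0.2 * xy[:, 1]               # synthetic cliff-like surface
    # Five simultaneous reconstructions, each with its own noise realisation.
    models = [np.column_stack([xy, z_true + rng.normal(scale=0.03, size=z_true.size)])
              for _ in range(5)]
    stacked = stack_point_clouds(models)
    err_single = np.abs(models[0][:, 2] - z_true)
    err_stacked = np.abs(stacked[:, 2] - z_true)
    print(f"median |error| single model: {np.median(err_single)*100:.2f} cm")
    print(f"median |error| stacked:      {np.median(err_stacked)*100:.2f} cm")
```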


Author(s):  
M. Zhou ◽  
K. Y. Li ◽  
J. H. Wang ◽  
C. R. Li ◽  
G. E. Teng ◽  
...  

UAV LiDAR systems have a unique advantage in acquiring 3D geo-information of targets at very reasonable cost; they are therefore well suited for security inspection of high-voltage power lines. Several methods already exist for power line extraction from LiDAR point cloud data. However, the existing methods either introduce classification errors during point cloud filtering or are occasionally unable to detect multiple power lines in a vertical arrangement. This paper proposes and implements an automatic power line extraction method based on 3D spatial features. Unlike existing power line extraction methods, the proposed method processes the LiDAR point cloud data vertically, so that the possible locations of power lines can be predicted without filtering. Next, power line candidates are segmented using a 3D region growing method. Linear point sets are then extracted with the linear discriminant method described in this paper. Finally, power lines are extracted from the candidate linear point sets based on extension and direction features. The effectiveness and feasibility of the proposed method were verified on real UAV LiDAR point cloud data from Sichuan, China. The average correct extraction rate of power line points is 98.18%.
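
The linear discrimination step mentioned above can be illustrated with a simple principal component analysis of each candidate point set: a power line segment should have one dominant eigenvalue of its covariance matrix. The sketch below uses that eigenvalue-ratio criterion as a stand-in for the paper's linear discriminant method.

```python
# Sketch of a linearity test for power line candidate segments: PCA of the
# candidate point set, keeping segments whose variance is concentrated along
# one axis. A stand-in for the paper's linear discriminant method.
import numpy as np

def is_linear_segment(points: np.ndarray, min_ratio: float = 0.95) -> bool:
    """points: (n, 3) candidate segment; linear if the largest eigenvalue dominates."""
    centered = points - points.mean(axis=0)
    eigvals = np.linalg.eigvalsh(np.cov(centered.T))   # ascending order
    return eigvals[-1] / eigvals.sum() >= min_ratio

if __name__ == "__main__":
    rng = np.random.default_rng(6)
    t = np.linspace(0.0, 30.0, 400)
    # Candidate 1: a sagging conductor (nearly linear in 3D, with small noise).
    wire = np.column_stack([t, 0.02 * t, 12.0 + 0.003 * (t - 15.0) ** 2])
    wire += rng.normal(scale=0.05, size=wire.shape)
    # Candidate 2: a blob of vegetation points (no dominant direction).
    blob = rng.normal(scale=1.5, size=(400, 3))
    print("wire candidate linear:", is_linear_segment(wire))
    print("blob candidate linear:", is_linear_segment(blob))
```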


Author(s):  
G. Jozkow ◽  
P. Wieczorek ◽  
M. Karpina ◽  
A. Walicka ◽  
A. Borkowski

The Velodyne HDL-32E laser scanner is increasingly used as the main mapping sensor in small commercial UASs, yet there is still little information about the actual accuracy of point clouds collected with such systems. This work empirically evaluates the accuracy of the point cloud collected with such a UAS. The accuracy assessment covered four aspects: the impact of the sensors on theoretical point cloud accuracy, the quality of trajectory reconstruction, and the internal and absolute point cloud accuracies. Theoretical point cloud accuracy was evaluated by calculating the 3D position error from the known errors of the sensors used. The quality of trajectory reconstruction was assessed by comparing position and attitude differences between the forward and reverse EKF solutions. Internal and absolute accuracies were evaluated by fitting planes to eight point cloud samples extracted for planar surfaces. In addition, the absolute accuracy was determined by calculating 3D point distances between the LiDAR UAS and reference TLS point clouds. The test data consisted of point clouds collected in two separate flights over the same area. The experiments showed that in the tested UAS the trajectory reconstruction, especially the attitude, has a significant impact on point cloud accuracy. The estimated absolute accuracy of the point clouds collected during both test flights was better than 10 cm, so the investigated UAS fits the mapping-grade category.
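
The internal-accuracy check described above fits planes to point cloud samples taken from planar surfaces and inspects the residuals. A minimal least-squares version of that check is sketched below; the actual plane-fitting procedure used in the study is not specified in the abstract.

```python
# Sketch of an internal-accuracy check: fit a plane to a point cloud sample
# taken from a planar surface and report the residual statistics.
import numpy as np

def plane_fit_residuals(points: np.ndarray):
    """Fit a plane by PCA (total least squares); return signed residuals and their std."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid, full_matrices=False)
    normal = vt[-1]                                   # direction of smallest variance
    residuals = (points - centroid) @ normal
    return residuals, float(residuals.std())

if __name__ == "__main__":
    rng = np.random.default_rng(7)
    # Stand-in sample: a roof-like plane scanned with ~3 cm noise.
    xy = rng.uniform(0.0, 8.0, size=(2000, 2))
    z = 10.0 + 0.1 * xy[:, 0] - 0.05 * xy[:, 1] + rng.normal(scale=0.03, size=2000)
    sample = np.column_stack([xy, z])
    _, sigma = plane_fit_residuals(sample)
    print(f"plane-fit residual std: {sigma*100:.1f} cm")
```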


Author(s):  
J. Markiewicz ◽  
S. Łapiński ◽  
M. Pilarska ◽  
R. Bieńkowski ◽  
A. Kaliszewska

In this paper, the possibility of using the Xiaomi 4K action camera as a low-cost sensor for generating high-resolution documentation of architecture and architectural elements in the field of Cultural Heritage was analysed. For that purpose, a series of images was acquired together with tacheometric measurements of the ground control points. Additionally, TLS data were collected and treated as a reference. Point clouds were generated using the Structure-from-Motion (SfM) and Multi-View Stereo (MVS) approaches. The following properties of the collected data and the resulting documentation were tested: the interior orientation parameters; the quality of the Xiaomi built-in lens distortion correction; the accuracy of the orientation on ground control and check points; the point cloud density; the flatness of the walls; the discrepancies between point clouds derived from the low-cost cameras and the TLS data; and the shape of architectural details based on cross-section analysis. The analysis of the obtained results leads to the conclusion that the Xiaomi 4K low-cost sensors are well suited for documenting architecture and architectural details. All data for the presented investigation were acquired at the baroque residence of the Bieliński Palace in Otwock Wielki, Poland.
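
The cross-section analysis mentioned above can be approximated by slicing the point cloud with a thin band around a chosen section plane and projecting the retained points into 2D. The sketch below assumes a vertical section plane defined by a point and an in-plane direction; it illustrates the idea rather than the workflow used in the paper.

```python
# Sketch of extracting a cross-section from a point cloud: keep points within a
# thin band around a vertical section plane and project them to 2D (distance
# along the section vs. height). Illustrative only.
import numpy as np

def extract_cross_section(points: np.ndarray, origin: np.ndarray,
                          direction: np.ndarray, half_width: float = 0.01) -> np.ndarray:
    """points: (n, 3); origin: point on the section line (XY); direction: XY direction vector."""
    d = direction / np.linalg.norm(direction)
    normal = np.array([-d[1], d[0]])                    # in-plane normal of the section line
    rel = points[:, :2] - origin
    offset = rel @ normal                               # signed distance to the section plane
    mask = np.abs(offset) <= half_width
    along = rel[mask] @ d                               # chainage along the section
    return np.column_stack([along, points[mask, 2]])    # 2D profile: (chainage, height)

if __name__ == "__main__":
    rng = np.random.default_rng(8)
    cloud = rng.uniform([0.0, 0.0, 0.0], [10.0, 6.0, 4.0], size=(50000, 3))  # stand-in interior cloud
    profile = extract_cross_section(cloud, origin=np.array([0.0, 3.0]),
                                    direction=np.array([1.0, 0.0]), half_width=0.02)
    print(f"{profile.shape[0]} points in a 4 cm-thick section")
```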

