Damage Signature Generation of Revetment Surface along Urban Rivers Using UAV-Based Mapping

2020
Vol 9 (4)
pp. 283
Author(s):
Ting Chen
Haiqing He
Dajun Li
Puyang An
Zhenyang Hui

Comprehensive inspection of the geometric structures of revetments along urban rivers using conventional field visual inspection is technically complex and time-consuming. In this study, an approach using dense point clouds derived from low-cost unmanned aerial vehicle (UAV) photogrammetry is proposed to automatically and efficiently recognize the signatures of revetment damage. To quickly and accurately recover the finely detailed surface of a revetment, an object space-based dense matching approach, namely region growing coupled with semi-global matching, is exploited to generate pixel-by-pixel dense point clouds for characterizing the signatures of revetment damage. Damage recognition is then conducted using a proposed self-adaptive, multiscale gradient operator, which is designed to extract damaged regions of different sizes in the slope intensity image of the revetment. A revetment with slope protection along urban rivers is selected to evaluate the performance of damage recognition. Results indicate that the proposed approach can be considered an effective alternative to field visual inspection for revetment damage recognition along urban rivers, because our method not only recovers the finely detailed surface of the revetment but also remarkably improves the accuracy of revetment damage recognition.
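
As an illustration of the kind of operation described above, the following is a minimal sketch of a multiscale gradient detector applied to a slope intensity image. It is not the authors' implementation; the function name, the chosen scales and the mean-plus-k-sigma threshold are assumptions.

```python
# Hedged sketch: a simple multiscale gradient detector on a slope intensity
# image, loosely inspired by the idea of extracting damaged regions of
# different sizes. Names and thresholds are illustrative assumptions.
import numpy as np
from scipy import ndimage

def detect_damage_regions(slope_image, scales=(1.0, 2.0, 4.0), k=2.0):
    """Return a boolean mask of candidate damage regions.

    slope_image : 2-D array of slope intensity values.
    scales      : Gaussian sigmas approximating different damage sizes (assumed).
    k           : threshold factor (mean + k * std), an assumed heuristic.
    """
    responses = [ndimage.gaussian_gradient_magnitude(slope_image, sigma=s)
                 for s in scales]
    response = np.max(np.stack(responses), axis=0)    # strongest scale wins
    threshold = response.mean() + k * response.std()  # self-adaptive cut-off
    return response > threshold

# Example usage with synthetic data
if __name__ == "__main__":
    img = np.random.rand(256, 256).astype(np.float32)
    mask = detect_damage_regions(img)
    print("candidate damage pixels:", int(mask.sum()))
```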

2019
Vol 93 (3)
pp. 411-429
Author(s):
Maria Immacolata Marzulli
Pasi Raumonen
Roberto Greco
Manuela Persia
Patrizia Tartarino

Abstract Methods for the three-dimensional (3D) reconstruction of forest trees have been suggested for data from both active and passive sensors. Laser scanner technologies have become popular in the last few years, despite their high costs. Thanks to improvements in photogrammetric algorithms (e.g. structure from motion, SfM), photographs have become a new low-cost source of 3D point clouds. In this study, we use images captured by a smartphone camera to calculate dense point clouds of a forest plot using SfM. Eighteen point clouds were produced by changing the densification parameters (Image scale, Point density, Minimum number of matches) in order to investigate their influence on the quality of the point clouds produced. In order to estimate diameter at breast height (d.b.h.) and stem volumes, we developed an automatic method that extracts the stems from the point cloud and then models them with cylinders. The results show that Image scale is the most influential parameter for identifying and extracting trees from the point clouds. The best cylinder modelling performance compared to field data had an RMSE of 1.9 cm for d.b.h. and 0.094 m3 for volume. Thus, for forest management and planning purposes, it is possible to use our photogrammetric and modelling methods to measure d.b.h., stem volume and possibly other forest inventory metrics, rapidly and without felling trees. The proposed methodology significantly reduces working time in the field, using 'non-professional' instruments and automating estimates of dendrometric parameters.
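
The stem modelling step can be illustrated with a simple circle fit at breast height. The sketch below is a simplified stand-in for the cylinder modelling described in the abstract, not the authors' code; the slice tolerance and the synthetic data are assumptions.

```python
# Hedged sketch: estimating d.b.h. from a single stem's point cloud by
# fitting a circle to a horizontal slice at breast height (1.3 m).
import numpy as np

def estimate_dbh(stem_points, breast_height=1.3, tol=0.05):
    """stem_points: (N, 3) array, z measured from the stem base in metres."""
    z = stem_points[:, 2]
    ring = stem_points[np.abs(z - breast_height) < tol]
    if len(ring) < 10:
        raise ValueError("not enough points near breast height")
    x, y = ring[:, 0], ring[:, 1]
    # Algebraic least-squares circle fit: x^2 + y^2 = a*x + b*y + c
    A = np.column_stack([x, y, np.ones_like(x)])
    b = x**2 + y**2
    a_, b_, c_ = np.linalg.lstsq(A, b, rcond=None)[0]
    radius = np.sqrt(c_ + a_**2 / 4 + b_**2 / 4)
    return 2.0 * radius  # d.b.h. in metres

# Example with a synthetic 8 cm stem
theta = np.random.rand(2000) * 2 * np.pi
pts = np.column_stack([0.04 * np.cos(theta), 0.04 * np.sin(theta),
                       np.random.rand(2000) * 3.0])
print(f"estimated d.b.h.: {estimate_dbh(pts) * 100:.1f} cm")
```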


Author(s):
R. Ravanelli
A. Nascetti
M. Crespi

Today range cameras are widespread low-cost sensors based on two different principles of operation: we can distinguish between Structured Light (SL) range cameras (Kinect v1, Structure Sensor, ...) and Time of Flight (ToF) range cameras (Kinect v2, ...). Both types are easy-to-use 3D scanners able to reconstruct dense point clouds at a high frame rate. However, the depth maps obtained are often noisy and not accurate enough, so it is generally essential to improve their quality. Standard RGB cameras can be a valuable solution to this issue. The aim of this paper is therefore to evaluate the feasibility of integrating these two different 3D modelling techniques, which have complementary features and are based on standard low-cost sensors. For this purpose, a 3D model of a DUPLO™ bricks construction was reconstructed both with the Kinect v2 range camera and by processing one stereo pair acquired with a Canon EOS 1200D DSLR camera. The scale of the photogrammetric model was retrieved from the coordinates measured by the Kinect v2. The preliminary results are encouraging and show that the foreseen integration could lead to a higher metric accuracy and a greater level of completeness with respect to those obtained by using each technique separately.
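
The scale recovery mentioned above can be sketched as a ratio of distances between corresponding points in the two models. The snippet below is a hedged illustration, not the authors' procedure; the use of the median over all point pairs is an assumption.

```python
# Hedged sketch: recovering the scale of an arbitrarily-scaled photogrammetric
# model from metric coordinates measured with a range camera, by comparing
# distances between corresponding points. Variable names are illustrative.
import numpy as np
from itertools import combinations

def scale_factor(model_pts, metric_pts):
    """model_pts, metric_pts: (N, 3) arrays of corresponding points."""
    ratios = []
    for i, j in combinations(range(len(model_pts)), 2):
        d_model = np.linalg.norm(model_pts[i] - model_pts[j])
        d_metric = np.linalg.norm(metric_pts[i] - metric_pts[j])
        if d_model > 1e-9:
            ratios.append(d_metric / d_model)
    return float(np.median(ratios))  # median is robust to noisy pairs

# Example: model is 2.5x smaller than reality
metric = np.random.rand(6, 3)
model = metric / 2.5
print(f"estimated scale: {scale_factor(model, metric):.3f}")  # ~2.500
```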


2019
Vol 7 (1)
pp. 45-66
Author(s):
Ankit Kumar Verma
Mary Carol Bourke

Abstract. We have generated sub-millimetre-resolution DEMs of weathered rock surfaces using structure-from-motion (SfM) photogrammetry. We apply a close-range SfM method in the field and use it to generate high-resolution topographic data for weathered boulders and bedrock. The method was pilot tested on extensively weathered Triassic Moenkopi sandstone outcrops near Meteor Crater in Arizona. Images were taken in the field using a consumer-grade DSLR camera and were processed in commercially available software to build dense point clouds. The point clouds were registered to a local 3-D coordinate system (x, y, z), which was established using a specially designed triangle-coded control target, and then exported as digital elevation models (DEMs). The accuracy of the DEMs was validated under controlled experimental conditions, using a number of checkpoints to calculate errors. We also evaluated the effects of image and camera parameters on the accuracy of our DEMs. We report a horizontal error of 0.5 mm and a vertical error of 0.3 mm in our experiments. Our approach provides a low-cost method for obtaining very high-resolution topographic data on weathered rock surfaces (area < 10 m2). The results from our case study confirm the efficacy of the method at this scale and show that the data acquisition equipment is sufficiently robust and portable, which is particularly important for field conditions in remote locations or steep terrain where portable and efficient methods are required.
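
Checkpoint-based validation of this kind typically reduces to computing horizontal and vertical RMSE over the residuals. The following is a minimal sketch of that calculation; the synthetic noise level and the split into planimetric and vertical components are assumptions, not the authors' exact evaluation code.

```python
# Hedged sketch: horizontal and vertical RMSE from checkpoints, one common
# way to validate a DEM against independent measurements.
import numpy as np

def checkpoint_rmse(measured, reference):
    """measured, reference: (N, 3) arrays of (x, y, z) checkpoint coordinates."""
    diff = measured - reference
    rmse_xy = np.sqrt(np.mean(np.sum(diff[:, :2] ** 2, axis=1)))  # horizontal
    rmse_z = np.sqrt(np.mean(diff[:, 2] ** 2))                    # vertical
    return rmse_xy, rmse_z

ref = np.random.rand(20, 3) * 0.5                      # checkpoints in metres
meas = ref + np.random.normal(0, 0.0004, ref.shape)    # ~0.4 mm synthetic noise
h, v = checkpoint_rmse(meas, ref)
print(f"horizontal RMSE: {h*1000:.2f} mm, vertical RMSE: {v*1000:.2f} mm")
```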


Author(s):
P. Liu
Y. C. Li
W. Hu
X. B. Ding

Oblique photography has gained wide recognition as an effective method for 3D city model construction. Oblique and vertical images with high overlap and different viewing angles can produce large volumes of dense matching point cloud data with spectral information. This paper presents a method for building reconstruction from stereo matching dense point clouds derived from aerial oblique images, which includes the segmentation of buildings and the reconstruction of building roofs. We summarize the characteristics of stereo matching point clouds from aerial oblique images and outline the problems with existing methods. We then present a method for the segmentation of building roofs based on colors and geometric derivatives such as normals and curvature. Finally, a building reconstruction approach is developed based on geometric relationships. The experiments and analysis show that the methods are effective for building reconstruction with stereo matching point clouds from aerial oblique images.
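
The normal-based part of such a roof segmentation can be illustrated with a simple region-growing pass over the point cloud. The sketch below groups neighbouring points with nearly parallel unit normals; the radius and angle thresholds are assumptions, and the color and curvature criteria mentioned in the abstract are omitted.

```python
# Hedged sketch: minimal region-growing segmentation over a point cloud,
# grouping neighbouring points whose (unit) normals are nearly parallel.
import numpy as np
from scipy.spatial import cKDTree

def region_growing(points, normals, radius=0.5, angle_deg=10.0):
    tree = cKDTree(points)
    cos_thr = np.cos(np.radians(angle_deg))
    labels = np.full(len(points), -1, dtype=int)
    current = 0
    for seed in range(len(points)):
        if labels[seed] != -1:
            continue
        stack = [seed]
        labels[seed] = current
        while stack:
            idx = stack.pop()
            for nb in tree.query_ball_point(points[idx], radius):
                if labels[nb] == -1 and abs(np.dot(normals[idx], normals[nb])) > cos_thr:
                    labels[nb] = current
                    stack.append(nb)
        current += 1
    return labels  # one label per point; planar patches share a label

# Tiny example: two horizontal planes at different heights
pts = np.vstack([np.c_[np.random.rand(200, 2), np.zeros(200)],
                 np.c_[np.random.rand(200, 2), np.ones(200) * 5.0]])
nrm = np.tile([0.0, 0.0, 1.0], (400, 1))
print("segments found:", len(set(region_growing(pts, nrm))))
```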


Author(s):
X. Huang
R. Qin
M. Chen

<p><strong>Abstract.</strong> Stereo dense matching has already been one of the dominant tools in 3D reconstruction of urban regions, due to its low cost and high flexibility in generating 3D points. However, the image-derived 3D points are often inaccurate around building edges, which limit its use in several vision tasks (e.g. building modelling). To generate 3D point clouds or digital surface models (DSM) with sharp boundaries, this paper integrates robustly matched lines for improving dense matching, and proposes a non-local disparity refinement of building edges through an iterative least squares plane adjustment approach. In our method, we first extract and match straight lines in images using epipolar constraints, then detect building edges from these straight lines by comparing matching results on both sides of straight lines, and finally we develop a non-local disparity refinement method through an iterative least squares plane adjustment constrained by matched straight lines to yield sharper and more accurate edges. Experiments conducted on both satellite and aerial data demonstrate that our proposed method is able to generate more accurate DSM with sharper object boundaries.</p>


Author(s):
M. Koehl
T. Delacourt
C. Boutry

This paper presents a project for recording and modelling tunnels, traffic circles and roads from multiple sensors. The aim is the representation and accurate 3D modelling of a selection of road infrastructures as dense point clouds in order to extract profiles and metrics from them. These models will be used for the sizing of infrastructures in order to simulate routes of exceptional convoy trucks. The objective is to extract directly from the point clouds the heights, widths and lengths of bridges and tunnels, the diameter of traffic circles, and to highlight potential obstacles for a convoy. Light, mobile and fast acquisition approaches based on images and videos from a set of synchronized sensors were tested in order to obtain usable point clouds. The presented solution is based on a combination of multiple low-cost cameras mounted on an on-board device allowing dynamic captures. An experimental device containing GoPro Hero4 cameras was set up and used for tests in static and mobile acquisitions. In this way, various configurations using multiple synchronized cameras were tested, and they are discussed in order to identify the best operational configuration according to the shape of the acquired objects. As the precise calibration of each sensor and its optics is a major factor in the creation of accurate dense point clouds, and in order to reach the best quality available from such cameras, the internal parameters of the cameras' fisheye lenses were estimated. Reference measurements were also made with a 3D TLS (Faro Focus 3D) to allow the accuracy assessment.
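
Fisheye intrinsic calibration of this kind is commonly done from checkerboard images; the snippet below is a hedged sketch using OpenCV's fisheye model, not the authors' procedure. The image folder, board geometry and flags are placeholders.

```python
# Hedged sketch: estimating fisheye lens intrinsics from checkerboard images
# with OpenCV's fisheye model. Paths and board size are assumed placeholders.
import glob
import cv2
import numpy as np

board = (9, 6)                                   # inner corners (assumed)
objp = np.zeros((1, board[0] * board[1], 3), np.float32)
objp[0, :, :2] = np.mgrid[0:board[0], 0:board[1]].T.reshape(-1, 2)

obj_pts, img_pts, size = [], [], None
for path in glob.glob("calib_images/*.jpg"):     # hypothetical image folder
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, board)
    if found:
        obj_pts.append(objp)
        img_pts.append(corners)
        size = gray.shape[::-1]

K, D = np.zeros((3, 3)), np.zeros((4, 1))
rms, K, D, rvecs, tvecs = cv2.fisheye.calibrate(
    obj_pts, img_pts, size, K, D,
    flags=cv2.fisheye.CALIB_RECOMPUTE_EXTRINSIC | cv2.fisheye.CALIB_FIX_SKEW,
    criteria=(cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-6))
print("RMS reprojection error:", rms)
print("K =\n", K, "\nD =", D.ravel())
```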


Author(s):
M. Zacharek
P. Delis
M. Kedzierski
A. Fryskowska

These studies have been conducted using a non-metric digital camera and dense image matching algorithms as non-contact methods of creating monument documentation. In order to process the imagery, several open-source software packages and algorithms for generating a dense point cloud from images were employed: OSM Bundler, VisualSFM and the web application ARC3D. Images obtained for each of the investigated objects were processed with these applications, and dense point clouds and textured 3D models were then created. As a result of post-processing, the obtained models were filtered and scaled. The research showed that even with open-source software it is possible to obtain accurate 3D models of structures (with an accuracy of a few centimeters), but for the purposes of documentation and conservation of cultural and historical heritage, such accuracy can be insufficient.
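
Scaling an SfM model, which is produced in arbitrary units, is usually done from at least one known distance on the object. The snippet below is a minimal illustration of that step; the point indices and the reference distance are hypothetical, and the authors' scaling procedure is not described in detail.

```python
# Hedged sketch: scaling an SfM point cloud to metres using one distance
# measured on the real object (e.g. with a tape). Indices are illustrative.
import numpy as np

def scale_cloud(points, idx_a, idx_b, measured_distance_m):
    """points: (N, 3) array; idx_a/idx_b: indices of two identifiable points."""
    model_distance = np.linalg.norm(points[idx_a] - points[idx_b])
    factor = measured_distance_m / model_distance
    return points * factor

cloud = np.random.rand(1000, 3)           # stand-in for an SfM point cloud
scaled = scale_cloud(cloud, 0, 1, 0.75)   # the two points are 0.75 m apart
```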


Author(s):
A. Murtiyoso
P. Grussenmeyer
T. Freville

Close-range photogrammetry is an image-based technique which has often been used for the 3D documentation of heritage objects. Recently, advances in the field of image processing and UAVs (Unmanned Aerial Vehicles) have resulted in a renewed interest in this technique. However, commercially ready-to-use UAVs are often equipped with smaller sensors in order to minimize payload, and the quality of the documentation is still an issue. In this research, two commercial UAVs (the Sensefly Albris and the DJI Phantom 3 Professional) were set up to record the 19th century St-Pierre-le-Jeune church in Strasbourg, France. Several software solutions (commercial and open source) were used to compare both UAVs' images in terms of calibration, accuracy of external orientation, and dense matching. Results show some instability in the calibration of the Phantom 3, while the Albris had issues with its aerotriangulation results. Despite these shortcomings, both UAVs succeeded in producing dense point clouds with accuracies of up to a few centimeters, which is largely sufficient for the purposes of a city 3D GIS (Geographical Information System). The acquisition of close-range images using UAVs also provides greater LoD flexibility in processing. These advantages over other methods such as TLS (Terrestrial Laser Scanning) or terrestrial close-range photogrammetry can be exploited so that these techniques complement each other.


Author(s):
M. Hödel
T. Koch
L. Hoegner
U. Stilla

<p><strong>Abstract.</strong> Reconstruction of dense photogrammetric point clouds is often based on depth estimation of rectified image pairs by means of pixel-wise matching. The main drawback lies in the high computational complexity compared to that of the relatively straightforward task of laser triangulation. Dense image matching needs oriented and rectified images and looks for point correspondences between them. The search for these correspondences is based on two assumptions: pixels and their local neighborhood show a similar radiometry and image scenes are mostly homogeneous, meaning that neighboring points in one image are most likely also neighbors in the second. These rules are violated, however, at depth changes in the scene. Optimization strategies tend to find the best depth estimation based on the resulting disparities in the two images. One new field in neural networks is the estimation of a depth image from a single input image through learning geometric relations in images. These networks are able to find homogeneous areas as well as depth changes, but result in a much lower geometric accuracy of the estimated depth compared to dense matching strategies. In this paper, a method is proposed extending the Semi-Global-Matching algorithm by utilizing a-priori knowledge from a monocular depth estimating neural network to improve the point correspondence search by predicting the disparity range from the single-image depth estimation (SIDE). The method also saves resources through path optimization and parallelization. The algorithm is benchmarked on Middlebury data and results are presented both quantitatively and qualitatively.</p>


Author(s):
M. Modiri
M. Masumi
A. Eftekhari

Automatic extraction of building roofs, streets and vegetation is a prerequisite for many GIS (Geographic Information System) applications, such as urban planning and 3D building reconstruction. Nowadays, with advances in image processing and image matching, using feature-based and template-based matching techniques together, dense point clouds are available. Point cloud classification is an important step in automatic feature extraction. Therefore, in this study, a classification of point clouds based on color and shape features is implemented. We use two images with suitable overlap acquired by an UltraCam-X camera. The images cover Yasouj in Iran, a semi-urban area with buildings of different heights. Our goal is to classify buildings and vegetation in these points. In this article, an algorithm is developed based on the color characteristics of the point cloud, using an appropriate DEM (Digital Elevation Model) and a point clustering method. Firstly, trees and high vegetation are classified using the points' color characteristics and a vegetation index. Then, a bare-earth DEM is used to separate ground and non-ground points. Non-ground points are then divided into clusters based on height and local neighborhood: one or more clusters are initialized based on the maximum height of the points, and each cluster is then extended by applying height and neighborhood constraints. Finally, planar roof segments are extracted from each cluster of points following a region-growing technique.
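
The vegetation step can be illustrated with a simple color-index rule. The sketch below uses the Excess Green index computed from per-point RGB, which is one common choice; the abstract does not specify the index or threshold actually used, so both are assumptions.

```python
# Hedged sketch: separating vegetation points with the Excess Green index
# (ExG = 2g - r - b) computed from per-point RGB values.
import numpy as np

def classify_vegetation(colors, threshold=0.05):
    """colors: (N, 3) RGB values in [0, 1]; returns a boolean vegetation mask."""
    s = colors.sum(axis=1, keepdims=True) + 1e-9
    r, g, b = (colors / s).T                 # chromatic coordinates
    exg = 2.0 * g - r - b                    # Excess Green index
    return exg > threshold

rgb = np.random.rand(10000, 3)
print("vegetation points:", int(classify_vegetation(rgb).sum()))
```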

