Post-Earthquake Recovery Phase Monitoring and Mapping Based on UAS Data

2020 ◽  
Vol 9 (7) ◽  
pp. 447
Author(s):  
Nikolaos Soulakellis ◽  
Christos Vasilakos ◽  
Stamatis Chatzistamatis ◽  
Dimitris Kavroudakis ◽  
Georgios Tataris ◽  
...  

Geoinformatics plays an essential role during the recovery phase of a post-earthquake situation. The aim of this paper is to present the methodology followed and the results obtained by processing Unmanned Aircraft Systems (UAS) 4K video footage and automating geo-information methods targeted at both monitoring the demolition process and mapping the demolished buildings. The field campaigns took place in the traditional settlement of Vrisa (Lesvos, Greece), which was heavily damaged by a strong earthquake (Mw = 6.3) on 12 June 2017. A flight campaign was carried out on 3 February 2019 to collect aerial 4K video footage using an unmanned aircraft. The Structure from Motion (SfM) method was applied to frames extracted from the 4K video footage to produce accurate and very detailed 3D point clouds, as well as the Digital Surface Model (DSM) of the building stock of the Vrisa traditional settlement, twenty months after the earthquake. This dataset was compared with the corresponding one derived on 25 July 2017, a few days after the earthquake. Two algorithms were developed for detecting the demolished buildings of the affected area, based on the DSMs and the 3D point clouds, respectively. The results were verified through field studies and demonstrate that the methodology is feasible and effective for building demolition detection, giving very accurate results (97%), while being easily applicable and well suited for rapid demolition mapping during the recovery phase of a post-earthquake scenario. The significant advantage of the proposed methodology is its ability to provide reliable results in a low-cost and time-efficient way and to serve all stakeholders and national and local organizations responsible for post-earthquake management.
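The abstract does not reproduce the two detection algorithms; the following is a minimal sketch of DSM-based demolition flagging, assuming two co-registered DSM rasters (post-earthquake and recovery-phase epochs) as NumPy arrays, hypothetical per-building footprint masks, and illustrative thresholds rather than the values used in the paper.

```python
import numpy as np

def detect_demolished(dsm_post_quake, dsm_recovery, footprints,
                      drop_threshold=2.0, ratio=0.5):
    """Flag building footprints whose surface height dropped markedly between epochs.

    dsm_post_quake, dsm_recovery : 2D arrays of identical shape (co-registered DSMs).
    footprints : dict mapping building id -> boolean mask over the raster (assumed input).
    drop_threshold : height loss (m) treated as demolition evidence (illustrative value).
    ratio : fraction of footprint cells that must show the drop (illustrative value).
    """
    height_change = dsm_post_quake - dsm_recovery  # positive where material was removed
    demolished = []
    for building_id, mask in footprints.items():
        cells = height_change[mask]
        if cells.size and np.mean(cells > drop_threshold) >= ratio:
            demolished.append(building_id)
    return demolished
```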

2021 ◽  
Vol 10 (5) ◽  
pp. 345
Author(s):  
Konstantinos Chaidas ◽  
George Tataris ◽  
Nikolaos Soulakellis

In a post-earthquake scenario, the semantic enrichment of 3D building models with seismic damage is crucial from the perspective of disaster management. This paper presents the methodology and results of post-earthquake Level of Detail 3 (LOD3) building modelling, with semantic enrichment of the seismic damage based on the European Macroseismic Scale (EMS-98). The study area is the Vrisa traditional settlement on the island of Lesvos, Greece, which was affected by a devastating earthquake of Mw = 6.3 on 12 June 2017. The applied methodology consists of the following steps: (a) unmanned aircraft systems (UAS) nadir and oblique images are acquired and photogrammetrically processed for 3D point cloud generation, (b) 3D building models are created based on the 3D point clouds and (c) the 3D building models are transformed into the LOD3 City Geography Markup Language (CityGML) standard, with enriched semantics describing the seismic damage of every part of the building (walls, roof, etc.). The results show that, by following this methodology, CityGML LOD3 models can be generated and enriched with the buildings' seismic damage. These models can assist in the decision-making process during the recovery phase of a settlement, as well as serve as the basis for its monitoring over time. Finally, these models can contribute to estimating the reconstruction cost of the buildings.
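As a rough illustration of the semantic-enrichment step, the sketch below attaches an EMS-98 damage grade to a CityGML building as a generic string attribute. The CityGML 2.0 namespaces and the attribute name are assumptions; the encoding actually used by the authors may differ.

```python
import xml.etree.ElementTree as ET

# Assumed CityGML 2.0 namespaces (core, building and generics modules)
NS = {
    "core": "http://www.opengis.net/citygml/2.0",
    "bldg": "http://www.opengis.net/citygml/building/2.0",
    "gen":  "http://www.opengis.net/citygml/generics/2.0",
}
for prefix, uri in NS.items():
    ET.register_namespace(prefix, uri)

def add_ems98_damage(citygml_path, building_id, damage_grade, out_path):
    """Attach an EMS-98 damage grade to one building as a generic string attribute."""
    tree = ET.parse(citygml_path)
    for building in tree.iter(f"{{{NS['bldg']}}}Building"):
        if building.get("{http://www.opengis.net/gml}id") == building_id:
            attr = ET.SubElement(building, f"{{{NS['gen']}}}stringAttribute",
                                 name="EMS98_damage")          # hypothetical attribute name
            value = ET.SubElement(attr, f"{{{NS['gen']}}}value")
            value.text = damage_grade                           # e.g. "Grade 3"
    tree.write(out_path, xml_declaration=True, encoding="utf-8")
```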


2019 ◽  
Vol 93 (3) ◽  
pp. 411-429 ◽  
Author(s):  
Maria Immacolata Marzulli ◽  
Pasi Raumonen ◽  
Roberto Greco ◽  
Manuela Persia ◽  
Patrizia Tartarino

Abstract. Methods for the three-dimensional (3D) reconstruction of forest trees have been suggested for data from both active and passive sensors. Laser scanner technologies have become popular in the last few years, despite their high costs. Following improvements in photogrammetric algorithms (e.g. structure from motion, SfM), photographs have become a new low-cost source of 3D point clouds. In this study, we use images captured by a smartphone camera to calculate dense point clouds of a forest plot using SfM. Eighteen point clouds were produced by changing the densification parameters (Image scale, Point density, Minimum number of matches) in order to investigate their influence on the quality of the point clouds produced. In order to estimate diameter at breast height (d.b.h.) and stem volumes, we developed an automatic method that extracts the stems from the point cloud and then models them with cylinders. The results show that Image scale is the most influential parameter in terms of identifying and extracting trees from the point clouds. The best performance of the cylinder modelling from point clouds, compared to field data, had an RMSE of 1.9 cm for d.b.h. and 0.094 m³ for volume. Thus, for forest management and planning purposes, it is possible to use our photogrammetric and modelling methods to measure d.b.h., stem volume and possibly other forest inventory metrics rapidly and without felling trees. The proposed methodology significantly reduces working time in the field, using ‘non-professional’ instruments and automating estimates of dendrometric parameters.
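The stem-extraction and cylinder-modelling pipeline is not reproduced in the abstract; below is a minimal sketch of one step, estimating d.b.h. by an algebraic (Kåsa) circle fit to a breast-height slice of a single stem, assuming the points are already segmented per stem and height-normalized to the ground.

```python
import numpy as np

def dbh_from_slice(stem_points, breast_height=1.3, slice_thickness=0.1):
    """Estimate d.b.h. by fitting a circle to the stem slice around breast height.

    stem_points : (N, 3) array for a single stem, z measured from ground level (m).
    Returns the diameter in the same units as the input coordinates.
    """
    z = stem_points[:, 2]
    in_slice = np.abs(z - breast_height) < slice_thickness / 2
    xy = stem_points[in_slice, :2]
    # Algebraic (Kasa) circle fit: solve [2x 2y 1] [cx cy c]^T = x^2 + y^2,
    # where c = r^2 - cx^2 - cy^2.
    A = np.column_stack([2 * xy[:, 0], 2 * xy[:, 1], np.ones(len(xy))])
    b = np.sum(xy ** 2, axis=1)
    (cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    radius = np.sqrt(c + cx ** 2 + cy ** 2)
    return 2 * radius
```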


Author(s):  
T. Guo ◽  
A. Capra ◽  
M. Troyer ◽  
A. Gruen ◽  
A. J. Brooks ◽  
...  

Recent advances in the automation of photogrammetric 3D modelling software packages have stimulated interest in reconstructing highly accurate 3D object geometry in unconventional environments such as underwater, utilizing simple and low-cost camera systems. The accuracy of underwater 3D modelling is affected by more parameters than in single-medium cases. This study is part of a larger project on 3D measurements of temporal change of coral cover in tropical waters. It compares the accuracies of 3D point clouds generated from images acquired with a system camera mounted in an underwater housing and with the popular GoPro cameras, respectively. A precisely measured calibration frame was placed in the target scene in order to provide accurate control information and also to quantify the errors of the modelling procedure. In addition, several objects (cinder blocks) with various shapes were arranged in air and underwater, and 3D point clouds were generated by automated image matching. These were further used to examine the relative accuracy of the point cloud generation by comparing the point clouds of the individual objects with the objects measured by the system camera in air (the best possible values). Given a working distance of about 1.5 m, the GoPro camera can achieve a relative accuracy of 1.3 mm in air and 2.0 mm in water. The system camera achieved an accuracy of 1.8 mm in water, which meets our requirements for coral measurement in this system.
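A minimal sketch of the kind of cloud-to-cloud comparison described, assuming two already registered point clouds as NumPy arrays and using nearest-neighbour distances as the error measure (not necessarily the exact procedure used in the study):

```python
import numpy as np
from scipy.spatial import cKDTree

def cloud_to_cloud_errors(test_cloud, reference_cloud):
    """Nearest-neighbour distances from each test point to the reference cloud.

    Both inputs are (N, 3) arrays assumed to be in the same coordinate frame,
    e.g. after registration on the calibration frame. Returns (RMSE, mean error).
    """
    distances, _ = cKDTree(reference_cloud).query(test_cloud)
    return np.sqrt(np.mean(distances ** 2)), np.mean(distances)
```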


Author(s):  
M. Karpina ◽  
M. Jarząbek-Rychard ◽  
P. Tymków ◽  
A. Borkowski

Manual in-situ measurements of geometric tree parameters for biomass volume estimation are time-consuming and economically ineffective. Photogrammetric techniques can be deployed in order to automate the measurement procedure. The purpose of the presented work is automatic tree growth estimation based on Unmanned Aerial Vehicle (UAV) imagery. The experiment was conducted in an agricultural test field with Scots pine canopies. The data were collected using a Leica Aibotix X6V2 platform equipped with a Nikon D800 camera. Reference geometric parameters of selected sample plants were measured manually each week. The in situ measurements were correlated with the UAV data acquisition, with the aim of investigating the optimal flight conditions and parameter settings for image acquisition. The collected images were processed in a state-of-the-art tool, resulting in the generation of dense 3D point clouds. An algorithm was developed to estimate geometric tree parameters from the 3D points. Stem positions and tree tops are identified automatically in a cross-section, followed by the calculation of tree heights. The automatically derived height values are compared to the reference measurements performed manually. The comparison allows for the evaluation of the automatic growth estimation process. The accuracy achieved using UAV photogrammetry for tree height estimation is about 5 cm.
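The height-estimation algorithm itself is not given in the abstract; the sketch below shows one plausible formulation, assuming the crown points of a single tree and nearby ground-classified points have already been separated. The search radius is an illustrative parameter.

```python
import numpy as np

def tree_height(crown_points, ground_points, radius=1.0):
    """Tree height as top-of-crown elevation minus local ground elevation.

    crown_points  : (N, 3) points segmented for one tree.
    ground_points : (M, 3) ground-classified points around the stem
                    (assumed to contain points within `radius` of the tree top).
    radius        : horizontal search radius (m) for the local ground level.
    """
    top = crown_points[np.argmax(crown_points[:, 2])]
    horizontal = np.linalg.norm(ground_points[:, :2] - top[:2], axis=1)
    local_ground = np.median(ground_points[horizontal < radius, 2])
    return top[2] - local_ground
```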


2019 ◽  
Vol 11 (10) ◽  
pp. 1204 ◽  
Author(s):  
Yue Pan ◽  
Yiqing Dong ◽  
Dalei Wang ◽  
Airong Chen ◽  
Zhen Ye

Three-dimensional (3D) digital technology is essential to the maintenance and monitoring of cultural heritage sites. In the field of bridge engineering, 3D models generated from point clouds of existing bridges are drawing increasing attention. Currently, the widespread use of unmanned aerial vehicles (UAVs) provides a practical solution for generating 3D point clouds as well as models, which can drastically reduce the manual effort and cost involved. In this study, we present a semi-automated framework for generating structural surface models of heritage bridges. To be specific, we propose to tackle this challenge via a novel top-down method for segmenting main bridge components, combined with rule-based classification, to produce labeled 3D models from UAV photogrammetric point clouds. The point clouds of the heritage bridge are generated from the captured UAV images through the structure-from-motion workflow. A segmentation method is developed based on the supervoxel structure and global graph optimization, which can effectively separate bridge components based on geometric features. Then, recognition by means of a classification tree and bridge geometry is utilized to identify the different structural elements among the obtained segments. Finally, surface modeling is conducted to generate surface models of the recognized elements. Experiments using two bridges in China demonstrate the potential of the presented structural model reconstruction method using UAV photogrammetry and point cloud processing for the 3D digital documentation of heritage bridges. By using given markers, the reconstruction error of the point clouds can be as small as 0.4%. Moreover, the precision and recall of the segmentation results on testing data are better than 0.8, and a recognition accuracy better than 0.8 is achieved.
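The rule-based recognition step might look roughly like the toy sketch below, which labels one segment from simple geometric cues. The thresholds and the deck-elevation input are purely illustrative and are not the rules used in the paper.

```python
import numpy as np

def classify_segment(segment_points, deck_elevation, tol=1.0):
    """Toy rule-based labelling of a bridge segment from simple geometric cues.

    segment_points : (N, 3) array for one segment produced by the segmentation step.
    deck_elevation : approximate deck level (m), assumed known from the model.
    """
    z = segment_points[:, 2]
    extent = segment_points.max(axis=0) - segment_points.min(axis=0)
    vertical_ratio = extent[2] / max(extent[0], extent[1], 1e-6)
    if abs(np.median(z) - deck_elevation) < tol and vertical_ratio < 0.2:
        return "deck"        # flat, near deck level
    if vertical_ratio > 2.0 and z.min() < deck_elevation - tol:
        return "pier"        # tall, reaching well below the deck
    return "other"
```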


Drones ◽  
2020 ◽  
Vol 4 (1) ◽  
pp. 6 ◽  
Author(s):  
Ryan G. Howell ◽  
Ryan R. Jensen ◽  
Steven L. Petersen ◽  
Randy T. Larsen

In situ measurements of sagebrush have traditionally been expensive and time consuming. Currently, improvements in small Unmanned Aerial Systems (sUAS) technology can be used to quantify sagebrush morphology and community structure with high-resolution imagery on western rangelands, especially in sensitive habitat of the Greater sage-grouse (Centrocercus urophasianus). The emergence of photogrammetry algorithms to generate 3D point clouds from true-color imagery can potentially increase the efficiency and accuracy of measuring shrub height in sage-grouse habitat. Our objective was to determine optimal parameters for measuring sagebrush height, including flight altitude, single- vs. double-pass, and continuous vs. paused flight. We acquired imagery using a DJI Mavic Pro 2 multi-rotor Unmanned Aerial Vehicle (UAV) equipped with an RGB camera, flown at 30.5, 45, 75, and 120 m, implementing single-pass and double-pass methods and using continuous flight and paused flight for each photo method. We generated a Digital Surface Model (DSM) from which we derived plant height, and then performed an accuracy assessment using on-the-ground measurements taken at the time of flight. We found high correlation between field-measured heights and estimated heights, with a mean difference of approximately 10 cm (SE = 0.4 cm) and little variability in accuracy between flights with different altitudes and other parameters after statistical correction using linear regression. We conclude that higher-altitude flights using a single-pass method are optimal for measuring sagebrush height, due to lower requirements in data storage and processing time.
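A minimal sketch of the statistical correction step, assuming paired UAV-derived and field-measured shrub heights and a simple linear regression fit (the authors' exact correction model is not specified in the abstract):

```python
import numpy as np

def calibrate_heights(uav_heights, field_heights):
    """Fit a linear correction of UAV-derived shrub heights against field heights.

    Returns the corrected heights, the mean absolute difference and its standard error.
    """
    uav = np.asarray(uav_heights, dtype=float)
    field = np.asarray(field_heights, dtype=float)
    slope, intercept = np.polyfit(uav, field, 1)      # simple linear regression
    corrected = slope * uav + intercept
    residuals = corrected - field
    mean_diff = np.mean(np.abs(residuals))
    standard_error = np.std(residuals, ddof=1) / np.sqrt(len(residuals))
    return corrected, mean_diff, standard_error
```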


Author(s):  
G. Stavropoulou ◽  
G. Tzovla ◽  
A. Georgopoulos

Over the past decade, large-scale photogrammetric products have been extensively used for the geometric documentation of cultural heritage monuments, as they combine metric information with the qualities of an image document. Additionally, the rising technology of terrestrial laser scanning has enabled the easier and faster production of accurate digital surface models (DSMs), which have in turn contributed to the documentation of heavily textured monuments. However, due to the required accuracy of control points, photogrammetric methods are always applied in combination with surveying measurements and are hence dependent on them. Along this line of thought, this paper explores the possibility of limiting the surveying measurements and the field work necessary for the production of large-scale photogrammetric products, and proposes an alternative method in which the necessary control points, instead of being measured with surveying procedures, are selected from a dense and accurate point cloud. Using this point cloud also as a surface model, the only field work necessary is the scanning of the object and the image acquisition, which need not be subject to strict planning. To evaluate the proposed method, an algorithm and a complementary interface were produced that allow the parallel manipulation of 3D point clouds and images, and through which single-image procedures take place. The paper concludes by presenting the results of a case study at the ancient temple of Hephaestus in Athens and by providing a set of guidelines for implementing the method effectively.
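A single-image orientation from control points picked in the point cloud could be sketched as below, using OpenCV's PnP solver as a stand-in for the resection implemented in the authors' own interface; the calibrated camera matrix and at least four point correspondences are assumed.

```python
import numpy as np
import cv2  # OpenCV

def orient_single_image(control_points_3d, image_points_2d, camera_matrix, dist_coeffs=None):
    """Resect a single image from control points selected in the laser-scanner point cloud.

    control_points_3d : (N, 3) coordinates picked from the dense point cloud (N >= 4).
    image_points_2d   : (N, 2) corresponding pixel measurements in the image.
    camera_matrix     : 3x3 interior-orientation matrix (assumed calibrated).
    Returns the exterior orientation as a rotation vector and translation vector.
    """
    ok, rvec, tvec = cv2.solvePnP(
        np.asarray(control_points_3d, dtype=np.float64),
        np.asarray(image_points_2d, dtype=np.float64),
        camera_matrix,
        dist_coeffs if dist_coeffs is not None else np.zeros(5),
    )
    return rvec, tvec
```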


2021 ◽  
Vol 7 (2) ◽  
pp. 57-74
Author(s):  
Lamyaa Gamal EL-Deen Taha ◽  
A. I. Ramzi ◽  
A. Syarawi ◽  
A. Bekheet

Until recently, the most highly accurate digital surface models were obtained from airborne LiDAR. With the development of a new generation of large-format digital photogrammetric aerial cameras, a fully digital photogrammetric workflow has become possible. Digital airborne images are a source for elevation extraction and orthophoto generation. This research is concerned with the generation of digital surface models and orthophotos as applications of high-resolution images. The following steps were performed. Benchmark data from LiDAR and a digital aerial camera were used. First, image orientation and aerial triangulation (AT) were performed. Second, a digital surface model (DSM) was generated automatically from the digital aerial camera imagery. Third, a true digital orthoimage was generated from the digital aerial camera imagery, and another orthoimage was generated using the LiDAR digital surface model (DSM). The Leica Photogrammetry Suite (LPS) module of Erdas Imagine 2014 software was used for processing. The resulting orthoimages from both techniques were then mosaicked. The results show that the DSM produced automatically from the digital aerial camera yields much denser photogrammetric 3D point clouds than the LiDAR 3D point clouds. The true orthoimage produced by the second approach was found to be better than that produced by the first approach. Five approaches were tested for classification of the best orthorectified image mosaic, using subpixel-based (neural network) and pixel-based (minimum distance and maximum likelihood) classifiers. Multiple cues were extracted, such as texture (entropy, mean), the digital elevation model, the digital surface model, the normalized digital surface model (nDSM) and the intensity image. The contributions of the individual cues used in the classification were evaluated. The best cue integration was found to be intensity (pan) + nDSM + entropy, followed by intensity (pan) + nDSM + mean, then intensity image + mean + entropy, after that the DSM image with two texture measures (mean and entropy), followed by the colour image. Integration with height data increases the accuracy, as does integration with entropy texture. Across the fifteen resulting classification cases, the maximum likelihood classifier performed best, followed by minimum distance and then the neural network classifier. We attribute this to the fine resolution of the digital camera image; the subpixel classifier (neural network) is not well suited to classifying aerial digital camera images.
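As an illustration of the pixel-based classification of a stacked cue image (e.g. intensity + nDSM + entropy), the sketch below implements a minimum-distance-to-mean classifier in NumPy. The cue stack and training samples are assumed inputs; this is not the LPS/Erdas workflow used in the study.

```python
import numpy as np

def minimum_distance_classify(cue_stack, training_samples):
    """Minimum-distance-to-mean classification of a stacked cue image.

    cue_stack        : (rows, cols, n_cues) array, e.g. intensity + nDSM + entropy.
    training_samples : dict mapping class name -> (n, n_cues) array of training pixels.
    Returns the label image (class indices) and the list of class names.
    """
    classes = list(training_samples)
    means = np.stack([training_samples[c].mean(axis=0) for c in classes])  # (k, n_cues)
    pixels = cue_stack.reshape(-1, cue_stack.shape[-1])                    # (P, n_cues)
    distances = np.linalg.norm(pixels[:, None, :] - means[None, :, :], axis=2)
    labels = np.argmin(distances, axis=1).reshape(cue_stack.shape[:2])
    return labels, classes
```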


Sensors ◽  
2019 ◽  
Vol 19 (18) ◽  
pp. 3952 ◽  
Author(s):  
* ◽  
*

Three-dimensional (3D) models are widely used in clinical applications, geosciences, cultural heritage preservation, and engineering; this, together with new emerging needs such as building information modeling (BIM), has driven the development of new data capture techniques and devices with low cost and a reduced learning curve that allow non-specialized users to employ them. This paper presents a simple, self-assembled device for 3D point cloud data capture with an estimated base price under €2500; furthermore, a workflow for the calculations is described, which includes a Visual SLAM-photogrammetric threaded algorithm implemented in C++. Another purpose of this work is to validate the proposed system in BIM working environments. To achieve this, several 3D point clouds were obtained in outdoor tests and the coordinates of 40 points were measured by means of this device, with data capture distances ranging between 5 and 20 m. These were subsequently compared to the coordinates of the same targets measured by a total station. The Euclidean average distance errors and root mean square errors (RMSEs) ranged between 12–46 mm and 8–33 mm respectively, depending on the data capture distance (5–20 m). Furthermore, the proposed system was compared with a commonly used photogrammetric methodology based on Agisoft Metashape software. The results obtained demonstrate that the proposed system satisfies (in each case) the tolerances of ‘level 1’ (51 mm) and ‘level 2’ (13 mm) for point cloud acquisition in urban design and historic documentation, according to the BIM Guide for 3D Imaging (U.S. General Services Administration).
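The accuracy assessment against the total station reduces to simple per-target arithmetic; a minimal sketch, assuming the two coordinate lists are matched target by target:

```python
import numpy as np

def target_errors(device_xyz, total_station_xyz):
    """Euclidean errors and per-axis RMSE between device and total-station coordinates.

    device_xyz, total_station_xyz : (N, 3) arrays of matched target coordinates (m).
    """
    diff = np.asarray(device_xyz) - np.asarray(total_station_xyz)
    euclidean = np.linalg.norm(diff, axis=1)           # per-target 3D error
    rmse_per_axis = np.sqrt(np.mean(diff ** 2, axis=0))
    return euclidean.mean(), rmse_per_axis
```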


2017 ◽  
Author(s):  
Luisa Griesbaum ◽  
Sabrina Marx ◽  
Bernhard Höfle

Abstract. In recent years, the number of people affected by flooding caused by extreme weather events has increased considerably. In order to provide support in disaster recovery or to develop mitigation plans, accurate flood information is necessary. Pluvial urban floods in particular, characterized by high temporal and spatial variations, are not well documented. This study proposes a new, low-cost approach to determining the local flood elevation and the inundation depth of buildings based on user-generated flood images. It first applies close-range digital photogrammetry to generate a geo-referenced 3D point cloud. Second, based on estimated camera orientation parameters, the flood level captured in a single flood image is mapped to the previously derived point cloud. The local flood elevation and the building inundation depth can then be derived automatically from the point cloud. The proposed method is carried out once for each of 66 different flood images showing the same building façade. An overall accuracy of 0.05 m with an uncertainty of ±0.13 m for the derived flood elevation within the area of interest, and an accuracy of 0.13 m ± 0.10 m for the determined building inundation depth, are achieved. Our results demonstrate that the proposed method can provide reliable flood information on a local scale using user-generated flood images as input. The approach can thus allow inundation depth maps to be derived even in complex urban environments with relatively high accuracy.
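Once the flood level has been mapped into the geo-referenced point cloud, the inundation depth is essentially a difference of elevations; a minimal sketch, assuming the flood-line points and the ground points at the façade have already been extracted from the cloud:

```python
import numpy as np

def inundation_depth(flood_line_points, facade_ground_points):
    """Inundation depth as flood water elevation minus local ground elevation at the facade.

    flood_line_points    : (N, 3) cloud points mapped from the water line in a flood image.
    facade_ground_points : (M, 3) ground points along the building facade.
    Medians are used as robust estimates of the two elevations.
    """
    flood_elevation = np.median(flood_line_points[:, 2])
    ground_elevation = np.median(facade_ground_points[:, 2])
    return flood_elevation - ground_elevation
```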

