Structure from Motion Multisource Application for Landslide Characterization and Monitoring: The Champlas du Col Case Study, Sestriere, North-Western Italy

Sensors ◽  
2019 ◽  
Vol 19 (10) ◽  
pp. 2364 ◽  
Author(s):  
Martina Cignetti ◽  
Danilo Godone ◽  
Aleksandra Wrzesniak ◽  
Daniele Giordan

Structure from Motion (SfM) is a powerful tool for deriving 3D point clouds from sequences of images acquired by different remote sensing technologies. Applying this approach to images captured by Remotely Piloted Aircraft Systems (RPAS), historical aerial photograms, and smartphones constitutes a valuable solution for the identification and characterization of active landslides. We applied SfM to process all the acquired and available images for the study of the Champlas du Col landslide, a complex slope instability reactivated in spring 2018 in the Piemonte Region (north-western Italy). This latest reactivation of the slide, principally due to snow melting at the end of the winter season, interrupted the main road leading to Sestriere, one of the most famous ski resorts in north-western Italy. We tested how SfM can be applied to high-resolution multisource datasets by processing: (i) historical aerial photograms collected from five different regional flights, (ii) RGB and multi-spectral images acquired by two RPAS at different times, and (iii) terrestrial image sequences of the most representative kinematic features produced by the evolution of the landslide. In addition, we obtained an overall framework of the historical development of the area of interest and distinguished several generations of landslides. Moreover, an in-depth geomorphological characterization of the Champlas du Col landslide reactivation was carried out by testing a cost-effective and rapid methodology based on SfM principles, which is easily repeatable for characterizing and investigating active landslides.
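For readers unfamiliar with the SfM principle underlying this kind of multisource processing, the following is a minimal two-view sketch in Python with OpenCV. It is illustrative only: the intrinsics, file names, and thresholds are assumptions, and it is not the processing chain used in the study.

```python
# Minimal two-view SfM sketch (illustrative only, not the study's processing chain).
# Assumes a calibrated camera matrix K and two overlapping images img1.jpg / img2.jpg.
import cv2
import numpy as np

K = np.array([[2800.0, 0.0, 2000.0],
              [0.0, 2800.0, 1500.0],
              [0.0, 0.0, 1.0]])          # hypothetical intrinsics

img1 = cv2.imread("img1.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("img2.jpg", cv2.IMREAD_GRAYSCALE)

# 1. Detect and match local features
sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)
matches = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True).match(des1, des2)

pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

# 2. Relative orientation from the essential matrix (RANSAC rejects outliers)
E, inliers = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
_, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=inliers)

# 3. Triangulate a sparse point cloud in the frame of the first camera
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([R, t])
pts4d = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)
cloud = (pts4d[:3] / pts4d[3]).T            # N x 3 sparse point cloud
```

In a full multisource workflow the same idea is extended to many images per dataset, with a bundle adjustment refining all camera poses and 3D points jointly.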

Author(s):  
A. Al-Rawabdeh ◽  
H. Al-Gurrani ◽  
K. Al-Durgham ◽  
I. Detchev ◽  
F. He ◽  
...  

Landslides are among the major threats to urban landscape and manmade infrastructure. They often cause economic losses, property damage, and loss of lives. Temporal monitoring data of landslides from different epochs enables the evaluation of landslide progression. Alignment of overlapping surfaces from two or more epochs is crucial for the proper analysis of landslide dynamics. The traditional methods for point-cloud-based landslide monitoring rely on using a variation of the Iterative Closest Point (ICP) registration procedure to align any reconstructed surfaces from different epochs to a common reference frame. However, the ICP-based registration can sometimes fail or may not provide sufficient accuracy. For example, point clouds from different epochs might converge to a local minimum due to a lack of geometric variability within the data. Also, manual interaction is required to exclude any non-stable areas from the registration process. In this paper, a robust image-based registration method is introduced for the simultaneous evaluation of all registration parameters. These include the Interior Orientation Parameters (IOPs) of the camera and the Exterior Orientation Parameters (EOPs) of the involved images from all available observation epochs, estimated via a bundle block adjustment with self-calibration. Next, a semi-global dense matching technique is implemented to generate dense 3D point clouds for each epoch using the images captured in that epoch separately. The normal distances between any two consecutive point clouds can then be readily computed, because the point clouds are already effectively co-registered. A low-cost DJI Phantom II Unmanned Aerial Vehicle (UAV) was customised and used in this research for temporal data collection over an active soil creep area in Lethbridge, Alberta, Canada. The customisation included adding a GPS logger and a Large-Field-Of-View (LFOV) action camera, which facilitated capturing high-resolution geo-tagged images in two epochs over the period of one year (i.e., May 2014 and May 2015). Note that due to the coarse accuracy of the on-board GPS receiver (approximately ±5-10 m), the geo-tagged positions of the images were only used as initial values in the bundle block adjustment. Normal distances, signifying detected changes, varying from 20 cm to 4 m were identified between the two epochs. The accuracy of the co-registered surfaces was estimated by comparing non-active patches within the monitored area of interest. Since these non-active sub-areas are stationary, the computed normal distances should theoretically be close to zero. The quality control of the registration results showed that the average normal distance was approximately 4 cm, which is within the noise level of the reconstructed surfaces.
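Because the point clouds are already co-registered through the image-based bundle adjustment, change detection reduces to computing signed normal (point-to-plane) distances between epochs. The sketch below shows one way to do this with numpy/scipy; the array names are assumptions and this is not the authors' implementation.

```python
# Hedged sketch of the change-detection step: signed normal distances between two
# co-registered point clouds (epoch1_cloud, epoch2_cloud are assumed N x 3 arrays).
import numpy as np
from scipy.spatial import cKDTree

def estimate_normals(points, k=20):
    """Per-point normals from a local PCA over the k nearest neighbours."""
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k)
    normals = np.empty_like(points)
    for i, nb in enumerate(idx):
        neighbours = points[nb] - points[nb].mean(axis=0)
        # normal = singular vector of the smallest singular value of the local spread
        _, _, vt = np.linalg.svd(neighbours, full_matrices=False)
        normals[i] = vt[-1]
    return normals

def normal_distances(reference, compared, k=20):
    """Distance from each reference point to the other epoch along its local normal."""
    normals = estimate_normals(reference, k)
    tree = cKDTree(compared)
    _, nearest = tree.query(reference)
    offsets = compared[nearest] - reference
    return np.einsum("ij,ij->i", offsets, normals)   # signed point-to-plane distances

# d = normal_distances(epoch1_cloud, epoch2_cloud)
# Distances over stable (non-active) patches should be close to zero and give an
# estimate of the registration noise level (the paper reports roughly 4 cm).
```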


Author(s):  
E. Sánchez-García ◽  
A. Balaguer-Beser ◽  
R. Taborda ◽  
J. E. Pardo-Pascual

Beach and fluvial systems are highly dynamic environments that are constantly modified by the action of different natural and anthropic phenomena. To understand their behaviour and to support a sustainable management of these fragile environments, it is very important to have access to cost-effective tools. These methods should rely on cutting-edge technologies that allow monitoring the dynamics of natural systems with high periodicity and repeatability at different temporal and spatial scales, instead of the tedious and expensive fieldwork that has been carried out to date. The work presented herein analyses the potential of terrestrial photogrammetry to describe beach morphology. Data processing and the generation of high-resolution 3D point clouds and derived DEMs are supported by the commercial software Agisoft PhotoScan. Model validation is performed by comparing the elevation differences between the photogrammetric point cloud and GPS data along different beach profiles. The results demonstrate the potential of photogrammetric 3D modelling to monitor morphological changes and natural events, with differences between 6 and 25 cm. Furthermore, the usefulness of these techniques for monitoring the layout of a fluvial system is tested by performing modelling trials in a hydraulic pilot channel.
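The validation described above amounts to interpolating the photogrammetric surface at the GPS profile positions and examining the elevation residuals. A hedged sketch of that comparison is given below; cloud_xyz and gps_xyz are assumed inputs, not artifacts from the paper.

```python
# Hedged sketch of the validation step: compare photogrammetric elevations against
# GPS profile points. cloud_xyz and gps_xyz are assumed N x 3 arrays (x, y, z) in
# the same projected coordinate system.
import numpy as np
from scipy.interpolate import griddata

def elevation_residuals(cloud_xyz, gps_xyz):
    # Interpolate the point-cloud surface at the horizontal GPS positions
    z_model = griddata(cloud_xyz[:, :2], cloud_xyz[:, 2],
                       gps_xyz[:, :2], method="linear")
    residuals = z_model - gps_xyz[:, 2]
    valid = ~np.isnan(residuals)          # GPS points outside the cloud hull give NaN
    return residuals[valid]

# res = elevation_residuals(cloud_xyz, gps_xyz)
# print(f"mean {res.mean():.3f} m, std {res.std():.3f} m, "
#       f"RMSE {np.sqrt((res**2).mean()):.3f} m")
```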


Sensors ◽  
2019 ◽  
Vol 19 (20) ◽  
pp. 4569
Author(s):  
Joan R. Rosell-Polo ◽  
Eduard Gregorio ◽  
Jordi Llorens

In this editorial, we provide an overview of the content of the special issue on “Terrestrial Laser Scanning”. The aim of this Special Issue is to bring together innovative developments and applications of terrestrial laser scanning (TLS), understood in a broad sense. Thus, although most contributions mainly involve the use of laser-based systems, other alternative technologies that also allow for obtaining 3D point clouds for the measurement and the 3D characterization of terrestrial targets, such as photogrammetry, are also considered. The 15 published contributions are mainly focused on the applications of TLS to the following three topics: TLS performance and point cloud processing, applications to civil engineering, and applications to plant characterization.


2019 ◽  
Vol 259 ◽  
pp. 105131 ◽  
Author(s):  
Xiaojun Li ◽  
Ziyang Chen ◽  
Jianqin Chen ◽  
Hehua Zhu

Author(s):  
Fouad Amer ◽  
Mani Golparvar-Fard

Complete and accurate 3D monitoring of indoor construction progress using visual data is challenging. It requires (a) capturing a large number of overlapping images, which is time-consuming and labor-intensive to collect, and (b) processing using Structure from Motion (SfM) algorithms, which can be computationally expensive. To address these inefficiencies, this paper proposes a hybrid SfM-SLAM 3D reconstruction algorithm along with a decentralized data collection workflow to map indoor construction work locations in 3D and at any desired frequency. The hybrid 3D reconstruction method is composed of an SfM pipeline coupled with Multi-View Stereo (MVS) to generate 3D point clouds and a SLAM (Simultaneous Localization and Mapping) algorithm to register the separately formed models together. Our SfM and SLAM pipelines are built on binary Oriented FAST and Rotated BRIEF (ORB) descriptors to tightly couple these two separate reconstruction workflows and enable fast computation. To demonstrate the data capture workflow and validate the proposed method, a case study was conducted on a real-world construction site. Compared to state-of-the-art methods, our preliminary results show a decrease in both registration error and processing time, demonstrating the potential of using daily images captured by different trades coupled with weekly walkthrough videos captured by a field engineer for complete 3D visual monitoring of indoor construction operations.
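The shared building block of both pipelines is binary ORB features matched under the Hamming norm, which is what keeps feature sharing between SfM and SLAM cheap. A minimal OpenCV sketch of that step is shown below; the frame names and ratio threshold are assumptions and this is not the authors' implementation.

```python
# Minimal ORB matching sketch (illustrative; not the authors' SfM-SLAM implementation).
# Binary ORB descriptors are matched with the Hamming norm.
import cv2

img_a = cv2.imread("frame_a.jpg", cv2.IMREAD_GRAYSCALE)
img_b = cv2.imread("frame_b.jpg", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=2000)
kp_a, des_a = orb.detectAndCompute(img_a, None)
kp_b, des_b = orb.detectAndCompute(img_b, None)

# Ratio test on the two best Hamming-distance candidates keeps distinctive matches
matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
knn = matcher.knnMatch(des_a, des_b, k=2)
good = [m for m, n in knn if m.distance < 0.75 * n.distance]
print(f"{len(good)} putative correspondences between the two frames")
```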


Author(s):  
J. Elseberg ◽  
D. Borrmann ◽  
J. Schauer ◽  
A. Nüchter ◽  
D. Koriath ◽  
...  

Motivated by the increasing need for rapid characterization of environments in 3D, we designed and built a sensor skid that automates the work of an operator of terrestrial laser scanners. The system combines terrestrial laser scanning with kinematic laser scanning and uses a novel semi-rigid SLAM method. It enables us to digitize factory environments without the need to stop production. The acquired 3D point clouds are precise and suitable for detecting objects that collide with items moved along the production line.


2020 ◽  
Vol 12 (3) ◽  
pp. 351 ◽  
Author(s):  
Seyyed Meghdad Hasheminasab ◽  
Tian Zhou ◽  
Ayman Habib

Imagery acquired by unmanned aerial vehicles (UAVs) has been widely used for three-dimensional (3D) reconstruction/modeling in various digital agriculture applications, such as phenotyping, crop monitoring, and yield prediction. 3D reconstruction from well-textured UAV-based images has matured, and the user community has access to several commercial and open-source tools that provide accurate products at a high level of automation. However, in some applications, such as digital agriculture, these approaches are not always able to produce reliable/complete products due to repetitive image patterns. The main limitation of these techniques is their inability to establish a sufficient number of correctly matched features among overlapping images, causing incomplete and/or inaccurate 3D reconstruction. This paper provides two structure from motion (SfM) strategies, which use trajectory information provided by an onboard survey-grade global navigation satellite system/inertial navigation system (GNSS/INS) and system calibration parameters. The main difference between the proposed strategies is that the first one, denoted as partially GNSS/INS-assisted SfM, implements the four stages of an automated triangulation procedure, namely, image matching, relative orientation parameters (ROPs) estimation, exterior orientation parameters (EOPs) recovery, and bundle adjustment (BA). The second strategy, denoted as fully GNSS/INS-assisted SfM, removes the EOPs estimation step while introducing a random sample consensus (RANSAC)-based strategy for removing matching outliers before the BA stage. Both strategies modify the image matching by restricting the search space for conjugate points. They also implement a linear procedure for ROPs' refinement. Finally, they use the GNSS/INS information in modified collinearity equations for a simpler BA procedure that could be used for refining system calibration parameters. Eight datasets over six agricultural fields are used to evaluate the performance of the developed strategies. In comparison with a traditional SfM framework and Pix4D Mapper Pro, the proposed strategies are able to generate denser and more accurate 3D point clouds as well as orthophotos without any gaps.
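One way to picture "restricting the search space for conjugate points" is that the GNSS/INS-derived poses define an epipolar geometry before any matching is done, so candidate correspondences far from the predicted epipolar line can be rejected. The sketch below illustrates that idea; K, R1, t1, R2, t2 (intrinsics and poses in the x = R X + t convention) are assumed inputs, and this is not the paper's exact procedure.

```python
# Hedged sketch of restricting the match search space with prior orientation data.
# Candidate correspondences far from the epipolar line implied by the GNSS/INS-derived
# relative pose are discarded before bundle adjustment.
import numpy as np

def skew(v):
    return np.array([[0, -v[2], v[1]],
                     [v[2], 0, -v[0]],
                     [-v[1], v[0], 0]])

def epipolar_distance(K, R1, t1, R2, t2, pts1, pts2):
    """Distance of each point in image 2 from the epipolar line of its candidate in image 1."""
    # Relative pose of camera 2 with respect to camera 1 (x_i = R_i X + t_i convention)
    R = R2 @ R1.T
    t = t2 - R @ t1
    F = np.linalg.inv(K).T @ skew(t) @ R @ np.linalg.inv(K)    # fundamental matrix
    ones = np.ones((len(pts1), 1))
    lines = (F @ np.hstack([pts1, ones]).T).T                  # epipolar lines in image 2
    num = np.abs(np.sum(lines * np.hstack([pts2, ones]), axis=1))
    return num / np.hypot(lines[:, 0], lines[:, 1])

# keep = epipolar_distance(K, R1, t1, R2, t2, pts1, pts2) < 3.0   # pixel tolerance
```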


2012 ◽  
Vol 591-593 ◽  
pp. 1265-1268
Author(s):  
Zi Ming Xiong ◽  
Gang Wan ◽  
Xue Feng Cao

Recent progress in structure-from-motion (SfM) has led to robust techniques that can operate under extremely general conditions. However, a limitation of SfM is that the scene can only be recovered up to a similarity transformation. We address the problem of automatically aligning 3D point clouds from SfM reconstructions to orthographic images. We extract feature lines from the 3D point clouds and project them onto the ground plane to create 2D feature lines. This reduces the alignment problem to a 2D line-to-2D line alignment (matching) problem, and a novel technique for automatic feature line matching is presented in this paper.
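The projection step can be pictured as fitting a ground plane to the SfM cloud and expressing the 3D line endpoints in 2D coordinates within that plane. The sketch below is a hedged illustration under those assumptions; the variable names are hypothetical and this is not the authors' code.

```python
# Hedged sketch of the projection step: fit a ground plane to the SfM point cloud and
# express 3D feature-line endpoints (lines_3d, shape M x 2 x 3) in 2D plane coordinates.
import numpy as np

def fit_ground_plane(cloud):
    """Least-squares plane through the cloud: returns centroid and unit normal."""
    centroid = cloud.mean(axis=0)
    _, _, vt = np.linalg.svd(cloud - centroid, full_matrices=False)
    return centroid, vt[-1]

def project_lines_to_plane(lines_3d, centroid, normal):
    """Map 3D segment endpoints to 2D coordinates in the ground plane."""
    # Build an in-plane orthonormal basis (u, v) perpendicular to the normal
    u = np.cross(normal, [0.0, 0.0, 1.0])
    if np.linalg.norm(u) < 1e-6:               # normal already vertical
        u = np.array([1.0, 0.0, 0.0])
    u /= np.linalg.norm(u)
    v = np.cross(normal, u)
    offsets = lines_3d - centroid
    return np.stack([offsets @ u, offsets @ v], axis=-1)   # M x 2 x 2 endpoints

# centroid, n = fit_ground_plane(cloud)
# lines_2d = project_lines_to_plane(lines_3d, centroid, n)
```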


2017 ◽  
Author(s):  
Luisa Griesbaum ◽  
Sabrina Marx ◽  
Bernhard Höfle

Abstract. In recent years, the number of people affected by flooding caused by extreme weather events has increased considerably. In order to provide support in disaster recovery or to develop mitigation plans, accurate flood information is necessary. Pluvial urban floods in particular, characterized by high temporal and spatial variations, are not well documented. This study proposes a new, low-cost approach to determining the local flood elevation and inundation depth of buildings based on user-generated flood images. It first applies close-range digital photogrammetry to generate a geo-referenced 3D point cloud. Second, based on estimated camera orientation parameters, the flood level captured in a single flood image is mapped to the previously derived point cloud. The local flood elevation and the building inundation depth can then be derived automatically from the point cloud. The proposed method is carried out once for each of 66 different flood images showing the same building façade. An overall accuracy of 0.05 m with an uncertainty of ±0.13 m is achieved for the derived flood elevation within the area of interest, and an accuracy of 0.13 m ± 0.10 m for the determined building inundation depth. Our results demonstrate that the proposed method can provide reliable flood information on a local scale using user-generated flood images as input. The approach can thus allow inundation depth maps to be derived even in complex urban environments with relatively high accuracy.
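The central step, mapping the flood line observed in a single image back to the georeferenced point cloud via the estimated camera orientation, can be sketched as casting a ray through the chosen pixel and taking the elevation of the first cloud points it hits. The sketch below assumes K, R, cam_center, and cloud as inputs and is not the exact procedure used in the paper.

```python
# Hedged sketch of mapping an image pixel (e.g., a point on the flood line) back to the
# georeferenced point cloud via the estimated camera orientation.
import numpy as np

def pixel_to_flood_elevation(pixel, K, R, cam_center, cloud, max_offset=0.05):
    """Elevation of the cloud points closest to the viewing ray through `pixel`."""
    # Viewing ray direction in world coordinates (x_cam = R (X - cam_center))
    ray_cam = np.linalg.inv(K) @ np.array([pixel[0], pixel[1], 1.0])
    ray_world = R.T @ ray_cam
    ray_world /= np.linalg.norm(ray_world)

    # Perpendicular distance of every cloud point from the ray
    offsets = cloud - cam_center
    along = offsets @ ray_world
    perp = np.linalg.norm(offsets - np.outer(along, ray_world), axis=1)

    hits = cloud[(perp < max_offset) & (along > 0)]   # near the ray, in front of camera
    if len(hits) == 0:
        return None
    # The first surface hit along the ray defines the flood elevation at this pixel
    first = hits[np.argmin((hits - cam_center) @ ray_world)]
    return first[2]

# z_flood = pixel_to_flood_elevation((u, v), K, R, cam_center, cloud)
```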

