GNSS/INS-Assisted Structure from Motion Strategies for UAV-Based Imagery over Mechanized Agricultural Fields

2020 ◽  
Vol 12 (3) ◽  
pp. 351 ◽  
Author(s):  
Seyyed Meghdad Hasheminasab ◽  
Tian Zhou ◽  
Ayman Habib

Imagery acquired by unmanned aerial vehicles (UAVs) has been widely used for three-dimensional (3D) reconstruction/modeling in various digital agriculture applications, such as phenotyping, crop monitoring, and yield prediction. 3D reconstruction from well-textured UAV-based images has matured, and the user community has access to several commercial and open-source tools that provide accurate products at a high level of automation. However, in some applications, such as digital agriculture, these approaches are not always able to produce reliable/complete products due to repetitive image patterns. The main limitation of these techniques is their inability to establish a sufficient number of correctly matched features among overlapping images, causing incomplete and/or inaccurate 3D reconstruction. This paper presents two structure from motion (SfM) strategies, which use trajectory information provided by an onboard survey-grade global navigation satellite system/inertial navigation system (GNSS/INS) together with system calibration parameters. The main difference between the proposed strategies is that the first one—denoted as partially GNSS/INS-assisted SfM—implements the four stages of an automated triangulation procedure, namely, image matching, estimation of relative orientation parameters (ROPs), recovery of exterior orientation parameters (EOPs), and bundle adjustment (BA). The second strategy—denoted as fully GNSS/INS-assisted SfM—removes the EOP recovery step while introducing a random sample consensus (RANSAC)-based strategy for removing matching outliers before the BA stage. Both strategies modify the image matching by restricting the search space for conjugate points. They also implement a linear procedure for refining the ROPs. Finally, they use the GNSS/INS information in modified collinearity equations for a simpler BA procedure that can also be used for refining the system calibration parameters. Eight datasets over six agricultural fields are used to evaluate the performance of the developed strategies. In comparison with a traditional SfM framework and Pix4D Mapper Pro, the proposed strategies are able to generate denser and more accurate 3D point clouds as well as orthophotos without any gaps.
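To make the search-space restriction concrete, here is a minimal sketch (not the authors' implementation; the world-to-camera rotation and perspective-center conventions are assumptions) of how GNSS/INS-derived exterior orientation yields an epipolar constraint that prunes candidate matches:

```python
import numpy as np

def skew(t):
    """Skew-symmetric matrix such that skew(t) @ v == np.cross(t, v)."""
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

def fundamental_from_eops(K, R1, C1, R2, C2):
    """Fundamental matrix between two images whose rotations (world-to-camera)
    and perspective centers C come from the GNSS/INS trajectory."""
    R = R2 @ R1.T                      # relative rotation, camera 1 -> camera 2
    t = R2 @ (C1 - C2)                 # relative translation in camera 2 frame
    E = skew(t) @ R                    # essential matrix
    Kinv = np.linalg.inv(K)
    return Kinv.T @ E @ Kinv

def epipolar_distance(F, x1, x2):
    """Distance (pixels) of point x2 from the epipolar line of point x1."""
    l = F @ np.append(x1, 1.0)
    return abs(l @ np.append(x2, 1.0)) / np.hypot(l[0], l[1])

# During matching, a candidate pair (x1, x2) is kept only if
# epipolar_distance(F, x1, x2) is below a few pixels, which prunes the
# ambiguities caused by repetitive field patterns before the BA stage.
```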

Author(s):  
Fouad Amer ◽  
Mani Golparvar-Fard

Complete and accurate 3D monitoring of indoor construction progress using visual data is challenging. It requires (a) capturing a large number of overlapping images, which is time-consuming and labor-intensive to collect, and (b) processing them with Structure from Motion (SfM) algorithms, which can be computationally expensive. To address these inefficiencies, this paper proposes a hybrid SfM-SLAM 3D reconstruction algorithm along with a decentralized data collection workflow to map indoor construction work locations in 3D and at any desired frequency. The hybrid 3D reconstruction method is composed of a Structure from Motion (SfM) pipeline coupled with Multi-View Stereo (MVS) to generate 3D point clouds, and a SLAM (Simultaneous Localization and Mapping) algorithm to register the separately formed models together. Our SfM and SLAM pipelines are built on binary Oriented FAST and Rotated BRIEF (ORB) descriptors to tightly couple these two separate reconstruction workflows and enable fast computation. To illustrate the data capture workflow and validate the proposed method, a case study was conducted on a real-world construction site. Compared to state-of-the-art methods, our preliminary results show a decrease in both registration error and processing time, demonstrating the potential of using daily images captured by different trades, coupled with weekly walkthrough videos captured by a field engineer, for complete 3D visual monitoring of indoor construction operations.
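As a concrete illustration of the shared ingredient both pipelines build on, a minimal sketch using OpenCV (the image paths are hypothetical) of ORB detection and Hamming-distance matching:

```python
import cv2

img1 = cv2.imread("frame_a.jpg", cv2.IMREAD_GRAYSCALE)   # hypothetical paths
img2 = cv2.imread("frame_b.jpg", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=2000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Hamming distance suits ORB's binary descriptors; crossCheck keeps only
# mutual best matches, a cheap symmetry test before geometric verification.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
print(f"{len(matches)} mutual ORB matches")
```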


Author(s):  
M. Vlachos ◽  
L. Berger ◽  
R. Mathelier ◽  
P. Agrafiotis ◽  
D. Skarlatos

This paper presents an investigation as to whether and how the selection of the SfM-MVS software affects the 3D reconstruction of submerged archaeological sites. Specifically, Agisoft Photoscan, VisualSFM, SURE, 3D Zephyr and Reality Capture were used and evaluated according to their performance in 3D reconstruction, using specific metrics over the reconstructed underwater scenes. It must be clarified that the scope of this study is not to evaluate the specific algorithms or steps that the various software packages use, but to evaluate the final results, specifically the generated 3D point clouds. To address the above research issues, a dataset from the ancient shipwreck, lying at 45 meters below sea level, is used. The dataset is composed of 19 images with a very small camera-to-object distance (1 meter) and 42 images with a larger camera-to-object distance (3 meters). Using a common bundle adjustment for all 61 images, a reference point cloud derived from the close-range dataset is compared with the point clouds of the long-range dataset generated by the different photogrammetric packages. Following that, the total number of points, cloud-to-cloud distances, surface roughness, surface density, and a combined 3D metric were compared to evaluate which package performed best.
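For illustration, a minimal sketch, assuming numpy/scipy and (N, 3) XYZ arrays in a common frame, of the cloud-to-cloud distance metric used in such comparisons:

```python
import numpy as np
from scipy.spatial import cKDTree

def cloud_to_cloud(reference, evaluated):
    """Nearest-neighbour distance from every evaluated point to the
    reference cloud; both inputs are (N, 3) XYZ arrays in the same frame."""
    tree = cKDTree(reference)
    d, _ = tree.query(evaluated, k=1)
    return d.mean(), np.median(d), d.std()

# The mean/median/std of these distances summarise how closely each
# package's reconstruction follows the reference surface.
```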


Author(s):  
A. Al-Rawabdeh ◽  
H. Al-Gurrani ◽  
K. Al-Durgham ◽  
I. Detchev ◽  
F. He ◽  
...  

Landslides are among the major threats to urban landscapes and man-made infrastructure. They often cause economic losses, property damage, and loss of lives. Temporal monitoring data of landslides from different epochs enables the evaluation of landslide progression. Alignment of overlapping surfaces from two or more epochs is crucial for the proper analysis of landslide dynamics. The traditional methods for point-cloud-based landslide monitoring rely on using a variation of the Iterative Closest Point (ICP) registration procedure to align any reconstructed surfaces from different epochs to a common reference frame. However, the ICP-based registration can sometimes fail or may not provide sufficient accuracy. For example, point clouds from different epochs might converge to a local minimum due to a lack of geometrical variability within the data. Also, manual interaction is required to exclude any non-stable areas from the registration process. In this paper, a robust image-based registration method is introduced for the simultaneous evaluation of all registration parameters. This includes the Interior Orientation Parameters (IOPs) of the camera and the Exterior Orientation Parameters (EOPs) of the involved images from all available observation epochs via a bundle block adjustment with self-calibration. Next, a semi-global dense matching technique is implemented to generate dense 3D point clouds for each epoch, using the images captured in that particular epoch separately. The normal distances between any two consecutive point clouds can then be readily computed, because the point clouds are already effectively co-registered. A low-cost DJI Phantom II Unmanned Aerial Vehicle (UAV) was customised and used in this research for temporal data collection over an active soil creep area in Lethbridge, Alberta, Canada. The customisation included adding a GPS logger and a Large-Field-Of-View (LFOV) action camera, which facilitated capturing high-resolution geo-tagged images in two epochs over the period of one year (i.e., May 2014 and May 2015). Note that due to the coarse accuracy of the on-board GPS receiver (e.g., +/- 5-10 m), the geo-tagged positions of the images were only used as initial values in the bundle block adjustment. Normal distances, signifying detected changes, varying from 20 cm to 4 m were identified between the two epochs. The accuracy of the co-registered surfaces was estimated by comparing non-active patches within the monitored area of interest. Since these non-active sub-areas are stationary, the computed normal distances should theoretically be close to zero. The quality control of the registration results showed that the average normal distance was approximately 4 cm, which is within the noise level of the reconstructed surfaces.
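As an illustration of the change detection step, a minimal sketch (illustrative parameters, not the authors' code) of computing normal distances between two co-registered epoch clouds by fitting a local plane in one epoch:

```python
import numpy as np
from scipy.spatial import cKDTree

def normal_distances(epoch1, epoch2, k=20):
    """Signed point-to-local-plane distance from each epoch-1 point to the
    epoch-2 surface; both inputs are (N, 3) co-registered XYZ arrays."""
    tree = cKDTree(epoch2)
    dists = np.empty(len(epoch1))
    for i, p in enumerate(epoch1):
        _, idx = tree.query(p, k=k)
        nbrs = epoch2[idx]
        centroid = nbrs.mean(axis=0)
        # the singular vector of the smallest singular value is the
        # local plane normal of the neighbourhood
        _, _, vt = np.linalg.svd(nbrs - centroid)
        dists[i] = np.dot(p - centroid, vt[-1])
    return dists

# Over stable (non-active) patches these distances should scatter around
# zero; over the creep area they quantify the change between the epochs.
```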


Author(s):  
S. Rhee ◽  
T. Kim

3D spatial information from unmanned aerial vehicle (UAV) images is usually provided in the form of 3D point clouds. For various UAV applications, it is important to generate dense 3D point clouds automatically over the entire extent of UAV images. In this paper, we aim to apply image matching for the generation of local point clouds over a pair or group of images, and global optimization to combine the local point clouds over the whole region of interest. We applied two types of image matching, an object space-based matching technique and an image space-based matching technique, and compared the performance of the two techniques. The object space-based matching used here sets a list of candidate height values for a fixed horizontal position in the object space. For each height, its corresponding image point is calculated and similarity is measured by grey-level correlation. The image space-based matching used here is a modified relaxation matching. We devised a global optimization scheme for finding optimal pairs (or groups) to apply image matching to, defining local match regions in image or object space, and merging local point clouds into a global one. For optimal pair selection, tie points among images were extracted and a stereo coverage network was defined by forming a maximum spanning tree over the tie points. From the experiments, we confirmed that 3D point clouds were generated successfully through image matching and global optimization. However, the results also revealed some limitations. In the case of image space-based matching, we observed some blank areas in the 3D point clouds. In the case of object space-based matching, we observed more blunders than with image space-based matching, as well as noisy local height variations. We suspect these might be due to inaccurate orientation parameters. The work in this paper is still ongoing. We will further test our approach with more precise orientation parameters.
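A minimal sketch of the object space-based matching idea described above: sweep candidate heights at a fixed horizontal position, project each hypothesis into both images, and score it by grey-level correlation. The projection callables and window size are illustrative assumptions:

```python
import numpy as np

def ncc(a, b):
    """Normalised cross-correlation of two equal-size grey-level patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom > 0 else -1.0

def best_height(xy, heights, img1, img2, project1, project2, w=5):
    """project1/project2 map an object point (X, Y, Z) to integer (row, col)
    pixel coordinates in their respective images."""
    scores = []
    for z in heights:
        r1, c1 = project1((xy[0], xy[1], z))
        r2, c2 = project2((xy[0], xy[1], z))
        p1 = img1[r1 - w:r1 + w + 1, c1 - w:c1 + w + 1]
        p2 = img2[r2 - w:r2 + w + 1, c2 - w:c2 + w + 1]
        scores.append(ncc(p1, p2) if p1.shape == p2.shape else -1.0)
    return heights[int(np.argmax(scores))]

# The height with the strongest correlation is taken as the surface height
# at (X, Y); repeating this over a grid yields the local point cloud.
```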


Sensors ◽  
2019 ◽  
Vol 19 (10) ◽  
pp. 2364 ◽  
Author(s):  
Martina Cignetti ◽  
Danilo Godone ◽  
Aleksandra Wrzesniak ◽  
Daniele Giordan

Structure from Motion (SfM) is a powerful tool for deriving 3D point clouds from sequences of images taken with different remote sensing technologies. The use of this approach for processing images captured by Remotely Piloted Aircraft Systems (RPAS), historical aerial photographs, and smartphones constitutes a valuable solution for the identification and characterization of active landslides. We applied SfM to process all the acquired and available images for the study of the Champlas du Col landslide, a complex slope instability reactivated in spring 2018 in the Piemonte Region (north-western Italy). This last reactivation of the slide, principally due to snow melting at the end of the winter season, interrupted the main road used to reach Sestriere, one of the most famous ski resorts in north-western Italy. We tested how SfM can be applied to process high-resolution multisource datasets by processing: (i) historical aerial photographs collected from five diverse regional flights, (ii) RGB and multi-spectral images acquired by two RPAS at different times, and (iii) terrestrial image sequences of the most representative kinematic elements produced by the evolution of the landslide. In addition, we obtained an overall picture of the historical development of the area of interest and distinguished several generations of landslides. Moreover, an in-depth geomorphological characterization of the Champlas du Col landslide reactivation was carried out, testing a cost-effective, rapid, and easily repeatable methodology based on SfM principles for characterizing and investigating active landslides.


2019 ◽  
Vol 13 (2) ◽  
pp. 105-134 ◽  
Author(s):  
Mohammad Omidalizarandi ◽  
Boris Kargoll ◽  
Jens-André Paffenholz ◽  
Ingo Neumann

In the last two decades, the integration of a terrestrial laser scanner (TLS) and digital photogrammetry, besides other sensor integrations, has received considerable attention for deformation monitoring of natural or man-made structures. Typically, a TLS is used for an area-based deformation analysis. A high-resolution digital camera may be attached on top of the TLS to increase the accuracy and completeness of the deformation analysis by optimally combining point or line features extracted from both the three-dimensional (3D) point clouds and the images captured at different epochs. For this purpose, the external calibration parameters between the TLS and the digital camera need to be determined precisely. The camera calibration and internal TLS calibration are commonly carried out in advance in a laboratory environment. The focus of this research is the highly accurate and robust estimation of the external calibration parameters between the fused sensors using signalised target points. The observables are the image measurements, the 3D point clouds, and the horizontal angle reading of the TLS. In addition, laser tracker observations are used for the purpose of validation. The functional models are determined based on the space resection in photogrammetry using the collinearity condition equations, the 3D Helmert transformation, and the constraint equation, which are solved in a rigorous bundle adjustment procedure. Three different adjustment procedures are developed and implemented: (1) an expectation maximization (EM) algorithm to solve a Gauss-Helmert model (GHM) with grouped t-distributed random deviations, (2) a novel EM algorithm to solve a corresponding quasi-Gauss-Markov model (qGMM) with t-distributed pseudo-misclosures, and (3) a classical least-squares procedure to solve the GHM with variance components and outlier removal. The comparison of the results demonstrates the precise, reliable, accurate and robust estimation of the parameters, in particular by the second and third procedures in comparison to the first one. In addition, the results show that the second procedure is computationally more efficient than the other two.
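To illustrate the flavour of such EM algorithms, here is a minimal sketch, for a plain linear model rather than the paper's GHM/qGMM, of the reweighting that t-distributed errors induce: the E-step downweights large residuals, and the M-step solves a weighted least-squares problem:

```python
import numpy as np

def em_t_regression(A, b, nu=3.0, iters=50):
    """Robust fit of A @ x ~ b assuming t(nu)-distributed errors, via EM."""
    x = np.linalg.lstsq(A, b, rcond=None)[0]
    sigma2 = np.mean((b - A @ x) ** 2) + 1e-12
    for _ in range(iters):
        r = b - A @ x
        # E-step: expected precision of each observation under the t model;
        # large residuals receive small weights, which yields robustness.
        w = (nu + 1.0) / (nu + r ** 2 / sigma2)
        # M-step: weighted least squares and variance update.
        sw = np.sqrt(w)
        x = np.linalg.lstsq(A * sw[:, None], b * sw, rcond=None)[0]
        sigma2 = np.mean(w * (b - A @ x) ** 2) + 1e-12
    return x
```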


2020 ◽  
Vol 12 (14) ◽  
pp. 2268
Author(s):  
Tian Zhou ◽  
Seyyed Meghdad Hasheminasab ◽  
Radhika Ravi ◽  
Ayman Habib

Unmanned aerial vehicles (UAVs) are quickly emerging as a popular platform for 3D reconstruction/modeling in various applications such as precision agriculture, coastal monitoring, and emergency management. For such applications, LiDAR and frame cameras are the two most commonly used sensors for 3D mapping of the object space. For example, point clouds for the area of interest can be directly derived from LiDAR sensors onboard UAVs equipped with integrated global navigation satellite systems and inertial navigation systems (GNSS/INS). Imagery-based mapping, on the other hand, is considered to be a cost-effective and practical option and is often conducted by generating point clouds and orthophotos using structure from motion (SfM) techniques. Mapping with photogrammetric approaches requires accurate camera interior orientation parameters (IOPs), especially when direct georeferencing is utilized. Most state-of-the-art approaches for determining/refining camera IOPs depend on ground control points (GCPs). However, establishing GCPs is expensive and labor-intensive, and more importantly, the distribution and number of GCPs are usually less than optimal to provide adequate control for determining and/or refining camera IOPs. Moreover, consumer-grade cameras with unstable IOPs have been widely used for mapping applications. Therefore, in such scenarios, where frequent camera calibration or IOP refinement is required, GCP-based approaches are impractical. To eliminate the need for GCPs, this study uses LiDAR data as a reference surface to perform in situ refinement of camera IOPs. The proposed refinement strategy is conducted in three main steps. An image-based sparse point cloud is first generated via a GNSS/INS-assisted SfM strategy. Then, LiDAR points corresponding to the resultant image-based sparse point cloud are identified through an iterative plane fitting approach and are referred to as LiDAR control points (LCPs). Finally, IOPs of the utilized camera are refined through a GNSS/INS-assisted bundle adjustment procedure using LCPs. Seven datasets over two study sites with a variety of geomorphic features are used to evaluate the performance of the developed strategy. The results illustrate the ability of the proposed approach to achieve an object space absolute accuracy of 3–5 cm (i.e., 5–10 times the ground sampling distance) at a 41 m flying height.
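A minimal sketch (thresholds and names are illustrative, not the authors' values) of the plane-fitting idea behind LiDAR control points: accept a sparse SfM point only where the surrounding LiDAR neighbourhood is planar, and project it onto the fitted plane:

```python
import numpy as np
from scipy.spatial import cKDTree

def lidar_control_point(sfm_point, lidar_xyz, tree, radius=0.5, max_rms=0.03):
    """Return the projection of an SfM sparse point onto a locally fitted
    LiDAR plane, or None if the neighbourhood is unsuitable."""
    idx = tree.query_ball_point(sfm_point, r=radius)
    if len(idx) < 10:
        return None                      # too few LiDAR returns nearby
    nbrs = lidar_xyz[idx]
    centroid = nbrs.mean(axis=0)
    _, _, vt = np.linalg.svd(nbrs - centroid)
    normal = vt[-1]
    if ((nbrs - centroid) @ normal).std() > max_rms:
        return None                      # neighbourhood is not planar
    # project the SfM point onto the plane -> a LiDAR control point (LCP)
    return sfm_point - np.dot(sfm_point - centroid, normal) * normal

# tree = cKDTree(lidar_xyz), built once; each accepted projection serves as
# an LCP constraining the camera IOP refinement in the bundle adjustment.
```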


2018 ◽  
Vol 30 (4) ◽  
pp. 660-670 ◽  
Author(s):  
Akira Shibata ◽  
Yukari Okumura ◽  
Hiromitsu Fujii ◽  
Atsushi Yamashita ◽  
Hajime Asama ◽  
...  

Structure from motion is a three-dimensional (3D) reconstruction method that uses a single camera. However, the absolute scale of objects cannot be recovered by the conventional structure from motion method. In our previous studies, to solve this problem by using refraction, we proposed a scale-reconstructible structure from motion method. In our measurement system, a refractive plate is fixed in front of a camera and images are captured through this plate. To overcome the geometrical constraints, we derived an extended essential equation by theoretically considering the effect of refraction. By applying this formula to 3D measurements, the absolute scale of an object could be obtained. However, this method was verified only by a simulation under ideal conditions, that is, without taking into account real phenomena such as noise or occlusion, which inevitably occur in actual measurements. In this study, to robustly apply this method to actual measurements with real images, we introduced a novel bundle adjustment method based on the refraction effect. This optimization technique can reduce the 3D reconstruction errors caused by measurement noise in actual scenes. In particular, we propose a new error function that accounts for the effect of refraction. By minimizing the value of this error function, accurate 3D reconstruction results can be obtained. To evaluate the effectiveness of the proposed method, experiments using both simulations and real images were conducted. The results of the simulation show that the proposed method is theoretically accurate. The results of the experiments using real images show that the proposed method is effective for real 3D measurements.
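For intuition, a minimal sketch, under ideal flat-plate geometry, of the ray bending that any refraction-aware error function must model (Snell's law in vector form); the paper's extended essential equation and bundle adjustment are more involved:

```python
import numpy as np

def refract(d, n, n1, n2):
    """Refract unit direction d at a surface with unit normal n (pointing
    toward the incoming ray), from refractive index n1 into n2."""
    d = d / np.linalg.norm(d)
    cos_i = -np.dot(n, d)
    eta = n1 / n2
    k = 1.0 - eta ** 2 * (1.0 - cos_i ** 2)
    if k < 0.0:
        return None                      # total internal reflection
    return eta * d + (eta * cos_i - np.sqrt(k)) * n

# A ray crossing a parallel-sided glass plate (index ~1.5) exits parallel to
# its entry direction but laterally shifted; that shift is the measurable
# effect of the plate which makes the absolute scale observable.
d_in = np.array([0.2, 0.0, 1.0]) / np.linalg.norm([0.2, 0.0, 1.0])
n = np.array([0.0, 0.0, -1.0])           # plate normal facing the camera
d_glass = refract(d_in, n, 1.0, 1.5)     # air -> glass at the entry face
d_out = refract(d_glass, n, 1.5, 1.0)    # glass -> air at the exit face
```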


Author(s):  
F.I. Apollonio ◽  
A. Ballabeni ◽  
M. Gaiani ◽  
F. Remondino

Every day, new tools and algorithms for automated image processing and 3D reconstruction become available, making it possible to process large networks of unoriented and markerless images and to deliver sparse 3D point clouds in reasonable processing time. In this paper we evaluate some feature-based methods used to automatically extract the tie points necessary for calibration and orientation procedures, in order to better understand their performance for 3D reconstruction purposes. The performed tests, based on the analysis of the SIFT algorithm and its most used variants, processed several datasets and analysed various parameters and outcomes (e.g. number of oriented cameras, average number of rays per 3D point, average intersection angle per 3D point, theoretical precision of the computed 3D object coordinates, etc.).
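As a concrete example of the evaluated pipeline's first stage, a minimal sketch using OpenCV of SIFT tie-point extraction with Lowe's ratio test (the ratio threshold is a common default, not a value from the paper):

```python
import cv2

def sift_tie_points(img1, img2, ratio=0.8):
    """Extract candidate tie points between two grey-level images."""
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    # Lowe's ratio test: keep a match only if its best distance is clearly
    # smaller than the second-best, rejecting ambiguous correspondences.
    pairs = matcher.knnMatch(des1, des2, k=2)
    good = [m for m, n in pairs if m.distance < ratio * n.distance]
    return [(kp1[m.queryIdx].pt, kp2[m.trainIdx].pt) for m in good]
```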


Author(s):  
I.-C. Lee ◽  
F. Tsai

A series of panoramic images is usually used to generate a 720° panorama image. Although panoramic images are typically used for establishing tour guiding systems, in this research we demonstrate the potential of using panoramic images acquired from multiple sites to create not only 720° panoramas, but also three-dimensional (3D) point clouds and 3D indoor models. Since 3D modeling is one of the goals of this research, the locations of the panoramic sites needed to be carefully planned in order to maintain a robust result for close-range photogrammetry. After the images are acquired, they are processed into 720° panoramas, which can be used directly in panorama guiding systems or other applications.

In addition to these straightforward applications, interior orientation parameters can also be estimated while generating the 720° panoramas. These parameters are the focal length, principal point, and lens radial distortion. The panoramic images can then be processed with close-range photogrammetry procedures to extract the exterior orientation parameters and generate 3D point clouds. In this research, VisualSFM, a structure-from-motion software package, is used to estimate the exterior orientation, and the CMVS toolkit is used to generate 3D point clouds. Next, the 3D point clouds are used as references to create building interior models. In this research, Trimble SketchUp was used to build the models, and the 3D point cloud was used to determine the locations of building objects via a plane-finding procedure. In the texturing process, the panorama images are used as the data source for creating model textures. This 3D indoor model was used as an Augmented Reality model replacing a guide map or a floor plan commonly used in an on-line touring guide system.

The 3D indoor model generation procedure has been utilized in two research projects: a cultural heritage site at Kinmen, and the Taipei Main Station pedestrian zone guidance and navigation system. The results presented in this paper demonstrate the potential of using panoramic images to generate 3D point clouds and 3D models. However, it is currently a manual and labor-intensive process, and research is being carried out to increase the degree of automation of these procedures.
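For reference, a minimal sketch of the interior orientation quantities mentioned above: a common two-coefficient radial distortion model (Brown-style; the coefficient names are illustrative, not calibration results from this work):

```python
import numpy as np

def apply_radial_distortion(x, y, cx, cy, k1, k2):
    """Map ideal image coordinates to radially distorted ones (pixels),
    given the principal point (cx, cy) and radial coefficients k1, k2."""
    dx, dy = x - cx, y - cy
    r2 = dx * dx + dy * dy
    factor = 1.0 + k1 * r2 + k2 * r2 * r2
    return cx + dx * factor, cy + dy * factor

# Estimating (cx, cy, k1, k2) together with the focal length during
# panorama stitching is what allows the later close-range photogrammetry
# steps to work with geometrically corrected imagery.
```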

