Evaluation of feature-based methods for automated network orientation

Author(s):  
F.I. Apollonio ◽  
A. Ballabeni ◽  
M. Gaiani ◽  
F. Remondino

Every day, new tools and algorithms for automated image processing and 3D reconstruction become available, making it possible to process large networks of unoriented and markerless images and to deliver sparse 3D point clouds in reasonable processing time. In this paper we evaluate some feature-based methods used to automatically extract the tie points necessary for calibration and orientation procedures, in order to better understand their performance for 3D reconstruction purposes. The tests, based on the analysis of the SIFT algorithm and its most widely used variants, processed several datasets and analysed various parameters and outcomes (e.g. number of oriented cameras, average rays per 3D point, average intersection angle per 3D point, theoretical precision of the computed 3D object coordinates, etc.).
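Two of the quality metrics named above, rays per 3D point and intersection angles per 3D point, can be computed directly from the camera centers that observe each triangulated point. A minimal sketch (the function name and the toy camera layout are illustrative, not from the paper):

```python
import math

def ray_angles(point, camera_centers):
    """Pairwise intersection angles (in degrees) of the rays observing a 3D
    point; len(camera_centers) is the number of rays for that point."""
    rays = []
    for c in camera_centers:
        v = [p - ci for p, ci in zip(point, c)]
        n = math.sqrt(sum(x * x for x in v))
        rays.append([x / n for x in v])
    angles = []
    for i in range(len(rays)):
        for j in range(i + 1, len(rays)):
            dot = sum(a * b for a, b in zip(rays[i], rays[j]))
            angles.append(math.degrees(math.acos(max(-1.0, min(1.0, dot)))))
    return angles

# A point at the origin seen by two cameras on perpendicular axes:
angles = ray_angles([0.0, 0.0, 0.0], [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
```

Averaging these angles over all triangulated points gives the "average intersection angle per 3D point" figure; narrow angles signal weakly determined depth.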

Author(s):  
D. Abate ◽  
I. Toschi ◽  
C. Sturdy-Colls ◽  
F. Remondino

Crime scene documentation is a fundamental task which has to be undertaken in a fast, accurate and reliable way, highlighting evidence which can be further used for ensuring justice for victims and for guaranteeing the successful prosecution of perpetrators. The main focus of this paper is on the documentation of a typical crime scene and on the rapid recording of any possible contamination that could have influenced its original appearance. A 3D reconstruction of the environment is first generated by processing panoramas acquired with the low-cost Ricoh Theta 360 camera, and further analysed to highlight the potential and limits of this emerging consumer-grade technology. Then, a methodology is proposed for the rapid recording of changes occurring between the original and the contaminated crime scene. The approach is based on an automatic 3D feature-based data registration, followed by a cloud-to-cloud distance computation, taking as input the 3D point clouds generated before and after, e.g., the misplacement of evidence. All the algorithms adopted for panorama pre-processing, photogrammetric 3D reconstruction, and 3D geometry registration and analysis are presented; all are currently available in open-source or low-cost software solutions.
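The cloud-to-cloud distance step described above reduces, in its simplest form, to a nearest-neighbour query from each point of the "after" cloud into the "before" cloud. A brute-force sketch (real tools such as CloudCompare use a k-d tree or octree for speed; the variable names are illustrative):

```python
import math

def cloud_to_cloud(reference, contaminated):
    """For each point of the contaminated scene, return the distance to its
    nearest neighbour in the reference scene (brute-force O(n*m) search)."""
    dists = []
    for q in contaminated:
        dists.append(min(math.dist(q, p) for p in reference))
    return dists

# Toy example: one point unchanged, one displaced by 0.5 along y.
ref = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (2.0, 0.0, 0.0)]
moved = [(0.0, 0.0, 0.0), (1.0, 0.5, 0.0)]
d = cloud_to_cloud(ref, moved)
```

Thresholding these distances isolates the regions where evidence was moved, added, or removed.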


Author(s):  
Fouad Amer ◽  
Mani Golparvar-Fard

Complete and accurate 3D monitoring of indoor construction progress using visual data is challenging. It requires (a) capturing a large number of overlapping images, which is time-consuming and labor-intensive to collect, and (b) processing using Structure from Motion (SfM) algorithms, which can be computationally expensive. To address these inefficiencies, this paper proposes a hybrid SfM-SLAM 3D reconstruction algorithm along with a decentralized data collection workflow to map indoor construction work locations in 3D at any desired frequency. The hybrid 3D reconstruction method is composed of an SfM pipeline coupled with Multi-View Stereo (MVS) to generate 3D point clouds, and a SLAM (Simultaneous Localization and Mapping) algorithm to register the separately formed models together. Our SfM and SLAM pipelines are built on binary Oriented FAST and Rotated BRIEF (ORB) descriptors to tightly couple the two separate reconstruction workflows and enable fast computation. To illustrate the data capture workflow and validate the proposed method, a case study was conducted on a real-world construction site. Compared to state-of-the-art methods, our preliminary results show a decrease in both registration error and processing time, demonstrating the potential of using daily images captured by different trades coupled with weekly walkthrough videos captured by a field engineer for complete 3D visual monitoring of indoor construction operations.
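The ORB descriptors shared by both pipelines are 256-bit binary strings, so candidate correspondences are scored with the Hamming distance rather than the Euclidean distance used for SIFT-style floating-point descriptors. A minimal brute-force matcher sketch (descriptors shortened to 8 bits for readability; the threshold value is illustrative):

```python
def hamming(a, b):
    """Hamming distance between two binary descriptors packed into ints."""
    return bin(a ^ b).count("1")

def match_descriptors(query, train, max_dist=64):
    """Brute-force nearest-neighbour matching on Hamming distance, the
    standard pairing for ORB's binary descriptors."""
    matches = []
    for qi, q in enumerate(query):
        ti, d = min(((i, hamming(q, t)) for i, t in enumerate(train)),
                    key=lambda x: x[1])
        if d <= max_dist:
            matches.append((qi, ti, d))
    return matches

q = [0b10110000, 0b00001111]
t = [0b10110001, 0b11110000]
m = match_descriptors(q, t)
```

Because the distance is a bitwise XOR plus a popcount, matching binary descriptors is far cheaper than comparing 128-dimensional float vectors, which is what makes the tightly coupled SfM-SLAM workflow fast.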


2020 ◽  
Vol 12 (3) ◽  
pp. 351 ◽  
Author(s):  
Seyyed Meghdad Hasheminasab ◽  
Tian Zhou ◽  
Ayman Habib

Imagery acquired by unmanned aerial vehicles (UAVs) has been widely used for three-dimensional (3D) reconstruction/modeling in various digital agriculture applications, such as phenotyping, crop monitoring, and yield prediction. 3D reconstruction from well-textured UAV-based images has matured, and the user community has access to several commercial and open-source tools that provide accurate products at a high level of automation. However, in some applications, such as digital agriculture, repetitive image patterns mean these approaches are not always able to produce reliable/complete products. The main limitation of these techniques is their inability to establish a sufficient number of correctly matched features among overlapping images, causing incomplete and/or inaccurate 3D reconstruction. This paper presents two structure from motion (SfM) strategies, which use trajectory information provided by an onboard survey-grade global navigation satellite system/inertial navigation system (GNSS/INS) and system calibration parameters. The main difference between the proposed strategies is that the first one, denoted as partially GNSS/INS-assisted SfM, implements the four stages of an automated triangulation procedure, namely, image matching, relative orientation parameters (ROPs) estimation, exterior orientation parameters (EOPs) recovery, and bundle adjustment (BA). The second strategy, denoted as fully GNSS/INS-assisted SfM, removes the EOPs estimation step while introducing a random sample consensus (RANSAC)-based strategy for removing matching outliers before the BA stage. Both strategies modify the image matching by restricting the search space for conjugate points. They also implement a linear procedure for ROP refinement. Finally, they use the GNSS/INS information in modified collinearity equations for a simpler BA procedure that could be used for refining system calibration parameters.
Eight datasets over six agricultural fields are used to evaluate the performance of the developed strategies. In comparison with a traditional SfM framework and Pix4D Mapper Pro, the proposed strategies are able to generate denser and more accurate 3D point clouds as well as orthophotos without any gaps.
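The RANSAC stage mentioned above repeatedly fits a model to a minimal random sample and keeps the hypothesis with the most inliers. A generic sketch on a toy line-fitting problem (the paper applies the same idea to point-match geometry, not to lines; all names and thresholds here are illustrative):

```python
import random

def ransac_line(points, iters=200, tol=0.1, seed=0):
    """Minimal RANSAC: fit y = a*x + b from 2-point samples and return the
    largest inlier set. Stand-in for the outlier rejection applied to image
    matches before bundle adjustment."""
    rng = random.Random(seed)
    best = []
    for _ in range(iters):
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        if x1 == x2:
            continue  # degenerate sample, cannot define a slope
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        inliers = [p for p in points if abs(p[1] - (a * p[0] + b)) < tol]
        if len(inliers) > len(best):
            best = inliers
    return best

# Ten points on y = 2x + 1 plus two gross outliers:
pts = [(float(x), 2.0 * x + 1.0) for x in range(10)] + [(3.0, 40.0), (7.0, -5.0)]
inliers = ransac_line(pts)
```

Removing gross outliers this way before BA keeps a few bad matches from dragging the whole adjustment off the GNSS/INS-predicted geometry.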


2020 ◽  
Vol 12 (3) ◽  
pp. 401 ◽  
Author(s):  
Ravi ◽  
Habib

LiDAR-based mobile mapping systems (MMS) are rapidly gaining popularity for a multitude of applications due to their ability to provide complete and accurate 3D point clouds for virtually any scene of interest. However, an accurate calibration technique for such systems is needed in order to unleash their full potential. In this paper, we propose a fully automated profile-based strategy for the calibration of LiDAR-based MMS. The proposed technique is validated by comparing its accuracy against the expected point positioning accuracy of the point cloud, based on the specifications of the sensors used. The proposed strategy was seen to reduce the misalignment between different tracks from approximately 2 to 3 m before calibration down to less than 2 cm after calibration for airborne as well as terrestrial mobile LiDAR mapping systems. In other words, the proposed calibration strategy can converge to correct estimates of mounting parameters, even in cases where the initial estimates are significantly different from the true values. Furthermore, the results from the proposed strategy are also verified by comparing them to those from an existing manually-assisted feature-based calibration strategy. The major contribution of the proposed strategy is its ability to conduct the calibration of airborne and wheel-based mobile systems without any requirement for specially designed targets or features in the surrounding environment. The above claims are validated using experimental results for three different MMS, two airborne and one terrestrial, each with one or more LiDAR units.
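The mounting parameters being calibrated are the boresight rotation and lever arm between the LiDAR unit and the GNSS/INS body frame; georeferencing chains them with the trajectory pose. A deliberately simplified sketch with rotations about z only (the full model uses three boresight angles and full 3D rotations; every name and value here is hypothetical):

```python
import math

def lidar_to_mapping_frame(p_lidar, boresight_z_deg, lever_arm, body_pose):
    """Georeference one LiDAR return: sensor -> body via the mounting
    parameters (boresight rotation + lever arm), then body -> mapping frame
    via the trajectory pose (heading + position)."""
    c, s = math.cos(math.radians(boresight_z_deg)), math.sin(math.radians(boresight_z_deg))
    x, y, z = p_lidar
    body = (c * x - s * y + lever_arm[0],
            s * x + c * y + lever_arm[1],
            z + lever_arm[2])
    heading_deg, pos = body_pose
    c2, s2 = math.cos(math.radians(heading_deg)), math.sin(math.radians(heading_deg))
    return (c2 * body[0] - s2 * body[1] + pos[0],
            s2 * body[0] + c2 * body[1] + pos[1],
            body[2] + pos[2])

# A return 1 m ahead of the sensor, 90 deg boresight, 0.5 m vertical lever arm:
pt = lidar_to_mapping_frame((1.0, 0.0, 0.0), 90.0, (0.0, 0.0, 0.5),
                            (0.0, (10.0, 20.0, 30.0)))
```

Errors in these mounting parameters shift overlapping tracks apart by different amounts depending on viewing direction, which is exactly the track-to-track misalignment the calibration drives from metres down to centimetres.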


Drones ◽  
2020 ◽  
Vol 4 (3) ◽  
pp. 49 ◽  
Author(s):  
Jae Jin Yu ◽  
Dong Woo Kim ◽  
Eun Jung Lee ◽  
Seung Woo Son

The rapid development of drone technologies, such as unmanned aerial systems (UASs) and unmanned aerial vehicles (UAVs), has led to the widespread application of three-dimensional (3D) point clouds and digital surface models (DSMs). Due to the number of UAS technology applications across many fields, studies on the verification of the accuracy of image processing results have increased. In previous studies, the optimal number of ground control points (GCPs) was determined for a specific area of a study site by increasing or decreasing the number of GCPs. However, these studies were mainly conducted in a single study site, and the results were not compared with those from various study sites. In this study, to determine the optimal number of GCPs for modeling multiple areas, the accuracy of 3D point clouds and DSMs was analyzed in three study sites with different areas according to the number of GCPs. The results showed that the optimal number of GCPs was 12 for the small and medium sites (7 and 39 ha) and 18 for the large site (342 ha) based on the overall accuracy. If these results are used for UAV image processing in the future, accurate modeling will be possible with minimal GCP surveying effort.
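Accuracy in studies like this is typically reported as the root-mean-square error (RMSE) of independent checkpoint residuals for each GCP configuration. A minimal sketch (the residual values and GCP counts below are invented placeholders, not the paper's data):

```python
import math

def rmse(errors):
    """Root-mean-square error of checkpoint residuals (same unit as input)."""
    return math.sqrt(sum(e * e for e in errors) / len(errors))

# Hypothetical vertical checkpoint residuals (m) for runs with different GCP counts:
runs = {
    4: [0.12, -0.09, 0.15],
    12: [0.03, -0.04, 0.02],
    18: [0.03, -0.03, 0.02],
}
accuracy = {n: rmse(res) for n, res in sorted(runs.items())}
```

Plotting RMSE against GCP count shows the diminishing returns the study exploits: past the optimum, adding control points barely improves accuracy while survey effort keeps growing.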


2021 ◽  
Vol 13 (11) ◽  
pp. 2113
Author(s):  
Tian Gao ◽  
Feiyu Zhu ◽  
Puneet Paul ◽  
Jaspreet Sandhu ◽  
Henry Akrofi Doku ◽  
...  

The use of 3D plant models for high-throughput phenotyping is increasingly becoming a preferred method for many plant science researchers. Numerous camera-based imaging systems and reconstruction algorithms have been developed for the 3D reconstruction of plants. However, it is still challenging to build an imaging system with high-quality results at a low cost. Useful comparative information on existing imaging systems and their improvements is also limited, making it challenging for researchers to make data-based selections. The objective of this study is to explore possible solutions to address these issues. We introduce two novel systems for plants of various sizes, as well as a pipeline to generate high-quality 3D point clouds and meshes. The higher accuracy and efficiency of the proposed systems make them potentially valuable tools for enhancing high-throughput phenotyping by integrating 3D traits for increased resolution and measuring traits that are not amenable to 2D imaging approaches. The study shows that the phenotypic traits derived from the 3D models are highly correlated with manually measured traits (R2 > 0.91). Moreover, we present a systematic analysis of different settings of the imaging systems and a comparison with the traditional system, which provides recommendations for plant scientists to improve the accuracy of 3D reconstruction. In summary, the proposed imaging systems are recommended for the 3D reconstruction of plants, and the analysis of the different settings can be used for designing new customized imaging systems and improving their accuracy.
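The R2 > 0.91 validation above is the coefficient of determination between model-derived and manually measured trait values. A minimal sketch of that computation (the trait values below are invented placeholders):

```python
def r_squared(predicted, observed):
    """Coefficient of determination between derived and manual measurements:
    1 - (residual sum of squares) / (total sum of squares)."""
    mean = sum(observed) / len(observed)
    ss_res = sum((o - p) ** 2 for o, p in zip(observed, predicted))
    ss_tot = sum((o - mean) ** 2 for o in observed)
    return 1.0 - ss_res / ss_tot

# Hypothetical plant heights (cm): manual ruler vs. 3D-model estimate.
manual = [10.0, 14.0, 18.0, 22.0]
derived = [10.2, 13.8, 18.3, 21.9]
r2 = r_squared(derived, manual)
```

Values near 1 mean the 3D pipeline can substitute for manual measurement; values well below 1 flag traits where the reconstruction still loses information.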


Author(s):  
M. Vlachos ◽  
L. Berger ◽  
R. Mathelier ◽  
P. Agrafiotis ◽  
D. Skarlatos

<p><strong>Abstract.</strong> This paper presents an investigation as to whether and how the selection of the SfM-MVS software affects the 3D reconstruction of submerged archaeological sites. Specifically, Agisoft Photoscan, VisualSFM, SURE, 3D Zephyr and Reality Capture software were used and evaluated according to their performance in 3D reconstruction, using specific metrics over the reconstructed underwater scenes. It must be clarified that the scope of this study is not to evaluate the specific algorithms or steps used by the various software packages, but to evaluate the final results, specifically the generated 3D point clouds. To address the above research issues, a dataset from an ancient shipwreck lying 45 meters below sea level is used. The dataset is composed of 19 images with a very small camera-to-object distance (1 meter) and 42 images with a larger camera-to-object distance (3 meters). Using a common bundle adjustment for all 61 images, a reference point cloud derived from the lower dataset is compared with the point clouds of the higher dataset generated by the different photogrammetric packages. Following that, a comparison regarding the total number of points, cloud-to-cloud distances, surface roughness, surface density and a combined 3D metric was performed to evaluate which software performed best.</p>
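One of the comparison metrics above, surface density, is commonly estimated per point as the number of neighbours within a radius divided by the corresponding disk area (the convention used, e.g., by CloudCompare). A brute-force sketch with an illustrative radius:

```python
import math

def surface_density(cloud, r=0.5):
    """Per-point surface density: neighbours within radius r divided by the
    disk area pi*r^2. Brute force; production tools use spatial indexing."""
    out = []
    for p in cloud:
        k = sum(1 for q in cloud if q is not p and math.dist(p, q) <= r)
        out.append(k / (math.pi * r * r))
    return out

# A flat 4x4 grid of points spaced 0.25 m apart:
grid = [(x * 0.25, y * 0.25, 0.0) for x in range(4) for y in range(4)]
dens = surface_density(grid)
```

Interior points report higher density than edge points, so the metric also reveals uneven coverage, which matters when comparing clouds generated by different packages from the same images.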


2018 ◽  
Author(s):  
Raphael Viguier

3D reconstruction is one of the most challenging but also most necessary parts of computer vision. It is applied everywhere, from remote sensing to medical imaging and multimedia. Wide Area Motion Imagery is a field that has gained traction in recent years. It consists of using an airborne, large field-of-view sensor to cover an area typically exceeding a square kilometer in each captured image. This is particularly valuable data for analysis, but the amount of information is overwhelming for any human analyst. Algorithms to efficiently and automatically extract information are therefore needed, and 3D reconstruction plays a critical part in them, along with detection and tracking. This dissertation presents novel reconstruction algorithms to compute a 3D probabilistic space, a set of experiments to efficiently extract photo-realistic 3D point clouds, and a range of transformations for possible applications of the generated 3D data to filtering, data compression and mapping. The algorithms have been successfully tested on our own datasets provided by Transparent Sky, and this thesis also proposes methods to evaluate accuracy, completeness and photo-consistency. The generated data has been successfully used to improve detection and tracking performance, and allows data compression and extrapolation by generating synthetic images from new points of view, and data augmentation with the inferred occlusion areas.
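A "3D probabilistic space" is commonly maintained as per-voxel occupancy probabilities updated in log-odds form as new observations arrive. The sketch below shows the standard occupancy-grid recursion only as a point of reference; the dissertation's actual formulation may differ, and all parameter values here are illustrative:

```python
import math

def logodds(p):
    """Convert a probability to log-odds."""
    return math.log(p / (1.0 - p))

def update_voxel(prior_p, observations, p_hit=0.7, p_miss=0.4):
    """Fuse a sequence of hit/miss observations into one voxel's occupancy
    probability via additive log-odds updates, then convert back."""
    l = logodds(prior_p)
    for hit in observations:
        l += logodds(p_hit if hit else p_miss)
    return 1.0 / (1.0 + math.exp(-l))

# Unknown voxel (p = 0.5) observed as occupied three times, free once:
p = update_voxel(0.5, [True, True, True, False])
```

Because updates are additive in log-odds, fusing thousands of wide-area frames stays cheap, and the resulting probabilities can drive the filtering and compression applications mentioned above.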

