CROWD-SOURCED VISUAL DATA COLLECTION FOR MONITORING INDOOR CONSTRUCTION IN 3D

Author(s):  
Fouad Amer ◽  
Mani Golparvar-Fard

Complete and accurate 3D monitoring of indoor construction progress using visual data is challenging. It requires (a) capturing a large number of overlapping images, which is time-consuming and labor-intensive, and (b) processing them with Structure from Motion (SfM) algorithms, which can be computationally expensive. To address these inefficiencies, this paper proposes a hybrid SfM-SLAM 3D reconstruction algorithm along with a decentralized data collection workflow to map indoor construction work locations in 3D at any desired frequency. The hybrid 3D reconstruction method combines an SfM pipeline coupled with Multi-View Stereo (MVS) to generate 3D point clouds and a Simultaneous Localization and Mapping (SLAM) algorithm to register the separately formed models together. Both the SfM and SLAM pipelines are built on binary Oriented FAST and Rotated BRIEF (ORB) descriptors, which tightly couples the two reconstruction workflows and enables fast computation. To elaborate the data capture workflow and validate the proposed method, a case study was conducted on a real-world construction site. Compared to state-of-the-art methods, our preliminary results show a decrease in both registration error and processing time, demonstrating the potential of using daily images captured by different trades, coupled with weekly walkthrough videos captured by a field engineer, for complete 3D visual monitoring of indoor construction operations.
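The coupling described above rests on a single binary feature type shared by both pipelines. As a minimal illustration (not the authors' implementation), the following sketch extracts ORB descriptors from two overlapping frames with OpenCV and matches them by Hamming distance; file names and parameter values are placeholders.

```python
import cv2

# Detect ORB keypoints and binary descriptors in two overlapping frames and
# match them with a Hamming-distance brute-force matcher. This only sketches
# the shared binary-descriptor front end that couples SfM and SLAM.
img1 = cv2.imread("frame_000.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("frame_001.jpg", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=2000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
print(f"{len(matches)} putative ORB correspondences")
```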

2020 ◽  
Vol 12 (3) ◽  
pp. 351 ◽  
Author(s):  
Seyyed Meghdad Hasheminasab ◽  
Tian Zhou ◽  
Ayman Habib

Imagery acquired by unmanned aerial vehicles (UAVs) has been widely used for three-dimensional (3D) reconstruction/modeling in various digital agriculture applications, such as phenotyping, crop monitoring, and yield prediction. 3D reconstruction from well-textured UAV-based images has matured, and the user community has access to several commercial and open-source tools that provide accurate products at a high level of automation. However, in some applications, such as digital agriculture, repetitive image patterns prevent these approaches from always producing reliable or complete products. The main limitation of these techniques is their inability to establish a sufficient number of correctly matched features among overlapping images, causing incomplete and/or inaccurate 3D reconstruction. This paper provides two structure from motion (SfM) strategies, which use trajectory information provided by an onboard survey-grade global navigation satellite system/inertial navigation system (GNSS/INS) and system calibration parameters. The main difference between the proposed strategies is that the first one, denoted as partially GNSS/INS-assisted SfM, implements the four stages of an automated triangulation procedure, namely, image matching, relative orientation parameters (ROPs) estimation, exterior orientation parameters (EOPs) recovery, and bundle adjustment (BA). The second strategy, denoted as fully GNSS/INS-assisted SfM, removes the EOPs estimation step while introducing a random sample consensus (RANSAC)-based strategy for removing matching outliers before the BA stage. Both strategies modify the image matching by restricting the search space for conjugate points. They also implement a linear procedure for ROP refinement. Finally, they use the GNSS/INS information in modified collinearity equations for a simpler BA procedure that could be used for refining system calibration parameters. Eight datasets over six agricultural fields are used to evaluate the performance of the developed strategies. In comparison with a traditional SfM framework and Pix4D Mapper Pro, the proposed strategies are able to generate denser and more accurate 3D point clouds as well as orthophotos without any gaps.
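The idea of restricting the search space for conjugate points can be sketched as follows: an approximate object point is projected into a neighbouring image using a trajectory-derived pose, and only keypoints near the prediction are kept as match candidates. The projection model, symbols, and the 50-pixel window below are illustrative assumptions, not the paper's formulation.

```python
import numpy as np

def project(K, R, C, X):
    """Collinearity-style projection of a world point X into an image with
    intrinsics K, rotation R (world to camera), and camera position C."""
    x_cam = R @ (X - C)              # world -> camera frame
    x_img = K @ x_cam
    return x_img[:2] / x_img[2]      # pixel coordinates

def candidates_in_window(predicted_xy, keypoints_xy, radius=50.0):
    """Return indices of keypoints within `radius` pixels of the prediction,
    i.e. the restricted search space for conjugate-point matching."""
    d = np.linalg.norm(keypoints_xy - predicted_xy, axis=1)
    return np.flatnonzero(d < radius)
```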


Complexity ◽  
2020 ◽  
Vol 2020 ◽  
pp. 1-14
Author(s):  
Panlong Gu ◽  
Fengyu Zhou ◽  
Dianguo Yu ◽  
Fang Wan ◽  
Wei Wang ◽  
...  

RGBD camera-based VSLAM (Visual Simultaneous Localization and Mapping) algorithms are usually applied to assist robots with real-time mapping. However, due to the limited measuring principle, accuracy, and range of the equipped camera, such algorithms have typical disadvantages in large, dynamic scenes with complex lighting, such as poor mapping accuracy, easy loss of the robot's position, and a high cost in computing resources. Regarding these issues, this paper proposes a new method for 3D interior construction that combines laser radar and an RGBD camera, developed on the basis of the Cartographer laser SLAM algorithm. The proposed method takes two main steps. The first step is 3D reconstruction using the Cartographer algorithm and the RGBD camera: the Cartographer algorithm is first applied to calculate the pose of the RGBD camera and to generate a submap, and the real-time 3D point cloud generated by the RGBD camera is then inserted into the submap to complete the real-time interior construction. The second step improves Cartographer's loop-closure quality with visual loop closure in order to correct the generated map. Compared with traditional methods in large-scale indoor scenes, the proposed algorithm shows higher precision, faster speed, and stronger robustness, especially in contexts with complex lighting and dynamic objects.
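A minimal sketch of the first step, assuming the laser SLAM back end already provides the camera pose as a 4x4 homogeneous transform: the RGBD point cloud is moved into the map frame and appended to the current submap. The names and the plain-array submap representation are placeholders, not Cartographer's actual data structures.

```python
import numpy as np

def insert_rgbd_into_submap(submap_points, rgbd_points_cam, T_map_cam):
    """Transform Nx3 camera-frame RGBD points into the map frame using the
    pose estimated by the laser SLAM, then append them to the submap."""
    ones = np.ones((len(rgbd_points_cam), 1))
    pts_h = np.hstack([rgbd_points_cam, ones])      # homogeneous coordinates
    pts_map = (T_map_cam @ pts_h.T).T[:, :3]        # apply 4x4 pose
    return np.vstack([submap_points, pts_map])
```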


Author(s):  
Zhengxi Song ◽  
Qi Wu ◽  
Xue Wang ◽  
Qing Wang

Aiming at the problem of incomplete trajectories in the 2D epipolar image of a circular light field, this paper proposes a 3D reconstruction method based on the 3D Hough transform. The method computes 3D point clouds by estimating the parameters of feature trajectories in the 3D image volume. By analyzing the 3D distribution of circular light field trajectories, binary curves in the image volume are extracted, and their local orientations are further estimated with the 3D structure tensor. A 3D Hough space generation and parameter selection method is proposed for detecting the 3D curves. The parameters of these curves are converted to 3D point clouds on each view and then merged into the final 3D reconstruction. The ambiguity of the Hough transform solution on the 2D epipolar image is overcome by this 3D analysis. Experiments are carried out on both synthetic and real datasets, and the results show that the method improves reconstruction performance compared with the state of the art in circular light fields.
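For the orientation estimation step, a generic 3D structure tensor can be sketched as below: smoothed gradients are accumulated into a 3x3 tensor per voxel, and the eigenvector of the smallest eigenvalue approximates the local curve direction. The smoothing scales are assumed values and this is not the paper's exact implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def structure_tensor_orientation(volume, sigma_grad=1.0, sigma_win=2.0):
    """Estimate a per-voxel curve direction in a 3D image volume."""
    volume = np.asarray(volume, dtype=float)
    orders = [(1, 0, 0), (0, 1, 0), (0, 0, 1)]
    g = [gaussian_filter(volume, sigma_grad, order=o) for o in orders]
    J = np.empty(volume.shape + (3, 3))
    for i in range(3):
        for j in range(3):
            J[..., i, j] = gaussian_filter(g[i] * g[j], sigma_win)
    # For a curve-like structure, the eigenvector belonging to the smallest
    # eigenvalue of the tensor points along the curve.
    eigvals, eigvecs = np.linalg.eigh(J)
    return eigvecs[..., :, 0]
```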


2021 ◽  
Vol 13 (9) ◽  
pp. 1859
Author(s):  
Xiangyang Liu ◽  
Yaxiong Wang ◽  
Feng Kang ◽  
Yang Yue ◽  
Yongjun Zheng

The characteristic parameters of Citrus grandis var. Longanyou canopies are important when measuring yield and spraying pesticides. However, the feasibility of canopy reconstruction methods based on point clouds has not been confirmed for these canopies. Therefore, LiDAR point cloud data for C. grandis var. Longanyou were obtained to facilitate the management of groves of this species. A cloth simulation filter and a Euclidean clustering algorithm were then used to extract individual canopies. After calculating canopy height and width, canopy reconstruction and volume calculation were realized using six approaches: a manual method and five algorithms based on point clouds (convex hull, CH; convex hull by slices; voxel-based, VB; alpha-shape, AS; alpha-shape by slices, ASBS). ASBS is an innovative algorithm that combines AS with slice-wise optimization and can best approximate the actual canopy shape. The volume with the highest accuracy was obtained from the ASBS algorithm, the CH algorithm had the shortest computation time, and the R2 values of the VCH, VVB, VAS, and VASBS estimates were all above 0.87. In addition, a theoretical but preliminary system suitable for calculating the canopy volume of C. grandis var. Longanyou was developed, which provides a theoretical reference for the efficient and accurate realization of future functional modules such as accurate plant protection, orchard obstacle avoidance, and biomass estimation.
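As a hedged illustration of two of the simpler volume estimates compared above, the snippet below computes a whole-canopy convex hull volume and a slice-wise convex hull volume along the height axis; the slice thickness is an assumed parameter and the alpha-shape variants are not reproduced here.

```python
import numpy as np
from scipy.spatial import ConvexHull

def volume_convex_hull(points):
    """Volume of the 3D convex hull of an Nx3 canopy point cloud (CH)."""
    return ConvexHull(points).volume

def volume_convex_hull_by_slices(points, dz=0.1):
    """Sum of per-slice 2D hull areas times slice thickness along z."""
    z = points[:, 2]
    total = 0.0
    for z0 in np.arange(z.min(), z.max(), dz):
        xy = points[(z >= z0) & (z < z0 + dz)][:, :2]
        if len(xy) >= 3:                          # need a non-degenerate hull
            total += ConvexHull(xy).volume * dz   # 2D "volume" is the area
    return total
```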


Author(s):  
Y. Yang ◽  
S. Song ◽  
C. Toth

Abstract. Place recognition, or loop closure, is a technique for recognizing landmarks and/or scenes previously visited by a mobile sensing platform in an area. The technique is a key function for robust Simultaneous Localization and Mapping (SLAM) in any environment, including global positioning system (GPS)-denied environments, because it enables global optimization that compensates for the drift of dead-reckoning navigation systems. Place recognition in 3D point clouds is a challenging task that is traditionally handled with the aid of other sensors, such as cameras and GPS. Unfortunately, visual place recognition techniques may be impacted by changes in illumination and texture, and GPS may perform poorly in urban areas. To mitigate this problem, state-of-the-art Convolutional Neural Network (CNN)-based 3D descriptors may be applied directly to 3D point clouds. In this work, we investigated the performance of different classification strategies utilizing a cutting-edge CNN-based 3D global descriptor (PointNetVLAD) for the place recognition task on the Oxford RobotCar dataset.
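The retrieval behind descriptor-based place recognition can be sketched as a nearest-neighbour search over global descriptors, for example those produced by PointNetVLAD. The descriptor files, dimensionality, and distance threshold below are assumptions for illustration, not the classification strategies evaluated in the paper.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

# Database of global descriptors for previously mapped submaps and one query
# descriptor; both files are hypothetical placeholders.
db_desc = np.load("database_descriptors.npy")      # shape (N, D)
query = np.load("query_descriptor.npy").reshape(1, -1)

nn = NearestNeighbors(n_neighbors=5, metric="euclidean").fit(db_desc)
dist, idx = nn.kneighbors(query)
is_revisit = dist[0, 0] < 0.3                      # threshold is a placeholder
print("top-5 candidates:", idx[0], "revisit detected:", is_revisit)
```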


Sensors ◽  
2019 ◽  
Vol 19 (10) ◽  
pp. 2364 ◽  
Author(s):  
Martina Cignetti ◽  
Danilo Godone ◽  
Aleksandra Wrzesniak ◽  
Daniele Giordan

Structure from Motion (SfM) is a powerful tool for producing 3D point clouds from sequences of images taken with different remote sensing technologies. Using this approach to process images captured by Remotely Piloted Aircraft Systems (RPAS), historical aerial photographs, and smartphones constitutes a valuable solution for the identification and characterization of active landslides. We applied SfM to process all the acquired and available images for the study of the Champlas du Col landslide, a complex slope instability reactivated in spring 2018 in the Piemonte Region (north-western Italy). This latest reactivation of the slide, due principally to snow melting at the end of the winter season, interrupted the main road used to reach Sestriere, one of the most famous ski resorts in north-western Italy. We tested how SfM can be applied to high-resolution multisource datasets by processing: (i) historical aerial photographs collected from five different regional flights, (ii) RGB and multi-spectral images acquired by two RPAS at different times, and (iii) terrestrial image sequences of the most representative kinematic elements produced by the evolution of the landslide. In addition, we obtained an overall framework of the historical development of the area of interest and distinguished several generations of landslides. Moreover, an in-depth geomorphological characterization of the Champlas du Col landslide reactivation was carried out by testing a cost-effective and rapid methodology based on SfM principles, which is easily repeatable for characterizing and investigating active landslides.


2019 ◽  
Vol 11 (10) ◽  
pp. 1204 ◽  
Author(s):  
Yue Pan ◽  
Yiqing Dong ◽  
Dalei Wang ◽  
Airong Chen ◽  
Zhen Ye

Three-dimensional (3D) digital technology is essential to the maintenance and monitoring of cultural heritage sites. In the field of bridge engineering, 3D models generated from point clouds of existing bridges are drawing increasing attention. The widespread use of unmanned aerial vehicles (UAVs) now provides a practical solution for generating 3D point clouds and models, which can drastically reduce the manual effort and cost involved. In this study, we present a semi-automated framework for generating structural surface models of heritage bridges. Specifically, we propose to tackle this challenge via a novel top-down method for segmenting main bridge components, combined with rule-based classification, to produce labeled 3D models from UAV photogrammetric point clouds. The point clouds of the heritage bridge are generated from the captured UAV images through the structure-from-motion workflow. A segmentation method is developed based on the supervoxel structure and global graph optimization, which can effectively separate bridge components based on geometric features. Recognition using a classification tree and bridge geometry is then applied to identify the different structural elements in the obtained segments. Finally, surface modeling is conducted to generate surface models of the recognized elements. Experiments on two bridges in China demonstrate the potential of the presented structural model reconstruction method using UAV photogrammetry and point cloud processing for the 3D digital documentation of heritage bridges. Using the given markers, the reconstruction error of the point clouds can be as small as 0.4%. Moreover, the precision and recall of the segmentation results on the test data are better than 0.8, and a recognition accuracy better than 0.8 is achieved.
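A toy version of the rule-based recognition stage is sketched below: each segmented cluster is assigned a coarse label from simple geometric cues such as extent and slenderness. The thresholds and class names are invented for illustration and do not reproduce the paper's classification tree.

```python
import numpy as np

def classify_segment(points):
    """Assign a coarse bridge-component label to an Nx3 point segment."""
    extent = points.max(axis=0) - points.min(axis=0)   # dx, dy, dz
    planar = max(extent[0], extent[1])
    if extent[2] < 1.0 and planar > 10.0:
        return "deck"    # thin, horizontally extended segment
    if extent[2] > 3.0 and planar < 3.0:
        return "pier"    # tall, slender segment
    return "other"
```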


2018 ◽  
Vol 30 (4) ◽  
pp. 660-670 ◽  
Author(s):  
Akira Shibata ◽  
Yukari Okumura ◽  
Hiromitsu Fujii ◽  
Atsushi Yamashita ◽  
Hajime Asama ◽  
...  

Structure from motion is a three-dimensional (3D) reconstruction method that uses a single camera. However, the absolute scale of objects cannot be recovered by the conventional structure from motion method. In our previous studies, we proposed a scale-reconstructible structure from motion method that solves this problem by exploiting refraction. In our measurement system, a refractive plate is fixed in front of a camera and images are captured through this plate. To handle the resulting geometrical constraints, we derived an extended essential equation that theoretically accounts for the effect of refraction. By applying this formula to 3D measurements, the absolute scale of an object can be obtained. However, this method was verified only by simulation under ideal conditions, without taking into account real phenomena such as noise or occlusion, which inevitably occur in actual measurements. In this study, to apply the method robustly to actual measurements with real images, we introduce a novel bundle adjustment method based on the refraction effect. This optimization technique can reduce the 3D reconstruction errors caused by measurement noise in actual scenes. In particular, we propose a new error function that considers the effect of refraction. By minimizing the value of this error function, accurate 3D reconstruction results can be obtained. To evaluate the effectiveness of the proposed method, experiments using both simulations and real images were conducted. The simulation results show that the proposed method is theoretically accurate, and the experiments with real images show that it is effective for real 3D measurements.
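A generic bundle-adjustment skeleton is sketched below: reprojection residuals over all observations are stacked and minimised with a nonlinear least-squares solver. Only the standard pinhole residual is shown; the refraction-aware error function that is the paper's contribution is not reproduced, and all names are illustrative.

```python
import numpy as np
import cv2
from scipy.optimize import least_squares

def reprojection_residuals(params, n_cams, n_pts, K, obs):
    """params = [rvec, tvec per camera, then 3D points];
    obs = iterable of (camera_index, point_index, u, v) observations."""
    cams = params[:n_cams * 6].reshape(n_cams, 6)
    pts = params[n_cams * 6:].reshape(n_pts, 3)
    res = []
    for cam_i, pt_j, u, v in obs:
        rvec, tvec = cams[int(cam_i), :3], cams[int(cam_i), 3:]
        proj, _ = cv2.projectPoints(pts[int(pt_j)].reshape(1, 3),
                                    rvec, tvec, K, None)
        res.extend(proj.ravel() - np.array([u, v]))
    return np.asarray(res)

# Usage (x0 is an initial guess of camera poses and 3D points):
# result = least_squares(reprojection_residuals, x0,
#                        args=(n_cams, n_pts, K, obs), method="trf")
```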


2021 ◽  
Vol 2021 ◽  
pp. 1-15
Author(s):  
Ryuhei Ando ◽  
Yuko Ozasa ◽  
Wei Guo

The automation of plant phenotyping using 3D imaging techniques is indispensable. However, conventional methods for reconstructing the leaf surface from 3D point clouds involve a trade-off between the accuracy of the surface reconstruction and the method's robustness against noise and missing points. To mitigate this trade-off, we developed a leaf surface reconstruction method that reduces the effects of noise and missing points while maintaining surface reconstruction accuracy by capturing two components of the leaf (the overall shape and the distortion of that shape) separately using leaf-specific properties. This separation simplifies leaf surface reconstruction compared with conventional methods while increasing robustness against noise and missing points. To evaluate the proposed method, we reconstructed leaf surfaces from 3D point clouds of leaves acquired from two crop species (soybean and sugar beet) and compared the results with those of conventional methods. The results showed that the proposed method robustly reconstructed the leaf surfaces despite noise and missing points for the two different leaf shapes. To evaluate the stability of the reconstructions, we also calculated the surface areas of the target leaves over 14 consecutive days. The results derived from the proposed method showed less variation and fewer outliers than those of the conventional methods.
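For the surface-area evaluation, a generic estimate can be obtained by triangulating the reconstructed leaf points and summing triangle areas, as sketched below. This assumes the leaf projects cleanly onto the x-y plane and is not the shape/distortion model proposed above.

```python
import numpy as np
from scipy.spatial import Delaunay

def leaf_surface_area(points):
    """Approximate surface area of an Nx3 leaf point cloud by triangulating
    its x-y projection and summing the 3D areas of the triangles."""
    tri = Delaunay(points[:, :2])
    p = points[tri.simplices]                      # (n_tri, 3, 3) vertices
    cross = np.cross(p[:, 1] - p[:, 0], p[:, 2] - p[:, 0])
    return 0.5 * np.linalg.norm(cross, axis=1).sum()
```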


Author(s):  
F.I. Apollonio ◽  
A. Ballabeni ◽  
M. Gaiani ◽  
F. Remondino

Every day, new tools and algorithms for automated image processing and 3D reconstruction become available, making it possible to process large networks of unoriented and markerless images and deliver sparse 3D point clouds in reasonable processing time. In this paper we evaluate several feature-based methods used to automatically extract the tie points necessary for calibration and orientation procedures, in order to better understand their performance for 3D reconstruction purposes. The tests, based on the analysis of the SIFT algorithm and its most widely used variants, processed several datasets and analysed various parameters and outcomes (e.g. number of oriented cameras, average rays per 3D point, average intersection angle per 3D point, theoretical precision of the computed 3D object coordinates, etc.).
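As a minimal example of the kind of tie-point extraction evaluated here, the snippet below detects SIFT features in an image pair with OpenCV and filters matches with Lowe's ratio test; the 0.8 ratio and default detector settings are common choices, not values from the paper.

```python
import cv2

# Extract SIFT tie-point candidates between two overlapping views.
img1 = cv2.imread("view_a.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("view_b.jpg", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

matcher = cv2.BFMatcher(cv2.NORM_L2)
good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
        if m.distance < 0.8 * n.distance]
print(f"{len(good)} tie-point candidates after the ratio test")
```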

