Forest Structural Complexity Tool—An Open Source, Fully-Automated Tool for Measuring Forest Point Clouds

2021 ◽  
Vol 13 (22) ◽  
pp. 4677
Author(s):  
Sean Krisanski ◽  
Mohammad Sadegh Taskhiri ◽  
Susana Gonzalez Aracil ◽  
David Herries ◽  
Allie Muneri ◽  
...  

Forest mensuration remains critical to managing our forests sustainably; however, capturing such measurements remains costly and time-consuming, and provides only minimal information such as diameter at breast height (DBH), location, and height. Plot-scale remote sensing techniques show great promise in extracting detailed forest measurements rapidly and cheaply; however, they have been held back from large-scale implementation by the complex and time-consuming workflows required to use them. This work describes and evaluates an approach to creating a robust, sensor-agnostic, and fully automated forest point cloud measurement tool called the Forest Structural Complexity Tool (FSCT). The performance of FSCT is evaluated using 49 forest plots of terrestrial laser scanned (TLS) point clouds and 7022 destructively sampled manual diameter measurements of the stems. FSCT matched 5141 of the reference diameter measurements fully automatically, with mean, median, and root mean squared error (RMSE) of 0.032 m, 0.02 m, and 0.103 m, respectively. A video demonstration is also provided to qualitatively illustrate the diversity of point cloud datasets the tool can measure. FSCT is provided as open source, with the goal of enabling plot-scale remote sensing techniques to replace most structural forest mensuration in research and industry. Future work on this project will make incremental improvements to the methodology to further improve its reliability and accuracy on most high-resolution forest point clouds.
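The reported error statistics can be reproduced for any set of matched measurements; a minimal sketch (the function name and the use of absolute errors are assumptions, since the abstract does not state whether signed or absolute errors were summarised):

```python
import numpy as np

def diameter_error_stats(measured, reference):
    """Mean, median and RMSE of diameter errors between matched
    automatic (e.g. FSCT) and reference DBH measurements, in metres."""
    err = np.abs(np.asarray(measured, float) - np.asarray(reference, float))
    mean = float(err.mean())
    median = float(np.median(err))
    rmse = float(np.sqrt(np.mean(err ** 2)))
    return mean, median, rmse
```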

Author(s):  
W. Ostrowski ◽  
M. Pilarska ◽  
J. Charyton ◽  
K. Bakuła

Creating 3D building models at large scale is becoming more popular and has many applications. Nowadays, the broad term "3D building models" covers several types of products: the well-known CityGML solid models (available at a few Levels of Detail), which are mainly generated from Airborne Laser Scanning (ALS) data, as well as 3D mesh models that can be created from both nadir and oblique aerial images. City authorities and national mapping agencies are interested in obtaining such 3D building models. Apart from the completeness of the models, accuracy is also important. The final accuracy of a building model depends on various factors (accuracy of the source data, complexity of the roof shapes, etc.). In this paper, a methodology for inspecting datasets containing 3D models is presented. The proposed approach checks every building in the dataset against ALS point clouds, testing both accuracy and level of detail. Analysing statistical parameters of the normal heights between the reference point cloud and the tested planes, combined with point cloud segmentation, provides a tool that can indicate which buildings and which roof planes do not fulfil the requirements of model accuracy and detail correctness. The proposed method was tested on two datasets: a solid model and a mesh model.
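The per-plane check described above can be sketched as point-to-plane residual statistics against the ALS cloud; a minimal illustration (the function names, the plane representation ax + by + cz + d = 0, and the 0.10 m tolerance are assumptions for illustration, not the paper's actual procedure):

```python
import numpy as np

def plane_residuals(points, plane):
    """Signed distances from ALS points to a model roof plane ax+by+cz+d=0."""
    n = np.asarray(plane[:3], float)
    d = float(plane[3])
    return (np.asarray(points, float) @ n + d) / np.linalg.norm(n)

def flag_plane(points, plane, tol=0.10):
    """Flag a roof plane whose RMSE against its ALS points exceeds `tol` metres."""
    r = plane_residuals(points, plane)
    rmse = float(np.sqrt(np.mean(r ** 2)))
    return rmse > tol, rmse
```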


2019 ◽  
Vol 12 (1) ◽  
pp. 112 ◽  
Author(s):  
Dong Lin ◽  
Lutz Bannehr ◽  
Christoph Ulrich ◽  
Hans-Gerd Maas

Thermal imagery is widely used in various fields of remote sensing. In this study, a novel processing scheme is developed to process data acquired by the oblique airborne photogrammetric system AOS-Tx8, consisting of four thermal cameras and four RGB cameras, with the goal of large-scale thermal attribute mapping. In order to merge 3D RGB data and 3D thermal data, registration is conducted in four steps: first, thermal and RGB point clouds are generated independently by applying structure from motion (SfM) photogrammetry to both the thermal and RGB imagery. Next, a coarse point cloud registration is performed with the support of georeferencing data (Global Positioning System, GPS). Subsequently, a fine point cloud registration is conducted by octree-based iterative closest point (ICP). Finally, three different texture mapping strategies are compared. Experimental results showed that global image pose refinement outperforms the other two strategies in registration accuracy between the thermal imagery and the RGB point cloud. Potential building thermal leakages in large areas can be quickly detected in the generated texture mapping results. Furthermore, a combination of the proposed workflow and the oblique airborne system allows for a detailed thermal analysis of building roofs and facades.
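The fine registration step rests on iterative closest point. A minimal point-to-point ICP sketch, using a k-d tree for correspondences and the closed-form SVD (Kabsch) rigid alignment, illustrates the idea; the paper's octree-based variant is not reproduced here, and the function name is an assumption:

```python
import numpy as np
from scipy.spatial import cKDTree

def icp(source, target, iters=20):
    """Point-to-point ICP: find nearest neighbours in `target` for each
    source point, solve the rigid alignment by SVD, and iterate.
    Returns (R, t) such that R @ p + t maps original source points to target."""
    src = np.asarray(source, float).copy()
    tgt = np.asarray(target, float)
    tree = cKDTree(tgt)
    R_total, t_total = np.eye(3), np.zeros(3)
    for _ in range(iters):
        _, idx = tree.query(src)          # nearest-neighbour correspondences
        matched = tgt[idx]
        cs, cm = src.mean(0), matched.mean(0)
        H = (src - cs).T @ (matched - cm)  # cross-covariance
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:           # guard against reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = cm - R @ cs
        src = src @ R.T + t                # apply the incremental transform
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total
```

In practice the coarse GPS-based alignment supplies the initial guess, so ICP only has to resolve a small residual motion.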


2020 ◽  
Vol 12 (1) ◽  
pp. 178 ◽  
Author(s):  
Jinming Zhang ◽  
Xiangyun Hu ◽  
Hengming Dai ◽  
ShenRun Qu

It is difficult to extract a digital elevation model (DEM) from an airborne laser scanning (ALS) point cloud in a forest area because of the irregular and uneven distribution of ground and vegetation points. Machine learning, especially deep learning methods, has shown powerful feature extraction capabilities for point cloud classification. However, most existing deep learning frameworks, such as PointNet, the dynamic graph convolutional neural network (DGCNN), and SparseConvNet, do not consider the particularities of ALS point clouds. For large-scene laser point clouds, current data preprocessing methods are mostly based on random sampling, which is not suitable for DEM extraction tasks. In this study, we propose a novel data sampling algorithm, named T-Sampling, for the data preparation of patch-based training and classification. T-Sampling uses the set of the lowest points in a certain area as basic points, supplemented by other points, which guarantees the integrity of the terrain in the sampling area. In the learning part, we propose a new terrain-based convolution model named Tin-EdgeConv that fully considers the spatial relationship between ground and non-ground points when constructing a directed graph. We design a new network based on Tin-EdgeConv to extract local features, and use the PointNet architecture to extract global context information. Finally, we combine this information effectively with a designed attention fusion module. These aspects are important in achieving high classification accuracy. We evaluate the proposed method using large-scale data from forest areas. Results show that our method is more accurate than existing algorithms.
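The core idea of T-Sampling, keeping the lowest point per area so the terrain survives sampling, can be illustrated with a simple grid version (the cell size and function name are assumptions; the paper's actual algorithm also adds supplementary points on top of these basic points):

```python
import numpy as np

def lowest_points_per_cell(points, cell=2.0):
    """Select the lowest point in each XY grid cell: a terrain-preserving
    alternative to random sampling for DEM-oriented patch preparation."""
    pts = np.asarray(points, float)
    keys = np.floor(pts[:, :2] / cell).astype(np.int64)
    lowest = {}                      # cell -> index of lowest point so far
    for i, k in enumerate(map(tuple, keys)):
        if k not in lowest or pts[i, 2] < pts[lowest[k], 2]:
            lowest[k] = i
    return pts[sorted(lowest.values())]
```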


2020 ◽  
Vol 12 (11) ◽  
pp. 1875 ◽  
Author(s):  
Jingwei Zhu ◽  
Joachim Gehrung ◽  
Rong Huang ◽  
Björn Borgmann ◽  
Zhenghao Sun ◽  
...  

In the past decade, a vast number of strategies, methods, and algorithms have been developed to explore the semantic interpretation of 3D point clouds for extracting desirable information. To assess the performance of these algorithms and methods, public standard benchmark datasets are needed, serving as an indicator and ruler for evaluation and comparison. In this work, we introduce and present large-scale Mobile LiDAR point clouds acquired at the city campus of the Technical University of Munich, which have been manually annotated and can be used for the evaluation of related algorithms and methods for semantic point cloud interpretation. We created three datasets from a measurement campaign conducted in April 2016: a benchmark dataset for semantic labeling, test data for instance segmentation, and annotated single 360° laser scans as test data. These datasets cover approximately 1 km of roadways in an urban area and include more than 40 million annotated points labeled with eight object classes. Moreover, experiments were carried out with several baseline methods, whose results were compared and analyzed, revealing the quality of this dataset and its effectiveness for performance evaluation.


Sensors ◽  
2020 ◽  
Vol 20 (23) ◽  
pp. 6815
Author(s):  
Cheng Yi ◽  
Dening Lu ◽  
Qian Xie ◽  
Jinxuan Xu ◽  
Jun Wang

Global inspection of large-scale tunnels is a fundamental yet challenging task for ensuring the structural stability of tunnels and driving safety. Advanced LiDAR scanners, which sample tunnels into 3D point clouds, are making their debut in Tunnel Deformation Inspection (TDI). However, the acquired raw point clouds inevitably contain noticeable occlusions, missing areas, and noise/outliers. Considering the tunnel as a geometric sweeping feature, we propose an effective tunnel deformation inspection algorithm that extracts the global spatial axis from the poor-quality raw point cloud. Essentially, we convert tunnel axis extraction into an iterative fitting optimization problem. Specifically, given the scanned raw point cloud of a tunnel, the initial design axis is sampled to generate a series of normal planes within the corresponding Frenet frame; these planes are then intersected with the tunnel point cloud to yield a sequence of cross sections. The cross sections are fitted with circles, and the fitted circle centers are approximated with a B-Spline curve, which is taken as the updated axis. This procedure of circle fitting and B-Spline approximation repeats iteratively until convergence, that is, until the distance of each fitted circle center to the current axis is smaller than a given threshold. In this way, the spatial axis of the tunnel can be accurately obtained. Subsequently, according to the practical mechanism of tunnel deformation, we design a segmentation approach that partitions the cross sections into meaningful pieces, from which various inspection parameters regarding tunnel deformation can be computed automatically. A variety of practical experiments have demonstrated the feasibility and effectiveness of our inspection method.
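The "circle fitting" half of the iteration can be sketched with the algebraic (Kåsa) least-squares fit, one common choice for fitting circles to cross sections (the abstract does not state which fitting method the authors use). The fitted centers would then be approximated with a B-Spline, e.g. via `scipy.interpolate.splprep`, and the loop repeated until convergence:

```python
import numpy as np

def fit_circle(xy):
    """Algebraic (Kasa) least-squares circle fit to a 2D cross section.
    Solves x^2 + y^2 + D*x + E*y + F = 0 for D, E, F, then recovers
    the center (-D/2, -E/2) and radius sqrt(cx^2 + cy^2 - F)."""
    xy = np.asarray(xy, float)
    A = np.column_stack([xy[:, 0], xy[:, 1], np.ones(len(xy))])
    b = -(xy[:, 0] ** 2 + xy[:, 1] ** 2)
    (D, E, F), *_ = np.linalg.lstsq(A, b, rcond=None)
    cx, cy = -D / 2, -E / 2
    r = np.sqrt(cx ** 2 + cy ** 2 - F)
    return (cx, cy), r
```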


2022 ◽  
Vol 14 (2) ◽  
pp. 367
Author(s):  
Zhen Zheng ◽  
Bingting Zha ◽  
Yu Zhou ◽  
Jinbo Huang ◽  
Youshi Xuchen ◽  
...  

This paper proposes a single-stage adaptive multi-scale noise filtering algorithm for point clouds based on feature information, addressing the difficulty that current laser point cloud noise filtering algorithms have in quickly completing single-stage adaptive filtering of multi-scale noise. Feature information for each point of the point cloud is obtained using an efficient k-dimensional (k-d) tree data structure and amended normal vector estimation methods, and an adaptive threshold is used to divide the point cloud into large-scale noise, a feature-rich region, and a flat region, reducing computation time. The large-scale noise is removed directly, while the feature-rich and flat regions are filtered via an improved bilateral filtering algorithm and a weighted average filtering algorithm based on grey relational analysis, respectively. Simulation results show that the proposed algorithm outperforms state-of-the-art comparison algorithms. It was thus verified that the proposed algorithm can quickly and adaptively (i) filter out large-scale noise, (ii) smooth small-scale noise, and (iii) effectively preserve the geometric features of the point cloud. The developed algorithm offers a starting point for filtering pre-processing methods applicable to 3D measurement, remote sensing, and target recognition based on point clouds.
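A k-d-tree-based per-point feature of the kind described, here the surface-variation ratio λ_min/Σλ from the local covariance, followed by a threshold split into flat, feature-rich, and noise regions, can be sketched as follows (the feature choice, thresholds, and function names are assumptions for illustration, not the paper's amended normal vector method):

```python
import numpy as np
from scipy.spatial import cKDTree

def surface_variation(points, k=10):
    """Per-point feature lambda_min / (lambda_1+lambda_2+lambda_3) from the
    covariance of the k nearest neighbours (found via a k-d tree).
    Near 0 on flat surfaces; larger on edges, corners, and noise."""
    pts = np.asarray(points, float)
    tree = cKDTree(pts)
    _, idx = tree.query(pts, k=k)
    feat = np.empty(len(pts))
    for i, nb in enumerate(idx):
        w = np.linalg.eigvalsh(np.cov(pts[nb].T))  # ascending eigenvalues
        feat[i] = w[0] / max(w.sum(), 1e-12)
    return feat

def split_by_threshold(feat, flat_thr, noise_thr):
    """Three-way split: flat region, feature-rich region, large-scale noise."""
    labels = np.full(len(feat), "feature", dtype=object)
    labels[feat < flat_thr] = "flat"
    labels[feat > noise_thr] = "noise"
    return labels
```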


Author(s):  
K. Bittner ◽  
P. d’Angelo ◽  
M. Körner ◽  
P. Reinartz

<p><strong>Abstract.</strong> Three-dimensional building reconstruction from remote sensing imagery is one of the most difficult and important 3D modeling problems for complex urban environments. The main data sources in remote sensing that provide a digital representation of the Earth's surface and of the related natural, cultural, and man-made objects of urban areas are <i>digital surface models (DSMs)</i>. DSMs can be obtained either by <i>light detection and ranging (LIDAR)</i>, by SAR interferometry, or from stereo images. Our approach relies on automatic global 3D building shape refinement from stereo DSMs using deep learning techniques. This refinement is necessary because DSMs extracted from image matching point clouds suffer from occlusions, outliers, and noise. Though most previous works have shown promising results for building modeling, this topic remains an open research area. We present a new methodology which not only generates images with continuous values representing the elevation models but, at the same time, enhances the 3D object shapes, buildings in our case. Specifically, we train a <i>conditional generative adversarial network (cGAN)</i> to generate accurate LIDAR-like DSM height images from noisy stereo DSM input. The obtained results demonstrate the strong potential of creating large-area remote sensing depth images in which the buildings exhibit better-quality shapes and roof forms.</p>


Author(s):  
L. Gézero ◽  
C. Antunes

In the last few years, LiDAR sensors installed in terrestrial vehicles have proven to be an efficient way to collect very dense 3D georeferenced information. The possibility of creating very dense point clouds representing the surface surrounding the sensor at a given moment, in a very fast, detailed, and easy way, shows the potential of this technology for large-scale cartography and digital terrain model production. However, there are still some limitations associated with this technology. When several acquisitions of the same area are made with the same device, differences between the clouds can be observed. Those differences can range from a few centimetres to several tens of centimetres, mainly in urban and high-vegetation areas where occlusion of the GNSS signal degrades the georeferenced trajectory. In this article, a different point cloud registration method is proposed. In addition to its efficiency and speed of execution, the main advantages of the method are that the adjustment is made continuously along the trajectory, based on GPS time. The process is fully automatic, and only information recorded in standard LAS files is used, without the need for any auxiliary information, in particular regarding the trajectory.
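The key idea, a correction that varies continuously with GPS time rather than a single rigid shift per cloud, can be illustrated by interpolating per-axis offsets along the trajectory (a hypothetical sketch; the control times/offsets and function name are assumptions, not the article's estimation method):

```python
import numpy as np

def apply_trajectory_correction(points, gps_time, ctrl_times, ctrl_offsets):
    """Shift each point by an XYZ offset interpolated over its GPS time
    stamp, so the adjustment follows the trajectory continuously."""
    pts = np.asarray(points, float).copy()
    off = np.asarray(ctrl_offsets, float)
    for axis in range(3):
        pts[:, axis] += np.interp(gps_time, ctrl_times, off[:, axis])
    return pts
```

Because every LAS point record already carries a GPS time, no auxiliary trajectory file is needed to apply such a correction.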


Author(s):  
T. Shinohara ◽  
H. Xiu ◽  
M. Matsuoka

Abstract. This study introduces a novel image-to-point-cloud translation method based on a conditional generative adversarial network that creates large-scale 3D point clouds. From aerial images, it can generate point clouds like those observed via airborne LiDAR. The network is composed of an encoder that produces latent features of the input images, a generator that translates the latent features into fake point clouds, and a discriminator that classifies point clouds as real or fake. The encoder is a pre-trained ResNet; to overcome the difficulty of generating 3D point clouds of an outdoor scene, we use a FoldingNet with features from the ResNet. After a fixed number of iterations, our generator can produce fake point clouds that correspond to the input image. Experimental results show that our network can learn and generate certain point clouds using data from the 2018 IEEE GRSS Data Fusion Contest.


Author(s):  
G. Vacca

<p><strong>Abstract.</strong> In the photogrammetric process of the 3D reconstruction of an object or a building, multi-image orientation is one of the most important tasks, and it often includes simultaneous camera calibration. The accuracy of image orientation and camera calibration significantly affects the quality and accuracy of all subsequent photogrammetric processes, such as determining the spatial coordinates of individual points or 3D modeling. In the context of artificial vision, a full-field analysis procedure is used, which leads to so-called Structure from Motion (SfM), comprising the simultaneous determination of the camera's internal and external orientation parameters and of the 3D model. Such procedures were originally designed and developed within photogrammetry, but their greatest development and innovation came from computer vision from the late 1990s onwards, together with the SfM method. Reconstructions based on this method were initially useful for visualization purposes rather than for photogrammetry and mapping. Thanks to advances in computer technology and performance, a large number of images can now be automatically oriented in an arbitrarily defined coordinate system by different algorithms, often available in open source software (VisualSFM, Bundler, PMVS2, CMVS, etc.) or as web services (Microsoft Photosynth, Autodesk 123D Catch, My3DScanner, etc.). However, it is important to assess the accuracy and reliability of these automated procedures. This paper presents the results obtained from low close-range photogrammetric surveys of a dome, processed with several open source packages using the Structure from Motion approach: VisualSFM, OpenDroneMap (ODM) and Regard3D. The photogrammetric surveys were also processed with the commercial software Photoscan by Agisoft.</p><p>For the photogrammetric survey we used a Canon EOS M3 digital camera (24.2 Megapixel, pixel size 3.72&thinsp;µm).
We also surveyed the dome with the Faro Focus 3D TLS. Only one scan was carried out, from ground level, at a resolution setting of &frac14; with 3x quality, corresponding to a resolution of 7&thinsp;mm at 10&thinsp;m. Both the TLS point cloud and the Photoscan point cloud were used as references to validate the point clouds from VisualSFM, OpenDroneMap and Regard3D. The validation was done using the CloudCompare open source software.</p>
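The validation against the TLS and Photoscan reference clouds amounts to nearest-neighbour cloud-to-cloud distances, the basic C2C comparison CloudCompare provides; a minimal sketch (the function name and the returned statistics are assumptions):

```python
import numpy as np
from scipy.spatial import cKDTree

def cloud_to_cloud(test_cloud, reference_cloud):
    """Distance from every test point to its nearest reference point,
    summarised as mean / RMSE / max for validating a point cloud."""
    d, _ = cKDTree(np.asarray(reference_cloud, float)).query(
        np.asarray(test_cloud, float))
    return {"mean": float(d.mean()),
            "rmse": float(np.sqrt(np.mean(d ** 2))),
            "max": float(d.max())}
```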

