Assessing lean and positional error of individual mature Douglas-fir (Pseudotsuga menziesii) trees using active and passive sensors

2020 ◽  
Vol 50 (11) ◽  
pp. 1228-1243
Author(s):  
Cory G. Garms ◽  
Chase H. Simpson ◽  
Christopher E. Parrish ◽  
Michael G. Wing ◽  
Bogdan M. Strimbu

There is a growing demand for point cloud data that can produce reliable single-tree measurements. The most common platforms for obtaining such data are unmanned aircraft systems with passive sensors (UAS), unmanned aircraft equipped with aerial lidar scanners (ALS), and mobile lidar scanners (MLS). Our objectives were to compare the capabilities of the UAS, ALS, and MLS to locate treetops and stems and to estimate tree lean. The platforms were used to produce overlapping point clouds of a mature Douglas-fir (Pseudotsuga menziesii (Mirb.) Franco) stand, from which 273 trees were manually identified. Control trees were used to test tree detection accuracy of four algorithms and the number of stems detectable using each platform. Tree lean was calculated in two ways: using the stem location near the canopy and using the treetop. The treetops were detected more accurately from ALS and UAS clouds than from MLS, but the MLS outperformed ALS and UAS in stem detection. The platform influenced treetop detection accuracy, whereas the algorithms did not. The height estimates from the ALS and MLS were correlated (R² = 0.96), but the MLS height estimates were unreliable, especially as distance from the scanner increased. The lean estimates using the stem locations or treetop locations produced analogous distributions for all three platforms.
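Both lean estimates described above reduce to the same trigonometric relation: the angle from vertical of the line joining the stem base to a reference point (the treetop, or the stem location near the canopy). A minimal sketch in Python, with hypothetical coordinates:

```python
import math

def lean_angle_deg(base_xy, ref_xy, height):
    """Lean angle from vertical, given the horizontal offset between the
    stem base and a reference point (treetop or upper-stem location)
    and the vertical distance between them."""
    dx = ref_xy[0] - base_xy[0]
    dy = ref_xy[1] - base_xy[1]
    horizontal = math.hypot(dx, dy)
    return math.degrees(math.atan2(horizontal, height))

# A 2 m horizontal offset over 40 m of height is roughly a 2.9 deg lean.
print(round(lean_angle_deg((0.0, 0.0), (2.0, 0.0), 40.0), 1))
```

Using the treetop as the reference point folds any crown sweep into the estimate, whereas an upper-stem location isolates the bole, which is why the two definitions can differ per tree even when their distributions agree.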

1998 ◽  
Vol 28 (10) ◽  
pp. 1509-1517 ◽  
Author(s):  
O García

Conventional top height estimates are biased if the area of the sample plot differs from that on which the definition is based. Sources of bias include a sampling selection effect and spatial autocorrelation. The problem was studied in relation to the use of data sets with varying spatial detail for modelling Douglas-fir (Pseudotsuga menziesii (Mirb.) Franco) plantation growth. Improved top height estimators, developed taking into account the selection effect, eliminated the bias. Bias was reduced, but not eliminated completely, when the estimators were tested using more highly autocorrelated eucalypt data.
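A conventional top height definition (not specified in the abstract, but commonly the mean height of the 100 largest-DBH stems per hectare) illustrates why plot area enters the estimate: the number of sample trees scales with the plot's area, so a plot larger or smaller than the definitional area selects a different fraction of the stand. A sketch under that assumed definition:

```python
def top_height(heights, dbhs, plot_area_ha, trees_per_ha=100):
    """Conventional top height: mean height of the largest-DBH trees,
    at a nominal density of `trees_per_ha` (commonly 100 stems/ha).
    The sample size scales with plot area, which is the source of the
    selection-effect bias discussed above."""
    n = max(1, round(trees_per_ha * plot_area_ha))
    ranked = sorted(zip(dbhs, heights), reverse=True)  # largest DBH first
    top = [h for _, h in ranked[:n]]
    return sum(top) / len(top)

# On a 0.05 ha plot, top height averages the 5 largest-DBH trees.
heights = [30, 28, 35, 33, 31, 29, 27, 34, 32, 26]
dbhs    = [40, 35, 55, 50, 45, 38, 33, 52, 47, 30]
print(top_height(heights, dbhs, 0.05))
```

Because the estimator takes a maximum-like selection, its expectation rises with sample size; the improved estimators mentioned above correct for this selection effect rather than changing the definition itself.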


2020 ◽  
Vol 473 ◽  
pp. 118284 ◽  
Author(s):  
Samuel Grubinger ◽  
Nicholas C. Coops ◽  
Michael Stoehr ◽  
Yousry A. El-Kassaby ◽  
Arko Lucieer ◽  
...  

2019 ◽  
Vol 11 (22) ◽  
pp. 2715 ◽  
Author(s):  
Chuyen Nguyen ◽  
Michael J. Starek ◽  
Philippe Tissot ◽  
James Gibeaut

Dense three-dimensional (3D) point cloud data sets generated by Terrestrial Laser Scanning (TLS) and Unmanned Aircraft System based Structure-from-Motion (UAS-SfM) photogrammetry have different characteristics and provide different representations of the underlying land cover. Despite these differences, a common challenge associated with both technologies is how best to take advantage of these large data sets, often several hundred million points, to efficiently extract relevant information. Given their size and complexity, the data sets cannot be efficiently and consistently separated into homogeneous features without automated segmentation algorithms. This research aims to evaluate the performance and generalizability of an unsupervised clustering method, originally developed for segmentation of TLS point cloud data in marshes, by extending it to UAS-SfM point clouds. A combination of two sets of features is extracted from both datasets: “core” features that can be extracted from any 3D point cloud and “sensor-specific” features unique to the imaging modality. Comparisons of segmented results based on producer’s and user’s accuracies allow for identifying the advantages and limitations of each dataset and determining the generalizability of the clustering method. The producer’s accuracies suggest that UAS-SfM (94.7%) better represents tidal flats, while TLS (99.5%) is slightly more suitable for vegetated areas. The user’s accuracies suggest that UAS-SfM outperforms TLS in vegetated areas, with 98.6% of points identified as vegetation actually falling in vegetated areas, whereas TLS outperforms UAS-SfM in tidal flat areas with 99.2% user’s accuracy. Results demonstrate that the clustering method initially developed for TLS point cloud data transfers well to UAS-SfM point cloud data, enabling consistent and accurate segmentation of marsh land cover via an unsupervised method.
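The abstract does not enumerate the “core” features, but a common choice of sensor-agnostic point cloud descriptors is the eigenvalue-based linearity, planarity, and sphericity of each point's local neighborhood, computable from geometry alone. A sketch of that assumed feature set:

```python
import numpy as np

def core_features(neighborhood):
    """Eigenvalue-based geometric features of a local point neighborhood
    (an N x 3 array): linearity, planarity, sphericity. These depend only
    on point geometry, so they are computable from any 3D point cloud
    regardless of the sensor that produced it."""
    pts = neighborhood - neighborhood.mean(axis=0)
    cov = pts.T @ pts / len(pts)
    l3, l2, l1 = np.linalg.eigvalsh(cov)  # ascending: l3 <= l2 <= l1
    return (l1 - l2) / l1, (l2 - l3) / l1, l3 / l1

# A perfectly flat 10 x 10 grid of points scores planarity 1.0.
patch = np.array([[x, y, 0.0] for x in range(10) for y in range(10)])
lin, pla, sph = core_features(patch)
print(round(lin, 2), round(pla, 2), round(sph, 2))
```

“Sensor-specific” features (e.g., lidar intensity for TLS, color for UAS-SfM) would then be appended to this geometric vector before clustering.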


Author(s):  
Cory Glenn Garms ◽  
Bogdan Strimbu

The value of Douglas-fir (Pseudotsuga menziesii), the predominant commercial species in the Pacific Northwest, depends on tree verticality; trees with the same dimensions can differ substantially in value due to lean. The objective of this study was to assess the impact of tree lean on the estimation of stem dimensions using high-density terrestrial mobile lidar point clouds. We estimated lean with two metrics: the horizontal distance between stem centers at 1.3 m and 18 m, and the mean of seven successive lean angles along the tree bole (at 1, 3, 5, 7, 10, 12, and 15 m). For modeling, we used four existing taper equations and three existing volume equations. For trees leaning >2°, we enhanced the existing volume models by including lean as a predictor. Because lean estimates depend on the distribution and number of points describing the stem, including the distance from scanner to tree further improved the computed volume. When DBH was replaced with diameter at heights between 7 and 10 m, the volume models for leaning trees improved significantly, whereas vertical trees had favorable results at heights between 5 and 15 m. Our study suggests that including lean magnitude improves estimates of stem volume when lean exceeds 2°.
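The second lean metric, the mean of successive lean angles at fixed heights along the bole, can be sketched as follows, using hypothetical stem-center positions (the coordinates and per-metre drift are illustrative, not taken from the study):

```python
import math

# Measurement heights along the bole, as used in the study (metres).
HEIGHTS = [1, 3, 5, 7, 10, 12, 15]

def mean_bole_lean_deg(centers, heights):
    """Mean of the successive lean angles between consecutive
    stem-center (x, y) estimates along the bole."""
    angles = []
    for (x0, y0), (x1, y1), h0, h1 in zip(centers, centers[1:],
                                          heights, heights[1:]):
        run = math.hypot(x1 - x0, y1 - y0)       # horizontal drift
        angles.append(math.degrees(math.atan2(run, h1 - h0)))
    return sum(angles) / len(angles)

# A stem drifting 0.035 m horizontally per metre of height leans ~2 deg,
# right at the threshold above which lean entered the volume models.
centers = [(0.035 * h, 0.0) for h in HEIGHTS]
print(round(mean_bole_lean_deg(centers, HEIGHTS), 1))
```

Averaging segment angles tolerates a locally swept or noisy bole better than the single base-to-18 m offset, since no one segment dominates the estimate.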


2021 ◽  
Vol 2021 ◽  
pp. 1-9
Author(s):  
Wanyi Zhang ◽  
Xiuhua Fu ◽  
Wei Li

3D object detection from point cloud data in unmanned driving scenes has long been a research hotspot in autonomous-driving sensing technology. With the development and maturity of deep neural network technology, methods that use neural networks to detect three-dimensional targets have begun to show great advantages. Experimental results show that mismatch between anchors and training samples degrades detection accuracy, a problem that has not been well solved. The contributions of this paper are as follows. First, deformable convolution is introduced into the point cloud object detection network for the first time, enhancing the network's adaptability to vehicles with different orientations and shapes. Second, a new method for generating anchors in the region proposal network (RPN) is proposed, which effectively prevents mismatch between anchors and ground truth and removes the angle-classification loss from the loss function. Compared with the state-of-the-art method, the AP and AOS of the detection results are improved.


Author(s):  
Jiayong Yu ◽  
Longchen Ma ◽  
Maoyi Tian ◽  
Xiushan Lu

The unmanned aerial vehicle (UAV)-mounted mobile LiDAR system (ULS) is widely used for geomatics owing to its efficient data acquisition and convenient operation. However, due to the limited carrying capacity of a UAV, sensors integrated in the ULS must be small and lightweight, which decreases the density of the collected scanning points and, in turn, hampers registration between image data and point cloud data. To address this issue, the authors propose a method for registering and fusing ULS sequence images and laser point clouds, wherein the problem of registering point cloud data and image data is converted into a problem of matching feature points between two images. First, a point cloud is selected to produce an intensity image. Subsequently, the corresponding feature points of the intensity image and the optical image are matched, and exterior orientation parameters are solved using a collinearity equation based on image position and orientation. Finally, the sequence images are fused with the laser point cloud, based on the Global Navigation Satellite System (GNSS) time index of the optical image, to generate a true-color point cloud. The experimental results show the higher registration accuracy and fusion speed of the proposed method, thereby demonstrating its accuracy and effectiveness.
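The first step, producing an intensity image from a point cloud, amounts to rasterizing the intensity channel onto a top-down grid so that standard 2D feature matching can be applied. A minimal sketch (the grid cell size and mean-averaging rule are assumptions, not taken from the paper):

```python
import numpy as np

def intensity_image(points, intensities, cell=0.1):
    """Rasterize a point cloud's intensity channel onto a top-down grid.
    `points` is an N x 3 array of (x, y, z); each grid cell takes the
    mean intensity of the points that fall into it."""
    xy = points[:, :2]
    origin = xy.min(axis=0)
    idx = ((xy - origin) / cell).astype(int)      # per-point cell index
    h, w = idx[:, 1].max() + 1, idx[:, 0].max() + 1
    img_sum = np.zeros((h, w))
    img_cnt = np.zeros((h, w))
    np.add.at(img_sum, (idx[:, 1], idx[:, 0]), intensities)
    np.add.at(img_cnt, (idx[:, 1], idx[:, 0]), 1)
    return np.divide(img_sum, img_cnt,
                     out=np.zeros_like(img_sum), where=img_cnt > 0)

pts = np.array([[0.0, 0.0, 1.0], [0.05, 0.05, 1.0], [0.15, 0.0, 1.0]])
img = intensity_image(pts, np.array([10.0, 20.0, 30.0]))
print(img.shape, img[0, 0], img[0, 1])
```

Once both modalities are images, corresponding feature points can be matched and the exterior orientation solved via the collinearity equations, as described above.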


2021 ◽  
Vol 13 (11) ◽  
pp. 2195
Author(s):  
Shiming Li ◽  
Xuming Ge ◽  
Shengfu Li ◽  
Bo Xu ◽  
Zhendong Wang

Today, mobile laser scanning and oblique photogrammetry are two standard urban remote sensing acquisition methods, and the cross-source point-cloud data obtained using these methods have significant differences and complementarity. Accurate co-registration can make up for the limitations of a single data source, but many existing registration methods face critical challenges. Therefore, in this paper, we propose a systematic incremental registration method that can successfully register MLS and photogrammetric point clouds in the presence of large amounts of missing data, large variations in point density, and scale differences. The robustness of this method is due to its elimination of noise in the extracted linear features and its 2D incremental registration strategy. There are three main contributions of our work: (1) the development of an end-to-end automatic cross-source point-cloud registration method; (2) a way to effectively extract the linear features and restore the scale; and (3) an incremental registration strategy that simplifies the complex registration process. The experimental results show that this method can successfully achieve cross-source data registration, while other methods have difficulty obtaining satisfactory registration results efficiently. Moreover, this method can be extended to more point-cloud sources.


2021 ◽  
Vol 10 (5) ◽  
pp. 345
Author(s):  
Konstantinos Chaidas ◽  
George Tataris ◽  
Nikolaos Soulakellis

In a post-earthquake scenario, the semantic enrichment of 3D building models with seismic damage is crucial from the perspective of disaster management. This paper presents the methodology and results for Level of Detail 3 (LOD3) building modelling (after an earthquake) with semantic enrichment describing the seismic damage based on the European Macroseismic Scale (EMS-98). The study area is the Vrisa traditional settlement on the island of Lesvos, Greece, which was affected by a devastating earthquake of Mw = 6.3 on 12 June 2017. The applied methodology consists of the following steps: (a) unmanned aircraft systems (UAS) nadir and oblique images are acquired and photogrammetrically processed for 3D point cloud generation, (b) 3D building models are created based on the 3D point clouds and (c) the 3D building models are transformed into the LOD3 City Geography Markup Language (CityGML) standard with enriched semantics describing the seismic damage of every part of the building (walls, roof, etc.). The results show that, by following this methodology, CityGML LOD3 models can be generated and enriched with buildings’ seismic damage. These models can assist in the decision-making process during the recovery phase of a settlement as well as be the basis for its monitoring over time. Finally, these models can contribute to the estimation of the reconstruction cost of the buildings.


2021 ◽  
Vol 13 (8) ◽  
pp. 1584
Author(s):  
Pedro Martín-Lerones ◽  
David Olmedo ◽  
Ana López-Vidal ◽  
Jaime Gómez-García-Bermejo ◽  
Eduardo Zalama

As the basis for analysis and management of heritage assets, 3D laser scanning and photogrammetric 3D reconstruction have proven to be adequate techniques for point cloud data acquisition. The European Directive 2014/24/EU imposes BIM Level 2 for government centrally procured projects as a collaborative process of producing federated discipline-specific models. Although BIM software resources are intensifying and increasingly growing, distinct specifications for heritage (H-BIM) are essential to drive particular processes and tools that efficiently shift from point clouds to meaningful information ready to be exchanged using non-proprietary formats, such as Industry Foundation Classes (IFC). This paper details a procedure for processing enriched 3D point clouds in the REVIT software package, chosen for its worldwide popularity and close integration with the BIM concept. The procedure is additionally supported by a tailored plug-in that makes high-quality 3D digital survey datasets usable together with 2D imaging, enhancing the capability to depict contextualized, important graphical data to properly plan conservation actions. As a practical example, an enhanced 2D/3D combination is used to accurately include in a BIM project the length, orientation, and width of a large crack on the walls of the Castle of Torrelobatón (Spain), a representative heritage building.


2021 ◽  
Vol 11 (13) ◽  
pp. 6016
Author(s):  
Jinsoo Kim ◽  
Jeongho Cho

For autonomous vehicles, it is critical to be aware of the driving environment to avoid collisions and drive safely. The recent evolution of convolutional neural networks has contributed significantly to accelerating the development of object detection techniques that enable autonomous vehicles to handle rapid changes in various driving environments. However, collisions in an autonomous driving environment can still occur due to undetected obstacles and various perception problems, particularly occlusion. Thus, we propose a robust object detection algorithm for environments in which objects are truncated or occluded, employing RGB images and light detection and ranging (LiDAR) bird’s eye view (BEV) representations. This structure combines independent detection results obtained in parallel through “you only look once” networks using an RGB image and a height map converted from the BEV representation of LiDAR’s point cloud data (PCD). The region proposal of an object is determined via non-maximum suppression, which suppresses the bounding boxes of adjacent regions. A performance evaluation of the proposed scheme was performed using the KITTI vision benchmark suite dataset. The results demonstrate that detection accuracy with integrated PCD BEV representations is superior to that with an RGB camera alone. In addition, robustness is improved by significantly enhancing detection accuracy even when target objects viewed from the front are partially occluded, which demonstrates that the proposed algorithm outperforms the conventional RGB-based model.
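Non-maximum suppression, used above to merge the parallel detection results, greedily keeps the highest-scoring box and discards neighbours that overlap it too much. A generic sketch (the IoU threshold of 0.5 is an assumption, not taken from the paper):

```python
def iou(a, b):
    """Intersection-over-union of two [x1, y1, x2, y2] boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, thresh=0.5):
    """Greedy non-maximum suppression: repeatedly keep the
    highest-scoring remaining box and drop neighbours whose
    overlap with it exceeds `thresh`."""
    order = sorted(range(len(boxes)), key=lambda i: -scores[i])
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order if iou(boxes[best], boxes[i]) <= thresh]
    return keep

boxes = [[0, 0, 10, 10], [1, 1, 11, 11], [20, 20, 30, 30]]
scores = [0.9, 0.8, 0.7]
print(nms(boxes, scores))  # the second box overlaps the first and is dropped
```

Applying NMS over the union of the RGB and BEV detections lets whichever modality scores an object higher supply the surviving box, which is what makes the fusion robust to occlusions visible in only one view.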

