Using semi-global matching point clouds to estimate growing stock at the plot and stand levels: application for a broadleaf-dominated forest in central Europe

2015 ◽  
Vol 45 (1) ◽  
pp. 111-123 ◽  
Author(s):  
Christoph Stepper ◽  
Christoph Straub ◽  
Hans Pretzsch

Dense image-based point clouds have great potential to accurately assess forest attributes such as growing stock. The objective of this study was to combine height and spectral information obtained from UltraCamXp stereo images to model the growing stock in a highly structured broadleaf-dominated forest (77.5 km2) in southern Germany. We used semi-global matching (SGM) to generate a dense point cloud and subtracted elevation values obtained from airborne laser scanner (ALS) data to compute canopy height. Sixty-seven explanatory variables were derived from the point cloud and an orthoimage for use in the model. Two different approaches — the linear regression model (lm) and the random forests model (rf) — were tested. We investigated the impact that varying amounts of training data had on model performance. Plot data from a previously acquired set of 1875 inventory plots was systematically eliminated to form three progressively less dense subsets of 937, 461, and 226 inventory plots. Model evaluation at the plot level (size: 500 m2) yielded relative root mean squared errors (RMSEs) ranging from 31.27% to 35.61% for lm and from 30.92% to 36.02% for rf. At the stand level (mean stand size: 32 ha), RMSEs from 14.76% to 15.73% for lm and from 13.87% to 14.99% for rf were achieved. Therefore, similar results were obtained from both modeling approaches. The reduction in the number of inventory plots did not considerably affect the precision. Our findings underline the potential for aerial stereo imagery in combination with ALS-based terrain heights to support forest inventory and management.
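
A minimal sketch (not the authors' code) of the two core steps described above: normalising SGM surface heights with ALS terrain heights to get canopy height, and fitting a random forest regressor to plot-level metrics. All variable names, the synthetic inputs, and the number of predictors are placeholders, not values from the study.

```python
# Sketch, assuming plot metrics and field volumes are already tabulated.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Hypothetical inputs: SGM surface elevations and ALS terrain elevations
# sampled at the same XY positions; their difference is the canopy height.
sgm_z = rng.uniform(400.0, 430.0, size=5000)            # surface elevations (m)
als_terrain_z = sgm_z - rng.uniform(0.0, 30.0, 5000)     # terrain elevations (m)
canopy_height = sgm_z - als_terrain_z
p90 = np.percentile(canopy_height, 90)                   # example height metric for one plot

# Hypothetical plot-level predictors (height percentiles, densities, spectral
# means, ...) and observed growing stock for a reduced set of inventory plots.
X = rng.normal(size=(226, 10))                           # 226 plots x 10 metrics
y = 200 + 50 * X[:, 0] + rng.normal(scale=30, size=226)  # growing stock (m3/ha)

rf = RandomForestRegressor(n_estimators=500, random_state=0)
rmse = -cross_val_score(rf, X, y, scoring="neg_root_mean_squared_error", cv=5)
print(f"relative RMSE: {100 * rmse.mean() / y.mean():.1f}%")
```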

Author(s):  
K. Thoeni ◽  
A. Giacomini ◽  
R. Murtagh ◽  
E. Kniest

This work presents a comparative study between multi-view 3D reconstruction using various digital cameras and a terrestrial laser scanner (TLS). Five different digital cameras were used in order to estimate the limits related to the camera type and to establish the minimum camera requirements needed to obtain results comparable to those of the TLS. The cameras used for this study ranged from commercial grade to professional grade and included a GoPro Hero 1080 (5 Mp), an iPhone 4S (8 Mp), a Panasonic Lumix LX5 (9.5 Mp), a Panasonic Lumix ZS20 (14.1 Mp) and a Canon EOS 7D (18 Mp). The TLS used for this work was a FARO Focus 3D laser scanner with a range accuracy of ±2 mm. The study area is a small rock wall of about 6 m height and 20 m length. The wall is partly smooth with some evident geological features, such as non-persistent joints and sharp edges. Eight control points were placed on the wall and their coordinates were measured with a total station. These coordinates were then used to georeference all models. A similar number of images was acquired from distances of approximately 5 to 10 m, depending on the field of view of each camera. The commercial software package PhotoScan was used to process the images, georeference and scale the models, and generate the dense point clouds. Finally, the open-source package CloudCompare was used to assess the accuracy of the multi-view results. The point cloud obtained from each camera was compared to the point cloud obtained with the TLS, which was taken as ground truth. The result is a coloured point cloud for each camera showing the deviation from the TLS data. The main goal of this study is to quantify, as objectively as possible, the quality of the multi-view 3D reconstruction results obtained with the various cameras and to evaluate their applicability to geotechnical problems.
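
A minimal sketch of the kind of cloud-to-cloud comparison described above, using Open3D rather than the CloudCompare/PhotoScan tools the study actually employed. The file names are placeholders, and the colouring scheme is only illustrative.

```python
# Sketch: deviation of a camera-derived cloud from a TLS reference cloud.
import numpy as np
import open3d as o3d

camera_cloud = o3d.io.read_point_cloud("camera_model.ply")  # hypothetical file
tls_cloud = o3d.io.read_point_cloud("tls_reference.ply")    # hypothetical file

# For each camera point, distance to the nearest TLS point (TLS = ground truth).
dists = np.asarray(camera_cloud.compute_point_cloud_distance(tls_cloud))
print(f"mean deviation: {dists.mean() * 1000:.1f} mm, "
      f"95th percentile: {np.percentile(dists, 95) * 1000:.1f} mm")

# Colour the camera cloud by deviation for visual inspection, as CloudCompare does.
colors = np.zeros((len(dists), 3))
colors[:, 0] = np.clip(dists / dists.max(), 0, 1)  # red channel encodes error
camera_cloud.colors = o3d.utility.Vector3dVector(colors)
o3d.io.write_point_cloud("camera_vs_tls.ply", camera_cloud)
```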


Drones ◽  
2021 ◽  
Vol 5 (4) ◽  
pp. 145
Author(s):  
Alessandra Capolupo

A proper classification of 3D point clouds allows their potential to be fully exploited in assessing and preserving cultural heritage. Point cloud classification workflows are commonly based on the selection and extraction of geometric features. Although several studies have investigated the impact of geometric features on the accuracy of classification outcomes, only a few works have focused on the accuracy and reliability of the features themselves. This paper investigates the accuracy of 3D point cloud geometric features through a statistical analysis based on their corresponding eigenvalues and covariances, with the aim of exploiting their effectiveness for cultural heritage classification. The proposed approach was applied separately to two high-quality 3D point clouds of the All Saints’ Monastery of Cuti (Bari, Southern Italy), generated using two competing survey techniques: Remotely Piloted Aircraft System (RPAS) photogrammetry based on Structure from Motion (SfM) and Multi-View Stereo (MVS), and Terrestrial Laser Scanning (TLS). Point cloud compatibility was guaranteed through re-alignment and co-registration of the data. The accuracy of the geometric features obtained from the RPAS photogrammetric and TLS models was then analyzed and presented. Lastly, a discussion of the convergences and divergences of these results is provided.
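
A minimal sketch (not the paper's exact statistical analysis) of the eigenvalue-based geometric features that such classifications rely on: the covariance of each point's local neighbourhood is decomposed into eigenvalues, from which linearity, planarity and sphericity follow. The neighbourhood size and the synthetic cloud are assumptions.

```python
# Sketch: per-point covariance eigenvalues and classic eigenvalue features.
import numpy as np
from scipy.spatial import cKDTree

points = np.random.default_rng(1).normal(size=(2000, 3))  # placeholder cloud
tree = cKDTree(points)
k = 20  # neighbourhood size (assumed)

features = np.zeros((len(points), 3))
for i, p in enumerate(points):
    _, idx = tree.query(p, k=k)
    cov = np.cov(points[idx].T)                     # 3x3 covariance of neighbours
    l1, l2, l3 = np.sort(np.linalg.eigvalsh(cov))[::-1]  # l1 >= l2 >= l3 >= 0
    features[i] = ((l1 - l2) / l1,                  # linearity
                   (l2 - l3) / l1,                  # planarity
                   l3 / l1)                         # sphericity

print(features.mean(axis=0))
```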


Author(s):  
Y. Xie ◽  
K. Schindler ◽  
J. Tian ◽  
X. X. Zhu

Abstract. Deep learning models achieve excellent semantic segmentation results for airborne laser scanning (ALS) point clouds if sufficient training data are provided. Increasing amounts of annotated data are becoming publicly available thanks to contributors from all over the world. However, models trained on a specific dataset typically exhibit poor performance on other datasets; that is, there are significant domain shifts, as data captured in different environments or by distinct sensors have different distributions. In this work, we study this domain shift and potential strategies to mitigate it, using two popular ALS datasets: the ISPRS Vaihingen benchmark from Germany and the LASDU benchmark from China. We compare different training strategies for cross-city ALS point cloud semantic segmentation. In our experiments, we analyse three factors that may lead to domain shift and affect learning: point cloud density, LiDAR intensity, and the role of data augmentation. Moreover, we evaluate a well-known standard method of domain adaptation, deep CORAL (Sun and Saenko, 2016). In our experiments, adapting the point cloud density and appropriate data augmentation both help to reduce the domain gap and improve segmentation accuracy. In contrast, intensity features can bring an improvement within a dataset but deteriorate generalisation across datasets. Deep CORAL does not further improve accuracy over the simple adaptation of density and data augmentation, although it can mitigate the impact of an improperly chosen point density, intensity features, and further dataset biases such as a lack of diversity.
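
A minimal sketch of the CORAL loss (Sun and Saenko, 2016) that the evaluated deep CORAL baseline builds on, written as a generic PyTorch function rather than the authors' training code. The batch sizes, feature dimension and loss weight are placeholders.

```python
# Sketch: squared Frobenius distance between source and target feature covariances.
import torch

def coral_loss(source_feats: torch.Tensor, target_feats: torch.Tensor) -> torch.Tensor:
    d = source_feats.size(1)

    def covariance(x):
        x = x - x.mean(dim=0, keepdim=True)
        return x.t() @ x / (x.size(0) - 1)

    cs, ct = covariance(source_feats), covariance(target_feats)
    return ((cs - ct) ** 2).sum() / (4.0 * d * d)

# Example: features of a source batch (e.g. Vaihingen) and a target batch (e.g. LASDU).
src = torch.randn(64, 128)
tgt = torch.randn(64, 128)
segmentation_loss = torch.tensor(0.0)                     # placeholder task loss
total_loss = segmentation_loss + 0.5 * coral_loss(src, tgt)  # weight is assumed
print(total_loss.item())
```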


2021 ◽  
Vol 13 (13) ◽  
pp. 2494
Author(s):  
Gaël Kermarrec ◽  
Niklas Schild ◽  
Jan Hartmann

T-splines have recently been introduced to represent objects of arbitrary shape using a smaller number of control points than the conventional non-uniform rational B-splines (NURBS) or B-spline representations in computer-aided design, computer graphics and reverse engineering. They are flexible in representing complex surface shapes and economic in terms of parameters as they enable local refinement. This property is a great advantage when dense, scattered and noisy point clouds, such as those from a terrestrial laser scanner (TLS), are approximated using least squares fitting. Unfortunately, when the goodness of fit of the surface approximation is judged on a real dataset, only a noisy point cloud is available as reference: (i) a low root mean squared error (RMSE) can be linked with overfitting, i.e., a fitting of the noise, and should correspondingly be avoided, and (ii) a high RMSE is synonymous with a lack of detail. To address the challenge of judging the approximation, the reference surface should be entirely known: this can be achieved by printing a mathematically defined T-spline reference surface in three dimensions (3D) and modeling the artefacts induced by the 3D printing. Once the object is scanned under different configurations, it is possible to assess the goodness of fit of the approximation for a noisy and potentially gappy point cloud and compare it with the traditional but less flexible NURBS. The advantages of T-spline local refinement open the door for further applications within a geodetic context, such as rigorous statistical testing of deformation. Two different scans from a slightly deformed object were approximated; we found that more than 40% of the computational time could be saved, without affecting the goodness of fit of the surface approximation, by using the same mesh for the two epochs.
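
A minimal sketch of the evaluation idea only, not of T-spline fitting itself: a smoothing-spline surface (a simpler stand-in for the T-spline/NURBS approximations) is fitted to noisy scattered points, and the RMSE is computed once against the noisy data and once against a mathematically known reference surface, illustrating why a printed reference surface is needed to judge the fit. The surface, noise level and smoothing parameter are assumptions.

```python
# Sketch: RMSE against noisy data vs. RMSE against a known reference surface.
import numpy as np
from scipy.interpolate import SmoothBivariateSpline

rng = np.random.default_rng(2)
x, y = rng.uniform(0, 1, 3000), rng.uniform(0, 1, 3000)
z_true = np.sin(2 * np.pi * x) * np.cos(2 * np.pi * y)        # known reference
z_noisy = z_true + rng.normal(scale=0.01, size=z_true.shape)  # simulated scanner noise

fit = SmoothBivariateSpline(x, y, z_noisy, s=len(x) * 0.01**2)
z_fit = fit.ev(x, y)

rmse_vs_noise = np.sqrt(np.mean((z_fit - z_noisy) ** 2))  # can reward overfitting
rmse_vs_truth = np.sqrt(np.mean((z_fit - z_true) ** 2))   # needs a known reference
print(f"RMSE vs noisy data: {rmse_vs_noise:.4f}, RMSE vs reference: {rmse_vs_truth:.4f}")
```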


Sensors ◽  
2021 ◽  
Vol 21 (3) ◽  
pp. 884
Author(s):  
Chia-Ming Tsai ◽  
Yi-Horng Lai ◽  
Yung-Da Sun ◽  
Yu-Jen Chung ◽  
Jau-Woei Perng

Numerous sensors can obtain images or point cloud data on land; underwater, however, the rapid attenuation of electromagnetic signals and the lack of light restrict sensing functions. This study expands the utilization of two- and three-dimensional detection technologies to underwater applications in order to detect abandoned tires. A three-dimensional acoustic sensor, the BV5000, is used in this study to collect underwater point cloud data. Some pre-processing steps are proposed to remove noise and the seabed from the raw data. The point clouds are then processed to obtain two data types: a 2D image and a 3D point cloud. Deep learning methods operating in the corresponding dimensions are used to train the models. In the two-dimensional method, the point cloud is transformed into a bird’s-eye-view image, and the Faster R-CNN and YOLOv3 network architectures are used to detect tires. In the three-dimensional method, the point cloud associated with a tire is cut out from the raw data and used as training data, and the PointNet and PointConv network architectures are then used for tire classification. The results show that both approaches provide good accuracy.
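
A minimal sketch (an assumed pre-processing step, not the authors' code) of rasterising a 3D point cloud into a bird's-eye-view image that a 2D detector such as Faster R-CNN or YOLOv3 could consume. Cell size, extent and the placeholder cloud are assumptions.

```python
# Sketch: project XYZ points onto an XY grid; each pixel stores the maximum height.
import numpy as np

def to_birds_eye_view(points, cell_size=0.05, extent=((-10, 10), (-10, 10))):
    (xmin, xmax), (ymin, ymax) = extent
    w = int((xmax - xmin) / cell_size)
    h = int((ymax - ymin) / cell_size)
    bev = np.zeros((h, w), dtype=np.float32)

    cols = ((points[:, 0] - xmin) / cell_size).astype(int)
    rows = ((points[:, 1] - ymin) / cell_size).astype(int)
    mask = (cols >= 0) & (cols < w) & (rows >= 0) & (rows < h)
    for r, c, z in zip(rows[mask], cols[mask], points[mask, 2]):
        bev[r, c] = max(bev[r, c], z)          # keep the highest return per cell
    return bev

cloud = np.random.default_rng(3).uniform(-10, 10, size=(10000, 3))  # placeholder
image = to_birds_eye_view(cloud)
print(image.shape)
```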


Author(s):  
Guillermo Oliver ◽  
Pablo Gil ◽  
Jose F. Gomez ◽  
Fernando Torres

Abstract. In this paper, we present a robotic workcell for task automation in footwear manufacturing, covering tasks such as sole digitization, glue dispensing, and sole manipulation from different places within the factory plant. We aim to make progress towards Industry 4.0 in the shoe industry. To achieve this, we have implemented a novel sole grasping method, compatible with soles of different shapes, sizes, and materials, by exploiting the particular characteristics of these objects. Our proposal works well both with low-density point clouds from a single RGBD camera and with dense point clouds obtained from a laser scanner digitizer. The method computes antipodal grasping points from visual data in both cases and does not require prior recognition of the sole. It relies on sole contour extraction using concave hulls and on measuring the curvature of contour areas. Our method was tested both in a simulated environment and in real manufacturing conditions at the INESCOP facilities, processing 20 soles with different sizes and characteristics. Grasps were performed in two different configurations, achieving an average success rate of 97.5% for real grasps of soles without a heel made of materials of low or medium flexibility. In both cases, the grasping method was tested without tactile control throughout the task.
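
A minimal sketch of one idea behind the reported method, heavily simplified: estimating curvature along an ordered 2D sole contour and picking a low-curvature point plus its roughly antipodal counterpart as candidate grasp locations. The contour here is a synthetic ellipse; the paper extracts the real contour with concave hulls.

```python
# Sketch: finite-difference curvature along a closed contour and an antipodal pair.
import numpy as np

t = np.linspace(0, 2 * np.pi, 400, endpoint=False)
contour = np.column_stack((2.0 * np.cos(t), 1.0 * np.sin(t)))  # ellipse "sole"

dx, dy = np.gradient(contour[:, 0]), np.gradient(contour[:, 1])
ddx, ddy = np.gradient(dx), np.gradient(dy)
curvature = np.abs(dx * ddy - dy * ddx) / (dx**2 + dy**2) ** 1.5

# Candidate grasp: the flattest contour point and the point half a contour away.
i = int(np.argmin(curvature))
j = (i + len(contour) // 2) % len(contour)
print("candidate grasp points:", contour[i], contour[j])
```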


Sensors ◽  
2020 ◽  
Vol 21 (1) ◽  
pp. 201
Author(s):  
Michael Bekele Maru ◽  
Donghwan Lee ◽  
Kassahun Demissie Tola ◽  
Seunghee Park

Modeling a structure in the virtual world using three-dimensional (3D) information enhances our understanding, while also aiding in the visualization, of how a structure reacts to any disturbance. Generally, 3D point clouds are used for determining structural behavioral changes. Light detection and ranging (LiDAR) is one of the crucial ways by which a 3D point cloud dataset can be generated. Additionally, 3D cameras are commonly used to develop a point cloud containing many points on the external surface of an object. The main objective of this study was to compare the performance of optical sensors, namely a depth camera (DC) and a terrestrial laser scanner (TLS), in estimating structural deflection. We also applied bilateral filtering, a technique commonly used in image processing, to the point cloud data to enhance its accuracy and increase the application prospects of these sensors in structural health monitoring. The results from these sensors were validated by comparing them with the outputs from a linear variable differential transformer sensor, which was mounted on the beam during an indoor experiment. The results showed that the datasets obtained from both sensors were acceptable for nominal deflections of 3 mm and above because the error range was less than ±10%. However, the results obtained from the TLS were better than those obtained from the DC.
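
A minimal sketch (an assumption, not the study's exact implementation) of a bilateral filter applied to point elevations: noise is smoothed while sharp changes, such as the edge of a deflected region, are preserved because the range kernel down-weights neighbours with very different heights. Radii, kernel widths and the synthetic beam surface are placeholders.

```python
# Sketch: bilateral filtering of the z coordinate with spatial and range kernels.
import numpy as np
from scipy.spatial import cKDTree

def bilateral_filter_z(points, radius=0.05, sigma_s=0.02, sigma_r=0.002):
    tree = cKDTree(points[:, :2])
    z_out = points[:, 2].copy()
    for i, p in enumerate(points):
        idx = tree.query_ball_point(p[:2], r=radius)
        nb = points[idx]
        d_spatial = np.linalg.norm(nb[:, :2] - p[:2], axis=1)
        d_range = np.abs(nb[:, 2] - p[2])
        w = np.exp(-d_spatial**2 / (2 * sigma_s**2) - d_range**2 / (2 * sigma_r**2))
        z_out[i] = np.sum(w * nb[:, 2]) / np.sum(w)
    return z_out

pts = np.random.default_rng(4).normal(scale=[0.5, 0.5, 0.001], size=(5000, 3))
pts[:, 2] = np.where(pts[:, 0] > 0, pts[:, 2] + 0.003, pts[:, 2])  # 3 mm step
print(bilateral_filter_z(pts)[:5])
```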


2019 ◽  
Vol 93 (3) ◽  
pp. 411-429 ◽  
Author(s):  
Maria Immacolata Marzulli ◽  
Pasi Raumonen ◽  
Roberto Greco ◽  
Manuela Persia ◽  
Patrizia Tartarino

Abstract Methods for the three-dimensional (3D) reconstruction of forest trees have been suggested for data from both active and passive sensors. Laser scanner technologies have become popular in the last few years, despite their high costs. With improvements in photogrammetric algorithms (e.g. structure from motion, SfM), photographs have become a new low-cost source of 3D point clouds. In this study, we use images captured by a smartphone camera to calculate dense point clouds of a forest plot using SfM. Eighteen point clouds were produced by changing the densification parameters (Image scale, Point density, Minimum number of matches) in order to investigate their influence on the quality of the point clouds produced. In order to estimate diameter at breast height (d.b.h.) and stem volumes, we developed an automatic method that extracts the stems from the point cloud and then models them with cylinders. The results show that Image scale is the most influential parameter in terms of identifying and extracting trees from the point clouds. The best-performing cylinder models, compared with field data, had RMSEs of 1.9 cm for d.b.h. and 0.094 m3 for volume. Thus, for forest management and planning purposes, it is possible to use our photogrammetric and modelling methods to measure d.b.h., stem volume and possibly other forest inventory metrics rapidly and without felling trees. The proposed methodology significantly reduces working time in the field, using ‘non-professional’ instruments and automating estimates of dendrometric parameters.
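
A minimal sketch of one step of the kind of stem modelling described, not the authors' full cylinder-fitting pipeline: a least-squares circle fit to the points of a stem slice at breast height, from which d.b.h. follows directly. The slice heights, radius and noise level are assumptions.

```python
# Sketch: algebraic (Kasa) least-squares circle fit to a breast-height stem slice.
import numpy as np

def fit_circle(xy):
    A = np.column_stack((2 * xy[:, 0], 2 * xy[:, 1], np.ones(len(xy))))
    b = (xy**2).sum(axis=1)
    (cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    r = np.sqrt(c + cx**2 + cy**2)
    return (cx, cy), r

# Placeholder slice: points extracted between ~1.25 m and ~1.35 m above ground.
rng = np.random.default_rng(5)
theta = rng.uniform(0, 2 * np.pi, 300)
slice_xy = np.column_stack((0.15 * np.cos(theta), 0.15 * np.sin(theta)))
slice_xy += rng.normal(scale=0.005, size=slice_xy.shape)  # ~5 mm noise

centre, radius = fit_circle(slice_xy)
print(f"estimated d.b.h.: {200 * radius:.1f} cm")  # diameter in cm
```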


Author(s):  
M. Franzini ◽  
V. Casella ◽  
P. Marchese ◽  
M. Marini ◽  
G. Della Porta ◽  
...  

Abstract. Recent years have seen a gradual transition from terrestrial to aerial surveying, thanks to the development of UAVs and the sensors designed for them. Many sectors have benefited from this change, among them geology: drones are flexible, cost-efficient, and can support outcrop surveying in many difficult situations, such as inaccessible, steep and high rock faces. The experience acquired in terrestrial surveying, with total stations, GNSS or terrestrial laser scanners (TLS), has not yet been completely transferred to UAV acquisition. Hence, quality comparisons are still needed. The present paper is framed in this perspective, aiming to evaluate the quality of the point clouds generated by a UAV in a geological context; the analysis was conducted by comparing the UAV product with its homologue acquired with a TLS system. Exploiting modern semantic classification, based on eigenfeatures and a support vector machine (SVM), the two point clouds were compared in terms of density and mutual distance. The UAV survey proved its usefulness in this setting, yielding a uniform density distribution over the whole area and producing a point cloud of a quality comparable with that of the more traditional TLS systems.
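
A minimal sketch (an illustration, not the authors' pipeline) of training an SVM on per-point eigenfeatures so that the UAV and TLS clouds can then be compared class by class. The features and labels below are synthetic placeholders; in practice they would come from neighbourhood covariance eigenvalues as in the feature sketch earlier in this listing.

```python
# Sketch: SVM classification of points from eigenvalue-based features.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(6)
eigenfeatures = rng.uniform(size=(5000, 3))         # linearity, planarity, sphericity
labels = (eigenfeatures[:, 1] > 0.5).astype(int)    # e.g. 1 = planar rock face (toy rule)

X_train, X_test, y_train, y_test = train_test_split(
    eigenfeatures, labels, test_size=0.3, random_state=0)

clf = SVC(kernel="rbf", C=10.0, gamma="scale")
clf.fit(X_train, y_train)
print(f"overall accuracy: {accuracy_score(y_test, clf.predict(X_test)):.3f}")
```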


Author(s):  
Song Song ◽  
Youpeng Xu ◽  
Jiali Wang ◽  
Jinkang Du ◽  
Jianxin Zhang ◽  
...  

Distributed and semi-distributed models are considered to be sensitive to the spatial resolution of the input data. In this paper, we take a small catchment in the highly urbanized Yangtze River Delta, the Qinhuai catchment, as the study area to analyze the impact of the spatial resolution of precipitation and potential evapotranspiration (PET) on the long-term runoff and flood runoff processes. The data sources include TRMM precipitation data, PET data downloaded from FEWS, and interpolated meteorological station data. GIS/RS techniques were used to collect and pre-process the geographical, precipitation and PET series, which then served as the input of the CREST (Coupled Routing and Excess Storage) model to simulate the runoff process. The results clearly showed that the CREST model is applicable to the Qinhuai catchment; that the spatial resolution of precipitation had a strong influence on the modelled runoff and the meteorological station precipitation data cannot be substituted by the TRMM data in a small catchment; and that the CREST model was not sensitive to the spatial resolution of the PET data, while the formula used to estimate PET was correlated with the model quality. This paper focused on a small urbanized catchment, identifying the explanatory variables that influence model performance and providing a reliable reference for studies in similar areas.
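
A minimal sketch (an assumption about the pre-processing, not the authors' GIS workflow) of coarsening a gridded precipitation field by block averaging, the kind of step used to test how input spatial resolution affects the simulated runoff. The grid size, cell sizes and gamma-distributed rainfall values are placeholders.

```python
# Sketch: degrade a precipitation grid to coarser resolutions by block averaging.
import numpy as np

def coarsen(grid, factor):
    """Average non-overlapping factor x factor blocks (shape must divide evenly)."""
    h, w = grid.shape
    return grid.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

rng = np.random.default_rng(7)
precip_fine = rng.gamma(shape=2.0, scale=3.0, size=(120, 120))  # placeholder mm/day

for factor in (1, 4, 12):                      # e.g. ~1 km, ~4 km, ~12 km cells (assumed)
    grid = precip_fine if factor == 1 else coarsen(precip_fine, factor)
    print(f"factor {factor:2d}: shape {grid.shape}, catchment mean {grid.mean():.2f} mm")
    # Each coarsened grid would then be fed to the CREST model for comparison.
```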

