Estimating Forest Structure from UAV-Mounted LiDAR Point Cloud Using Machine Learning

2021 ◽  
Vol 13 (3) ◽  
pp. 352
Author(s):  
Romain Neuville ◽  
Jordan Steven Bates ◽  
François Jonard

Monitoring the structure of forest stands is of high importance for forest managers to help them maintain ecosystem services. For that purpose, Unmanned Aerial Vehicles (UAVs) open new prospects, especially in combination with Light Detection and Ranging (LiDAR) technology. Indeed, the shorter distance from the Earth's surface significantly increases the point density beneath the canopy, thus offering new possibilities for the extraction of the underlying semantics. For example, tree stems can now be captured with sufficient detail, which is a gateway to accurately locating trees and directly retrieving metrics such as the Diameter at Breast Height (DBH). Current practices usually require numerous site-specific parameters, which may preclude their use beyond their initial application context. To overcome this shortcoming, the Hierarchical Density-Based Spatial Clustering of Applications with Noise (HDBSCAN) machine learning algorithm was further improved and implemented to segment tree stems. Afterwards, Principal Component Analysis (PCA) was applied to extract tree stem orientation for subsequent DBH estimation. This workflow was then validated using LiDAR point clouds collected in a temperate deciduous closed-canopy forest stand during the leaf-on and leaf-off seasons, along with multiple scanning angle ranges. The results show that the proposed methodology correctly detects up to 82% of tree stems (with a precision of 98%) during the leaf-off season with a Maximum Scanning Angle Range (MSAR) of 75 degrees, without requiring any site-specific parameters for the segmentation procedure. Our method could thus minimize omission and commission errors when initially detecting trees and assist further retrieval of tree metrics. Finally, this research shows that, under the study conditions, the point density below approximately 1.3 m above the ground remains low within closed-canopy forest stands even during the leaf-off season, restricting accurate estimation of the DBH. As a result, autonomous UAVs that can fly both above and under the canopy provide a clear opportunity to achieve this purpose.
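
To make the workflow concrete, here is a minimal Python sketch of stem segmentation with HDBSCAN followed by PCA-based axis extraction and a slice-based DBH proxy. It is not the authors' implementation; the input array `points`, clustering on x/y only, and the median-radius DBH estimate are our own simplifying assumptions.

```python
# Sketch only: HDBSCAN stem segmentation + PCA orientation + a crude DBH proxy.
# Assumes `points` is an (N, 3) array of height-normalized, below-canopy returns.
import numpy as np
import hdbscan                                  # pip install hdbscan
from sklearn.decomposition import PCA

def segment_stems(points, min_cluster_size=50):
    # Cluster on x/y only so vertically elongated stems group together;
    # HDBSCAN needs no site-specific distance threshold.
    labels = hdbscan.HDBSCAN(min_cluster_size=min_cluster_size).fit_predict(points[:, :2])
    return labels                               # -1 marks noise points

def stem_axis(stem_points):
    # The first principal component approximates the stem orientation.
    pca = PCA(n_components=3).fit(stem_points)
    return pca.components_[0]

def estimate_dbh(stem_points, breast_height=1.3, slice_width=0.10):
    # Diameter of the point slice around 1.3 m above ground.
    in_slice = np.abs(stem_points[:, 2] - breast_height) < slice_width / 2
    xy = stem_points[in_slice, :2]
    if len(xy) < 3:
        return np.nan                           # too few returns near 1.3 m (the paper's caveat)
    radius = np.median(np.linalg.norm(xy - xy.mean(axis=0), axis=1))
    return 2 * radius
```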

2020 ◽  
Vol 12 (22) ◽  
pp. 3726
Author(s):  
María Sánchez-Aparicio ◽  
Susana Del Pozo ◽  
Jose Antonio Martín-Jiménez ◽  
Enrique González-González ◽  
Paula Andrés-Anaya ◽  
...  

The use of LiDAR (Light Detection and Ranging) data to define the 3D geometry of roofs has been widely exploited in recent years for subsequent application in the field of solar energy. Point density in LiDAR data is an essential characteristic to take into account for the accurate estimation of roof geometry: area, orientation, and slope. This paper presents a comparative study between LiDAR data of different point densities (0.5, 1, 2, and 14 points/m2) for measuring the roof area of residential and industrial buildings. The data used for the study are the LiDAR data made freely available by the Spanish National Geographic Institute (IGN), offered in accordance with the INSPIRE Directive. The results show different behaviors for roofs with areas below and above 200 m2. While the use of low-density point clouds (0.5 points/m2) produces significant errors in area estimation, point clouds of higher density (1 or 2 points/m2) yield a marked improvement in the area results, with no significant difference between them. The use of high-density point clouds (14 points/m2) also improves the results, although the accuracy does not increase in proportion to the density gain over 1 or 2 points/m2. The conclusion reached is therefore that the geometrical characterization of roofs requires data acquisition with a point density of 1 or 2 points/m2, and that higher point densities do not improve the results enough to offset the increase in computation time.
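
A minimal sketch of the density comparison under our assumptions (not the paper's processing chain): thin a roof point cloud to each target density and compare the recovered footprint area. A random synthetic cloud stands in for real IGN LiDAR data.

```python
# Sketch only: compare roof footprint area recovered at different point densities.
import numpy as np
from scipy.spatial import ConvexHull

rng = np.random.default_rng(1)
roof_xyz = rng.uniform(0, 20, size=(5600, 3))    # ~14 pts/m2 over a 20 m x 20 m stand-in roof

def thin_to_density(pts, pts_per_m2):
    footprint = ConvexHull(pts[:, :2]).volume    # ConvexHull "volume" is area for 2-D input
    n_keep = min(int(pts_per_m2 * footprint), len(pts))
    return pts[rng.choice(len(pts), size=n_keep, replace=False)]

for density in (0.5, 1, 2, 14):                  # the densities compared in the study
    thinned = thin_to_density(roof_xyz, density)
    print(density, "pts/m2 ->", round(ConvexHull(thinned[:, :2]).volume, 1), "m2")
```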


2020 ◽  
Vol 15 ◽  
Author(s):  
Shuwen Zhang ◽  
Qiang Su ◽  
Qin Chen

Abstract: Major animal diseases pose a great threat to animal husbandry and human beings. With the deepening of globalization and the abundance of data resources, the prediction and analysis of animal diseases using big data are becoming more and more important. The focus of machine learning is to make computers learn from data and use the learned experience to analyze and predict. This paper first introduces the animal epidemic situation and machine learning, and then briefly reviews applications of machine learning in animal disease analysis and prediction. Machine learning is mainly divided into supervised and unsupervised learning. Supervised learning includes support vector machines, naive Bayes, decision trees, random forests, logistic regression, artificial neural networks, deep learning, and AdaBoost. Unsupervised learning includes the expectation-maximization (EM) algorithm, principal component analysis, hierarchical clustering, and MaxEnt. Through this discussion, readers gain a clearer concept of machine learning and its application prospects for animal diseases.
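
The supervised/unsupervised split the review draws can be illustrated in a few lines of scikit-learn. This toy example is not from the paper; the synthetic features merely stand in for outbreak data.

```python
# Toy illustration of supervised vs. unsupervised learning on synthetic data.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier   # supervised
from sklearn.cluster import AgglomerativeClustering   # unsupervised (hierarchical)

X, y = make_classification(n_samples=200, n_features=6, random_state=0)

clf = RandomForestClassifier(random_state=0).fit(X, y)           # learns from labels
print("supervised training accuracy:", clf.score(X, y))

clusters = AgglomerativeClustering(n_clusters=2).fit_predict(X)  # no labels used
print("cluster sizes:", [int((clusters == k).sum()) for k in (0, 1)])
```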


2021 ◽  
Vol 13 (11) ◽  
pp. 2135
Author(s):  
Jesús Balado ◽  
Pedro Arias ◽  
Henrique Lorenzo ◽  
Adrián Meijide-Rodríguez

Mobile Laser Scanning (MLS) systems have proven their usefulness in the rapid and accurate acquisition of the urban environment. From the generated point clouds, street furniture can be extracted and classified without manual intervention. However, this acquisition and classification process is not error-free, mainly because of disturbances. This paper analyses the effect of three disturbances (point density variation, ambient noise, and occlusions) on the classification of urban objects in point clouds. From point clouds acquired in real case studies, synthetic disturbances are generated and added. Point density reduction is generated by voxel-wise downsampling, ambient noise by adding random points within the bounding box of the object, and occlusion by eliminating points contained in a sphere. Samples with disturbances are classified by a pre-trained Convolutional Neural Network (CNN). The results showed a different behaviour for each disturbance: density reduction affected objects according to their shape and dimensions, ambient noise according to object volume, and occlusions according to their size and location. Finally, the CNN was re-trained with a percentage of synthetic samples with disturbances. An improvement in performance of 10–40% was reported, except for occlusions with a radius larger than 1 m.
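
The three synthetic disturbances are simple enough to sketch directly. The following is our reading of the abstract, not the authors' code, with `pts` assumed to be an (N, 3) object point cloud.

```python
# Sketch only: the three synthetic disturbances described in the abstract.
import numpy as np

rng = np.random.default_rng(42)

def voxel_downsample(pts, voxel=0.05):
    # Density reduction: keep one point per occupied voxel.
    keys = np.floor(pts / voxel).astype(int)
    _, idx = np.unique(keys, axis=0, return_index=True)
    return pts[idx]

def add_ambient_noise(pts, n_noise=100):
    # Ambient noise: random points inside the object's bounding box.
    lo, hi = pts.min(axis=0), pts.max(axis=0)
    return np.vstack([pts, rng.uniform(lo, hi, size=(n_noise, 3))])

def occlude(pts, center, radius=0.5):
    # Occlusion: delete every point inside a sphere.
    keep = np.linalg.norm(pts - center, axis=1) > radius
    return pts[keep]
```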


2021 ◽  
Author(s):  
Olusegun Peter Awe ◽  
Daniel Adebowale Babatunde ◽  
Sangarapillai Lambotharan ◽  
Basil AsSadhan

Abstract: We address the problem of spectrum sensing in decentralized cognitive radio networks using a parametric machine learning method. In particular, to mitigate the sensing performance degradation caused by the mobility of the secondary users (SUs) in the presence of scatterers, we propose and investigate a classifier that uses a pilot-based second-order Kalman filter tracker to estimate the slowly varying channel gain between the primary user (PU) transmitter and the mobile SUs. Using the energy measurements at the SU terminals as feature vectors, the algorithm is initialized by a K-means clustering algorithm with two centroids corresponding to the active and inactive status of the PU transmitter. Under mobility, the centroid corresponding to the active PU status is adapted according to the channel estimates given by the Kalman filter, and an adaptive K-means clustering technique is used to make classification decisions on the PU activity. Furthermore, to address the possibility that the SU receiver might experience location-dependent co-channel interference, we propose a quadratic polynomial regression algorithm for estimating the noise-plus-interference power under mobility, which can be used to adapt the centroid corresponding to the inactive PU status. Simulation results demonstrate the efficacy of the proposed algorithm.
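
A schematic sketch of the adaptive-centroid decision rule as we read it follows; the initialization, the `gain_ratio` input (taken from the Kalman channel tracker), and the linear centroid adaptation are our assumptions, not the authors' exact formulation.

```python
# Sketch only: two-centroid energy classification with channel-gain adaptation.
import numpy as np
from sklearn.cluster import KMeans

def init_centroids(training_energies):
    # training_energies: (N, d) energy measurements across d SU terminals.
    km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(training_energies)
    c0, c1 = km.cluster_centers_
    # The higher-energy centroid corresponds to an active PU transmitter.
    return (c0, c1) if c0.sum() > c1.sum() else (c1, c0)

def classify(energy, active_c, inactive_c, gain_ratio):
    # gain_ratio: current-to-training channel power ratio from the Kalman
    # tracker; rescale the signal part of the active centroid as the SU moves.
    adapted = inactive_c + gain_ratio * (active_c - inactive_c)
    d_active = np.linalg.norm(energy - adapted)
    d_inactive = np.linalg.norm(energy - inactive_c)
    return int(d_active < d_inactive)            # 1 = PU active
```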


2021 ◽  
Vol 13 (9) ◽  
pp. 1859
Author(s):  
Xiangyang Liu ◽  
Yaxiong Wang ◽  
Feng Kang ◽  
Yang Yue ◽  
Yongjun Zheng

The characteristic parameters of Citrus grandis var. Longanyou canopies are important when measuring yield and spraying pesticides. However, the feasibility of canopy reconstruction methods based on point clouds has not been confirmed for these canopies. Therefore, LiDAR point cloud data for C. grandis var. Longanyou were obtained to facilitate the management of groves of this species. A cloth simulation filter and a Euclidean clustering algorithm were then used to extract individual canopies. After calculating canopy height and width, canopy reconstruction and volume calculation were carried out using six approaches: a manual method and five algorithms based on point clouds (convex hull, CH; convex hull by slices; voxel-based, VB; alpha-shape, AS; alpha-shape by slices, ASBS). ASBS is an innovative algorithm that combines AS with slice-wise optimization and can best approximate the actual canopy shape. The R² values of the volumes from the CH, VB, AS, and ASBS algorithms were all above 0.87; the ASBS algorithm produced the most accurate volumes, while the CH algorithm had the shortest computation time. In addition, a theoretical but preliminary system for calculating the canopy volume of C. grandis var. Longanyou was developed, providing a theoretical reference for the efficient and accurate realization of future functional modules such as precise plant protection, orchard obstacle avoidance, and biomass estimation.
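
Two of the five volume algorithms, CH and its slice-based variant, can be sketched compactly. This is an assumed form, not the authors' code; `canopy` is an (N, 3) array for one segmented tree.

```python
# Sketch only: canopy volume by a single 3-D convex hull and by stacked slices.
import numpy as np
from scipy.spatial import ConvexHull

def volume_convex_hull(canopy):
    # One hull around the whole canopy (tends to overestimate concave crowns).
    return ConvexHull(canopy).volume

def volume_by_slices(canopy, slice_height=0.2):
    # Sum per-slice hull area x slice thickness; follows the crown profile
    # more closely than a single hull.
    z = canopy[:, 2]
    total = 0.0
    for z0 in np.arange(z.min(), z.max(), slice_height):
        band = canopy[(z >= z0) & (z < z0 + slice_height)]
        if len(band) >= 3:
            total += ConvexHull(band[:, :2]).volume * slice_height
    return total
```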


2021 ◽  
Vol 13 (9) ◽  
pp. 4648
Author(s):  
Rana Muhammad Adnan ◽  
Kulwinder Singh Parmar ◽  
Salim Heddam ◽  
Shamsuddin Shahid ◽  
Ozgur Kisi

The accurate estimation of suspended sediments (SSs) is important for determining dam storage volume, river carrying capacity, pollution susceptibility, soil erosion potential, aquatic ecological impacts, and the design and operation of hydraulic structures. The present study proposes a new method for accurately estimating daily SSs using antecedent discharge and sediment information. The novel method is developed by hybridizing the multivariate adaptive regression spline and the K-means clustering algorithm (MARS–KM). The proposed method's efficacy is established by comparing its performance with the adaptive neuro-fuzzy inference system (ANFIS), MARS, and M5 tree (M5Tree) models in predicting SSs at two stations on the Yangtze River, China, according to three assessment measures: RMSE, MAE, and NSE. Two modeling scenarios are employed: in the first, the data are split 50–50% into training and testing sets; in the second, the training and test sets are swapped. At Guangyuan Station, MARS–KM improved on the ANFIS, MARS, and M5Tree methods in terms of RMSE by 39%, 30%, and 18% in the first scenario and by 24%, 22%, and 8% in the second scenario, respectively; at Beibei Station, the corresponding RMSE improvements over ANFIS, MARS, and M5Tree were 34%, 26%, and 27% in the first scenario and 7%, 16%, and 6% in the second. Additionally, the MARS–KM models provided much more satisfactory estimates when using only discharge values as inputs.
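
A conceptual sketch of the MARS–KM hybrid as we read the abstract: K-means partitions the antecedent discharge/sediment inputs, and a separate MARS model is fitted per cluster. The `pyearth` package and the per-cluster routing below are our assumptions, not the authors' implementation.

```python
# Sketch only: K-means partitioning + one MARS model per cluster.
import numpy as np
from sklearn.cluster import KMeans
from pyearth import Earth            # pip install sklearn-contrib-py-earth

def fit_mars_km(X, y, n_clusters=3):
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(X)
    models = {k: Earth().fit(X[km.labels_ == k], y[km.labels_ == k])
              for k in range(n_clusters)}
    return km, models

def predict_mars_km(km, models, X_new):
    # Route each new sample to the MARS model of its nearest cluster.
    labels = km.predict(X_new)
    return np.array([models[k].predict(x[None, :])[0]
                     for k, x in zip(labels, X_new)])
```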


2019 ◽  
Vol 11 (24) ◽  
pp. 2893 ◽  
Author(s):  
Yi-Chun Lin ◽  
Yi-Ting Cheng ◽  
Tian Zhou ◽  
Radhika Ravi ◽  
Seyyed Hasheminasab ◽  
...  

Unmanned Aerial Vehicle (UAV)-based remote sensing techniques have demonstrated great potential for monitoring rapid shoreline changes. With image-based approaches utilizing Structure from Motion (SfM), high-resolution Digital Surface Models (DSMs) and orthophotos can be generated efficiently from UAV imagery. However, image-based mapping yields relatively poor results in low-texture areas compared to LiDAR. This study demonstrates the applicability of UAV LiDAR for mapping coastal environments. A custom-built UAV-based mobile mapping system is used to collect LiDAR and imagery data simultaneously. The quality of the LiDAR and image-based point clouds is investigated and compared over different geomorphic environments in terms of point density, relative and absolute accuracy, and area coverage. The results suggest that both UAV LiDAR and image-based techniques provide high-resolution, high-quality topographic data, and that the point clouds generated by the two techniques agree within a 5 to 10 cm range. UAV LiDAR has a clear advantage in terms of large and uniform ground coverage over different geomorphic environments, higher point density, and the ability to penetrate vegetation to capture points below the canopy. Furthermore, UAV LiDAR data acquisitions are assessed for their applicability in monitoring shoreline changes over two actively eroding sandy beaches along southern Lake Michigan, Dune Acres and Beverly Shores, through repeated field surveys. The results indicate considerable volume loss and foredune ridge point retreat over an extended one-year period (May 2018 to May 2019) as well as over a short, storm-induced one-month period (November 2018 to December 2018). The foredune ridge recession ranges from 0 m to 9 m. The average volume loss at Dune Acres is 18.2 cubic meters per meter and 12.2 cubic meters per meter of shoreline within the one-year and storm-induced periods, respectively, highlighting the importance of episodic events in coastline changes. The average volume loss at Beverly Shores is 2.8 cubic meters per meter and 2.6 cubic meters per meter within the survey and storm-induced periods, respectively.
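
The per-meter volume-loss figures reduce to a simple DSM-differencing computation. A back-of-the-envelope sketch, not the authors' processing chain:

```python
# Sketch only: net volume change per metre of shoreline from two co-registered DSMs.
import numpy as np

def volume_change_per_m(dsm_before, dsm_after, cell_size, shoreline_length):
    # dsm_*: 2-D elevation grids (m) on the same georeferenced grid;
    # cell_size in m, shoreline_length in m.
    dz = dsm_after - dsm_before
    net_volume = np.nansum(dz) * cell_size ** 2   # cubic metres (negative = loss)
    return net_volume / shoreline_length          # m^3 per m of shoreline
```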


2015 ◽  
Vol 32 (6) ◽  
pp. 821-827 ◽  
Author(s):  
Enrique Audain ◽  
Yassel Ramos ◽  
Henning Hermjakob ◽  
Darren R. Flower ◽  
Yasset Perez-Riverol

Abstract Motivation: In any macromolecular polyprotic system, for example protein, DNA, or RNA, the isoelectric point, commonly referred to as the pI, can be defined as the point of singularity in a titration curve, corresponding to the solution pH value at which the net overall surface charge, and thus the electrophoretic mobility, of the ampholyte sums to zero. Many modern analytical biochemistry and proteomics methods depend on the isoelectric point as a principal feature for protein and peptide characterization. Protein separation by isoelectric point is a critical part of 2-D gel electrophoresis, a key precursor of proteomics, where discrete spots can be digested in-gel and the proteins subsequently identified by analytical mass spectrometry. Peptide fractionation by pI is also widely used in current proteomics sample preparation procedures prior to LC-MS/MS analysis. Accurate theoretical prediction of pI would therefore expedite such analyses. While pI calculation is widely used, it remains largely untested, motivating our efforts to benchmark pI prediction methods. Results: Using data from the database PIP-DB and one publicly available dataset as our reference gold standard, we have undertaken the benchmarking of pI calculation methods. We find that methods vary in their accuracy and are highly sensitive to the choice of basis set. The machine-learning algorithms, especially the SVM-based algorithm, showed superior performance when studying peptide mixtures. In general, learning-based pI prediction methods (such as Cofactor, SVM, and Branca) require a large training dataset, and their resulting performance strongly depends on the quality of that data. In contrast with iterative methods, machine-learning algorithms have the advantage of being able to add new features to improve the accuracy of prediction. Contact: [email protected] Availability and Implementation: The software and data are freely available at https://github.com/ypriverol/pIR. Supplementary information: Supplementary data are available at Bioinformatics online.
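
For reference, the iterative (non-learning) baseline being benchmarked amounts to bisection on the Henderson-Hasselbalch net-charge curve. The sketch below uses one common pKa basis set (EMBOSS values); as the abstract stresses, results are sensitive to this choice. It is illustrative only and is not the pIR package.

```python
# Sketch only: iterative pI calculation by bisection on the net-charge curve.
POS = {"K": 10.8, "R": 12.5, "H": 6.5}            # basic side chains (EMBOSS pKa)
NEG = {"D": 3.9, "E": 4.1, "C": 8.5, "Y": 10.1}   # acidic side chains (EMBOSS pKa)
NTERM, CTERM = 8.6, 3.6                           # terminal groups (EMBOSS pKa)

def net_charge(seq, ph):
    # Henderson-Hasselbalch fractional charges, summed over all ionizable groups.
    q = 1 / (1 + 10 ** (ph - NTERM)) - 1 / (1 + 10 ** (CTERM - ph))
    q += sum(seq.count(aa) / (1 + 10 ** (ph - pka)) for aa, pka in POS.items())
    q -= sum(seq.count(aa) / (1 + 10 ** (pka - ph)) for aa, pka in NEG.items())
    return q

def isoelectric_point(seq, lo=0.0, hi=14.0, tol=1e-4):
    # Net charge decreases monotonically with pH, so bisection converges.
    while hi - lo > tol:
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if net_charge(seq, mid) > 0 else (lo, mid)
    return round((lo + hi) / 2, 2)

print(isoelectric_point("ACDEFGHIKLMNPQRSTVWY"))  # toy peptide, one of each residue
```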


2020 ◽  
Author(s):  
Xiao Lai ◽  
Pu Tian

Abstract: Supervised machine learning, especially deep learning based on a wide variety of neural network architectures, has contributed tremendously to fields such as marketing, computer vision, and natural language processing. However, the development of unsupervised machine learning algorithms has been a bottleneck for artificial intelligence. Clustering is a fundamental unsupervised task in many different subjects. Unfortunately, no present algorithm is satisfactory for clustering high-dimensional data with strong nonlinear correlations. In this work, we propose a simple and highly efficient hierarchical clustering algorithm based on encoding by composition rank vectors and a tree structure, and demonstrate its utility by clustering protein structural domains. No record comparison, an expensive step common and essential to all present clustering algorithms, is involved. Consequently, the algorithm achieves hierarchical clustering with linear time and space complexity and is thus applicable to arbitrarily large datasets. The key factor in this algorithm is the definition of composition, which depends on the physical nature of the target data and therefore needs to be constructed case by case. Nonetheless, the algorithm is general and applicable to any high-dimensional data with strong nonlinear correlations. We hope this algorithm will inspire a rich research field of encoding-based clustering well beyond composition rank vector trees.
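
A schematic sketch of the encoding idea as we understand it, with the composition features and prefix depth as placeholder assumptions: each record is placed by a single dictionary insert, so no pairwise record comparison occurs and the cost stays linear in the number of records.

```python
# Sketch only: linear-time grouping by composition rank vector prefixes.
from collections import defaultdict
import numpy as np

def composition_rank_vector(features):
    # Rank the composition features in descending order; the rank vector,
    # not the raw values, is what defines cluster membership.
    return tuple(np.argsort(-np.asarray(features)))

def prefix_tree_clusters(records, depth):
    # One dictionary insert per record; deeper prefixes give a finer
    # level of the cluster hierarchy.
    clusters = defaultdict(list)
    for i, feats in enumerate(records):
        key = composition_rank_vector(feats)[:depth]
        clusters[key].append(i)
    return clusters
```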


2021 ◽  
Vol 6 (11) ◽  
pp. 157
Author(s):  
Gonçalo Pereira ◽  
Manuel Parente ◽  
João Moutinho ◽  
Manuel Sampaio

Decision support and optimization tools used in construction often require an accurate estimation of cost variables to maximize their benefit. Heavy machinery is traditionally one of the greatest costs to consider, mainly due to fuel consumption. These typically diesel-powered machines show great variability in fuel consumption depending on the utilization scenario. This paper describes the creation of a framework for estimating the fuel consumption of construction trucks as a function of the carried load, the slope, the distance, and the pavement type. A more accurate estimation increases the benefit of these optimization tools. The fuel consumption estimation model was developed using Machine Learning (ML) algorithms supported by data gathered through several sensors in a specially designed datalogger with wireless communication and opportunistic synchronization, in a real-context experiment. The results demonstrated the viability of the method, providing important insight into the advantages of combining sensorization with machine learning models in a real-world construction setting. Ultimately, this study is a significant step towards IoT implementation from a Construction 4.0 viewpoint, especially considering its potential for real-time and digital twin applications.
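
A minimal sketch of such a model under our assumptions: the abstract does not name the ML algorithm, and the column names and toy rows below are invented stand-ins for the datalogger export.

```python
# Sketch only: fuel-consumption regression on the four predictors named above.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder

# Toy rows standing in for the datalogger export (values are illustrative only).
df = pd.DataFrame({
    "load_t":      [10, 25, 25, 40, 10, 40],
    "slope_pct":   [2, 5, -3, 8, 0, 4],
    "distance_km": [3.0, 5.5, 5.5, 8.0, 2.0, 7.5],
    "pavement":    ["asphalt", "gravel", "gravel", "dirt", "asphalt", "dirt"],
    "fuel_l":      [2.1, 6.8, 4.9, 14.2, 1.2, 12.0],
})

model = Pipeline([
    ("prep", ColumnTransformer(
        [("pavement", OneHotEncoder(handle_unknown="ignore"), ["pavement"])],
        remainder="passthrough")),                # numeric columns pass through
    ("reg", GradientBoostingRegressor(random_state=0)),
])
model.fit(df.drop(columns="fuel_l"), df["fuel_l"])
print(model.predict(df.head(1).drop(columns="fuel_l")))
```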

